\section{\label{sec:intro}Introduction} Studies of inclusive \Wvar\ and \Zgam\ boson production in proton-proton collisions provide valuable information, both to test the Standard Model of particle physics and to advance our understanding of the proton's substructure. Measurements of the production cross sections $\sigma(pp \to \WpmD X) \cdot \BR(\WpmDtoenu)$ and $\sigma(pp \to \Zgam \, X) \cdot \BR(\Zgtoee)$ can be compared to theoretical calculations that involve the weak couplings between intermediate vector bosons and quarks, and which must account for higher-order terms in perturbative \QCD. Such calculations also rely on models of the parton distribution functions ($\PDF$s) for the quarks and, in $pp$ collisions, for the antiquark `sea.' Until recently, most measurements of \Wvar\ and \Zgam\ production in hadronic interactions have been confined to experiments using proton-antiproton collisions. First results were obtained by the UA1 \cite{Albajar:1987yz,Albajar:1988ka} and UA2 \cite{Alitti:1990gj,Alitti:1991dm} collaborations at \sqrts\ = 630~GeV at the \CERN\ $\rm{Sp \bar{p} S}$ facility, followed by the CDF \cite{Abe:1995bm,Abulencia:2005ix} and D0 \cite{Abachi:1995xc,Abbott:1999tt} \ppbar\ measurements at the Fermilab Tevatron, at \sqrts\ = 1.8 and 1.96~TeV. It is only in the last few years that \pp\ colliders have reached sufficient center-of-mass energies for comparable studies, at \sqrts\ = 500~GeV by the \STAR\ \cite{Aggarwal:2010vc} and PHENIX \cite{Adare:2010xa} collaborations at the Relativistic Heavy Ion Collider (RHIC), and most recently by the LHC experiments ATLAS \cite{Aad:2010yt} and CMS \cite{Khachatryan:2010xn,Chatrchyan:2011nx} at \sqrts\ = 7~TeV. RHIC is unique in its capability to collide high energy polarized proton beams, and the observation of \Wvar\ production in these polarized proton collisions provides a new means to explore the spin-flavor structure of proton sea quark distributions. 
First measurements of the parity-violating longitudinal single-spin asymmetry for \Wpm\ decay leptons have also been reported by the \STAR\ \cite{Aggarwal:2010vc} and PHENIX \cite{Adare:2010xa} collaborations and are in good agreement with predictions from \NLO\ and resummed calculations \cite{deFlorian:2010aa,Nadolsky:2003ga}. At hadron colliders, the leading process in \WpmD\ production is $u+\bar{d} \, (d+\bar{u})$ fusion. Consequently, while the \Wpl\ and \Wmi\ production cross sections should be nearly equal in $p\bar{p}$ collisions, they are expected to differ in $pp$ collisions due to differences in the $u$ and $d$ quark and antiquark distributions within the proton. The $\PDF$s that characterize the valence $u$ and $d$ quarks of the proton (or $\bar{u}$ and $\bar{d}$ in the antiproton) are well determined from decades of high precision, deep-inelastic lepton scattering experiments (see, for example, Ref.~\cite{Martin:2002aw}). Comparable distributions for the antiquarks within the proton sea, however, are much more weakly constrained. Interest in these poorly known antiquark $\PDF$s has also increased over the last few years, due to results from Drell-Yan experiments \cite{Baldit:1994jk,Towell:2001nh} which find evidence for a much larger $\bar{d}/\bar{u}$ flavor asymmetry in the nucleon than had been anticipated, especially at momentum fractions near and above $x \sim 0.2$. Detailed measurements of \Wpm\ and \Zgam\ production in proton-proton collisions will provide new and complementary information about this flavor asymmetry in the sea, from different reactions and at very different momentum scales. This paper describes the first measurement of the \Wpl, \Wmi, and \Zgam\ boson production cross sections in proton-proton collisions at $\sqrt{s}=500$ GeV by the STAR collaboration at RHIC. 
The cross sections are derived from studies of the charge-separated \WpmDtoenu\ and \Zgtoee\ decay channels for outgoing leptons near mid-rapidity ($|\eta_e| < 1$), and are based on 13.2~\invpb\ of data recorded during the 2009 run. In addition to the individual cross sections, a first measurement of the \Wpl/\Wmi\ cross section ratio at \sqrts\ = 500~GeV is also presented. The paper is organized as follows. Section~\ref{sec:detector} provides a brief overview of the STAR detector, focusing on the subsystems used in this analysis. Section~\ref{sec:data} describes the data and simulation samples analyzed, Sec.~\ref{sec:signal} details the extraction of the \Wvar\ and \Zgam\ signal spectra, and Sec.~\ref{sec:background} explains the estimation and subtraction of the background from the signal spectra. Finally, we discuss the calculation of the \Wvar\ and \Zgam\ production cross sections in Sec.~\ref{sec:xsec} and the \Wpl/\Wmi\ cross section ratio in Sec.~\ref{sec:ratio}, and compare these results to several theoretical calculations. Some of the data analysis methods employed here were described briefly in Ref.~\cite{Aggarwal:2010vc}; they are discussed in more detail in this paper, which incorporates a slightly larger data sample as well as improved detector calibrations with respect to the previous publication. \section{\label{sec:detector}The STAR Detector} The \STAR\ detector (Solenoidal Tracker at RHIC) \cite{Ackermann:2002ad}, shown schematically in Fig.~\ref{fig:STAR}, is a large acceptance, multipurpose detector designed primarily for measurements of hadronic and electromagnetic particle production in high-energy heavy ion and polarized proton-proton collisions. \STAR\ comprises many separate subsystems, each with specific capabilities; only those subsystems most relevant for the present analysis are mentioned below. 
The heart of \STAR\ is a large Time Projection Chamber (\TPC) \cite{Anderson:2003ur} which is situated within a highly uniform, 0.5~T solenoidal magnetic field. The \TPC\ provides charged particle tracking, particle identification (via ionization energy loss, $dE/dx$), and precision momentum measurements over the range $|\eta| < 1.3$ and with full $2\pi$ azimuthal coverage. Although the \pT\ resolution of the \TPC\ deteriorates with increasing \pT, the spatial positions of tracks reconstructed between the inner and outer radii of the \TPC, located at 50 and 200~cm respectively, remain accurate to $\SIM$1-2~mm. In this analysis, \TPC\ tracks were used in identifying the high-\pT\ decay lepton (\epm) candidates, determining candidate charge signs, reducing contamination from the significant QCD background (see Sec.~\ref{sec:signal}), and reconstructing the interaction vertex for the events of interest. Surrounding the \TPC\ radially is the Barrel Electromagnetic Calorimeter (\BEMC) \cite{Beddo:2002zx}, a high granularity lead/scintillator-based sampling calorimeter. This detector is used to measure the energy deposited by energetic photons and electrons with pseudorapidities $|\eta| < 1.0$ over the full azimuth. The \BEMC\ is segmented into 4800 optically isolated projective towers, each of which subtends 0.05~rad in azimuth ($\phi$) and 0.05 units in $\eta$, and is roughly 20 radiation lengths deep. Based on cosmic ray and test beam data, the nominal energy resolution of the barrel calorimeter is calculated to be $\delta E/E = 14\%/\sqrt{E\mathrm{(GeV)}} \oplus 1.5\%$ \cite{Beddo:2002zx}. The \BEMC\ was used to measure the \epm\ candidate energy, and to aid in background reduction. By identifying events with large, highly localized, electromagnetic energy deposition, the \BEMC\ also provided our first-level trigger signal for leptonic \Wvar\ and \Zvar\ decays. 
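The $\oplus$ in the \BEMC\ resolution formula quoted above denotes addition in quadrature of the stochastic and constant terms. A minimal sketch of this arithmetic (the 40~GeV test energy is an illustrative value of ours, chosen near the Jacobian peak, not a number from the text):

```python
import math

def bemc_relative_resolution(energy_gev):
    """delta_E/E = 14%/sqrt(E) (+) 1.5%, the two terms added in quadrature."""
    stochastic = 0.14 / math.sqrt(energy_gev)
    constant = 0.015
    return math.sqrt(stochastic ** 2 + constant ** 2)

# For an electron of ~40 GeV the stochastic term is ~2.2%,
# which combines with the 1.5% constant term to ~2.7% overall.
```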
Located at one end of the \STAR\ \TPC, directly in front of the magnetic field return poletip, is the Endcap Electromagnetic Calorimeter (\EEMC) \cite{Allgower:2002zy}, which provides electromagnetic energy measurement over the range $1.09 < \eta < 2$ and 2$\pi$ in azimuth. The \EEMC\ is similar in design to the \BEMC: a lead/scintillator sampling calorimeter, finely segmented in $\eta$ and $\phi$ into 720 towers with projective geometries, though it is approximately 3-4 radiation lengths thicker than the \BEMC\ due to its more forward position. In the work presented here, the \EEMC\ was used only as part of the background reduction via isolation and missing energy conditions discussed in Sec.~\ref{sec:signal}. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{star} \includegraphics[width=1.0\columnwidth]{lego} \caption{(Color online) (a) \Wvar\ candidate event display embedded in a schematic of the \STAR\ detector. Descriptions of the subsystems relevant for this analysis are given in Sec.~\ref{sec:detector}. (b) TPC \pT\ and (c) BEMC and EEMC \ET\ distributions in $\eta$ and $\phi$ for the same \Wvar\ candidate event as shown in (a). } \label{fig:STAR} \end{figure} \section{\label{sec:data}Data and Simulation Samples} Candidate events were selected online using a two-level trigger requirement in the $\BEMC$. The hardware level-0 trigger accepted events containing a tower with a transverse energy, $\ET$, greater than 7.3~\GeV. A dedicated software trigger algorithm then selected events by constructing 2$\TIMES$2 clusters of towers, and requiring that at least one cluster consist of a seed tower with $\ET \GREATER$~5~$\GeV$ and a cluster sum $\ET~\GREATER$~13~$\GeV$. During the 2009 run $1.2\TIMES10^6$ events were recorded satisfying these trigger conditions. The integrated luminosity of the data sample was determined using the Vernier Scan technique \cite{SvdeMeer1968}. 
The transverse widths ($\sigma_x$ and $\sigma_y$) of the beam overlap region are determined by measuring the trigger rate as the beams are swept through each other in the transverse plane. The intensity of each beam is determined during a scan by the Wall Current Monitors (WCM) \cite{WCM}. With the assumption of Gaussian beams, the instantaneous luminosity can be written as \begin{equation} \mathcal{L}=\frac{f_{rev}K}{2\pi\sigma_x\sigma_y} \end{equation} where $f_{rev}$ is the revolution frequency and $K=\sum N^a_iN^b_i$ is the product of the bunch intensities ($N_i$) of the two beams ($a$,$b$) summed over all bunches. The dedicated trigger used in the Vernier Scan, and also to monitor the luminosity in this analysis, is the level-0 hardware trigger, described above, with a coincidence away-side $\ET$ requirement imposed offline to reduce non-collision background. The cross section for this trigger can be written as $\sigma_{\RM{ver}}=\RM{R}^{\RM{max}}_{\RM{ver}}/\mathcal{L}$, where $\RM{R}^{\RM{max}}_{\RM{ver}}$ is the maximum trigger rate while the beams are fully overlapping. The value measured for this work was $\sigma_{\RM{ver}}$ = 434 $\pm$ 8(stat) $\pm$ 56(syst)~nb. Figure \ref{Fig:vernier} shows an example of the trigger rate as a function of the $x$ and $y$ beam displacements during one of the vernier scans, which was fit to extract the transverse beam widths and maximum trigger rate. The fit function used was a Gaussian in $x$ and $y$ combined with a constant term to account for remaining non-collision background. The largest contribution to the $\sigma_{\RM{ver}}$ systematic uncertainty was attributed to possible non-Gaussian components of the beam profile (10\%), with smaller contributions coming from possible \BEMC\ gain drift (5\%), and uncertainties in the bunch intensity measurements (4\%). 
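The vernier-scan arithmetic above can be collected into two short functions (a sketch under the Gaussian-beam assumption of the text; the function and argument names are ours):

```python
import math

def instantaneous_luminosity(f_rev, bunch_intensity_product_sum, sigma_x, sigma_y):
    """L = f_rev * K / (2 pi sigma_x sigma_y), assuming Gaussian beams,
    where K = sum over bunches of N_a * N_b (from the Wall Current Monitors)."""
    return f_rev * bunch_intensity_product_sum / (2.0 * math.pi * sigma_x * sigma_y)

def vernier_cross_section(r_max, luminosity):
    """sigma_ver = R_max / L, with R_max the trigger rate at full beam overlap."""
    return r_max / luminosity
```

In practice the transverse widths and `r_max` come from the fit to the scan, and `K` from the beam-intensity monitors; consistent units must be chosen for the rates and widths.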
This value for $\sigma_{\RM{ver}}$ was used to normalize the total number of events which satisfy this trigger condition, resulting in an integrated luminosity for the data sample of $L$ = $\int\mathcal{L}~dt$ = 13.2 $\pm$ 0.2(stat) $\pm$ 1.7(syst)~\invpb. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{vernier} \caption{(Color online) Trigger rate as a function of vernier scan beam displacement in the $x$ and $y$ directions. The transverse beam widths ($\sigma_x$ and $\sigma_y$) and maximum trigger rate ($\RM{R}^{\RM{max}}_{\RM{ver}}$) were extracted from the fit, which is superimposed. } \label{Fig:vernier} \end{figure} Monte Carlo (MC) simulation samples were generated in order to determine detector efficiencies, estimate background contributions from electroweak processes, and compare various predicted observables to data. Signal samples for both the \Wtoenu\ and \Zgtoee\ channels were generated, along with a \Wtaunu\ sample, which is an expected background in the \Wvar\ analysis due to the $\tau$'s leptonic decay. All the samples were produced using the \PYTHIA\ 6.422 \cite{Sjostrand:2006za} event generator and a \GEANT\ \cite{Brun:1978fy} model of the \STAR\ detector response. The same reconstruction and analysis algorithm was used for both the data and MC samples, and each MC sample was normalized to the integrated luminosity of the data unless otherwise stated. Due to the high luminosity of the \pp\ collision environment at \sqrts\ = 500~\GeV\ at \STAR, a significant number of pile-up tracks are present in the \TPC\ at any given time. The pile-up tracks are the result of either another collision from the same bunch crossing as the triggered event, or a collision that occurred in an earlier or later bunch crossing. Note that the bunch crossing period at RHIC is about 107~ns, while it can take up to $\sim$38~$\mu$s for track ionization to drift through the \TPC. 
In the simulation, these pile-up tracks are accounted for by embedding the full \GEANT\ detector response of the simulated event into a zero-bias triggered event before reconstruction. The zero-bias events are selected randomly during nominal beam crossings at a rate of $\LESSAPPROX 1 \Hz$ with no detector requirements, resulting in a good representation of the pile-up contained in the \TPC\ for \BEMC\ triggered collision events. \section{\label{sec:signal}\texorpdfstring{$\bm{\Wvar}$ and $\bm{\Zgam}$ Signal Reconstruction}{W and Z/gamma* Signal Reconstruction}} This section details the identification and reconstruction of $\Wvar$ and $\Zgam$ candidate events, as well as the reduction of the large QCD background. This reduction is achieved through a number of cuts designed to take advantage of the kinematic and topological differences between electroweak and QCD processes. ``\Zgam'' will be used interchangeably with ``\Zvar'' for the remainder of this paper. Candidate events were selected from the sample of \BEMC\ triggered events described in Sec.~\ref{sec:data} by requiring a reconstructed primary vertex. A primary vertex is one reconstructed from either a single \TPC\ track with $\pT \GREATER 10~\GeVc$ or multiple tracks originating from the same location along the beamline. Each track considered in vertex reconstruction is assigned an increased weight if it either points to a region of energy deposition in the calorimeters, or if it uses hit points from both sides of the \TPC\ central membrane. Tracks satisfying either of these two conditions are likely to be from the triggered collision; therefore, weighting these tracks more heavily in vertex reconstruction strongly reduces the contamination from pile-up tracks. The distribution of primary vertices along the beam direction is approximately Gaussian with an RMS width of 52 cm. 
Events of interest were required to have $|\zvertex|\LESS100$~cm, where $\zvertex$ is the distance along the beam direction of the primary vertex from the nominal collision point at the center of the STAR interaction region. \subsection{\label{subsec:epm_isolation}\texorpdfstring{Identification of High-$\bm{\ET}$ Isolated Electrons and Positrons}{Identification of High-ET Isolated Electrons and Positrons}} A candidate electron or positron track is defined to be a \TPC\ track with $\pT \GREATER 10~\GeVc$ that is associated with a primary vertex satisfying the criteria described above. Candidate tracks were also required to have: \begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip} \item a minimum of 15 \TPC\ points, \item more than 51\% of the maximum number of \TPC\ points allowed, \item a first \TPC\ point with radius less than 90 cm, \item a last \TPC\ point with radius greater than 160 cm. \end{itemize} These requirements help to ensure that the track and its charge sign are well reconstructed, and reject pile-up tracks which may be mistakenly associated with a primary vertex. Candidate \TPC\ tracks are extrapolated to the \BEMC\ to determine which tower the track points to; the four possible 2$\TIMES$2 \BEMC\ tower clusters containing that tower are then formed. The 2$\TIMES$2 cluster with the largest summed transverse energy, $\EeT$, is assigned to the $\epm$ candidate. The candidate $\EeT$ is required to be greater than 15 $\GeV$ to be safely above the trigger turn-on region. Also, the two-dimensional distance between the energy log-weighted centroid of the tower cluster position and the extrapolated \TPC\ track position, $|\DELTA\vec{r}|$, is required to be less than 7 cm, to reject candidates where the \BEMC\ cluster may not have originated from the particle which produced the high-\pT\ \TPC\ track. 
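The track-quality and cluster-matching requirements above can be collected into a single predicate (a sketch; the `TrackCandidate` field names are hypothetical labels for the quantities defined in the text):

```python
from dataclasses import dataclass

@dataclass
class TrackCandidate:
    pt: float                  # track pT from the TPC, GeV/c
    n_tpc_points: int          # number of reconstructed TPC points
    max_tpc_points: int        # maximum number of TPC points allowed
    first_point_radius: float  # radius of first TPC point, cm
    last_point_radius: float   # radius of last TPC point, cm
    cluster_et: float          # summed ET of best 2x2 BEMC cluster, GeV
    delta_r: float             # track-cluster matching distance, cm

def passes_candidate_cuts(c):
    """All e+/- candidate requirements from this subsection."""
    return (c.pt > 10.0
            and c.n_tpc_points >= 15
            and c.n_tpc_points > 0.51 * c.max_tpc_points
            and c.first_point_radius < 90.0
            and c.last_point_radius > 160.0
            and c.cluster_et > 15.0
            and c.delta_r < 7.0)
```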
Electrons and positrons from \Wvar\ and \Zvar\ decays should be well isolated from other particles in $\eta$-$\phi$ space; thus, in the next stage of the candidate selection process two isolation criteria are applied. The first isolation cut was made by summing the \ET\ in the 4$\TIMES$4 \BEMC\ tower cluster which surrounds the $\epm$ candidate cluster, $\ET^{4\TIMES4}$, and requiring $\EeT/\ET^{4\TIMES4} \GREATER$ 0.95. The other isolation requirement is imposed to reduce jet-like events. The quantity $\ET^{\DELTA\RM{R}\LESS0.7}$ is defined as the sum of all \BEMC\ and \EEMC\ tower $\ET$ and \TPC\ track $\pT$ within a cone radius of $\DELTA\RM{R}=\sqrt{\DELTA\eta^2+\DELTA\phi^2}\LESS0.7$ around the candidate track, and the ratio $\EeT/\ET^{\DELTA\RM{R}\LESS0.7}$ is required to be greater than 0.88. The \epm\ candidate track is excluded from the sum of \TPC\ track $\pT$ to avoid double-counting the candidate energy in the $\ET^{\DELTA\RM{R}\LESS0.7}$ sum. Figure \ref{Fig:isoCuts} shows the isolation ratios described above for both data and \Wtoenu\ MC. The placements of the cuts, shown by the dashed lines, were chosen to retain a large fraction of the signal while significantly reducing the QCD background. Note that differences between the isolation ratios in Fig.~\ref{Fig:isoCuts} of this paper and Fig.~1 of Ref.~\cite{Aggarwal:2010vc} are expected due to differences in the data samples used and improved calibrations. Also, the order of the $\EeT/\ET^{4\TIMES4}$ and candidate track-cluster matching $|\DELTA\vec{r}|$ cuts was inverted in Ref.~\cite{Aggarwal:2010vc} with respect to the ordering described in this section. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{isoCuts} \caption{(Color online) Distributions of the isolation ratios $\EeT/\ET^{4\TIMES4}$ (left) and $\EeT/\ET^{\DELTA\RM{R}\LESS0.7}$ (right) used in \epm\ candidate selection. 
\Wtoenu\ MC shape distributions (arbitrary normalization) are shown as filled histograms for comparison with the data distributions. The vertical dashed lines indicate the placements of the cuts on these isolation ratios.} \label{Fig:isoCuts} \end{figure} \subsection{\label{subsec:Wsignal}\texorpdfstring{$\bm{\Wvar}$ Candidate Event Selection}{W Candidate Event Selection}} The selection of \Wtoenu\ candidate events is based on differences in the event topology between leptonic \Wvar\ decays and the QCD background or \Zvar\ events. \Wtoenu\ events contain a nearly isolated \epm\ with a neutrino almost opposite in azimuth. Electrons and positrons emitted near mid-rapidity from \Wvar\ decay are characterized by a large \EeT\ that is peaked near half the \Wvar\ mass (\SIM40~\GeV) with a distribution referred to as a Jacobian peak. There is also a large missing transverse energy in \Wtoenu\ events opposite in azimuth to the \epm\ due to the undetected neutrino. As a result, there is a large imbalance in the vector \pT\ sum of all reconstructed final state objects for \Wvar\ events. In contrast, \Zee\ events and QCD hard-scattering events, such as di-jets, are characterized by a small magnitude of this vector \pT\ sum imbalance. In order to enforce this missing energy requirement, we define the \pT\ balance vector: \begin{equation} \vec{p}_{T}^{~bal} = \vec{p}_{T}^{~e} + \sum_{\DELTA\RM{R}>0.7} \vec{p}_{T}^{~jets} \label{eqn:ptBal} \end{equation} where $\vec{p}_{T}^{~e}$ is the \epm\ candidate \pT\ vector, whose momentum direction and magnitude are determined by the candidate TPC track and BEMC cluster, respectively. The second term on the right of Eq.~\ref{eqn:ptBal} is the sum of the \pT\ vectors for all reconstructed jets whose thrust axes are \textit{outside} the cone radius of $\DELTA\RM{R}=0.7$ around the candidate. 
Jets are reconstructed using a standard mid-point cone algorithm used in STAR jet measurements \cite{Abelev:2006uq}, based on the tracks from the \TPC\ and tower energies in the \BEMC\ and \EEMC. A scalar signed $P_T$-balance variable is then formed, defined as \begin{equation} \mbox{signed }P_{T}\mbox{-balance} = \mbox{sign}\left(\vec{p}_{T}^{~e} \cdot \vec{p}_{T}^{~bal}\right) \left|\vec{p}_{T}^{~bal}\right|. \end{equation} This quantity is required to be larger than 15 \GeVc\ as indicated by the dashed line in Fig.~\ref{Fig:sPtBalcut}. Also in Fig.~\ref{Fig:sPtBalcut} one can see that in the \Wtoenu\ MC sample, the signed $P_T$-balance variable and $\EeT$ are very well correlated, as contributions to the $\vec{p}_{T}^{~bal}$ vector from reconstructed jets outside the cone of $\DELTA\RM{R}=0.7$ are generally small. The data show a similar correlation at high \EeT, where the distribution is dominated by \Wtoenu\ events. At low \EeT, where contributions from QCD background events are larger, more events have a small value for the signed $P_T$-balance variable, as expected. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{sPtBalcut} \caption{(Color online) Correlation of the signed $P_T$-balance variable and \EeT\ for data (left) and \Wtoenu\ MC (right).} \label{Fig:sPtBalcut} \end{figure} Background events from \Zee\ decays are further suppressed by rejecting events with an additional $e$-like 2$\TIMES$2 cluster in a reconstructed jet where $\ET^{2\TIMES2} > p^{jet}_T/2$ and the invariant mass of the two $\epm$-like clusters is within the range of 70 to 140 \GeVcc. This reduces $\Zee$ contamination in both the \Wvar\ signal spectra and the spectra that will be used for the data-driven QCD background, described in Sec.~\ref{subsec:Wback}. The reduction in the \Wvar\ candidate yield after each of the selection criteria is shown in Fig.~\ref{Fig:wStack}. 
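In two-component form, the signed $P_T$-balance computation reduces to a few lines (a sketch, with transverse vectors represented as `(px, py)` tuples; the function name is ours):

```python
import math

def signed_pt_balance(pt_e, jets_outside_cone):
    """|pT_bal| carrying the sign of pT_e . pT_bal, where pT_bal is the
    candidate e+/- pT vector plus the pT vectors of reconstructed jets
    outside the Delta R = 0.7 cone around the candidate."""
    bal_x = pt_e[0] + sum(j[0] for j in jets_outside_cone)
    bal_y = pt_e[1] + sum(j[1] for j in jets_outside_cone)
    dot = pt_e[0] * bal_x + pt_e[1] * bal_y
    magnitude = math.hypot(bal_x, bal_y)
    return magnitude if dot >= 0 else -magnitude

# A W-like event with no reconstructed recoil keeps the full candidate pT,
# while a balanced di-jet-like topology gives a value near zero or negative.
```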
Initially, when only a candidate \TPC\ track and \BEMC\ cluster have been reconstructed, the distribution (solid line) is dominated by QCD background, which is exponentially falling with \EeT, and there is no evidence of the Jacobian peak. However, once the \epm\ selection, isolation and signed $P_T$-balance cuts are applied, a \Wvar\ signal can be seen above the background at $\EeT \SIM M_W/2$. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{wStack} \caption{(Color online) Distributions of \EeT\ for \Wvar\ candidate events after sequentially applying the selection criteria described in Secs.~\ref{subsec:epm_isolation} and \ref{subsec:Wsignal}.} \label{Fig:wStack} \end{figure} The charge sign of the \epm\ candidate is determined by the direction of curvature of the \TPC\ track in the \STAR\ magnetic field, while the magnitude of the track curvature provides a measure of 1/$\pT$. Figure \ref{Fig:chargesep} shows the product of the reconstructed charge sign and 1/$\pT$ for the lepton candidates that satisfy all the cuts described above with $\EeT \GREATER 25~\GeV$. Two well-separated regions are seen for the positive and negative charges, cleanly distinguishing between the $e^+$ and $e^-$ candidates. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{qPT} \caption{(Color online) Distribution of the product of the \TPC\ reconstructed charge sign and 1/$\pT$ for candidates satisfying all the \Wvar\ signal selection criteria and $\EeT \GREATER 25~\GeV$.} \label{Fig:chargesep} \end{figure} \subsection{\label{subsec:Zsignal}\texorpdfstring{$\bm{\Zvar}$ Candidate Event Selection}{Z Candidate Event Selection}} Using the isolated \epm\ sample found in Sec.~\ref{subsec:epm_isolation}, \Zee\ events were selected by requiring a pair of isolated \epm\ candidates with opposite charge signs. 
The invariant mass of each $e^+e^-$ pair was reconstructed, and the resulting mass distributions are shown in Fig.~\ref{Fig:zStack} after each of the selection criteria described in Sec.~\ref{subsec:epm_isolation} has been satisfied for both the $e^+$ and $e^-$ candidates. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{zStack} \caption{(Color online) Distributions of the invariant mass of pairs of oppositely charged \epm\ candidates after sequentially applying the selection criteria described in Sec.~\ref{subsec:epm_isolation} to both \epm\ candidates.} \label{Fig:zStack} \end{figure} After all selection cuts are applied, there is a signal near the invariant mass of the \Zvar\ and a small signal at lower invariant mass. This is consistent with the expectations from the \Zgtoee\ MC, as shown in Fig.~\ref{Fig:zDataMC}. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{zDataMC} \caption{(Color online) Distributions of the invariant mass of \Zgtoee\ candidate events satisfying all selection criteria described in Sec.~\ref{subsec:Zsignal}. The \Zgtoee\ MC distribution (dashed line) is shown for comparison. Note the larger bin widths relative to Fig.~\ref{Fig:zStack}.} \label{Fig:zDataMC} \end{figure} \section{\label{sec:background}Background Estimation} \subsection{\label{subsec:Wback}\texorpdfstring{$\bm{\Wvar}$ Background Estimation}{W Background Estimation}} A number of background processes can contribute to the \Wtoenu\ candidate yield: other electroweak processes yield isolated electrons that can be misidentified as \Wtoenu\ events, and QCD jets can fragment in such a way that they satisfy all the \Wvar\ signal requirements. This section describes how the contributions of these background processes to the \Wvar\ candidate yield are estimated. The electroweak background processes considered in this analysis are \Wtaunu\ and \Zee. 
Their contributions to the \Wtoenu\ signal yield were estimated using the MC samples described in Sec.~\ref{sec:data}. \Wtaunu\ events, where the $\tau$ decays leptonically (\textit{i.e.}, $\tau \to e\nu\bar{\nu}$), contain an isolated \epm\ with a large missing energy opposite in azimuth, similar to the \Wtoenu\ signal. However, the \epm\ which comes from the $\tau$ decay must share the energy of the $\tau$ with the two secondary neutrinos, and thus it has a much lower $\EeT$ on average than those \epm\ which come directly from a \Wvar\ decay. Therefore, the \Wtaunu\ background contributions are largest at low $\EeT$, as can be seen in Fig.~\ref{Fig:wJacob}. \Zee\ events can contaminate the \Wvar\ signal when one of the decay leptons escapes detection, either from a detector inefficiency or by emission into an uninstrumented region of phase space. Unlike the other background sources described in this section, the \Zee\ background yield is approximately constant in $\EeT$, resulting in a significant contribution to the total background, even though the cross section is small compared to other processes. Table \ref{Table:bkgd} lists each of the background processes and its estimated contribution to the \Wvar\ yield for candidates with $\EeT \GREATER 25~\GeV$. The uncertainties for these electroweak background components are due to the statistical uncertainty of the MC calculation and the uncertainty in the normalization of the MC samples to the integrated luminosity of the data. The \STAR\ detector has only one \EEMC, resulting in missing calorimetry acceptance for the pseudorapidity region $-2\LESS\eta\LESS-1.09$ compared to the positive pseudorapidity portion of the detector. If the isolation cone of $\DELTA\RM{R}\LESS0.7$ around an \epm\ candidate overlaps with this missing acceptance, or a jet opposite in azimuth to an \epm\ candidate falls within this acceptance, background QCD events may satisfy all the \Wtoenu\ selection requirements. 
This contamination of the \Wvar\ yield, referred to as the `second \EEMC' background, was determined by repeating the \Wvar\ signal selection a second time, with the \EEMC\ towers excluded from the isolation ratio, $\EeT/\ET^{\DELTA\RM{R}\LESS0.7}$, and the reconstruction of jets summed in the $\vec{p}_{T}^{~bal}$ vector. The events which satisfy the requirements of this second pass analysis (without the \EEMC), but fail the nominal requirements described in Secs.~\ref{subsec:epm_isolation} and \ref{subsec:Wsignal} are a direct measure of the background rejected by the \EEMC. Moreover, these events also estimate the amount of background that would have been rejected by a second \EEMC. While this sample of second \EEMC\ background is expected to be predominantly the result of QCD processes, it does contain a small amount of \Zee\ contamination as well. Because background from the \Zee\ process was already taken into account separately, the \Zee\ MC sample was used to remove any contamination from \Zee\ processes in the second \EEMC\ background distribution, to avoid double-counting. The uncertainty on the second \EEMC\ background is the statistical uncertainty of the events vetoed by the \EEMC\ and the systematic uncertainty in the normalization of \Zee\ contamination which was subtracted using the \Zee\ MC. The remaining contribution to the background is predominantly from QCD $2 \to 2$ processes in which one jet fragments such that it satisfies our \epm\ candidate requirements, while all other jets escape detection outside the $|\eta|\LESS2$ acceptance. This component of the background was estimated using a data-driven QCD background distribution as a function of $\EeT$, which is obtained by selecting events which satisfy all the isolated \epm\ candidate criteria, but have a signed $P_{T}$-balance $\LESS 15~\GeVc$. 
Similar to the way the second \EEMC\ background was corrected, contributions to the data-driven background distribution from the \Zee\ process were removed using the \Zee\ MC sample, to avoid double-counting the \Zee\ background. The data-driven QCD background distribution was then normalized to the remaining \Wtoenu\ candidate signal distribution after the \Wtaunu, \Zee, and second \EEMC\ background components had been removed. The normalization was determined over the range 15~\LESS~\EeT~\LESS~19~\GeV, and accounts for the possibility of true \Wvar\ signal events in this region using the \Wtoenu\ MC. The systematic uncertainty of this data-driven QCD background contribution was estimated by varying the data-driven background distribution and the \EeT\ region over which the distribution was normalized. Twenty different background distributions were obtained by varying the cut on the signed $P_T$-balance variable from 5 to 25 GeV/c in steps of 1 GeV/c. The twenty background distributions were then fit to the signal, as described above, using three different normalization regions (15~$\LESS~\EeT~\LESS$~17,~19,~and~21~\GeV), resulting in sixty different normalized background distributions. The systematic uncertainty in each \EeT\ bin was taken to be the largest deviation among these sixty distributions from the nominal value. \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{wJacob} \caption{(Color online) $\EeT$ distribution of \Wpl\ (top) and \Wmi\ (bottom) candidate events, background components, and \Wtoenu\ MC signal for comparison. 
Note the factor of two difference in the vertical scales.} \label{Fig:wJacob} \end{figure} \begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|} \hline & \Wptoenu\ & \Wmtoenu\ \\ \hline \Wtaunu\ & 13.4 $\pm$ 1.7 $\pm$ 3.2 & 3.3 $\pm$ 0.8 $\pm$ 0.8 \\ \Zee\ & 7.3 $\pm$ 0.4 $\pm$ 1.7 & 7.3 $\pm$ 0.4 $\pm$ 1.7 \\ Second \EEMC\ & 9.1 $\pm$ 3.0 $\pm$ 0.5 & 9.2 $\pm$ 3.0 $\pm$ 0.4 \\ Data-driven QCD & 7.0 $\pm$ 0.6 $^{+2.3}_{-1.6}$ & 5.8 $\pm$ 0.5 $^{+2.6}_{-1.2}$ \\ \hline Total & 36.6 $\pm$ 3.5 $^{+5.4}_{-5.2}$ & 25.8 $\pm$ 3.2 $^{+3.6}_{-2.8}$ \\ \hline \end{tabular} \caption{Summary of background event contributions to the \Wtoenu\ yield and their uncertainties for candidates with $\EeT\GREATER25~\GeV$ and $|\eta_e|\LESS1$.} \label{Table:bkgd} \end{table} The charge-separated $\EeT$ distributions of \Wpmtoenu\ candidates satisfying all the selection criteria described in Secs.~\ref{subsec:epm_isolation} and \ref{subsec:Wsignal} are shown in Fig.~\ref{Fig:wJacob}. Also shown are the contributions from the different backgrounds discussed in this section and the \Wtoenu\ signal MC distribution. A $\chi^2$ test of homogeneity comparing the data $\EeT$ spectra with the sum of the background components and \Wtoenu\ signal MC (dashed line) yields $\chi^2$ values of 9.5 and 6.9 for the $\Wpl$ and $\Wmi$, respectively. For 12 degrees of freedom, these correspond to probabilities of 66\% and 86\%, respectively, of obtaining a larger $\chi^2$. This indicates good agreement between data and MC and further validates the background estimation procedure described in this section. The \epm\ pseudorapidity distributions are shown in Fig.~\ref{Fig:wEta}, where the background contributions were found independently for each $\eta_e$ bin using the methods described above. Again, good agreement is found between the data and the sum of the \Wtoenu\ signal MC and background components. 
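The quoted $\chi^2$ probabilities can be cross-checked with the closed-form $\chi^2$ survival function, which for an even number of degrees of freedom is a finite sum (a sketch for verification, not the analysis code):

```python
import math

def chi2_prob_larger(chi2, ndf):
    """P(X > chi2) for a chi-square variable with even ndf:
    exp(-x/2) * sum_{i=0}^{ndf/2 - 1} (x/2)**i / i!"""
    assert ndf % 2 == 0, "this closed form holds for even ndf"
    half = chi2 / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(ndf // 2))

# chi2 = 9.5 and 6.9 with ndf = 12 give ~0.66 and ~0.86, matching the
# 66% and 86% probabilities quoted in the text.
```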
\begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{wEta} \caption{(Color online) Lepton pseudorapidity distribution of \Wpl\ (left) and \Wmi\ (right) candidate events, background components, and \Wtoenu\ MC signal for comparison.} \label{Fig:wEta} \end{figure} \subsection{\label{subsec:Zback}\texorpdfstring{$\bm{\Zvar}$ Background Estimation}{Z Background Estimation}} The background for the \Zee\ signal is expected to be very small due to the coincidence requirement of a pair of oppositely charged, high $\ET$, isolated $e^+$ and $e^-$. Background contributions from electroweak processes were estimated using the MC samples described in Sec.~\ref{sec:data}. Within the defined mass window to be used for the cross section ($70\LESS \mee\LESS110~\GeVcc$), the background contributions were determined to be 0.1 $^{+0.3}_{-0.1}$ events from \Wtoenu\ and negligible from the other \Zvar\ decay channels. The \Wtoenu\ background uncertainty was estimated using the 68\% C.L. interval of the unified statistical approach described in Ref.~\cite{Feldman:1997qc}. An accurate data-driven estimate of the QCD background is difficult to obtain for the \Zvar\ signal due to the limited statistics of the data set. One method for estimating the background is to determine the number of \epm\ pairs that satisfy all the \Zee\ signal criteria other than the opposite charge-sign requirement. However, no same charge-sign pairs were observed in the data; the QCD background was therefore found to be consistent with zero. An upper bound on the QCD background systematic uncertainty was estimated to be 1.3 events using a 68\% C.L. interval \cite{Feldman:1997qc}. \section{\label{sec:xsec}\texorpdfstring{The $\bm{\Wvar}$ and $\bm{\Zvar}$ Cross Sections}{The W and Z Cross Sections}} The \Wvar\ and \Zvar\ production cross sections were measured from the sample of events which satisfy the fiducial and kinematic requirements of this analysis.
As stated previously, only \epm\ candidates at mid-rapidity ($|\eta_e|\LESS1$) were considered in this analysis. Candidates for the \Wvar\ analysis must have $\EeT\GREATER25~\GeV$, and for the \Zvar\ analysis we required that both $e^+$ and $e^-$ have $\EeT\GREATER15~\GeV$ and $70\LESS \mee\LESS110~\GeVcc$. The cross sections measured within these constraints are defined as the fiducial cross sections, and can be written as: \begin{equation} \sigma_{\Wvar(\Zvar)}^{fid} \cdot \BR(\Wvar(\Zvar)\to e\nu(ee)) = \frac{N^{obs}_{\Wvar(\Zvar)} - N^{bkgd}_{\Wvar(\Zvar)}}{L \cdot \epsilon^{tot}_{\Wvar(\Zvar)}} \label{eqn:xSecFidW} \end{equation} where \begin{itemize} \item $N^{obs}_{\Wvar(\Zvar)}$ is the number of observed $\Wvar(\Zvar)$ candidates within the defined kinematic acceptance, which satisfy all the selection criteria described in Sec.~\ref{sec:signal}, \item $N^{bkgd}_{\Wvar(\Zvar)}$ is the total number of $\Wvar(\Zvar)$ background events within the defined kinematic acceptance described in Sec.~\ref{sec:background}, \item $\epsilon^{tot}_{\Wvar(\Zvar)}$ is the total efficiency correction described in Sec.~\ref{subsec:effic} below, \item and $L$ is the integrated luminosity of the data set discussed in Sec.~\ref{sec:data}. \end{itemize} To determine the total production cross sections, it is necessary to apply acceptance correction factors, $A_{\Wvar(\Zvar)}$, to the fiducial cross sections defined above, to account for the fiducial and kinematic constraints imposed in the analysis. The total production cross sections are then defined via the relations \begin{equation} \sigma_{\Wvar}^{tot} \cdot \BR(\Wtoenu) = \frac{\sigma_{\Wvar}^{fid} \cdot \BR(\Wtoenu)}{A_{\Wvar}} \label{eqn:xSecTotW} \end{equation} \begin{equation} \sigma_{\Zvar}^{tot} \cdot \BR(\Zee) = \frac{\sigma_{\Zvar}^{fid} \cdot \BR(\Zee)}{A_{\Zvar}}. 
\label{eqn:xSecTotZ} \end{equation} The determination of the acceptance corrections necessary to extract the total production cross sections is discussed in Sec.~\ref{subsec:accept}. \subsection{\label{subsec:effic}The Efficiency Correction Factors} The efficiency corrections were obtained using the \Wtoenu\ and \Zee\ \PYTHIA\ MC samples described in Sec.~\ref{sec:data}. Only the subset of events from the MC samples which satisfy the acceptance conditions for the fiducial cross sections were used in the efficiency calculations, as the acceptance correction is accounted for separately in the definition of the total cross section. The total efficiency can be factorized into four conditional efficiency terms, written as: \begin{equation} \epsilon_{\Wvar(\Zvar)}^{tot} = \epsilon_{\Wvar(\Zvar)}^{trig} \cdot \epsilon_{\Wvar(\Zvar)}^{vert} \cdot \epsilon_{\Wvar(\Zvar)}^{trk} \cdot \epsilon_{\Wvar(\Zvar)}^{algo} . \label{eqn:effic} \end{equation} The values for each of the terms in Eq.~\ref{eqn:effic} are listed in Table \ref{Table:effic}, along with their uncertainties, for the \Wpl, \Wmi, and \Zvar\ signals. The remainder of this section describes how those values were obtained. The trigger efficiency, $\epsilon^{trig}$, is the fraction of MC signal events which satisfy the online trigger condition defined in Sec.~\ref{sec:data}. This was determined by emulating the trigger condition used online in the MC. Due to the relatively wide \zvertex\ distribution of our data sample, some candidates may satisfy the $|\eta_e|\LESS1$ kinematic condition at the MC generator level, but will fall outside the acceptance of the \BEMC. This was observed in the \Wvar\ analysis as an $\EeT$-dependent trigger efficiency due to the correlation of the $\EeT$ and $\eta_e$ of the decay \epm. An $\EeT$-dependent trigger efficiency correction was therefore used in the computation of the \Wpm\ cross sections. 
This effect also leads to a notably smaller average \Wmi\ trigger efficiency relative to \Wpl, as the $\eta_e$ distribution is expected to be peaked more strongly at zero for the \Wpl\ candidates than \Wmi, which is consistent with Fig.~\ref{Fig:wEta}. To estimate the uncertainty on $\epsilon^{trig}$, the \BEMC\ energy scale was varied by its uncertainty of $\pm$3.6\%. Because the offline kinematic requirement of $\EeT\GREATER25~\GeV$ was significantly larger than the trigger threshold of 13~\GeV, for this analysis we observed only small variations in the trigger efficiency due to the uncertainty of the \BEMC\ energy calibration. The vertex efficiency, $\epsilon^{vert}$, is defined as the fraction of events satisfying the trigger which contain a reconstructed primary vertex within the fiducial cut of $|\zvertex|\LESS100~\cm$, as described in Sec.~\ref{sec:signal}. The tracking efficiencies for the \Wvar\ and \Zvar\ decay $\epm$s are defined as follows. For \Wvar\ events with a reconstructed primary vertex, $\epsilon_{W}^{trk}$ is the efficiency for reconstructing a single \TPC\ track which satisfies the track requirements in Sec.~\ref{subsec:epm_isolation}; for \Zee\ events, however, the tracking efficiency, $\epsilon_{Z}^{trk}$, is the efficiency for reconstructing \textit{two} \TPC\ tracks satisfying those conditions. In comparing the reconstructed \TPC\ track 1/\pT\ distributions between data and MC, a slightly worse resolution was seen in the data. This was accounted for by re-weighting the MC distributions to match the data. The uncertainty on the tracking efficiency was estimated from the error in this re-weighting resulting from the limited statistics of the data distribution. Finally, the algorithm efficiency, $\epsilon^{algo}$, is the fraction of events with one (two) reconstructed \epm\ candidate \TPC\ tracks, which satisfy the remaining \Wvar\ (\Zvar) selection criteria.
As discussed in Sec.~\ref{sec:signal}, these remaining selection criteria include reconstruction of \BEMC\ clusters, matching extrapolated track and cluster positions, isolation requirements, and finally the signed $P_T$-balance and pair of opposite charge-sign candidate requirements for \Wvar\ and \Zvar\ events, respectively. A weak $\EeT$ dependence was observed in the algorithm efficiency for the \Wtoenu\ MC due mainly to the efficiency of the $\EeT/\ET^{\DELTA\RM{R}\LESS0.7}$ isolation cut being reduced at low $\EeT$. Thus, an $\EeT$-dependent algorithm efficiency correction was used in the computation of the \Wpm\ cross sections. The uncertainty on $\epsilon^{algo}$ was determined by varying the \BEMC\ scale uncertainty, as was done for the trigger efficiency. \begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|} \hline & \Wptoenu\ & \Wmtoenu\ & \Zee\ \\ \hline $\epsilon^{trig}$ & 0.857 $\pm$ 0.007 & 0.825 $\pm$ 0.007 & 0.968 $\pm$ 0.006 \\ $\epsilon^{vert}$ & 0.881 $\pm$ 0.005 & 0.886 $\pm$ 0.006 & 0.938 $\pm$ 0.006 \\ $\epsilon^{trk}$ & 0.741 $\pm$ 0.030 & 0.748 $\pm$ 0.031 & 0.511 $\pm$ 0.032 \\ $\epsilon^{algo}$ & 0.892 $\pm$ 0.024 & 0.892 $\pm$ 0.024 & 0.730 $\pm$ 0.024 \\ \hline $\epsilon^{tot}$ & 0.498 $\pm$ 0.026 & 0.488 $\pm$ 0.026 & 0.338 $\pm$ 0.024 \\ \hline \end{tabular} \caption{Summary of conditional efficiency correction factors included in Eq.~\ref{eqn:effic}. The average values for the trigger and algorithm efficiencies for the \Wpm\ analysis are given here, however an $\EeT$-dependent correction was used for the measured cross section, as described in the text.} \label{Table:effic} \end{table} \subsection{\label{subsec:xSecFid}The Measured Fiducial Cross Sections} The fiducial cross sections are calculated according to Eq.~\ref{eqn:xSecFidW}, and the measured values are summarized in Tables \ref{Table:xSecFidW} and \ref{Table:xSecFidZ} for \Wpm\ and \Zvar\, respectively. 
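As a consistency check, Eqs.~\ref{eqn:effic} and \ref{eqn:xSecFidW} can be evaluated directly from the rounded central values in Tables \ref{Table:effic}, \ref{Table:xSecFidW}, and \ref{Table:xSecFidZ}. Because the published \Wpm\ results use $\EeT$-dependent trigger and algorithm efficiencies, this sketch (illustrative only) reproduces the quoted fiducial cross sections only to within rounding:

```python
# Cross-check: total efficiency factorization and fiducial cross section,
# sigma_fid = (N_obs - N_bkgd) / (L * eps_tot), using rounded table values.
eff = {  # (trigger, vertex, tracking, algorithm) efficiencies
    "W+": (0.857, 0.881, 0.741, 0.892),
    "W-": (0.825, 0.886, 0.748, 0.892),
    "Z":  (0.968, 0.938, 0.511, 0.730),
}
counts = {  # (N_obs, N_bkgd)
    "W+": (496, 36.6),
    "W-": (148, 25.8),
    "Z":  (13, 0.1),
}
lumi = 13.2  # integrated luminosity, pb^-1

for ch, (e_trig, e_vert, e_trk, e_algo) in eff.items():
    e_tot = e_trig * e_vert * e_trk * e_algo
    n_obs, n_bkgd = counts[ch]
    sigma_fid = (n_obs - n_bkgd) / (lumi * e_tot)
    print(f"{ch}: eps_tot = {e_tot:.3f}, sigma_fid = {sigma_fid:.1f} pb")
```

The recomputed values agree with the tabulated $\epsilon^{tot}$ (0.498, 0.488, 0.338) and $\sigma^{fid}$ (70.0, 19.2, 2.9 pb) to within the rounding of the inputs and the $\EeT$-dependent corrections.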
The dominant uncertainty for both the \Wpl\ and \Wmi\ cross sections comes from the systematic uncertainty in the measured luminosity of the data sample. The \Zvar\ cross section measurement, however, is currently dominated by the statistical uncertainty. \begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{$\Wptoenu$} & \multicolumn{4}{c|}{$\Wmtoenu$} \\ \hline & value & stat & syst & lumi & value & stat & syst & lumi \\ \hline $N^{obs}$ & 496 & 22.3 & - & - & 148 & 12.2 & - & - \\ $N^{bkgd}$ & 36.6 & 3.5 & $^{+5.4}_{-5.2}$ & - & 25.8 & 3.2 & $^{+3.6}_{-2.8}$ & - \\ $\epsilon^{tot}$ & 0.498 & 0.006 & 0.025 & - & 0.488 & 0.007 & 0.025 & - \\ $L~(\pbinv)$ & 13.2 & 0.2 & - & 1.7 & 13.2 & 0.2 & - & 1.7 \\ \hline $\sigma^{fid}~(\pb)$ & 70.0 & 3.5 & 3.5 & 9.1 & 19.2 & 2.1 & 1.1 & 2.5 \\ \hline \end{tabular} \caption{Summary of input and measured values for the $\Wtoenu$ fiducial cross sections, with their statistical, systematic, and luminosity uncertainties. 
As noted in the text, an $\EeT$-dependent efficiency correction factor is used for the cross section measurement, and only the average value is shown here.} \label{Table:xSecFidW} \end{table} \begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{$\Zee$} \\ \hline & value & stat & syst & lumi \\ \hline $N^{obs}$ & 13 & 3.6 & - & - \\ $N^{bkgd}$ & 0.1 & 0.1 & $^{+1.3}_{-0.0}$ & - \\ $\epsilon^{tot}$ & 0.338 & 0.012 & 0.021 & - \\ $L~(\pbinv)$ & 13.2 & 0.2 & - & 1.7 \\ \hline $\sigma^{fid}~(\pb)$ & 2.9 & 0.8 & $^{+0.2}_{-0.3}$ & 0.4 \\ \hline \end{tabular} \caption{Summary of input and measured values for the \Zee\ fiducial cross section, with their statistical, systematic, and luminosity uncertainties.} \label{Table:xSecFidZ} \end{table} \subsection{\label{subsec:accept}The Acceptance Correction Factors} As stated previously, to determine the total cross sections, acceptance correction factors, $A_{\Wvar(\Zvar)}$, must be used to account for the fiducial and kinematic acceptance requirements of the analysis, which are defined at the beginning of Sec.~\ref{sec:xsec}. $A_{\Wvar(\Zvar)}$ were calculated using the \FEWZ\ program \cite{Melnikov:2006kv}, which provides cross section calculations for \Wvar\ and \Zvar\ boson production up to NNLO in pQCD. Table~\ref{Table:accept} lists the values of the acceptance factors using the MSTW 2008 \cite{Martin:2009iq} and CTEQ 6.6 \cite{Nadolsky:2008zw} parton distribution function sets. The nominal values for the acceptance corrections, used in the total cross section measurements, were taken from the next-to-leading order (\NLO) calculation using the MSTW08 PDF set. Theoretical uncertainties in the calculation of these factors arise from several sources, including differences between PDF sets, uncertainties within a PDF set, and uncertainties in the modeling of the production process.
\begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|} \hline & $A_{\Wpl}$ & $A_{\Wmi}$ & $A_{\Zvar}$ \\ \hline LO MSTW08 & 0.591 & 0.444 & 0.377 \\ \NLO\ MSTW08 & 0.597 & 0.444 & 0.378 \\ \NNLO\ MSTW08 & 0.603 & 0.435 & 0.385 \\ \hline \NLO\ CTEQ 6.6 & 0.592 & 0.430 & 0.370 \\ \hline \end{tabular} \caption{ Summary of acceptance values calculated with the $\FEWZ$ program. The \NLO\ MSTW08 values are used for the total cross section calculations in Sec.~\ref{subsec:xSecTot}. } \label{Table:accept} \end{table} The uncertainty due to differences between PDF sets was taken to be the difference between the CTEQ 6.6 and MSTW08 acceptance values at NLO. Both groups provide error eigenvector PDF sets which were used to estimate the acceptance uncertainty, at the 90\% confidence level, within each set. The average of the CTEQ 6.6 and MSTW08 error eigenvector uncertainty was taken to be the uncertainty due to the PDF itself. Finally, the uncertainty in the modeling of the production process was estimated by comparing the acceptance values from calculations with different orders of QCD corrections, using the MSTW08 PDF set. The maximum difference from the nominal value (\NLO\ MSTW08) was taken as this final uncertainty contribution. Table \ref{Table:acceptUncert} summarizes the contributions to the uncertainties in the acceptance values. The individual contributions were added in quadrature to determine the total uncertainty for each acceptance factor. The $A_{\Wmi}$ uncertainties are significantly larger than those for $A_{\Wpl}$, driven primarily by the PDF-related errors. This is expected, due to the larger uncertainties in the $\bar{u}$ and $d$ quark PDFs with respect to those of the $\bar{d}$ and $u$ quarks. 
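The quadrature sums in Table \ref{Table:acceptUncert} can be reproduced from the individual contributions, taking the PDF term as the average of the MSTW08 and CTEQ 6.6 error-eigenvector uncertainties as described above. A short check (illustrative only):

```python
import math

# Relative uncertainties (%) from Table acceptUncert, in the order:
# (difference between PDFs, MSTW08 error PDFs, CTEQ 6.6 error PDFs, order)
contributions = {
    "W+": (1.0, 0.9, 0.9, 1.0),
    "W-": (3.2, 2.7, 4.5, 2.0),
    "Z":  (2.1, 1.2, 1.8, 1.9),
}

for boson, (diff, mstw, cteq, order) in contributions.items():
    pdf = 0.5 * (mstw + cteq)  # average of the two error-eigenvector estimates
    total = math.sqrt(diff**2 + pdf**2 + order**2)
    print(f"dA_{boson} = {total:.1f} %")  # 1.7, 5.2, 3.2 as tabulated
```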
\begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|} \hline & $\delta A_{\Wpl} (\%)$ & $\delta A_{\Wmi} (\%)$ & $\delta A_{\Zvar} (\%)$ \\ \hline Difference between PDFs & 1.0 & 3.2 & 2.1 \\ MSTW08 \NLO\ error PDFs & 0.9 & 2.7 & 1.2 \\ CTEQ 6.6 \NLO\ error PDFs & 0.9 & 4.5 & 1.8 \\ Calculation Order & 1.0 & 2.0 & 1.9 \\ \hline Total & 1.7 & 5.2 & 3.2 \\ \hline \end{tabular} \caption{ Summary of the relative uncertainties in the acceptance correction factors, $A_{\Wvar(\Zvar)}$, as computed by the \FEWZ\ program. } \label{Table:acceptUncert} \end{table} \subsection{\label{subsec:xSecTot}The Measured Total Cross Sections} The total cross sections are calculated according to Eqs. \ref{eqn:xSecTotW} and \ref{eqn:xSecTotZ}, by dividing the measured fiducial cross sections by the acceptance correction factors determined in the previous section. The results for $\pp \to \Wpm$ total production cross sections at \sqrts\ = 500~\GeV\ are the following: \begin{center} $\sigma_{\Wpl}^{tot} \cdot \BR(\Wptoenu)$ = 117.3 $\pm$ 5.9(stat) \\ $\pm$ 6.2(syst) $\pm$ 15.2(lumi) pb, \bigskip $\sigma_{\Wmi}^{tot} \cdot \BR(\Wmtoenu)$ = 43.3 $\pm$ 4.6(stat) \\ $\pm$ 3.4(syst) $\pm$ 5.6(lumi) pb. \end{center} The result for the $\pp \to \Zgam$ total production cross section at \sqrts\ = 500~\GeV\ in the invariant mass range of $70\LESS \mee\LESS110~\GeVcc$ is \begin{center} $\sigma_{\Zgam}^{tot} \cdot \BR(\Zgtoee)$ = 7.7 $\pm$ 2.1(stat) \\ $^{+0.5}_{-0.9}$(syst) $\pm$ 1.0(lumi) pb. \end{center} Figure \ref{Fig:xSecBR} shows the measured total cross sections, multiplied by the respective branching ratios, in comparison with the theoretical predictions at \NLO\ from the \FEWZ\ program using the MSTW08 PDF set. Measurements from other experiments at the $\rm{Sp \bar{p} S}$, Tevatron, RHIC, and LHC are also shown as a function of \sqrts\ for comparison. 
\begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{xsecBR} \caption{(Color online) Measurements of \Wvar\ and \Zvar\ total cross sections times branching ratio versus center-of-mass energy. For the \Wvar\ cross sections in \pp\ collisions, the closed symbols represent \Wpl\ and the open symbols represent \Wmi. The theory curves are from the \FEWZ\ program at \NLO\ using the MSTW08 PDF set.} \label{Fig:xSecBR} \end{figure} Theoretical predictions for the production cross sections computed by the \FEWZ\ \cite{Melnikov:2006kv} and fully resummed \RHICBOS\ \cite{Nadolsky:2003ga} calculations are shown in Table \ref{Table:xSecTheory}. The theoretical uncertainties were determined for the \FEWZ\ predictions using the 90\% confidence level error eigenvector PDF sets; error eigenvector sets are not provided for the \RHICBOS\ calculation. Variations in the strong coupling constant, $\alpha_s$, from the associated error PDF sets were considered as well, but the uncertainties were found to be negligible compared to the uncertainties from the PDFs. The theoretical predictions agree well with the measured cross sections within the theoretical and experimental uncertainties. Interestingly, differences between the MSTW08 and CTEQ 6.6 PDF sets result in significant differences in the predicted cross sections at \NLO. \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|} \hline & $\sigma^{tot}_{\Wpl} (\pb)$ & $\sigma^{tot}_{\Wmi} (\pb)$ & $\sigma^{tot}_{\Zvar} (\pb)$ \\ \hline \NLO\ MSTW08 & 132.4 $\pm$ 9.0 & 45.7 $\pm$ 3.6 & 10.8 $\pm$ 0.8 \\ \NNLO\ MSTW08 & 136.7 $\pm$ 9.5 & 48.1 $\pm$ 3.0 & 11.2 $\pm$ 0.8 \\ \hline \NLO\ CTEQ 6.6 & 121.8 $\pm$ 8.8 & 41.1 $\pm$ 4.3 & 9.8 $\pm$ 0.8 \\ \hline Resummed CTEQ 6.6 & 121.1 & 39.9 & - \\ \hline \end{tabular} \caption{ Summary of total cross section (times branching ratio) theoretical predictions at \sqrts\ = 500~\GeV\ calculated with the \FEWZ\ and \RHICBOS\ programs.
The \Zgam\ values are defined within the invariant mass range of $70\LESS \mee\LESS110~\GeVcc$. } \label{Table:xSecTheory} \end{table} \section{\label{sec:ratio}\texorpdfstring{The $\bm{\Wvar}$ Cross Section Ratio}{The W Cross Section Ratio}} The $\Wvar$ cross section ratio is defined as \begin{equation} R_\Wvar=\frac{\sigma^{fid}_{\Wpl}}{\sigma^{fid}_{\Wmi}} = \frac{N^{obs}_{\Wpl} - N^{bkgd}_{\Wpl}}{N^{obs}_{\Wmi} - N^{bkgd}_{\Wmi}} \cdot \frac{\epsilon^{tot}_{\Wmi}}{\epsilon^{tot}_{\Wpl}}. \end{equation} If the small contributions from strange quarks are neglected, this ratio should be equal to~\cite{Peng:1995ba} \begin{equation} R_\Wvar=\frac{u(x_1)\bar{d}(x_2)+\bar{d}(x_1)u(x_2)}{\bar{u}(x_1)d(x_2)+d(x_1)\bar{u}(x_2)}. \label{eqn:R_W} \end{equation} Measurements of the cross section ratio should therefore be sensitive to the flavor asymmetry of the antiquark sea in the Bjorken-$x$ range $0.1\LESSAPPROX\IT{x}\LESSAPPROX0.3$ probed at RHIC. Drell-Yan experiments \cite{Baldit:1994jk,Towell:2001nh} have measured a large asymmetry in this $x$ range, and precision measurements of $R_W$ at RHIC can provide independent constraints on the flavor asymmetry which are free from the assumption of charge symmetry required in Drell-Yan. Measurements of the lepton charge asymmetry at the LHC \cite{Aad201131,Chatrchyan:2011jz} provide similar constraints on the quark and antiquark PDFs, though at significantly lower $x$ due to the much higher energy of the collisions. The \Wvar\ cross section ratio was measured in two $|\eta_e|$ regions, as this coarsely constrains the $x$ of the partons involved in the \Wvar\ production. In each $|\eta_e|$ bin, the fiducial cross sections were computed using the same procedures described in Sec.~\ref{sec:xsec}, where the background and efficiencies were separately calculated for each charge and $|\eta_e|$ bin. 
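For illustration, the charge ratio integrated over the full $|\eta_e|\LESS1$ acceptance can be formed from the event counts and average efficiencies quoted earlier; the per-bin counts entering Table \ref{Table:xSecRatio} are not reproduced here, so this sketch only checks that the integrated value lies between the two binned results:

```python
# Charge ratio R_W from the |eta_e| < 1 integrated counts and average
# efficiencies; the published binned values use per-bin counts and
# efficiencies not listed in this sketch.
n_obs = {"W+": 496, "W-": 148}
n_bkgd = {"W+": 36.6, "W-": 25.8}
eff = {"W+": 0.498, "W-": 0.488}

r_w = ((n_obs["W+"] - n_bkgd["W+"]) / (n_obs["W-"] - n_bkgd["W-"])
       * eff["W-"] / eff["W+"])
print(f"R_W (|eta_e| < 1) = {r_w:.1f}")  # ~3.7, between the binned 2.9 and 4.3
```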
The luminosity, and its sizable uncertainty, cancel in the cross section ratio, significantly reducing the systematic uncertainty with respect to the individual \Wpl\ and \Wmi\ cross sections. \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|} \hline & $R_\Wvar \pm$ (stat) $\pm$ (syst) \\ \hline $|\eta_e|\LESS0.5$ & 4.3 $\pm$ 0.7 $\pm$ 0.3 \\ $0.5\LESS|\eta_e|\LESS1.0$ & 2.9 $\pm$ 0.5 $\pm$ 0.2 \\ \hline \end{tabular} \caption{Measurements of the \Wvar\ cross section ratio, $R_\Wvar$, for the two \epm\ pseudorapidity bins. } \label{Table:xSecRatio} \end{table} \begin{figure}[!ht] \includegraphics[width=1.0\columnwidth]{xSecRatio} \caption{(Color online) \Wvar\ cross section ratio, $R_\Wvar$, for the two \epm\ pseudorapidity bins. Theory calculations at NLO from the \FEWZ\ program using the MSTW08 and CTEQ 6.6 PDF sets (with 90\% confidence level error eigenvector uncertainties) are shown for comparison.} \label{Fig:xSecRatio} \end{figure} Our results for the measured cross section ratio are listed in Table \ref{Table:xSecRatio}. Figure \ref{Fig:xSecRatio} shows the cross section ratio as a function of $|\eta_e|$, where the statistical and systematic uncertainties of the data have been added in quadrature. Also displayed in Fig.~\ref{Fig:xSecRatio} are theoretical calculations of the cross section ratio computed with the \FEWZ\ program at \NLO. Both the MSTW08 and CTEQ 6.6 PDF sets were used to compute the ratio; the error bands shown are the 90\% confidence level error eigenvector uncertainties. The predictions agree with the measured values within the large uncertainties, which are dominated by the statistical precision of the \Wmi\ yield. \section{\label{sec:summary}Summary} We have presented measurements of the \Wptoenu, \Wmtoenu, and \Zgtoee\ production cross sections in proton-proton collisions at $\sqrts = 500~\GeV$ by the \STAR\ detector at \RHIC.
Theoretical predictions based on pQCD calculations are in good agreement with the measured cross sections. In addition, a first measurement of the \Wvar\ cross section ratio is presented. Future high statistics measurements of the \Wvar\ cross section ratio at RHIC will provide a new means of studying the flavor asymmetry of the antiquark sea which is complementary to fixed-target Drell-Yan and LHC collider measurements. \bigskip \begin{acknowledgments} We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL and the Open Science Grid consortium for providing resources and support. We are grateful to F. Petriello for useful discussions. This work was supported in part by the Offices of NP and HEP within the U.S. DOE Office of Science, the U.S. NSF, the Sloan Foundation, the DFG cluster of excellence `Origin and Structure of the Universe' of Germany, CNRS/IN2P3, FAPESP CNPq of Brazil, Ministry of Ed. and Sci. of the Russian Federation, NNSFC, CAS, MoST, and MoE of China, GA and MSMT of the Czech Republic, FOM and NWO of the Netherlands, DAE, DST, and CSIR of India, Polish Ministry of Sci. and Higher Ed., Korea Research Foundation, Ministry of Sci., Ed. and Sports of the Rep. of Croatia, and RosAtom of Russia. \end{acknowledgments}
\section{Introduction} One of the topics in the modern field of high-energy astrophysics is the origin and propagation of cosmic rays. However, the directional information of the cosmic rays is lost due to their interaction with the interstellar magnetic field. The observation of the gamma-ray emission produced by the interaction of cosmic rays with interstellar media (e.g. radiation fields, matter) is one of the tools that helps trace the origin, acceleration, propagation and distribution of cosmic rays through the galaxy. Probing the flux of cosmic rays in distant galactic regions can be achieved by measuring the gamma-ray emission from Giant Molecular Clouds (GMCs) that are located far from cosmic-ray sources, i.e. passive clouds. Because the gamma-ray emission from GMCs is proportional to the cosmic-ray flux, this provides an indirect measurement of the galactic cosmic-ray flux that can be compared to the cosmic-ray flux measured at Earth \cite{casanova2010}. The gamma-ray signal can also help distinguish between its possible hadronic or leptonic origin, leading to a study of the composition and origin of the cosmic rays. In the case of emission from the Fermi Bubbles specifically, constraining the mechanism of gamma-ray production can point to their origin \cite{crocker11, Cheng11, Guo12, Mou14} and give an understanding of the evolution of our galaxy. The HAWC gamma-ray observatory can search for large-scale structures thanks to its large field of view of 2\,sr and high duty cycle of $> 95\%$. HAWC is sensitive to gamma rays with energies between 100\,GeV and 100\,TeV. It is located on the volcano Sierra Negra in the state of Puebla, Mexico, at an altitude of 4100\,m a.s.l. HAWC uses the water Cherenkov technique to detect the electromagnetic component of the shower fronts of extensive air showers that reach the ground.
From the footprint of the shower, the direction of the primary gamma ray or cosmic ray that interacts with the atmosphere is reconstructed. In this presentation, data recorded by the HAWC observatory is used to search for a gamma-ray signal from large galactic structures. \section{Giant Molecular Clouds} GMCs are dense concentrations of interstellar gas containing masses around $10^4 - 10^6 \, M_{\odot}$ and with sizes of $50 - 200 \, \text{pc}$. They are composed mainly of cold, dark dust and molecular gas --- mostly molecular hydrogen and helium. GMCs are the main factories of stars in the galaxy. The gamma-ray flux produced by the interaction of cosmic rays with a GMC is proportional to \begin{equation}\label{eq:flux} F_{\gamma} \propto \Phi_{CR}\frac{M_5}{d_{kpc}^2}, \end{equation} where $\Phi_{CR}$ is the cosmic-ray flux, $M_5 = M/10^5M_{\odot}$ is the mass of the molecular cloud, and $d_{\text{kpc}} = d / 1\,\text{kpc}$ is the distance to the molecular cloud~\cite{casanova2010,aharonian90}. Assuming that $\Phi_{CR}$ is equal to the locally measured cosmic-ray flux, the gamma-ray flux from equation \ref{eq:flux} can be estimated as \begin{align}\label{eq:flux2} F_{\gamma} = \left\{ \begin{array}{cc} 1.45\times10^{-13}E_{\text{TeV}}^{-1.75} (M_5/d_{\text{kpc}}^2) \,\text{cm}^{-2} \, \text{s}^{-1} & \hspace{5mm} 100\,\text{MeV} <E_{\gamma}<\, 1 \, \text{TeV} \\ & \\ 2.85\times10^{-13}E_{\text{TeV}}^{-1.6} (M_5/d_{\text{kpc}}^2) \,\text{cm}^{-2} \, \text{s}^{-1} & \hspace{5mm} E_{\gamma}>\, 1 \, \text{TeV} \\ \end{array}, \right. \end{align} where $E_{\text{TeV}} = E/1\,\text{TeV}$ and $F_{\gamma}$ is the energy-integrated flux \cite{aharonian90}. GMCs have been observed mostly in the radio and infrared part of the electromagnetic spectrum since optical photons are not able to penetrate these dense regions. The most recent survey of GMCs has been done by the CfA-Chile survey.
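Equation \ref{eq:flux2} can be evaluated directly for the clouds considered below, using the central masses and distances listed in Table \ref{tab:gmcs}. A rough sketch (central values only, no uncertainty propagation):

```python
# Energy-integrated flux above E from Eq. (flux2), E > 1 TeV branch:
# F(>E) = 2.85e-13 * E_TeV**-1.6 * (M5 / d_kpc**2)  [cm^-2 s^-1]
clouds = {  # name: (M / 1e5 Msun, distance in kpc), central values
    "Aquila Rift": (1.5, 0.225),
    "Taurus":      (0.2, 0.135),
    "Hercules":    (0.5, 0.200),
}

def flux_above(e_tev, m5, d_kpc):
    return 2.85e-13 * e_tev**-1.6 * m5 / d_kpc**2

for name, (m5, d) in clouds.items():
    print(f"{name}: F(>1 TeV) = {flux_above(1.0, m5, d):.2e} cm^-2 s^-1")
```

With these inputs, Aquila Rift has the largest $M_5/d_{\text{kpc}}^2$ and hence the largest predicted flux of the three clouds.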
The survey is based on the observation of the CO-115\,GHz frequency \cite{dame} and it is shown in Figure \ref{fig:survey}. \begin{figure} \centering \includegraphics[scale=0.28]{CO_map.png} \caption{Distribution of CO gas from the CfA-Chile survey \cite{dame}.} \label{fig:survey} \end{figure} An overlap of 760 days of HAWC data, in the galactic region $0^{\degree} < l < 90^{\degree}$, with a contour from the CfA-Chile survey is shown in Figure \ref{fig:cont}. \begin{figure} \centering \includegraphics[scale=0.4]{inner_Galaxy5.png} \caption{HAWC observations with CO maps contour.} \label{fig:cont} \end{figure} \subsection{Sensitivity to Molecular Clouds} Using equation \ref{eq:flux2}, we compare the value of the expected flux to the sensitivity of the HAWC detector to extended sources. For the calculation of the sensitivity we assume disc regions of $3^{\degree}$ and $5^{\degree}$ due to the variety of the morphology of the GMCs. For the sensitivity we apply the procedure described in \cite{kashyap} \footnote{Named upper limit instead of sensitivity in the publication}. The probability for false positives is set to $\alpha = 0.003 \, (3\sigma)$ and $\alpha = 0.000006 \, (5\sigma)$, and the probability of detection is set to $\beta = 0.5$. Figure \ref{fig:sensi} shows the sensitivity plot compared to the predictions from the GMCs Aquila Rift, Taurus and Hercules. Table \ref{tab:gmcs} describes the properties of the GMCs. HAWC does not expect to detect any of these objects with the current dataset (760 days). However, HAWC may be sensitive to the GMCs at the $3\sigma$ level with its 5-year dataset. \begin{figure} \centering \includegraphics[scale=0.4]{sensi.png} \includegraphics[scale=0.4]{sensi_5sig.png} \caption{HAWC sensitivity to extended sources and predicted integral fluxes of the GMCs in their respective declination.
The error bars are calculated from the respective mass and distance errors given by the references.} \label{fig:sensi} \end{figure} \begin{table}[!h] \centering \begin{tabular}{|c|c|c|c|c|} \hline GMC & Mass & Distance & Decl. Center & Extension \\ \hline Aquila Rift & $1.5\times10^5 \, M_{\odot}$ \cite{aqher} & $225\pm55\,\text{pc}$ \cite{aq1} & $-7.6^{\degree}$ & $<$0.068 sr \\ \hline Taurus & $0.2\times10^5 \, M_{\odot}$ \cite{tau} & $135 \pm 20\, \text{pc}$ \cite{her1} & $25.8^{\degree}$ & $<$0.203 sr\\ \hline Hercules & $0.5\times10^5 \, M_{\odot}$ & $200 \pm 30\,\text{pc}$ \cite{her1} & $14.7^{\degree}$ & $<$0.013 sr\\ \hline \end{tabular} \caption{Description of GMCs. The mass of Hercules is assumed since no value was found in the literature.} \label{tab:gmcs} \end{table} Zoomed-in views of the GMCs in the HAWC map are shown in Figures \ref{fig:aqexc}, \ref{fig:taexc} and \ref{fig:herexc}. \begin{figure}[!ht] \centering \includegraphics[scale=0.45]{aquila.png} \caption{Aquila Rift is located in the region $10^{\degree} < l < 35^{\degree}$ and $0^{\degree} < b < 15^{\degree}$. White is the contour line of the CO-gas map. } \label{fig:aqexc} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.45]{taurus.png} \caption{Taurus is located in the region $150^{\degree} < l < 175^{\degree}$ and $-28^{\degree} < b < -2^{\degree}$. White is the contour line of the CO-gas map. } \label{fig:taexc} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.5]{hercules.png} \caption{Hercules is located in the region $40^{\degree} < l < 48^{\degree}$ and $7^{\degree} < b < 11^{\degree}$. White is the contour line of the CO-gas map. } \label{fig:herexc} \end{figure} \section{\textit{Fermi} Bubbles} The \textit{Fermi}-LAT Bubbles are two bubble-like structures extending $55^{\degree}$ above and below the Galactic plane. They were discovered after looking for a counterpart of the microwave haze in data from the \textit{Fermi} Telescope \cite{bubble1}.
Several models exist in the literature that try to explain the origin of the \textit{Fermi} Bubbles. These include: an outflow generated by jet activity of the nucleus of our galaxy \cite{Guo12}, a wind from long-timescale star formation \cite{crocker11}, periodic star capture processes by the supermassive black hole in the Galactic center \cite{Cheng11}, or winds produced by the hot accretion flow in Sgr A* \cite{Mou14}. These models can be constrained by measuring the energy spectrum of the gamma-ray emission. For example, if the gamma-ray emission cannot be explained by hadronic processes, the wind from long-timescale star formation model could be ruled out. \subsection{Method} We use HAWC data, corresponding to the dates of 2014 November 27 to 2016 February 11, to search for gamma-ray emission from the Northern \textit{Fermi} Bubble region. The main challenge of the analysis is to estimate the background. First, we need to distinguish between the shower signatures of cosmic rays and gamma rays that deposit their energy in the HAWC observatory. Then we need to estimate the isotropic flux by the direct integration method \cite{atkins03}. Because direct integration requires stable performance from the detector, the lifetime of the analysis is reduced to 290 days. Finally, since the integration time used is 24 hours due to the size of the \textit{Fermi} Bubbles, effects from the large-scale anisotropy need to be removed \cite{aniso14}. The analysis is done in 7 analysis bins defined in a similar way as in \cite{crabpaper}.
After taking into account these effects, the excess calculation is given by \begin{equation} G'_i = \varepsilon_{G,i} \frac{E'_i-\varepsilon_{C,i}E_i}{\varepsilon_{G,i}-\varepsilon_{C,i}}, \end{equation} where $G'_i$ is the final excess after corrections in pixel $i$; $E'_i$ is the excess in pixel $i$ without corrections after applying gamma-hadron cuts; $E_i$ is the excess in pixel $i$ without corrections before applying gamma-hadron cuts; and $\varepsilon_{G,i}$ and $\varepsilon_{C,i}$ are the gamma and hadron efficiencies after applying gamma-hadron cuts. For more details on the analysis see \cite{hawcbubble}. \subsection{Upper Limits} Since no significant excess was found, we proceeded to calculate upper limits on the flux. Figure \ref{fig:bubble} shows the data points from the \textit{Fermi} measurements, together with the HAWC upper limits. The energy bins are obtained by combining the analysis bins with a weighted average (see \cite{hawcbubble} for the description). The plot also features predictions from two hadronic and two leptonic models, all obtained from \cite{ackerman14}. The leptonic models are obtained from an electron spectrum with the shape of a power law with an exponential cutoff, interacting with the interstellar radiation field at 5\,kpc from the Galactic plane and with the cosmic microwave background. The hadronic models assume a power law and a power law with a cutoff for the proton spectrum. The protons interact with the interstellar medium and produce photons through pion decay. The IceCube model is obtained from \cite{lunardini15}; it is the counterpart of the neutrino flux model that best fits the IceCube data. Our result is not able to constrain the models at energies below 1\,TeV. At higher energies, however, it implies for a hadronic model that there is a cutoff in the proton spectrum. \begin{figure} \centering \includegraphics[scale=0.4]{figure9.png} \caption{Measured flux of the Fermi Bubbles with HAWC upper limits.
Hadronic and leptonic models that explain the emission of the Bubbles are also overlaid. } \label{fig:bubble} \end{figure} \section{Conclusion} Observations of large gamma-ray structures can give us insight into how cosmic rays propagate and are distributed in the Galaxy, as well as into the mechanisms that produce them. This information is useful for understanding the evolution of our Galaxy. Using data from the HAWC observatory, we searched for a gamma-ray signal from three GMCs and the Fermi Bubbles. In the case of the Fermi Bubbles, we calculated upper limits at the $95\%$ C.L. The upper limits constrain some hadronic models, including a neutrino model that describes the IceCube data. \acknowledgments{ We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnolog\'{\i}a (CONACyT), M{\'e}xico (grants 271051, 232656, 260378, 179588, 239762, 254964, 271737, 258865, 243290, 132197), Laboratorio Nacional HAWC de rayos gamma; L'OREAL Fellowship for Women in Science 2014; Red HAWC, M{\'e}xico; DGAPA-UNAM (grants RG100414, IN111315, IN111716-3, IA102715, 109916, IA102917); VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant DEC-2014/13/B/ST9/945; Coordinaci{\'o}n de la Investigaci{\'o}n Cient\'{\i}fica de la Universidad Michoacana. Thanks to Luciano D\'{\i}az and Eduardo Murrieta for technical support. }
\subsection{Introduction} Phase-coherent transport in mesoscopic superconductor/normal metal ($S/N$) systems has been an active area of research during the last decade \cite{LR}. The interest in the theoretical investigations was stimulated by impressive technological advances and by experimental activity in studying various properties of small mesoscopic structures \cite{r1,r2,r3,r4,r5,r6,r7,r8,r9}. Interesting phenomena in mesoscopic systems are due to the importance of both the phase coherence established in the $s$ constriction by the proximity effect and the significant departure of quasiparticles from equilibrium. This is particularly true for two-barrier structures $N-s-S$ with barriers at the interfaces between the $N$ and $S$ electrodes connected by a superconducting constriction $s$ of length $d$. The dimensions of the constriction transverse to the current direction are assumed to be small in comparison with the London penetration depth in the $S$ electrode. We consider the system with diffusive transport, i.e. we suppose that the mean free path $l$ in the $s$ region is small with respect to the constriction length $d$. Because momentum is not conserved, the interference of normal electron wave functions related to reflections from the barriers is not essential. Nevertheless the coherence of different (ordinary and Andreev) reflection processes related to the condensate wave function and the nonzero order parameter $\Delta$ in the superconductor $s$ is very important, because the inter-barrier distance $d$ is supposed to be small in comparison with $\sqrt{\hbar D/\Delta }$, where $D=lv_F/3$ is the diffusion coefficient. In what follows we assume that the transparencies of both barriers ${\mathcal D}_{1,2}$ (averaged over momentum directions) are small enough that the main contribution to the resistance of the system comes from the barrier resistances.
Tunneling processes determine the escape rates $1/\tau _{b1,2}={\mathcal D}_{1,2}v_F/4d$ from the $s$ region; the corresponding dwell times $\tau _{b1,2}$ are supposed to be shorter than the inelastic relaxation time $\tau _{in}$ in the superconductor $s$, so that the following conditions should be fulfilled \begin{equation} \tau _{dif}\ll \tau _{b1,2}\ll \tau _{in} \label{c1} \end{equation} where $\tau _{dif}=\hbar /(D/d^2)\;$is the diffusion time of quasiparticles through the length $d$. It is clear that the quantum nature of the tunneling processes becomes more pronounced if the tunneling rates $\hbar /\tau _{b1,2}\;$are comparable with the characteristic scale of the quasiparticle energy, \begin{equation} \hbar /\tau _{b1,2}\sim \Delta , \label{c2} \end{equation} because under this condition the classical notion of a quasiparticle, whose dwell time should be longer than $\hbar$ divided by its energy, loses its sense. Nevertheless the Green's function approach enables one to obtain the quantum kinetic equations as given in \cite{LO}, which are valid beyond the classical limits, i.e. when the quasiparticle energy is not large compared to the tunneling rate. Note that under the conditions (\ref{c1}) the proximity effect, i.e. the influence of the $S\;$and $N\;$electrodes on the condensate wave function and on the order parameter in the $s$ region, is strong. We also note that unusual features of the transport properties are due to the significant role of Andreev reflection processes in the considered system. These processes occur in the presence of two potential barriers (at $x=0$ and $x=d$) and the superconducting order parameter, which has a two-step form: $\Delta (x)=\Delta \theta (x)\theta (d-x)+\Delta _S$ $\theta (x-d)\exp (i\varphi )$, where $\varphi \;$is the phase difference between the superconductors arising at non-zero voltage $V$, and $\theta (x)$ is the Heaviside function.
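The hierarchy of length and time scales required by the two conditions above can be checked numerically; a minimal sketch in which all material parameters ($v_F$, $l$, $d$, $\Delta$, the barrier transparency) are illustrative assumptions, not values from the text:

```python
import math

hbar = 1.054e-34          # J*s
v_F  = 1.0e6              # Fermi velocity, m/s (assumed)
l    = 5.0e-9             # mean free path, m (assumed)
d    = 50.0e-9            # constriction length, m (assumed)
Delta = 1.6e-23           # order-parameter scale ~0.1 meV, in J (assumed)
D_barrier = 0.01          # barrier transparency (assumed)

D   = l * v_F / 3.0                    # diffusion coefficient D = l v_F / 3
xi  = math.sqrt(hbar * D / Delta)      # coherence scale sqrt(hbar D / Delta)
tau_dif = d**2 / D                     # diffusion time, ~ hbar / E_Th
tau_b   = 4.0 * d / (D_barrier * v_F)  # dwell time from the barrier escape rate

# diffusive two-barrier regime: l << d << xi and tau_dif << tau_b
print(l < d < xi, tau_dif < tau_b)          # expect: True True
# quantum regime: hbar/tau_b comparable with Delta
print(0.1 < (hbar / tau_b) / Delta < 10)    # expect: True
```

With these (assumed) numbers $\hbar/\tau_b \approx 0.3\,\Delta$, i.e. the tunneling rate is indeed of the order of the quasiparticle energy scale, which is the regime where the quantum kinetic equations are needed.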
As a result, the energy-dependent transmission coefficient ${\mathcal D}_\epsilon (\Delta ,\varphi )$ of quasiparticles with energy $\epsilon<\Delta _S$, which determines the current at low temperatures, appears to be a function strongly dependent on $V$ through the voltage dependence of $\Delta $ and $\varphi $. All these circumstances result in nontrivial features of the quasiparticle phase-coherent transport through the $N-s-S\;$system, which will be investigated in this paper. It should be noted that some of these phenomena have been studied in \cite{Z}. We therefore investigate the transport phenomena in more detail, with emphasis upon the case of a weak pairing electron interaction in the $s$ region, i.e. when the critical temperature $T_{c0}$ of the superconductor $s$ in the absence of the pair-breaking and proximity effects ($\tau _{b1,2}=\infty $) is small in comparison with the critical temperature of the $S$ electrode, $T_{cS}$. It will be shown that in spite of the small ratio $t_c=T_{c0}/T_{cS}\;$the properties of the $N-s-S\;$system may radically differ from the properties of a two-barrier $N-N-S\;$structure well studied in the limit $t_c=0$ \cite{VZK,r11}. We also study the zero-bias conductance as a function of the phase difference between the $S$ electrodes in a quasiparticle tunneling interferometer with two superconducting electrodes coupled by a mesoscopic superconductor $s$. It is shown that the amplitude of the conductance oscillation may exceed the conductance of this structure in the normal state. \subsection{The N-s-S system} We consider the $N-s-S\;$system shown in Fig.1a.
As in Refs.\cite{VZK,Z,VZ,Z1,N,r10,Spiv,r11,r12} we use the approach based on the equations for the quasiclassical Green's function $\check{G}=\check{G}({\bf r,p}_F;\epsilon )$, which is the $4\times 4$ supermatrix (see Ref.\cite{LO}) \[ \check{G}=\left( \begin{tabular}{ll} $\hat{G}^R$ & $\hat{G}^K$ \\ $\hat{0}$ & $\hat{G}^A$ \end{tabular} \right) \] consisting of the retarded $\hat{G}^R$, advanced $\hat{G}^A$, and Keldysh $\hat{G}^K$ Green's functions, which are $2\times 2$ matrices in Nambu space. Note that we suppose that a stationary solution is realized and the Green's functions do not depend on time. This non-obvious assumption is justified by the final result. The matrix $\hat{G}^K\;$is related to the matrix distribution function ${\it \hat{f}}={\it f}_0\hat{1}+{\it f}\hat{\sigma}_z$, \begin{equation} \hat{G}^K=\hat{G}^R{\it \hat{f}-\hat{f}}\hat{G}^A \label{GK} \end{equation} The matrices $\hat{G}^{R,A}\;$have the following form \[ \hat{G}^\mu =g^\mu \hat{\sigma}_z+\hat{f}^\mu ,\;\;\hat{f}^\mu =f^\mu i\hat{\sigma}_y\exp (i\hat{\sigma}_z\chi ) \] where $\chi \;$is the phase of the order parameter and $\mu = R(A)$. The current in the system is given by the following relation \begin{equation} I=\frac{\sigma {\it A}}8\mbox{Tr}\hat{\sigma}_z\int d\epsilon (\hat{G}^R \partial _x\hat{G}^K+\hat{G}^K\partial _x\hat{G}^A) \label{I} \end{equation} where ${\it A}=w_yw_z$\ is the cross-section area of the $s\;$region. The transverse dimensions $w_{y,z}\;$should be small compared to the London penetration depth. Therefore we need to solve a one-dimensional equation in the $s\;$region ($0<x<d$, the $x$-axis coincides with the direction of the current), where in the considered diffusive case the matrix $\check{G}\equiv \check{G}(x,\epsilon )$ averaged over the momentum direction obeys the equation (see Ref.\cite{LO}) \begin{equation} D\partial _x(\check{G}\partial _x\check{G})+i[\epsilon \check{\sigma}_z+\check{\Delta},\check{G}]=\check{0}.
\label{E1} \end{equation} $\check{\sigma}_z=\check{1}\hat{\sigma}_z\;$is the Pauli supermatrix, and the order parameter supermatrix in the film is $\check{\Delta}=\check{1}\hat{\Delta}\;$where \[ \hat{\Delta}=\left( \begin{tabular}{ll} $0$ & $\Delta $ \\ $-\Delta ^{*}$ & $0$ \end{tabular} \right) \] The order parameter is given by the self-consistency relation, which in the framework of the weak-coupling theory has the form \begin{equation} \hat{\Delta}=\lambda \int_0^{\omega _D}d\epsilon (\hat{f}^R{\it \hat{f}-\hat{f}}\hat{f}^A) \label{self} \end{equation} where the constant $\lambda \;$determines the critical temperature $T_{c0}\;$of the superconductor $s$ in the absence of pair-breaking factors and the proximity effect, \[ T_{c0}=1.14\omega _D\exp (-1/\lambda ) \] The matrix $\check{G}\;$obeys the normalization condition \begin{equation} \check{G}^2=1 \label{nc} \end{equation} In order to solve Eq.(\ref{E1}) we need to take into account the boundary conditions \cite{r13}, which in the diffusive case reduce to \cite{Z2} (see also \cite{LR}) \begin{equation} D(\check{G}\partial _x\check{G})(+0)=\epsilon _{b1}d[\check{G}(+0),\check{G}_N],\;D(\check{G}\partial _x\check{G})(d-0)=\epsilon _{b2}d[\check{G}_S,\check{G}(d-0)] \label{bc} \end{equation} where $\epsilon _{bj}=\rho D/2dR_{bj\Box }$, $R_{b1,2\Box }\;$is the interface resistance per unit area at the $N/s\;(x=0)\;$and $s/S\;(x=d)$ interfaces, $\check{G}_{S,N}\;$are the equilibrium Green's functions in the electrodes, and $\rho \;$is the normal-state specific resistivity of the superconductor $s$.
Note that the energies $\epsilon _{bj}\;$are connected with the characteristic dwell times: $\tau _{bj}=\hbar /\epsilon _{bj}$. In terms of the Thouless energy $E_{Th}=D/d^2$, the conditions (\ref{c1}) may be written \begin{equation} \hbar /\tau _{in}\ll \epsilon _{bj}\ll E_{Th} \label{c1a} \end{equation} Suppose that the length $d$ of the $s\;$region is small enough, i.e. $d\ll \sqrt{\hbar D/\Delta _S}$. Then the solution of Eq.(\ref{E1}) is readily found (see Appendix I). The retarded and advanced Green's functions are given by \begin{equation} \hat{G}^\mu (\epsilon )=g^\mu (\epsilon )\hat{\sigma}_z+\hat{f}^\mu (\epsilon )=\frac{\epsilon ^\mu (\epsilon )\hat{\sigma}_z+[\Delta i\hat{\sigma}_y+i\epsilon _{b2}\hat{f}_S^\mu (\epsilon )]}{\zeta ^\mu (\epsilon )} \label{Gm} \end{equation} where \begin{eqnarray} \zeta ^\mu (\epsilon ) &=&[(\epsilon ^\mu (\epsilon ))^2-\Delta ^2-2i\Delta \epsilon _{b2}f_S^\mu \cos \varphi +(\epsilon _{b2}f_S^\mu )^2]^{1/2}\nonumber \\ \epsilon ^{R,A}(\epsilon ) &=&\epsilon +i\epsilon _{b2}g_S^{R,A}(\epsilon ) \pm i\epsilon _{b1}\nonumber \end{eqnarray} The Keldysh function is given by Eq.(\ref{GK2}) and it is convenient to separate the anomalous part $\hat{G}_a^K$, \[ \hat{G}^K=\hat{G}^Rn-n\hat{G}^A+\hat{G}_a^K \] We have for $\hat{G}_a^K\;$ \begin{equation} \hat{G}_a^K=(\hat{E}_a^K-\hat{G}^R\hat{E}_a^K\hat{G}^A)\frac 1{(\zeta ^R+\zeta ^A)} \label{GKa} \end{equation} with the anomalous self-energy \begin{equation} \hat{E}_a^K=2i\epsilon _{b1}\{n_{-}(\epsilon )+\hat{\sigma}_z[n_{+}(\epsilon )-n(\epsilon )]\} \label{EKa} \end{equation} where $n(\epsilon )=\tanh (\epsilon /2T),$ \begin{equation} n_{\pm }(\epsilon )=[n(\epsilon +eV)\pm n(\epsilon -eV)]/2 \end{equation} Thus the anomalous part $\hat{G}_a^K\;$is determined by $\hat{E}_a^K$, which contains only the self-energy depending on the $N$-electrode Green's function $\hat{G}^K$.
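The retarded function written above lends itself to a quick numerical evaluation of the density of states $\nu=\mathop{\rm Re} g^R$; a minimal sketch with a BCS $S$ electrode, where the parameter values and the use of the principal complex square-root branch (to select the retarded solution for $\epsilon>0$) are assumptions:

```python
import cmath

def dos(eps, Delta, Delta_S, eb1, eb2, phi):
    """nu(eps) = Re g^R with g^R = eps^R / zeta^R (dimensionless units).

    Illustrative parameters; the small imaginary part implements the
    eps + i0 prescription of the BCS functions.
    """
    root = cmath.sqrt((eps + 1e-9j)**2 - Delta_S**2)
    g_S = eps / root          # BCS g_S^R
    f_S = Delta_S / root      # BCS f_S^R
    eps_R = eps + 1j * eb2 * g_S + 1j * eb1
    zeta_R = cmath.sqrt(eps_R**2 - Delta**2
                        - 2j * Delta * eb2 * f_S * cmath.cos(phi)
                        + (eb2 * f_S)**2)
    return (eps_R / zeta_R).real

# far above the gap the density of states approaches its normal-state value
print(dos(50.0, Delta=0.3, Delta_S=1.0, eb1=0.1, eb2=0.2, phi=0.0))  # ~1.0
```

The barrier energies $\epsilon_{b1,2}$ enter as effective pair-breaking and proximity-induced terms, smearing the BCS singularity, which is the mechanism behind the subgap transport discussed below.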
Using Eq.(\ref{GKa}) enables one to find the non-equilibrium part of the matrix distribution function \begin{equation} \delta {\it \hat{f}}=({\it f}_0-n)+{\it f}\hat{\sigma}_z \label{df} \end{equation} which determines the anomalous part of $\hat{G}^K$: \begin{equation} \hat{G}_a^K=\hat{G}^R\delta {\it \hat{f}}-\delta {\it \hat{f}}\hat{G}^A \end{equation} From Eq.(\ref{GKa}) we find for the non-equilibrium parts of the distribution functions \begin{equation} {\it f}=\frac 1{4\nu (\zeta ^R+\zeta ^A)}\mbox{Tr}\hat{E}_a^K(1-\hat{G}^A\hat{G}^R) \end{equation} \begin{equation} \delta {\it f}_0=\frac 1{4\nu (\zeta ^R+\zeta ^A)}\mbox{Tr}\hat{E}_a^K(\hat{\sigma}_z-\hat{G}^A\hat{\sigma}_z\hat{G}^R) \end{equation} where $\nu = \mathop{\rm Re} g^R\;$is the density of states and $\delta {\it f}_0={\it f}_0-n$. Using Eq.(\ref{EKa}) for $\hat{E}_a^K$, the non-equilibrium parts of the distribution functions may be written in the following form \begin{eqnarray} \delta {\it f}_0 &=&a_{+}(n_{+}-n)+bn_{-} \label{dfa} \\ {\it f} &=&a_{-}n_{-}-b(n_{+}-n) \nonumber \end{eqnarray} where \[ a_{\pm }=\frac{\epsilon _{b1}M_{\pm }}{2\nu \mathop{\rm Im} \zeta ^R},\;\;b=\frac{\Delta \epsilon _{b1}\epsilon _{b2}}{\nu \mathop{\rm Im} \zeta ^R}\frac{\mathop{\rm Re} f_S^R}{\left| \zeta ^R\right| ^2}\sin \varphi \] \[ M_{\pm }=1-g^Rg^A\pm [\Delta ^2-2\epsilon _{b2}\Delta \mathop{\rm Im} f_S^R\cos \varphi +(\epsilon _{b2}\left| f_S^R\right| )^2]\frac 1{\left| \zeta ^R\right| ^2} \] We took into account that $\zeta ^A=-(\zeta ^R)^{*}$ and $g^A=-(g^R)^{*}.$ From the self-consistency relation (\ref{self}) we obtain the following system of equations for $\Delta \;$and $\varphi $, \begin{equation} \Lambda \Delta =\epsilon _{b2}(\alpha \cos \varphi -\beta _1\sin \varphi ), \label{sc1} \end{equation} \begin{equation} \beta _0\Delta =\epsilon _{b2}(\alpha \sin \varphi +\beta _1\cos \varphi ), \label{sc2} \end{equation} where \[ \Lambda =\ln (T/T_{c0})-\int_0^\infty d\epsilon \left( {\it f}_0(\epsilon)
\mathop{\rm Re} \frac 1{\zeta ^R(\epsilon )}-\frac{n(\epsilon )}\epsilon \right) \] \[ \alpha =-\int_0^\infty d\epsilon {\it f}_0(\epsilon )\mathop{\rm Im} \frac{f_S^R(\epsilon )}{\zeta ^R(\epsilon )} \] \[ \beta _k=\int_0^\infty d\epsilon {\it f}(\epsilon )\mathop{\rm Im} \frac{k-1+kif_S^R(\epsilon )}{\zeta ^R(\epsilon )}\;,\;k=0,1. \] Note that Eq.(\ref{sc2}) is the consequence of the current conservation law in the $x$-direction. Introducing the normalized order parameter $\delta =\Delta /\epsilon _{b2}$, one can reduce Eqs.(\ref{sc1}) and (\ref{sc2}) to the equivalent ones \begin{equation} \delta =\sqrt{\frac{\alpha ^2+\beta _1^2}{\Lambda ^2+\beta _0^2}}, \label{sc1a} \end{equation} \begin{equation} \exp (i\varphi )=\frac{\alpha \Lambda +\beta _0\beta _1+i(\alpha \beta _0-\Lambda \beta _1)}{\sqrt{\alpha ^2+\beta _1^2}\sqrt{\Lambda ^2+\beta _0^2}} \label{sc2a} \end{equation} From the self-consistency equations (\ref{sc1}) and (\ref{sc2}), or (\ref{sc1a}) and (\ref{sc2a}), it follows that a stationary solution for the order parameter exists at arbitrary $V$, and a transition to the ac Josephson effect (a time-dependent phase difference $\varphi $) does not occur with increasing voltage. In other words, the critical current of the $S/s\;$tunnel junction is absent in the considered mesoscopic system. Such a situation differs radically from that in a single $S/S\;$tunnel junction composed of two bulk superconductors. If at least one of the two superconductors has mesoscopic dimensions, it is important how it is connected to the conductors, and non-equilibrium states arising in the presence of the current play a significant role in this case.
In our system one of the important aspects of the non-equilibrium state is the quasiparticle charge imbalance determined by the distribution function ${\it f}(\epsilon )$ and, as a consequence, the gauge-invariant potential $\mu =\Phi + (\hbar/2e) \partial _t\chi $\ in the $s$ region, where $\Phi \;$is the electrical potential and $\chi $ is the order parameter phase. Under the assumption (\ref{c1}) the solution for the phase difference between the superconductors is stationary for arbitrary voltages. Therefore we can set $\chi =0$ in the $s$ region, so that $\mu =\frac 1e\int_0^\infty d\epsilon {\it f}(\epsilon )\nu (\epsilon )\;$coincides with the voltage between the superconductors: $\Phi \neq 0$ while $-(\hbar/2e) \partial _t\varphi =0.$ In other words, the Josephson relation between the frequency (equal to zero) and the voltage drop across the superconducting tunnel junction is violated in the structure under consideration. The current may be calculated at any point $x$; for example, at the $N/s$ interface we obtain \begin{equation} I=\frac 1{2eR_N}\int_{-\infty }^\infty d\epsilon \{F_{-}(\epsilon )n_{-}(\epsilon )+F_{+}(\epsilon )[n_{+}(\epsilon )-n(\epsilon )]\}\;, \label{Cur} \end{equation} where \begin{eqnarray*} F_{-}(\epsilon ) &=&(1+r)\nu (\epsilon )[1-a_{-}(\epsilon )],\;F_{+}(\epsilon )=\nu (\epsilon )b(\epsilon )(1+r)\;, \\ r &=&R_{b2}/R_{b1}=\epsilon _{b1}/\epsilon _{b2}\;. \end{eqnarray*} If the $S\;$electrode is a conventional BCS superconductor, \[ g_S^R(\epsilon )=f_S^R(\epsilon )\epsilon /\Delta _S=\epsilon /\sqrt{(\epsilon +i0)^2-\Delta _S^2}\;. \] Then for $\left| \epsilon \right| <\Delta _S$, $b(\epsilon )=0$ and $F_{+}(\epsilon )=0$, and at $\left| eV\right| <\Delta \;$at zero temperature the current reads \begin{equation} I=\frac 1{eR_N}\int_0^Vd\epsilon F_{-}(\epsilon )\;.
\label{Cur0} \end{equation} Note that the function $F_{-}(\epsilon )=F_{-}(\epsilon ;V)\;$depends on voltage through the voltage dependence of $\Delta $ and $\varphi$. It represents the transmission coefficient of the system, which determines the efficiency of Andreev reflection processes. Taking into account that $b(\epsilon )=0$ and assuming $\left| eV\right| <\Delta _S$, we find for the non-equilibrium parts of the distribution functions \begin{eqnarray} {\it f}(\epsilon ) &=&a_{-}(\epsilon )\mbox{sgn}(eV)\theta (\left| eV\right| -\left| \epsilon \right| )\;, \\ \delta {\it f}_0(\epsilon ) &=&-a_{+}(\epsilon )\mbox{sgn}(\epsilon )\theta (\left| eV\right| -\left| \epsilon \right| )\;. \end{eqnarray} Consider the case of a small critical temperature of the superconductor $s$, $T_{c0}/T_{cS}\ll 1$, and also assume that the following condition is fulfilled \begin{equation} \Delta ,\;\epsilon _{b1},\;\epsilon _{b2},eV\ll \Delta _S\;. \label{c3} \end{equation} In this case Eqs.(\ref{sc1a}) and (\ref{sc2a}) for $\Delta $ and $\varphi $ can be simplified and presented in the form (see Appendix II) \begin{equation} \delta =\sqrt{\frac{\alpha ^2+\beta _0^2}{\Lambda ^2+\beta _0^2}}, \label{sc3a} \end{equation} \begin{equation} \cos \varphi +i\sin \varphi =\frac{\alpha \Lambda -\beta _0^2+i\beta _0(\alpha +\Lambda )}{\sqrt{\alpha ^2+\beta _0^2}\sqrt{\Lambda ^2+\beta _0^2}}\;. \label{sc3b} \end{equation} From Eqs.(\ref{Cur0}) and (\ref{a-}) we find the current \begin{equation} \frac I{(\epsilon _{b1}+\epsilon _{b2})/eR_N}=2\Omega _\varphi \int_0^vdu\frac{\nu (u,\Omega _\varphi )}{u^2+r^2+\Omega _\varphi +\left| \zeta (u,\Omega _\varphi )\right| ^2}\;. \label{ncur} \end{equation} The I--V curves obtained by numerical calculations on the basis of Eqs.(\ref{sc3a}), (\ref{sc3b}) and (\ref{ncur}) are presented in Fig.2.
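The normalized current integral above can be evaluated by direct quadrature; a minimal sketch in dimensionless units (energies in units of $\epsilon_{b2}$), taking $\zeta(u,\Omega)=[(u+ir)^2-\Omega]^{1/2}$ and $\nu=\mathop{\rm Re}[(u+ir)/\zeta]$ as in the zero-bias formulas further below, and treating $\Omega_\varphi$ as a fixed parameter instead of solving the self-consistency equations, so the numbers are purely illustrative:

```python
import cmath

def zeta(u, r, Omega):
    # zeta(u, Omega) = sqrt((u + i r)^2 - Omega); principal branch assumed
    return cmath.sqrt((u + 1j * r)**2 - Omega)

def nu(u, r, Omega):
    # density of states nu = Re[(u + i r)/zeta]
    return ((u + 1j * r) / zeta(u, r, Omega)).real

def current(v, r, Omega, n=2000):
    """Normalized current I e R_N / (eps_b1 + eps_b2) at fixed Omega_phi,
    computed with the trapezoidal rule over 0 <= u <= v."""
    if v == 0.0:
        return 0.0
    du = v / n
    total = 0.0
    for k in range(n + 1):
        u = k * du
        f = nu(u, r, Omega) / (u**2 + r**2 + Omega + abs(zeta(u, r, Omega))**2)
        total += f * (0.5 if k in (0, n) else 1.0)
    return 2.0 * Omega * total * du

print(current(1.0, r=0.5, Omega=1.2))   # positive subgap current
```

The full I--V curve of Fig.2, including the region of negative differential conductance, additionally requires recomputing $\Omega_\varphi(V)$ from the self-consistency equations at each voltage, which this sketch deliberately omits.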
One can see that (as a consequence of the order parameter suppression in the $s\;$region) the differential conductance becomes negative with growing voltage. In general the solution of Eqs.(\ref{sc3a}), (\ref{sc3b}) and the current can only be determined numerically, because the formulas are rather complicated. Nevertheless the zero-bias conductance ${\it g}_0={\it G}(0)/{\it G}_N\;$can be found from Eq.(\ref{ncur}), where ${\it G}(V)=dI/dV$. It is given by \begin{equation} {\it g}_0(\delta ,r)=\frac{(1+r)r(1+\delta )^2}{[r^2+(1+\delta )^2]^{3/2}} \label{g0} \end{equation} where according to Eqs.(\ref{sc3a}) and (\ref{sc1}) $\delta =\Delta /\epsilon _{b2}\;$is defined by the equation \begin{equation} (\delta +1)\ln \frac{[r+\sqrt{r^2+(\delta +1)^2}]}{\delta _0}=\ln \frac 4{t_c} \label{D0} \end{equation} It follows from Eq.(\ref{D0}) that (under the condition (\ref{c3})) the proximity effect is strong, i.e. $\Delta \gg \Delta _0$. Moreover \[ \frac \Delta {\Delta _0}\rightarrow \infty \text{\ at}\;\Delta _0\rightarrow 0\;, \] i.e. due to the proximity effect, an anomalously large enhancement of the order parameter occurs even for very weak pairing electron interaction in the $s\;$region. Assuming $\delta \ll 1$, one can obtain from (\ref{D0}) that \begin{equation} \frac \Delta {\Delta _0}=\frac 1{\delta _0}\frac{\ln \frac{4\Delta _S}{\epsilon _{b2}(r+\sqrt{r^2+1})}}{\ln \frac{(r+\sqrt{r^2+1})}{\delta _0}}\;.
\label{asr} \end{equation} This expression is valid for very small $\delta _0$, satisfying the condition \[ \ln \frac{(r+\sqrt{r^2+1})}{\delta _0}\gg \ln \frac{4\Delta _S}{\epsilon _{b2}(r+\sqrt{r^2+1})}\; \] which is fulfilled provided $\delta _0\ll (\epsilon _{b1}+\epsilon _{b2})^2/\Delta _S^2\ll 1$. In particular, if $\epsilon _{b1}+\epsilon _{b2}\sim 10^{-2}\Delta _S$, the requirement $\delta _0\ll 10^{-4}$ means that (\ref{asr}) is valid provided $T_{c0}\;$is anomalously small, $T_{c0}\ll 10^{-6}T_{cS}$; then $\Delta \gg 10^4\Delta _0$. It can be seen from condition (\ref{c3}) that one can ignore the presence of the order parameter in the $s$ region ($\delta \ll 1$) only if the pairing interaction in the $s$ region is very weak, i.e. $T_{c0}\ll \epsilon _{b2}(\epsilon _{b1}+\epsilon _{b2})^2/\Delta _S^2$. The zero-bias conductance as a function of $t_c$ is shown in Fig.3 for different values of the parameter $r=R_{b2}/R_{b1}$ (with $\epsilon _{b2}=0.05\Delta _S$). One can see that the normalized conductance may be both smaller and bigger than unity. In particular, from (\ref{g0}) we find that for $r>1/\sqrt{2}\;$the maximum value of the conductance corresponds to $\delta =\delta _m$, where $(1+\delta _m)=\sqrt{2}r$, and Eq.(\ref{g0}) gives \[ {\it g}_{0\max }=\frac 2{3\sqrt{3}}(1+r)\;. \] From Eq.(\ref{D0}) we find that the maximum conductance is realized when the critical temperature $T_{c0}=T_{c0}^m,$ where \begin{equation} T_{c0}^m=4T_{cS}\left[ \frac{\epsilon _{b1}(1+\sqrt{3})}{4T_{cS}}\right] ^{1/(1-1/\sqrt{2}r)}\;.
\label{Tcm} \end{equation} Eq.(\ref{Tcm}) is applicable for $r\;$satisfying the condition $T_{c0}^m\ll T_{cS}$; in particular it holds for $r>3\sqrt{3}/2-1$, which corresponds to ${\it g}_{0\max }>1.$ At low temperatures $T\ll \Delta _S\;$for the zero-bias conductance we find from (\ref{Cur}) \begin{equation} {\it g}(t)=2(1+r)\Omega (t)\int_0^\infty \frac{du}{\cosh ^2u}\frac{\mbox{Re}(2tu+ir)/\zeta (2tu,\Omega (t))}{(2tu)^2+r^2+\Omega (t)+\left| \zeta (2tu,\Omega (t))\right| ^2} \label{g(t)} \end{equation} where $t=T/\epsilon _{b2},\;\Omega (t)=(1+\delta (t))^2\;$and $\zeta (u,\Omega )=[(u+ir)^2-\Omega ]^{1/2}$; the function $\delta (t)\;$is defined by the equation \begin{equation} \delta =\frac{\alpha (\Omega ,t)}{\Lambda (\Omega ,t)}\; \label{d(t)} \end{equation} with \[ \alpha (\Omega ,t)=\ln \frac{4\Delta _S}{\left( \sqrt{\Omega +r^2}+r\right) \epsilon _{b2}}-\int_0^\infty du\frac 2{\exp u+1}\mbox{Re}\frac 1{\zeta (tu,\Omega )} \] \[ \Lambda (\Omega ,t)=\ln \frac{\sqrt{\Omega +r^2}+r}{\delta _0}+\int_0^\infty \frac{du}{\cosh ^2u}\ln \frac{\left| \zeta (2tu,\Omega )+2ut+ir\right| }{\sqrt{\Omega +r^2}+r} \] The results of numerical calculations on the basis of Eqs.(\ref{g(t)}), (\ref{d(t)}) are presented in Figs. 4 and 5. We see that the conductance may be a non-monotonic function of temperature, which radically differs from the corresponding dependencies occurring in the case of a normal mesoscopic region with $T_{c0}=0$, shown by dashed lines in Figs. 4 and 5. Thus a weak pairing electron interaction results in significant qualitative (for $r\geq 1$) and quantitative changes of the conductance dependence with respect to the case of a structure with a normal mesoscopic region.
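The zero-bias conductance formula and the transcendental equation defining $\delta$ given above can be solved numerically by simple bisection; a minimal sketch in which the parameter values $r$, $\delta_0=\Delta_0/\epsilon_{b2}$ and $t_c$ are illustrative assumptions:

```python
import math

def g0(delta, r):
    # normalized zero-bias conductance g0(delta, r) from the formula above
    x = 1.0 + delta
    return (1.0 + r) * r * x**2 / (r**2 + x**2)**1.5

def solve_delta(r, delta0, t_c, lo=0.0, hi=50.0):
    """Solve (delta+1) ln[(r + sqrt(r^2+(delta+1)^2))/delta0] = ln(4/t_c)
    for delta by bisection (assumes a root exists in the bracket)."""
    def f(delta):
        x = 1.0 + delta
        return (x * math.log((r + math.sqrt(r**2 + x**2)) / delta0)
                - math.log(4.0 / t_c))
    assert f(lo) < 0.0 < f(hi), "no root in bracket for these parameters"
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

r, delta0, t_c = 1.0, 2e-3, 1e-4   # assumed illustrative parameters
d = solve_delta(r, delta0, t_c)
print(d, g0(d, r))
```

As a consistency check, at the maximum $(1+\delta_m)=\sqrt{2}\,r$ this `g0` reproduces the analytic value ${\it g}_{0\max }=2(1+r)/3\sqrt{3}$ quoted earlier.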
Fig.6 shows that if $r<1$, the conductance may be a non-monotonic function of temperature even at $T_{c0}=0$. We see that the pairing electron interaction results in a shift of the position of the conductance maximum to higher temperatures, together with an increase in the width of the maximum. The latter is due to the slow decrease of the order parameter with increasing temperature. Consider the case when the resistance of the barrier at the $N/s$ interface is small enough ($r\gg 1$). To be more exact, we suppose that \begin{equation} \epsilon _{b1}\gg \Delta \;,\;\epsilon _{b2}\;. \end{equation} i.e. $r\gg \delta .$ In this case the energy gap is absent in the superconductor $s$ due to the strong pair-breaking effect of the normal electrode. If the condensate Green's function is small, all the expressions are significantly simplified, and from (\ref{ncur}) at $T,eV\ll \Delta _S\;$we find for the current \begin{equation} I=\frac{(\epsilon _{b1}+\epsilon _{b2})}{eR_N}(\delta ^2+2\delta \cos \varphi +1)\mathop{\rm Im} \Psi (\Gamma +ieV/2\pi T)\;, \label{I(V,T)} \end{equation} where $\Psi (z)\;$is the digamma function, $\Gamma =1/2+\epsilon _{b1}/2\pi T$, and $\delta \;$and $\cos \varphi \;$are given by Eqs.(\ref{sc3a}), (\ref{sc3b}) with \begin{equation} \Lambda =\ln (T/T_{c0})+\mathop{\rm Re} \Psi (\Gamma +ieV/2\pi T)-\Psi (1/2),\;\;\beta _0=\mathop{\rm Im} \Psi (\Gamma +ieV/2\pi T), \label{coef} \end{equation} \[ \alpha =\Delta _S\int_0^{\Delta _S}d\epsilon \frac{\epsilon n_{+}(\epsilon )}{(\epsilon ^2+\epsilon _{b1}^2)\sqrt{\Delta _S^2-\epsilon ^2}}\;. \] At zero temperature we obtain from Eqs.(\ref{I(V,T)}), (\ref{coef}) \begin{equation} \frac I{(\epsilon _{b1}+\epsilon _{b2})/eR_N}=(\delta ^2+2\delta \cos \varphi +1)\arctan (v/r)\;, \end{equation} where Eqs.(\ref{coef}) reduce to \[ \Lambda =\ln \frac{2\sqrt{v^2+r^2}}{\delta _0},\;\;\alpha =\ln \frac{2\delta _0}{t_c\sqrt{v^2+r^2}}\;,\;\beta _0=\arctan (v/r)\;.
\] The I--V curves computed for this case are shown in Fig.7. At small voltages we have $IR_N={\it g}_0V$, where the normalized conductance is given by the expression \[ {\it g}_0=\frac{\ln ^2\frac 4{t_c}}{r\ln ^2\frac{2r}{\delta _0}}\;. \] At large voltages, $\epsilon _{b1}\ll eV\ll \Delta _S$, the normalized current has the form \[ \frac I{(\epsilon _{b1}+\epsilon _{b2})/eR_N}=\frac{\ln ^2(2\Delta _S/eV)+\ln ^2(2eV/\Delta _0)+2\sqrt{[\ln ^2(2\Delta _S/eV)+1][\ln ^2(2eV/\Delta _0)+1]}}{\ln ^2(2eV/\Delta _0)+1}\;. \] At $(2eV)^2>\Delta _S\Delta _0$ this function slowly decreases with increasing voltage. \subsection{Quasiparticle interferometer} Consider a quasiparticle interferometer composed of three tunnel junctions (see Fig.1b), in which the phase difference $\varphi \;$between two different $S/s\;$interfaces is set by an external magnetic field. A similar system, in which the $S$\ and $N$ electrodes were in contact with a normal metal, was considered in \cite{VZ,Z1,N,r10}. Suppose that the two barriers at the $S/s$ interfaces are symmetrical, with resistances equal to $R_{b2}$, and that the resistance of the barrier at the $N/s$ interface equals $R_{b1}$. We again assume that the resistance of the system is determined by the barriers, so that in the normal state it is given by the expression $R_N=R_{b2}/2+R_{b1}$. Assuming that the width of the superconductor $s$ is small, $W\ll \sqrt{\hbar D/\Delta }$, one can neglect the spatial variation of the Green's function. Then Eq.(\ref{A1}) is valid with \begin{equation} \check \Sigma =i\epsilon _{b2}\check G_{S+}+i\epsilon _{b2}\check G_{S-}+i\epsilon _{b1}\check G_N\;, \label{Sig} \end{equation} where the Green's functions $\check G_{S\pm }\;$correspond to the phases $\pm \varphi /2$, and $\epsilon _{bj}=\rho Dw_j/2dWR_{bj\Box }$, with $w_1=W$\ the width of the $s$ region and $w_2$ the width of the $S/s$ interfaces.
As in the previous cases we find \begin{equation} \hat G^\mu (\epsilon )=g^\mu (\epsilon )\hat \sigma _z+\hat f^\mu (\epsilon )=\frac{\epsilon ^\mu (\epsilon )\hat \sigma _z+\Delta ^\mu (\epsilon )i\hat \sigma _y}{\zeta ^\mu (\epsilon )}\;, \label{Gm1} \end{equation} where \begin{eqnarray} \zeta ^\mu (\epsilon ) &=&\{(\epsilon ^\mu (\epsilon ))^2-[\Delta ^\mu (\epsilon )]^2\}^{1/2}\;, \nonumber \\ \epsilon ^{R,A}(\epsilon ) &=&\epsilon +2i\epsilon _{b2}g_S^{R,A}(\epsilon )\pm i\epsilon _{b1},\;\Delta ^\mu (\epsilon )=\Delta +2i\epsilon _{b2}f_S^\mu (\epsilon )\cos (\varphi /2)\;. \nonumber \end{eqnarray} From the self-consistency relation (\ref{self}) at zero voltage between the $S\;$and $N$ electrodes the following equation for $\Delta \;$can be found \begin{equation} \Lambda \Delta =\alpha =-2\epsilon _{b2}\cos (\varphi /2)\int_0^\infty d\epsilon n(\epsilon )\mathop{\rm Im} \frac{f_S^R(\epsilon )}{\zeta ^R(\epsilon )}\;. \label{sc1aa} \end{equation} where \[ \Lambda =\ln (T/T_{c0})-\int_0^\infty d\epsilon \left( \mathop{\rm Re} \frac 1{\zeta ^R(\epsilon )}-\frac 1\epsilon \right) n(\epsilon ) \] At $T=0$, assuming as before that $\Delta _0,\epsilon _{b1,2}\ll \Delta _S$, we obtain \begin{equation} \Lambda =\ln \frac{\sqrt{\bar \Delta _\varphi ^2+\epsilon _{b1}^2}+\epsilon _{b1}}{\Delta _0}\;,\; \label{Laa} \end{equation} \begin{equation} \alpha =2\epsilon _{b2}\cos (\varphi /2)\ln \frac{4\Delta _S}{\epsilon _{b1}+\sqrt{\epsilon _{b1}^2+\bar \Delta _\varphi ^2}}\;\;, \label{ao} \end{equation} where $\bar \Delta _\varphi =\Delta +2\epsilon _{b2}\cos (\varphi /2).$\ It is convenient to introduce the function $\bar \delta _\varphi$ via $\Delta /2\epsilon _{b2}=\bar \delta _\varphi \cos (\varphi /2)$; then from Eqs.(\ref{sc1aa}) and (\ref{ao}) the following equation for $\bar \delta _\varphi $ can be found \begin{equation} (\bar \delta _\varphi +1)\ln \frac{\sqrt{(\bar \delta _\varphi +1)^2\cos ^2(\varphi /2)+r^2}+r}{\delta _0}=\ln
\frac 4{t_c}\;, \end{equation} where $\delta _0=\Delta _0/2\epsilon _{b2}$. After calculations similar to those carried out in Refs.\cite{VZ,Z1,N,r10}, we obtain for the zero-bias conductance of a symmetrical quasiparticle interferometer at zero temperature \begin{equation} \frac{{\it G}(0,\varphi )}{{\it G}_N}=\frac{(1+r)r(1+\bar \delta _\varphi )^2\cos ^2(\varphi /2)}{[r^2+\cos ^2(\varphi /2)(1+\bar \delta _\varphi )^2]^{3/2}}\;. \label{G0f} \end{equation} Note that at $\varphi =0$ Eq.(\ref{G0f}) is identical to Eq.(\ref{g0}). Thus the amplitude of the conductance oscillations may exceed ${\it G}_N$ if the ratio $r$ is large enough. For $r^2\gg (\delta +1)^2$ (when the energy gap is absent in the $s$ region), Eq.(\ref{G0f}) yields \begin{equation} \frac{{\it G}(0,\varphi )}{{\it G}_N}=\frac{\ln ^2(4/t_c)}{r\ln ^2(2r/\delta _0)}\cos ^2(\varphi /2)\;. \label{Gf} \end{equation} Since $4/t_c\gg 2r/\delta _0\gg 1\;(\Delta _S\gg \epsilon _{b1})$, the amplitude of the conductance oscillations appears to be much larger than in the case of an interferometer with a normal mesoscopic region \cite{VZ,Z1,N,r10}. \subsection{Conclusion} In conclusion, we have studied phase-coherent diffusive transport through different tunnel structures with $S$ and $N$ electrodes coupled by a mesoscopic superconductor $s$. Our study has centered upon the case of a weak pairing electron interaction in the $s$ region, which defines a critical temperature $T_{c0}\ll T_{cS}$. If the dwell time in the $s$ region, determined by the tunneling processes, $\tau _b$, is small or comparable with $\hbar /\Delta _0$, the proximity effect is strong, i.e. the order parameter $\Delta \gg \Delta _0\sim T_{c0}$. As a consequence, the subgap conductance of an $N$-$s$-$S$ tunneling structure and of a quasiparticle tunneling interferometer depends strongly upon the pairing electron interaction in the $s$ region when $T_{c0}\ll T_{cS}$.
Depending upon the ratio of the barrier resistances, the value of the subgap conductance, determined by Andreev reflection processes, may be either larger or smaller than the conductance of these structures in the normal state. We have shown that even a weak pairing electron interaction may result in significant qualitative (in the case $r\geq 1$) and quantitative changes of the temperature dependence of the conductance with respect to the case of structures with a normal mesoscopic region. The subgap current depends non-monotonically upon the voltage, due to the suppression of the order parameter in the mesoscopic superconductor.\\ \par {\bf Acknowledgments. }The authors (A.V.Z., A.F.V.) acknowledge the financial support of CRDF project RPA-165. A.V.Z. acknowledges the financial support of the Royal Society and of the Russian Fund for Fundamental Research (project 96-02-18613). A.V.Z. is grateful to C.J. Lambert for hospitality during his visit to Lancaster, where this work was completed. \newpage \section{appendix} \renewcommand{\theequation}{A\arabic{equation}} \setcounter{equation}{0} Integrating Eq.(\ref{E1}) over $x$ and taking into account the boundary conditions, we obtain the following equation for $\check{G}$ (in the following $\check{G}$ denotes the function averaged over the length $d$) \begin{equation} \lbrack \check{E},\check{G}]=\check{0}, \label{A1} \end{equation} where \[ \check{E}=\epsilon \check{\sigma}_z+\check{\Delta}+\check{\Sigma}, \] \begin{equation} \check{\Sigma}=i\epsilon _{b1}\check{G}_N+i\epsilon _{b2}\check{G}_S\;. \end{equation} When writing equations (\ref{E1}) and (\ref{A1}) we disregarded inelastic collisions due to condition (\ref{c1}). Let the potential of the superconducting electrode be zero and the potential of the normal electrode be equal to $V$, so that \begin{equation} \hat{G}_N^{R,A}=\pm \hat{\sigma}_z,\;\hat{G}_N^K=(1+\hat{\sigma}_z)n(\epsilon +eV)+(1-\hat{\sigma}_z)n(\epsilon -eV), \end{equation} \begin{equation}
\hat{G}_S^{R,A}=g_S^{R,A}\hat{\sigma}_z+\hat{f}_S^{R,A},\;\hat{G}_S^K=(\hat{G}_S^R-\hat{G}_S^A)n(\epsilon ). \end{equation} When finding the solution of (\ref{A1}) it is convenient to let the phase of the order parameter in the $s$ layer equal zero and the phase of the superconducting electrode $S$ be equal to the phase difference $\varphi $ which arises in the presence of the current. Then from (\ref{A1}) and (\ref{nc}) we find the expressions for the retarded and advanced Green's functions given in Eq.(\ref{Gm}). The equation for $\hat{G}^K$ has the form \begin{equation} \hat{E}^R\hat{G}^K-\hat{G}^K\hat{E}^A=\hat{G}^R\hat{E}^K-\hat{E}^K\hat{G}^A, \label{EGK} \end{equation} where $\hat{E}^K=\hat{\Sigma}_1^K+\hat{\Sigma}_2^K$. It is useful to take into account that \begin{equation} \hat{E}^{R,A}=\zeta ^{R,A}\hat{G}^{R,A}. \label{ERA} \end{equation} Then using Eq.(\ref{nc}), we have \begin{equation} \hat{G}^R\hat{G}^K+\hat{G}^K\hat{G}^A=\hat{0}. \label{nc2} \end{equation} Therefore from (\ref{ERA}) it follows that $\hat{E}^R\hat{G}^K-\hat{G}^K\hat{E}^A=(\zeta ^R+\zeta ^A)\hat{G}^R\hat{G}^K$, and from (\ref{EGK}) we find for the Keldysh functions \begin{equation} \hat{G}^K=(\hat{E}^K-\hat{G}^R\hat{E}^K\hat{G}^A)\frac 1{(\zeta ^R+\zeta ^A)}. \label{GK2} \end{equation} \section{appendix} Here we present simplified formulas for $\Omega $, $\alpha $ and $\beta $ by taking into account the following identity, which may be readily proved for small energies $\epsilon \ll \Delta _S$: \[ (2\nu \mathop{\rm Im}\zeta ^R)(u,\Omega _\varphi )=r\frac{(u^2+r^2+\Omega _\varphi )}{\left| \zeta (u,\Omega _\varphi )\right| ^2}+r. \] Using the notations $u=\epsilon /\epsilon _{b2},\;\zeta (u,\Omega )=[(u+ir)^2-\Omega ]^{1/2},\;\Omega _\varphi =\delta ^2+2\delta \cos \varphi +1$, one can obtain the expressions for $a_{\pm }$ from (\ref{dfa}). \begin{eqnarray} a_{+} &=&1, \label{a+} \\ a_{-} &=&1-\frac{2\Omega _\varphi }{u^2+r^2+\Omega _\varphi +\left| \zeta (u,\Omega _\varphi )\right| ^2}.
\label{a-} \end{eqnarray} At zero temperature and $eV\ll \Delta _S$, one has \begin{equation} \Lambda =\Lambda _0-\int_0^{eV}d\epsilon \,\delta {\it f}_0(\epsilon )\mathop{\rm Re}\frac 1{\zeta ^R(\epsilon )}, \label{La1} \end{equation} where \[ \Lambda _0=\ln \frac{\sqrt{\Omega _\varphi +r^2}+r}{\delta _0}, \] \[ \delta _0=\Delta _0/\epsilon _{b2},\;\Delta _0=1.76T_{c0}, \] $\Delta _0$ being the gap of the superconductor $s$ at $T=0$ in the absence of pair-breaking factors and the proximity effect ($\epsilon _{bj}=0$). Introducing the normalized voltage $v=Ve/\epsilon _{b2}$, from Eq.(\ref{La1}) we find expressions for $\alpha $ and $\beta $ (see Eqs.(\ref{sc1}) and (\ref{sc2})): \begin{equation} \Lambda =\ln \frac{\left| \zeta (v,\Omega _\varphi )+v+ir\right| }{\delta _0}, \end{equation} \[ \alpha =\ln \frac{4\delta _0}{\left| \zeta (v,\Omega )+v+ir\right| t_c}, \] \[ \beta _0=-\beta _1=\frac r2\int_0^vdu\frac{M_{-}(u,\Omega _\varphi )}{\nu (u,\Omega _\varphi )\left| \zeta (u,\Omega _\varphi )\right| ^2}. \] \newpage
\section{Introduction} With the CERN LHC program underway, we are seeing an exponential growth of data in the High-Energy Physics (HEP) field. By the end of Run II, the CERN experiments were already operating in the petabyte (PB) regime, producing $O(100)$PB of data each year, and the new HL-LHC program will bring us to the exabyte scale. The usage of Machine Learning (ML) in HEP is on the rise too. It has been successfully used in online and offline reconstruction programs, and there is strong interest in applying it to detector simulation, object reconstruction, identification, MC generation, and beyond \cite{MLCWP}. But the main obstacle to using ML frameworks and bringing CS expertise in ML to HEP lies in the difference between the data formats used by ML practitioners and HEP users. In particular, the former mostly rely on flat data representations, e.g. the CSV or NumPy data formats, while HEP data are stored in tree-based data structures in the ROOT \cite{ROOT} data-format. As was pointed out in the HEP ML Community White Paper \cite{MLCWP}, the usage of the ROOT data-format outside of HEP practically does not exist. This fact creates an artificial gap between the ML and HEP communities. The recent kaggle challenges, e.g. the ATLAS challenge for identification of the Higgs boson \cite{kaggleATLAS} and the cross-experiment tracking ML challenge \cite{kaggletracking}, were specifically adapted (in terms of input datasets) and presented to ML competitors in the CSV data format, whereas within the HEP community these datasets are easily accessible in the ROOT data-format, without any pre-processing or transformation. To close this gap, we present in this paper a novel approach to using HEP ROOT data natively for training purposes, reading ROOT files from remote storage via XrootD, and presenting pre-trained models as a service accessible via the HTTP protocol.
Such a Machine Learning as a Service (MLaaS) modular design opens up the possibility to train ML models on PB-scale datasets remotely accessible from the Worldwide LHC Computing Grid (WLCG) sites, without requiring data transformation or data locality. \section{Related work and solutions} Machine Learning as a Service is a well-known concept in industry, and major IT companies offer such solutions to their customers. For example, Amazon ML, Microsoft Azure ML Studio, Google Prediction API and ML engine, and IBM Watson are good examples of MLaaS, see \cite{MLaaScomparison}. Usually, MLaaS is used as an umbrella term for various ML tasks such as data pre-processing, model training and evaluation, and serving prediction results to clients through REST APIs. Even though these services can provide very good results and interfaces, most of the time they are designed to cover standard use-cases. For instance, data are expected to be fed in flat data formats. All data preprocessing operations are performed automatically: a concrete service identifies which fields are categorical and which are numerical, and it does not ask the user to choose the methods of further data preprocessing. The model predictions are limited to well-established patterns, such as binary classification, multi-class classification, and regression, although quite often MLaaS providers also offer pre-defined models that cover standard use-cases, e.g. image classification. Obviously, all of them are designed to make a profit by charging customers for the number of predictions they want to make, or by using a tiered structure based on the number of calls placed by clients. In HEP, usage of these services is quite limited, though, for several reasons. Among them, the HEP ROOT data-format can't be used directly in any of these services, and pre-processing operations may be more complex than what service providers offer.
For instance, the two HEP kaggle challenges \cite{kaggleATLAS, kaggletracking} used custom HEP metrics for the evaluation procedure, which are not available in out-of-the-box industry solutions, and the ML workflow in both competitions was far from trivial, e.g. the pre-processing step required writing custom code to include event selection and perform other steps. Therefore, after rounds of evaluations we found that the provided solutions are most often ineffective for HEP use-cases (cost- and functionality-wise), even though the CERN OpenLab initiative and others continue close cooperation with almost all of the aforementioned service providers. At the same time, various R\&D activities within HEP are underway. For example, the hls4ml project \cite{hls4ml} targets ML inference in FPGAs, while the SonicCMS project \cite{SonicCMS} is designed as Services for Optimal Network Inference on Coprocessors. Both are designed for optimization of the inference phase rather than targeting the whole ML pipeline from reading data to training and serving predictions. At the moment we are unaware of any final product which can be used as MLaaS in HEP. The novelty of the proposed solution is threefold. First, we propose to use HEP ROOT files directly, either locally or remotely, without requiring data transformation to a flat data format. Second, the training layer can use external third-party ML frameworks, from well-established ML libraries, e.g. scikit-learn, to Deep-Learning (DL) frameworks such as TensorFlow, PyTorch, and others. Third, the inference phase is provided via the RESTful APIs of TensorFlow as a Service (TFaaS), similar to industry solutions. The latter does not require significant changes to existing HEP infrastructures, frameworks, and applications due to the usage of the HTTP protocol between clients and TFaaS server(s).
\section{MLaaS architecture}\label{Architecture} A typical ML workflow consists of several steps: acquire the data necessary for training, use an ML framework to train the model, and utilize the trained model for predictions. This workflow can be further abstracted as data streaming, data training, and inference phases. Each of these steps can be either tightly integrated into the application design or composed and used individually. The choice is mostly driven by particular use-cases. In HEP we can define these layers as follows, see Fig. \ref{fig:MLaaSArchitecture}: \begin{figure} \centering \includegraphics[width=0.8\textwidth]{architecture.pdf} \caption{MLaaS architecture diagram representing three independent layers: a data streaming layer to read local or remote ROOT files, a data training layer to feed tree-based HEP data into an ML framework, and a data inference layer via TensorFlow as a Service.} \label{fig:MLaaSArchitecture} \end{figure} \begin{itemize} \item {\bf Data Streaming Layer} is responsible for reading local and/or remote ROOT files, and streaming data batches upstream to the Data Training Layer. The implementation of this layer requires the ROOT I/O layer with support for remote file access; \item {\bf Data Training Layer} represents a thin wrapper around standard ML libraries such as TensorFlow, PyTorch, and others. It reads data from the Data Streaming Layer in chunks, transforms them from the ROOT TTree based representation to the format suitable for the underlying ML framework, and uses them for training purposes; \item {\bf Data Inference Layer} refers to the inference part of pre-trained models and can be either tightly integrated within the underlying HEP framework or represented as a Service (aaS). \end{itemize} Even though the implementation of these layers can differ from one experiment to another (or in other scientific domains using ROOT files), it can be easily generalized and become part of the foundation for a generic MLaaS framework.
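The three layers above compose into a simple pipeline. The following Python sketch illustrates this composition only: the fake event reader stands in for the uproot/XrootD-backed streaming layer, and the dummy model stands in for a real ML framework wrapper (none of these names belong to the actual MLaaS4HEP API):

```python
def read_events(fname, n=2500):
    """Stand-in for a ROOT reader (uproot over XrootD in the real framework)."""
    return [[float(i % 7), float(i % 3)] for i in range(n)]

def stream_batches(files, batch_size=1000):
    """Data Streaming Layer: yield fixed-size event batches, file by file."""
    for fname in files:
        events = read_events(fname)
        for i in range(0, len(events), batch_size):
            yield events[i:i + batch_size]

class DummyModel:
    """Data Training Layer stand-in for a TensorFlow/PyTorch wrapper."""
    def __init__(self):
        self.n_seen = 0

    def fit_batch(self, batch):
        # a real wrapper would transform the batch and call the ML framework here
        self.n_seen += len(batch)

def train(files):
    model = DummyModel()
    for batch in stream_batches(files):
        model.fit_batch(batch)
    return model  # in MLaaS the trained model is then uploaded to the inference service

model = train(["file1.root", "file2.root"])
```

The point of the decomposition is that each stage can be swapped independently: a different reader, a different ML back-end, or a different serving strategy.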
Further, we discuss the individual layers and outline the particular sets of problems which should be addressed in their implementation. \subsection{Data Streaming Layer}\label{Streaming} The data streaming layer performs the simple task of streaming data from local or remote data storage. Originally, reading ROOT files was mostly possible from C++ frameworks, but recent development of ROOT I/O now allows easy access to ROOT data from Python, and supports the XrootD protocol for remote file access. The main development was done in the uproot \cite{uproot} framework backed by the DIANA-HEP initiative \cite{DIANAHEP}. The uproot library uses NumPy \cite{NumPy} calls to rapidly cast data blocks in the ROOT file as NumPy arrays, and provides integration with the XrootD protocol \cite{XrootD}. Among the implemented features, it allows partial reading of ROOT TBranches, non-flat TTrees, non-TTree histograms, and more. It relies on data caching and parallel processing to achieve high throughput. In our benchmarks we were able to read HEP events at the level of $\sim O(100)-O(1000)$kHz from local as well as remote storage\footnote{Speed varies based on many factors, including caching, type of storage, and network bandwidth.}. In our implementation of MLaaS, see Sect. \ref{Prototype}, this layer was composed as a Data Generator which is capable of reading local or remote file(s) in batches of a pre-defined size. The batch size can be easily fine-tuned based on the complexity of the event and the available bandwidth. The output of the Data Generator is a NumPy array with flat and Jagged Array attributes, see the next Section for further discussion. \subsection{Data Training Layer}\label{Training} This layer is required to encapsulate HEP data and present it to the ML framework used by the application. The main obstacle here is the non-flat representation of HEP data, which ML frameworks cannot consume directly.
In particular, the ROOT data-format can be represented in so-called Jagged Arrays\footnote{A Jagged Array is an array of arrays of which the member arrays can be of different sizes.}, see Fig. \ref{fig:JaggedArray}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{JaggedArray.pdf} \caption{Jagged Array data representation. It consists of flat attributes followed by Jagged attributes whose dimensions vary event by event.} \label{fig:JaggedArray} \end{figure} The HEP tree-based data representation is optimized for data storage but is not directly suitable for ML frameworks. Therefore a certain data transformation is required to feed tree-based data structures into an ML framework as flat data structures. We explored two possible transformations: a vector representation with padded values, see Fig. \ref{fig:JaggedArray2Vector}, and a matrix representation in one of several possible phase spaces, see Fig. \ref{fig:JaggedArray2Matrix}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{JaggedArray2Vector.pdf} \caption{A vector representation of a Jagged Array with padded values.} \label{fig:JaggedArray2Vector} \end{figure} The idea of the vector representation approach is to identify the dimensionality of the Jagged Array attributes via a one-time pass across the data, and then compose the final vector with sufficient allocation for the Jagged Array attribute values based on their dimensionality. If in a certain event a Jagged Array attribute is shorter than its dimensionality, padded values are used. For instance, a physics event is composed of a set of particles. A priori we may not know how many particles can be created in an event, and therefore we don't know how much space we need to allocate for particle attributes, even though each attribute has a fixed size, e.g. particle momentum can be represented by three numerical values ($p_x$, $p_y$ and $p_z$).
However, knowing the distributions of the particles in all events of a certain physics dataset allows us to choose the dimensionality of their Jagged Array attributes. For instance, we can run an MC process and identify how many electrons per event we may have. The maximum number of electrons in this distribution will represent the dimensionality of the corresponding Jagged Array attributes. Using these dimensionality numbers, we can represent an event as a flat vector of a certain size. The allocated values of the Jagged Array attributes will vary event by event, with extra slots of Jagged Array attributes filled with pre-defined pad values, e.g. NaN\footnote{Since any numerical value can occur, e.g. an angle distribution may contain negative, positive, and zero values, the only choice for the padded value is NaN.}. Additionally, the one-time pass across a series of events can be used to determine the min, max, and mean values of the Jagged Array attributes, which can later be used for normalization purposes. The matrix representation of a Jagged Array, see Fig. \ref{fig:JaggedArray2Matrix}, can use a certain phase space if it is present in the dataset. For example, spatial coordinates or attribute components are often part of HEP datasets, and therefore can be used for Jagged Array mappings. This approach resolves the ambiguity of the vector representation (in terms of dimensionality choice), but it has its own problem with the choice of granularity of the phase-space matrix. For example, if the X-Y phase space (where X and Y refer to an arbitrary pair of attributes) is used in the matrix representation, we do not know a priori the cell size in this space. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{JaggedArray2Matrix.pdf} \caption{A matrix representation of a Jagged Array into a certain phase space, e.g.
eta-phi.} \label{fig:JaggedArray2Matrix} \end{figure} A choice of matrix granularity may introduce a collision problem with Jagged Array attribute values, e.g. if two particles have the same phase-space values, i.e. two particles point into the same cell in X-Y space. Such ambiguity may be resolved either by reducing the matrix granularity or by adding other phase spaces, e.g. using matrices in the X-Y, Y-Z, and X-Z phase spaces and concatenating them into a final vector. But such an enhancement will increase the sparsity of the final matrix and therefore require more resources at training time. In our prototype, discussed in Sect. \ref{Prototype}, we used the vector representation with padded values and applied a two-pass procedure over the data. The first pass reads the data streams and determines the dimensionality of the Jagged Arrays along with the min, max, and mean values used for normalization. The second pass reads and transforms the data from the streaming layer to the underlying ML framework. In Neural Network models it is natural to set padded NaN values to zeros, since they are used in the multiplication operations between input values and weight matrix elements. But knowledge of the locations of the padded values in the vector representation approach may be valuable in certain circumstances. For instance, when training AutoEncoder networks, the knowledge of the locations of padded values in the input vector can be used at the decoding phase. Therefore our initial implementation of the vector representation, discussed in Sect. \ref{Prototype}, used an additional mask vector to preserve the knowledge of the padded value locations. \subsection{Data Inference Layer} The choice of a data inference layer is driven by the underlying technology, e.g. the ML framework.
It can be either tightly integrated with application frameworks (both the CMS and ATLAS experiments followed this approach in their CMSSW-DNN \cite{CMSSWDNN} and LTNN \cite{ATLASLNN} solutions) or developed as a Service (aaS) solution. The former has the advantage of reducing the latency of the inference step per processed event, but the latter can be easily generalized and become independent from the internal infrastructure. As such, it can be easily integrated into cloud platforms, be used as a repository of pre-trained models, and serve models across experiment boundaries. We decided to implement the latter solution via the TensorFlow as a Service (TFaaS) architecture, see \cite{TFaaS}. We evaluated several ML frameworks and decided to use TensorFlow \cite{TF} graphs for the inference phase. A TF model represents a computational graph in a static form, i.e. the mathematical computations, graph edges, and data flow are well-defined at run time. Reading a TF model can be done in different programming languages due to the APIs provided by the TF library. Moreover, TF graphs are very well optimized for GPUs and TPUs. We chose the Go programming language to implement the TensorFlow as a Service (TFaaS) \cite{TFaaS} part of the MLaaS framework based on the following factors: the Go language natively supports concurrency via goroutines and channels; it is developed and used by Google and very well integrated with the TF library; and it produces a single static executable, which significantly simplifies deployment on premises and to various (cloud) service providers. We also opted for a REST interface, through which clients may upload their TF models to the TFaaS server and use them for their inference needs. Both Python and C++ clients were developed on top of the REST APIs (end-points), and other clients can be easily developed thanks to the HTTP protocol used by the TFaaS Go RESTful implementation.
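Because the interface is plain HTTP, a prediction call can be issued with nothing more than Python's standard library. The sketch below is illustrative only: the end-point path, payload keys, and port are placeholder assumptions, not the exact TFaaS API.

```python
import json
from urllib import request

def predict_request(host, model_name, inputs):
    """Build an HTTP POST request for a TFaaS-style JSON prediction end-point.
    The '/json' path and the payload keys here are illustrative assumptions."""
    payload = json.dumps({"model": model_name, "inputs": inputs}).encode("utf-8")
    return request.Request(
        url=host + "/json",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = predict_request("http://localhost:8083", "demo_model", [1.0, 2.0, 3.0])
# resp = json.load(request.urlopen(req))  # would return the model predictions
```

Any language with an HTTP client (C++, Go, curl from the shell) can issue the same call, which is what makes the service approach framework-agnostic.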
We performed several benchmarks using a TFaaS server running on CentOS 7 Linux, with 16 cores and 30GB of RAM. The benchmarks were done in two modes: 1000 calls with 100 concurrent clients, and 5000 calls with 200 concurrent clients. We tested both the JSON and ProtoBuffer data formats when sending and fetching data to/from the TFaaS server. In both scenarios we achieved a throughput of $\sim 500$ req/sec. These numbers were obtained serving a mid-size pre-trained model consisting of 1024x1024 hidden layers. Even though a single TFaaS server may not be as efficient as an integrated solution, it can be easily scaled horizontally, e.g. using Kubernetes or other cluster solutions, and may provide the desired throughput for concurrent clients. It also decouples the application layer/framework from the inference phase, which can be integrated into any existing infrastructure simply by speaking the HTTP protocol to the TFaaS server. Also, TFaaS can be used as a repository of pre-trained models which can be easily shared across experiment boundaries or domains. For instance, the current implementation of TFaaS allows visual inspection of uploaded models, versioning, tagging, etc. A simple search engine can be put on top of TFaaS with little effort. For a full list of planned improvements see Sect. \ref{Improvements}. \subsection{Proof-of-concept prototype}\label{Prototype} When all layers of the MLaaS framework were developed, we composed a working prototype of the system using ROOT files accessible through XrootD servers. The data were read in 1000-event batches, where a single batch was approximately 4MB in size. Each batch was fed into both TensorFlow (implemented via the Keras framework) and PyTorch models. The Data Generator representing the data streaming layer yields a vector representation of the Jagged Array ROOT data structures, along with a mask vector representing the positions of padded values, see Fig. \ref{fig:JaggedArrayMask}, into the corresponding model.
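A minimal sketch of this padding-plus-mask transformation in plain Python (the attribute values and dimensionalities are illustrative; the real implementation operates on NumPy/Jagged arrays):

```python
import math

def pad_jagged(event, dims):
    """Flatten one event's jagged attributes into a fixed-size vector with
    NaN padding, plus a mask marking which slots hold real values."""
    vec, mask = [], []
    for attr, dim in zip(event, dims):
        vec.extend(list(attr) + [math.nan] * (dim - len(attr)))
        mask.extend([1] * len(attr) + [0] * (dim - len(attr)))
    return vec, mask

# two jagged attributes whose dataset-wide dimensionalities are 3 and 2
vec, mask = pad_jagged([[0.5, 1.2], [7.0]], dims=[3, 2])
# vec  -> [0.5, 1.2, nan, 7.0, nan]
# mask -> [1, 1, 0, 1, 0]
```

Every event thus maps to a vector of the same length, regardless of how many particles it contains, while the mask records which entries were padded.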
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{JaggedArrayMask.pdf} \caption{A vector representation of a Jagged Array along with the corresponding mask vector.} \label{fig:JaggedArrayMask} \end{figure} This was done to avoid confusing real attribute values with padded values. The mask vector was used in both models to cast NaN values to zeros. We tested this prototype on a local machine and also successfully deployed it on a GPU node. The implementation of the data streaming and data training layers was done in Python. The workflow consisted of running Python scripts for reading the data, training ML models, and uploading them to the TFaaS server via the HTTP protocol. Predictions were served to Python, C++, and curl clients. Further details of this proof-of-concept prototype can be found in the MLaaS4HEP GitHub repository \cite{MLaaS4HEP}. \section{Future directions}\label{Improvements} We foresee that the MLaaS approach can be widely applicable in HEP; as such, several further improvements will be necessary. \subsection{Data Streaming Layer} In the Data Streaming Layer we plan to introduce proper data shuffling, which should be done carefully when reading data from multiple remote ROOT files. The current implementation of MLaaS reads data sequentially from file to file and feeds the data batches directly to the ML framework. In order to implement proper data shuffling, reading parallelism should be introduced into the MLaaS framework. We also need to look at further optimization of the streaming layer to achieve better throughput from remote data-providers. \subsection{Data Training Layer} The current landscape of ML frameworks is changing rapidly, and we should adapt MLaaS to existing and future ML frameworks and innovations. For instance, the Open Neural Network Exchange (ONNX) format \cite{onnxai} opens up a door to the migration of models from one framework to another.
We are currently working on automatic transformation of PyTorch \cite{PyTorch} and fast.ai \cite{fastai} models into TensorFlow, which is used by the TFaaS service. As discussed in Sect. \ref{Training}, there are different approaches to feeding Jagged Arrays into ML frameworks, and R\&D in this direction is in progress. For instance, for AutoEncoder (AE) models the vector representation with padded values should always keep the mask vector around, since an AE model transforms the input vector into an internal dense representation and then decodes it back into the original representation. The latter transformation can use the mask vector to restore the padded values and, if necessary, convert the vector representation of the data back to Jagged Array or ROOT TTree data structures. \subsection{Data Inference Layer} On the inference side (TFaaS) we plan to extend the ``aaS'' part to become a repository of uploaded models. As such, several functionalities should be added, such as search capabilities, extended model tagging, and versioning. This can be achieved by adding a proper meta-data description of the uploaded models and storing it in a back-end database for later look-up, indexing, and versioning. \subsection{MLaaS services} The proposed architecture allows the training and inference layers to be developed and deployed as independent MLaaS services, where separate resource providers can be used and dynamically scaled if necessary; e.g. GPUs/TPUs can be provisioned on demand using commercial cloud(s) for training specific models, while the TFaaS inference service can reside on CERN premises. For instance, continuous training of complex DL models would be possible when data produced by the experiment are placed on the GRID sites: the training MLaaS service would receive notifications about newly available data and re-train specific model(s).
When a new model is ready, it can be pushed to TFaaS and becomes immediately available to end-users without any intervention in the existing infrastructure. TFaaS can be further optimized to use FPGAs to speed up the inference phase. We foresee that such an approach may be more flexible and cost-effective for HEP experiments in the HL-LHC era. As such, we plan to perform additional R\&D studies in this direction and evaluate further MLaaS services using available resources. \section{Summary} In this paper we presented a novel Machine Learning as a Service approach to training ML models using the native ROOT format of HEP data. It consists of three layers: the data streaming, training, and inference layers, which were implemented as independent components. The data streaming layer relies on the uproot library for reading data from ROOT files (local or remote) and yielding NumPy (Jagged) arrays upstream. The data training layer transforms the Jagged Array portion of the input data into a vector representation and passes it to the ML framework of the user's choice. Finally, the inference layer was implemented as an independent service (TFaaS) to serve TensorFlow models via an HTTP interface. Such a flexible architecture allows ML training over HEP ROOT data without physically downloading the data to local storage. It reads and transforms the ROOT tree data representation (Jagged Array) into an intermediate flat data format suitable as input for the underlying ML framework. A proof-of-concept prototype was developed to demonstrate the MLaaS capability to read arbitrary-size datasets, potentially allowing HEP ML models to be trained over large datasets at any scale. \begin{acknowledgement} This work was done as a part of the CMS experiment R\&D program. I would like to thank Jim Pivarski for numerous helpful discussions and for his hard work on the uproot (and many other) packages, which opened up the possibility to implement MLaaS. \end{acknowledgement}
\section{Introduction}\label{section1} In an introductory statistics course, we usually teach students how to conduct a hypothesis test based on independent samples to compare the means of two populations with equal, but unknown variance. Let $y_{ij}$ be random samples drawn from independent and normally distributed populations with means $\mu_i$ and common variance $\sigma^2$ for $j=1, \cdots, n_i$ and $i=1, 2$. We are interested in testing \be \label{test:01} H_0: \mu_1 = \mu_2 \quad \mathrm{versus} \quad H_1: \mu_1 \neq \mu_2. \ee Within a frequentist framework, the pooled-variance two-sample $t$ test is commonly used for the above hypothesis testing. The test statistic is given by \be \label{tstat:01} t = \frac{\bar y_1 - \bar y_2}{s_p/\sqrt{n_\delta}}, \ee where $\bar y_i = \sum_{j=1}^{n_i} y_{ij}/n_i$ and \be s_p^2 = \frac{(n_1 -1)s_1^2 + (n_2 -1)s_2^2}{n_1 + n_2 -2} \ee is the pooled-variance estimate of $\sigma^2$ with $s_i^2 = \sum_{j=1}^{n_i}(y_{ij} - \bar y_i)^2/(n_i-1)$ for $i=1, 2$. Here, $n_\delta = (1/n_1 + 1/n_2)^{-1}$ is often called the ``effective sample size'' in the two-sample experiment. At the $\alpha$ significance level, we obtain the critical value $t_{1-\alpha/2, v}$ or P-value $p = 2P(T \geq |t|)$ with degrees of freedom $v = n_1 + n_2 -2$, where $t_{1-\alpha/2, v}$ is the $(1 - \alpha/2)$ quantile of $T_v$ distribution and $T$ has the $T_v$ distribution. We reject the null hypothesis $H_0$ if either $|t| > t_{1-\alpha/2, v}$ or $p < \alpha$; see \cite{Weis:2012}. Bayesian approaches to hypothesis testing have recently received considerable attention and are becoming important in different disciplines, such as sociology (\citeauthor{West:1999}, \citeyear{West:1999}), economics (\citeauthor{Fern:2001}, \citeyear{Fern:2001}), and psychology (\citeauthor{Roud:Speck:Sun:2009}, \citeyear{Roud:Speck:Sun:2009}). 
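The pooled-variance $t$ statistic in (\ref{tstat:01}), the effective sample size $n_\delta$, and the degrees of freedom $v$ are straightforward to compute; a minimal Python sketch (the function name is ours, for illustration):

```python
from statistics import mean

def pooled_t(y1, y2):
    """Pooled-variance two-sample t statistic, with the effective
    sample size n_delta and degrees of freedom v = n1 + n2 - 2."""
    n1, n2 = len(y1), len(y2)
    yb1, yb2 = mean(y1), mean(y2)
    s1sq = sum((y - yb1) ** 2 for y in y1) / (n1 - 1)
    s2sq = sum((y - yb2) ** 2 for y in y2) / (n2 - 1)
    sp2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)  # pooled variance
    n_delta = 1.0 / (1.0 / n1 + 1.0 / n2)  # effective sample size
    t = (yb1 - yb2) / (sp2 / n_delta) ** 0.5
    return t, n1 + n2 - 2, n_delta

t, v, n_delta = pooled_t([1, 2, 3, 4], [2, 3, 4, 5])
# t -> -(6/5)**0.5, v -> 6, n_delta -> 2.0
```

The P-value then follows from the $T_v$ distribution, e.g. via a statistics library's Student-$t$ CDF.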
Many recent studies suggest that we should offer at least one course about Bayesian methods to students at early stages in their mathematics and statistics education; see, for example, \cite{Albe:1997}, \cite{Gone:John:Lu:West:2005}, \cite{Wetz:Gras:2012}, \cite{Wulf:Robin:2014}, among others. Specifically, as stated by \cite{Carl:Loui:2000}, ``\textit{The Bayesian approach to statistical design and analysis is emerging as an increasingly effective and practical alternative to the frequentist one.}'' Such a course will not only motivate students' interests in Bayesian thinking, but also help them know how to formulate Bayesian methods in simple statistical scenarios, such as the hypothesis testing in (\ref{test:01}). More importantly, it will make students ready to use both Bayesian and frequentist ideas. A natural approach within a Bayesian framework to compare hypotheses is the Bayes factor (ratio of the marginal densities of the two models); see \cite{Kass:95}. For the hypothesis testing in (\ref{test:01}), \cite{Gone:John:Lu:West:2005} proposed a simple closed-form Bayes factor based on the two-sample $t$-statistic and it is given by \be \label{BF:01} \mathrm{GBF}[H_1: H_0](\sigma^2_a) = \biggl[\frac{1+t^2/v}{1+t^2/\bigr(v(1+n_\delta\sigma_a^2)\bigr)}\biggr]^{(v+1)/2}(1+n_\delta\sigma_a^2)^{-1/2}, \ee where $\sigma^2_a$ is a hyperparameter of the prior that needs to be specified. The choice of prior distributions for deriving the GBF will be stated in detail in the following section. The GBF in (\ref{BF:01}) shows a close relationship between frequentist and Bayesian ideas and can be easily covered in an elementary statistics course. Note that the choice of $\sigma^2_a$ is critical, because it acts as an inverse prior sample size. Specifically, the GBF with fixed $\sigma^2_a$ may exhibit some undesirable features, such as Bartlett's paradox and the information paradox; see \cite{liang:2008}. 
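Both undesirable features of the GBF in (\ref{BF:01}) are easy to reproduce numerically. The sketch below uses illustrative values ($n_1 = n_2 = 10$, so $v = 18$ and $n_\delta = 5$, with $t = 5$) and exhibits the two behaviors discussed below.

```python
import numpy as np

def gbf(t, v, n_delta, sigma_a2):
    """GBF[H1:H0] as a function of the two-sample t statistic."""
    c = 1.0 + n_delta * sigma_a2
    ratio = (1.0 + t**2 / v) / (1.0 + t**2 / (v * c))
    return ratio ** ((v + 1) / 2.0) * c ** -0.5

v, n_delta, t = 18, 5.0, 5.0      # n1 = n2 = 10; t = 5 is strong evidence against H0

# Bartlett's paradox: as sigma_a^2 grows, the GBF vanishes and favors H0,
# no matter what the data say.
moderate_prior = gbf(t, v, n_delta, 1.0)
diffuse_prior = gbf(t, v, n_delta, 1e12)

# Information paradox: as t -> infinity with sigma_a^2 fixed, the GBF
# flattens out at the constant (1 + n_delta * sigma_a^2)^(v/2).
plateau = (1.0 + n_delta / 9.0) ** (v / 2.0)   # sigma_a = 1/3
```

Here `moderate_prior` strongly favors $H_1$, while `diffuse_prior` falls below one despite identical data, and the GBF evaluated at very large $t$ approaches `plateau` rather than growing without bound.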
These paradoxes will definitely confuse students and even make them struggle when conducting Bayesian data analysis. In this paper, we specify a hyper-prior for the hyperparameter $\sigma^2_a$ to reduce the impact of misspecified hyperparameter values. The prior will still result in an explicit expression of the Bayes factor based on the two-sample $t$-statistic. It is shown that the proposed approach resolves several potential difficulties and paradoxes encountered by the previous approach due to \cite{Gone:John:Lu:West:2005}. We hope that our results will not only facilitate an intuitive understanding and discussion of the relationship between frequentist and Bayesian ideas, but also shed some light on the importance of hyper-prior specifications to students, teachers, and researchers. The remainder of this paper is organized as follows. In Section \ref{section:02}, we review the existing Bayes factor of \cite{Gone:John:Lu:West:2005} and discuss potential difficulties associated with fixed hyperparameter values. In Section \ref{section:03}, we specify a hyper-prior on that hyperparameter, which yields a closed-form expression for the Bayes factor. We investigate the finite sample performance of the two Bayesian procedures in a simulation study (Section \ref{section:03}) and a real-data example (Section \ref{section:04}). Some concluding remarks are given in Section \ref{section:05}, with the derivation of the proposed procedure in the appendix. \section{Bayes inference} \label{section:02} The Bayesian analysis begins with prior specifications for the unknown parameters. Let $p(\bfY\mid \theta_j)$ and $\pi_j$ be the likelihood function of $\bfY$ and the prior probability on hypothesis $H_j$ $(\pi_0 + \pi_1 =1)$ for $j =0, 1$, respectively. From Bayes' theorem, the posterior probability of $H_j$ is defined as \begin{equation}\label{bayesfactor:1} P(H_j\mid\bfY) = \frac{\pi_j m_j(\bfY)}{\pi_0 m_0(\bfY) + \pi_1m_1(\bfY)}.
\end{equation} The corresponding marginal likelihood of $\bfY$ given $H_j$ is \begin{equation} \label{likeh:01} m_j(\bfY) = \int{p(\bfY\mid \theta_j)\pi_j(\theta_j)}\,d\theta_j, \end{equation} where $\pi_j(\theta_j)$ is the prior for the unknown parameter $\theta_j$ under $H_j$ for $j =0, 1$. The posterior probability of $H_1$ can be expressed as \begin{equation}\label{bayesfactor:2} P(H_1 \mid\bfY) = \frac{\pi_1\mathrm{BF}[H_1 : H_0]}{\pi_0 + \pi_1\mathrm{BF}[H_1 : H_0]} = \biggl[1 + \frac{\pi_0}{\pi_1}\frac{1}{\mathrm{BF}[H_1 : H_0]}\biggr]^{-1}, \end{equation} where the Bayes factor, $\mathrm{BF}[H_1 : H_0]$, for comparing $H_1$ to $H_0$ is given by \begin{equation}\label{m01:BF} \mathrm{BF}[H_1 : H_0] = \frac{m_1(\bfY)}{m_0(\bfY)}. \end{equation} The hypothesis $H_1$ $(H_0)$ is more likely to be selected when $\mathrm{BF}[H_1 : H_0] >1$ $(< 1)$. More specifically, \cite{Jeff:1961} suggested that $\mathrm{BF}[H_1 : H_0]<0.1$ provides ``strong'' evidence in favor of $H_0$, and $\mathrm{BF}[H_1 : H_0]<0.01$ provides ``decisive'' evidence. Note that the Bayes factor for the null relative to the alternative, denoted by $\mathrm{BF}[H_0 : H_1]$, is given by \begin{equation*} \mathrm{BF}[H_0 : H_1] = \frac{1}{\mathrm{BF}[H_1 : H_0]}. \end{equation*} For the hypothesis testing problem in (\ref{test:01}), we need to specify appropriate prior distributions for $(\mu_1, \mu_2, \sigma^2)$. \cite{Gone:John:Lu:West:2005} show that this testing problem can be written in equivalent form as \be \label{testing:01} H_0: \delta = \mu_1 - \mu_2 = 0 \quad \mathrm{versus} \quad H_1: \delta \neq 0. \ee Therefore, they advocate a prior for $\delta/\sigma$, instead of $\mu$, where $\mu = (\mu_1 + \mu_2)/2$.
After reparameterization from $(\mu_1, \mu_2, \sigma^2)$ to $(\mu, \delta, \sigma^2)$, the suggested priors are given by \be \label{prior:01} \pi(\mu, \sigma^2) \propto 1 / \sigma^2 \quad \mathrm{and} \quad \delta/\sigma \mid \mu, \sigma^2, \delta \neq 0 \sim N\bigl(\lambda, ~\sigma_a^2\bigr), \ee where $\lambda$ and $\sigma_a^2$ are the hyperparameters that need to be pre-specified. Due to lack of prior knowledge in practice, it is natural to set $\lambda = 0$ to reflect the uncertain direction of an effect. Thus, the case for which $\lambda=0$ will be of interest to us in what follows. The Bayes factor under the above priors is \be \label{BF:0001} \mathrm{GBF}[H_1: H_0](\sigma^2_a) = \biggl[\frac{1+t^2/v}{1+t^2/\bigr(v(1+n_\delta\sigma_a^2)\bigr)}\biggr]^{(v+1)/2}(1+n_\delta\sigma_a^2)^{-1/2}, \ee where $v = n_1 + n_2 -2$. Note that the Bayes factor depends on the data only through the $t$-statistic and can often be calculated using a pocket calculator. As mentioned in the Introduction, the choice of $\sigma^2_a$ is quite critical, and in particular, the Bayes factor with fixed $\sigma^2_a$ may lead to several undesirable properties, such as Bartlett's paradox and the information paradox, briefly summarized as follows. \noindent{\ \ Bartlett's paradox:} Because the hyperparameter $\sigma^2_a$ reflects the variance of the univariate normal distribution in (\ref{prior:01}), a large value of $\sigma^2_a$ is often chosen to minimize prior information. However, when $\sigma^2_a$ becomes sufficiently large, while $v$ is fixed ($n_\delta$ is also fixed), the GBF tends to 0, indicating that it always favors the null hypothesis, regardless of the information from the data. This phenomenon is often called Bartlett's paradox, which has been studied by \cite{Jeff:1961} and more recently by \cite{liang:2008}. \noindent{\ \ Information paradox:} Suppose that samples are generated under $H_1$. 
In this setting, when $v$ is fixed, the posterior probability of $H_1$ should be higher than that of $H_0$ when the $t$-statistic goes to infinity. We thus expect that the GBF tends to infinity as the information against $H_0$ accumulates. However, with a fixed value of $\sigma_a^2$, the GBF becomes a constant $(1+n_\delta\sigma_a^2)^{v/2}$ as $t \go \infty$. This is referred to as the information paradox. The two paradoxes may confuse students and even make them struggle with Bayesian data analysis, especially when we introduce the basic ideas of Bayesian inference at an elementary level. In this paper, we advocate a hyper-prior for $\sigma_a^2$, which not only alleviates the impact of a misspecified hyperparameter, but also yields an explicit Bayes factor. More importantly, the proposed approach is still a function of the two-sample $t$-statistic and enjoys various appealing properties, as discussed next. \subsection{The hyper-prior for $\sigma_a^2$} In this section, we consider a proper prior for $\sigma_a^2$, denoted by $\pi(\sigma_a^2)$. The proposed Bayes factor can be written as \begin{equation} \label{bf:g01} \mathrm{PBF}[H_1: H_0] = \int_0^\infty \biggl[\frac{1+t^2/v}{1+t^2/\bigl(v(1+n_\delta\sigma_a^2)\bigr)}\biggr]^{(v+1)/2}(1+n_\delta\sigma_a^2)^{-1/2} \pi(\sigma_a^2)\,d\sigma_a^2. \end{equation} The prior for $\sigma_a^2$ is assigned to be the Pearson type VI distribution with shape parameters $a> -1$, $b > -1$, and scale parameter $\kappa > 0$. Its probability density function (pdf) is \begin{equation}\label{BF:fun} \pi(\sigma_a^2) = \frac{\kappa (\kappa \sigma_a^2)^b(1 + \kappa \sigma_a^2)^{-a - b - 2}}{B(a + 1, b + 1)}I_{(0, \infty)}{(\sigma_a^2)}, \end{equation} where $B(\cdot, \cdot)$ is a beta function. This prior has also been used by \cite{Wang:Sun:2013b} in the one-way random effects model.
With the particular choice of $\kappa = n_\delta$ and $b = (v+1)/2 - a - 5/2$, the Bayes factor can be greatly simplified as \begin{equation}\label{BFequation} \mathrm{PBF}[H_1 : H_0] = \frac{\Gamma\bigl(v/2\bigr)\Gamma\bigl(a + 3/2\bigr)}{\Gamma\bigl((v + 1)/2\bigr)\Gamma(a + 1)}\biggl(1 + \frac{t^2}{v}\biggr)^{(v-2a-2)/2}, \end{equation} which is an explicit expression and can thus be easily computed using an Excel spreadsheet or a simple calculator. Such an expression is unavailable for other choices of $\kappa$ and $b$. Like the GBF in (\ref{BF:0001}), it can be regarded as a Bayesian version of the $t$-statistic; in addition, our approach enjoys several appealing properties, which are not shared by the GBF. The proof of the following theorem is straightforward and is thus omitted here for simplicity. \begin{theorem0} In the setting of the information paradox mentioned above, the Bayes factor in (\ref{BFequation}) tends to infinity when $-1 < a < v/2 - 1$. \end{theorem0} The theorem shows that when $-1 < a < v/2 - 1$, the specified hyper-prior provides a resolution of the information paradox that arises in the GBF. In the case of minimum sample sizes of the two samples (i.e., $n_1 + n_2 = 3$), we have $v = 1$, indicating that $a \in (-1, -1/2)$. Of particular note is that when $a=-1/2$, the asymptotic tail behavior of \begin{align*} \pi\bigl(\delta/\sigma \mid \mu, \sigma^2, \delta \neq 0\bigr) = \int_0^\infty N\bigl(\delta/\sigma \mid \lambda, \sigma_a^2\bigr) \pi(\sigma_a^2) \, d\sigma_a^2 \end{align*} becomes the Cauchy density for sufficiently large $\delta/\sigma$, which provides a flat tail behavior and diminishes the prior influence of $\pi\bigl(\delta/\sigma \mid \mu, \sigma^2, \delta \neq 0\bigr)$, especially when $a$ is small. Consequently, we recommend $a \in (-1, -1/2]$.
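As a sanity check on the closed form in (\ref{BFequation}), it can be compared with a direct numerical integration of (\ref{bf:g01}) against the Pearson type VI prior with $\kappa = n_\delta$ and $b = (v+1)/2 - a - 5/2$. The sketch below uses illustrative values $n_1 = n_2 = 10$ (so $v = 18$, $n_\delta = 5$) and $a = -3/4$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln, gammaln

def pbf_closed(t, v, a):
    """Closed-form PBF[H1:H0]."""
    logc = (gammaln(v / 2.0) + gammaln(a + 1.5)
            - gammaln((v + 1) / 2.0) - gammaln(a + 1.0))
    return np.exp(logc) * (1.0 + t**2 / v) ** ((v - 2.0 * a - 2.0) / 2.0)

def pbf_numeric(t, v, n_delta, a):
    """PBF by integrating the GBF against the Pearson VI hyper-prior,
    with kappa = n_delta and b = (v + 1)/2 - a - 5/2."""
    b = (v + 1) / 2.0 - a - 2.5
    def integrand(s2):
        c = 1.0 + n_delta * s2
        g = ((1 + t**2 / v) / (1 + t**2 / (v * c))) ** ((v + 1) / 2.0) / np.sqrt(c)
        logp = (np.log(n_delta) + b * np.log(n_delta * s2)
                - (a + b + 2.0) * np.log(c) - betaln(a + 1.0, b + 1.0))
        return g * np.exp(logp)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

closed = pbf_closed(5.0, 18, -0.75)          # t = 5, v = 18
numeric = pbf_numeric(5.0, 18, 5.0, -0.75)   # n_delta = 5
```

The two values agree to numerical precision, and the closed form visibly grows with $t$, in line with the theorem above.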
It deserves mentioning that the prior depends on the sample size and that as the sample size increases, the prior has a density in the right tail that behaves like $(\sigma^2_a)^{-a-2}$, leading to a fat tail for small values of $a$. Furthermore, it can be seen from Figure \ref{priorpdf:004} that a higher prior probability is assigned to the event $\sigma_a^2 >1$. This phenomenon occurs because the parameter $\sigma_a^2$ seems to act as an inverse prior sample size. A small value of $\sigma_a^2$ (such as $\sigma_a^2\go 0$) makes the prior converge to a point mass at $\delta=0$, and the alternative $H_1$ may collapse to $H_0$. We thus obtain that the Bayes factor (the GBF) tends to 1, indicating that both hypotheses are equally good descriptions of the data in the limit. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.5]{priorpdf} \end{center} \caption{The hyper-prior for $\sigma_a^2$ with $\kappa=n_\delta$, $a=-3/4$, and $b=(v+1)/2-a-5/2$ for different choices of $n_1$ and $n_2$.} \label{priorpdf:004} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{Barlett} \includegraphics[scale=0.42]{NBarlett} \end{center} \caption{The Bayes factor as a function of the hyperparameter (left: the GBF; right: the PBF) when $n_1 = n_2 =10$.} \label{incon:01} \end{figure} To see how the PBF avoids Bartlett's paradox and the information paradox, we consider two simple examples with $n_1 = n_2 =10$: one with a fixed $t$-statistic, and the other with an increasing value. Suppose that $t=5$, providing strong evidence against $H_0$. We observe from Figure \ref{incon:01} that the PBF with $a \in (-1, -1/2]$ always rejects $H_0$, while the GBF fails to reject $H_0$ when $\sigma_a$ becomes large, regardless of the information from the data. Also, it is well-known that the larger the $t$-statistic, the stronger the evidence against $H_0$.
Figure \ref{incon:02} shows that as the $t$-statistic increases, the PBF grows faster than the GBF, which tends to a constant, even though $t$ becomes significantly large. These two examples show that the PBF not only avoids these paradoxes, but also provides a way to enhance students' understanding of these paradoxes. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{Information} \includegraphics[scale=0.42]{NInformation} \end{center} \caption{The Bayes factor as a function of the $t$-statistic (left: the GBF with $\sigma_a =.1$; right: the PBF with $a=-.75$) when $t =5$ and $n_1 = n_2 = 10$.} \label{incon:02} \end{figure} \section{Simulation study} \label{section:03} In this section, we conduct simulation studies and a sensitivity analysis to investigate the finite sample performance of the two Bayes factors (GBF and PBF) with various choices of their corresponding hyperparameters. For sample 1, we generate $n_1$ random variables normally distributed with mean $0$ and standard deviation 1. For sample 2, we generate $n_2$ random variables normally distributed with mean $\delta$ and standard deviation 1, where $\delta$ ranges from $-4$ to $4$ in increments of $0.1$. To assess the sensitivity of the hyperparameters, we take $\sigma_a = \{0.1, 1/3, 0.5, 1, 1.2, 2, 5\}$ for the GBF in (\ref{BF:0001}) and $a = \{-0.95, -0.9, -0.8, -0.75, -0.7, -0.6, -0.5\}$ for the PBF in (\ref{BFequation}). For each case, we analyze $10,000$ simulated datasets with various choices of $n_1$ and $n_2$. The decision criterion used in this paper is to choose $H_1$ if the Bayes factor $>1$ and $H_0$ otherwise. The relative frequencies of rejecting $H_0$ under the three different choices of sample size are depicted in Figures \ref{incon:001}, \ref{incon:002}, and \ref{incon:003}. Rather than providing exhaustive results based on these simulations, we merely highlight the most important findings from the three figures.
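Before turning to the findings, the simulation scheme just described can be sketched as follows; the replication count and the fixed hyperparameter $a = -3/4$ are chosen here purely for illustration, not as a replica of the full study.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def pbf(t, v, a=-0.75):
    """Closed-form PBF[H1:H0] based on the two-sample t statistic."""
    logc = (gammaln(v / 2.0) + gammaln(a + 1.5)
            - gammaln((v + 1) / 2.0) - gammaln(a + 1.0))
    return np.exp(logc) * (1.0 + t**2 / v) ** ((v - 2.0 * a - 2.0) / 2.0)

def rejection_rate(delta, n1=30, n2=30, reps=500):
    """Fraction of simulated datasets with PBF[H1:H0] > 1."""
    v = n1 + n2 - 2
    n_delta = 1.0 / (1.0 / n1 + 1.0 / n2)
    hits = 0
    for _ in range(reps):
        y1 = rng.normal(0.0, 1.0, n1)
        y2 = rng.normal(delta, 1.0, n2)
        sp2 = ((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / v
        t = (y1.mean() - y2.mean()) / np.sqrt(sp2 / n_delta)
        hits += pbf(t, v) > 1.0
    return hits / reps
```

With a large effect ($\delta = 2$) the rejection frequency is close to one, while under $H_0$ ($\delta = 0$) it stays small, consistent with the figures.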
(i) The GBF is quite sensitive to the choice of the hyperparameter $\sigma_a$, even when the sample size is large. For instance, when $n_1 = n_2 =100$ and $\delta=-0.3$, the frequency of rejecting $H_0$ changes from 0.8479 to 0.2843 with $\sigma_a$ increasing from 0.1 to 5. (ii) The PBF is relatively insensitive to the hyperparameter $a$, and when the sample size is large, the PBF behaves similarly for all values. (iii) We observe that under $H_0$ (i.e., $\delta=0$), the relative frequency of rejecting $H_0$ varies greatly for the GBF with different choices of $\sigma_a$, whereas the PBF is quite stable for different values of $a$. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{PreviousBF10} \includegraphics[scale=0.42]{PBF10} \end{center} \caption{The relative frequency of rejection of $H_0$ under different procedures (left: the GBF; right: the PBF) when $n_1 = n_2 = 10$.} \label{incon:001} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{PreviousBF30} \includegraphics[scale=0.42]{PBF30} \end{center} \caption{The relative frequency of rejection of $H_0$ under different procedures (left: the GBF; right: the PBF) when $n_1 = n_2 = 30$.} \label{incon:002} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{PreviousBF100} \includegraphics[scale=0.42]{PBF100} \end{center} \caption{The relative frequency of rejection of $H_0$ under different procedures (left: the GBF; right: the PBF) when $n_1 = n_2 = 100$.} \label{incon:003} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.42]{comparisonn10} \includegraphics[scale=0.42]{comparisonn30} \includegraphics[scale=0.42]{comparisonn100} \includegraphics[scale=0.42]{comparisonn500} \end{center} \caption{The relative frequency of rejection of $H_0$ under the three considered procedures with different sample sizes.} \label{incon:004} \end{figure} We now compare the performance of the two Bayes factors with the P-value based
on the $t$-statistic in (\ref{tstat:01}) when $\alpha=0.05$. Based on the same simulation scheme described above, we consider the GBF with $\sigma_a = 1/3$, suggested by \cite{Gone:John:Lu:West:2005}, and the PBF with $a=-3/4$. Figure \ref{incon:004} depicts the numerical findings with different sample sizes. We observe that the PBF and the P-value behave similarly, whereas both differ significantly from the GBF. As expected, when the sample size becomes large, the three procedures behave very similarly. In addition, the PBF has a faster decreasing rate to zero than the two other methods, in terms of the relative frequency of rejecting $H_0$. Thus, we may conclude that the PBF is consistent under $H_0$ when the sample size approaches infinity. This property is not shared by the two other methods under consideration. \section{A real-data application} \label{section:04} We compare the performance of the two Bayes factors via a real-data example available at The Data and Story Library ($http://lib.stat.cmu.edu/DASL/Datafiles/Calcium.html$). The data consist of the blood pressure measurements for 21 African-American men: 10 of the men took calcium supplements and 11 took placebos. We are interested in testing if increasing calcium intake reduces blood pressure. The pooled-variance $t$-statistic is 1.634, with the two-sided P-value of 0.1187. The positive $t$-statistic indicates that the intake of calcium is beneficial for reducing blood pressure, while the P-value shows that the null hypothesis that calcium has no effect cannot be rejected at the $5\%$ significance level. To fully specify the Bayesian approach, we need to choose appropriate priors for the unknown parameters. Due to lack of prior knowledge, we consider $\pi_0 = \pi_1 = 1/2$. Therefore, for decision-making, the hypothesis $H_1$ is more likely to be selected if $P(H_1 \mid \bfY) > 1/2$, or equivalently, the value of the Bayes factor is larger than $1$.
\begin{table}[!htbp] \centering \begin{tabular}{c*{6}{c}c} \hline $\sigma_a$ & 1/10 & 1/3 & 1/2 & 1 & 1.5 & 2 & 5 \\ \hline GBF$[H_1 : H_0]$ & 1.307 & 1.264 & 1.358 & 1.193 & 0.934 & 0.746 & 0.321 \\ $P(H_1 \mid\bfY)$ & 0.509 & 0.558 & 0.576 & 0.544 & 0.483 & 0.427 & 0.243 \\ \hline \end{tabular} \caption{Numerical summaries of the GBF with different choices of $\sigma_a$.} \label{table:EX1} \end{table} \cite{Gone:John:Lu:West:2005} analyze this dataset by using the GBF with $\sigma_a = 1/3$ and obtain that the null hypothesis is less likely because $P(H_1 \mid\bfY) = 0.558$. From a practical viewpoint, we shall be interested in a sensitivity analysis of the hyperparameter $\sigma_a$. Numerical results are reported in Table \ref{table:EX1}. We observe that as $\sigma_a$ increases, the GBF decreases. When $\sigma_a > 1$, the GBF tends to favor $H_0$, whereas it tends to reject $H_0$ when $\sigma_a \leq 1$. The corresponding posterior probability changes from $0.509$ (against $H_0$) to $0.243$ (against $H_1$) when $\sigma_a$ changes from $1/10$ to $5$. This observation shows that the GBF is quite sensitive to the choice of $\sigma_a$ and that different choices of $\sigma_a$ may lead to contradictory decisions. We now employ the PBF with different values of $a \in (-1, -1/2]$. It can be seen from Table \ref{table:EX2} that the PBF is quite robust to the choice of $a$ and leads to the same decision. In addition, the conclusion based on the PBF is consistent with the one based on the two-sided P-value.
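The table entries are straightforward to reproduce. The sketch below uses only the reported summaries ($t = 1.634$, $n_1 = 10$, $n_2 = 11$) and recovers, for example, $\mathrm{GBF} \approx 1.264$ at $\sigma_a = 1/3$ and $\mathrm{PBF} \approx 0.375$ at $a = -3/4$.

```python
import numpy as np
from scipy.special import gammaln

t, n1, n2 = 1.634, 10, 11              # calcium data: pooled t statistic
v = n1 + n2 - 2                        # 19 degrees of freedom
n_delta = 1.0 / (1.0 / n1 + 1.0 / n2)  # effective sample size

def gbf(sigma_a2):
    c = 1.0 + n_delta * sigma_a2
    return ((1 + t**2 / v) / (1 + t**2 / (v * c))) ** ((v + 1) / 2.0) / np.sqrt(c)

def pbf(a):
    logc = (gammaln(v / 2.0) + gammaln(a + 1.5)
            - gammaln((v + 1) / 2.0) - gammaln(a + 1.0))
    return np.exp(logc) * (1.0 + t**2 / v) ** ((v - 2.0 * a - 2.0) / 2.0)

def posterior(bf, pi0=0.5):
    """P(H1 | Y) with prior probability pi0 on H0."""
    return 1.0 / (1.0 + (pi0 / (1.0 - pi0)) / bf)
```

With equal prior probabilities, `posterior(pbf(-0.75))` reproduces the reported $P(H_1 \mid \bfY) = 0.273$.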
\begin{table}[!htbp] \centering \begin{tabular}{c*{6}{c}} \hline $a$ & $-9/10$ & $-4/5$ & $-3/4$ & $-7/10$ & $-3/5$ & $-1/2$ \\ \hline PBF$[H_1 : H_0]$ & 0.177 & 0.316 & 0.375 & 0.429 & 0.534 & 0.606 \\ $P(H_1 \mid\bfY)$ & 0.150 & 0.240 & 0.273 & 0.300 & 0.344 & 0.377 \\ \hline \end{tabular} \caption{Numerical summaries based on the PBF with different choice of $a$.} \label{table:EX2} \end{table} \section{Concluding remarks} \label{section:05} In this paper, we propose an explicit closed-form Bayes factor for testing the difference between two means from two separate groups of subjects. The proposed approach enjoys several appealing properties. It relies on data only through the classical $t$-statistic and can thus be easily calculated using a simple calculator. It avoids several undesirable properties encountered by the approach due to \cite{Gone:John:Lu:West:2005}. More importantly, it can be easily taught in elementary statistics with an emphasis on Bayesian thinking. We hope that the results of this paper will not only facilitate an intuitive understanding of the relationship between frequentist and Bayesian ideas, but also shed some light on the importance of hyper-prior specifications to students, educators, and researchers. \section{Appendix} \noindent{\bf Derivation of equation (\ref{BFequation}):} When we consider the Pearson type VI distribution with $\kappa = n_\delta$, the Bayes factor in (\ref{bf:g01}) can be expressed as \begin{align*} \mathrm{PBF}[H_1 : H_0] &= \frac{n_\delta}{B(a + 1, b + 1)} \int_0^\infty \biggl[\frac{1+t^2/v}{1+t^2/\bigr(v(1+n_\delta\sigma_a^2)\bigr)}\biggr]^{(v+1)/2}(n_\delta \sigma_a^2)^b(1+n_\delta\sigma_a^2)^{-a-b-5/2}\,d\sigma_a^2. 
\end{align*} With the transformation $\tau = n_\delta\sigma_a^2$ and $b = (v+1)/2 - a - 5/2$, it follows \begin{align*} \mathrm{PBF}[H_1 : H_0] =&\frac{1}{B(a + 1, b + 1)}\int_0^\infty \biggl[\frac{1+t^2/v}{1+t^2/\bigr(v(1+\tau)\bigr)}\biggr]^{(v+1)/2}\tau^b(1+\tau)^{-a-b-5/2}\,d\tau\\[3pt] =&\frac{(1 + t^2/v)^{(v+1)/2}}{B(a + 1, b + 1)}\int_0^\infty \biggl[1 + \frac{t^2}{v}\frac{1}{1 +\tau}\biggr]^{-(v+1)/2}\tau^b(1+\tau)^{-a-b-5/2}\,d\tau \\[3pt] =&\frac{(1 + t^2/v)^{(v+1)/2}}{B(a + 1, b + 1)}\int_0^\infty \biggl[1 + \tau + \frac{t^2}{v}\biggr]^{-(v+1)/2}\tau^b(1+\tau)^{(v+1)/2-a-b-5/2}\,d\tau \\[3pt] =&\frac{(1 + t^2/v)^{(v+1)/2}}{B(a + 1, b + 1)}\int_0^\infty \biggl[1 + \tau + \frac{t^2}{v}\biggr]^{-(v+1)/2}\tau^b\,d\tau ~\mathrm{since}~b = (v+1)/2 - a - 5/2\\[3pt] =&\frac{1}{B(a + 1, b + 1)}\int_0^\infty \biggl[1 + \frac{\tau}{1 + t^2/v}\biggr]^{-(v+1)/2}\tau^b\,d\tau. \end{align*} With the transformation $x = \tau/(1 + t^2/v)$, it follows \begin{align*} \mathrm{PBF}[H_1 : H_0] = &\frac{(1 + t^2/v)^{b+1}}{B(a + 1, b + 1)}\int_0^\infty (1 + x)^{-(v+1)/2}x^b\,dx\\[3pt] = &\frac{B(b + 1, (v+1)/2-b-1)}{B(a + 1, b + 1)}\biggl(1 + \frac{t^2}{v}\biggr)^{b+1}\\[3pt] = &\frac{\Gamma\bigl(v/2\bigr)\Gamma\bigl(a + 3/2\bigr)}{\Gamma\bigl((v + 1)/2\bigl)\Gamma(a + 1)}\biggl(1 + \frac{t^2}{v}\biggr)^{(v-2a-2)/2}, \end{align*} because of $b = (v+1)/2 - a - 5/2$. This completes the proof. \bibliographystyle{annals}
\section{INTRODUCTION} \IEEEPARstart{W}{ireless} network infrastructures in the fifth-generation (5G) and beyond technologies will experience an exponential growth of interconnected devices, base stations (BSs), or access points (APs), making the network highly dense and heterogeneous \cite{Xiaohu20145G}. Wireless backhaul is an attractive option for the dense inter-connected nodes since it enables the communication between the AP and several transmitters \cite{Osseiran20145G}. In practical systems, the incorporation of wireless backhaul is an easy, flexible, and cost-efficient alternative to wired backhaul. However, wireless backhaul connections are unreliable as the backhaul links have a certain probability of failure \cite{OnurBackhaul2022}. Physical layer security (PLS) has recently emerged as a means of securing wireless networks, which are constantly at risk from eavesdropping attacks \cite{wyner_wiretap},\cite{Gamal_On_the_Sec_Cap_Fad_Ch}. Transmitter selection in a multi-source network can improve the PLS by increasing the diversity gain and hence the secrecy of the system without using multiple antennas \cite{bletsas2006, Kundu17_sopeaves,chinmoy_GC16, Kundu15_dual}. Based on the availability of the channel state information (CSI), transmitter selection can be classified into two categories: optimal transmitter selection (OTS) and sub-optimal transmitter selection (STS). An OTS scheme requires global CSI, whereas an STS scheme does not \cite{chinmoy_GC16, Vu2017Secure, Kundu15_dual}. Hence, STS schemes can reduce the complexity and power consumption of the network, thereby extending its lifetime. The STS scheme performed by maximizing the destination channel rate without considering the eavesdropping rate has been referred to as the traditional transmitter selection (TTS) scheme in the literature \cite{kim2016secrecy, Vu2017Secure}.
The authors in \cite{Kundu17_sopeaves} proposed an STS scheme that selects the transmitter corresponding to the worst eavesdropping link, without considering the destination rate, to improve the secrecy of the system. We refer to this scheme as the minimal eavesdropping selection (MIN-ES) scheme. The reliability of the backhaul links affects the system performance; therefore, it is important to explore the effect of backhaul uncertainty on the secrecy performance of transmitter selection schemes, which has been extensively studied in \cite{Yincheng,TAKhan2015Coperative,kimBackhaulSelection, Debbah2012_het_backhaul, Kundu_TVT19, wafaiVTC, kotwal2021transmitter,chinmoy_letter2021,kotwalTVT_backhaul}. Although the knowledge of backhaul link activity is crucial, it might not always be available for transmitter selection \cite{Kundu_TVT19}. The unavailability of backhaul link activity knowledge, which we denote as the backhaul link activity knowledge unavailable (BKU) case, can result in selecting a transmitter with an inactive backhaul link. When the backhaul link activity knowledge is available, which we denote as the backhaul link activity knowledge available (BKA) case, the selection can be made from the set of transmitters with active backhaul links \cite{wafaiVTC,kotwal2021transmitter,chinmoy_letter2021,kotwalTVT_backhaul}. The knowledge of active backhaul links improves the secrecy performance. In \cite{Kundu_TVT19}, the authors studied the secrecy of the cognitive radio (CR) network in the presence of unreliable backhaul links in the BKU case; however, the BKA case was not considered for the transmitter selection. The non-zero secrecy rate (NZSR), secrecy outage probability (SOP), and ergodic secrecy rate (ESR) were evaluated for a number of transmitter selection schemes, such as MIN-ES, TTS, OTS, and the transmitter selection scheme with minimal interference.
The authors in \cite{wafaiVTC} then extended the SOP analysis in the same system for the TTS and OTS schemes to the BKA case. In \cite{kotwal2021transmitter}, the SOP for the TTS and OTS schemes in a frequency selective fading channel was evaluated for the BKA case; however, the ESR was not evaluated for the selection schemes. The authors in \cite{kotwalTVT} evaluated the ESR of the OTS scheme in the same channel model, but they did not show the effect of backhaul links. In all these papers \cite{Kundu_TVT19,wafaiVTC,kotwal2021transmitter,kotwalTVT}, the secrecy performance was evaluated for a single eavesdropper. In the case of multiple eavesdroppers, the ESR of the OTS scheme was evaluated in \cite{chinmoy_letter2021} for the BKA case under the frequency-flat fading channel condition. However, the SOP was not evaluated, and the MIN-ES and TTS schemes were not considered. In \cite{kotwalTVT_backhaul}, the authors evaluated the SOP and ESR for the TTS and OTS schemes including multiple eavesdroppers in the BKA case under the frequency-selective fading channel condition. However, the MIN-ES scheme was not studied. Although the authors in \cite{chinmoy_letter2021} and \cite{kotwalTVT_backhaul} considered multiple eavesdroppers, a straightforward non-colluding case was considered, where the eavesdropper with the maximum instantaneous SNR determines the secrecy performance. Moreover, in all the above-mentioned papers \cite{wafaiVTC,kotwal2021transmitter,kotwalTVT,chinmoy_letter2021,kotwalTVT_backhaul}, the MIN-ES scheme was not considered with wireless backhaul links. To the best of the authors' knowledge, transmitter selection schemes with colluding eavesdroppers performing maximal ratio combining (MRC) have not been explored for either of the backhaul link activity knowledge cases (BKU and BKA).
Although the secrecy performance of networks with wireless backhaul links has been widely studied in the above-mentioned works, an effort to provide a generalized mathematical framework that incorporates both backhaul uncertainty cases (BKU and BKA) irrespective of the transmitter selection scheme, leading to all the secrecy performance metrics (NZSR, SOP, and ESR), is missing from the literature. Motivated by the above discussion, we consider a wireless backhaul-aided network with multiple transmitters and a single destination in the presence of multiple colluding eavesdroppers. Two cases of backhaul link activity knowledge, the BKU and BKA cases, are considered. To enhance the system's secrecy performance, we investigate the MIN-ES, TTS, and OTS schemes under the BKU and BKA cases. The MIN-ES scheme has not been studied in \cite{chinmoy_letter2021,wafaiVTC,kotwal2021transmitter,kotwalTVT_backhaul} for secrecy with wireless backhaul. We study the case of colluding eavesdroppers where the eavesdroppers utilize the MRC technique, as opposed to \cite{chinmoy_letter2021,kotwalTVT_backhaul}. Moreover, in \cite{chinmoy_letter2021}, the authors did not consider the SOP analysis or the BKU case, both of which we incorporate. The authors in \cite{kotwalTVT_backhaul} considered frequency-selective channels, whereas we assume frequency-flat fading channels; as a result, the analysis is entirely different. We also present a generalized approach that incorporates both backhaul uncertainty cases (BKU and BKA) into the performance analysis through a mixture distribution method. Our secrecy performance analysis approach, which finds the distribution of the ratio of the destination channel and eavesdropping channel SNRs, uniformly provides the NZSR, SOP, and ESR performances.
The main contributions of this paper are listed as follows: \begin{itemize} \item We evaluate the exact closed-form expressions of the NZSR, SOP, and ESR for the two sub-optimal transmitter selection schemes (MIN-ES and TTS) and the optimal transmitter selection (OTS) scheme in both the BKU and BKA cases. \item We consider multiple colluding eavesdroppers performing MRC under independent but non-identical (INID) channels. \item To obtain greater insights, simplified asymptotic expressions for the NZSR, SOP, and ESR under both the BKU and BKA cases and the diversity order for the SOP under perfect backhaul conditions are also included. \item A general and unified method is provided to incorporate backhaul uncertainty using a mixture distribution and the ratio of destination and eavesdropper channel SNRs, which is then directly used to find the SOP, NZSR, and ESR, along with their asymptotes. \end{itemize} The rest of the paper is organized as follows: the system model is described in Section \ref{system model}, followed by the distribution of $\Gamma_{\text{R}}^{(n^*)}$ for each selection scheme in Section \ref{sec_distribution_snr_ratio}. The ESR is discussed in Section \ref{section_ergodic_secrecy_rate}, and the asymptotic SOP analysis for unreliable backhaul links is provided in Section \ref{sec_asymptotic_analysis_sop_unreliable}. Section \ref{section_diversity_order} discusses the diversity order for perfect backhaul links, and Section \ref{section_asymptotic_ESR} provides the asymptotic ESR. Finally, the numerical results with discussions are presented in Section \ref{section_numerical_results}, followed by conclusions in Section \ref{conclusions}. \textit{Notation:} The probability of occurrence of an event and the expectation of a random variable (RV) $X$ are denoted by $\mathbb{P}[\cdot]$ and $\mathbb{E}_X[\cdot]$, respectively.
The channel coefficient of the link between nodes $\text{A}$ and $\text{B}$ is denoted by $h^{(n)}_{\text{AB}}$; the end-to-end signal-to-noise ratio (SNR) that incorporates the backhaul link reliability factor at $\text{B}$ for the link $\text{A-B}$ is denoted by $\hat{\Gamma}_{\text{AB}}$, while the SNR without the backhaul link reliability factor is denoted by ${\Gamma}_{\text{AB}}$. The cumulative distribution function (CDF) of an RV $X$ is denoted by $F_{X}(\cdot)$; the corresponding probability density function (PDF) is denoted by $f_{X}(\cdot)$. \section{System and channel model}\label{system model} We consider a network where multiple transmitters are connected to an access point AP via wireless backhaul links, as presented in Fig. \ref{fig:SM2}. The network consists of $N$ transmitters $\text{S}_n$, where $n\in\{1, \ldots, N\}$, serving a user $\text{D}$ in the presence of $K$ eavesdroppers $\text{E}_k$, where $k\in\{1, \ldots, K\}$. Each node in the system is assumed to be equipped with a single antenna. Wireless backhaul links, each with a certain probability of link failure, are present between each transmitter and the AP. The backhaul link reliability between the AP and $\text{S}_n$ for each $n\in\{1,\ldots, N\}$ is modeled by an independent Bernoulli RV $\mathbb{I}_n \in\{0,1\}$ with backhaul link reliability factor $\mathbb{P}[\mathbb{I}_n=1]=s$ and backhaul link failure probability $\mathbb{P}[\mathbb{I}_n=0]=(1-s)$, where $0<s\le 1$. \begin{figure} \centering \includegraphics[width=2.7in]{SM9.eps} \caption{ A multi-transmitter network with unreliable backhaul connections. } \vspace{-0.5cm} \label{fig:SM2} \end{figure} The channel gains $h^{(n)}_{\text{SD}}$ of the $\text{S}_n$-$\text{D}$ links are assumed to be independent and identically distributed (i.i.d.), exhibiting frequency-flat Rayleigh fading.
Since each link is Rayleigh faded, its power gain $|h^{(n)}_{\text{SD}}|^2$ is exponentially distributed. We assume that $|h^{(n)}_{\text{SD}}|^2$ for each $n$ has the power gain parameter $\lambda_{\text{D}}$. Without loss of generality, assuming identical additive white Gaussian noise (AWGN) at each receiver with zero mean and unit variance, the instantaneous SNR of the $\text{S}_n$-$\text{D}$ link is denoted as $\Gamma^{(n)}_{\text{SD}}=|h^{(n)}_{\text{SD}}|^2$. The corresponding PDF and CDF of $\Gamma^{(n)}_{\text{SD}}$ are expressed as \begin{align}\label{pdf_SD} f_{\Gamma^{(n)}_{\text{SD}}}(x)&=\lambda_{\text{D}}\exp(-\lambda_\text{D}x),\\ \label{cdf_SD} F_{\Gamma^{(n)}_{\text{SD}}}(x)&=1-\exp(-\lambda_\text{D}x), \end{align} respectively. All $\text{S}_n$-$\text{E}_k$ links for each $k\in\{1,\ldots, K\}$ and a particular $n\in\{1,\ldots, N\}$ are assumed to be independent and non-identically distributed (i.n.i.d.) Rayleigh fading channels with power gain parameter $\lambda_{\text{E}}^{(k)}$. For a given $k\in\{1,\ldots, K\}$, all $\text{S}_n$-$\text{E}_k$ links for each $n\in\{1,\ldots, N\}$ are assumed to be i.i.d. All the eavesdroppers corresponding to a transmitter $\text{S}_n$ collude with each other by performing the MRC technique to maximize the eavesdropping SNR, as opposed to \cite{chinmoy_letter2021, kotwalTVT_backhaul}, where the worst eavesdropping SNR determines the secrecy performance.
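As a quick numerical cross-check of the exponential power-gain model above (not part of the derivation), the Python sketch below samples a Rayleigh-faded coefficient as a circularly symmetric complex Gaussian and compares the empirical CDF of its power gain with the closed form in (\ref{cdf_SD}); the value of $\lambda_{\text{D}}$ and the evaluation point are illustrative assumptions.

```python
import math, random

# Illustrative parameter (assumed, not from the paper's numerical section).
lam_d = 1.2
random.seed(1)

# A Rayleigh-faded amplitude with E[|h|^2] = 1/lam_d is the magnitude of a
# complex Gaussian; its squared magnitude is then Exp(lam_d) distributed.
n_samples = 200_000
gains = []
for _ in range(n_samples):
    re = random.gauss(0.0, math.sqrt(1.0 / (2.0 * lam_d)))
    im = random.gauss(0.0, math.sqrt(1.0 / (2.0 * lam_d)))
    gains.append(re * re + im * im)

def cdf_sd(x, lam=lam_d):
    """Closed-form CDF of the S_n-D link SNR, cf. eq. (cdf_SD)."""
    return 1.0 - math.exp(-lam * x)

x0 = 0.8
empirical = sum(g <= x0 for g in gains) / n_samples
analytical = cdf_sd(x0)
```

With $2\times 10^5$ samples the empirical and analytical CDFs agree to within Monte Carlo error.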
Therefore, the PDF and CDF of the eavesdropping SNR $\Gamma^{(n)}_{\text{SE}}$ corresponding to $\text{S}_n$ are written as \cite{MRC_Akkouchi} \begin{align} \label{PDF_SE} &f_{\Gamma^{(n)}_{\text{SE}}}(x)=\sum_{k=1}^{K}\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} e^{-\lambda_{\text{E}}^{(k)} x}, \\ \label{CDF_SE} &F_{\Gamma^{(n)}_{\text{SE}}}(x)=1-\sum_{k=1}^{K}\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K (\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} e^{-\lambda_{\text{E}}^{(k)} x}, \end{align} respectively, when $\lambda_{\text{E}}^{(j)}\ne\lambda_{\text{E}}^{(k)}$ (as we assume i.n.i.d. eavesdropping links, this condition is maintained). We evaluate the non-zero secrecy rate (NZSR), secrecy outage probability (SOP), and ergodic secrecy rate (ESR) for each transmitter selection scheme in this work, taking the backhaul link reliability factor $s$ into consideration. Towards this, we first define the achievable secrecy rate corresponding to $\text{S}_n$ in bits per channel use when the corresponding backhaul link is active (i.e., $\mathbb{I}_n=1$) as \begin{align} \label{secrecy_capacity} C^{(n)}_{S}=\max\lb\{\log_2\Big(\Gamma_{\text{R}}^{(n)}\Big),0\rb\}, \end{align} where ${\Gamma_{\text{R}}^{(n)}}=\frac{1+{\Gamma}_{\text{SD}}^{(n)}}{1+{\Gamma}_{\text{SE}}^{(n)}}$. The distribution of ${\Gamma_{\text{R}}^{(n)}}$ is obtained as \begin{align}\label{gamma_r} F_{\Gamma_{\text{R}}^{(n)}}(x)&=\mathbb{P}\Bigg[\frac{1+{\Gamma}_{\text{SD}}^{(n)}}{1+{\Gamma}_{\text{SE}}^{(n)}}\le x\Bigg]\nn\\ &=\int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(x(y+1)-1) f_{{\Gamma}_{\text{SE}}^{(n)}}(y) dy. \end{align} $F_{\Gamma_{\text{R}}^{(n)}}(x)$ is then directly used to find the NZSR, SOP, and ESR.
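The hypoexponential form of (\ref{CDF_SE}), i.e., the CDF of a sum of i.n.i.d. exponential branch SNRs under MRC, can be verified by Monte Carlo. In the sketch below the number of eavesdroppers and their (distinct) rates are illustrative assumptions.

```python
import math, random

# Illustrative i.n.i.d. eavesdropper rates (assumed, distinct as required
# by the partial-fraction form): lambda_E^(k) for K = 3 colluding nodes.
lam_e = [0.7, 1.3, 2.1]
K = len(lam_e)
random.seed(2)

def cdf_mrc(x):
    """Closed-form CDF of the MRC-combined eavesdropping SNR, cf. (CDF_SE)."""
    total = 0.0
    prod_all = math.prod(lam_e)
    for k in range(K):
        denom = lam_e[k] * math.prod(
            lam_e[j] - lam_e[k] for j in range(K) if j != k)
        total += (prod_all / denom) * math.exp(-lam_e[k] * x)
    return 1.0 - total

# Monte Carlo: MRC of K independent exponential branch SNRs is their sum.
n_samples = 200_000
x0 = 2.0
hits = sum(
    sum(random.expovariate(l) for l in lam_e) <= x0
    for _ in range(n_samples))
empirical = hits / n_samples
analytical = cdf_mrc(x0)
```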
The NZSR is defined as the probability that the achievable secrecy rate is greater than zero, and for $\text{S}_n$ when the corresponding backhaul link is active, it is evaluated as \cite{yang2013physical} \begin{align} \label{non_zero_0} \mathcal{P}^{(n)}_{\text{NZ}}&=\mathbb P [C^{(n)}_{S}>0] =1- \int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(x) f_{{\Gamma}^{(n)}_{\text{SE}}}(x)dx. \end{align} The SOP for $\text{S}_n$ when the corresponding backhaul link is active is defined as the probability that the achievable secrecy rate $C_{S}^{(n)}$ falls below a threshold secrecy rate $R_{th}$ and is evaluated as \cite{wang2015security,yang2013transmit} \begin{align}\label{secrecy OP equation} \mathcal{P}_{\text{out}}^{(n)} (R_{th}) &= \mathbb P\Big[C_{S}^{(n)} \le R_{th}\Big] \nn\\ &= \int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(\rho(x+1)-1) f_{{\Gamma}_{\text{SE}}^{(n)}}(x) dx, \end{align} where $\rho = 2^{R_{th}}$. The ESR for $\text{S}_n$ when the corresponding backhaul link is active is defined as \cite{chinmoy_letter2021} \begin{align}\label{ergodic_secrecy_rate} \mathcal{C}_{\text{erg}}^{(n)}&=\frac{1}{\ln(2)}\int_{1}^{\infty}\log(x)f_{\Gamma_{\text{R}}^{(n)}}(x)dx \nn \\ & =\frac{1}{\ln(2)}\int_1^\infty\Bigg(\frac{1-F_{\Gamma_{\text{R}}^{(n)}}(x)}{x}\Bigg)dx. \end{align} It is to be noted from (\ref{gamma_r}) and (\ref{secrecy OP equation}) that the SOP can be obtained from the CDF of $\Gamma_{\text{R}}^{(n)}$ as \begin{align} \label{eq_new_SOP_from_GammaR} \mathcal{P}_{\text{out}}^{(n)}(R_{th})= F_{\Gamma_{\text{R}}^{(n)}}(\rho), \end{align} and from (\ref{non_zero_0}), (\ref{secrecy OP equation}), and (\ref{eq_new_SOP_from_GammaR}) that the NZSR can also be obtained from the CDF of $\Gamma_{\text{R}}^{(n)}$ or the SOP as \begin{align} \label{eq_NZSR_from_SOP} \mathcal{P}^{(n)}_{\text{NZ}}=1-\mathcal{P}_{\text{out}}^{(n)} (0) =1-F_{\Gamma_{\text{R}}^{(n)}}(1).
\end{align} To incorporate the backhaul link reliability factor in the end-to-end SNR distribution of the links from the AP to $\text{D}$ or $\text{E}$ via $\text{S}_n$, we utilize a mixture distribution of the Bernoulli and exponential RVs. A convex combination of the Bernoulli and exponential distributions is used to model the end-to-end $\text{AP}$-$\text{S}_n$-$\text{X}$ link SNR ${\hat{\Gamma}^{(n)}_{\text{SX}}}$, for $\text{X}\in \{\text{D,E}\}$, whose PDF is expressed as \cite{Kundu_TVT19} \begin{align}\label{mixture_distribution} f_{\hat{\Gamma}^{(n)}_{\text{SX}}}(x)=(1-s)\delta(x)+s f_{\Gamma^{(n)}_{\text{SX}}}(x), \end{align} where $\delta(x)$ is the Dirac delta function and ${\Gamma^{(n)}_{\text{SX}}}$ is the SNR of the $\text{S}_n$-$\text{X}$ link when the corresponding backhaul link is active. Here we note that the factor $(1-s)$ associated with $\delta(x)$ represents the probability that the backhaul is inactive, while $s=1$ corresponds to an always-active backhaul link. It is noted from (\ref{mixture_distribution}) that the application of the mixture distribution generalizes the secrecy performance analysis for the transmitter selection schemes under both the perfect backhaul (i.e., $s=1$) and unreliable backhaul (i.e., $s<1$) conditions. As the backhaul link is common to any end-to-end link leading to $\text{D}$ and $\text{E}$, the backhaul link reliability factor has to be incorporated in either $\hat{\Gamma}^{(n)}_{\text{SD}}$ or $\hat{\Gamma}^{(n)}_{\text{SE}}$ during the performance analysis (NZSR, SOP, and ESR) of the transmitter selection schemes, but not in both, as depicted in (\ref{mixture_distribution}). This is because, for a transmitter $n^*$ selected on the basis of $\hat{\Gamma}^{(n^*)}_{\text{SX}}$, the corresponding $\text{S}_{n^*}$-$\text{X}$ link must be considered; otherwise, $\hat{\Gamma}^{(n^*)}_{\text{SD}}$ and $\hat{\Gamma}^{(n^*)}_{\text{SE}}$ would become independent.
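The mixture model in (\ref{mixture_distribution}) can be sampled directly: with probability $(1-s)$ the backhaul is down and the end-to-end SNR is the Dirac mass at zero, otherwise it is the exponential link SNR. A minimal sampling sketch, with assumed parameter values, checks the resulting mixture mean $s/\lambda$ and the zero mass.

```python
import random

# Illustrative parameters (assumed, not from the paper).
s = 0.9        # backhaul link reliability factor
lam = 1.5      # rate of the active-link exponential SNR
random.seed(3)

def sample_end_to_end_snr():
    """Draw one end-to-end SNR hat{Gamma}^{(n)}_{SX} from the mixture."""
    if random.random() >= s:      # Bernoulli failure with probability (1-s)
        return 0.0                # the Dirac mass at the origin
    return random.expovariate(lam)

n_samples = 200_000
draws = [sample_end_to_end_snr() for _ in range(n_samples)]

# The mixture mean is s * (1/lam); the Dirac mass contributes nothing.
mean_analytical = s / lam
mean_empirical = sum(draws) / n_samples
frac_zero = sum(d == 0.0 for d in draws) / n_samples
```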
In the following sections, we evaluate the NZSR, SOP, and ESR for each transmitter selection scheme under the two backhaul link activity knowledge cases, BKU and BKA, with the help of the distribution of ${\Gamma_{\text{R}}^{(n^*)}}$ for a given selected transmitter. The distribution of ${\Gamma_{\text{R}}^{(n^*)}}$ changes according to the transmitter selection scheme and the backhaul link activity knowledge, and is obtained in the next section. Further, an asymptotic analysis is also carried out in each case to obtain better insights. The distribution of ${\Gamma_{\text{R}}^{(n^*)}}$ allows us to obtain any secrecy performance metric (NZSR, SOP, ESR) of both the sub-optimal (MIN-ES and TTS) and OTS schemes in a unified manner. It is also noted here that the evaluation of the ESR for the OTS scheme would not have been possible otherwise. \section{Distribution of $\Gamma_{\text{R}}^{(n^*)}$}\label{sec_distribution_snr_ratio} In this section, the CDF of $\Gamma_{\text{R}}^{(n^*)}$ for each transmitter selection scheme is evaluated for both the BKU and BKA cases. This is then used to find the NZSR, SOP, and ESR of those transmitter selection schemes. To evaluate the distribution of $\Gamma_{\text{R}}^{(n^*)}$ in the BKU case, the SNR distribution corresponding to the selected transmitter is evaluated first and then the backhaul reliability factor is included in the SNR distribution. This is due to the fact that it is not known which backhaul links are active. However, in the BKA case, the backhaul reliability factor is incorporated in the individual link SNRs first and then the SNR distribution of the selected transmitter is evaluated. This will become clear in the following subsections.
\subsection{Minimal eavesdropping selection in the BKU case (MIN-ES-BKU)}\label{sub_sec_min_es_bku} In this section, the transmitter with the minimum channel power gain among the $\text{S}_n$-$\text{E}$ links, where $n\in\{1,\ldots,N\}$, is selected without considering which backhaul links are active, as no knowledge of backhaul link activity is available. Hence, the eavesdropping SNR corresponding to the selected transmitter, if the backhaul link is active, becomes \begin{align}\label{MIN-ES_k} \Gamma_{\text{SE}}^{(n^*)} = \min_{n\in \{1,2\ldots,N\}} \{\Gamma_{\text{SE}}^{(n)}\}. \end{align} The CDF of $\Gamma_{\text{R}}^{(n^*)}$ under unreliable backhaul conditions, including the backhaul link reliability factor, is then evaluated following (\ref{gamma_r}) as \begin{align}\label{gamma_r_min_without_0} F_{\Gamma_{\text{R}}^{(n^*)}}(y) &= \int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(y(x+1)-1) f_ {\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x) dx. \end{align} As the backhaul link is common to the destination and eavesdropping channels, the backhaul link reliability factor is considered in $\hat{\Gamma}_{\text{SE}}^{(n^*)}$ but not in the destination SNR, i.e., $\hat{\Gamma}_{\text{SD}}^{(n)} =\Gamma_{\text{SD}}^{(n)}$, while evaluating (\ref{gamma_r_min_without_0}). Following the mixture distribution method shown in (\ref{mixture_distribution}), $f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is evaluated as \begin{align} \label{PDF_SE_MIN-ES_without} f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)&=(1-s)\delta(x)+sf_{{\Gamma}_{\text{SE}}^{(n^*)}}(x), \end{align} where $f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is provided in (\ref{PDF_SE_MIN-ES}) in \textit{Lemma \ref{lemma1}}.
Finally, using (\ref{PDF_SE_MIN-ES}) and $F_{\Gamma_{\text{SD}}^{(n)}}(x)$ from (\ref{cdf_SD}), $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ is evaluated as \begin{align} \label{eq_integral_terms} &F_{\Gamma_{\text{R}}^{(n^*)}}(y) =(1-s)\int_{0}^{\infty}\delta(x)dx\nn\\ &+s\int_{0}^{\infty}\Big(1-e^{-\lambda_{\text{D}}(y(x+1)-1)}\Big)\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\tilde{\lambda}_{\text{E}}^{(k)} e^{-\tilde{\lambda}_{\text{E}}^{(k)}x}dx \\ \label{gamma_r_min_without} &=1-\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{s\tilde{\lambda}_{\text{E}}^{(k)}e^{-\lambda_{\text{D}} (y-1)}}{(y\lambda_{\text{D}} +\tilde{\lambda}_{\text{E}}^{(k)})}, \end{align} where $\mathcal{M}^{(N)}$, $\binom{N}{i_1,\ldots,i_K}$, $i_{k}$ for each $k\in\{1 ,\ldots, K\}$, and $\tilde{\lambda}_{\text{E}}^{(k)}$ are defined in Lemma \ref{lemma1}. The first term in (\ref{eq_integral_terms}) represents the case where the backhaul link is inactive; therefore, $F_{{\Gamma}^{(n)}_{\text{SD}}}(x)=1$ for any value of $x$. In contrast, the second term represents the case where the backhaul link is active; hence, $F_{{\Gamma}^{(n)}_{\text{SD}}}(x)$ in the second term follows the exponential CDF in (\ref{cdf_SD}). \textit{Remark}: The NZSR and SOP of the system are evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_min_without}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively.
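The MIN-ES-BKU construction above can be cross-checked by simulation: draw the minimum MRC eavesdropping SNR over the $N$ transmitters, flip the Bernoulli backhaul state of the selected one, and compare the empirical CDF of the ratio against the closed form in (\ref{gamma_r_min_without}) (which, evaluated at $\rho=2^{R_{th}}$, is the SOP via (\ref{eq_new_SOP_from_GammaR})). All parameter values are illustrative assumptions.

```python
import math, random
from itertools import product

# Illustrative parameters (assumed, not from the paper).
N, s, lam_d = 3, 0.85, 1.0
lam_e = [0.8, 1.6]            # i.n.i.d. eavesdropper rates, K = 2
K = len(lam_e)
random.seed(4)

def coeff(k):
    """Partial-fraction weight of e^{-lam_e[k] x} in the MRC survival CDF."""
    return math.prod(lam_e) / (lam_e[k] * math.prod(
        lam_e[j] - lam_e[k] for j in range(K) if j != k))

def cdf_ratio(y):
    """Closed-form F_{Gamma_R^{(n*)}}(y) for MIN-ES-BKU, y >= 1."""
    total = 0.0
    # Enumerate the multinomial index set M^(N): i_1 + ... + i_K = N.
    for i_vec in product(range(N + 1), repeat=K):
        if sum(i_vec) != N:
            continue
        multinom = math.factorial(N) / math.prod(
            math.factorial(i) for i in i_vec)
        weight = math.prod(coeff(k) ** i_vec[k] for k in range(K))
        lam_tilde = sum(i_vec[k] * lam_e[k] for k in range(K))
        total += (multinom * weight * s * lam_tilde
                  * math.exp(-lam_d * (y - 1)) / (y * lam_d + lam_tilde))
    return 1.0 - total

def sample_ratio():
    """One draw of Gamma_R^{(n*)} under MIN-ES-BKU."""
    # Minimum MRC eavesdropping SNR over the N transmitters.
    g_se = min(sum(random.expovariate(l) for l in lam_e) for _ in range(N))
    if random.random() >= s:          # selected backhaul turned out inactive
        return 1.0                    # both SNRs are 0, so the ratio is 1
    return (1.0 + random.expovariate(lam_d)) / (1.0 + g_se)

y0 = 2.0                              # Rth = 1 bit/channel use, rho = 2
n_samples = 200_000
empirical = sum(sample_ratio() <= y0 for _ in range(n_samples)) / n_samples
analytical = cdf_ratio(y0)
```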
\begin{lemma}\label{lemma1} The PDF of ${\Gamma}_{\text{SE}}^{(n^*)}$ is given as \begin{align} \label{PDF_SE_MIN-ES} &f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)=\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\tilde{\lambda}_{\text{E}}^{(k)} e^{-\tilde{\lambda}_{\text{E}}^{(k)}x}, \end{align} where $\mathcal{M}^{(\omega)}$ is the set of integer vectors $[i_{1},\ldots,i_{K}]$ containing $K$ elements such that $i_{k}\in\{0 ,\ldots, \omega\}$ for each $k\in\{1 ,\ldots, K\}$ and $\sum_{k=1}^{K}i_k=\omega$ where $\omega$ is any whole number, i.e., $\omega\in\{0,1,2,\ldots\}$, $\binom{N}{i_1,\ldots,i_K}=\frac{N!}{i_1!i_2!\cdots i_K!}$ is a multinomial coefficient, and $\tilde{\lambda}_{\text{E}}^{(k)}=\sum_{k=1}^{K}i_k \lambda_{\text{E}}^{(k)} $. \end{lemma} \begin{proof} The PDF $f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is derived from the CDF of $\Gamma_{\text{SE}}^{(n^*)}$.
The CDF $F_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is derived using the CDF of $\Gamma_{\text{SE}}^{(n)}$ in (\ref{CDF_SE}) and utilizing the multinomial theorem as \begin{align}\label{cdf_se_minimunm} &F_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)=\mathbb{P}\Big[\min_{{n \in \{1,2,\ldots, N\}}}\{\Gamma_{\text{SE}}^{(n)}\}\le x\Big]\nn\\ &=1-\Big(1-\mathbb{P}\Big[\Gamma_{\text{SE}}^{(n)}\le x\Big]\Big)^N\nn\\ &=1-\Bigg(\sum_{k=1}^{K}\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-\lambda_{\text{E}}^{(k)} x}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{N} \nn \\ &=1-\sum_{{\mathbf{i}\in\mathcal{M}^{(N)}}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)e^{-\tilde{\lambda}_{\text{E}}^{(k)}x}. \end{align} By differentiating (\ref{cdf_se_minimunm}), the PDF $f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is obtained as shown in (\ref{PDF_SE_MIN-ES}). \end{proof} \subsection{Minimal eavesdropping selection in the BKA case (MIN-ES-BKA)}\label{sub_sec_min_es_bka} In this section, the transmitter with the minimum channel power gain among the $\text{S}_n$-$\text{E}$ links is selected among those whose backhaul links are active. Assuming $\mathcal{S} \subseteq \{1,2,\ldots, N\}$ is the subset of transmitters for which the backhaul links are active, the end-to-end eavesdropping SNR including the backhaul link reliability factor is \begin{align} \label{eq_gain_min_with} \hat{\Gamma}_{\text{SE}}^{(n^* )}=\min_{n\in\mathcal{S}}\{\hat{\Gamma}_{\text{SE}}^{(n)}\}. \end{align} Therefore, $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ is evaluated following (\ref{gamma_r_min_without_0}), where $\hat{\Gamma}_{\text{SE}}^{(n^*)}$ is given in (\ref{eq_gain_min_with}).
To evaluate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$, the distribution of $\hat{\Gamma}_{\text{SE}}^{(n^*)}$ is required as in the previous section and is evaluated in (\ref{PDF_SE_MINES_with}) in \textit{Lemma \ref{lemma2}}. With the help of $f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ from (\ref{PDF_SE_MINES_with}) and following steps similar to those carried out for (\ref{gamma_r_min_without}), $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ is evaluated as \begin{align}\label{gamma_r_min_with} &F_{\Gamma_{\text{R}}^{(n^*)}}(y)= (1-s)^N\int_{0}^{\infty}\delta(x)dx\nn\\ &+\int_{0}^{\infty}\Big(1-e^{-\lambda_{\text{D}}(y(x+1)-1)}\Big)\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}(1-s)^{N-n}\nn\\ &\times s^n\binom{n}{i_1,\ldots,i_K}\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\tilde{\lambda}_{\text{E}}^{(k)}e^{-\tilde{\lambda}_{\text{E}}^{(k)}x}dx \nn \\ &=1-\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(1-s)^{N-n}s^n\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{\tilde{\lambda}_{\text{E}}^{(k)}e^{-\lambda_{\text{D}}(y-1)}}{y\lambda_{\text{D}}+\tilde{\lambda}_{\text{E}}^{(k)}}. \end{align} \textit{Remark}: The NZSR and SOP can be evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_min_with}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively.
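The delicate step in the BKA case is taking the minimum over the active backhaul set only. A Monte Carlo sketch of the CDF construction behind the upcoming lemma (the all-inactive mass $(1-s)^N$ plus the binomial mixture over the active-set size) is given below; the parameters are illustrative assumptions.

```python
import math, random

# Illustrative parameters (assumed, not from the paper).
N, s = 3, 0.7
lam_e = [0.9, 1.8]
K = len(lam_e)
random.seed(5)

def coeff(k):
    """Partial-fraction weight of e^{-lam_e[k] x} in the MRC survival CDF."""
    return math.prod(lam_e) / (lam_e[k] * math.prod(
        lam_e[j] - lam_e[k] for j in range(K) if j != k))

def surv_one(x):
    """P[Gamma_SE^{(n)} > x] for a single MRC eavesdropping SNR."""
    return sum(coeff(k) * math.exp(-lam_e[k] * x) for k in range(K))

def cdf_min_active(x):
    """CDF of hat{Gamma}_SE^{(n*)}: all-down mass plus the binomial mixture."""
    total = (1.0 - s) ** N                      # every backhaul inactive
    for n in range(1, N + 1):
        total += (math.comb(N, n) * (1 - s) ** (N - n) * s ** n
                  * (1.0 - surv_one(x) ** n))
    return total

def sample_min_active():
    """Minimum MRC SNR over the active backhaul set (0 if all are down)."""
    active = [sum(random.expovariate(l) for l in lam_e)
              for _ in range(N) if random.random() < s]
    return min(active) if active else 0.0

x0 = 0.5
n_samples = 200_000
empirical = sum(sample_min_active() <= x0
                for _ in range(n_samples)) / n_samples
analytical = cdf_min_active(x0)
```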
\begin{lemma}\label{lemma2} The PDF of $\hat{\Gamma}_{\text{SE}}^{(n^*)}$ is \begin{align}\label{PDF_SE_MINES_with} &f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)=(1-s)^N\delta(x)+\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}\nn\\ &\times (1-s)^{N-n}s^n\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\tilde{\lambda}_{\text{E}}^{(k)}e^{-\tilde{\lambda}_{\text{E}}^{(k)}x}. \end{align} \end{lemma} \begin{proof} The PDF $f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is derived from the CDF of $\hat{\Gamma}_{\text{SE}}^{(n^*)}$. To find the CDF of $\hat{\Gamma}_{\text{SE}}^{(n^*)}$, we evaluate the CDF under two mutually exclusive events: i) all the backhaul links are inactive, and ii) at least one of the links is active. In the first event, as the probability that all links are inactive is $(1-s)^N$, the CDF in this event is \begin{align}\label{CDF_SE_MIN-ES_with_mix1} F_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)&=\big(\mathbb{P}\left[\mathbb{I}_n=0\right]\big)^N u(x)= (1-s)^N u(x), \end{align} where $u(x)$ is the unit step function. In the second event, the CDF is directly obtained as \begin{align} \label{CDF_SE_MIN-ES_with_mix2} &F_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)=\sum_{n=1}^{N}\binom{N}{n}(1-s)^{N-n}s^n\Big(\mathbb{P}\Big[\min_{n\in\mathcal{S}}\{{\Gamma}_{\text{SE}}^{(n)}\}\le x\Big]\Big) \nn\\ &=\sum_{n=1}^{N}\binom{N}{n}(1-s)^{N-n}s^n\Big(1-\Big(1-\mathbb{P}\Big[{\Gamma}_{\text{SE}}^{(n)}\le x\Big]\Big)^n\Big) \nn\\ &=\sum_{n=1}^{N}\binom{N}{n}(1-s)^{N-n}s^n\nn\\ &\times\Bigg(1-\Bigg(\sum_{k=1}^{K}\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big) e^{-\lambda_{\text{E}}^{(k)} x}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^n\Bigg).
\end{align} Finally, the CDF $F_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ is obtained by adding (\ref{CDF_SE_MIN-ES_with_mix1}) and (\ref{CDF_SE_MIN-ES_with_mix2}), as the two events are mutually exclusive. Similar to (\ref{cdf_se_minimunm}), we apply the multinomial theorem in (\ref{CDF_SE_MIN-ES_with_mix2}) as well. The PDF $f_{\hat{\Gamma}_{\text{SE}}^{(n^*)}}(x)$ in (\ref{PDF_SE_MINES_with}) is obtained by differentiating the sum of (\ref{CDF_SE_MIN-ES_with_mix1}) and (\ref{CDF_SE_MIN-ES_with_mix2}). \end{proof} We note here that it is challenging to find the minimum among the RVs in (\ref{eq_gain_min_with}) for which backhaul links are active, as these RVs have a mixture distribution of Bernoulli and exponential. As a Bernoulli RV always takes the minimum value of zero, one might end up getting zero in (\ref{eq_gain_min_with}) if proper modeling is not implemented for active backhaul links. We note here that the procedure adopted in Lemma \ref{lemma2} for incorporating the backhaul reliability factor in the eavesdropping link SNR and finding the minimum of RVs that follow a mixture distribution is novel. \subsection{Traditional transmitter selection in the BKU case (TTS-BKU) }\label{sub_sec_tts_bku} In this section, the transmitter is selected based on the maximum channel power gain among the $\text{S}_n$-$\text{D}$ links without knowing which backhaul links are active. In this case, the selected transmitter provides the following destination SNR if the corresponding backhaul link is active \begin{align}\label{eq_gain_max_without} {\Gamma}_{\text{SD}}^{(n^*)}= \max_{n\in \{1,2\ldots, N\}}\{ {\Gamma}_{\text{SD}}^{(n)}\}.
\end{align} To derive the CDF of $\Gamma_{\text{R}}^{(n^*)}$, the backhaul link reliability factor is included in $\hat{\Gamma}_{\text{SD}}^{(n^*)}$ and hence, the CDF of $\Gamma_{\text{R}}^{(n^*)}$ is evaluated following (\ref{gamma_r}) as \begin{align}\label{non_zero_tts} F_{\Gamma_{\text{R}}^{(n^*)}}(y)&=\int_{0}^{\infty} F_{\hat{\Gamma}^{(n^*)}_{\text{SD}}}(y(x+1)-1) f_ {{\Gamma}_{\text{SE}}^{(n)}}(x) dx. \end{align} The CDF $F_{\hat{\Gamma}^{(n^*)}_{\text{SD}}}(x)$ above is expressed following the mixture distribution from (\ref{mixture_distribution}) as \begin{align}\label{CDF_sd_TTS_without} F_{\hat{\Gamma}^{(n^*)}_{\text{SD}}}(x)&=(1-s)u(x)+sF_{\Gamma^{(n^*)}_{\text{SD}}}(x), \end{align} where $F_{{\Gamma}^{(n^*)}_{\text{SD}}}(x)$ is derived in (\ref{CDF_TTS_without_max}) in Lemma \ref{lemma3}. Next, $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{non_zero_tts}) is evaluated using $F_{\hat{\Gamma}_{\text{SD}}^{(n^*)}}(x)$ from (\ref{CDF_sd_TTS_without}) and $f_{{\Gamma}_{\text{SE}}^{(n)}}(x)$ from (\ref{PDF_SE}) as \begin{align}\label{gamma_r_tts_without} &F_{\Gamma_{\text{R}}^{(n^*)}}(y)=\int_{0}^{\infty}\Big((1-s)u(x)+s\Big(1-\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}\nn\\ &\times e^{-n\lambda_{\text{D}}(y(x+1)-1)}\Big)\Big)\sum_{k=1}^{K}\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}e^{-\lambda_{\text{E}}^{(k)} x}dx\nn\\ &=1-\sum_{n=1}^{N}\sum_{k=1}^{K}\binom{N}{n}\frac{(-1)^{n+1}s\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-n\lambda_{\text{D}} (y-1)}}{(ny\lambda_{\text{D}}+ \lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}. \end{align} \textit{Remark}: The NZSR and SOP can be evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_tts_without}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively.
\begin{lemma}\label{lemma3} The CDF of ${\Gamma^{(n^*)}_{\text{SD}}}$ without including the backhaul link reliability factor is expressed as \begin{align}\label{CDF_TTS_without_max} F_{\Gamma^{(n^*)}_{\text{SD}}}(x)&=1-\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}e^{-n\lambda_{\text{D}} x}. \end{align} \end{lemma} \begin{proof} $F_{\Gamma^{(n^*)}_{\text{SD}}}(x)$ is evaluated using the definition of CDF as \begin{align}\label{lemma3proof} F_{\Gamma^{(n^*)}_{\text{SD}}}(x)&=\mathbb{P}\Big[\max_{{n \in \{1,2,\ldots, N\}}}\{{\Gamma}_{\text{SD}}^{(n)}\}\le x\Big]\nn\\ &=\prod_{n=1}^{N}\mathbb{P}\Big[\Gamma_{\text{SD}}^{(n)}\le x\Big]=\Big(1-e^{-\lambda_{\text{D}} x}\Big)^N, \end{align} where (\ref{cdf_SD}) is used in the last step. Applying the binomial expansion to (\ref{lemma3proof}) yields (\ref{CDF_TTS_without_max}). \end{proof} \subsection{Traditional transmitter selection in the BKA case (TTS-BKA)}\label{sub_sec_tts_bka} In this section, the transmitter is selected based on the maximum channel power gain among the $\text{S}_n$-$\text{D}$ links whose backhaul links are active. In this case, the destination SNR including the backhaul link reliability factor becomes \begin{align} \label{eq_gain_max_with} \hat{\Gamma}_{\text{SD}}^{(n^*)}=\max_{n\in\mathcal{S}}\{\hat{\Gamma}_{\text{SD}}^{(n)}\}. \end{align} Therefore, the CDF of $\Gamma_{\text{R}}^{(n^*)}$ is evaluated following (\ref{gamma_r}) as \begin{align}\label{non_zero_tts_with} &F_{\Gamma_{\text{R}}^{(n^*)}}(y)= \int_{0}^{\infty} F_{\hat{\Gamma}_{\text{SD}}^{(n^*)}}(y(x+1)-1)f_{{\Gamma}_{\text{SE}}^{(n)}}(x)dx, \end{align} where $F_{\hat{\Gamma}_{\text{SD}}^{(n^*)}}(x)$ is evaluated in (\ref{CDF_SD_TTS_with}) in Lemma \ref{lemma4}.
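The binomial expansion in Lemma \ref{lemma3} can be checked numerically against the direct form $(1-e^{-\lambda_{\text{D}}x})^N$; the short sketch below uses assumed values of $\lambda_{\text{D}}$ and $N$.

```python
import math

# Illustrative parameters (assumed, not from the paper).
lam, N = 1.4, 4

def cdf_max_direct(x):
    """CDF of the maximum of N i.i.d. Exp(lam) SNRs, direct product form."""
    return (1.0 - math.exp(-lam * x)) ** N

def cdf_max_expanded(x):
    """Binomial expansion of the same CDF, cf. eq. (CDF_TTS_without_max)."""
    return 1.0 - sum(
        math.comb(N, n) * (-1) ** (n + 1) * math.exp(-n * lam * x)
        for n in range(1, N + 1))

# The two forms should agree to machine precision on a grid of x values.
max_err = max(abs(cdf_max_direct(x) - cdf_max_expanded(x))
              for x in [0.1 * t for t in range(0, 60)])
```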
Next, $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ is obtained using $F_{\hat{\Gamma}_{\text{SD}}^{(n^*)}}(x)$ from (\ref{CDF_SD_TTS_with}) and $f_{{\Gamma}_{\text{SE}}^{(n)}}(x)$ from (\ref{PDF_SE}) in (\ref{non_zero_tts_with}) as \begin{align}\label{gamma_r_tts_with} &F_{\Gamma_{\text{R}}^{(n^*)}}(y) =\int_{0}^{\infty}\Big(1-\sum_{n=1}^{N}\binom{N}{n}(1-s)^{N-n}s^{n}\sum_{q=1}^{n}\nn\\ &\binom{n}{q}(-1)^{q+1}e^{-q\lambda_{\text{D}} (y(x+1)-1)}\Big)\sum_{k=1}^{K}\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}e^{-\lambda_{\text{E}}^{(k)} x}dx }{\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \nn \\ &=1-\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{q=1}^{n}\binom{N}{n}\binom{n}{q}(1-s)^{N-n}s^{n}(-1)^{q+1}\nn\\ &\times\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-q\lambda_{\text{D}} (y-1)}}{(qy\lambda_{\text{D}} +\lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}. \end{align} \textit{Remark}: The NZSR and the SOP can be evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_tts_with}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively. \begin{lemma}\label{lemma4} The CDF of the SNR $\hat{\Gamma}_{\text{SD}}^{(n^*)}$ when the backhaul link activity knowledge is available is \begin{align}\label{CDF_SD_TTS_with} F_{\hat{\Gamma}^{(n^*)}_{\text{SD}}}(x) &=1-\sum_{n=1}^{N}\sum_{q=1}^{n}\binom{N}{n}\binom{n}{q}(1-s)^{N-n}s^{n}(-1)^{q+1}\nn\\ &\times e^{-q\lambda_{\text{D}} x}. \end{align} \end{lemma} \begin{proof} The CDF $F_{\hat{\Gamma}_{\text{SD}}^{(n^*)}}$ is evaluated as \begin{align}\label{lemma4proof} F_{\hat{\Gamma}^{(n^*)}_{\text{SD}}}(x) &=\mathbb{P}\Big[\max_{{n \in \{1,2\ldots, N\}}}\{\hat{\Gamma}_{\text{SD}}^{(n)}\}\le x\Big]\nn\\ &=\prod_{n=1}^{N}\Big(\mathbb{P}\Big[\hat{\Gamma}_{\text{SD}}^{(n)}\le x\Big] \Big) \nn \\ &=\Big((1-s)u(x)+s(1-e^{-\lambda_{\text{D}} x})\Big)^N.
\end{align} In (\ref{lemma4proof}), we have used the CDF of ${\hat{\Gamma}^{(n)}_{\text{SD}}}$ following the mixture distribution in (\ref{mixture_distribution}) as \begin{align} \label{CDF_SNR_SD_with} F_{\hat{\Gamma}^{(n)}_{\text{SD}}}(x)&=(1-s)u(x)+s(1-e^{-\lambda_{\text{D}} x}). \end{align} Finally, (\ref{CDF_SD_TTS_with}) is evaluated from (\ref{lemma4proof}) using two successive binomial expansions. \end{proof} \color{black} \subsection{Optimal transmitter selection in the BKU case (OTS-BKU)}\label{sub_sec_ots_bku} In this section, the transmitter for which the instantaneous achievable secrecy rate $C^{(n)}_S$ in (\ref{secrecy_capacity}) is maximum among all $n\in\{1,\ldots, N\}$ is selected, without considering which backhaul links are active, as no knowledge of backhaul link activity is available. Equivalently, the transmitter with the maximum $\Gamma_{\text{R}}^{(n)}$ among all $n\in\{1,\ldots, N\}$ is selected. Hence, the selected transmitter is denoted by \begin{align}\label{_max_OTS_without} n^*=\arg \max_{n\in \{1,2\ldots, N\}}\{ {\Gamma_{\text{R}}^{(n)}}\}. \end{align} Whether the backhaul link of the selected transmitter is active is uncertain; hence, the CDF of $\Gamma_{\text{R}}^{(n^*)}$ is modeled using the mixture distribution method described in (\ref{mixture_distribution}) as \begin{align}\label{gamma_r_ots_without} F_{\Gamma_{\text{R}}^{(n^*)}}(y)&= (1-s)\times1+s \mathbb P\Big[\max_{n\in \{1,2\ldots, N\}}\{\Gamma_{\text{R}}^{(n)}\} \le y\Big].
\end{align} In (\ref{gamma_r_ots_without}), the second term in the summation contains the CDF of $\Gamma_{\text{R}}^{(n^*)}$ when the corresponding backhaul link is active, which is evaluated following (\ref{gamma_r}) with the help of $F_{{\Gamma}_{\text{SD}}^{(n)}}(x)$ from (\ref{cdf_SD}) and $f_{{\Gamma}_{\text{SE}}^{(n)}}(x)$ from (\ref{PDF_SE}) as \begin{align}\label{NZ_OS_without} &\mathbb P\Big[\max_{n\in \{1,2\ldots, N\}}\{ \Gamma_{\text{R}}^{(n)}\} \le y\Big]\nn\\ &=\Bigg(\int_{0}^{\infty}F_{{\Gamma}_{\text{SD}}^{(n)}}(y(x+1)-1)f_{{\Gamma}_{\text{SE}}^{(n)}}(x)dx \Bigg)^N\nn\\ &=1-\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}\nn\\ &\times\Bigg(\sum_{k=1}^{K}\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-\lambda_{\text{D}}(y-1)}}{(\lambda_{\text{D}}y+\lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \Bigg)^n. \end{align} \textit{Remark}: The NZSR and the SOP can be evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_ots_without}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively. \subsection{Optimal transmitter selection in the BKA case (OTS-BKA)}\label{sub_sec_ots_bka} The availability of backhaul link activity knowledge permits us to find the maximum $C^{(n)}_S$ among all links with active backhaul links, which is expressed as \begin{align}\label{max_OTS_with} n^* =\arg \max_{n\in \mathcal S} \{ \Gamma_{\text{R}}^{(n)}\}. \end{align} Consequently, the CDF of $\Gamma_{\text{R}}^{(n^*)}$ is evaluated following (\ref{gamma_r}) including the backhaul link reliability factor in $\hat{\Gamma}_{\text{SD}}^{(n)}$ as \begin{align} \label{NZ_OS_with_0} F_{\Gamma_{\text{R}}^{(n^*)}}(y) =\Bigg(\int_{0}^{\infty}F_{\hat{\Gamma}_{\text{SD}}^{(n)}}(y(x+1)-1)f_{{\Gamma}_{\text{SE}}^{(n)}}(x)dx \Bigg)^N. 
\end{align} Therefore, $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ is evaluated using $F_{\hat{\Gamma}^{(n)}_{\text{SD}}}(x)$ from (\ref{CDF_SNR_SD_with}) and $f_{{\Gamma}_{\text{SE}}^{(n)}}(x)$ from (\ref{PDF_SE}) in (\ref{NZ_OS_with_0}) as \begin{align}\label{gamma_r_ots_with} F_{\Gamma_{\text{R}}^{(n^*)}}(y) &=1-\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}s^n\nn\\ &\times\Bigg(\sum_{k=1}^{K}\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-\lambda_{\text{D}}(y-1)}}{(\lambda_{\text{D}}y+\lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \Bigg)^n. \end{align} \textit{Remark}: The NZSR and the SOP can be evaluated directly from $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_ots_with}) following (\ref{eq_NZSR_from_SOP}) and (\ref{eq_new_SOP_from_GammaR}), respectively. We note from (\ref{gamma_r_min_without}), (\ref{gamma_r_min_with}), (\ref{gamma_r_tts_without}), (\ref{gamma_r_tts_with}), (\ref{NZ_OS_without}), and (\ref{gamma_r_ots_with}) that the number of summation and product terms significantly increases with increasing $N$ and $K$, making the SOP and NZSR evaluation computationally intensive. To reduce the computational complexity and to gain better insights, we provide the asymptotic analysis of the SOP for the BKU and BKA cases in Section \ref{sec_asymptotic_analysis_sop_unreliable} and the diversity order analysis for perfect backhaul links in Section \ref{section_diversity_order}. \section{Ergodic Secrecy Rate} \label{section_ergodic_secrecy_rate} In this section, the ESR of each transmitter selection scheme is derived in both the BKU and BKA cases. The ESR is evaluated using (\ref{ergodic_secrecy_rate}) with the help of the distribution of $\Gamma_{\text{R}}^{(n^*)}$ already derived in Section \ref{sec_distribution_snr_ratio} for each selection scheme and backhaul link activity knowledge.
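As a numerical illustration of this pipeline, the sketch below plugs the closed-form OTS-BKA CDF from (\ref{gamma_r_ots_with}) into the integral form of (\ref{ergodic_secrecy_rate}), $\mathcal{C}_{\text{erg}}=\frac{1}{\ln 2}\int_{1}^{\infty}\frac{1-F(x)}{x}dx$, and evaluates the ESR by the trapezoidal rule; the parameters and the truncation of the integration range are illustrative assumptions.

```python
import math

# Illustrative parameters (assumed, not from the paper).
N, s, lam_d = 3, 0.9, 1.0
lam_e = [0.8, 1.6]
K = len(lam_e)

def inner_sum(y):
    """Sum over k inside eq. (gamma_r_ots_with)."""
    total = 0.0
    for k in range(K):
        denom = (lam_d * y + lam_e[k]) * math.prod(
            lam_e[j] - lam_e[k] for j in range(K) if j != k)
        total += math.prod(lam_e) * math.exp(-lam_d * (y - 1)) / denom
    return total

def cdf_ots_bka(y):
    """Closed-form F_{Gamma_R^{(n*)}}(y) for OTS-BKA, y >= 1."""
    return 1.0 - sum(math.comb(N, n) * (-1) ** (n + 1) * s ** n
                     * inner_sum(y) ** n for n in range(1, N + 1))

def esr(cdf, upper=60.0, steps=60_000):
    """Trapezoidal rule for (1/ln2) * int_1^upper (1 - cdf(x))/x dx."""
    h = (upper - 1.0) / steps
    xs = [1.0 + i * h for i in range(steps + 1)]
    ys = [(1.0 - cdf(x)) / x for x in xs]
    integral = h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])
    return integral / math.log(2)

esr_ots_bka = esr(cdf_ots_bka)
```

The exponential decay of $1-F(x)$ makes the truncated integral converge quickly, so a modest upper limit suffices.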
\subsection{Minimal eavesdropping selection in the BKU case (MIN-ES-BKU)} In this section, the transmitter selection is performed according to (\ref{MIN-ES_k}) as already mentioned in Section \ref{sub_sec_min_es_bku} for the MIN-ES-BKU scheme. The ESR is consequently obtained from (\ref{ergodic_secrecy_rate}) with the help of $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ derived for the same scheme in (\ref{gamma_r_min_without}) as \begin{align}\label{erg_min_es_without} &\mathcal{C}_{\text{erg}}=\frac{s}{\ln(2)}\int_{1}^{\infty} \sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{\tilde{\lambda}_{\text{E}}^{(k)}e^{-\lambda_{\text{D}} (x-1)}}{x(x\lambda_{\text{D}} +\tilde{\lambda}_{\text{E}}^{(k)})}dx. \end{align} The integral in (\ref{erg_min_es_without}) is evaluated by first applying partial fraction decomposition and then directly using \cite[eq. 
(3.352.2)]{table_of_integrals} as \begin{align}\label{erg_min_es_without_final} &\mathcal{C}_{\text{erg}}= \frac{s}{\ln(2)}\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{e^{\lambda_{\text{D}}}\tilde{\lambda}_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\nn\\ &\times \Bigg(\int_{1}^{\infty}\frac{\lambda_{\text{D}}e^{-\lambda_{\text{D}} x}}{\tilde{\lambda}_{\text{E}}^{(k)}x}dx-\int_{1}^{\infty}\frac{\lambda_{\text{D}}e^{-\lambda_{\text{D}} x}}{\tilde{\lambda}_{\text{E}}^{(k)}\Big(x+\frac{\tilde{\lambda}_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\Big)}dx\Bigg)\nn\\ &=\frac{s}{\ln(2)}\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\Big(e^{\tilde{\lambda}_{\text{E}}^{(k)}+\lambda_{\text{D}}}\nn\\ &\times\mathrm{Ei}(-(\tilde{\lambda}_{\text{E}}^{(k)}+\lambda_{\text{D}}))-e^{\lambda_{\text{D}}}\mathrm{Ei}(-\lambda_{\text{D}})\Big), \end{align} where $\mathrm{Ei}(x)=\int_{-\infty}^{x}\frac{e^{t}}{t}dt$ is the exponential integral. \subsection{Minimal eavesdropping selection in the BKA case (MIN-ES-BKA)} In this section, the transmitter selection is performed according to (\ref{eq_gain_min_with}) as already mentioned in Section \ref{sub_sec_min_es_bka} for the MIN-ES-BKA scheme. 
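Both this derivation and the preceding one hinge on reducing integrals of the form $\int_{1}^{\infty}e^{-\lambda_{\text{D}}x}/x\,dx$ and $\int_{1}^{\infty}e^{-\lambda_{\text{D}}x}/(x+\beta)\,dx$ to exponential-integral terms via \cite[eq. (3.352.2)]{table_of_integrals}. A minimal numerical check of this step (with arbitrary example rates) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi  # expi(x) = Ei(x)

lam_D, lam_E_t = 0.4, 1.3  # arbitrary example rates (hypothetical values)

# int_1^inf exp(-lam_D x)/x dx = -Ei(-lam_D)
i1_num, _ = quad(lambda x: np.exp(-lam_D * x) / x, 1, np.inf)
i1_cf = -expi(-lam_D)

# int_1^inf exp(-lam_D x)/(x + a) dx with a = lam_E_t/lam_D reduces, after the
# shift t = x + a, to -exp(lam_E_t) * Ei(-(lam_D + lam_E_t))
a = lam_E_t / lam_D
i2_num, _ = quad(lambda x: np.exp(-lam_D * x) / (x + a), 1, np.inf)
i2_cf = -np.exp(lam_E_t) * expi(-(lam_D + lam_E_t))
print(i1_num, i1_cf, i2_num, i2_cf)
```

Both pairs of values coincide to within the quadrature tolerance.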
Following the same procedure as in (\ref{erg_min_es_without_final}), the ESR is evaluated by utilizing $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ from (\ref{gamma_r_min_with}) in (\ref{ergodic_secrecy_rate}) as \begin{align}\label{erg_min_es_with} &\mathcal{C}_{\text{erg}}=\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)(1-s)^{N-n}s^n\nn\\ &\times\Big(e^{\tilde{\lambda}_{\text{E}}^{(k)}+\lambda_{\text{D}}}\mathrm{Ei}(-(\tilde{\lambda}_{\text{E}}^{(k)}+\lambda_{\text{D}}))-e^{\lambda_{\text{D}}}\mathrm{Ei}(-\lambda_{\text{D}})\Big). \end{align} \subsection{Traditional transmitter selection in the BKU case (TTS-BKU) } In this section, the transmitter selection is performed according to (\ref{eq_gain_max_without}) as already mentioned in Section \ref{sub_sec_tts_bku} for the TTS-BKU case. The ESR in this case is derived by substituting for $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ from (\ref{gamma_r_tts_without}) in (\ref{ergodic_secrecy_rate}) as \begin{align}\label{ERG_TTS_without} \mathcal{C}_{\text{erg}}&=\frac{1}{\ln(2)}\int_{1}^{\infty}\frac{1}{x}\Bigg(\sum_{n=1}^{N}\sum_{k=1}^{K}\binom{N}{n}(-1)^{n+1}s\nn\\ &\times\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-\lambda_{\text{D}} (x-1)}}{(\lambda_{\text{D}} n x+\lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)dx. 
\end{align} The integral in (\ref{ERG_TTS_without}) is evaluated by using the partial fraction method and following the steps used to obtain (\ref{erg_min_es_without_final}) as \begin{align}\label{eq_ERG_TTS_without} &\mathcal{C}_{\text{erg}}=\frac{s}{\ln(2)}\sum_{n=1}^{N}\sum_{k=1}^{K}\binom{N}{n}\frac{(-1)^{n+1}\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\nn\\ &\times\Big(-e^{n\lambda_{\text{D}}}\mathrm{Ei} (-n\lambda_{\text{D}} )+e^{(\lambda_{\text{E}}^{(k)}+n\lambda_{\text{D}})}\mathrm{Ei}(-(\lambda_{\text{E}}^{(k)}+n\lambda_{\text{D}}))\Big). \end{align} \subsection{Traditional transmitter selection in the BKA case (TTS-BKA)} In this section, the transmitter is selected according to (\ref{eq_gain_max_with}) as already mentioned in Section \ref{sub_sec_tts_bku} for the TTS-BKA case. Thus, the ESR for this case is evaluated by utilizing $F_{{\Gamma}_{\text{R}}^{(n^*)}}(x)$ from (\ref{gamma_r_tts_with}) in (\ref{ergodic_secrecy_rate}) as \begin{align}\label{ERG_TTS_with} &\mathcal{C}_{\text{erg}}=\frac{1}{\ln(2)}\int_{1}^{\infty}\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{q=1}^{n}\binom{N}{n}\binom{n}{q}(1-s)^{N-n}s^{n}(-1)^{q+1}\nn\\ &\times\frac{\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)e^{-\lambda_{\text{D}} (x-1)}}{x(qx\lambda_{\text{D}} +\lambda_{\text{E}}^{(k)})\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}dx\nn\\ &=\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{q=1}^{n}\binom{N}{n}\binom{n}{q}(1-s)^{N-n}s^{n}(-1)^{q+1}\nn\\ &\times\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Big(-e^{q\lambda_{\text{D}}}\mathrm{Ei} (-q\lambda_{\text{D}})\nn\\ &+e^{\lb(\lambda_{\text{E}}^{(k)}+q\lambda_{\text{D}}\rb)}\mathrm{Ei}(-\lambda_{\text{E}}^{(k)}-q\lambda_{\text{D}})\Big). 
\end{align} \subsection{Optimal transmitter selection in the BKU case (OTS-BKU)} In this section, the best transmitter is selected according to (\ref{_max_OTS_without}) as shown in Section \ref{sub_sec_ots_bku} for the OTS-BKU case. By substituting $F_{{\Gamma}_{\text{R}}^{(n^*)}}(x)$ from (\ref{gamma_r_ots_without}) in (\ref{ergodic_secrecy_rate}), the ESR is evaluated as \begin{align}\label{ERG_OTS_without} &\mathcal{C}_{\text{erg}}=\frac{1}{\ln(2)}\int_{1}^{\infty}\frac{1}{x}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(-1)^{n+1}\nn\\ &\times s\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{e^{-\lambda_{\text{D}} (x-1)\tilde{i}_k}dx}{\prod_{k=1}^{K}\Big(x+\frac{\lambda_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}}, \end{align} where $\tilde{i}_k=\sum_{k=1}^{K}i_k$. The integral is evaluated by using the partial fraction method and then following steps similar to those taken for (\ref{erg_min_es_without_final}) as \begin{align} &\mathcal{C}_{\text{erg}} =\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(-1)^{n+1}s\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\Bigg(-\frac{e^{\lambda_{\text{D}} \tilde{i}_k}\mathrm{Ei}{(-\lambda_{\text{D}} \tilde{i}_k)}}{\prod_{k=1}^{K}\Big(\frac{\lambda_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}}\nn\\ &-\sum_{k=1}^{K}\sum_{l_k=1}^{i_k}{A_k^{(i_k)}(\lambda_{\text{D}} \tilde{i}_k)^{l_k-i_k}e^{((\lambda_{\text{D}}+\lambda_{\text{E}}^{(k)}) \tilde{i}_k)} }\nn\\ &\times\Gamma\Big[l_k-i_k,(-(\lambda_{\text{D}}+\lambda_{\text{E}}^{(k)}) \tilde{i}_k)\Big]\Bigg), \end{align} where 
$\Gamma[m,x]=\int_{x}^{\infty}t^{m-1}e^{-t}dt$ is the incomplete gamma function with $m$ being a positive integer and $A_k^{(i_k)}$ is the partial fraction coefficient easily obtained using the standard partial fraction method. \subsection{Optimal transmitter selection in the BKA case (OTS-BKA)} In this section, the best transmitter is selected according to (\ref{max_OTS_with}) as shown in Section \ref{sub_sec_ots_bka} for the OTS-BKA case. The ESR is derived by following the same procedure as in (\ref{ERG_OTS_without}) by utilizing (\ref{gamma_r_ots_with}) in (\ref{ergodic_secrecy_rate}) as \begin{align}\label{ERG_OTS_with} &\mathcal{C}_{\text{erg}}=\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(-1)^{n+1}s^n\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\Bigg(-\frac{e^{\lambda_{\text{D}} \tilde{i}_k}\mathrm{Ei}{(-\lambda_{\text{D}} \tilde{i}_k)}}{\prod_{k=1}^{K}\Big(\frac{\lambda_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}}\nn\\ &-\sum_{k=1}^{K}\sum_{l_k=1}^{i_k}{A_k^{(i_k)}(\lambda_{\text{D}} \tilde{i}_k)^{l_k-i_k}e^{((\lambda_{\text{D}}+\lambda_{\text{E}}^{(k)}) \tilde{i}_k)} }\nn\\ &\times\Gamma\Big[l_k-i_k,(-(\lambda_{\text{D}}+\lambda_{\text{E}}^{(k)}) \tilde{i}_k)\Big]\Bigg). \end{align} It is difficult to discern from the exact ESR expressions in (\ref{erg_min_es_without_final}), (\ref{erg_min_es_with}), (\ref{ERG_TTS_without}), and (\ref{ERG_TTS_with}) how the ESR depends on $s$, $1/\lambda_{\text{D}}$, $1/\lambda_{\text{E}}^{(k)}$ for all $k\in\{1,\ldots,K\}$, $K$, and $N$. To get insight into how the backhaul link reliability factor and the other system parameters affect the ESR, we provide a simplified asymptotic ESR expression in Section \ref{section_asymptotic_ESR}. 
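The exact expressions can nevertheless be validated numerically. For the ratio formulation used throughout, (\ref{ergodic_secrecy_rate}) amounts to $\mathcal{C}_{\text{erg}}=\frac{1}{\ln 2}\int_{1}^{\infty}\frac{1-F_{\Gamma_{\text{R}}^{(n^*)}}(x)}{x}\,dx$. The sketch below checks this against a Monte Carlo average of $[\log_2 \Gamma_{\text{R}}]^+$ for the simplest configuration ($N=K=1$, perfect backhaul, hypothetical rates), for which the ratio CDF reduces to $F(y)=1-\lambda_{\text{E}}e^{-\lambda_{\text{D}}(y-1)}/(\lambda_{\text{D}}y+\lambda_{\text{E}})$:

```python
import numpy as np
from scipy.integrate import quad

lam_D, lam_E = 0.5, 1.5   # hypothetical rates (means 1/lam_D and 1/lam_E)

def F(y):
    # Ratio CDF of Gamma_R = (1 + Gamma_SD)/(1 + Gamma_SE), N = K = 1, y >= 1
    return 1.0 - lam_E * np.exp(-lam_D * (y - 1.0)) / (lam_D * y + lam_E)

# ESR = (1/ln 2) * int_1^inf (1 - F(x))/x dx
esr, _ = quad(lambda x: (1.0 - F(x)) / x, 1, np.inf)
esr /= np.log(2)

# Monte Carlo: E[(log2 Gamma_R)^+]
rng = np.random.default_rng(1)
M = 400_000
g = (1 + rng.exponential(1 / lam_D, M)) / (1 + rng.exponential(1 / lam_E, M))
mc = np.mean(np.maximum(np.log2(g), 0.0))
print(esr, mc)
```

The quadrature value and the Monte Carlo average agree to within sampling error, confirming the complementary-CDF route used in the closed-form derivations.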
\textit{Remark}: The methodology followed in Section \ref{sec_distribution_snr_ratio} and Section \ref{section_ergodic_secrecy_rate} is uniform and can be used to find any secrecy performance metric (SOP, NZSR, and ESR) for different transmitter selection schemes while incorporating the backhaul uncertainty. \section{Asymptotic Analysis of SOP}\label{sec_asymptotic_analysis_sop_unreliable} In this section, the asymptotic behavior of the system is analyzed when the SNR of the $\text{S}_n$-$\text{D}$ link is increased asymptotically compared to the SNR of the eavesdroppers' links. By assuming $1/\lambda_{\text{D}} \rightarrow \infty$ for a given $1/\lambda_{\text{E}}^{(k)}$ for each $n\in\{1,\ldots, N\}$ and $k \in \{1,\ldots, K\}$, the asymptotic analysis of the SOP for the selection schemes in the BKU and BKA cases is carried out. For the asymptotic analysis, we first approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ already derived in Section \ref{sec_distribution_snr_ratio} for each selection scheme under the condition of unreliable backhaul links when $1/\lambda_{\text{D}} \rightarrow \infty$ and use it in the SOP expression of (\ref{eq_new_SOP_from_GammaR}). The asymptotic analysis provides insight into the impact of unreliable backhaul connections on the system's performance in the high-SNR regime. \subsection{Minimal eavesdropping selection (MIN-ES)} In this section, we evaluate the asymptotic SOP of the MIN-ES scheme for both the BKU and BKA cases. 
To derive the asymptotic SOP for the MIN-ES-BKU scheme, we first approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_min_without}), noting that $\exp(-y\lambda_{\text{D}})$ tends to unity when $1/\lambda_{\text{D}} \rightarrow \infty$, which yields \begin{align}\label{NZ_MIN-ES_without_asym} \mathcal{P}_{\text{out}}^\infty&=1-s\sum_{i_1+\ldots+i_K=N}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg) = (1-s). \end{align} The multiplier of $s$ in the second term of the above equation becomes unity, since the multinomial sum of the product terms equals one. Then, using (\ref{NZ_MIN-ES_without_asym}) in the SOP expression of (\ref{eq_new_SOP_from_GammaR}), we obtain the asymptotic SOP expression. Similarly, for the MIN-ES-BKA scheme, $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ in (\ref{gamma_r_min_with}) is first approximated at high SNR, and then the asymptotic SOP is evaluated using the SOP expression from (\ref{eq_new_SOP_from_GammaR}) as \begin{align}\label{NZ_MIN-ES_with_asym} \mathcal{P}_{\text{out}}^\infty&=1-\sum_{n=1}^{N}\binom{N}{n}(1-s)^{N-n}s^n=(1-s)^N. \end{align} \subsection{Traditional transmitter selection (TTS)} In this section, the asymptotic SOP of the TTS scheme for both the BKU and BKA cases is evaluated. 
Applying $1/\lambda_{\text{D}} \rightarrow \infty$ in (\ref{gamma_r_tts_without}), we approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ for the TTS-BKU scheme, and using it in the SOP expression of (\ref{eq_new_SOP_from_GammaR}), the asymptotic SOP is evaluated as $\mathcal{P}_{\text{out}}^\infty= (1-s).$ Similarly, for the TTS-BKA scheme, the asymptotic SOP is evaluated using the approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ from (\ref{gamma_r_tts_with}) at high SNR and then applying it in (\ref{eq_new_SOP_from_GammaR}) as \begin{align}\label{NZ_TTS_with_asym} \mathcal{P}_{\text{out}}^\infty&= 1-\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}s^n=(1-s)^N. \end{align} \subsection{Optimal transmitter selection (OTS)} The asymptotic SOP for the OTS-BKU scheme can easily be expressed following the same approach as in (\ref{NZ_MIN-ES_without_asym}), using the approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ from (\ref{gamma_r_ots_without}) at high SNR and then using (\ref{eq_new_SOP_from_GammaR}) as \begin{align}\label{asym_NZ_OS_without} &\mathcal{P}_{\text{out}}^\infty= 1- s\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}\nn\\ &\times\Bigg(\sum_{k=1}^{K}\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \Bigg)^n =(1-s). \end{align} The asymptotic SOP for the OTS-BKA scheme is derived using the approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(y)$ from (\ref{gamma_r_ots_with}) at high SNR and then using (\ref{eq_new_SOP_from_GammaR}) as $\mathcal{P}_{\text{out}}^\infty=\lb(1-s\rb)^N$. The asymptotic NZSR can be obtained directly from the asymptotic SOP derived in this section for each selection scheme and backhaul link activity knowledge case following (\ref{eq_NZSR_from_SOP}). 
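These saturation floors can be illustrated with a short Monte Carlo sketch (hypothetical parameters; backhaul activity is modeled as an independent Bernoulli($s$) variable per link, and an inactive backhaul yields zero received SNR at the destination):

```python
import numpy as np

rng = np.random.default_rng(2)
N, s, rho = 3, 0.8, 2.0   # hypothetical: 3 transmitters, reliability 0.8
M = 200_000
snr_D = 1e6               # 1/lam_D very large: high-SNR regime
g_sd = rng.exponential(snr_D, size=(M, N))
g_se = sum(rng.exponential(m, size=(M, N)) for m in (1.0, 2.0))  # K = 2, summed SNR
active = rng.random((M, N)) < s          # Bernoulli(s) backhaul activity per link
ratio = (1 + g_sd * active) / (1 + g_se)  # inactive backhaul -> zero SNR at D

# BKU: select on channel state alone, unaware of backhaul activity (OTS-style)
sel = np.argmax((1 + g_sd) / (1 + g_se), axis=1)
sop_bku = np.mean(ratio[np.arange(M), sel] <= rho)

# BKA: best realized ratio among links with an active backhaul (outage if none)
sop_bka = np.mean(np.where(active, ratio, 0.0).max(axis=1) <= rho)
print(sop_bku, sop_bka)  # approach (1 - s) and (1 - s)**N, respectively
```

At high SNR the BKU scheme fails essentially only when the selected link's backhaul is inactive, giving $1-s$, whereas the BKA scheme fails only when all $N$ backhaul links are inactive, giving $(1-s)^N$.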
It is observed from the asymptotes derived for each case in this section that the SOP at high SNR saturates to a constant value that depends on the backhaul link reliability factor $s$ and not on the channel quality. We note that the asymptotic SOP in the BKU case is the same, i.e., $(1-s)$, irrespective of the selection scheme. Similarly, the asymptotic SOP in the BKA case is the same for all the selection schemes, i.e., $(1-s)^N$. We further notice that when backhaul link activity knowledge is unavailable, the asymptotic SOP cannot be improved by increasing the number of transmitters. In contrast, when the backhaul link activity knowledge is available, the SOP can be improved by increasing the number of transmitters. We also observe that the asymptotic SOP requires very few computations compared to the exact SOP expressions derived in Section \ref{sec_distribution_snr_ratio}. \section{Diversity Order Analysis of SOP with Perfect Backhaul Links}\label{section_diversity_order} In this section, we evaluate the diversity order for each transmitter selection scheme when all backhaul links are perfect, i.e., $s=1$. Since the asymptotic values depend only on the backhaul link reliability factor, as shown in the previous section, they do not reveal how the SOP approaches its saturation value; this question is generally answered by the secrecy diversity order analysis, which is why we derive the diversity order for $s=1$. The diversity order is defined as the negative slope of the SOP curve, on a logarithmic SNR scale, in the high-SNR regime defined in Section \ref{sec_asymptotic_analysis_sop_unreliable}, and is expressed as \begin{align}\label{diversity_order} d=-\lim_{1/\lambda_{\text{D}}\rightarrow\infty} \frac{\log\lb(\mathcal{P}_{\text{out}}\rb)}{\log\lb(1/\lambda_{\text{D}}\rb)}. 
\end{align} Toward this goal, we first obtain an approximate SOP for each selection scheme in the high-SNR regime and then use the diversity order definition in (\ref{diversity_order}). To obtain the approximate $\mathcal{P}_{\text{out}}$ corresponding to a transmitter selection scheme in the high-SNR regime, we derive the approximate $F_{\Gamma_{\text{R}}^{(n^*)}}(\rho)$ following Section \ref{sec_asymptotic_analysis_sop_unreliable}, but now for perfect backhaul links, using the first-order Taylor series approximation $F_{\Gamma^{(n)}_{\text{SD}}}(x)=\lambda_\text{D}x$ of (\ref{cdf_SD}) and the corresponding $f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x)$. Under the perfect backhaul links condition, the distinction between the BKU and BKA cases does not apply. \subsection{Minimal eavesdropping selection (MIN-ES)} To obtain the diversity order, as described above, $F_{\Gamma_{\text{SD}}^{(n)}}$ in (\ref{cdf_SD}) is first approximated using the first-order Taylor series approximation as $F_{\Gamma^{(n)}_{\text{SD}}}(x)=\lambda_\text{D}x$ in the high-SNR regime. Next, the approximate SOP is evaluated using $f_{\Gamma_{\text{SE}}^{(n^*)}}(x)$ from (\ref{PDF_SE_MIN-ES}) under the perfect backhaul links condition and the approximate $F_{\Gamma^{(n)}_{\text{SD}}}(x)$. Following (\ref{gamma_r_min_without_0}), the approximate SOP is \begin{align}\label{sop_mines_high_snr} &\mathcal{P}_{\text{out}}=F_{\Gamma_{\text{R}}^{(n^*)}}(\rho) = \int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(\rho(x+1)-1) f_{{\Gamma}_{\text{SE}}^{(n^*)}}(x) dx\nn\\ &=\lambda_{\text{D}}\sum_{i_1+\ldots+i_K=N}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\Bigg(\frac{1}{\tilde{\lambda}_{\text{E}}^{(k)}}+(\rho-1)\Bigg). 
\end{align} Thereafter, using (\ref{sop_mines_high_snr}) in (\ref{diversity_order}), we derive the diversity order as $d=1$. The diversity order is one because the transmitter selection is carried out based on the eavesdroppers' links; the minimum SNR at the eavesdroppers does not guarantee a better destination channel, which is why the diversity order does not improve. \subsection{Traditional transmitter selection (TTS)} In this case, $F_{\Gamma_{\text{SD}}^{(n^*)}}(x)$ in (\ref{CDF_TTS_without_max}) for the TTS scheme is approximated using the first-order Taylor series approximation as $F_{\Gamma^{(n^*)}_{\text{SD}}}(x)=(\lambda_\text{D}x)^N$. Next, to evaluate the approximate SOP, we use $f_{\Gamma_{\text{SE}}^{(n)}}(x)$ from (\ref{PDF_SE}) and the approximate $F_{\Gamma^{(n^*)}_{\text{SD}}}(x)$ under the perfect backhaul links condition. We obtain the approximate SOP following (\ref{gamma_r_tts_without}) as \begin{align}\label{sop_tts_high_snr} \mathcal{P}_{\text{out}} &=\lambda_{\text{D}}^N\sum_{n=0}^{N}\sum_{k=1}^{K}\binom{N}{n}\frac{\rho^n(\rho-1)^{N-n}\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)}{\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \Bigg(\frac{n!}{\big({\lambda}_{\text{E}}^{(k)}\big)^{n+1}}\Bigg). \end{align} Then, the diversity order is obtained using (\ref{sop_tts_high_snr}) in (\ref{diversity_order}) as $d=N$, which is intuitive, as the transmitter is chosen from among $N$ links to the destination. \subsection{Optimal transmitter selection (OTS)} Similar to the previous section, $F_{\Gamma_{\text{SD}}^{(n)}}(x)$ in (\ref{cdf_SD}) is first approximated using the first-order Taylor series approximation $F_{\Gamma^{(n)}_{\text{SD}}}(x)=\lambda_\text{D}x$, and then the approximate SOP is obtained using the approximate $F_{\Gamma^{(n)}_{\text{SD}}}(x)$ and $f_{\Gamma_{\text{SE}}^{(n)}}$ from (\ref{PDF_SE}) under the perfect backhaul links condition. 
The approximate SOP following (\ref{gamma_r_ots_without}) is \begin{align}\label{sop_ots_high_snr} \mathcal{P}_{\text{out}} &=\Bigg(\sum_{k=1}^{K}\frac{\lambda_{\text{D}}\Big(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\Big)\Big(\frac{1}{{\lambda}_{\text{E}}^{(k)}}+(\rho-1)\Big)}{\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})} \Bigg)^N. \end{align} Finally, the application of (\ref{sop_ots_high_snr}) in (\ref{diversity_order}) provides the diversity order as $d=N$, which is the same as the diversity order of the TTS scheme. It can be observed that at high SNR, both TTS and OTS enjoy the same rate of improvement with $N$. However, OTS requires global CSI, whereas the TTS scheme does not require the CSI of the eavesdroppers' links. \section{Asymptotic Analysis of ESR}\label{section_asymptotic_ESR} In this section, we provide the asymptotic analysis of the ESR for each selection scheme in both backhaul link activity knowledge cases, BKU and BKA. The definition of the asymptotic regime is the same as in Section \ref{sec_asymptotic_analysis_sop_unreliable}, i.e., $1/\lambda_{\text{D}} \rightarrow \infty$ for a given $1/\lambda_{\text{E}}^{(k)}$ for each $n\in\{1,\ldots, N\}$ and $k \in \{1,\ldots, K\}$, unless otherwise specified. \subsection{Minimal eavesdropping selection (MIN-ES)} The asymptotic ESR for the MIN-ES-BKU scheme is evaluated first. To derive the asymptotic expression, we approximate the exact $\mathcal{C}_{\text{erg}}$ from (\ref{erg_min_es_without}) derived in Section \ref{section_ergodic_secrecy_rate} for the MIN-ES-BKU scheme when $1/\lambda_{\text{D}} \rightarrow \infty$. 
As $1/\lambda_{\text{D}} \rightarrow \infty$, the ESR in (\ref{erg_min_es_without}) is approximated by assuming $\lambda_{\text{D}} \rightarrow 0$ as \begin{align}\label{erg_asymp_min_es} \mathcal{C}_{\text{erg}}& \approx \frac{s}{\ln(2)}\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\Big(e^{\tilde{\lambda}_{\text{E}}^{(k)}}\mathrm{Ei}(-\tilde{\lambda}_{\text{E}}^{(k)})-\mathrm{Ei}(-\lambda_{\text{D}})\Big). \end{align} Next, we use the definition $\mathrm{Ei}(-x)=C+\ln{(x)}+\int_{0}^{x}\frac{e^{-t}-1}{t}dt$, where $C=\lim\limits_{m\rightarrow\infty}\lb(-\log(m)+\sum_{k=1}^{m}\frac{1}{k}\rb)$ = $0.5772$ (with $m$ being a positive integer) is the Euler constant, from \cite[eq. (8.212.1)]{table_of_integrals} to further approximate (\ref{erg_asymp_min_es}) by neglecting the integral $\int_{0}^{\lambda_\text{D}}\frac{e^{-t}-1}{t}dt$ while $\lambda_\text{D}\rightarrow 0$. Hence, we write the asymptotic ESR as \begin{align} \label{eq_asy_ESR_MINESMKU} \mathcal{C}_{\text{erg}}^{\infty} &=\frac{s}{\ln(2)}\Bigg(\ln{\Big(\frac{1}{\lambda_{\text{D}}}\Big)}-\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\Bigg.\nn\\ &\Bigg.\times\Big(C-e^{\tilde{\lambda}_{\text{E}}^{(k)}}\mathrm{Ei}(-\tilde{\lambda}_{\text{E}}^{(k)})\Big)\Bigg). 
\end{align} The asymptotic ESR in (\ref{eq_asy_ESR_MINESMKU}) can be rewritten as a linear function of $\ln{\lb(1/\lambda_{\text{D}}\rb)}$ to derive insights from the equation as \begin{align}\label{eq_straight_line_form_ESR_MINESBKU} \mathcal{C}_{erg}^{\infty}&=S^{\infty}\lb(\ln{(1/\lambda_{\text{D}})}-L^{\infty}\rb), \end{align} where $S^{\infty}$ is the high-SNR slope and $L^{\infty}$ is the power offset parameter \cite{wang_PLS_2014}. The slope shows the rate of change of the ESR with SNR at high SNR, and a system should have a high slope. The offset parameter represents the shift of the asymptotic ESR from the origin, and it is desirable to have $L^\infty$ as small as possible. From (\ref{eq_asy_ESR_MINESMKU}), $S^{\infty}= \frac{s}{\ln(2)} $ and \begin{align}\label{l_infinity_min_without} L^{\infty} &=C-\sum_{\mathbf{i}\in\mathcal{M}^{(N)}}\binom{N}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)e^{\tilde{\lambda}_{\text{E}}^{(k)}}\mathrm{Ei}(-\tilde{\lambda}_{\text{E}}^{(k)}). \end{align} Following the same procedure as in the MIN-ES-BKU case in obtaining (\ref{eq_straight_line_form_ESR_MINESBKU}) from (\ref{erg_asymp_min_es}), the asymptotic ESR for the MIN-ES-BKA case is evaluated from (\ref{erg_min_es_with}). In this case, $S^{\infty}=\frac{1-(1-s)^N}{\ln(2)}$ and \begin{align}\label{l_infinity_min_with} L^{\infty}&=\frac{1}{1-(1-s)^N}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}\nn\\ &\times(1-s)^{N-n}s^n\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\prod\limits_{\substack{j=1\\j\ne k}}^K\lambda_{\text{E}}^{(k)}(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\Big(C-e^{\tilde{\lambda}_{\text{E}}^{(k)}}\mathrm{Ei}(-\tilde{\lambda}_{\text{E}}^{(k)})\Big). 
\end{align} \subsection{Traditional transmitter selection (TTS)} In the TTS-BKU scheme, the asymptotic ESR is evaluated by following the same steps as taken to arrive at (\ref{eq_straight_line_form_ESR_MINESBKU}) from (\ref{erg_asymp_min_es}). In this case, we also apply $1/\lambda_{\text{D}} \rightarrow \infty$ or $\lambda_{\text{D}} \rightarrow 0$ in (\ref{eq_ERG_TTS_without}) to get the approximated ESR as \begin{align}\label{eq_asy_ESR_TTSBKU} \mathcal{C}_{\text{erg}}&\approx \frac{s}{\ln(2)}\sum_{n=1}^{N}\sum_{k=1}^{K}\binom{N}{n}(-1)^{n+1}\frac{\lb(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\rb)}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\nn\\ &\times \Big(e^{\lambda_{\text{E}}^{(k)}}\mathrm{Ei}(-\lambda_{\text{E}}^{(k)})-\mathrm{Ei} (-n\lambda_{\text{D}} )\Big). \end{align} Using the same approximation of $\mathrm{Ei}(-n\lambda_{\text{D}})\approx C+\ln{(n\lambda_{\text{D}})}$ when $\lambda_{\text{D}}\rightarrow 0$ as in the MIN-ES scheme, the asymptotic ESR is evaluated as in (\ref{eq_straight_line_form_ESR_MINESBKU}) where $S^{\infty}= \frac{s}{\ln(2)} $ and \begin{align}\label{l_inf_tts_without} L^{\infty}&=C+\sum_{n=1}^{N}\sum_{k=1}^{K}\binom{N}{n}\frac{(-1)^{n+1}\lb(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\rb)}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\nn\\ &\times\Big(\ln(n)-e^{\lambda_{\text{E}}^{(k)}}\mathrm{Ei}(-\lambda_{\text{E}}^{(k)})\Big). 
\end{align} Similarly, in the TTS-BKA scheme, the asymptotic ESR is obtained from (\ref{ERG_TTS_with}), where $S^{\infty}=\frac{1-\lb(1-s\rb)^N}{\ln(2)}$ and \begin{align}\label{l_inf_tts_with} &L^{\infty}=\frac{1}{1-\lb(1-s\rb)^N}\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{q=1}^{n}\binom{N}{n}\binom{n}{q}(1-s)^{N-n}s^{n}\nn\\ &\times\frac{(-1)^{q+1}\lb(\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}\rb)}{\lambda_{\text{E}}^{(k)}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Big(C+\ln(q)-e^{\lambda_{\text{E}}^{(k)}}\mathrm{Ei}(-\lambda_{\text{E}}^{(k)})\Big). \end{align} It is observed in this section that the slope of the ESR for each selection scheme depends only on $s$ and increases as $s$ tends to unity. The highest slope $S^\infty=\frac{1}{\ln(2)}$ is achieved when $s=1$. We also observe that the slopes of the MIN-ES and TTS schemes are the same in the BKU case. The slopes in the MIN-ES-BKA and TTS-BKA cases are also the same; however, they are higher than the slopes in the MIN-ES-BKU and TTS-BKU cases. This shows that the slope depends on the availability of the backhaul link activity knowledge for a given selection scheme; for a given backhaul link activity knowledge case, it is the same irrespective of the selection scheme. In the BKU case, the slope depends only on $s$ and not on $N$. In contrast, the slope in the BKA case depends on both $s$ and $N$. This suggests that the slope can be improved by increasing $N$ in the BKA case but not in the BKU case. The slope in the BKU and BKA cases for both the MIN-ES and TTS schemes is independent of $K$. We note from (\ref{l_infinity_min_without}), (\ref{l_infinity_min_with}), (\ref{l_inf_tts_without}), and (\ref{l_inf_tts_with}) that $L^\infty$ is independent of $1/\lambda_{\text{D}}$. It increases as $K$ and $1/\lambda_{\text{E}}^{(k)}$ for each $k\in\{1,\ldots,K\}$ increase. Thus, the asymptotic ESR decreases. 
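The straight-line characterization can be verified numerically. The sketch below (hypothetical values, $N=K=1$, for which the exact TTS-BKU ESR reduces to the $n=k=1$ term of (\ref{eq_ERG_TTS_without})) compares the exact ESR with $S^{\infty}\lb(\ln(1/\lambda_{\text{D}})-L^{\infty}\rb)$ at two SNR points:

```python
import numpy as np
from scipy.special import expi  # expi(x) = Ei(x)

EULER_C = 0.5772156649015329
s, lam_E = 0.9, 1.0   # hypothetical reliability factor and eavesdropper rate

def esr_exact(lam_D):
    # n = k = 1 term of the exact TTS-BKU ESR (N = K = 1)
    return (s / np.log(2)) * (-np.exp(lam_D) * expi(-lam_D)
                              + np.exp(lam_E + lam_D) * expi(-(lam_E + lam_D)))

def esr_asym(lam_D):
    # S_inf * (ln(1/lam_D) - L_inf), with S_inf = s/ln 2 and
    # L_inf = C - e^{lam_E} Ei(-lam_E) in this N = K = 1 case
    S_inf = s / np.log(2)
    L_inf = EULER_C - np.exp(lam_E) * expi(-lam_E)
    return S_inf * (np.log(1.0 / lam_D) - L_inf)

gap_low = abs(esr_exact(1e-2) - esr_asym(1e-2))   # moderate SNR (20 dB)
gap_high = abs(esr_exact(1e-4) - esr_asym(1e-4))  # high SNR (40 dB)
print(gap_low, gap_high)
```

The gap shrinks as $1/\lambda_{\text{D}}$ grows, confirming that the asymptote captures both the slope and the offset.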
This behavior is intuitive: improved eavesdropping channel quality degrades the system's secrecy. We also note that $L^\infty$ decreases as $N$ increases. From (\ref{l_infinity_min_without}) and (\ref{l_inf_tts_without}), we observe that in the BKU case for both the MIN-ES and TTS schemes, $L^\infty$ is independent of $s$ and depends only on $N$, $K$, and $1/\lambda_{\text{E}}^{(k)}$. However, in the BKA cases for the MIN-ES and TTS schemes in (\ref{l_infinity_min_with}) and (\ref{l_inf_tts_with}), $L^\infty$ decreases as $s$ tends to unity. \subsection{Optimal transmitter selection (OTS)} In the OTS scheme, it is not trivial to find the asymptotic ESR by using the same approximation as the one used for the MIN-ES and TTS schemes. Therefore, the asymptotic ESR for the $n$-th transmitter is derived assuming that ${\Gamma}_{\text{SD}}^{(n)}$ and ${\Gamma}_{\text{SE}}^{(n)}$ both operate in the high-SNR regime and $1/\lambda_{\text{D}} \gg 1/\lambda_{\text{E}}^{(k)}$ for all $n\in\{1,\ldots,N\}$ and $k\in \{1,\ldots,K\}$. In this case, $\Gamma_{\text{R}}^{(n)}$ is approximated by neglecting unity in both the numerator and the denominator of (\ref{gamma_r}). The CDF of ${\Gamma_{\text{R}}^{(n)}}$ with the high-SNR approximation when the backhaul link is active, i.e., $s=1$, is written as \begin{align}\label{gamma_r_high_snr} F_{\Gamma_{\text{R}}^{(n)}}(x)&=\mathbb{P}\Bigg[\frac{{\Gamma}_{\text{SD}}^{(n)}}{{\Gamma}_{\text{SE}}^{(n)}}\le x\Bigg] = \int_{0}^{\infty} F_{{\Gamma}^{(n)}_{\text{SD}}}(xy) f_{{\Gamma}_{\text{SE}}^{(n)}}(y) dy. \end{align} The asymptotic ESR for a selection scheme is then evaluated following (\ref{ergodic_secrecy_rate}) with the help of the corresponding $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ derived from (\ref{gamma_r_high_snr}) in the BKU and BKA cases. 
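The accuracy of neglecting the unity terms can be quantified with a quick Monte Carlo sketch ($K=1$, hypothetical mean SNRs); for the ratio of two independent exponentials, the approximate CDF also admits the simple closed form $\lambda_{\text{D}}x/(\lambda_{\text{D}}x+\lambda_{\text{E}})$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 300_000
snr_D, snr_E = 1e4, 1e2   # hypothetical mean SNRs: destination link much stronger
x = 50.0                  # evaluation point of the ratio CDF

g_sd = rng.exponential(snr_D, M)
g_se = rng.exponential(snr_E, M)
p_exact = np.mean((1 + g_sd) / (1 + g_se) <= x)   # exact ratio (1+SNR_D)/(1+SNR_E)
p_approx = np.mean(g_sd / g_se <= x)              # unity terms neglected
p_cf = x / (x + snr_D / snr_E)                    # closed form for the Exp/Exp ratio
print(p_exact, p_approx, p_cf)
```

When both mean SNRs are large, the exact and approximate CDFs are indistinguishable to within sampling error, which justifies working with $\Gamma_{\text{SD}}^{(n)}/\Gamma_{\text{SE}}^{(n)}$ in the asymptotic ESR analysis.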
In the OTS-BKU scheme, $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ in the high-SNR regime is derived following Section \ref{sec_distribution_snr_ratio} for the OTS-BKU case using (\ref{cdf_SD}) and (\ref{PDF_SE}) in (\ref{gamma_r_high_snr}) as \begin{align}\label{gamma_ots_high_snr} &F_{\Gamma_{\text{R}}^{(n^*)}}(x)=(1-s)+s\times\mathbb{P}\Bigg[\max_{n\in \{1,2\ldots, N\}} \Bigg\{\frac{{\Gamma}_{\text{SD}}^{(n)}}{{\Gamma}_{\text{SE}}^{(n)}} \Bigg\}\le x\Bigg] \nn \\ &=1-\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}(-1)^{n+1}\binom{n}{i_1,\ldots,i_K}\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\frac{s}{\prod\limits_{k=1}^{K}\Big(x+\frac{\lambda_{\text{E}}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}}. \end{align} The asymptotic ESR is then evaluated by following the same procedure as adopted for the OTS-BKU case in (\ref{ERG_OTS_without}) from Section \ref{section_ergodic_secrecy_rate}. 
Therefore, using $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ from (\ref{gamma_ots_high_snr}) in (\ref{ergodic_secrecy_rate}), the asymptotic ESR is evaluated as \begin{align}\label{ERG_OTS_without_high} &\mathcal{C}_{\text{erg}}^{\infty} =\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(-1)^{n+1}s\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\sum_{k=1}^{K}\Bigg(\frac{\ln{\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)}}{\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}\prod\limits_{\substack{j=1\\j\ne k}}^{K}\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}-\frac{{\lambda}_{E}^{(j)}}{\lambda_{\text{D}}}\Big)^{i_ji_k}}+\sum_{l_k=1}^{i_k-1}A_k^{(i_k)}\nn\\ &\times\frac{\Big(1+\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)^{l_k-i_k} }{(i_k-l_k)}\Bigg). \end{align} In the OTS-BKA scheme, $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ is evaluated with the help of (\ref{cdf_SD}) and (\ref{PDF_SE}) in (\ref{gamma_r_high_snr}) in the high-SNR regime, following the same procedure as was adopted to derive (\ref{ERG_OTS_with}) in Section \ref{section_ergodic_secrecy_rate}.
Using $F_{\Gamma_{\text{R}}^{(n^*)}}(x)$ in (\ref{ergodic_secrecy_rate}), the asymptotic ESR is written as \begin{align}\label{erg_asym_ots_with} &\mathcal{C}_{\text{erg}}^{\infty}=\frac{1}{\ln(2)}\sum_{n=1}^{N}\sum_{\mathbf{i}\in\mathcal{M}^{(n)}}\binom{N}{n}\binom{n}{i_1,\ldots,i_K}(-1)^{n+1}s^n\nn\\ &\times\Bigg(\prod_{k=1}^{K}\Bigg(\frac{\prod_{m=1}^{K}\lambda_{\text{E}}^{(m)}}{\lambda_{\text{D}}\prod\limits_{\substack{j=1\\j\ne k}}^K(\lambda_{\text{E}}^{(j)}-\lambda_{\text{E}}^{(k)})}\Bigg)^{i_k}\Bigg)\nn\\ &\times\sum_{k=1}^{K}\Bigg(\frac{\ln{\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)}}{\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)^{i_k}\prod\limits_{\substack{j=1\\j\ne k}}^{K}\Big(\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}-\frac{{\lambda}_{E}^{(j)}}{\lambda_{\text{D}}}\Big)^{i_ji_k}}+\sum_{l_k=1}^{i_k-1}A_k^{(i_k)}\nn\\ &\times\frac{\Big(1+\frac{{\lambda}_{E}^{(k)}}{\lambda_{\text{D}}}\Big)^{l_k-i_k} }{(i_k-l_k)}\Bigg). \end{align} We notice from (\ref{ERG_OTS_without_high}) and (\ref{erg_asym_ots_with}) that the asymptotic ESR in both the BKU and BKA cases depends on the ratio of the destination link SNR to the eavesdropping link SNR. Thus, as this ratio improves, the asymptotic ESR also improves. It is difficult to represent (\ref{erg_asym_ots_with}) in the asymptotic straight-line form shown in (\ref{eq_straight_line_form_ESR_MINESBKU}). Therefore, we present it only for $K=1$. In this case, the asymptotic ESR of the OTS-BKU scheme simplifies to \begin{align}\label{erg_asym_ots_k1} \mathcal{C}_{\text{erg}}^{\infty} &=\frac{1}{\ln(2)}\sum_{n=1}^{N}\binom{N}{n}(-1)^{n+1}s\lb(\log\lb(\frac{\lambda_{E}^{(1)}}{\lambda_{\text{D}}}\rb)-H_{n-1}\rb), \end{align} where $H_{n-1}$ denotes the $(n-1)$-th harmonic number. Thus, we find $S^{\infty}=\frac{s}{\ln(2)}$ and \begin{align}\label{l_inf_ots_without} L^{\infty} &=\log\Big(\frac{1}{\lambda_{E}^{(1)}}\Big)+\sum_{n=1}^{N}\binom{N}{n}{(-1)^{n+1}}{{H}_{n-1}}.
\end{align} Similarly, for the OTS-BKA case when $K=1$, we can show $S^{\infty}=\frac{1-(1-s)^N}{\ln(2)}$ and \begin{align}\label{l_inf_ots_with} L^{\infty} &=\log\Big(\frac{1}{\lambda_{E}^{(1)}}\Big)+\sum_{n=1}^{N}\binom{N}{n}\frac{{(-1)^{n+1}s^n}}{1-(1-s)^N}{H_{n-1}}. \end{align} We note here that the slope and the offset parameter for the BKA case with $K=1$ match the result shown in \cite{chinmoy_letter2021}. We observe that the slope of the OTS-BKU case for $K=1$ is the same as that of the MIN-ES-BKU and TTS-BKU cases, and the slope of the OTS-BKA case for $K=1$ is the same as that of the MIN-ES-BKA and TTS-BKA cases. Thus, irrespective of the selection scheme, the slope is the same for the BKU case, and it is likewise the same for the BKA case. However, $L^\infty$ differs across selection schemes and between the BKU and BKA cases. Similar to the observation for the MIN-ES and TTS schemes, $L^\infty$ in the OTS scheme does not depend on $s$ or $1/\lambda_{\text{D}}$ in the BKU case, and depends on $s$, $N$, $K$, and $1/\lambda_{\text{E}}^{(k)}$ in the BKA case. {For $K=1$, $L^\infty$ in the OTS scheme in (\ref{l_inf_ots_without}) and (\ref{l_inf_ots_with}) is smaller than $L^\infty$ of the MIN-ES and TTS schemes in (\ref{l_infinity_min_without}), (\ref{l_infinity_min_with}), (\ref{l_inf_tts_without}), and (\ref{l_inf_tts_with}). This is why the OTS scheme performs the best.} Expressing the asymptotic ESR in terms of the slope and offset clearly shows how it depends on $s$, $1/\lambda_{\text{D}}$, $1/\lambda_{\text{E}}^{(k)}$ for all $k\in\{1,\ldots,K\}$, $K$, and $N$, which was difficult to see from the exact ESR expressions. Further, it is to be noted that the asymptotic ESR depends on the parameters $s$, $1/\lambda_{\text{D}}$, $1/\lambda_{\text{E}}^{(k)}$ for all $k\in\{1,\ldots,K\}$, $K$, and $N$; in contrast, the asymptotes of the SOP and NZSR depend only on $s$ in the BKU case and only on $s$ and $N$ in the BKA case.
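The $K=1$ straight-line form can be checked numerically. The sketch below (parameter values are illustrative, and natural logarithms are assumed for $\log$) evaluates (\ref{erg_asym_ots_k1}) term by term, compares it with $S^{\infty}\big(\log(1/\lambda_{\text{D}})-L^{\infty}\big)$ built from (\ref{l_inf_ots_without}), and confirms that the OTS-BKA slope never falls below the OTS-BKU slope:

```python
from math import comb, log

def harmonic(m):
    # H_m = sum_{j=1}^{m} 1/j, with H_0 = 0
    return sum(1.0 / j for j in range(1, m + 1))

def esr_bku_direct(s, lam_D, lam_E, N):
    # Asymptotic ESR of OTS-BKU for K = 1, evaluated term by term
    return (1.0 / log(2)) * sum(
        comb(N, n) * (-1) ** (n + 1) * s * (log(lam_E / lam_D) - harmonic(n - 1))
        for n in range(1, N + 1)
    )

def slope_offset_bku(s, lam_E, N):
    # Slope s/ln(2) and offset L^inf of the OTS-BKU straight-line form
    S = s / log(2)
    L = log(1.0 / lam_E) + sum(
        comb(N, n) * (-1) ** (n + 1) * harmonic(n - 1) for n in range(1, N + 1)
    )
    return S, L

s, lam_D, lam_E, N = 0.2, 1e-3, 0.25, 5
S_bku, L_bku = slope_offset_bku(s, lam_E, N)
# The straight-line form reproduces the direct evaluation
assert abs(esr_bku_direct(s, lam_D, lam_E, N)
           - S_bku * (log(1.0 / lam_D) - L_bku)) < 1e-9
# BKA slope (1-(1-s)^N)/ln(2) is at least the BKU slope s/ln(2)
S_bka = (1.0 - (1.0 - s) ** N) / log(2)
assert S_bka >= S_bku
```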
\section{Numerical Results} \label{section_numerical_results} This section presents the numerical results along with simulations. Throughout this section, we assume that the threshold secrecy rate is set to $R_{th}=1$ bits per channel use (bpcu), {$K=3$, and the eavesdroppers' SNRs are set to $1/\lambda_{\text{E}}^{(1)}=6$ dB, $1/\lambda_{\text{E}}^{(2)}=9$ dB, $1/\lambda_{\text{E}}^{(3)}=13$ dB for each $n\in\{1,\ldots, N\}$}, unless stated otherwise. Results are plotted for the SOP and ESR for each selection scheme in the BKU (black curves) and BKA (red curves) cases. The NZSR is not shown, as it can easily be obtained from the SOP, and the observations are similar to those for the SOP. Simulation results are denoted by `$\times$'; the horizontal solid lines denote the asymptotes in Figs. \ref{fig_sop_s} to \ref{fig_erg_nk}, and the slanted straight lines denote the asymptotes in Fig. \ref{fig_erg_asym}. The validity of our analysis is evident from the figures, as the numerical and simulation results match perfectly. We notice from all the figures that the OTS scheme outperforms all other schemes in both the BKU and BKA cases. Further, the BKA case outperforms the BKU case for each selection scheme, as the backhaul link activity knowledge improves the secrecy performance. \subsection{Secrecy Outage Probability} \begin{figure} \centering \includegraphics[width=3.6in]{fig_sop_s.eps} \caption{Variation of SOP with ${1}/{\lambda_{\text{D}}}$ for different values of $s$.} \label{fig_sop_s} \end{figure} In Fig. \ref{fig_sop_s}, the SOP is plotted versus the destination channel SNR $1/\lambda_{\text{D}}$ for different values of $s = \{0.20, 0.90\}$ with $N=5$ and $K=3$. We observe that the secrecy performance improves as the SNR $1/\lambda_{\text{D}}$ increases until it reaches its asymptotic value. We also notice that the performance improves as $s$ increases.
Further, at low SNRs the MIN-ES scheme performs better than the TTS scheme, while beyond a certain value of $1/\lambda_{\text{D}}$ the TTS scheme overtakes the MIN-ES scheme. This is because, at low SNR, selecting the transmitter with the worst eavesdropping link is more advantageous than selecting the best destination link among already degraded destination links. In contrast, selection among the destination links is advantageous at high SNR, when the links are usually good. At high SNRs, the performance difference between the TTS and MIN-ES schemes is also more significant than that between the OTS and TTS schemes. It is also observed that the SOP saturates to a constant value at high SNR, which is represented by the asymptotic straight line. As $s$ increases, the saturation level decreases. Further, the BKA case has a lower saturation level than the BKU case. This can also be confirmed from the asymptotic analysis in Section \ref{sec_asymptotic_analysis_sop_unreliable}. \begin{figure} \centering \includegraphics[width=3.6in]{fig_sop_n.eps} \caption{Variation of SOP with ${1}/{\lambda_{\text{D}}}$ for different values of $N$.} \label{fig_sop_n} \end{figure} Fig. \ref{fig_sop_n} shows the SOP versus $1/\lambda_{\text{D}}$ for different values of $N = \{2, 5\}$ when $s = 0.20$ and $K=3$. We observe that the SOP decreases for all considered schemes as the number of transmitters increases: the more transmitter choices, the better the secrecy performance of the system. However, the rate of performance improvement is far slower in the BKU case. Thus, the secrecy performance with fewer transmitters ($N=2$) in the BKA case outperforms that of the BKU case with a larger number of transmitters ($N=5$) for any selection scheme.
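The two saturation levels can be reproduced with a small Monte Carlo sketch of the high-SNR limit (an illustration under the Bernoulli on/off backhaul model assumed in this paper; the parameter values below are ours). When $1/\lambda_{\text{D}}\to\infty$, an outage occurs in the BKU case exactly when the selected transmitter's backhaul is inactive, and in the BKA case only when all $N$ backhauls are inactive, giving floors of $1-s$ and $(1-s)^N$, respectively:

```python
import random

random.seed(1)

s, N = 0.20, 5
trials = 100_000
out_bku = out_bka = 0
for _ in range(trials):
    active = [random.random() < s for _ in range(N)]
    # BKU: selection cannot use backhaul knowledge, so any backhaul-agnostic
    # rule (here: a uniformly random pick) hits an inactive backhaul w.p. 1-s.
    if not active[random.randrange(N)]:
        out_bku += 1
    # BKA: selection is restricted to active transmitters, so an outage at
    # asymptotically high SNR occurs only when no backhaul is active.
    if not any(active):
        out_bka += 1

assert abs(out_bku / trials - (1 - s)) < 0.01        # floor depends on s only
assert abs(out_bka / trials - (1 - s) ** N) < 0.01   # floor depends on s and N
```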
The SOP in the BKU case for all the selection schemes saturates to the same constant value irrespective of $N$, as it depends only on $s$; in the BKA case, however, the SOP for the selection schemes saturates to different levels, as it depends on both $s$ and $N$. \begin{figure} \centering \includegraphics[width=3.6in]{fig_sop_k.eps} \caption{ Variation of SOP with ${1}/{\lambda_{\text{D}}}$ for different values of $K$.} \label{fig_sop_k} \end{figure} Fig. \ref{fig_sop_k} plots the SOP versus $1/\lambda_{\text{D}}$ for the proposed selection schemes for different numbers of eavesdroppers $K=\{1,3\}$ when $s=0.90$ and $N=5$. The eavesdroppers' SNRs are set to $1/\lambda_{\text{E}}^{(1)}=6$ dB, $1/\lambda_{\text{E}}^{(2)}=9$ dB, and $1/\lambda_{\text{E}}^{(3)}=13$ dB when $K=3$ for each $n\in\{1,\ldots, N\}$, and $1/\lambda_{\text{E}}^{(1)}=13$ dB when $K=1$. We note that the secrecy performance degrades as the number of eavesdroppers increases. The MIN-ES scheme performs better than the TTS scheme at low SNRs, and the TTS scheme outperforms the MIN-ES scheme at high SNRs, as was also found in Fig. \ref{fig_sop_s}. We find that, irrespective of the selection scheme, the asymptotes for both values of $K$ coincide for a given backhaul link activity knowledge case (BKU or BKA). This shows that the asymptotes are independent of $K$ and of the selection schemes, and depend only on whether the backhaul link activity knowledge is available. This confirms the asymptotic SOP analysis in Section \ref{sec_asymptotic_analysis_sop_unreliable}. \subsection{Ergodic Secrecy Rate} \begin{figure} \centering \includegraphics[width=3.6in]{fig_erg_s.eps} \caption{Variation of ESR with ${1}/{\lambda_{\text{D}}}$ for different values of $s$.} \label{fig_erg_s} \end{figure} Fig. \ref{fig_erg_s} plots the ESR versus $1/\lambda_{\text{D}}$ for two different values of the backhaul link reliability factor, $s=\{0.90, 0.20\}$, with $N=5$ and $K=3$.
The ESR performance of the transmitter selection schemes follows a trend similar to that of the SOP; e.g., the MIN-ES scheme performs better than the TTS scheme at low SNRs, and the ESR improves as $s$ increases. However, the ESR improvement from the BKU to the BKA case obtained by utilizing the backhaul activity knowledge is more significant when $s$ is low than when $s$ is high. This implies that the knowledge of backhaul link activity can significantly improve the secrecy performance when the backhaul link reliability is relatively low. \begin{figure} \centering \includegraphics[width=3.6in]{fig_erg_nk.eps} \caption{Variation of ESR with ${1}/{\lambda_{\text{D}}}$ for different values of $N$ and $K$.} \label{fig_erg_nk} \end{figure} Fig. \ref{fig_erg_nk} shows the effect of different numbers of eavesdroppers, $K=\{1, 3\}$, and transmitters, $N=\{2,5\}$, on the ESR for all the selection schemes when $s=0.2$. The eavesdroppers' SNRs are set to $1/\lambda_{\text{E}}^{(1)}=6$ dB, $1/\lambda_{\text{E}}^{(2)}=9$ dB, and $1/\lambda_{\text{E}}^{(3)}=13$ dB when $K=3$ for each $n\in\{1,\ldots, N\}$, and $1/\lambda_{\text{E}}^{(1)}=13$ dB when $K=1$. We notice that the ESR improves as the number of transmitters increases. However, the system's secrecy performance decreases as the number of eavesdroppers increases. \begin{figure} \centering \includegraphics[width=3.6in]{fig_erg_asym.eps} \caption{Variation of ESR with ${1}/{\lambda_{\text{D}}}$ for perfect and unreliable backhaul links.} \label{fig_erg_asym} \end{figure} In Fig. \ref{fig_erg_asym}, the ESR and its asymptotic values are plotted versus $\frac{1}{\lambda_{\text{D}}}$ when $N=5$ and $K=1$ for the perfect and unreliable backhaul (BKU and BKA) cases. The ESR axis is shown on a linear scale so that the asymptotes can be plotted as straight lines following (\ref{eq_straight_line_form_ESR_MINESBKU}). The asymptotic straight lines match the exact ESRs well in the high-SNR regime.
We notice that the perfect backhaul case, i.e., $s=1$, outperforms the unreliable backhaul case, i.e., $s<1$, which is intuitive. It also has the highest slope. As $s$ decreases from unity, the slope of the ESR in the unreliable case also decreases. Further, among the unreliable backhaul cases, the BKA case has a higher slope than the BKU case. This signifies that the rate of improvement of the ESR with SNR is better in the BKA case than in the BKU case, owing to the available backhaul link activity knowledge. It is also noticed that the slope in the BKU case is the same irrespective of the selection scheme; the same holds in the BKA case. This shows that the slope does not depend on the selection scheme; it depends only on the backhaul link activity knowledge and the reliability factor. This can also be confirmed from the slopes derived in the asymptotic analysis in Section \ref{section_asymptotic_ESR}. \section{Conclusions} \label{conclusions} We adopt a generalized methodology to include the backhaul reliability factor in the secrecy performance analysis of transmitter selection schemes against colluding eavesdroppers, and provide a uniform approach to obtain exact closed-form NZSR, SOP, and ESR expressions from the distribution of the ratio of the destination link SNR to the eavesdropping link SNR, irrespective of the selection scheme. Simplified asymptotic expressions are provided for each transmitter selection scheme to understand the effect of the system parameters and of the available backhaul activity knowledge. We observe that the asymptotic NZSR and SOP performance cannot be improved by increasing the number of transmitters when the backhaul link activity knowledge is unavailable; it saturates to a value that depends only on the backhaul reliability factor. In contrast, the asymptotic saturation level can be improved by increasing the number of transmitters when the backhaul link activity knowledge is available.
In both cases, the asymptotic saturation value does not depend on the number of eavesdroppers. The unavailability of backhaul activity knowledge degrades the rate of improvement of the ESR with SNR. Although the number of eavesdroppers has no effect on this rate of improvement, the ESR itself degrades as the number of eavesdroppers increases. We notice that the knowledge of backhaul link activity can significantly improve the secrecy performance when the backhaul link reliability is relatively low. We also find that the MIN-ES scheme is better than the TTS scheme at low SNR; however, the TTS scheme outperforms it as the SNR improves. \bibliographystyle{IEEEtran}
\section{#1}\vspace{-3pt}} \newcommand{\Subsection}[1]{\vspace{-2pt}\subsection{#1}\vspace{-2pt}} \newcommand{\Subsubsection}[1]{\vspace{-4pt}\subsubsection{#1}\vspace{-4pt}} \newenvironment{tight_itemize}{\begin{itemize} \itemsep -1pt}{\end{itemize}} \newenvironment{tight_enumerate}{\begin{enumerate} \itemsep -1pt}{\end{enumerate}} \linespread{0.99} \begin{document} \title{\huge{A Framework for Analysis of Computational Imaging Systems: Role of Signal Prior, Sensor Noise and Multiplexing}} \author{Kaushik~Mitra,~\IEEEmembership{Member,~IEEE}, Oliver~S.~Cossairt,~\IEEEmembership{Member,~IEEE} and~ Ashok~Veeraraghavan,~\IEEEmembership{Member,~IEEE} \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem K. Mitra and A. Veeraraghavan are with the Department of Electrical and Computer Engineering, Rice University, Houston, TX, 77025.\protect\\ E-mail: Kaushik.Mitra@rice.edu and vashok@rice.edu \IEEEcompsocthanksitem O. S. Cossairt is with the Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208. 
\protect\\ E-mail: ollie@eecs.northwestern.edu \thanks{} } \maketitle \begin{abstract} \input{abstract.tex} \end{abstract} \begin{keywords} Computational imaging, Extended depth-of-field (EDOF), Motion deblurring, GMM \end{keywords} \Section{Introduction} \label{sec:Intro} \input{intro.tex} \Section{Related Work} \label{sec:relatedWork} \input{relatedWork.tex} \Section{Problem Definition and Notation} \label{sec:ProbDef} \input{ProbDef.tex} \Section{Analytic Performance Characterization of CI Systems Using GMM Prior} \label{sec:GMMPrior} \input{GMMPrior.tex} \Section{Common Framework for Analysis of CI Systems} \label{sec:frmAnalCIsystems} \input{frmAnalCIsystems.tex} \Section{Performance Analysis of EDOF Systems} \label{sec:perfEDOFsystems} \input{perfEDOFsystems.tex} \Section{Performance Analysis of Motion Deblurring Systems} \label{sec:perfMotDeblurSystems} \input{perfMotDeblurSystems.tex} \Section{Performance Analysis of Light Field Systems} \label{sec:perfLF} \input{perfLF.tex} \Section{Exact MMSE vs. Its Lower and Upper Bounds} \label{sec:exactVsBounds} \input{exactMMSEvsBounds.tex} \Section{Discussions} \label{sec:Discussions} \input{Discussions.tex} \Section{Acknowledgements} Kaushik Mitra and Ashok Veeraraghavan acknowledge support through NSF Grants NSF-IIS: 1116718, NSF-CCF:1117939 and a research grant from Samsung Advanced Institute of Technology through the Samsung GRO program. \vspace{-0.1in} {\small \bibliographystyle{ieee} \subsection{Scope and Limitations} \noindent \textbf{Image Formation Model.} Our analysis assumes a linear image formation model. Non-linear imaging systems, such as two/three-photon microscopes and coherent imaging systems, are outside the scope of this paper. Nevertheless, our analysis covers a very large array of existing imaging systems \cite{raskar2006coded,MaskPaper,levin2007image,wagadarikar2008single,ihrke2010theory,ratner2007illumination}.
We use a geometric optics model and ignore the effect of diffraction due to small apertures. \vspace{.05in} \noindent\textbf{Noise Model.} We use an affine noise model to describe the combined effects of \textit{signal-independent} and \textit{signal-dependent} noise. Signal-dependent Poisson noise is approximated using a Gaussian noise model (as described in Section \ref{sec:Noise}). \vspace{.05in} \noindent\textbf{Single Image Capture.} We analyze only single-image CI techniques. Our results are therefore not applicable to multi-image capture techniques such as those of Hasinoff et al. \cite{Hasinoff:09} (EDOF) and Zhang et al. \cite{zhang2010defocusdenoise} (motion deblurring). \vspace{.05in} \noindent \textbf{Patch Based Prior.} Learning a GMM prior on entire images would require an impossibly large training set. To combat this problem, we train our GMM on image patches, and solve the image estimation problem in a patch-wise manner. As a result, our technique requires that multiplexed measurements be restricted to linear combinations of pixels in a neighborhood smaller than the GMM patch size. \vspace{.05in} \noindent \textbf{Shift-Invariant Blur.} We analyze motion and defocus deblurring cameras under the assumption of a single known shift-invariant blur kernel. This amounts to the assumption that either the depth/motion is position-independent, or the blur is independent of depth/motion. We do not analyze errors due to inaccurate kernel estimation (for coded aperture and flutter shutter \cite{raskar2006coded,MaskPaper,levin2007image}) or due to the degree of depth/motion invariance (for focal sweep, cubic phase plate, motion invariant photography \cite{diffusionCoding,Baek:10,levin2008motion,cho2010motion}). \subsection{Optimal Exposure Setting for Motion Deblurring} We first fix the exposure setting of the impulse imaging system based on the range of object velocities we want to capture.
The exposure is set to a value such that the motion blur for the desired range of velocities is less than a pixel. We then analytically compute the expected SNR gain of different exposure settings (PSF kernel lengths) with respect to the impulse imaging system (of PSF kernel length $1$) at various light levels; see Figure \ref{fig:motionDeblurring}(a). For light levels less than $150$ lux, capturing the image with a larger exposure and then deblurring is the better option, whereas for light levels greater than $150$ lux we should capture the impulse image and then denoise. Figure \ref{fig:motionDeblurring}(b) shows the optimal blur PSF length at different light levels. At a light level of $1$ lux, the optimal PSF length is $23$, whereas for light levels greater than or equal to $150$ lux the optimal length is $1$, i.e., the impulse image setting. Figure \ref{fig:motionDeblurring}(c-e) shows the simulated results with different PSF kernel lengths at a few lighting levels. \begin{figure}[htb] \centering \includegraphics[width=3.5in]{figures/OptSetting/defocusDeblurring.png} \caption{\label{fig:defocusDeblurring}\emph{Optimal aperture setting for defocus deblurring:} Depending on the desired DOF, we fix the aperture size of the impulse imaging system so that the defocus blur is less than a pixel. We then analytically compute the SNR gain of different aperture settings (PSF kernel size) with respect to the impulse imaging system of PSF kernel size $1\times1$ for various light levels; see subplot (a). For light levels less than $400$ lux, capturing the image with a larger aperture and then deblurring is the better option, whereas for light levels greater than $400$ lux we should capture the impulse image and then denoise. In subplot (b) we show the optimal blur PSF size at different light levels. At a light level of $1$ lux, the optimal PSF is $9\times9$, whereas for light levels greater than $400$ lux the optimal is $1\times1$, i.e., the impulse image setting.
Subplots (c-d) show the simulated results with different PSF sizes at a few lighting levels.} \end{figure} \subsection{Optimal Aperture Setting For Defocus Deblurring} \label{sec:OptAperSetting} Depending on the desired DOF, we fix the aperture size of the impulse imaging system so that the defocus blur is less than a pixel. We then analytically compute the SNR gain of different aperture settings (PSF kernel size) with respect to the impulse imaging system of PSF kernel size $1\times1$ for various light levels; see Figure \ref{fig:defocusDeblurring}(a). From this plot, we conclude that for light levels less than $400$ lux, capturing the image with a larger aperture and then deblurring is the better option, whereas for light levels greater than $400$ lux we should capture the impulse image and then denoise. Figure \ref{fig:defocusDeblurring}(b) shows the optimal blur PSF size at different light levels. At a light level of $1$ lux, the optimal PSF is $9\times9$, whereas for light levels greater than $400$ lux the optimal is $1\times1$, i.e., the impulse image setting. Figure \ref{fig:defocusDeblurring}(c-d) shows the simulated results with different PSF sizes at a few lighting levels.
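The capture-and-deblur versus impulse-and-denoise tradeoff can be illustrated with a toy calculation (a sketch only: it uses plain inverse filtering rather than the GMM-based MMSE estimator analyzed in this paper, and the kernel, light levels, and read-noise value below are illustrative assumptions). Under an affine noise model, a capture with unnormalized PSF $h$ collects $m=\sum_k h_k$ times more light but pays the standard inverse-filter noise amplification $g=\sqrt{\tfrac{1}{n}\sum_f 1/|H_f|^2}$ after deconvolution, so its SNR gain over impulse imaging is $\sqrt{(I+\sigma_r^2)/(mI+\sigma_r^2)}\,/\,g$, which shrinks as the light level $I$ grows:

```python
import cmath

def noise_amp(h, n):
    # g = sqrt(mean_f 1/|H_f|^2): per-pixel noise amplification of circular
    # inverse filtering on n samples with kernel h (zero-padded to length n)
    total = 0.0
    for f in range(n):
        H = sum(h[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(len(h)))
        total += 1.0 / abs(H) ** 2
    return (total / n) ** 0.5

def snr_gain(h, n, light, read_var):
    # SNR of the deblurred long-exposure capture relative to the impulse
    # capture, under affine (photon + read) noise
    m = sum(h)
    return ((light + read_var) / (m * light + read_var)) ** 0.5 / noise_amp(h, n)

n = 64
box = [1.0] * 23                # 23-pixel box blur (long exposure)
assert abs(noise_amp([1.0], n) - 1.0) < 1e-9   # impulse PSF amplifies nothing

gains = [snr_gain(box, n, I, read_var=16.0) for I in (1.0, 10.0, 100.0, 1000.0)]
# The benefit of gathering more light shrinks as the scene gets brighter,
# consistent with the crossover behavior described above
assert all(a > b for a, b in zip(gains, gains[1:]))
```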
\subsection{\@startsection{subsection}{3}% \z@{.5\linespacing\@plus.7\linespacing}{.3\linespacing}% {\bfseries\centering}} \makeatother \makeatletter \def\subsubsection{\@startsection{subsubsection}{3}% \z@{.5\linespacing\@plus.7\linespacing}{.3\linespacing}% {\centering}} \makeatother \makeatletter \def\ifx\protect\@typeset@protect\expandafter\footnote\else\expandafter\@gobble\fi{\ifx\protect\@typeset@protect\expandafter\footnote\else\expandafter\@gobble\fi} \makeatother \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{question}[theorem]{Question} \newtheorem{fact}[theorem]{Fact} \newtheorem{remark}[theorem]{Remark} \newcounter{claimcounter} \numberwithin{claimcounter}{theorem} \newenvironment{claim}{\stepcounter{claimcounter}{\noindent {\bf Claim \theclaimcounter.}}}{} \newenvironment{claimproof}[1]{\noindent{{\em Proof.}}\space#1}{\hfill $\rule{0.35em}{0.35em}$} \newcommand{\RM}[2]{\mathsf{RM}^{\mathcal{{#1}}}({#2})} \newcommand{\typesp}[3]{S^{\mathcal{{#1}}}_{{#2}}({#3})} \newcommand{\typelem}[3]{\mathsf{tp}^{\mathcal{{#1}}}(\vec{{#2}}/{#3})} \newcommand*\dep{{=\mkern-1.2mu}} \newcommand*\apdep[1]{{=_{#1}\mkern-1.2mu}} \newcommand{\Dep}[2]{\dep(\vec{{#1}}, \vec{{#2}})} \newcommand*\bota{{\bot\mkern-1.2mu}} \newcommand{\perpc}[1]{\perp_{#1}} \newcommand{\perp\!\!\!\perp}{\perp\!\!\!\perp} \def\presuper#1#2% {\mathop{}% \mathopen{\vphantom{#2}}^{#1}% \kern-\scriptspace% #2} \newcommand{\relort}[2]{\presuper{{#1}}{\rotatebox[origin=c]{90}{$\Vdash$}}^{#2}} \def\ \rotatebox[origin=c]{90}{$\Vdash$} \ {\ \rotatebox[origin=c]{90}{$\Vdash$} \ } \begin{document} \title{A Logic for Arguing About Probabilities in Measure Teams} \thanks{The research of the second author was supported by the Finnish Academy of Science and Letters (Vilho, 
Yrj\"o and Kalle V\"ais\"al\"a foundation).} \author{Tapani Hyttinen} \address{Department of Mathematics and Statistics, University of Helsinki, Finland} \author{Gianluca Paolini} \address{Department of Mathematics and Statistics, University of Helsinki, Finland} \author{Jouko V\"a\"an\"anen} \address{Department of Mathematics and Statistics, University of Helsinki, Finland and Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands} \begin{abstract} We use sets of assignments, a.k.a. teams, and measures on them to define probabilities of first-order formulas in given data. We then axiomatise first-order properties of such probabilities and prove a completeness theorem for our axiomatisation. We use the Hardy-Weinberg Principle of biology and Bell's Inequalities of quantum physics as examples. \end{abstract} \maketitle \def\mathcal A{\mathcal A} \section{Introduction} The logic of propositions with assigned probabilities is usually associated with nondeductive methods such as inductive reasoning (\cite{MR0040253}). The concept of probability in such an approach is the degree of confirmation or belief. Instead, in this paper we assign probabilities to propositions using the {\em frequency} interpretation and study properties of such probabilities. Thus, while probability logic usually focuses on the question of how to assign probabilities to composite formulas, we focus on the symmetric question of how to axiomatise formulas built up from probabilities. To make using the frequency interpretation possible in defining probabilities, we adopt the approach of {\em team semantics} from \cite{vaananen}. Suppose $\mathcal A$ is a first-order structure with domain $A$. Suppose furthermore $v_0,\ldots,v_n$ are variables that have values in $A$.
If we have a set $X$ of assignments of values to $v_0,\ldots,v_n$ in $A$, called a {\em team}, we may ask, what is the probability that a randomly chosen assignment in $X$ satisfies a given first-order formula $\phi(v_0,\ldots,v_n)$ in $\mathcal A$? For this to make perfect sense we need to specify a probability function for relevant subsets of $X$. Our {\em measure teams} are exactly such teams. In this paper we give axioms for making inferences about first-order properties of such probabilities, and prove the completeness of our axioms. In the context of experimental science it is natural to consider probabilities of formulas rather than just the truth values true/false. In the world of Big Data this is even more relevant. We suggest taking the concept of a measure team as a starting point and using it to compute the probabilities of formulas, rather than having the probabilities as given, as in the probability logic of \cite{MR0040253, MR0175755}. In a sense we can argue about the probabilities and have the evidence---the data, or team as we call it---as part of the discussion. The measure teams that arise from actual experiments are, of course, finite. Indeed, the simplest measure teams consist just of a finite number of assignments of values to fixed variables, as in the table in Figure~\ref{dis} of 8 rows of binary data. \begin{figure}[h] $$\begin{array}{|c|c|c|c|c|} \hline \phantom{a} & v_0 & v_1 & v_2 & v_3 \\ \hline 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 2 & 1 & 1 & 1 & 1 \\ 3 & 1 & 1 & 1 & 0 \\ 4 & 0 & 0 & 1 & 1 \\ 5 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 0 & 0 \\ 7 & 0 & 0 & 0 & 0 \\ \hline \end{array} $$\caption{A discrete measure team \label{dis}}\end{figure} An example of a finite measure team in biology is a pool of genes. One of the pioneering mathematical results in genetics is the Hardy-Weinberg Theorem, which shows that a conservation phenomenon takes place in a gene pool from generation to generation under certain assumptions, such as random mating.
The Hardy-Weinberg Theorem is an example of a property of measure teams that can be expressed and proved in our setup. Despite the finiteness of teams arising from experiments, we consider in this paper mainly infinite teams, typically of continuum size, which abstract away the finiteness of empirical observations. Our Completeness Theorem (Theorem~\ref{main_theorem}) is with respect to infinite measure teams. A paradigm example is an idealised measurement of given variables $v_0,\ldots,v_n$ at all points of time starting at time $0$ and ending at time $1$ (see Figure~\ref{flow}). The values of the variables can be, e.g., real numbers which change continuously with time. Thus we have an assignment $s_t$ that depends continuously on time $t$ and interprets the variables $v_0,\ldots,v_3$ at every point of time. When time progresses from $0$ to $1$, the vector $(s_t(v_0),\ldots,s_t(v_3))$ flows from $(s_0(v_0),\ldots,s_0(v_3))$ to $(s_1(v_0),\ldots,s_1(v_3))$. It seems appropriate to call such teams {\em continuous teams}, as the assignment changes continuously with time. In the physical sciences, variables such as temperature, speed, pressure, amplitude, force, etc., are typically continuous in time. Therefore the concept of a continuous team would seem to cover a lot of examples. Continuous teams (when considered with respect to the Lebesgue measure) are examples of measure teams, the topic of this paper. \begin{figure}[h] $$\begin{array}{|c|c|c|c|c|} \hline t & v_0 & v_1 & v_2 & v_3 \\ \hline 0 &s_0(v_0) & s_0(v_1) & s_0(v_2) & s_0(v_3) \\ \vdots &\vdots & \vdots & \vdots & \vdots \\ t &s_t(v_0) & s_t(v_1) & s_t(v_2) & s_t(v_3) \\ \vdots &\vdots & \vdots & \vdots & \vdots \\ 1 &s_1(v_0) & s_1(v_1) & s_1(v_2) & s_1(v_3) \\ \hline \end{array} $$\caption{A continuous measure team \label{flow}}\end{figure} \section{Measure Teams} We denote by $\mathrm{Var} = \left\{ v_i \, | \, i < \omega \right\}$ the set of individual first-order variables.
\begin{definition}[Multi-team] A {\em multi-team} $X$ with values in $A$ and domain $\mathrm{dom}(X) \subseteq \mathrm{Var}$ is a pair $(\Omega, \tau)$ such that $\Omega$ is a set and $\tau: \Omega \rightarrow A^{\mathrm{dom}(X)}$ is a function. \end{definition} Given a multi-team $(\Omega, \tau)$, if we put a probability measure on $\Omega$, we get a probabilistic notion of a team. Of course, for this definition to be useful one also has to impose some measurability conditions on $\tau$. This idea leads to the notion of a {\em measure team}, which is the focus of the present paper. \begin{definition}[Measure team] Let $L$ be a signature and $\mathcal{A}$ an $L$-structure. A measure team $X$ with values in $\mathcal{A}$ and domain $\mathrm{dom}(X) \subseteq \mathrm{Var}$ is a quadruple $(\Omega, \mathcal{F}, P, \tau)$ such that $(\Omega, \mathcal{F}, P)$ is a probability space and $\tau: \Omega \rightarrow A^{\mathrm{dom}(X)}$ is a measurable function, in the sense that $$\left\{ i \in \Omega \, | \, \mathcal{A} \models_{\tau(i)} \phi \right\} \in \mathcal{F}$$ for every first-order $L$-formula $\phi$ with free variables in $\mathrm{dom}(X)$. \end{definition} If $\Omega$ is countable, then the natural choice for $\mathcal{F}$ is $\mathcal{P}(\Omega)$, i.e. the whole power set of $\Omega$, and the measurability of $\tau$ is automatically ensured. In the uncountable case, the situation is of course more delicate. \begin{definition}[Probability]\label{def_prob} Let $L$ be a signature, $\mathcal{A}$ an $L$-structure, $X = (\Omega, \mathcal{F}, P, \tau)$ a measure team with values in $\mathcal{A}$ and $\phi$ a first-order $L$-formula with free variables in $\mathrm{dom}(X)$. We let \[ [\phi]_X = P(\left\{ i \in \Omega \, | \, \mathcal{A} \models_{\tau(i)} \phi \right\}) .\] \end{definition} That is, $[\phi]_X$ is the probability that a randomly chosen assignment from $X$ satisfies $\phi$.
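As a concrete instance of Definition \ref{def_prob}, consider a continuous measure team on $([0,1],\mathcal{L},P)$ with $\tau(t)(v_0)=t$ and $\tau(t)(v_1)=t^2$. For the formula $\phi(v_0,v_1)$ saying $v_0+v_1\leq 1$, the set $\{t \in [0,1] \, | \, t+t^2\leq 1\}$ is the interval $[0,(\sqrt{5}-1)/2]$, so $[\phi]_X=(\sqrt{5}-1)/2$. The following sketch (the example itself is ours, not from the text) checks this against a midpoint approximation of the Lebesgue measure:

```python
import math

def lebesgue_measure(pred, n=100_000):
    # Midpoint approximation of the measure of {t in [0,1] : pred(t)}
    return sum(pred((i + 0.5) / n) for i in range(n)) / n

# Team: tau(t) = (v0, v1) = (t, t^2); formula phi: v0 + v1 <= 1
prob = lebesgue_measure(lambda t: t + t * t <= 1.0)
exact = (math.sqrt(5.0) - 1.0) / 2.0
assert abs(prob - exact) < 1e-3   # [phi]_X = (sqrt(5) - 1) / 2
```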
Notice that because of the measurability conditions imposed on $\tau$, the above definition makes sense. \begin{example}\label{boolean} In Figure \ref{dis} we have an example of a measure team $X = (\Omega, \mathcal{F}, P, \tau)$ with values in the boolean algebra on two elements $\mathcal{A} = (\{0,1\}, 0, \vee, \wedge, \neg)$, where $(\Omega, \mathcal{P}(\Omega), P)$ is an eight-element set endowed with the normalized counting measure (each point has measure $\frac{1}{8}$), the domain of $X$ is $\left\{ v_0, v_1, v_2, v_3 \right\}$ and $\tau$ is as in the figure, e.g. $\tau(0)((v_0, v_1, v_2, v_3)) = (1, 1, 0, 1)$. If we consider the variables $v_i$ as propositional variables, then in this case $$ [v_0 \wedge v_1]_X = \frac{1}{2}, $$ because 50\% of the rows satisfy the propositional formula $v_0\wedge v_1$. We will call measure teams of this particular kind boolean multi-teams. In \cite{QTL} a system of propositional logic based on boolean multi-teams has been investigated. \end{example} We denote by $\mathcal{R} = (\mathbb{R}, 0, 1, +, -, \cdot, \leq)$ the ordered field of real numbers, by $\mathcal{L}$ the $\sigma$-algebra of Lebesgue measurable subsets of $[0, 1]$, and by $P$ the Lebesgue measure on $[0, 1]$. \begin{example} Let $(f_i: [0, 1] \rightarrow \mathbb{R})_{i < 3}$ be continuous functions, and define $\tau: [0, 1] \rightarrow \mathbb{R}^{\left\{ v_0, v_1, v_2 \right\}}$ by $\tau(a)(v_i) = f_i(a)$, for $i < 3$. Then $X = ([0, 1], \mathcal{L}, P, \tau)$ is a measure team with values in $\mathcal{R}$, which we above called a {\em continuous measure team}. This follows from elementary properties of continuous functions and elimination of quantifiers for $\mathcal{R}$. \end{example} \section{Measure Team Logic}\label{measure_team_logic} Our measure team logic is concerned with making inferences about the probabilities themselves, not about how probabilities of composite formulas depend on probabilities of the subformulas.
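For finite teams with the normalized counting measure, the probability of Definition \ref{def_prob} is just a row count. The following sketch illustrates this on a hypothetical eight-row boolean multi-team; the actual rows of Figure \ref{dis} are not reproduced here, and only the first row, $\tau(0) = (1, 1, 0, 1)$, is taken from the text.

```python
from fractions import Fraction

# Hypothetical eight-row boolean multi-team over v0, v1, v2, v3; row 0
# matches tau(0) = (1, 1, 0, 1) from the text, the other rows are made up.
team = [
    (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1), (1, 1, 0, 0),
    (0, 1, 0, 1), (1, 0, 1, 1), (0, 0, 0, 0), (0, 1, 1, 0),
]

def prob(team, phi):
    """[phi]_X under the normalized counting measure: the fraction of
    rows (assignments) satisfying phi."""
    return Fraction(sum(1 for row in team if phi(row)), len(team))

print(prob(team, lambda r: r[0] and r[1]))  # [v0 /\ v1]_X = 1/2
```

On such a team one can also check instances of identities between probabilities by direct counting, e.g. $[v_0]_X = [v_0 \wedge v_1]_X + [v_0 \wedge \neg v_1]_X$.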
An example of a valid sentence of our measure team logic is $$|\phi| = |\phi \wedge \psi| + |\phi \wedge \neg \psi|,$$ where $|\phi|$ denotes the probability of $\phi$ in the team in question. Thus our logic has built-in function symbols $+,\cdot$ for expressing arithmetic relations between probabilities. We now define {\em measure team logic}. Let $L_0$ be a countable signature, $Q \subseteq \mathbb{R}$ countable and $n \leq \omega$. The intended $Q$ is the set of rational numbers $\mathbb{Q}$ (or even $\mathbb{Q} \cap [0, 1]$). We define the signatures $L_Q$ and $L_1$ as follows: \[ L_Q = \left\{ 0, 1, +, -, \cdot, \leq \right\} \cup \left\{ c_q \, | \, q \in Q \right\}\] \[ L_1 = L_1(L_0, n) = L_Q \cup \left\{ |\phi(x)| \; | \; \phi(x) \; \text{an $L_0$-formula with } x = (v_i)_{i < n} \right\},\] where the $c_q$ and $|\phi(x)|$ are constant symbols. Note that $|\phi(x)|$ is considered just as a constant symbol, however complicated the formula $\phi(x)$ is. Without loss of generality we may assume that $L_0 \cap L_1 = \emptyset$; this is to avoid possible confusion. A typical (atomic) formula of our logic is of the form $$|\phi(x)|=|\psi(x)|$$ with the meaning that a randomly chosen assignment from our team is as likely to satisfy $\phi(x)$ as it is to satisfy $\psi(x)$. Another typical (atomic) formula is of the form $$|\phi(x)|=|\psi(x)|+|\theta(x)|$$ with the meaning that the probability that a randomly chosen assignment from our team satisfies $\phi(x)$ is the sum of the corresponding probabilities for $\psi(x)$ and $\theta(x)$. Given a measure team $X$ with values in $\mathcal{A}$ and $\mathrm{dom}(X) = \left\{ v_i \, | \, i < n \right\}$, we let $\mathcal{R}^{X}_Q$ be the expansion of $\mathcal{R} = (\mathbb{R}, 0, 1, +, -, \cdot, \leq)$ to an $L_1$-structure obtained by interpreting the constant $c_q$ as the real number $q$, and by letting $$|\phi(x)|^{\mathcal{R}^{X}_Q} = [\phi(x)]_X,$$ where $[\phi(x)]_X$ is as in Definition \ref{def_prob}.
Thus, $[\phi(x)]_X$ is the value of the constant symbol $|\phi(x)|$ in $\mathcal{R}^{X}_Q$. \begin{definition}[Semantics] Let $\Sigma$ be an $L_1$-theory, $\mathcal{A}$ an $L_0$-structure, $X$ a measure team with values in $\mathcal{A}$ and $\mathrm{dom}(X) = \left\{ v_i \, | \, i < n \right\}$. We define $$X \models \Sigma \;\; \Leftrightarrow_{\tiny def} \;\; \mathcal{R}^{X}_Q \models \Sigma.$$ \end{definition} \begin{definition}[Logical consequence] Let $T$ be an $L_0$-theory and $\Sigma \cup \left\{ \alpha \right\}$ an $L_1$-theory. We define $$(T, \Sigma) \models \alpha$$ if for every $\mathcal{A} \models T$ and every measure team $X$ with values in $\mathcal{A}$ such that $\mathrm{dom}(X) = \left\{ v_i \, | \, i < n \right\}$, we have that $$X \models \Sigma \;\; \Rightarrow \;\; X \models \alpha.$$ \end{definition} We now define a deductive system $(T, \Sigma) \vdash \alpha$ with $T$ an $L_0$-theory, $\Sigma$ an $L_1$-theory and $\alpha$ an $L_0$-formula or an $L_1$-formula. Of course, what we are really interested in is the case when $\alpha$ is an $L_1$-formula, but for things to work, i.e. to prove completeness, we also have to admit the case in which $\alpha$ is an $L_0$-formula. The deductive system $\vdash$ has three components: $\vdash_0$, $\vdash_1$ and $\vdash_2$. The component $\vdash_0$ allows one to deduce $L_0$-formulas from $L_0$-formulas, the component $\vdash_1$ allows one to deduce $L_1$-formulas from $L_1$-formulas, and the component $\vdash_2$ allows one to deduce $L_1$-formulas from $L_0$-formulas. The component $\vdash_0$ is simply the deductive system of first-order logic with respect to $L_0$-formulas. The component $\vdash_1$ is the deductive system of first-order logic with respect to $L_1$-formulas plus the axioms $\mathrm{RCF}^* = \mathrm{Th}(\mathcal{R}_Q)$ (or any axiomatization thereof).
Finally, the component $\vdash_2$ consists of three axioms ($A_0$)-($A_2$) and one rule ($R_0$), as below: \smallskip \begin{prooftree} \AxiomC{} \LeftLabel{($A_0$)} \UnaryInfC{$|\phi \wedge \neg \phi| = 0$} \end{prooftree} \begin{prooftree} \AxiomC{} \LeftLabel{($A_1$)} \UnaryInfC{$|\phi \vee \neg \phi| = 1$} \end{prooftree} \begin{prooftree} \AxiomC{} \LeftLabel{($A_2$)} \UnaryInfC{$|\phi \vee \psi| = |\phi| + |\psi| - |\phi \wedge \psi|$} \end{prooftree} \begin{prooftree} \AxiomC{$\bigvee_{i < k} \bigwedge_{j < m_i} \forall x (\phi^i_j(x) \rightarrow \psi^i_j(x))$} \LeftLabel{($R_0$)} \UnaryInfC{$\bigvee_{i < k} \bigwedge_{j < m_i} (|\phi^i_j(x)| \leq |\psi^i_j(x)|)$} \end{prooftree} where in rule ($R_0$) we assume that the formulas $\forall x (\phi^i_j(x) \rightarrow \psi^i_j(x))$ are sentences. \medskip As an example of the use of our deductive system we show that $$\vdash |\phi| = |\phi \wedge \psi| + |\phi \wedge \neg \psi|.$$ First of all notice that \begin{prooftree} \AxiomC{$\phi \wedge \psi \wedge \phi \wedge \neg\psi \leftrightarrow \psi \wedge \neg \psi$} \LeftLabel{($R_0$)} \UnaryInfC{$|\phi \wedge \psi \wedge \phi \wedge \neg\psi| = |\psi \wedge \neg \psi|$} \AxiomC{} \LeftLabel{($A_0$)} \UnaryInfC{$|\psi \wedge \neg \psi| = 0$} \BinaryInfC{$|\phi \wedge \psi \wedge \phi \wedge \neg\psi| = 0$} \end{prooftree} Secondly, let $\alpha = |(\phi \wedge \psi) \vee (\phi \wedge \neg\psi)| = |\phi \wedge \psi| + |\phi \wedge \neg\psi| - |\phi \wedge \psi \wedge \phi \wedge \neg\psi|$ and notice that \begin{prooftree} \AxiomC{} \LeftLabel{($A_2$)} \UnaryInfC{$\alpha$} \AxiomC{$|\phi \wedge \psi \wedge \phi \wedge \neg\psi| = 0$} \BinaryInfC{$|(\phi \wedge \psi) \vee (\phi \wedge \neg\psi)| = |\phi \wedge \psi| + |\phi \wedge \neg\psi|$} \end{prooftree} Finally, we conclude \begin{prooftree} \AxiomC{$\phi \leftrightarrow (\phi \wedge \psi) \vee (\phi \wedge \neg\psi)$} \LeftLabel{($R_0$)} \UnaryInfC{$|\phi| = |(\phi \wedge \psi) \vee (\phi \wedge \neg\psi)|$} 
\AxiomC{$|(\phi \wedge \psi) \vee (\phi \wedge \neg\psi)| = |\phi \wedge \psi| + |\phi \wedge \neg\psi|$} \BinaryInfC{$|\phi| = |\phi \wedge \psi| + |\phi \wedge \neg \psi|$} \end{prooftree} \medskip \section{Some examples} \begin{example} In this example we look at the usefulness of quantification when one expresses conditions on probabilities. The example is hypothetical in many senses, but it is faithful to the calculations of quantum mechanics. Suppose that we have two observables $v_{1}$ and $v_{2}$ which can take values in the set $\{ 1,2,3,4\}$, a device that produces particles which are all in the same unknown pure state, and a large table $X$ of measurements of these observables on the particles produced by the device (usually it is impossible to measure the two observables independently on one particle, but we overlook this kind of problem here; in \cite{QTL} we have studied logical questions related to the impossibility of experimentally producing tables with values for all observables from every particle). In physics this kind of situation is typically modelled by two self-adjoint operators in a 4-dimensional Hilbert space. Let $P$ be the operator for $v_{1}$ and $p(i)$, $i\in\{ 1,2,3,4\}$, its eigenvector with eigenvalue $i$. Similarly, let $Q$ be the operator for $v_{2}$ and $q(i)$ its eigenvectors. Notice that when one knows the operators $P$ and $Q$, it is possible to calculate the coordinates of the vectors $p(i)$ in the basis of eigenvectors of $Q$. Can we express in measure team logic the condition that the measurements are in harmony with the theory?
Yes, the following is expressible in our logic: there are four complex numbers (pairs of reals) $c_{n}$, $n\in\{ 1,2,3,4\}$, such that for all $i\in\{ 1,2,3,4\}$ the following holds: $\vert c_{i}\vert^{2}=[v_{1}=i]_{X}$ and $\vert \langle s\vert q(i)\rangle\vert^{2}=[v_{2}=i]_{X}$, where $\langle \cdot\vert \cdot\rangle $ is the inner product and $$s=\sum_{n=1}^{4}c_{n}p(n).$$ This is exactly the condition that our data $X$ agrees with the theory. \end{example} \begin{example} This is an example of the use of $T$ in theories $(T,\Sigma )$. We look at homogeneous Markov chains (see e.g. \cite[pg. 61]{markov_chains}). We think of the variables $(v_i)_{i<\omega}$ as random variables and the elements of the team $X$ as tests. The value of the random variable $v_j$ in the test $i\in\Omega$ is $\tau(i)(v_j)$. Figuratively speaking, the team $X$ consists of rows of data concerning the random variables $v_i$. We give axioms which say that the sequence $(v_i)_{i<\omega}$ of random variables is a Markov process. The state space of a Markov process is usually assumed to be countable, which is not a first-order property. However, if one looks at chains in which the state diagram has some additional properties (after we remove some of the arrows with probability $0$), we can overcome this problem. The additional property we study here is that there is some natural number $N$ such that from each node in the state diagram at most $N$ arrows with non-zero probability go out. We also assume that the chain has an initial state from which every process starts. The main example we have in mind is the random walk in a space of dimension $N/2$. Markov chains with these properties can be axiomatized as follows in measure team logic: The vocabulary of $T$ consists of a binary relation $E$ and constants $c_{\eta}$, $\eta\in N^{<\omega}$. The theory $T$ says the following for all $\eta,\xi\in N^{<\omega}$: (a) $(c_{\eta},a)\in E$ iff $a=c_{\eta\frown (i)}$ for some $i<N$.
(b) If $c_{\eta}=c_{\xi}$ then for all $i<N$, $c_{\eta\frown (i)}=c_{\xi\frown (i)}$. \noindent As the initial state we take (the interpretation of) $c_{\emptyset}$ and notice that if $\mathcal{A}\models T$, then the set $G_{\mathcal{A}}$ of the interpretations of the constants equipped with $E\cap (G_{\mathcal{A}}\times G_{\mathcal{A}})$ is a connected directed graph, and every state diagram satisfying our assumptions (after removing some of the useless arrows) can be obtained from a model of $T$ in this way. It is also worth noticing that any process that starts from the initial state stays inside $G_{\mathcal{A}}$. We let $n=\omega$ and describe the probabilities as a Markov process that starts from the initial state. Thus $\Sigma$ consists of the following for all $i,j<\omega$, $\eta\in N^{<\omega}$ and $k<N$: (A) $\vert v_{0}=c_{\emptyset}\vert =1$. (B) $\vert E(v_{i},v_{i+1})\vert =1$. (C) $(\vert v_{i}=c_{\eta}\vert =0)\vee (\vert v_{j}=c_{\eta}\vert =0) \vee \newline \phantom{C} \phantom{C} \phantom{C}\phantom{a\,} (\vert v_{i}=c_{\eta}\wedge v_{i+1}= c_{\eta\frown (k)}\vert\cdot\vert v_{j}=c_{\eta}\vert = \vert v_{j}=c_{\eta}\wedge v_{j+1}= c_{\eta\frown (k)}\vert\cdot\vert v_{i}=c_{\eta}\vert )$. A team $X$ satisfies $\Sigma$ if and only if the stochastic process consisting of the values of the random variables $(v_i)_{i<\omega}$ in $X$ is a homogeneous Markov chain. \end{example} \begin{example}[The Hardy-Weinberg Principle] In the early days of biology there was an apparent paradox: it seemed that in any population the dominant alleles should eventually drive out the recessive ones, but this was not supported by observations and experimental data. The Hardy-Weinberg Principle (\cite{hardy,weinberg}) explains why in a randomly mating population the recessive alleles maintain a fixed proportion, stabilising after just one generation. We consider a diallelic gene with alleles A and a.
The logically possible genotype combinations---not all of which are biologically possible---form the 27-element set $$M=\{AA,Aa,aa\}\times\{AA,Aa,aa\}\times\{AA,Aa,aa\},$$ where the first component of the triples is the genotype of the father, the second that of the mother and the third that of the child. Let $L_0$ be the following signature ($f$ for father, $m$ for mother and $c$ for child) \[ \left\{ P^j_k \, | \, j \in \left\{ f, m, c \right\} \text{ and } k \in \left\{ AA, Aa, aa \right\} \right\},\] where the $P^j_k$ are unary predicate symbols. We get an $L_0$-structure by defining \[\mathcal{M}=(M, (P^j_k)^\mathcal{M})_{j,k},\] where $$(P^f_k)^\mathcal{M}=\{(u,v,w) \in M: u=k \},$$ $$(P^m_k)^\mathcal{M}=\{(u,v,w) \in M: v=k \} \text{ and } (P^c_k)^\mathcal{M}=\{(u,v,w) \in M: w=k \}.$$ Thus $\mathcal{M}$ is simply the set $M$ of logically possible genotypes with their internal structure accessible via the predicates $P^j_k$. Now we can look at measure teams of assignments of variables in this structure. We focus on three variables $v_0,v_1$ and $v_2$, representing three generations. Such a measure team can be thought of as genetic data about three generations of a population. We disregard mating across generations, so for us the next generation is always the children. More formally, a measure team $X$, relevant for the purpose of the Hardy-Weinberg Principle, is a quadruple $(\{1,\ldots, n\}, \mathcal{P}(\{1,\ldots, n\}), P, \tau)$ such that $P$ is the uniform probability on $\{1,\ldots, n\}$ and $\tau: \{1,\ldots, n\} \rightarrow M^{\{v_0,v_1,v_2\}}$ is an arbitrary function. For each $i\in\{1,\ldots,n\}$ the assignment $\tau(i)$ records a father-mother-child triple of the first generation ($v_0$), the second generation ($v_1$) and the third generation ($v_2$). For the language $L_1$ we choose $Q=\mathbb{Q}$. Let $\Sigma_{\mbox{\tiny HW}}$ consist of the $L_1$-equations (\ref{starr})-(\ref{starstarstarstarstar}) below.
Remember that the language $L_1$ contains all the constant symbols $|\phi(x)|$, where $\phi(x)$ is an arbitrary $L_0$-formula. So (\ref{starr})-(\ref{starstarstarstarstar}) are atomic sentences, more exactly equations between constant terms. \begin{equation}\label{starr} |P^j_k(v_{i+1})| = |P^c_k(v_i)|, \end{equation} for $j = f, m$, $k = AA, Aa, aa$ and $i =0, 1$; \begin{equation}\label{starstar} |P^f_{k}(v_{i+1}) \wedge P^m_{z}(v_{i+1})| = |P^f_{k}(v_{i+1}) \wedge P^m_{z}(v_{i+1}) \wedge P^c_{w}(v_{i+1})| \end{equation} for $(k, z, w) = (AA, AA, AA), (AA, aa, Aa), (aa, AA, Aa), (aa, aa, aa)$ and $i =0, 1$; \begin{equation}\label{starstarstar} |P^f_{k}(v_{i+1}) \wedge P^m_{l}(v_{i+1})| = 2\cdot |P^f_{k}(v_{i+1}) \wedge P^m_{l}(v_{i+1}) \wedge P^c_{m}(v_{i+1})| \end{equation} for $(k, l, m) = (AA, Aa, AA), (AA, Aa, Aa), (aa, Aa, Aa), (aa, Aa, aa),(Aa, aa, aa),$ $(Aa, AA, Aa), (Aa, aa, Aa), (Aa, AA, AA), (Aa, Aa, Aa)$ and $i =0, 1$; \begin{equation}\label{starstarstarstar} |P^f_{k}(v_{i+1}) \wedge P^m_{l}(v_{i+1})| = 4\cdot |P^f_{k}(v_{i+1}) \wedge P^m_{l}(v_{i+1}) \wedge P^c_{m}(v_{i+1})| \end{equation} for $(k, l, m) = (Aa, Aa, AA), (Aa, Aa, aa)$ and $i =0, 1$; \begin{equation}\label{starstarstarstarstar} |P^f_k(v_{i+1}) \wedge P^m_l(v_{i+1})| = |P^f_k(v_{i+1})| \cdot |P^m_l(v_{i+1})| \end{equation} for $k = AA, Aa, aa$, $l = AA, Aa, aa$ and $i =0, 1$. The formulas of type (\ref{starr}) express that the parents of each generation have the genotype distribution of the children of the previous generation (in particular, genotype frequencies are equal in the sexes), the formulas of type (\ref{starstar})-(\ref{starstarstarstar}) specify how the genotypes are inherited, according to Mendel's Principles, and the formulas of type (\ref{starstarstarstarstar}) express that mating is random, an important assumption of the Hardy-Weinberg Principle. Finally, let $\alpha_{\mbox{\tiny HW}}$ be the conjunction of the following $L_1$-equations: $$ \begin{array}{lcr} |P^c_{AA}(v_1)| &=& |P^c_{AA}(v_2)|\\ |P^c_{Aa}(v_1)| &=& |P^c_{Aa}(v_2)|\\ |P^c_{aa}(v_1)| &=& |P^c_{aa}(v_2)|.
\end{array}$$ These conjuncts say that the genotype frequencies among the children in the second and third generations are the same, i.e. a stable balance is achieved already at the second generation. Thus, all the assumptions of the Hardy-Weinberg Principle are formalizable in our logic. Since the Hardy-Weinberg Principle is true and our logic is complete (see Section \ref{completeness}), it follows that $$\Sigma_{\mbox{\tiny HW}}\vdash\alpha_{\mbox{\tiny HW}},$$ i.e. our deductive system proves (a formalization of) the Hardy-Weinberg Principle. \end{example} \begin{example}[Bell's Inequalities] In \cite{QTL}, among other things, we presented a system of probability logic capable of handling so-called {\em logical Bell's inequalities} \cite{abramsky}. Suppose $X=(\Omega, \mathcal{F}, P, \tau)$ is a boolean multi-team (see Example \ref{boolean}) whose domain contains the proposition symbols of some given propositional formulas $(\phi_j)_{j < k}$. Then \begin{equation}\label{star} \sum_{j < k} [\phi_j]_X \leq k-1 + [\bigwedge_{j < k} \phi_j]_X. \end{equation} If furthermore the formula $\bigwedge_{j < k} \phi_j$ is contradictory (in the sense of propositional logic), then $[\bigwedge_{j < k} \phi_j]_X =0$. Thus, the inequality (\ref{star}) becomes \begin{equation}\label{starstar} \sum_{j < k} [\phi_j]_X \leq k-1. \end{equation} Inequalities of the form (\ref{starstar}) are of great importance in the foundations of quantum mechanics, see \cite{abramsky} and \cite{QTL}. Because of the completeness result presented in the next section, we will see that these inequalities are provable in our logic. For suitably chosen propositional formulas $(\phi_j)_{j < k}$, representing propositions about Quantum Mechanics, the inequality (\ref{starstar}) fails, thereby demonstrating the contextuality of probabilities in the quantum world. To remedy this, a {\em quantum team logic} is introduced in \cite{QTL}.
In the quantum team logic the problematic inequalities (\ref{starstar}) are not provable, but we still have a Completeness Theorem with respect to {\em quantum teams}, a generalization of the concept of a measure team. \end{example} \section{Completeness}\label{completeness} In this section we prove that the deductive system described in Section \ref{measure_team_logic} is complete with respect to the given semantics. We begin with an analogue of Lindenbaum's Lemma for our deductive system. As in Section \ref{measure_team_logic}, let $L_0$ be a countable signature, $Q \subseteq \mathbb{R}$ countable and $n \leq \omega$. \begin{lemma}\label{linden} Suppose that $(T, \Sigma) \nvdash \perp$. Then there are a complete $L_0$-theory $T_0$ and a complete $L_1$-theory $\Sigma_0$ such that $T \subseteq T_0$, $\Sigma \subseteq \Sigma_0$ and $(T_0, \Sigma_0) \nvdash \perp$. \end{lemma} \begin{proof} We first construct $T_0$ as the limit of a chain $(T^*_i)_{i < \omega}$ of $L_0$-theories. Let $(\phi_i)_{i < \omega}$ be an enumeration of the $L_0$-sentences. By induction on $i < \omega$ we construct $T^*_i$ so that $(T^*_i, \Sigma) \nvdash \perp$ and either $\phi_i \in T^*_{i+1}$ or $\neg \phi_i \in T^*_{i+1}$. If $i = 0$, let $T^*_0 = T$. If $i = j+1$, there are three cases. \newline {\bf Case 1}. $T^*_j \vdash \phi_j$. Let $T^*_i = T^*_j \cup \left\{ \phi_j \right\}$. \newline {\bf Case 2}. $T^*_j \vdash \neg \phi_j$. Let $T^*_i = T^*_j \cup \left\{ \neg \phi_j \right\}$. \newline {\bf Case 3}. $T^*_j \nvdash \phi_j$, i.e. $T^*_j \cup \left\{ \neg \phi_j \right\} \nvdash \perp$, and $T^*_j \nvdash \neg \phi_j$, i.e. $T^*_j \cup \left\{ \phi_j \right\} \nvdash \perp$.
For the sake of a contradiction, suppose that \[(T^*_j \cup \left\{ \phi_j \right\}, \Sigma) \vdash \perp \; \text{ and } \; (T^*_j \cup \left\{ \neg \phi_j \right\}, \Sigma) \vdash \perp.\] We show that this is impossible; we can then extend $T^*_j$ with $\phi_j$ if $(T^*_j \cup \left\{ \phi_j \right\}, \Sigma) \nvdash \perp$, and with $\neg \phi_j$ otherwise. Given that $T^*_j \cup \left\{ \phi_j \right\} \nvdash \perp$, there must exist $t < \omega$ so that letting $$\chi_s = \bigvee_{i < k_s} \bigwedge_{j < m_{(i, s)}} \forall x(\phi^s_{(i,j)} \rightarrow \psi^s_{(i,j)}) \; \text{ and } \; \chi'_s = \bigvee_{i < k_s} \bigwedge_{j < m_{(i, s)}} (|\phi^s_{(i,j)}| \leq |\psi^s_{(i,j)}|),$$ for $s \leq t$, we have that \begin{prooftree} \AxiomC{$T^*_j \cup \left\{ \phi_j \right\}$} \UnaryInfC{$\chi_0$} \LeftLabel{($R_0$)} \UnaryInfC{$\chi'_0$} \AxiomC{$T^*_j \cup \left\{ \phi_j \right\}$} \UnaryInfC{$\phantom{\chi_0} \cdots \phantom{\chi_0}$} \LeftLabel{($R_0$)} \UnaryInfC{$\phantom{\chi'_0} \cdots \phantom{\chi_0}$} \AxiomC{$T^*_j \cup \left\{ \phi_j \right\}$} \UnaryInfC{$\chi_t$} \LeftLabel{($R_0$)} \UnaryInfC{$\chi'_t$} \AxiomC{$\Sigma$} \QuaternaryInfC{$\perp$} \end{prooftree} Notice though that our deductive system proves that formulas in $\bigwedge \bigvee \bigwedge$ form are equivalent to formulas in $\bigvee \bigwedge$ form, and so we have that \begin{prooftree} \AxiomC{$\chi_0 \, \mbox{$\cdots$} \, \chi_t$} \UnaryInfC{$\bigwedge_{s \leq t} \chi_s$} \UnaryInfC{$\chi$} \UnaryInfC{$\bigwedge_{s \leq t} \chi_s$} \UnaryInfC{$\chi_0 \, \mbox{$\cdots$} \, \chi_t$} \end{prooftree} where $\chi$ is the formula in $\bigvee \bigwedge$ form equivalent to $\bigwedge_{s \leq t} \chi_s$.
Hence, without loss of generality, we may assume that $t=0$ and thus \begin{prooftree} \AxiomC{$T^*_j \cup \left\{ \phi_j \right\}$} \UnaryInfC{$\bigvee_{i < k_0} \bigwedge_{j < m_{(i, 0)}} \forall x(\phi^0_{(i,j)} \rightarrow \psi^0_{(i,j)})$} \LeftLabel{($R_0$)} \UnaryInfC{$\bigvee_{i < k_0} \bigwedge_{j < m_{(i, 0)}} (|\phi^0_{(i,j)}| \leq |\psi^0_{(i,j)}|)$} \AxiomC{$\Sigma$} \BinaryInfC{$\perp$} \end{prooftree} Analogously, given that $T^*_j \cup \left\{ \neg \phi_j \right\} \nvdash \perp$, we have that \begin{prooftree} \AxiomC{$T^*_j \cup \left\{ \neg \phi_j \right\}$} \UnaryInfC{$\bigvee_{i < k_1} \bigwedge_{j < m_{(i, 1)}} \forall x(\phi^1_{(i,j)} \rightarrow \psi^1_{(i,j)})$} \LeftLabel{($R_0$)} \UnaryInfC{$\bigvee_{i < k_1} \bigwedge_{j < m_{(i, 1)}} (|\phi^1_{(i,j)}| \leq |\psi^1_{(i,j)}|)$} \AxiomC{$\Sigma$} \BinaryInfC{$\perp$} \end{prooftree} But then \begin{prooftree} \AxiomC{$T^*_j \cup \left\{ \phi_j \vee \neg \phi_j \right\}$} \UnaryInfC{$\bigvee_{s < 2, \, i < k_s} \bigwedge_{j < m_{(i, s)}} \forall x(\phi^s_{(i,j)} \rightarrow \psi^s_{(i,j)})$} \LeftLabel{($R_0$)} \UnaryInfC{$\bigvee_{s < 2, \, i < k_s} \bigwedge_{j < m_{(i, s)}} (|\phi^s_{(i,j)}| \leq |\psi^s_{(i,j)}|)$} \AxiomC{$\Sigma$} \BinaryInfC{$\perp$} \end{prooftree} which contradicts the fact that $(T^*_j, \Sigma) \nvdash \perp$. We now construct $\Sigma_0$. First of all, let $\Sigma'$ be the deductive closure of $\Sigma$ under the axioms $\mathrm{RCF}^*$ and the rule ($R_0$) with premises from $T_0$. Then $(T_0, \Sigma')$ must be consistent, because otherwise there would be $i < \omega$ such that $(T^*_i, \Sigma)$ is not consistent. Now, just extend $\Sigma'$ to a complete $L_1$-theory $\Sigma_0$ using the Lindenbaum Lemma of first-order logic. Then $(T_0, \Sigma_0)$ is as desired.
\end{proof} Before proving a completeness result, we need some elementary facts about elementary extensions of the ordered field of real numbers $\mathcal{R} = (\mathbb{R}, 0, 1, +, -, \cdot, \leq)$. Let $\mathcal{B}$ be an elementary extension of $\mathcal{R}$. We say that $b \in B$ is finite if there exist $r, s \in \mathbb{R}$ such that $r < b < s$. We denote by $\mathrm{Fin}(\mathcal{B})$ the set of finite elements of $\mathcal{B}$. Given $b \in \mathrm{Fin}(\mathcal{B})$ we denote by $\mathrm{st}(b)$ the standard part of $b$, i.e. $\mathrm{sup}(\{ r \in \mathbb{R} : r < b \})$ (for details see e.g. \cite[Section 5.6]{gold}). By positive bounded formulas we mean formulas which are built up from atomic formulas by means of conjunction $\wedge$, disjunction $\vee$, universal quantification $\forall$ and bounded existential quantification $\exists x(-n \leq x \leq n \wedge \phi)$. \begin{fact}\label{positive_fr} Let $\mathcal{B}$ be an elementary extension of $\mathcal{R}$. Then, the map $\mathrm{st}: \mathrm{Fin}(\mathcal{B}) \rightarrow \mathbb{R}$ preserves positive bounded formulas. \end{fact} \begin{proof} This can be proved by a straightforward induction on the complexity of positive bounded formulas. Only the atomic case is interesting. For it see e.g. \cite[Theorem 6.7]{loeb} or \cite[Theorem 5.6.2]{gold} (in \cite{loeb} and \cite{gold} the proofs are with respect to the hyperreals, but they work for any elementary extension of $\mathcal{R}$). \end{proof} \begin{theorem}[Completeness]\label{main_theorem} Let $T$ be an $L_0$-theory and $\Sigma$ a positive bounded $L_1$-theory. Then the following are equivalent. \begin{enumerate}[(i)] \item There exists $\mathcal{A} \models T$ and a measure team $X = (\Omega, \mathcal{F}, P, \tau)$ with values in $\mathcal{A}$ and $\mathrm{dom}(X) = \left\{ v_i \, | \, i < n \right\}$, such that $X \models \Sigma$. \item $(T, \Sigma) \nvdash \perp$.
\item As in (i), with $\Omega = [0, 1]$, $\mathcal{F}$ the $\sigma$-algebra $\mathcal{L}$ of Lebesgue measurable subsets of $[0, 1]$ and $P$ the Lebesgue measure on $[0, 1]$. \end{enumerate} \end{theorem} \begin{proof} We only prove (ii) implies (iii). Suppose that $(T, \Sigma) \nvdash \perp$. By Lemma \ref{linden} we can find a complete $L_0$-theory $T_0$ and complete $L_1$-theory $\Sigma_0$ such that $T \subseteq T_0$, $\Sigma \subseteq \Sigma_0$ and $(T_0, \Sigma_0) \nvdash \perp$. In particular, the theories $T_0$ and $\Sigma_0$ are consistent (with respect to the deductive system of first-order logic), because otherwise we would be able to derive a contradiction also from our deductive system. Let then $\mathcal{B} \models \Sigma_0$. Given that our deductive system contains $\mathrm{RCF}^* = \mathrm{Th}(\mathcal{R}_Q)$, we can---without loss of generality---assume that $\mathcal{R}_Q \preccurlyeq \mathcal{B} \restriction L_Q$. To see this, just take a sufficiently saturated elementary extension of $\mathcal{R}_Q$ and think of $\Sigma_0$ as a type of the theory $\mathrm{RCF}^* \subseteq \Sigma_0$ in the variables $|\phi(x)|$. We now expand $\mathcal{R}_Q$ to an $L_1$-structure by letting $$ |\phi(x)|^{\mathcal{R}^{X}_Q} = \mathrm{st}(|\phi(x)|^{\mathcal{B}}) $$ for every $L_0$-formula $\phi(x)$ in the free variables $x = (v_i)_{i < n}$. By Fact \ref{positive_fr} we have ${\mathcal{R}^{X}_Q} \models \Sigma$. We now want to define $\mathcal{A} \models T_0$ as well as a measure team $X = ([0, 1], \mathcal{L}, P, \tau)$ with values in $\mathcal{A}$ and $\mathrm{dom}(X) = \left\{ v_i \, | \, i < n \right\}$, so that $$|\phi(x)|^{\mathcal{R}^{X}_Q} = [\phi(x)]_X$$ for every $L_0$-formula $\phi(x)$ in the free variables $x$. As to $\mathcal{A}$, we can let it be any $\omega$-saturated model of $T_0$ ($\omega$-compactness suffices if $n < \omega$). As to the team $X$, we do the following. 
Let $(\phi_i(x))_{i < \omega}$ be an enumeration of the $L_0$-formulas in the free variables $x$ (remember that $x$ is a vector). We label $2^{< \omega}$ with subsets $I_{\sigma} \subseteq [0, 1)$ as in Figure \ref{tree} (where for simplicity we let $|\phi| = |\phi(x)|^{\mathcal{R}^{X}_Q}$, $\phi_{(1, 1)} = \phi_0 \wedge \phi_1$ and $\phi_{(0, 1)} = \neg\phi_0 \wedge \phi_1$). \begin{figure}[ht] \begin{center} \begin{tikzpicture}[level distance=1.5cm, level 1/.style={sibling distance=5.85cm}, level 2/.style={sibling distance=3cm}, level 3/.style={sibling distance=0.1cm}] \node {$\emptyset$} child {node {$[0, |\phi_0|)$} child {node {$[0, |\phi_{(1, 1)}|)$} child {node {$\cdots$}} child {node {$\cdots$}}} child {node {$[|\phi_{(1, 1)}|, |\phi_0|)$} child {node {$\cdots$}} child {node {$\cdots$}}} } child {node {$[|\phi_0|, 1)$} child {node {$[|\phi_0|, |\phi_0| + |\phi_{(0, 1)}|)$} child {node {$\cdots$}} child {node {$\cdots$}}} child {node {$[|\phi_0| + |\phi_{(0, 1)}|, 1)$} child {node {$\cdots$}} child {node {$\cdots$}}} }; \end{tikzpicture} \end{center} \caption{Labelling $2^{< \omega}$ with $I_{\sigma} \subseteq [0, 1)$}\label{tree} \end{figure} Because of ($A_0$)--($A_2$) and ($R_0$), given $s \in [0, 1)$ and $1 \leq i < j < \omega$, we have $s \in I_{\sigma_i} \cap I_{\sigma_j}$ for unique $\sigma_i \in 2^i$ and $\sigma_j \in 2^j$, and moreover $\sigma_i \subseteq \sigma_j$. Thus, to every $s \in [0, 1)$ we can associate $$f_s = \bigcup_{1 \leq i < \omega}\sigma_i \in 2^{\omega}.$$ Let $$ \mathrm{tp}({f_s}) = \mbox{\Large $\{$ } \!\!\! \bigwedge_{i < m} \phi_i^{f_s(i)}(x) \, | \,1 \leq m < \omega \mbox{\Large $\}$.}$$ Then, because of ($A_0$) and ($R_0$), the type $\mathrm{tp}({f_s})$ is finitely satisfiable and hence satisfiable in $\mathcal{A}$ ($\omega$-saturated models realize types over the empty set in infinitely many variables).
Let then $a_s \in A^{|x|}$ be such that $a_s \models \mathrm{tp}({f_s})$ and \[ \tau(s)(x) = \begin{cases} a \in A^{|x|} \;\;\;\;\; \text{ if } s = 1 \\ a_s \;\;\;\;\;\;\;\;\;\;\;\;\;\; \text{ if } s \in [0, 1), \end{cases} \] where $a$ is an arbitrary but fixed element of $A^{|x|}$. Then $X = ([0, 1], \mathcal{L}, P, \tau)$ is as desired. \end{proof} The following standard counterexample shows that the positivity of $\Sigma$ is a necessary condition in Theorem \ref{main_theorem}. \begin{example} Let $L_0$ consist of a single unary predicate $R$, let $T$ be the empty theory and \[ \Sigma = \left\{ |R(x)| > 0 \right\} \cup \mbox{\Large $\{$ } \!\!\! |R(x)| \leq \frac{1}{n} \, | \, 0 < n < \omega \mbox{\Large $\}$.}\] Then $(T, \Sigma) \nvdash \perp$, because $\Sigma$ is finitely satisfiable, but (i) of Theorem \ref{main_theorem} fails, as in fact there is no way to expand $\mathcal{R}_Q$ to an $L_1$-structure $\mathcal{R}^X_Q$ so that $\mathcal{R}^X_Q \models \Sigma$. \end{example} On the other hand, if we insist on $\Sigma$ being finite, we can prove Theorem \ref{main_theorem} without the positive bounded assumption. \begin{theorem} As in Theorem \ref{main_theorem} with $\Sigma$ finite and arbitrary, i.e. not necessarily positive bounded. \end{theorem} \begin{proof} The proof is essentially as in Theorem \ref{main_theorem}. We only have to specify how to define $|\phi(x)|^{\mathcal{R}^{X}_Q}$ in this case. Let $\Sigma_0$ and $\mathcal{B} \models \Sigma_0$ be as in the proof of Theorem \ref{main_theorem}. First of all, extend $\Sigma$ to a theory $\Sigma'$ by adding the following formulas for every $|\phi(x)|$ and $|\psi(x)|$ occurring in $\Sigma$: \begin{enumerate}[(i)] \item $0 \leq |\phi(x)| \leq 1$; \item $|\neg \phi(x)| = 1 - |\phi(x)|$; \item $|\phi(x)| = |\phi(x) \wedge \psi(x)| + |\phi(x) \wedge \neg \psi(x)|$; \item $|\phi(x) \vee \psi(x)| = |\phi(x)| + |\psi(x)| - |\phi(x) \wedge \psi(x)|$.
\end{enumerate} Further extend $\Sigma'$ to a theory $\Sigma''$ by requiring that if $|\phi(x)| = 0 \in \Sigma_0$ and $|\phi(x)|$ occurs in $\Sigma$, then $|\phi(x)| = 0 \in \Sigma''$. Items (i)-(iv) are theorems of our logic, and so $\Sigma'' \subseteq \Sigma_0$. Secondly, notice that it suffices to specify $|\phi(x)|^{\mathcal{R}^{X}_Q}$ only for the $|\phi(x)|$ occurring in $\Sigma''$, but this is easily done---just consider $\bigwedge \Sigma''$, substitute the constants of the form $|\phi(x)|$ with free variables, quantify existentially, and find a real solution using the fact that $\mathcal{R}_Q \preccurlyeq \mathcal{B} \restriction L_Q$. The rest of the proof is clear (in this case enumerate only the $L_0$-formulas that occur in $\Sigma$ and construct a finite tree). \end{proof} \begin{corollary} Let $T$ be an $L_0$-theory and $\Sigma \cup \left\{ \alpha \right\}$ a finite $L_1$-theory. Then \[ (T, \Sigma) \vdash \alpha \;\; \Leftrightarrow \;\; (T, \Sigma) \models \alpha. \] \end{corollary} The main source of inspiration for our logic is of course the case $T = T_0 = \mathrm{Th}(\mathcal{A})$, with $\mathcal{A}$ a particular $L_0$-structure. If we wish the class of teams with values in $\mathcal{A}$ to be complete in the sense of providing every possible counterexample for $(T, \Sigma) \nvdash \perp$ (as in Theorem \ref{main_theorem}), then we have to require $\omega$-compactness or $\omega$-saturation (depending on whether $n < \omega$ or $n = \omega$) of $\mathcal{A}$. If $\mathcal{A}$ is finite then, of course, we do not have this problem (since it is $\omega$-saturated). In particular, for $L_0$ the signature of boolean algebras and $\mathcal{A}$ the boolean algebra $\left\{ 0, 1 \right\}$, we have a system of propositional probability logic properly extending the probability logic considered in \cite{QTL}.
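As a closing illustration, the logical Bell inequality (\ref{star}) can be verified by brute force on finite boolean multi-teams. A minimal sketch follows; the three formulas below are one standard contradictory family, chosen here only for illustration.

```python
import random

def prob(team, phi):
    """[phi]_X for a finite team under the normalized counting measure."""
    return sum(1 for row in team if phi(row)) / len(team)

# A contradictory family of k = 3 propositional formulas over v0, v1, v2:
# no single row can satisfy all three simultaneously.
phis = [
    lambda r: r[0] == r[1],
    lambda r: r[1] == r[2],
    lambda r: r[0] != r[2],
]

random.seed(0)
for _ in range(1000):
    team = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(8)]
    conj = prob(team, lambda r: all(phi(r) for phi in phis))
    assert conj == 0.0  # the conjunction is contradictory
    # logical Bell inequality: sum_j [phi_j]_X <= k - 1 + [conjunction]_X
    assert sum(prob(team, phi) for phi in phis) <= len(phis) - 1 + conj

print("Bell inequality verified on 1000 random teams")
```

Since the conjunction is contradictory, the check amounts to the sharpened form (\ref{starstar}) with bound $k - 1 = 2$; as discussed above, it is exactly this bound that quantum-mechanical probabilities can violate.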
\section{Introduction} We consider model order reduction (MOR) of asymptotically stable linear time invariant (LTI) dynamical systems of the form \begin{align}\label{eq:LTI} \begin{split} \dot{x}(t) &= Ax(t)+Bu(t), \quad x(0) = x_0, \\ y(t) &= Cx(t)+Du(t) \end{split} \end{align} for $t\ge 0$ with initial value $x_0 = X_0z_0$. We assume that $A \in \mathbb{R}^{n \times n}$, $B \in\mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$, where $m,\,p \ll n$. The functions $u:[0,\infty) \to \mathbb{R}^m$, $x:[0,\infty) \to \mathbb{R}^n$, and $y:[0,\infty) \to \mathbb{R}^p$ are the \emph{input}, \emph{state}, and \emph{output}, respectively. In this paper we assume that all feasible initial conditions lie in a low-dimensional subspace of $\mathbb{R}^n$ spanned by the columns of the matrix $X_0 \in \mathbb{R}^{n \times q}$. In this work we aim for a reduced order model (ROM) of state dimension $r \ll n$ of the form \begin{subequations}\label{eq:ROM} \begin{align} \dot{x}_r(t) &= A_r x_r(t) + B_r u(t), \quad x_r(0) = X_{0,r} z_0, \label{eq:ROMx} \\ y_r(t) &= C_r x_r(t) + D_r u(t) + f(t), \label{eq:ROMy} \end{align} \end{subequations} such that $y_r(\cdot)$ is close to $y(\cdot)$ for all inputs $u(\cdot)$ and all initial values defined by $z_0$. We will present two methods to achieve this goal. The first method determines a ROM of state-space dimension $r$ with error bound \begin{equation}\label{eq:bound_share} \left\|y-y_r\right\|_{\mathbb{L}_2} \le 2(\eta_{r+1} + \cdots + \eta_n)(\left\|u\right\|_{\mathbb{L}_2}+\beta\left\|z_0\right\|_2). \end{equation} The other method obtains a ROM of state-space dimension $r = k + \ell$ with error bound \begin{equation}\label{eq:bound_sep} \left\|y-y_r\right\|_{\mathbb{L}_2}\le 2(\sigma_{k+1} + \cdots + \sigma_n)\left\|u\right\|_{\mathbb{L}_2} + 2(\theta_{\ell+1}+\cdots+\theta_n)\left\|z_0\right\|_2.
\end{equation} Here, $\eta_i$, $\sigma_i$, and $\theta_i$, $i = 1,\,\ldots,\,n$ are the Hankel singular values of certain systems. In this paper we will first prove these bounds. Then we give a practical procedure for constructing the ROMs and evaluate it numerically. Moreover, we give a detailed comparison to other approaches. \section{State of the Art} \label{sec:sota} In this section we review the current state of the art on balanced truncation model reduction. In particular, we will discuss several approaches for treating inhomogeneous initial conditions and give the error bounds, if available. It is important to note that the error bounds listed below are typically \emph{a posteriori bounds}, so the error bound is only available after the ROM has been computed. In contrast to this, our bounds \eqref{eq:bound_share} and \eqref{eq:bound_sep} are \emph{a priori bounds}, so they can be evaluated before the ROM is known. \paragraph{Balanced Truncation (BT)} is a well known method for model reduction of asymptotically stable LTI systems, see, e.\,g.,~\cite{GugA04}. From given $A,\,B,\,C$ and a desired reduced order $r$, this method computes projection matrices $V_r,\,W_r \in \mathbb{R}^{n \times r}$ and Hankel singular values $\sigma_1,\,\ldots,\,\sigma_n$ which are independent of $r$. We will use the notation \begin{equation*} [V_r,W_r,\sigma_1,\ldots,\sigma_n] = \operatorname{BT}(A,B,C,r). \end{equation*} The ROM (of order $r$) is then given by projection, that is \begin{subequations}\label{eq:proj} \begin{align} A_r &:= W_r^\mathsf{T} A V_r, \quad B_r := W_r^\mathsf{T} B, \quad C_r := CV_r, \quad D_r := D,\label{eq:proj_abcd}\\ X_{0,r} &:= W_r^\mathsf{T} X_0, \quad f \equiv 0. \label{eq:proj_x0f} \end{align} \end{subequations} It is well-known that the ROM is asymptotically stable if $\sigma_r > \sigma_{r+1}$~\cite[Section~7.2]{Ant05}.
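For illustration, the $\operatorname{BT}$ routine above can be sketched as a dense square-root implementation. This is a minimal sketch with a hypothetical routine name, not a library API; production codes replace the dense Lyapunov solves and Cholesky factors by low-rank Gramian factors.

```python
# Minimal dense square-root balanced truncation sketch (illustration only).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def bt(A, B, C, r):
    """Return projection matrices V_r, W_r and all Hankel singular values."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = cholesky(P, lower=True)          # P = R R^T
    L = cholesky(Q, lower=True)          # Q = L L^T
    U, hsv, Zt = svd(L.T @ R)            # hsv = Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    V = R @ Zt[:r].T @ S                 # projection matrices satisfying
    W = L @ U[:, :r] @ S                 # W_r^T V_r = I_r
    return V, W, hsv
```

The ROM then follows by the projection in~\eqref{eq:proj}, i.e. $A_r = W_r^\mathsf{T} A V_r$, $B_r = W_r^\mathsf{T} B$, $C_r = CV_r$.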
We define $\mathbb{L}_2^k := \left\{ g:[0,\infty) \to \mathbb{R}^k : \int_0^\infty g(t)^\mathsf{T} g(t){\rm d}t < \infty \right\}$ with induced norm $\left\|g\right\|_{\mathbb{L}_2}:=\left(\int_0^\infty g(t)^\mathsf{T} g(t){\rm d}t\right)^{1/2}$. Analogously, we define $\mathbb{L}_2^{k \times \ell} := \left\{ G:[0,\infty) \to \mathbb{R}^{k \times \ell} : \int_0^\infty \operatorname{trace}\!\left(G(t)^\mathsf{T} G(t)\right){\rm d}t < \infty \right\}$ with induced norm $\left\|G\right\|_{\mathbb{L}_2}:=\left(\int_0^\infty \operatorname{trace}\!\left(G(t)^\mathsf{T} G(t)\right){\rm d}t\right)^{1/2}$. If $u \in \mathbb{L}_2^m$, then due to asymptotic stability of the original system and the ROM we have $y,\,y_r \in \mathbb{L}_2^p$. If further $x_0 = 0$, then BT guarantees the a priori error bound \begin{equation}\label{eq:bt_bound} \left\|y-y_r\right\|_{\mathbb{L}_2}\le 2(\sigma_{r+1}+\cdots+\sigma_n)\left\|u\right\|_{\mathbb{L}_2}. \end{equation} This error bound can be extended to the case of inhomogeneous initial values as follows. We can write the outputs of the FOM and the ROM as \begin{align*} y(t) &= C\mathrm{e}^{At}X_0z_0 + \int_{0}^t C\mathrm{e}^{A(t-\tau)}Bu(\tau) \mathrm{d}\tau + Du(t), \\ y_r(t) &= C_r\mathrm{e}^{A_rt}X_{0,r}z_0 + \int_{0}^t C_r\mathrm{e}^{A_r(t-\tau)}B_ru(\tau) \mathrm{d}\tau + Du(t). \end{align*} Since in BT, both $A$ and $A_r$ are asymptotically stable, $C\mathrm{e}^{A\cdot}X_0,\, C_r\mathrm{e}^{A_r\cdot}X_{0,r} \in \mathbb{L}_2^{p \times q}$ as well as $C\mathrm{e}^{A\cdot}X_0z_0,\, C_r\mathrm{e}^{A_r\cdot}X_{0,r}z_0 \in \mathbb{L}_2^{p}$ and hence we obtain the estimate \begin{align*} \left\|y-y_r\right\|_{\mathbb{L}_2} &\le \left\| C\mathrm{e}^{A\cdot}X_0z_0 - C_r\mathrm{e}^{A_r\cdot}X_{0,r}z_0 \right\|_{\mathbb{L}_2} + 2(\sigma_{r+1}+\cdots+\sigma_n)\left\|u\right\|_{\mathbb{L}_2} \\ & \le \left\| C\mathrm{e}^{A\cdot}X_0 - C_r\mathrm{e}^{A_r\cdot}X_{0,r} \right\|_{\mathbb{L}_2} {\| z_0 \|}_2 + 2(\sigma_{r+1}+\cdots+\sigma_n)\left\|u\right\|_{\mathbb{L}_2}.
\end{align*} However, $\left\| C\mathrm{e}^{A\cdot}X_0 - C_r\mathrm{e}^{A_r\cdot}X_{0,r} \right\|_{\mathbb{L}_2}$ is precisely the $\mathbb{H}_2$-norm of the transfer function of the system $\left[ \begin{bsmallmatrix} A & 0 \\ 0 & A_r \end{bsmallmatrix}, \begin{bsmallmatrix} X_0 \\ X_{0,r} \end{bsmallmatrix}, \begin{bsmallmatrix} C & -C_r \end{bsmallmatrix} \right]$ and can be computed by solving a (typically large-scale) Lyapunov equation \cite[Chap. 4]{ZhoDG96}. The problem is that in standard BT, we do not attempt to minimize the part of the error that is related to the initial values. One can think of certain situations in which this error is large. One possibility is the case in which $X_{0,r} = W_r^\mathsf{T} X_0 \approx 0$. Then the error associated with the initial values is essentially given by the $\mathbb{H}_2$-norm of the transfer function of the system $[A,X_0,C]$ which can be large. \paragraph{The method TrlBT of Baur, Benner, and Feng~\cite{BauBF14}} consists of translating the state $x(t)$ to $\tilde{x}(t) := x(t) - x_0$. The original system becomes \begin{align*} \dot{\tilde{x}}(t) &= A\tilde{x}(t) + \begin{bmatrix}B & Ax_0 \end{bmatrix}\tilde{u}(t), \quad \tilde{x}(0)=0,\\ y(t) &= C\tilde{x}(t) + \begin{bmatrix}D & Cx_0 \end{bmatrix}\tilde{u}(t) \quad \text{with }\tilde{u}(t):=\begin{bmatrix}u(t)\\1\end{bmatrix}. \end{align*} This homogeneous system is then reduced by standard balanced truncation. Note that BT is applied to an expanded system, here $\left[A,\begin{bmatrix}B & Ax_0\end{bmatrix},C\right]$. Since $\tilde{u} \notin \mathbb{L}_2^{m+1}$, no computable error bound similar to~\eqref{eq:bt_bound} can be given. \paragraph{The method AugBT of Heinkenschloss, Reis, and Antoulas~\cite{HeiRA11}} consists of applying BT to the expanded system $\left[A,\begin{bmatrix}B & X_0\end{bmatrix},C\right]$ to obtain the projection matrices $V_r,\,W_r$ and Hankel singular values $\eta_1,\,\ldots,\,\eta_n$.
The ROM is obtained by~\eqref{eq:ROM} and~\eqref{eq:proj} using the modified projection matrices $V_r,\,W_r$. This method achieves an a posteriori error bound \begin{multline} \label{eq:bound_HeiRA11} \left\|y-y_r\right\|_{\mathbb{L}_2} \le 2(\eta_{r+1} + \cdots + \eta_n)\left\|u\right\|_{\mathbb{L}_2}\\ +3\cdot 2^{-1/3}(\eta_{r+1}+\cdots+\eta_n)^{2/3}\left(\left\|L^\mathsf{T} AX_0\right\|_2+{\|\Sigma_r^{\frac12}A_rX_{0,r}\|}_2\right)^{1/3}\left\|z_0\right\|_2, \end{multline} where $L$ is such that $L L^\mathsf{T}$ is the observability Gramian of the expanded system, and $\Sigma_r = {\rm diag}(\eta_1,\ldots,\eta_r)$. This bound has two disadvantages: it involves the reduced system and the Hankel singular values\xspace appear with exponent $2/3$. The previously discussed methods are joint--projection methods, i.\,e., they use a single projection to produce ROMs in which both the responses to the input and initial state are treated simultaneously. On the other hand, there are separate--projection methods producing ROMs in which the two parts are reduced separately. This leads to a ROM that consists of two decoupled subsystems as in the following method. \paragraph{The method BT-BT of Beattie, Gugercin, and Mehrmann~\cite{BeaGM17}} produces a separate--projection ROM. Let \begin{align*} [V_{k},W_{k},\sigma_1,\ldots,\sigma_n] &= \operatorname{BT}(A,B,C,k), \\ \big[\hat{V}_{\ell},\hat{W}_{\ell},\theta_1,\ldots,\theta_n\big] &= \operatorname{BT}(A,X_0,C,\ell). \end{align*} Then a reduced order model of order $r = k+\ell$ is constructed as in~\eqref{eq:ROM} with \begin{align*} A_r &:= \begin{bmatrix} W_{k}^\mathsf{T} A V_{k} & 0 \\ 0 & \hat{W}_{\ell}^\mathsf{T} A \hat{V}_{\ell} \end{bmatrix}, \quad B_r = \begin{bmatrix} W_{k}^\mathsf{T} B \\ 0 \end{bmatrix}, \quad C_r = \begin{bmatrix} CV_{k} & C\hat{V}_{\ell} \end{bmatrix}, \\ D_r &:= D, \quad X_{0,r} := \begin{bmatrix} 0 \\ \hat{W}_{\ell}^\mathsf{T} X_0 \end{bmatrix}, \quad f \equiv 0.
\end{align*} Let $\big[A_{\rm b},{X}_{0,\rm b},C_{\rm b}\big]$ be a fully balanced realization of $[A,X_0,C]$ and assume that $Y$ solves the Sylvester equation \begin{equation*} A_{\rm b}^\mathsf{T} Y + Y\begin{bmatrix}I_{\ell} & 0\end{bmatrix} A_{\rm b} \begin{bmatrix}I_{\ell} \\ 0\end{bmatrix} + C_{\rm b}^\mathsf{T} C_{\rm b}\begin{bmatrix}I_{\ell} \\ 0\end{bmatrix} =0. \end{equation*} With \begin{equation*} T := [t_{i,j}] = X_{0,\rm b}X_{0,\rm b}^\mathsf{T} + 2 Y \begin{bmatrix} I_\ell & 0 \end{bmatrix} A_{\rm b}, \end{equation*} this method achieves an a posteriori error bound \begin{equation} \label{eq:bound_BeaGM17} \left\|y-y_r\right\|_{\mathbb{L}_2} \le 2(\sigma_{k+1} + \cdots + \sigma_n)\left\|u\right\|_{\mathbb{L}_2}+ \left(t_{\ell+1,\ell+1}\theta_{\ell+1} + \cdots + t_{n,n}\theta_n\right)^{1/2}\left\|z_0\right\|_2. \end{equation} This bound has several disadvantages: even though the values of $t_{i,i}$ for $i=\ell+1,\,\ldots,\,n$ are typically small, the Hankel singular values\xspace appear with an exponent, here $1/2$. Moreover, a fully balanced realization of $[A,X_0,C]$ is necessary, whose computation is expensive and can be numerically unstable. Also, the matrix $T$ depends on the reduced order $\ell$, which makes choosing $\ell$ a priori difficult. \paragraph{Singular perturbation approximations} are another class of model reduction techniques that are somewhat related to balanced truncation. Recently, the paper by Daragmeh, Hartmann, and Qatanani~\cite{DarHQ19} suggests a singular perturbation approximation for systems with nonzero initial condition. There, the authors provide another a posteriori error bound that is similar in spirit to the one of~\cite{BeaGM17} and which can be computed by solving a Sylvester equation that contains data of the reduced-order model. \section{Proposed Method} In this section we discuss the main contribution of our paper. Here we derive two kinds of ROMs.
The first one is a joint projection ROM in which a system with expanded system matrix is reduced by BT and in which both the system responses to the input and the initial condition are reduced at once. This ROM admits the error bound~\eqref{eq:bound_share}. Thereafter, we discuss a separate projection ROM in which both responses are reduced individually, which leads to the error bound~\eqref{eq:bound_sep}. Since both ROMs depend on design parameters, we discuss their interpretation and choice in detail and explain how to construct the ROMs in practice. \subsection{Joint Projection ROMs}\label{sec:method} Our first method consists of applying BT to a system with expanded input matrix. More precisely, the projection matrices $V_r,\,W_r$ and the {Hankel singular values\xspace} $\eta_1,\,\ldots,\,\eta_n$ are obtained by \begin{equation}\label{eq:BT_exp_joint} [V_r,W_r,\eta_1,\ldots,\eta_n] = \operatorname{BT}\left(A, \begin{bmatrix}B & \frac1{\beta\sqrt{2\alpha}}(A+\alpha I_n)X_0\end{bmatrix},C,r\right), \end{equation} where $\alpha$ and $\beta$ are real positive parameters. The ROM is then given by~\eqref{eq:ROM}, where the reduced matrices $A_r$, $B_r$, $C_r$, and $D_r$ are given by~\eqref{eq:proj_abcd}, and $X_{0,r}$ and $f$ (in contrast to~\eqref{eq:proj_x0f}) are given by \begin{subequations}\label{eq:Xfr} \begin{align} X_{0,r} &:= (A_r+\alpha I_r)^{-1}W_r^\mathsf{T}(A+\alpha I_n)X_0 \label{eq:x0r}\\ f(t) &:= F_rz_0{\rm e}^{-\alpha t}\quad\text{with } F_r:=CX_0-C_rX_{0,r}. \label{eq:fr} \end{align} \end{subequations} We call this reduction method \emph{joint--projection decaying shift balanced truncation}, or shortly \textbf{jShiftBT}, because the derivation of this method involves a decaying shift of the system state (cf. the proof of Theorem~\ref{thm:bound}). For this ROM we prove the error bound~\eqref{eq:bound_share} in Theorem~\ref{thm:bound}.
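For concreteness, the construction \eqref{eq:BT_exp_joint}--\eqref{eq:Xfr} can be sketched as follows. This is a minimal dense illustration with hypothetical routine names (\texttt{bt}, \texttt{jshift\_bt}), not the implementation used later; a bare-bones square-root BT routine is inlined only to keep the sketch self-contained.

```python
# Sketch of jShiftBT: BT on the expanded input matrix, then X_{0,r} and f.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def bt(A, B, C, r):
    # bare-bones dense square-root balanced truncation (illustration only)
    R = cholesky(solve_continuous_lyapunov(A, -B @ B.T), lower=True)
    L = cholesky(solve_continuous_lyapunov(A.T, -C.T @ C), lower=True)
    U, hsv, Zt = svd(L.T @ R)
    S = np.diag(hsv[:r] ** -0.5)
    return R @ Zt[:r].T @ S, L @ U[:, :r] @ S, hsv

def jshift_bt(A, B, C, D, X0, r, alpha, beta):
    n = A.shape[0]
    # expanded input matrix [B, (A + alpha I) X0 / (beta sqrt(2 alpha))]
    B_exp = np.hstack([B, (A + alpha * np.eye(n)) @ X0
                          / (beta * np.sqrt(2.0 * alpha))])
    V, W, eta = bt(A, B_exp, C, r)
    Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V
    # X_{0,r} = (A_r + alpha I_r)^{-1} W_r^T (A + alpha I_n) X0
    X0r = np.linalg.solve(Ar + alpha * np.eye(r),
                          W.T @ ((A + alpha * np.eye(n)) @ X0))
    Fr = C @ X0 - Cr @ X0r               # f(t) = F_r z0 exp(-alpha t)
    return Ar, Br, Cr, D, X0r, Fr, eta
```

By construction $C_rX_{0,r} + F_r = CX_0$, which is exactly the output matching $y_r(0) = y(0)$.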
\begin{theorem}\label{thm:bound} Let the LTI system~\eqref{eq:LTI} be asymptotically stable. Let $\alpha,\,\beta$ be real positive scalars. If $\eta_r > \eta_{r+1}$, then the ROM~\eqref{eq:ROM} defined by~\eqref{eq:BT_exp_joint},~\eqref{eq:proj_abcd}, and~\eqref{eq:Xfr} is asymptotically stable and the error is bounded by~\eqref{eq:bound_share}, provided that $-\alpha$ is not an eigenvalue of $A_r$. Moreover, we have $y_r(0) = y(0)$. \end{theorem} \begin{proof} The proof consists of the derivation of the ROM and proceeds in three steps. \emph{Step 1:} First, similarly to the method of Baur, Benner, and Feng~\cite{BauBF14}, the state is shifted to $\tilde{x}(t):=x(t)-x_0{\rm e}^{-\alpha t}$, i.\,e., the shift decays with rate $\alpha$. Then $\tilde{x}(0)=0$ and for $\dot{\tilde{x}}(\cdot)$ we obtain \begin{align*} \dot{\tilde{x}}(t) & =\dot{x}(t)+\alpha x_0{\rm e}^{-\alpha t} \\ &= Ax(t)+Bu(t)+\alpha x_0{\rm e}^{-\alpha t}\\ &= A\big(\tilde{x}(t)+x_0{\rm e}^{-\alpha t}\big)+Bu(t)+\alpha x_0{\rm e}^{-\alpha t} \\ &= A\tilde{x}(t)+Bu(t)+(A+ \alpha I_n)x_0{\rm e}^{-\alpha t}. \end{align*} For the output we get \begin{equation*} y(t) = C\big(\tilde{x}(t)+x_0{\rm e}^{-\alpha t}\big)+Du(t)=C\tilde{x}(t)+Du(t)+\frac{\beta\sqrt{2\alpha}}{\beta\sqrt{2\alpha}}CX_0z_0{\rm e}^{-\alpha t}. \end{equation*} Thus we obtain a linear system with homogeneous initial condition, i.\,e., \begin{align*} \dot{\tilde{x}}(t) &= A\tilde{x}(t) + \begin{bmatrix} B & \frac1{\beta\sqrt{2\alpha}}(A+\alpha I_n)X_0 \end{bmatrix} \tilde{u}(t), \quad \tilde{x}(0)=0, \\ y(t) &= C\tilde{x}(t) + \begin{bmatrix} D & \frac1{\beta\sqrt{2\alpha}}CX_0 \end{bmatrix} \tilde{u}(t) \quad \text{with }\tilde{u}(t) = \begin{bmatrix}u(t) \\ \beta\sqrt{2\alpha}{\rm e}^{-\alpha t}z_0 \end{bmatrix}. 
\end{align*} \emph{Step 2:} We apply standard BT to this system, which amounts to~\eqref{eq:BT_exp_joint} and obtain the ROM \begin{subequations} \begin{align} \dot{\tilde{x}}_r(t) &= A_r \tilde{x}_r(t) + \begin{bmatrix} B_r & \frac1{\beta\sqrt{2\alpha}}W_r^\mathsf{T}(A+\alpha I_n)X_0 \end{bmatrix} \tilde{u}(t), \quad \tilde{x}_r(0) = 0 \label{eq:ROMproofx} \\ y_r(t) &= C_r\tilde{x}_r(t)+ \begin{bmatrix}D & \frac1{\beta\sqrt{2\alpha}}CX_0 \end{bmatrix} \tilde{u}(t)\label{eq:ROMproofy} \end{align} \end{subequations} as in~\eqref{eq:proj}. For $t=0$ we have \begin{equation*} y_r(0) = C_r\tilde{x}_r(0) + \begin{bmatrix}D & \frac1{\beta\sqrt{2\alpha}}CX_0 \end{bmatrix} \tilde{u}(0) =Du(0)+CX_0z_0=y(0). \end{equation*} Since $\eta_r > \eta_{r+1}$ by assumption, the ROM is asymptotically stable~\cite[Section~7.2]{Ant05}. If $u \in \mathbb{L}_2^m$, then we have $\tilde{u} \in \mathbb{L}_2^{m+q}$ and therefore $y,\,y_r \in \mathbb{L}_2^p$ and by~\eqref{eq:bt_bound} we have \begin{equation}\label{eq:boundtilde} \left\|y-y_r\right\|_{\mathbb{L}_2}\le 2(\eta_{r+1}+\cdots +\eta_n)\left\|\tilde{u}\right\|_{\mathbb{L}_2}. \end{equation} Inserting \begin{equation*} \left\| \tilde{u} \right\|_{\mathbb{L}_2} \le \left\| u \right\|_{\mathbb{L}_2} + \beta \big\|\sqrt{2\alpha}{\rm e}^{-\alpha \cdot }z_0\big\|_{\mathbb{L}_2} =\left\|u\right\|_{\mathbb{L}_2} + \beta \left\|z_0\right\|_2 \big\|\sqrt{2\alpha}{\rm e}^{-\alpha \cdot} \big\|_{\mathbb{L}_2} =\left\|u\right\|_{\mathbb{L}_2} + \beta \left\|z_0\right\|_2 \end{equation*} into~\eqref{eq:boundtilde} yields the claimed bound~\eqref{eq:bound_share}. \emph{Step 3:} We set $x_r(t) := \tilde{x}_r(t) + x_r(0){\rm e}^{-\alpha t}$ (i.\,e., we ``unshift'' $\tilde{x}_r(\cdot)$). Here we set $x_r(0) = X_{0,r}z_0$ for an $X_{0,r}$ that is yet to be determined.
Putting this $x_r(\cdot)$ into~\eqref{eq:ROMproofx} yields \begin{align*} \dot{x}_r(t) &= A_rx_r(t) + B_ru(t) + \left(W_r^\mathsf{T}(A+\alpha I_n)X_0-\big(A_r+\alpha I_r\big)X_{0,r}\right)z_0{\rm e}^{-\alpha t}, \end{align*} which reduces to~\eqref{eq:ROMx} by choosing $X_{0,r}$ as in~\eqref{eq:x0r}. Inserting $x_r(\cdot)$ into~\eqref{eq:ROMproofy} yields \begin{align*} y_r(t) &= C_r x_r(t) + \begin{bmatrix} D & \frac1{\beta\sqrt{2\alpha}}CX_0 \end{bmatrix} \tilde{u}(t) -C_rX_{0,r}z_0{\rm e}^{-\alpha t} \\ &= C_rx_r(t) + Du(t) + F_rz_0{\rm e}^{-\alpha t}, \end{align*} which is~\eqref{eq:ROMy} for $F_r$ as in~\eqref{eq:fr}. This concludes the proof. \end{proof} \begin{remark} The property $y_r(0)=y(0)$ is shared with \textbf{TrlBT}. The other methods do not guarantee this. \end{remark} \begin{remark} The term $f(t) = F_rz_0{\rm e}^{-\alpha t}$ in our ROM is non-standard. It consists of a vector times a scalar-valued exponential function. Hence it is easy to compute. However, if a ROM without $f$ is desired we can get rid of it at the price of expanding the ROM. Indeed, the ROM may be reformulated by appending $\phi(t):={\rm e}^{-\alpha t}$ to $x_r(t)$. This gives \begin{align*} \begin{bmatrix}\dot{x}_r(t)\\ \dot{\phi}(t)\end{bmatrix} &=\begin{bmatrix}A_r&0\\ 0&-\alpha\end{bmatrix} \begin{bmatrix}x_r(t)\\ \phi(t)\end{bmatrix} +\begin{bmatrix}B_r\\ 0\end{bmatrix}u, \quad \begin{bmatrix}x_r(0)\\ \phi(0)\end{bmatrix}= \begin{bmatrix}X_{0,r}z_0\\ 1\end{bmatrix}\\ y_r &= \begin{bmatrix}C_r & F_rz_0 \end{bmatrix} \begin{bmatrix}x_r(t) \\ \phi(t)\end{bmatrix} +D_ru(t). \end{align*} However, the initial value is not linear in $z_0$ anymore (but affine linear). Also, the output matrix of the ROM depends on $z_0$, which may be undesirable. These disadvantages can be removed by reformulating the ROM by appending $\psi(t):=R_rz_0{\rm e}^{-\alpha t}$ to $x_r(t)$.
This yields \begin{align*} \begin{bmatrix}\dot{x}_r(t)\\ \dot{\psi}(t)\end{bmatrix} &=\begin{bmatrix}A_r&0\\ 0&-\alpha I\end{bmatrix} \begin{bmatrix}x_r(t) \\ \psi(t)\end{bmatrix} +\begin{bmatrix}B_r\\ 0\end{bmatrix}u(t), \quad \begin{bmatrix}x_r(0)\\ \psi(0)\end{bmatrix}= \begin{bmatrix}X_{0,r}\\ R_r\end{bmatrix}z_0\\ y_r(t) &= \begin{bmatrix} C_r & L_r \end{bmatrix} \begin{bmatrix}x_r(t) \\ \psi(t) \end{bmatrix} +D_ru(t), \end{align*} where $F_r =: L_rR_r$ is a rank-revealing decomposition of $F_r$. This ROM is completely in standard form, but is of order $r+ \operatorname{rank}(F_r)$ which is bounded from above by $r + \min\{p,q\}$. \end{remark} \begin{remark} As a case study consider the case $q=1$ and $X_0$ being an eigenvector of $A$ corresponding to an eigenvalue $-\alpha \in \mathbb{R}$. Then $(A+\alpha I)X_0=0$ and our method gives the same projection matrices as standard BT. Now if $X_0$ happens to be both an uncontrollable mode and easy to observe, it will be truncated in the ROM, i.\,e., $W_r^\mathsf{T} X_0=0$ and $X_{0,r}=0$. On the other hand it has, as an initial value, significant influence on the output $y(\cdot)$. How can our method work in this situation when standard BT does not? The answer lies in the extra term $f(\cdot)$ that reduces to $CX_0z_0{\rm e}^{-\alpha t}$ in this case, i.\,e., it reintroduces the mode that has been truncated by BT. \end{remark} \subsection{Separate Projection ROMs} \label{sec:composite} The ROM constructed in Subsection~\ref{sec:method} is a joint projection ROM where one has to specify the parameter $\beta$ to put an emphasis either on the input or on the initial condition. However, the reduction error may be large if, e.\,g., a high weight is put on the input error ($\beta$ is large) and $\left\| z_0 \right\|_2$ is large, since then the expression $\left\|u\right\|_{\mathbb{L}_2}+\beta\left\|z_0\right\|_2$ is large, too.
So the motivation of this subsection is to reduce the two parts of the system individually and to construct a separate--projection ROM out of the two reduced subsystems similarly to~\cite{BeaGM17}. To begin with, we write the output of the system~\eqref{eq:LTI} as \begin{equation*} y(t) = \underbrace{C\mathrm{e}^{At}x_0}_{=: y_{x_0}(t)} + \underbrace{\int_{0}^t C\mathrm{e}^{A(t-\tau)}Bu(\tau) \mathrm{d}\tau + Du(t)}_{=:y_u(t)}. \end{equation*} The output component $y_u(\cdot)$ is given by the output of the system \begin{align} \label{eq:LTIhom} \begin{split} \dot{x}(t) &= Ax(t) + Bu(t), \quad x(0) = 0, \\ y_u(t) &= Cx(t) + Du(t), \end{split} \end{align} while $y_{x_0}(\cdot)$ is the output of the system \begin{align} \label{eq:LTIinh} \begin{split} \dot{\hx}(t) &= A \hx(t), \quad \hx(0) = x_0 = X_0z_0, \\ y_{x_0}(t) &= C \hx(t). \end{split} \end{align} Now we reduce the system~\eqref{eq:LTIhom} using standard balanced truncation, i.\,e., \begin{equation*} [V_{k},W_{k},\sigma_1,\,\ldots,\sigma_n] = \operatorname{BT}(A,B,C,k) \end{equation*} and the ROM is given by \begin{align*} \dot{x}_k(t) &= A_k x_k(t) + B_k u(t), \quad x_k(0) = 0, \\ y_{u,k}(t) &= C_k x_k(t) + D_k u(t) \end{align*} with $A_k = W_k^\mathsf{T} A V_k$, $B_k = W_k^\mathsf{T} B$, $C_k = CV_k$, and $D_k = D$. The system~\eqref{eq:LTIinh} is reduced using the approach from Subsection~\ref{sec:method} for $\beta = 1$.
This results in performing balanced truncation on a shifted system, i.\,e., \begin{equation}\label{eq:BT_exp_sep} \big[\hat{V}_{\ell},\hat{W}_{\ell},\theta_1,\,\ldots,\theta_n\big] = \operatorname{BT}\left(A,\frac{1}{\sqrt{2\alpha}}(A+\alpha I_n)X_0,C,\ell\right) \end{equation} and the corresponding ROM is \begin{align*} \dot{\hx}_{\ell}(t) &= \hat{A}_{\ell} \hx_\ell(t), \quad \hx_\ell(0) = X_{0,\ell} z_0, \\ y_{x_0,\ell}(t) &= \hat{C}_{\ell} \hx_\ell(t) + \hat{F}_\ell z_0 \mathrm{e}^{-\alpha t}, \end{align*} with $\hat{A}_\ell = \hat{W}_\ell^\mathsf{T} A \hat{V}_\ell$, $\hat{C}_\ell = C \hat{V}_\ell$, $X_{0,\ell} = \big(\hA_\ell+\alpha I_\ell\big)^{-1}\hat{W}_\ell^\mathsf{T}(A+\alpha I_n)X_0$, and $\hat{F}_\ell = CX_0 - \hat{C}_\ell X_{0,\ell}$. With the reduced subsystems above we can now construct the overall ROM \begin{align}\label{eq:compositeROM} \begin{split} \begin{bmatrix} \dot{x}_k(t) \\ \dot{\hx}_\ell(t) \end{bmatrix} &= \begin{bmatrix} A_k & 0 \\ 0 & \hat{A}_\ell \end{bmatrix} \begin{bmatrix} x_k(t) \\ \hat{x}_\ell(t) \end{bmatrix} + \begin{bmatrix} B_k \\ 0 \end{bmatrix} u(t), \quad \begin{bmatrix} x_k(0) \\ \hat{x}_\ell(0) \end{bmatrix} = \begin{bmatrix} 0 \\ X_{0,\ell} z_0 \end{bmatrix}, \\ y_r(t) &:= y_{u,k}(t) + y_{x_0,\ell}(t) = \begin{bmatrix} C_k & \hat{C}_\ell \end{bmatrix} \begin{bmatrix} x_k(t) \\ \hat{x}_\ell(t) \end{bmatrix} + D_k u(t) + \hat{F}_\ell z_0 \mathrm{e}^{-\alpha t}. \end{split} \end{align} We call this method \emph{separate--projection decaying shift balanced truncation}, shortly \textbf{sShiftBT}. Now the following result is an immediate consequence of a combination of the standard BT error bound and Theorem~\ref{thm:bound}. \begin{theorem} Let the LTI system~\eqref{eq:LTI} be asymptotically stable. Let $\alpha$ be a real positive scalar.
If $\sigma_k > \sigma_{k+1}$ and $\theta_\ell > \theta_{\ell+1}$, then the ROM~\eqref{eq:compositeROM} is asymptotically stable and the error is bounded by~\eqref{eq:bound_sep}, provided that $-\alpha$ is not an eigenvalue of $\hat{A}_\ell$. Moreover, we have $y_r(0) = y(0)$. \end{theorem} \begin{remark} An advantage of the separate projection ROM is that it is a feasible ROM for all possible inputs and initial values. In contrast, one may have to construct several joint projection ROMs for different values of $\beta$ in order to cover all possible inputs and initial values one wants to simulate the model with. Moreover, note that the reduced order of the separate projection ROM is $r = k+\ell$. However, since the reduced state matrix is of block-diagonal structure, the two reduced subsystems can be simulated individually. In particular, we have $X_{0,\ell} z_0 = \sum_{i=1}^q \zeta_i x_{0}^{(i)}$, where $x_{0}^{(1)},\,\ldots,\,x_{0}^{(q)}$ denote the columns of $X_{0,\ell}$ and $\zeta_1,\,\ldots,\,\zeta_q$ the entries of $z_0$. Thus, if the model has to be simulated for a lot of different initial conditions, one could precompute \begin{equation*} {y}_{x_0,\ell}^{(i)}(t) := \hat{C}_\ell \mathrm{e}^{\hat{A}_\ell t} x_{0}^{(i)}, \quad i = 1,\,\ldots,\,q. \end{equation*} Then for the particular initial condition $x_0 = X_0z_0$, \begin{equation*} {y}_{x_0,\ell}(t) = \sum_{i=1}^q \zeta_i {y}_{x_0,\ell}^{(i)}(t) + \hat{F}_\ell z_0 \mathrm{e}^{-\alpha t} \end{equation*} can be evaluated more efficiently and the online costs are dominated by a ROM of reduced order $k$. \end{remark} \subsection{Discussion of the Parameters $\alpha$ and $\beta$}\label{sec:disc} All that remains is to choose the two parameters $\alpha$ and $\beta$ in the method. Let us begin by noting that for $\alpha\rightarrow 0$ our decaying--shift approach $\tilde{x}(t)=x(t)-x_0{\rm e}^{-\alpha t}$ reduces to the constant one of \textbf{TrlBT}.
Also, for $\alpha \rightarrow \infty$, the function $\left(\sqrt{2\alpha}\,{\rm e}^{-\alpha \cdot}\right)^2$ converges to Dirac's $\delta$ impulse used in \textbf{AugBT} and \textbf{BT-BT}. Moreover, for $\beta \to \infty$ we obtain the standard BT ROM. In that sense our approach contains the existing ones. However, for all these extreme cases of $\alpha$ or $\beta$ approaching zero or $\infty$, our error bound becomes very large. This is either because $\beta$ appears explicitly in it or because the expanded input matrix $B_{\rm exp} := \begin{bmatrix} B & \frac{1}{\beta\sqrt{2\alpha}}(A+\alpha I_n)X_0\end{bmatrix}$ contains large elements leading to large Hankel singular values\xspace. So, good values for $\alpha$ and $\beta$ are neither too large nor too small. With $c_u:=2(\eta_{r+1}+\cdots+\eta_n)$ and $c_{x_0}:= \beta c_u$ we write the error bound~\eqref{eq:bound_share} as \begin{equation}\label{eq:bound_with_cu} \left\| y-y_r \right\|_{\mathbb{L}_2} \le c_u\cdot\left\|u\right\|_{\mathbb{L}_2} + c_{x_0} \cdot {\|z_0\|}_2. \end{equation} Note that $c_u$, $c_{x_0}$, and the Hankel singular values\xspace $\eta_i$ all depend on $\alpha$ and $\beta$. We will also write $c_u(\alpha,\beta)$ and $c_{x_0}(\alpha,\beta)$ whenever we want to emphasize this dependency. Observe that both summands of~\eqref{eq:bound_with_cu} are influenced by $\alpha$ in the same way. Hence $\alpha$ is a tuning parameter, i.\,e., optimizing it for $c_u$ also improves the value of $c_{x_0}$. An ad hoc heuristic candidate is the choice \begin{equation*} \alpha_{{\rm heur}}=\frac{{\|AX_0\|}_{\rm F}}{{\|X_0\|}_{\rm F}} \end{equation*} which minimizes $\big\|\tfrac{1}{\sqrt{\alpha}}(A+\alpha I_n)X_0\big\|_{\rm F}$ and hence, up to a constant factor, the norm of the extra block in $B_{\rm exp}$. Another possibility that certainly comes to mind is the negative spectral abscissa \begin{equation*} \tilde{\alpha}_{{\rm heur}} = -\max_{\lambda\in \Lambda(A)}\operatorname{Re}(\lambda).
\end{equation*} With this choice, the shift $x(t)-\tilde{x}(t) = x_0{\rm e}^{-\alpha t}$ decays at the same rate as the slowest decaying mode of the homogeneous system $\dot{x}(t) = A x(t)$. Of course, $\alpha$ can also be obtained by numerical optimization methods, see Subsection~\ref{sec:alpha} for details. We will assess these approaches with numerical examples in Section~\ref{sec:eval}. For $\beta$, things are different because it influences $c_u$ and $c_{x_0}$ in different ways. Note that by our construction of $B_{\rm exp}$, $c_u$ is monotonically decreasing in $\beta$ whereas $c_{x_0} = \beta c_u$ is monotonically increasing. Thus, by increasing $\beta$ we improve the input part of the error bound $c_u\cdot{\|u\|}_{\mathbb{L}_2}$, but worsen the initial value part $c_{x_0}\cdot {\|z_0\|}_2$, and vice versa. So, if nothing is known about $u$ and $z_0$, \emph{then $\beta$ should be considered a design parameter that is provided by the user}. For example, if we want $c_u$ to be a hundred times smaller than $c_{x_0}$, then we set $\beta=100$. However, in certain situations, $\beta$ can also be a tuning parameter. Assuming that (typical or approximate) values of ${\|u\|}_{\mathbb{L}_2}$ and ${\|z_0\|}_2$, or at least their ratio, are known, we can optimize the right-hand side of~\eqref{eq:bound_with_cu} over $\alpha$ and $\beta$. In this situation an ad hoc value would be \begin{equation}\label{eq:betaadhoc} \beta_{\rm heur}={{\|u\|}_{\mathbb{L}_2}}/{{\|z_0\|}_2}, \end{equation} as this choice balances the two summands in~\eqref{eq:bound_with_cu}. It turns out that this choice is almost optimal, with a suboptimality factor of at most 2, as the following lemma shows. Thus, further numerical optimization for $\beta$ is rather futile. \begin{lem} Let ${\|u\|}_{\mathbb{L}_2}$ and ${\|z_0\|}_2$ be given and choose $\beta_{\rm heur}={{\|u\|}_{\mathbb{L}_2}}/{{\|z_0\|}_2}$.
For $\alpha,\,\beta > 0$ define $$e(\alpha,\beta) := c_u(\alpha,\beta) (\left\|u\right\|_{\mathbb{L}_2} + \beta {\|z_0\|}_2).$$ Then for any fixed $\alpha_0 > 0$ we have \begin{equation} \label{eq:subopt1} e(\alpha_0,\beta_{\rm heur}) \le 2 \min_{\beta>0} e(\alpha_0,\beta). \end{equation} Moreover, \begin{equation} \label{eq:subopt2} \min_{\alpha >0} e(\alpha,\beta_{\rm heur}) \le 2 \min_{\alpha,\beta>0} e(\alpha,\beta). \end{equation} \end{lem} \begin{proof} We first show \eqref{eq:subopt1}. For $\alpha,\,\beta >0$ define $$\mu({\alpha},\beta):=\max\left\{c_u(\alpha,\beta)\left\|u\right\|_{\mathbb{L}_2},\, c_{x_0}(\alpha,\beta) {\|z_0\|}_2\right\}.$$ Now let $\alpha_0>0$ be arbitrary but fixed. Note that $\beta_{\rm heur}$ minimizes $\mu({\alpha_0},\cdot)$ because of the monotonicity of $c_u(\alpha_0,\cdot)$ and $c_{x_0}(\alpha_0,\cdot)$ and since \begin{equation*} c_u(\alpha_0,\beta_{\rm heur})\left\|u\right\|_{\mathbb{L}_2} = c_{x_0}(\alpha_0,\beta_{\rm heur}) {\|z_0\|}_2 = \beta_{\rm heur} c_{u}(\alpha_0,\beta_{\rm heur}) {\|z_0\|}_2. \end{equation*} Then for all $\beta > 0$ we obtain \begin{equation*} e(\alpha_0,\beta_{\rm heur}) = 2\mu(\alpha_0,\beta_{\rm heur}) \le 2\mu(\alpha_0,\beta) \le 2e(\alpha_0,\beta). \end{equation*} Next we show \eqref{eq:subopt2}. Define $\alpha_* := \argmin_{\alpha > 0} e(\alpha,\beta_{\rm heur})$ and $\big(\hat{\alpha}_*,\hat{\beta}_*\big) = \argmin_{\alpha,\beta > 0} e(\alpha,\beta)$. Then with the help of \eqref{eq:subopt1} we obtain the estimate \begin{equation*} e(\alpha_*,\beta_{\rm heur}) \le e\big(\hat{\alpha}_*,\beta_{\rm heur}\big) \le 2 e\big(\hat{\alpha}_*,\hat{\beta}_*\big), \end{equation*} which concludes the proof. \end{proof} \subsection{Efficient Construction of the ROMs} If $\alpha$ and $\beta$ are known, then we can construct the ROMs by~\eqref{eq:BT_exp_joint} or~\eqref{eq:BT_exp_sep} using a BT implementation of choice.
However, if the ROMs have to be determined for several choices of $\alpha$ and $\beta$, or if $\alpha$ and $\beta$ are to be determined inside an optimization loop, then this can get prohibitively expensive. In this subsection we explain how the reduced-order models presented in Subsections~\ref{sec:method} and~\ref{sec:composite} can be efficiently determined for many values of $\alpha$ and $\beta$. Consider the three Lyapunov equations \begin{subequations}\label{eq:lyap} \begin{align} A P + P A^\mathsf{T} + B B^\mathsf{T} = 0, \label{eq:lyapB} \\ A \hP + \hP A^\mathsf{T} + X_0 X_0^\mathsf{T} = 0, \label{eq:lyapX0} \\ A^\mathsf{T} Q + Q A + C^\mathsf{T} C = 0. \label{eq:lyapC} \end{align} \end{subequations} Assume that we have computed low-rank factorizations of the solutions \begin{equation*} P = R R^\mathsf{T}, \quad \hP = \hR \hR^\mathsf{T}, \quad Q = L L^\mathsf{T}. \end{equation*} These factors are directly obtained by many established methods such as the ADI method or Krylov subspace methods (see, e.\,g.,~\cite{BenS13,Sim07}) for which well-tested software exists, e.\,g.,~\cite{BenKS20}. Recall that in Subsection~\ref{sec:method} we want to reduce the system $\left[A,\begin{bmatrix}B & \frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n) X_0 \end{bmatrix}, C \right]$, so we need to determine its controllability Gramian given by the solution of the Lyapunov equation \begin{equation*} A \cP(\alpha,\beta) + \cP(\alpha,\beta) A^\mathsf{T} + \begin{bmatrix}B & \frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n) X_0 \end{bmatrix}\begin{bmatrix}B & \frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n) X_0 \end{bmatrix}^\mathsf{T} = 0.
\end{equation*} By multiplying~\eqref{eq:lyapX0} by $\frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n)$ from the left and by $\frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n)^\mathsf{T}$ from the right and adding~\eqref{eq:lyapB}, we see that \begin{equation*} \cP(\alpha,\beta) = P + \left(\frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n)\right) \hat{P} \left(\frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n)\right)^\mathsf{T}. \end{equation*} In particular, we have the factorization \begin{equation*} \cP(\alpha,\beta) = \cR(\alpha,\beta)\cR(\alpha,\beta)^\mathsf{T} = \begin{bmatrix} R & \frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n) \hat{R} \end{bmatrix}\begin{bmatrix} R & \frac{1}{\beta\sqrt{2\alpha}}(A + \alpha I_n) \hat{R} \end{bmatrix}^\mathsf{T}. \end{equation*} Now the ROM is determined using the SVD of the matrix \begin{equation}\label{eq:M} M(\alpha,\beta) := L^\mathsf{T} \cR(\alpha,\beta) = \begin{bmatrix} L^\mathsf{T} R & \frac{1}{\beta\sqrt{2\alpha}}L^\mathsf{T} A \hat{R} + \frac{\sqrt{\alpha}}{\sqrt{2}\beta} L^\mathsf{T} \hat{R} \end{bmatrix}. \end{equation} In particular, the nonzero Hankel singular values $\eta_i$ are the nonzero singular values of $M(\alpha, \beta)$. Note that $L^\mathsf{T} R$, $L^\mathsf{T} A \hat{R}$, and $L^\mathsf{T} \hat{R}$ are independent of $\alpha$ and $\beta$ and are typically all small matrices that can be efficiently precomputed. Solving the three Lyapunov equations is by far the dominant computational burden; computing the SVDs, even for many values of $\alpha$ and $\beta$, is comparatively cheap. This allows computing the ROM inside an optimization loop to obtain optimal values of $\alpha$ and $\beta$. This will be done in the following section. \begin{remark} The construction of the separate projection ROM from Subsection~\ref{sec:composite} and \mbox{\textbf{BT-BT}} also makes use of the same three Lyapunov equations in~\eqref{eq:lyap}.
Also the ROMs of \textbf{TrlBT} and \textbf{AugBT}, while constructed differently in~\cite{BauBF14} and~\cite{HeiRA11}, could be built by solving these three equations. \end{remark} \subsection{Optimizing the Parameter $\alpha$}\label{sec:alpha} Now we return to the optimization of $\alpha$. The value $\alpha_{\rm heur}$ is no more than an educated guess; with the results of the previous subsection we are ready to use numerical optimization machinery. First consider the bound~\eqref{eq:bound_share}, namely \begin{equation*} \left\| y - y_{r}\right\|_{\mathbb{L}_2} \le \underbrace{2 \left( \eta_{r+1}(\alpha) + \ldots + \eta_n(\alpha) \right)}_{=: c_u(\alpha)}\left( \left\| u \right\|_{\mathbb{L}_2} + \beta \left\| z_0 \right\|_2\right). \end{equation*} Our goal is to find $\alpha_*$ such that $c_{u}(\alpha_*)$ is minimal, as this minimizes the error bound. Note that the Hankel singular values $\eta_i(\alpha)$ and hence $y_r(\cdot)$ also depend on $\beta$. However, the value of $\beta$ as well as the reduced order $r$ are fixed, so we do not list them explicitly as arguments since we only focus on the optimization of $\alpha$ in this subsection. Note, however, that the optimization has to be repeated for every reduced order and parameter $\beta$ of interest, because the optimal value $\alpha_*$ depends on both. First, recall that the nonzero Hankel singular values $\eta_i(\alpha)$ are the nonzero singular values of the matrix $M(\alpha) = \begin{bmatrix} L^\mathsf{T} R & \frac{1}{\beta\sqrt{2\alpha}}L^\mathsf{T} A \hat{R} + \frac{\sqrt{\alpha}}{\sqrt{2}\beta} L^\mathsf{T} \hat{R} \end{bmatrix}$. Hence, $c_{u}(\cdot)$ is continuous and piecewise smooth. The only critical points $\alpha$, where $c_u(\cdot)$ may not be smooth, are those for which $\eta_{r}(\alpha)$ and $\eta_{r+1}(\alpha)$ coincide or where the smallest nonzero Hankel singular value goes through zero.
The latter is usually several orders of magnitude smaller than $\eta_{r+1}(\cdot)$ and hence affects $c_u(\cdot)$ only marginally. Therefore, we do not consider this case any further here. However, note that in principle, the problem of minimizing $c_u(\cdot)$ is a \emph{non-smooth problem}, meaning that local minima may be attained at points at which $c_u(\cdot)$ is not differentiable. Next we show that, under a weak assumption, the case $\eta_r(\alpha_0) = \eta_{r+1}(\alpha_0)$ cannot lead to a local minimum at $\alpha_0$, as summarized in the following lemma. \begin{lem} Assume that $\eta_{r+1}(\cdot)$ is not differentiable at $\alpha_0 > 0$ and that $\eta_{r-1}(\alpha_0) > \eta_r(\alpha_0) = \eta_{r+1}(\alpha_0) > \eta_{r+2}(\alpha_0)$. Let $M(\cdot)$ have constant rank in a neighborhood of $\alpha_0$. Then $\alpha_0$ is not a local minimizer of $c_u(\cdot)$. \end{lem} \begin{proof} Since the singular value curves of $M(\cdot)$ can be chosen to be real analytic, the function $\eta_{r+1}(\cdot)$ is \emph{semi-differentiable} at $\alpha_0$ and the left and right derivatives \begin{align*} \frac{\mathrm{d}_-}{\mathrm{d} \alpha} \eta_{r+1}(\alpha_0) := \lim_{\alpha \to \alpha_0^-} \frac{\eta_{r+1}(\alpha) - \eta_{r+1}(\alpha_0)}{\alpha - \alpha_0}, \quad\frac{\mathrm{d}_+}{\mathrm{d}\alpha} \eta_{r+1}(\alpha_0) := \lim_{\alpha \to \alpha_0^+} \frac{\eta_{r+1}(\alpha) - \eta_{r+1}(\alpha_0)}{\alpha - \alpha_0} \end{align*} exist.
We further have \begin{enumerate}[a)] \item $\frac{\mathrm{d}_+}{\mathrm{d}\alpha}\eta_{r+1}(\alpha_0)=\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r}(\alpha_0)$, since ${\eta_r(\cdot)|}_{(\alpha_0-\varepsilon, \alpha_0)}$ and ${\eta_{r+1}(\cdot)|}_{[\alpha_0,\alpha_0+\varepsilon)}$ form a smooth singular value curve for some $\varepsilon > 0$; \item $\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r}(\alpha_0) <\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r+1}(\alpha_0)$, since $\eta_{r+1}(\cdot)$ is the smaller singular value; \item the function $c_u(\alpha)-2\eta_{r+1}(\alpha)=2\sum_{i=r+2}^n \eta_i(\alpha)$ is smooth in a neighborhood of $\alpha_0$ (using $\eta_{r+1}(\alpha_0) > \eta_{r+2}(\alpha_0)$ and the constant-rank assumption). \end{enumerate} Together we have \begin{equation*} \frac{\mathrm{d}_+}{\mathrm{d}\alpha}c_u(\alpha_0)-\frac{\mathrm{d}_-}{\mathrm{d}\alpha}c_u(\alpha_0) \stackrel{\text{c)}}{=} 2\left(\frac{\mathrm{d}_+}{\mathrm{d}\alpha}\eta_{r+1}(\alpha_0)-\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r+1}(\alpha_0)\right) \stackrel{\text{a)}}{=} 2\left(\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r}(\alpha_0)-\frac{\mathrm{d}_-}{\mathrm{d}\alpha}\eta_{r+1}(\alpha_0)\right) \stackrel{\text{b)}}{<} 0, \end{equation*} i.\,e., the right derivative of $c_u(\cdot)$ jumps downwards at $\alpha_0$. However, for $\alpha_0$ to be a local minimum, the right derivative would have to jump from a negative to a positive value. Thus, $\alpha_0$ is not a local minimum. \end{proof} Figure~\ref{fig:alphaplot} shows a typical plot of $c_{u}(\cdot)$. \begin{figure} \centering \includegraphics{alpha_CDplayer_r20_beta1.pdf} \caption{Typical behavior of $c_u(\cdot)$.
Note that there are two non-differentiable points near $10^4$.} \label{fig:alphaplot} \end{figure} We observe three segments: for small values of $\alpha$, $c_{u}(\cdot)$ is monotonically decreasing; thereafter follows a region in which $c_u(\cdot)$ is non-monotonic and contains local minima; and after that, for large values of $\alpha$, $c_u(\cdot)$ is monotonically increasing. This can be explained by looking at~\eqref{eq:M}: for small (or large) $\alpha$, the term $\tfrac{1}{\beta\sqrt{2\alpha}}L^\mathsf{T} A\hat{R}$ (or $\tfrac{\sqrt{\alpha}}{\beta\sqrt{2}}L^\mathsf{T} \hat{R}$) dominates $M(\alpha)$ and its singular values, in contrast to the case of a medium-sized $\alpha$, for which the three terms in $M(\alpha)$ are of comparable size. Aiming at least for a local minimum of $c_u(\cdot)$, we proceed in the following steps: \emph{Step 1: We use a sampling procedure} to obtain a good value for $\alpha$. One possibility is to sample $c_u(\alpha)$ over a large range of magnitudes in order to find, at least approximately, the region in which local minima are present, for instance by choosing $\alpha \in \{10^j \; | \; j = j_{\min}, j_{\min}+1,\, \ldots,\,j_{\max}\}$ with integers $j_{\min} \le j_{\max}$. \emph{Step 2: We perform an optional local optimization} to improve the best sample value $c_u(\alpha)$ from Step 1. In our approach we use standard gradient-based optimization methods implemented in the \textsc{Matlab} function \texttt{fmincon}. To address the possible non-smooth nature of the problem, one could also use more sophisticated non-smooth optimization methods such as \texttt{GRANSO} \cite{CurMO17}. It remains to determine the derivative of $c_{u}(\cdot)$. As discussed above, it does not exist for all $\alpha$; nevertheless, it exists almost everywhere. We need the following result for the derivative of the singular values.
\begin{lem}[\cite{Lan64}] \label{lem:diffsingval} Consider the differentiable function $Z : (-\varepsilon,\varepsilon) \to \mathbb{R}^{n \times m}$. Let $\sigma(t)$ be a singular value of $Z(t)$ converging to a simple singular value $\sigma_0$ of $Z_0 := Z(0)$. Let $u_0 \in \mathbb{R}^m$ and $v_0 \in \mathbb{R}^n$ be the corresponding right and left singular vectors, i.\,e., $\left\| u_0 \right\|_2 = \left\| v_0 \right\|_2 = 1$, $Z_0 u_0 = \sigma_0 v_0$ and $v_0^\mathsf{T} Z_0 = \sigma_0 u_0^\mathsf{T}$. Then \begin{equation*} \left.\frac{\mathrm{d}}{\mathrm{d}t}\sigma(t)\right|_{t=0} = v_0^\mathsf{T} \left(\left.\frac{\mathrm{d}}{\mathrm{d}t}Z(t)\right|_{t=0}\right) u_0. \end{equation*} \end{lem} We have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} \alpha} M(\alpha) &= \begin{bmatrix} 0 & -\frac{1}{2\beta\alpha\sqrt{2\alpha}} L^\mathsf{T} A \hR + \frac{1}{2\beta\sqrt{2\alpha}} L^\mathsf{T} \hR \end{bmatrix} \end{align*} and with Lemma~\ref{lem:diffsingval} we finally obtain \begin{align*} \frac{\mathrm{d}}{\mathrm{d} \alpha} c_{u}(\alpha) = 2\sum_{j=r+1}^n\frac{\mathrm{d}}{\mathrm{d} \alpha} \eta_j(\alpha) &= 2\sum_{j=r+1}^n v_j(\alpha)^\mathsf{T} \frac{\mathrm{d}}{\mathrm{d} \alpha} M(\alpha) u_j(\alpha), \end{align*} where $u_j(\alpha)$ and $v_j(\alpha)$ are the right and left singular vectors associated with the singular value $\eta_j(\alpha)$ of $M(\alpha)$, which we assume to be simple in order to ensure differentiability. The second bound~\eqref{eq:bound_sep} \begin{equation*} \left\| y - y_r\right\|_{\mathbb{L}_2} \le 2 \left( \sigma_{r+1} + \ldots + \sigma_n \right)\left\|u \right\|_{\mathbb{L}_2} + 2 \left( \theta_{r+1}(\alpha) + \ldots + \theta_n(\alpha) \right)\left\| z_0 \right\|_{2} \end{equation*} can be treated in a similar manner as the bound above.
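As a quick numerical sanity check of this derivative formula, the following sketch (Python with NumPy as a stand-in for a MATLAB implementation; the matrices $Z_0$ and $Z_1$ are hypothetical random data) compares the analytic singular value derivatives from Lemma~\ref{lem:diffsingval} against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
Z0 = rng.standard_normal((6, 4))   # Z(0), hypothetical data
Z1 = rng.standard_normal((6, 4))   # dZ/dt, hypothetical data
Z = lambda t: Z0 + t * Z1          # a smooth (here affine) matrix function

# Analytic derivatives of all singular values at t = 0: with Z0 = U diag(s) V^T,
# the lemma gives d sigma_j / dt = v_j^T Z'(0) u_j for simple singular values,
# where u_j (column of V) is the right and v_j (column of U) the left vector.
U, s, Vt = np.linalg.svd(Z0, full_matrices=False)
ds_analytic = np.array([U[:, j] @ Z1 @ Vt[j, :] for j in range(s.size)])

# Central finite differences for comparison.
h = 1e-6
ds_fd = (np.linalg.svd(Z(h), compute_uv=False)
         - np.linalg.svd(Z(-h), compute_uv=False)) / (2 * h)
# ds_analytic and ds_fd agree up to the finite-difference error.
```

In the setting above one applies this with $Z(\alpha) = M(\alpha)$ and sums the derivatives over $j = r+1,\ldots,n$ to obtain $\tfrac{\mathrm{d}}{\mathrm{d}\alpha} c_u(\alpha)$.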
Note that the first summand is parameter-independent and the second one only depends on $\alpha$ in the singular values of \begin{equation*} N(\alpha) = \frac{1}{\sqrt{2\alpha}}L^\mathsf{T} A \hR + \frac{\sqrt{\alpha}}{\sqrt{2}} L^\mathsf{T} \hR. \end{equation*} The derivatives can be obtained as for $M(\alpha)$ by setting $\beta = 1$ and therefore we omit the details here. \section{Numerical Evaluation}\label{sec:eval} Now we evaluate our method and especially the error bounds and compare them with the methods listed in Section~\ref{sec:sota}. Here we consider two small examples from the SLICOT benchmark collection for model reduction\footnote{See \url{http://slicot.org/20-site/126-benchmark-examples-for-model-reduction}.}, see~\cite{ChaV02}. In principle, our method also works well in the large-scale setting since methods for solving large-scale Lyapunov equations are available and the optimization procedure from Subsection~\ref{sec:alpha} only acts on small matrices constructed from the low-rank Cholesky factors. However, we choose to consider small examples since they allow us to evaluate the error of the reduction by simulating the error systems. The examples are constructed such that the input is zero initially so that we can assess the quality of the reduction for the part of the ROM depending on the initial values. Later, the input is turned on and we can evaluate the reduction of the response to the input. \begin{example} \label{exm:beam} First we consider the \texttt{beam} example with $n=348$ and $m=p=1$. We choose $X_0 = \big[X_0^{(i,j)}\big] \in \mathbb{R}^{n \times q}$ with $q=2$, $X_0^{(5,1)} = 1$, $X_0^{(101,2)} = 100$, and zeros elsewhere.
As input we choose $$u(t) = \begin{cases} 1, & \text{if } t \in [500,1000], \\ 0, & \text{otherwise}\end{cases}$$ with ${\|u\|}_{\mathbb{L}_2} = \sqrt{500} \approx 22.36$ and as initial condition we take $x_0 = X_0z_0$ with $z_0 = \left[\begin{smallmatrix} 10 \\ -1 \end{smallmatrix}\right]$ and ${\|z_0\|}_2 = \sqrt{101} \approx 10.0499$. \end{example} \begin{example} \label{exm:CDplayer} The second example we consider in this paper is the \texttt{CDplayer} with $n=120$ and $m=p=2$. The $X_0 \in \mathbb{R}^{n \times q}$ with $q=2$ is constructed such that $W_r^\mathsf{T} X_0 = 0$, where $W_r$ is the left projection matrix obtained from standard \textbf{BT} with $r=50$. In this way we aim to construct an example where \textbf{BT} leads to a poor reduction in the part of the response that depends on the initial value. The input is chosen as $$u(t) = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \cdot \begin{cases} 1, & \text{if } t \in [1.5,3], \\ 0, & \text{otherwise}\end{cases}$$ with ${\|u \|}_{\mathbb{L}_2} = \sqrt{1.5} \approx 1.22$. The initial condition is $x_0 = X_0z_0$ with $z_0 = \left[\begin{smallmatrix} 1 \\ 10 \end{smallmatrix}\right]$ and ${\|z_0\|}_2 = \sqrt{101} \approx 10.0499$. \end{example} The following numerical experiments have been run under \textsc{Matlab} R2021b Update 1 on an HP X360 Convertible laptop with an Intel\textsuperscript{\textregistered} Core\textsuperscript{\texttrademark} i7-10710U CPU @ 1.10 GHz with 16 GB of RAM and using Windows 10. \subsection{Evaluation of the Error Bounds} First we consider the error bound constants $c_u$ and $c_{x_0}$ as in~\eqref{eq:bound_with_cu} of \textbf{jShiftBT} for several fixed values of $\beta$ and compare them with the ones obtained by \textbf{AugBT} and \textbf{BT}. We have used the optimized values $\alpha_*$ for the error bounds. The results are listed in Table~\ref{tab:joint}.
As discussed in Subsection~\ref{sec:disc}, the table illustrates that for \textbf{jShiftBT}, $c_u$ is monotonically decreasing and that it tends to the value of $c_u$ for \textbf{BT} for $\beta \to \infty$. Moreover, $c_{x_0}$ for \textbf{jShiftBT} is monotonically increasing. Let us briefly discuss the performance of the heuristic choices $\alpha_{\rm heur}$ and $\tilde{\alpha}_{\rm heur}$. A comparison is given in Table~\ref{tab:alpha}, in which we list the different $\alpha$ values and the corresponding value of $c_u$ for \textbf{jShiftBT} with $\beta = 1$ and $r=30$. Recall that in this case, $c_u = c_{x_0}$, so we only list one of the values. Apparently, the choice $\tilde{\alpha}_{\rm heur}$ overestimates the best error bound (obtained with $\alpha_*$) by up to two orders of magnitude. On the other hand, the choice $\alpha_{\rm heur}$ is closer to the optimal value and there is only a small overestimation of the error bound. Next, we evaluate the error bound~\eqref{eq:bound_sep} of the separate projection ROMs and compare it with the a posteriori bound~\eqref{eq:bound_BeaGM17} for \textbf{BT-BT}. For both methods, we list the values of $c_u$ and $c_{x_0}$ in Table~\ref{tab:allmethods} for various partial reduced orders $k$ and $\ell$. Since $c_u$ is the same for both methods we only list it once. First, we see that for all reduced orders, the bounds for \textbf{BT-BT} are typically better than the ones for \textbf{sShiftBT}. However, we would like to stress that the a posteriori bound of \textbf{BT-BT} requires a fully balanced realization of the system $[A,X_0,C]$, which cannot be computed for large systems, and moreover the solution of a Sylvester equation for every value of $\ell$. On the other hand, our method works on the low-rank Gramian factors and these can still be optimized even in the large-scale setting. So our bound can still be at least estimated in practical applications.
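To illustrate how these quantities are assembled in practice, the following is a minimal sketch (Python with NumPy/SciPy as a stand-in for a MATLAB implementation; the test system is hypothetical random data, and dense Lyapunov solvers replace the low-rank solvers one would use at scale). It precomputes the parameter-independent matrices $L^\mathsf{T} R$, $L^\mathsf{T} A\hat{R}$, and $L^\mathsf{T}\hat{R}$ once, then evaluates $M(\alpha,\beta)$ and $c_u$ cheaply for arbitrary parameter values, and cross-checks the factorization of $\cP(\alpha,\beta)$ against a direct Lyapunov solve with the augmented input matrix.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m, p, q = 40, 2, 2, 2
S = rng.standard_normal((n, n))
A = -(S @ S.T / n + np.eye(n))              # hypothetical stable system matrix
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
X0 = rng.standard_normal((n, q))

def psd_factor(X):
    """Return F with F F^T = X for symmetric positive semidefinite X."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

# Solve the three Lyapunov equations once (the dominant cost).
P = solve_continuous_lyapunov(A, -B @ B.T)       # A P  + P  A^T + B B^T    = 0
Ph = solve_continuous_lyapunov(A, -X0 @ X0.T)    # A Ph + Ph A^T + X0 X0^T  = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)     # A^T Q + Q A   + C^T C    = 0
R, Rh, L = psd_factor(P), psd_factor(Ph), psd_factor(Q)

# Precompute the alpha/beta-independent matrices once.
LtR, LtARh, LtRh = L.T @ R, L.T @ A @ Rh, L.T @ Rh

def M_of(alpha, beta):
    return np.hstack([LtR, LtARh / (beta * np.sqrt(2 * alpha))
                      + np.sqrt(alpha) / (np.sqrt(2) * beta) * LtRh])

def c_u(alpha, beta, r):
    eta = np.linalg.svd(M_of(alpha, beta), compute_uv=False)
    return 2.0 * eta[r:].sum()

# Cross-check: the Hankel singular values obtained from M(alpha, beta) agree
# with those from solving the Lyapunov equation with the augmented input matrix.
alpha, beta = 3.0, 0.5
Baug = np.hstack([B, (A + alpha * np.eye(n)) @ X0 / (beta * np.sqrt(2 * alpha))])
Pdir = solve_continuous_lyapunov(A, -Baug @ Baug.T)
eta = np.linalg.svd(M_of(alpha, beta), compute_uv=False)
eta_dir = np.linalg.svd(L.T @ psd_factor(Pdir), compute_uv=False)
```

Once `LtR`, `LtARh`, and `LtRh` are available, sampling `c_u` over a grid of $\alpha$ values, or calling it inside a local optimizer, involves only SVDs of small matrices.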
\begin{table} \centering \caption{Comparison of the error bound constants $c_u/c_{x_0}$ for \textbf{AugBT}, \textbf{BT}, and \textbf{jShiftBT} for several fixed values of $\beta$. The parameter $\alpha$ has been optimized as in Subsection~\ref{sec:alpha}.} \label{tab:joint} \subtable[Results for Example~\ref{exm:beam}.]{\begin{tabular}{c|c|ccccc|c} $r$ & \textbf{AugBT} & \multicolumn{5}{c|}{\textbf{jShiftBT}} & \textbf{BT} \\ & & $\beta=0.01$ & $\beta=0.1$ & $\beta = 1$ & $\beta = 10$ & $\beta = 100$ & \\ \hline \multirow{2}{*} 5 & 2.4$\mathrm{e}+$ 2/& 2.9$\mathrm{e}+$ 4/& 3.4$\mathrm{e}+$ 3/& 4.1$\mathrm{e}+$ 2/& 1.7$\mathrm{e}+$ 2/& 1.7$\mathrm{e}+$ 2/& 1.7$\mathrm{e}+$ 2/ \\ & 2.5$\mathrm{e}+$ 2 & 2.9$\mathrm{e}+$ 2 & 3.4$\mathrm{e}+$ 2 & 4.1$\mathrm{e}+$ 2 & 1.7$\mathrm{e}+$ 3 & 1.7$\mathrm{e}+$ 4 & 1.0$\mathrm{e}+$ 1 \\ \multirow{2}{*} {10} & 5.1$\mathrm{e}+$ 1/& 1.2$\mathrm{e}+$ 4/& 1.2$\mathrm{e}+$ 3/& 1.3$\mathrm{e}+$ 2/& 2.9$\mathrm{e}+$ 1/& 2.4$\mathrm{e}+$ 1/& 2.4$\mathrm{e}+$ 1/ \\ & 9.4$\mathrm{e}+$ 1 & 1.2$\mathrm{e}+$ 2 & 1.2$\mathrm{e}+$ 2 & 1.3$\mathrm{e}+$ 2 & 2.9$\mathrm{e}+$ 2 & 2.4$\mathrm{e}+$ 3 & 8.4$\mathrm{e}+$ 0 \\ \multirow{2}{*} {15} & 2.1$\mathrm{e}+$ 1/& 5.0$\mathrm{e}+$ 3/& 5.0$\mathrm{e}+$ 2/& 5.3$\mathrm{e}+$ 1/& 1.1$\mathrm{e}+$ 1/& 7.7$\mathrm{e}+$ 0/& 7.5$\mathrm{e}+$ 0/ \\ & 5.8$\mathrm{e}+$ 1 & 5.0$\mathrm{e}+$ 1 & 5.0$\mathrm{e}+$ 1 & 5.3$\mathrm{e}+$ 1 & 1.1$\mathrm{e}+$ 2 & 7.7$\mathrm{e}+$ 2 & 3.8$\mathrm{e}+$ 0 \\ \multirow{2}{*} {20} & 1.2$\mathrm{e}+$ 1/& 2.8$\mathrm{e}+$ 3/& 2.8$\mathrm{e}+$ 2/& 3.1$\mathrm{e}+$ 1/& 6.4$\mathrm{e}+$ 0/& 3.8$\mathrm{e}+$ 0/& 3.7$\mathrm{e}+$ 0/ \\ & 4.1$\mathrm{e}+$ 1 & 2.8$\mathrm{e}+$ 1 & 2.8$\mathrm{e}+$ 1 & 3.1$\mathrm{e}+$ 1 & 6.4$\mathrm{e}+$ 1 & 3.8$\mathrm{e}+$ 2 & 2.9$\mathrm{e}+$ 0 \\ \multirow{2}{*} {25} & 6.7$\mathrm{e}+$ 0/& 1.4$\mathrm{e}+$ 3/& 1.4$\mathrm{e}+$ 2/& 1.7$\mathrm{e}+$ 1/& 3.7$\mathrm{e}+$ 0/& 1.9$\mathrm{e}+$ 0/& 1.8$\mathrm{e}+$ 0/ \\ & 2.8$\mathrm{e}+$ 1 & 
1.4$\mathrm{e}+$ 1 & 1.4$\mathrm{e}+$ 1 & 1.7$\mathrm{e}+$ 1 & 3.7$\mathrm{e}+$ 1 & 1.9$\mathrm{e}+$ 2 & 3.0$\mathrm{e}+$ 0 \\ \multirow{2}{*} {30} & 3.5$\mathrm{e}+$ 0/& 5.8$\mathrm{e}+$ 2/& 5.8$\mathrm{e}+$ 1/& 7.4$\mathrm{e}+$ 0/& 2.0$\mathrm{e}+$ 0/& 9.3$\mathrm{e}-$ 1/& 8.6$\mathrm{e}-$ 1/ \\ & 1.9$\mathrm{e}+$ 1 & 5.8$\mathrm{e}+$ 0 & 5.8$\mathrm{e}+$ 0 & 7.4$\mathrm{e}+$ 0 & 2.0$\mathrm{e}+$ 1 & 9.3$\mathrm{e}+$ 1 & 1.3$\mathrm{e}+$ 0 \\ \multirow{2}{*} {40} & 1.2$\mathrm{e}+$ 0/& 1.9$\mathrm{e}+$ 2/& 1.9$\mathrm{e}+$ 1/& 2.6$\mathrm{e}+$ 0/& 7.4$\mathrm{e}-$ 1/& 2.5$\mathrm{e}-$ 1/& 2.0$\mathrm{e}-$ 1/ \\ & 9.3$\mathrm{e}+$ 0 & 1.9$\mathrm{e}+$ 0 & 1.9$\mathrm{e}+$ 0 & 2.6$\mathrm{e}+$ 0 & 7.4$\mathrm{e}+$ 0 & 2.5$\mathrm{e}+$ 1 & 6.8$\mathrm{e}-$ 1 \\ \multirow{2}{*} {50} & 4.2$\mathrm{e}-$ 1/& 4.7$\mathrm{e}+$ 1/& 4.9$\mathrm{e}+$ 0/& 8.4$\mathrm{e}-$ 1/& 2.2$\mathrm{e}-$ 1/& 6.2$\mathrm{e}-$ 2/& 3.3$\mathrm{e}-$ 2/ \\ & 4.5$\mathrm{e}+$ 0 & 4.7$\mathrm{e}-$ 1 & 4.9$\mathrm{e}-$ 1 & 8.4$\mathrm{e}-$ 1 & 2.2$\mathrm{e}+$ 0 & 6.2$\mathrm{e}+$ 0 & 3.8$\mathrm{e}-$ 1 \\ \end{tabular}} \subtable[Results for Example~\ref{exm:CDplayer}.]{\begin{tabular}{c|c|ccccc|c} $r$ & \textbf{AugBT} & \multicolumn{5}{c|}{\textbf{jShiftBT}} & \textbf{BT} \\ & & $\beta=0.01$ & $\beta=0.1$ & $\beta = 1$ & $\beta = 10$ & $\beta = 100$ & \\ \hline \multirow{2}{*} 5 & 1.3$\mathrm{e}+$ 3/& 1.0$\mathrm{e}+$ 5/& 1.5$\mathrm{e}+$ 4/& 2.8$\mathrm{e}+$ 3/& 1.5$\mathrm{e}+$ 3/& 1.3$\mathrm{e}+$ 3/& 1.3$\mathrm{e}+$ 3/ \\ & 1.8$\mathrm{e}+$ 4 & 1.0$\mathrm{e}+$ 3 & 1.5$\mathrm{e}+$ 3 & 2.8$\mathrm{e}+$ 3 & 1.5$\mathrm{e}+$ 4 & 1.3$\mathrm{e}+$ 5 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {10} & 7.0$\mathrm{e}+$ 1/& 5.2$\mathrm{e}+$ 4/& 7.1$\mathrm{e}+$ 3/& 1.1$\mathrm{e}+$ 3/& 2.1$\mathrm{e}+$ 2/& 7.6$\mathrm{e}+$ 1/& 6.3$\mathrm{e}+$ 1/ \\ & 2.5$\mathrm{e}+$ 3 & 5.2$\mathrm{e}+$ 2 & 7.1$\mathrm{e}+$ 2 & 1.1$\mathrm{e}+$ 3 & 2.1$\mathrm{e}+$ 3 & 7.6$\mathrm{e}+$ 3 & 3.3$\mathrm{e}+$ 1 \\ 
\multirow{2}{*} {15} & 2.0$\mathrm{e}+$ 1/& 3.7$\mathrm{e}+$ 4/& 4.5$\mathrm{e}+$ 3/& 5.4$\mathrm{e}+$ 2/& 1.1$\mathrm{e}+$ 2/& 2.5$\mathrm{e}+$ 1/& 1.2$\mathrm{e}+$ 1/ \\ & 1.1$\mathrm{e}+$ 3 & 3.7$\mathrm{e}+$ 2 & 4.5$\mathrm{e}+$ 2 & 5.4$\mathrm{e}+$ 2 & 1.1$\mathrm{e}+$ 3 & 2.5$\mathrm{e}+$ 3 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {20} & 1.2$\mathrm{e}+$ 1/& 2.7$\mathrm{e}+$ 4/& 3.2$\mathrm{e}+$ 3/& 3.8$\mathrm{e}+$ 2/& 5.7$\mathrm{e}+$ 1/& 1.5$\mathrm{e}+$ 1/& 4.7$\mathrm{e}+$ 0/ \\ & 7.6$\mathrm{e}+$ 2 & 2.7$\mathrm{e}+$ 2 & 3.2$\mathrm{e}+$ 2 & 3.8$\mathrm{e}+$ 2 & 5.7$\mathrm{e}+$ 2 & 1.5$\mathrm{e}+$ 3 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {25} & 7.1$\mathrm{e}+$ 0/& 2.1$\mathrm{e}+$ 4/& 2.3$\mathrm{e}+$ 3/& 2.8$\mathrm{e}+$ 2/& 4.0$\mathrm{e}+$ 1/& 8.6$\mathrm{e}+$ 0/& 1.6$\mathrm{e}+$ 0/ \\ & 6.7$\mathrm{e}+$ 2 & 2.1$\mathrm{e}+$ 2 & 2.3$\mathrm{e}+$ 2 & 2.8$\mathrm{e}+$ 2 & 4.0$\mathrm{e}+$ 2 & 8.6$\mathrm{e}+$ 2 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {30} & 4.2$\mathrm{e}+$ 0/& 1.7$\mathrm{e}+$ 4/& 1.8$\mathrm{e}+$ 3/& 2.1$\mathrm{e}+$ 2/& 3.0$\mathrm{e}+$ 1/& 4.5$\mathrm{e}-$ 0/& 8.1$\mathrm{e}-$ 1/ \\ & 4.8$\mathrm{e}+$ 2 & 1.7$\mathrm{e}+$ 2 & 1.8$\mathrm{e}+$ 2 & 2.1$\mathrm{e}+$ 2 & 3.0$\mathrm{e}+$ 2 & 4.5$\mathrm{e}+$ 2 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {40} & 2.1$\mathrm{e}+$ 0/& 1.1$\mathrm{e}+$ 4/& 1.1$\mathrm{e}+$ 3/& 1.2$\mathrm{e}+$ 2/& 1.6$\mathrm{e}+$ 1/& 2.3$\mathrm{e}-$ 0/& 2.9$\mathrm{e}-$ 1/ \\ & 3.0$\mathrm{e}+$ 2 & 1.1$\mathrm{e}+$ 2 & 1.1$\mathrm{e}+$ 2 & 1.2$\mathrm{e}+$ 2 & 1.6$\mathrm{e}+$ 2 & 2.3$\mathrm{e}+$ 2 & 3.3$\mathrm{e}+$ 1 \\ \multirow{2}{*} {50} & 9.1$\mathrm{e}-$ 1/& 7.2$\mathrm{e}+$ 3/& 7.4$\mathrm{e}+$ 2/& 7.8$\mathrm{e}+$ 1/& 8.6$\mathrm{e}+$ 0/& 1.1$\mathrm{e}-$ 0/& 1.1$\mathrm{e}-$ 1/ \\ & 1.7$\mathrm{e}+$ 2 & 7.2$\mathrm{e}+$ 1 & 7.4$\mathrm{e}+$ 1 & 7.8$\mathrm{e}+$ 1 & 8.6$\mathrm{e}+$ 1 & 1.1$\mathrm{e}+$ 2 & 3.3$\mathrm{e}+$ 1 \\ \end{tabular}} \end{table} \begin{table}[t] \centering 
\caption{Comparison of the error bound constants of \textbf{jShiftBT} for the heuristic values $\alpha_{\rm heur}$, $\tilde{\alpha}_{\rm heur}$, and the locally optimal choice $\alpha_*$. Here we use $\beta = 1$ (thus, $c_u = c_{x_0}$) and $r = 30$.} \label{tab:alpha} \subtable[Results for Example~\ref{exm:beam}]{ \begin{tabular}{c|ccc} & $\alpha_{*}$ & $\alpha_{\rm heur}$ & $\tilde{\alpha}_{\rm heur}$ \\ \hline $\alpha$ value & 1.1$\mathrm{e}+$ 1 & 1.4$\mathrm{e}+$ 2 & 5.1$\mathrm{e}-$ 3 \\ $c_u$ & 7.4$\mathrm{e}+$ 0 & 1.5$\mathrm{e}+$ 1 & 1.8$\mathrm{e}+$ 2 \\ \end{tabular}} \subtable[Results for Example~\ref{exm:CDplayer}]{ \begin{tabular}{c|ccc} & $\alpha_{*}$ & $\alpha_{\rm heur}$ & $\tilde{\alpha}_{\rm heur}$ \\ \hline $\alpha$ value & 5.6$\mathrm{e}+$ 3 & 3.8$\mathrm{e}+$ 4 & 2.4$\mathrm{e}-$ 2 \\ $c_u$ & 2.1$\mathrm{e}+$ 2 & 3.0$\mathrm{e}+$ 2 & 2.9$\mathrm{e}+$ 4 \\ \end{tabular}} \end{table} \begin{table}[t] \centering \caption{Error bound constants of \textbf{BT-BT} and \textbf{sShiftBT} for the partial reduced orders $k$ and $\ell$. Here we use optimized values of $\alpha$. 
The minimum constants $c_{x_0}$ are emphasized by bold font.} \label{tab:allmethods} \subtable[Results for Example~\ref{exm:beam}]{ \begin{tabular}{c|c|cc} $k,\,\ell$ & \multicolumn{1}{c}{$c_u$} & \multicolumn{2}{c}{$c_{x_0}$} \\ & & \textbf{BT-BT} & \textbf{sShiftBT} \\ \hline 5 & 1.7e$+$2 & {\bf 1.0e$+$1} & 2.9e$+$2 \\ 10 & 2.4e$+$1 & {\bf 7.0e$+$0} & 1.2e$+$2 \\ 15 & 7.5e$+$0 & {\bf 2.8e$+$0} & 5.0e$+$1 \\ 20 & 3.7e$+$0 & {\bf 1.7e$+$0} & 2.8e$+$1 \\ 25 & 1.8e$+$0 & {\bf 1.6e$+$0} & 1.4e$+$1 \\ 30 & 8.6e$-$1 & {\bf 2.9e$-$1} & 5.8e$+$0 \\ 40 & 2.0e$-$1 & {\bf 1.0e$-$1} & 1.9e$+$0 \\ 50 & 3.3e$-$2 & {\bf 5.9e$-$2} & 4.7e$-$1 \end{tabular}} \hspace*{0.2cm}\subtable[Results for Example~\ref{exm:CDplayer}]{\begin{tabular}{c|c|cc} $k,\,\ell$ & \multicolumn{1}{c}{$c_u$} & \multicolumn{2}{c}{$c_{x_0}$} \\ & & \textbf{BT-BT} & \textbf{sShiftBT} \\ \hline 5 & 1.3e$+$3 & {\bf 9.0e$+$0} & 6.8e$+$2 \\ 10 & 6.3e$+$1 & {\bf 5.7e$+$0} & 4.1e$+$2 \\ 15 & 1.2e$+$1 & {\bf 6.3e$+$0} & 2.9e$+$2 \\ 20 & 4.7e$+$1 & {\bf 4.1e$+$0} & 2.3e$+$2 \\ 25 & 1.6e$+$0 & {\bf 3.4e$+$0} & 1.9e$+$2 \\ 30 & 8.1e$-$1 & {\bf 3.4e$+$0} & 1.6e$+$2 \\ 40 & 2.9e$-$1 & {\bf 1.8e$+$0} & 1.1e$+$2 \\ 50 & 1.1e$-$1 & {\bf 1.5e$+$0} & 6.8e$+$1 \end{tabular}} \end{table} \subsection{Evaluation of the Errors} In Figure~\ref{fig:allmethods} we plot the actual errors for each of the reduction methods. Here we choose $r=30$ for standard \textbf{BT}, \textbf{TrlBT}, \textbf{AugBT}, and \textbf{jShiftBT} as well as $k = \ell = 15$ for \textbf{BT-BT} and \textbf{sShiftBT}. These figures have been created by simulating the error systems using \textsc{Matlab}'s ODE solver \texttt{ode45} and plotting ${\|y(t) - y_r(t)\|}_2$ over the time $t$. Note that the results typically show an extremely oscillatory behavior, thus we have applied a smoothing filter to the output to improve the visibility of the individual error trajectories. 
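The simulation-based evaluation can be sketched as follows (Python with SciPy's ODE solver in place of \textsc{Matlab}'s \texttt{ode45}; the test system, the projection used to obtain a ROM, and all parameters are hypothetical and only serve to illustrate the computation of the error trajectory and its norms).

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, r = 30, 6
S = rng.standard_normal((n, n))
A = -(S @ S.T / n + np.eye(n))                  # hypothetical stable system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
x0 = rng.standard_normal(n)

# As in the examples: zero input in the first half, switched on in the second.
u = lambda t: 1.0 if t >= 5.0 else 0.0

# Illustrative one-sided projection onto a dominant Gramian eigenspace
# (NOT the balancing projections from the paper; any ROM would do here).
P = solve_continuous_lyapunov(A, -(B @ B.T + np.outer(x0, x0)))
V = np.linalg.eigh(P)[1][:, -r:]                # orthonormal basis, n x r
Ar, Br, Cr, x0r = V.T @ A @ V, V.T @ B, C @ V, V.T @ x0

T = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(lambda t, x: A @ x + (B * u(t)).ravel(), (0.0, 10.0), x0,
                t_eval=T, rtol=1e-8, atol=1e-10)
solr = solve_ivp(lambda t, x: Ar @ x + (Br * u(t)).ravel(), (0.0, 10.0), x0r,
                 t_eval=T, rtol=1e-8, atol=1e-10)

err = np.linalg.norm(C @ sol.y - Cr @ solr.y, axis=0)   # ||y(t) - y_r(t)||_2
L2_error = np.sqrt(trapezoid(err**2, T))
Linf_error = err.max()
```

The experiments reported below follow the same pattern, with the balanced ROMs in place of the illustrative projection and a smoothing filter applied to `err` before plotting.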
The examples have been constructed such that we see the error of the response to the initial value in the first half of the considered time interval, while in the second half we see the error of the input-dependent part. The figure indicates that for Example~\ref{exm:beam}, the overall reduction using our \textbf{jShiftBT} outperforms all the other methods, especially when reducing the response to the initial value. The same is true for Example~\ref{exm:CDplayer}, even though it is not as clearly visible in the figure as for the first example (cf.\ Table~\ref{tab:errors}, which contains the errors for our experiments). Standard \textbf{BT}, while not guaranteeing any error bound, also works similarly well for the two examples, as does \textbf{AugBT}. However, the two separate projection methods \textbf{BT-BT} and \textbf{sShiftBT} perform much worse. This behavior can be expected since the other methods use a single projection to reduce both the responses to the initial value and the input at once. In contrast to that, in the separate projection methods, two individual projections, which may have a large redundant ``overlap'', have to be formed. Finally, the method \textbf{TrlBT} does not work well on either example and produces by far the largest error. \begin{figure} \centering \begin{tabular}{c} \subfigure[Results for Example~\ref{exm:beam}.]{ \includegraphics{example1_allmethods.pdf} } \\ \subfigure[Results for Example~\ref{exm:CDplayer}.]{ \includegraphics{example2_allmethods.pdf} } \end{tabular} \caption{Comparison of the errors of the new approach with the methods from the literature.
Here we use $r=30$ for standard \textbf{BT}, \textbf{TrlBT}, \textbf{AugBT}, and \textbf{jShiftBT}; and $k = \ell = 15$ for \textbf{BT-BT} and \textbf{sShiftBT} as well as the optimized values of $\alpha$.} \label{fig:allmethods} \end{figure} Next we show error plots for different choices of $\beta$ in \textbf{jShiftBT} for the reduced order $r=30$ (where we have again applied a smoothing filter). As expected, the figure shows that for smaller values of $\beta$, the reduction of the initial value response is emphasized, leading to smaller errors in the first half of the considered time interval, but to larger errors in the second half. In contrast to that, we focus on the reduction of the input response for larger $\beta$, and so the errors are larger in the first, but smaller in the second half of the time interval of interest. \begin{figure} \centering \begin{tabular}{c} \subfigure[Results for Example~\ref{exm:beam}.]{ \includegraphics{example1_joint.pdf} } \\ \subfigure[Results for Example~\ref{exm:CDplayer}.]{ \includegraphics{example2_joint.pdf} } \end{tabular} \caption{Comparison of the errors of \textbf{jShiftBT} for various fixed values of $\beta$ and the optimal choice of $\alpha$. Here we use the reduced order $r=30$.} \label{fig:joint} \end{figure} Finally, we evaluate the $\mathbb{L}_2$- and $\mathbb{L}_\infty$-errors obtained in all our numerical simulations and we list the corresponding error norms in Table~\ref{tab:errors}. This table shows that \textbf{jShiftBT} is the clear winner among all the methods and that the overall errors are also quite robust with respect to changes in the parameter $\beta$ for the considered examples. However, a few methods sometimes get close to the errors of \textbf{jShiftBT}; this is especially the case for \textbf{AugBT}. The table further indicates that the reduction obtained by \textbf{BT} for Example~\ref{exm:CDplayer} is relatively poor compared with the other methods.
This is due to the fact that the response to the initial value is not reduced well (as purposely designed in this example). \begin{table}[t] \centering \caption{Comparison of the error norms for the simulations in Figures~\ref{fig:allmethods} and~\ref{fig:joint}. Here we list the $\mathbb{L}_2$- and the $\mathbb{L}_\infty$-norms of the error trajectories in the respective time interval displayed in the figures. The smallest errors are emphasized by bold font.} \label{tab:errors} \subtable[Results for Example~\ref{exm:beam}]{ \begin{tabular}{c|cc} method & $\mathbb{L}_2$-error & $\mathbb{L}_\infty$-error \\ \hline standard \textbf{BT} & 1.3$\mathrm{e}+$ 0 & 2.2$\mathrm{e}+$ 0\\ \textbf{TrlBT} & 2.1$\mathrm{e}+$ 1 & 8.3$\mathrm{e}-$ 1 \\ \textbf{AugBT} & 7.8$\mathrm{e}-$ 1 & 5.3$\mathrm{e}-$ 1 \\ \textbf{BT-BT} & 1.6$\mathrm{e}+$ 1 & 6.8$\mathrm{e}+$ 1 \\ \textbf{sShiftBT} & 1.6$\mathrm{e}+$ 1 & 3.0$\mathrm{e}+$ 0 \\ \textbf{jShiftBT}, $\beta = \beta_{\rm heur}$ & 1.1$\mathrm{e}+$ 0 & 1.7$\mathrm{e}+$ 0\\ \textbf{jShiftBT}, $\beta = 0.01$ & 4.2$\mathrm{e}+$ 0 & \textbf{2.3$\mathrm{e}-$ 1}\\ \textbf{jShiftBT}, $\beta = 0.1$ & 3.8$\mathrm{e}+$ 0 & {\bf 2.3$\mathrm{e}-$ 1}\\ \textbf{jShiftBT}, $\beta = 1$ & 1.8$\mathrm{e}+$ 0 & 2.6$\mathrm{e}-$ 1\\ \textbf{jShiftBT}, $\beta = 10$ & {\bf 6.9$\mathrm{e}-$ 1} & 4.0$\mathrm{e}-$ 1\\ \textbf{jShiftBT}, $\beta = 100$ & 1.3$\mathrm{e}+$ 0 & 2.0$\mathrm{e}+$ 0 \\ \end{tabular}} \subtable[Results for Example~\ref{exm:CDplayer}]{ \begin{tabular}{c|cc} method & $\mathbb{L}_2$-error & $\mathbb{L}_\infty$-error \\ \hline standard \textbf{BT} & 2.3$\mathrm{e}+$ 2 & 1.2$\mathrm{e}+$ 4\\ \textbf{TrlBT} & 6.6$\mathrm{e}+$ 2 & 7.3$\mathrm{e}+$ 2 \\ \textbf{AugBT} & 5.2$\mathrm{e}+$ 1 & 2.3$\mathrm{e}+$ 3 \\ \textbf{BT-BT} & 5.5$\mathrm{e}+$ 1 & 2.4$\mathrm{e}+$ 3 \\ \textbf{sShiftBT} & 6.4$\mathrm{e}+$ 1 & 1.6$\mathrm{e}+$ 3 \\ \textbf{jShiftBT}, $\beta = \beta_{\rm heur}$ & {\bf 1.9$\mathrm{e}+$ 1} & {\bf 4.9$\mathrm{e}+$ 2} \\ 
\textbf{jShiftBT}, $\beta = 0.01$ & 7.2$\mathrm{e}+$ 1 & 5.2$\mathrm{e}+$ 2 \\ \textbf{jShiftBT}, $\beta = 0.1$ & {\bf 1.9$\mathrm{e}+$ 1} & {\bf 4.9$\mathrm{e}+$ 2} \\ \textbf{jShiftBT}, $\beta = 1$ & 2.0$\mathrm{e}+$ 1 & 5.7$\mathrm{e}+$ 2 \\ \textbf{jShiftBT}, $\beta = 10$ & 2.0$\mathrm{e}+$ 1 & 5.4$\mathrm{e}+$ 2 \\ \textbf{jShiftBT}, $\beta = 100$ & 3.6$\mathrm{e}+$ 1 & 1.3$\mathrm{e}+$ 3 \\ \end{tabular}} \end{table} \section{Concluding Remarks} In this work we have derived a new alternative procedure for balanced truncation model reduction for systems with nonzero initial value. In contrast to other methods, our method provides an a priori error bound that can be computed efficiently from the solutions of the three Lyapunov equations that are needed in the reduction algorithm. As the numerical examples have shown, our error bound and also the actual errors are often better than those of other techniques available in the literature. In particular, our new joint projection method outperforms all the other methods in our numerical experiments. Even if multiple joint projection ROMs have to be used for a large range of inputs and initial values, they can be constructed very efficiently. This is because the main computational burden is the solution of the three Lyapunov equations, whereas the parameter optimization is comparably cheap. Therefore, we recommend that practitioners use \textbf{jShiftBT} for reducing models with nonzero initial condition. \section*{Code Availability} The \textsc{Matlab} code and data for reproducing the numerical results are available for download under the DOI \texttt{10.5281/zenodo.6355512}. \section*{Acknowledgement} The authors thank Bj\"orn Liljegren-Sailer (Universit\"at Trier) for noticing an error in a previous version of our software. \bibliographystyle{abbrv}
\section{NFRHT in plasmonic and polaritonic bulk systems} We describe NFRHT at a mean temperature, $T$, by evaluating the radiative thermal conductance per unit area \cite{song_near-field_2015}: \begin{equation}\label{eq:HTC_def} h=\int_0^{\infty} \left[\frac{\partial}{\partial T}\theta(\omega,T)\right]\Phi\bomega d\omega, \end{equation} where $\theta(\omega,T)=\hbar\omega / \left[\exp{\left(\hbar\omega/k_B T\right)}-1\right]$ is the mean energy per photon and $\Phi\bomega$ is the thermal emission spectrum, expressed in $\mathrm{m}^{-2}$. We consider a vacuum gap of size $d$ that separates two planar semi-infinite bodies exchanging heat. From fluctuational electrodynamics \cite{rytov_theory_1953,polder_theory_1971,loomis_theory_1994,shchegrov_near-field_2000}, $\Phi\bomega$ is given by: \begin{equation}\label{eq:Phi} \Phi\bomega =\frac{1}{4\pi^2}\int_0^\infty [\xi_p (\omega,\beta)+\xi_s(\omega,\beta)] \beta d\beta, \end{equation} where $\xi_{p,s}(\omega,\beta)$ is the probability for a photon at frequency $\omega$ and in-plane wavenumber $\beta$ to tunnel across the gap. The subscripts $p$ and $s$ denote polarization, corresponding to TM (transverse magnetic) and TE (transverse electric) waves, respectively. For gap sizes smaller than the thermal wavelength, $\lambda_\mathrm{T}=b_\text{Wien}/T$, where $b_\text{Wien} = 2898\,\mu\mathrm{m\,K}$ \cite{planck_zur_1901}, thermally excited SPPs and SPhPs dominate NFRHT in plasmonic and polar materials, respectively. Since these can only be excited in $p$-polarization \cite{maier_plasmonics_2007}, the emission spectrum, $\Phi$, can be approximated by the contribution of $p$-polarization alone. At sufficiently small vacuum gaps, the dispersion of surface polaritons approaches the quasistatic limit, for which $\beta\gg k_0$ \cite{pendry_radiative_1999}, where $k_0=\omega/c$ is the free-space wavenumber.
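As a quick numerical illustration (a sketch, not the code used in this work), Eq. \eqref{eq:Phi} can be evaluated directly in the evanescent-wave range using the quasistatic Fresnel coefficient; the SiC Lorentz parameters quoted in the Supplementary Material are assumed:

```python
import numpy as np

# Representative SiC Lorentz parameters (Supplementary Material values)
eps_inf, w_TO, w_LO, gam = 6.7, 1.49e14, 1.83e14, 8.97e11  # rad/s
c0 = 2.99792458e8  # speed of light, m/s

def eps(w):
    # Lorentz oscillator model for a polar dielectric
    return eps_inf * (1 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gam))

def xi_p(w, beta, d):
    # Photon tunneling probability for evanescent p-polarized waves (beta > k0),
    # with the quasistatic Fresnel coefficient r_p = (eps - 1)/(eps + 1)
    rp = (eps(w) - 1) / (eps(w) + 1)
    eta0 = np.sqrt(beta**2 - (w / c0)**2)   # out-of-plane decay constant in vacuum
    ex = np.exp(-2 * eta0 * d)
    return 4 * rp.imag**2 * ex / np.abs(1 - rp**2 * ex)**2

def Phi(w, d, n=20000):
    # Emission spectrum: (1/4pi^2) * integral of beta * xi_p over beta
    k0 = w / c0
    beta = np.linspace(1.001 * k0, 80 / d, n)
    y = xi_p(w, beta, d) * beta
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(beta)) / (4 * np.pi**2)

d = 10e-9  # 10 nm vacuum gap
w = np.linspace(1.5e14, 1.85e14, 300)
spec = np.array([Phi(wi, d) for wi in w])
w_peak = w[np.argmax(spec)]
print(f"spectral peak at {w_peak:.3e} rad/s")  # near the SPhP resonance
```

The spectrum is sharply peaked close to the surface phonon polariton resonance, as discussed below.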
In this limit, it can be shown (Supplementary Material) that the maximum in-plane wavenumber, $\beta_{\text{max}}$, that satisfies the perfect photon tunneling condition, i.e. $\xi_p=1$, occurs near the surface polariton resonance frequency, $\Omega$, at which $\real{r_p}=0$, where $\displaystyle r_p\bomega = \frac{\chi\bomega }{\chi\bomega +2}$ is the Fresnel coefficient in the quasistatic regime, or: \begin{equation}\label{eq:resonance_freq} \real{\frac{2}{\chi (\Omega)}}=-1. \end{equation} For low-loss materials, i.e. for $\imag{\chi(\Omega)}\ll 1$, Eq. \eqref{eq:resonance_freq} reduces to the more common expression $\real{\chi(\Omega)}=-2$ \cite{maier_plasmonics_2007}. To obtain the emission spectrum, we carry out the integration of Eq. \eqref{eq:Phi} over all available wavenumbers, $\beta$. Upon assuming $\beta\gg k_0$, this integration yields: \begin{equation}\label{eq:Phi_analytical} \Phi\bomega = \frac{1}{8\pi^2d^2}\frac{\imag{r_p\bomega }}{\real{r_p\bomega }}\text{Im}\left\{\Li{r_p^2\bomega }\right\}, \end{equation} where $\text{Li}_2$ is the dilogarithm or Spence's function \cite{lewin_dilogarithms_1958}. This expression agrees with Rousseau \textit{et al.} \cite{rousseau_asymptotic_2012}. Its derivation, along with the more general expression for heat exchange between dissimilar materials, can be found in the Supplementary Material, where we also showcase the validity of Eq. \eqref{eq:Phi_analytical}. Eq. \eqref{eq:Phi_analytical} directly yields the thermal emission spectrum given the Fresnel coefficient, and is valid for \textit{any} material frequency dispersion. In the low-loss limit, the emission spectrum is maximum at $\omega=\Omega$, where Eq. \eqref{eq:Phi_analytical} reduces to the result by Miller \textit{et al}. in \cite{miller_shape-independent_2015}, as shown in the Supplementary Material, where we discuss an approach to quantitatively distinguish the low-loss from the high-loss regime in NFRHT (Eqs. (S27)-(S28)).
The dielectric function of plasmonic and polar materials can be described by Drude and Lorentz oscillators, respectively: \begin{subnumcases}{\varepsilon\bomega=} \displaystyle\varepsilon_{\text{plasm}} =\varepsilon_\infty \left[1-\frac{\omega_p^2}{\omega(\omega+i\gamma)}\right]\label{eq:eps_plasm}\\ \displaystyle\varepsilon_{\text{polar}} = \varepsilon_\infty\left[1+\frac{\omega_\text{LO}^2-\omega_\text{TO}^2}{\omega_\text{TO}^2-\omega^2 -i\,\omega \gamma}\right]\label{eq:eps_pol}, \end{subnumcases} where $\varepsilon_\infty$ is the high-frequency relative permittivity, and $\gamma$ is the optical loss \cite{drude_zur_1900,kheirandish_modified_2020}. For plasmonic metals, $\omega_p$ is the plasma frequency, near which the SPP mode occurs. For polar materials, $\omega_\text{TO}$ and $\omega_\text{LO}$ are the transverse and longitudinal optical phonon frequencies, respectively \cite{caldwell_low-loss_2015}. The spectral range $[\omega_\text{TO},\omega_\text{LO}]$ defines the \emph{Reststrahlen} band \cite{kortum_phenomenological_1969}, within which the SPhP mode occurs. The resonance frequency, $\Omega$, is found by solving Eq. \eqref{eq:resonance_freq}. Since $\Omega$ does not vary significantly as $\gamma$ increases (Supplementary Material), we evaluate it for $\gamma\to 0$ as: \begin{subnumcases}{\Omega=} \displaystyle\Omega_{\text{plasm}} = \sqrt{\frac{\varepsilon_\infty}{\varepsilon_\infty+1}}\omega_p\label{eq:Omega_plasm}\\ \displaystyle \Omega_{\text{polar}} = \sqrt{\frac{\varepsilon_\infty \omega_\text{LO}^2+\omega_\text{TO}^2}{1+\varepsilon_{\infty}}}\label{eq:Omega_pol}. \end{subnumcases} Henceforth, we assume that $\Omega_{\text{plasm}}$ and $\Omega_{\text{polar}}$ are $\gamma$-independent.
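As a numerical sanity check (a sketch, assuming the SiC Lorentz parameters quoted in the Supplementary Material), the closed-form resonance of Eq. \eqref{eq:Omega_pol} can be verified against the resonance condition of Eq. \eqref{eq:resonance_freq}:

```python
import numpy as np

# Representative SiC Lorentz parameters (quoted in the Supplementary Material)
eps_inf, w_TO, w_LO, gam = 6.7, 1.49e14, 1.83e14, 8.97e11  # rad/s

def chi(w):
    # susceptibility chi = eps - 1 for the Lorentz oscillator model
    e = eps_inf * (1 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gam))
    return e - 1

# Closed-form SPhP resonance frequency, evaluated in the gamma -> 0 limit
Omega = np.sqrt((eps_inf * w_LO**2 + w_TO**2) / (1 + eps_inf))
print(f"Omega_polar = {Omega:.4e} rad/s")

# The resonance condition Re(2/chi(Omega)) = -1 should hold to good accuracy
print(f"Re(2/chi) at Omega: {np.real(2 / chi(Omega)):.4f}")
```

For this low-loss material the condition is satisfied to well below a percent, even with the finite damping retained in $\chi$.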
\emph{Radiative thermal conductance, $h$ -} In evaluating NFRHT performance, it is useful to introduce a material quality factor for plasmonic and polar media \cite{wang_general_2006,pascale_bandwidth_2021,caldwell_low-loss_2015}: \begin{equation}\label{eq:Q} \displaystyle Q=\left.\frac{\omega \frac{d\real{\varepsilon}}{d\omega}}{2\imag{\varepsilon}}\right|_\Omega\approx \frac{\Omega}{\gamma}. \end{equation} To obtain the radiative thermal conductance (Eq. \eqref{eq:HTC_def}), we use contour integration in the complex frequency plane of $\Phi$ (section \textit{Radiative thermal conductance $h$} in the Supplementary Material), similar to \cite{rousseau_asymptotic_2012}. Considering that the Planck distribution varies slowly with respect to $\Phi\bomega$, we obtain: \begin{equation}\label{eq:HTC_analytical} h = h_\text{max} \, \Psi\left(\frac{Q}{B}\right)\Pi\left( \frac{\Omega}{T} \right) . \end{equation} The functions $\Pi$ and $\Psi$ are given by: \begin{equation}\label{eq:PiT} \quad\displaystyle \Pi\left( \frac{\Omega}{T} \right)=\frac{1}{k_B}\frac{\partial}{\partial T}\theta(\Omega,T) =\left[\frac{\frac{\hbar }{2k_B} \frac{\Omega}{T}}{\sinh{\left(\frac{\hbar }{2k_B} \frac{\Omega}{T}\right)}}\right]^2 , \end{equation} \begin{equation}\label{eq:Gamma} \displaystyle \Psi\left(\frac{Q}{B}\right)=\left.\frac{h}{h_\text{max}}\right|_{T\gg \frac{\hbar \Omega}{2k_B}} = -\frac{\Li{-(Q/B)^2}}{1.36 (Q/B)}. \end{equation} These functions are both bounded by unity, hence $h_\text{max}$ in Eq. \eqref{eq:HTC_analytical} defines the maximum thermal conductance that a polaritonic material can reach in a planar configuration, and is given by: \begin{equation}\label{eq:hmax_z} h_\text{max} = \frac{1.36 \, k_B}{16\pi d^2} \frac{\Omega}{B}. \end{equation} As can be seen, $h_\text{max}$ is temperature- and loss-independent. 
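The factorized conductance is straightforward to evaluate numerically. The sketch below (assuming SciPy for the dilogarithm, with the SiC parameters quoted in the Supplementary Material) composes $h = h_\text{max}\,\Psi\,\Pi$ from Eq. \eqref{eq:HTC_analytical} for a $10\,$nm gap at room temperature:

```python
import numpy as np
from scipy.special import spence  # real dilogarithm: Li2(z) = spence(1 - z)

hbar, kB = 1.054571817e-34, 1.380649e-23

def Pi(Omega, T):
    # temperature function Pi(Omega/T); tends to 1 for T >> hbar*Omega/(2 kB)
    u = hbar * Omega / (2 * kB * T)
    return (u / np.sinh(u))**2

def Psi(QB):
    # loss function Psi(Q/B) = -Li2(-(Q/B)^2) / (1.36 Q/B), bounded by unity
    return -spence(1.0 + QB**2) / (1.36 * QB)

def h_polar(eps_inf, w_TO, w_LO, gam, d, T):
    # radiative thermal conductance h = h_max * Psi(Q/B) * Pi(Omega/T)
    Omega = np.sqrt((eps_inf * w_LO**2 + w_TO**2) / (1 + eps_inf))  # SPhP resonance
    Q = Omega / gam                                                 # quality factor
    B = (1 + eps_inf)**2 / (2 * eps_inf) * Omega**2 / (w_LO**2 - w_TO**2)  # residue
    h_max = 1.36 * kB / (16 * np.pi * d**2) * Omega / B
    return h_max * Psi(Q / B) * Pi(Omega, T)

# SiC (Supplementary Material parameters), d = 10 nm, T = 300 K
h = h_polar(6.7, 1.49e14, 1.83e14, 8.97e11, d=10e-9, T=300)
print(f"h = {h:.2e} W/(m^2 K)")
```

The three Lyapunov-like ingredients here are purely algebraic: no wavenumber or frequency integration is needed once $\Omega$, $Q$, and $B$ are known.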
Furthermore, $h_\text{max}$ expresses the well-known $\propto d^{-2}$ dependence of NFRHT on the vacuum gap size \cite{wang_parametric_2009,ben-abdallah_fundamental_2010,rousseau_asymptotic_2012,iizuka_analytical_2015}. The parameter $B$ is termed \emph{material residue} henceforth, and is defined as $\displaystyle B = Q/\imag{r_p(\Omega)}$ evaluated in the limit $Q\to \infty$, which reduces to: \begin{subnumcases}{B=\label{eq:Bplasm_pol}} \displaystyle B_\text{plasm} = \frac{1+\varepsilon_\infty}{2}\label{eq:Bplasm}\\ \displaystyle B_\text{polar}=\frac{(1+\varepsilon_\infty)^2}{2\varepsilon_\infty}\frac{\Omega_{\text{polar}}^2}{\omega_\text{LO}^2-\omega_\text{TO}^2}\label{eq:Bpol}, \end{subnumcases} for Drude and Lorentz materials, respectively. Eq. \eqref{eq:HTC_analytical} is the key contribution of this paper. Unlike expressions presented in previous works \cite{rousseau_asymptotic_2012,iizuka_analytical_2015}, Eq. \eqref{eq:HTC_analytical} distinctly separates the role of the optical loss in NFRHT, described by the quality factor $Q$, from that of the other dispersion parameters, captured by $B$, and from temperature. The decoupling of temperature, material quality factor, and material residue in Eq. \eqref{eq:HTC_analytical}, via the functions $\displaystyle \Pi\left(\frac{\Omega}{T}\right)$ and $\displaystyle\Psi\left(\frac{Q}{B}\right)$, allows a quantitative classification of different materials as candidates for tailoring NFRHT. Eq.
\eqref{eq:HTC_analytical} is a very good approximation of the exact result obtained via fluctuational electrodynamics, for $Q$ considerably larger than unity, which is satisfied by all relevant materials for NFRHT \footnote{As a benchmark, for $Q\gg 1$, considered in \cite{miller_shape-independent_2015,venkataram_fundamental_2020}, the ratio $\displaystyle \frac{Q}{B}$ can be written as $\displaystyle \frac{Q}{B}=\frac{|\chi(\Omega)|^2}{\imag{\chi (\Omega)}}=\zeta$, where $\zeta$ is a material response factor}. Further, Eq. \eqref{eq:HTC_analytical} is \textit{exact} for $\varepsilon_\infty=1$ in either polar or plasmonic media. We stress that Eq. \eqref{eq:hmax_z} represents a tight bound to NFRHT that accounts for material dispersion. This is to be contrasted to previous results that derived upper bounds to $h$ under the idealized assumption of a dispersion-less perfect blackbody in the near-field (i.e. $\xi=1$) \cite{ben-abdallah_fundamental_2010}, thus yielding a thermal conductance that is orders of magnitude larger than our result in Eq. \eqref{eq:HTC_analytical} (Fig. \ref{fig:fig3}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig1.pdf} \caption{Materials' quality factor and residue parameter for plasmonic and polar materials. Drude parameters are taken from \cite{ashcroft_solid_1976} for Au, Ag, Cu, Al and from \cite{caldwell_low-loss_2015} for the rest of the considered materials. Superscripts ${}^o$ and ${}^e$ stand for the ordinary and extraordinary principal axes, respectively. The solid line shows Eq. \eqref{eq:Qopt}, which maximizes NFRHT.} \label{fig:fig1} \end{figure} With Eqs. \eqref{eq:HTC_analytical} and \eqref{eq:Gamma}, one can identify the optimal material characteristics, independent of temperature, that maximize NFRHT. In particular, $\Psi$ describes how NFRHT changes with optical loss. Seeking the maximum of $\Psi$ (Eq. \eqref{eq:Gamma}), we obtain: \begin{equation}\label{eq:Qopt} Q_\text{opt}=4.5 \,B.
\end{equation} Hence, NFRHT is maximized when the material quality factor is $4.5$ times the material residue, given in Eq. \eqref{eq:Bplasm_pol}. The work in \cite{ben-abdallah_fundamental_2010} yielded a universal optimal quality factor, namely $Q_\mathrm{opt}^{*}=2.72$, which, however, is independent of $B$, thus suggesting that \textit{all} materials that have the same $Q$ should perform identically in terms of NFRHT. In contrast, Eq. \eqref{eq:Qopt} demonstrates that other dispersion characteristics, beyond the quality factor, are critical in evaluating the NFRHT response. From Eq. \eqref{eq:Bplasm}, the material residue for plasmonic materials depends only on $\varepsilon_\infty$. Typically, $\varepsilon_\infty\lesssim 10$, hence $B_\text{plasm}$ remains well below $10$. Thus, from Eq. \eqref{eq:Qopt}, $Q_\text{opt}$ for plasmonic materials is relatively low, namely $Q_\text{opt}\lesssim 50$. Hence, plasmonic materials with good NFRHT performance have high loss ($\gamma$) and modest $Q$, and NFRHT is enhanced due to the broadband nature of the plasmonic resonance. In contrast, the material residue for polar materials, $B_\text{polar}$ (Eq. \eqref{eq:Bpol}), is inversely proportional to the spectral width of the Reststrahlen band, $(\omega_\mathrm{LO}-\omega_\mathrm{TO})$. The Reststrahlen band of most polar materials is narrow, hence $B_\text{polar}>B_\text{plasm}$, and therefore $Q_\text{opt}$ for polar media is higher than for plasmonic ones. In contrast to plasmonic media, in polar ones it is the narrowband nature of SPhPs that enhances NFRHT (see Supplementary Material, Eq. (S5)). In Fig. \ref{fig:fig1}, Eq. \eqref{eq:Qopt} is shown with the solid line. We also evaluate the NFRHT performance of several relevant polaritonic emitters considered in the literature. These include polar materials such as Silicon Carbide (SiC), hexagonal Boron Nitride (hBN), and doped semiconductors, e.g.
Gallium Arsenide (GaAs), Indium Arsenide (InAs) \cite{cardona_fundamentals_2005,schubert_infrared_2000,caldwell_low-loss_2015}, as well as plasmonic materials such as standard noble metals, e.g. Gold (Au), Silver (Ag), and heavily doped oxides, e.g. IZO and GZO \cite{kim_optimization_2013,kim_plasmonic_2013,caldwell_low-loss_2015}. The distance between each point in Fig. \ref{fig:fig1} and the solid curve representing Eq. \eqref{eq:Qopt} expresses how far each material falls from the ideal material performance. Interestingly, an ultra-high $Q$ does \textit{not} necessarily yield optimal NFRHT. By contrast, it is the interplay between $Q$ and $B$ that is critical, making, for instance, GaAs, AZO and GaN near-optimal materials for NFRHT as compared to Ag or 3C-SiC, even though the latter exhibit ultra-high quality factors. This demonstrates the importance of the material residue, $B$, in evaluating NFRHT performance. The parameter $h_\text{max}$ in Eq. \eqref{eq:hmax_z} is the maximum radiative thermal conductance achievable for each material, if one adjusted its quality factor such that $\displaystyle\Psi\left(\frac{Q}{B}\right)\to 1$, in the limit of infinite temperature, for which $\displaystyle\Pi\left( \frac{\Omega}{T} \right)\to 1$. In Fig. \ref{fig:fig2} (a), we calculate $h_\text{max}$ for the materials considered in Fig. \ref{fig:fig1}. To quantify the degree to which the loss of each material deviates from the optimal value defined in Eq. \eqref{eq:Qopt}, we plot these points against the ratio $Q/B$, which is inversely proportional to the optical loss, $\gamma$. Fig. \ref{fig:fig2} (a) demonstrates that materials with significantly different quality factors, e.g. Au and Ag, can have similar maximal thermal conductance, if one adjusted their loss. This occurs because the material residue, $B$, compensates for the lower $Q$ of Au as compared to that of Ag (see Fig. \ref{fig:fig1}).
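The optimal ratio of Eq. \eqref{eq:Qopt} can be recovered numerically from the loss function $\Psi$ of Eq. \eqref{eq:Gamma} alone (a sketch, assuming SciPy's \texttt{spence} for the dilogarithm):

```python
import numpy as np
from scipy.special import spence  # real dilogarithm: Li2(z) = spence(1 - z)

def Psi(x):
    # Psi(Q/B) = -Li2(-(Q/B)^2) / (1.36 Q/B); maximized at the optimal loss
    return -spence(1.0 + x**2) / (1.36 * x)

x = np.linspace(0.5, 50.0, 200001)
x_opt = x[np.argmax(Psi(x))]
print(f"Psi is maximized at Q/B = {x_opt:.2f}, Psi_max = {Psi(x_opt):.4f}")
```

A brute-force scan locates the maximum at $Q/B\approx 4.5$, with $\Psi$ there indistinguishable from unity, consistent with the factor $1.36$ used to normalize $\Psi$.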
In other words, small deviations of $\varepsilon_\infty$ from unity in plasmonic metals (Eq. \eqref{eq:Bplasm}), and, similarly, sub-optimal Reststrahlen band spectral widths with respect to $\Omega$ in polar materials (Eq. \eqref{eq:Bpol}), can considerably affect the optimal point of NFRHT. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig2.pdf} \caption{{\bf(a)} Maximum heat transfer coefficient, $h_\text{max}$ (Eq. \eqref{eq:hmax_z}), for pairs of plasmonic and polar materials as considered in Fig. \ref{fig:fig1}, for optimal loss and infinite temperature. {\bf(b)} Function $\Psi$ (Eq. \eqref{eq:Gamma}), as $Q/B$ varies. The optimal $Q_\text{opt}/B$ is shown with the dashed vertical line. Points in panel (b) represent fluctuational electrodynamics calculations for the considered materials, and are in very good agreement with our analytical result (Eq. \eqref{eq:HTC_analytical}).} \label{fig:fig2} \end{figure} The dependence of NFRHT on the optical loss is captured explicitly in $\Psi$ (Eq. \eqref{eq:Gamma}), and is shown graphically in Fig. \ref{fig:fig2} (b). As described by Eq. \eqref{eq:Qopt}, $\Psi$ is maximum at $Q_\text{opt}=4.5\,B$, depicted with the vertical dashed line. The horizontal distance between this line and each point in Fig. \ref{fig:fig2} (a) indicates how close each material is to the ideal optical loss, for its particular resonance frequency, $\Omega$. For example, despite the comparable material residue, $B$, of Ag and Au, the loss ($\gamma$) of Au yields a value of $\Psi$ that is much closer to unity as compared to Ag, hence Au presents overall better NFRHT performance, which is consistent with Fig. \ref{fig:fig1}. In Fig. \ref{fig:fig2} (b), we also show points that correspond to exact calculations with fluctuational electrodynamics, for a few commonly used materials in NFRHT.
These calculations are performed in the limit of infinite temperature, for the sake of a meaningful comparison with our formalism in Eq. \eqref{eq:HTC_analytical}. These exact results, represented as markers, are in very good agreement with our theory (solid line) for all considered materials. Small discrepancies occur in the range of relatively low $Q$, for example in the case of IZO \cite{kim_plasmonic_2013}, since our formalism assumes a resonant material response, hence its accuracy improves as the material quality factor increases (see Supplementary Material for details). The temperature dependence of NFRHT is described via $\displaystyle\Pi\left( \frac{\Omega}{T} \right)$ in Eq. \eqref{eq:PiT}, which is the only temperature-dependent term in Eq. \eqref{eq:HTC_analytical}, and agrees with previous analytical results \cite{ben-abdallah_fundamental_2010,rousseau_asymptotic_2012}. In contrast to Wien's displacement law in the far-field, where $h$ scales as $T^{3}$, in the near-field, $\Pi$ scales as $\sim T^{-2}$. On the other hand, from Eq. \eqref{eq:hmax_z}, $h_\text{max}$ scales with the resonance frequency, therefore materials supporting polaritons at high frequencies (high $\Omega$) will, in principle, reach higher NFRHT rates. However, for this to occur in practice, they ought to operate at dramatically higher temperatures. Specifically, since $\Pi$ decreases exponentially with the ratio $\Omega/T$, a higher resonance frequency must be compensated by a higher operating temperature to avoid a dramatic damping of $h$, as expected. This can be seen in Fig. \ref{fig:fig3}, showing the total radiative thermal conductance, $h$, computed via our analytical result (Eq. \eqref{eq:HTC_analytical}), as well as the exact result (fluctuational electrodynamics), where the wavenumber and frequency integrations are carried out numerically. We consider a set of plasmonic materials, i.e. IZO and Ag, and a set of polar ones, i.e. 4H-SiC and AlN.
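The exponential suppression of $\Pi$ at large $\Omega/T$ can be made concrete with two resonance frequencies (a sketch: the plasmonic value below is a hypothetical round number of the order of noble-metal SPP resonances, while the polar value is the SiC SPhP resonance):

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

def Pi(Omega, T):
    # temperature function Pi(Omega/T); tends to 1 for T >> hbar*Omega/(2 kB)
    u = hbar * Omega / (2 * kB * T)
    return (u / np.sinh(u))**2

Omega_polar = 1.79e14   # SiC SPhP resonance (rad/s)
Omega_plasm = 5.0e15    # hypothetical plasmonic resonance (rad/s)

for T in (300.0, 1500.0, 30000.0):
    print(f"T = {T:7.0f} K:  Pi_polar = {Pi(Omega_polar, T):.3e},"
          f"  Pi_plasm = {Pi(Omega_plasm, T):.3e}")
```

At room temperature the polar resonance already retains a sizable fraction of its maximal conductance, whereas the plasmonic one is suppressed by tens of orders of magnitude and only approaches unity at extreme temperatures.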
It is clear that plasmonic materials reach higher NFRHT than their polar counterparts, however this occurs at very high temperatures. This is expected, since the resonance frequency, $\Omega$, of plasmonic media is significantly higher than that of polar ones. Using Eq. \eqref{eq:PiT}, one can estimate the optimal temperature of operation for each material (Fig. S5 in the Supplementary Material). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{Temperature dependence of the total radiative thermal conductance, $h$ (Eq. \eqref{eq:HTC_analytical}), for pairs of plasmonic materials (Ag, IZO) and polar ones (AlN, 4H-SiC). Dotted lines show results with fluctuational electrodynamics. The black dashed line shows the fundamental bound $ h_\text{max}^\mathrm{U} = \frac{k_B^2 T}{3\hbar d^2}$ \cite{ben-abdallah_fundamental_2010}. The orange curve shows the upper bound $h^{opt}$ for 4H-SiC \cite{venkataram_fundamental_2020}.} \label{fig:fig3} \end{figure} In Fig. \ref{fig:fig3}, we also append the exact results with fluctuational electrodynamics (dotted). These are in excellent agreement with our analytical formalism, except for small deviations that occur only for materials with relatively low $Q$. This is expected, since a low $Q$ suggests a spectrally broadband response, whereas our formalism applies to polaritonic resonances (Supplementary Material, Fig. S4 (a)). We conclude that, for the vast majority of polar and plasmonic materials, one can compute \textit{exactly} their NFRHT properties with Eqs. (\ref{eq:HTC_analytical}-\ref{eq:hmax_z}). As a reference, in Fig. \ref{fig:fig3}, we also show the fundamental limit to the radiative thermal conductance $h$, i.e. $\displaystyle h_\text{max}^\mathrm{U} = \frac{k_B^2 T}{3\hbar d^2}$, as derived by Ben-Abdallah \textit{et al.} \cite{ben-abdallah_fundamental_2010} (dashed line), and the upper bound derived by Venkataram \textit{et al.
} \cite{venkataram_fundamental_2020} for one of the considered materials, viz. 4H-SiC, denoted with $h^{opt}$(4H-SiC) (orange curve). By comparison with our exact results, both $h_\text{max}^\mathrm{U}$ and $h^{opt}$(4H-SiC) represent loose bounds to NFRHT. This is to be contrasted with the expression in Eq. \eqref{eq:hmax_z}, which is the limit to which $h$ actually saturates at high temperatures for optimal loss, i.e. $Q=Q_\text{opt}$, for every material (see right $y$-axis in Fig. \ref{fig:fig3}). \textit{Conclusion -} We present a simple analytical framework that describes NFRHT in polaritonic bulk systems. We derive a universal closed-form expression (Eq. \eqref{eq:HTC_analytical}) for the thermal conductance that is valid for any plasmonic or polar material. This expression clarifies the roles of the optical loss ($\gamma$) and of the material quality factor ($Q\propto\gamma^{-1}$) in NFRHT, as well as their interplay with other material dispersion characteristics. We show that the quality factor of a material's polariton resonance alone is not sufficient to accurately describe NFRHT. In contrast, we introduce the material residue parameter, $B$, which completes the analytical framework for the classification of \textit{all} plasmonic and polar materials for NFRHT. We derive a material-dependent optimal condition that maximizes NFRHT, namely $Q=4.5\,B$, where the quality factor of the polaritonic resonance is inversely proportional to the optical loss, and the material residue is loss-independent and encompasses critical properties of polaritonic materials, i.e. the resonance frequency and the spectral width of the Reststrahlen band. In previous works, upper bounds to the spectral emissivity \cite{shim_fundamental_2019,molesky_fundamental_2020,venkataram_fundamental_2020} and loose upper bounds to the total near-field thermal conductance have been determined \cite{pendry_radiative_1999,ben-abdallah_fundamental_2010}.
In contrast, here we provide a tight bound to the thermal conductance, $h_\text{max}$. Beyond the well-known $d^{-2}$ dependence on the gap size, $h_\text{max}$ also rigorously demonstrates the role of other material dispersion characteristics in NFRHT. \begin{acknowledgments} The authors declare no competing financial interest. The project that gave rise to these results received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847648. The fellowship code is LCF/BQ/PI21/11830019. \end{acknowledgments} \input{references.bbl} \end{document}% \section{Emission spectrum $\Phi$} Let us consider two semi-infinite layers made of non-magnetic, isotropic, homogeneous polaritonic material with relative dielectric permittivity $\varepsilon(\omega)$ and susceptibility $\chi\bomega =\varepsilon\bomega -1$, separated by a vacuum gap of size $d$. For gap sizes smaller than the thermal wavelength, $\lambda_T=b_\text{Wien}/T$, where $b_\text{Wien} = 2898\,\mu\mathrm{m\,K}$ \cite{planck_zur_1901}, evanescent waves ($\beta>k_0$) dominate the heat flux. In this range, the corresponding transmission probability, per polarization, can be expressed as \cite{rytov_theory_1953,polder_theory_1971,loomis_theory_1994,shchegrov_near-field_2000}: \begin{equation}\label{Seq:xi} \xi_{p,s}(\omega,\beta>k_0) = \frac{4\imag{r_{p,s}}^2 e^{-2\eta_0 d}}{|1-r_{p,s}^2 e^{-2\eta_0 d}|^2}, \end{equation} where $k_0 = \omega/c_0$ is the free-space wavenumber, $c_0$ is the speed of light in vacuum, $\eta_0=\sqrt{\beta^2-k_0^2}$ is the out-of-plane wavenumber in vacuum, and $r_{p,s}$ are the Fresnel coefficients at the vacuum-material interface for $p$- and $s$-waves, respectively \cite{yeh_optical_1988}. In plasmonic and polar homogeneous non-magnetic isotropic media, thermally excited SPPs and SPhPs, respectively, dominate NFRHT.
These can only be excited in $p$-polarization \cite{maier_plasmonics_2007}, hence the emission spectrum, $\Phi$, can be approximated by the contribution from $p$-polarization alone, without loss of generality. Furthermore, at sufficiently small vacuum gaps, the dispersion of surface polaritons approaches the quasistatic limit, for which $\beta\gg k_0$ \cite{pendry_radiative_1999}. In this limit, $\eta_0\approx\beta$, and one can approximate the Fresnel coefficient with \begin{equation}\label{Seq:FresnelCoeff} r_p\bomega=\frac{\varepsilon\bomega-1}{\varepsilon\bomega+1}. \end{equation} The transmission probability $\xi_p$ for $p$-polarization (Eq. \eqref{Seq:xi}) in the electrostatic limit $\beta\gg k_0$ can be written as: \begin{equation}\label{Seq:xi_p} \xi_{p}(\omega,x) = \frac{4\imag{r_p\bomega}^2 e^{-2 x}}{|1-r_{p}\bomega^2 e^{-2x}|^2}, \end{equation} where $x=\beta d$. Perfect photon tunneling occurs at $\xi_p(\omega,\beta)=1$. From Eq. \eqref{Seq:xi_p}, this occurs at an in-plane wavenumber of: \begin{equation}\label{Seq:beta_xi1} \beta_{res}\bomega = \frac{1}{d}\ln \left|r_p \bomega \right|^2 = \frac{1}{d}\ln \left|\frac{\chi\bomega }{\chi\bomega +2}\right|^2. \end{equation} Eq. \eqref{Seq:beta_xi1} is valid for frequencies $\omega$ such that $\left|1+2/\chi\bomega \right|<1$, and defines a curve in the $(\omega,\beta)$ parameter space, near which the NFRHT is maximal \cite{ben-abdallah_fundamental_2010}. The maximum $\beta_{res}$ that satisfies Eq. \eqref{Seq:beta_xi1} occurs near the surface polariton resonance frequency, $\Omega$, such that $\displaystyle \real{\frac{2}{\chi (\Omega)}}=-1$ or $\displaystyle \real{r_p}=0$. At $\omega=\Omega$, Eq. \eqref{Seq:beta_xi1} yields \begin{equation} \beta_{\text{max}}\approx\frac{1}{d}\ln \imag{r_p}^2. \end{equation} The logarithmic dependence of $\beta_\mathrm{max}$ on the imaginary part of the Fresnel coefficient showcases the role of material loss in NFRHT \cite{ben-abdallah_fundamental_2010}.
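The perfect-tunneling condition can be checked numerically (a sketch, using the SiC Lorentz parameters of the following section): locating the frequency at which $\real{r_p}=0$ and scanning $\xi_p$ over $x=\beta d$ confirms that the tunneling probability indeed reaches unity near the surface polariton resonance:

```python
import numpy as np

# SiC Lorentz parameters (see the following section)
eps_inf, w_TO, w_LO, gam = 6.7, 1.49e14, 1.83e14, 8.97e11  # rad/s

def rp(w):
    # quasistatic Fresnel coefficient r_p = (eps - 1)/(eps + 1)
    e = eps_inf * (1 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gam))
    return (e - 1) / (e + 1)

# Locate the resonance Re(r_p) = 0 inside the Reststrahlen band
w = np.linspace(w_TO, w_LO, 200000)
Omega = w[np.argmin(np.abs(rp(w).real))]

# Scan the tunneling probability over x = beta*d at the resonance
x = np.linspace(1e-3, 10.0, 200000)
r = rp(Omega)
xi = 4 * r.imag**2 * np.exp(-2 * x) / np.abs(1 - r**2 * np.exp(-2 * x))**2
print(f"max xi_p = {xi.max():.4f} at x = {x[np.argmax(xi)]:.2f}")
```

The maximum of $\xi_p$ is essentially unity, at an $x=\beta d$ of order of the logarithm of $\imag{r_p}$, illustrating the logarithmic role of loss noted above.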
The emission spectrum $\Phi$ is therefore given by \begin{multline}\label{Seq:Phi} \Phi\bomega =\frac{1}{4\pi^2}\int_0^\infty \beta[\xi_p (\omega,\beta)+\xi_s (\omega,\beta)]d\beta\\ \simeq \frac{1}{4\pi^2}\int_{k_0}^\infty \beta\xi_p (\omega,\beta)d\beta\simeq\frac{1}{4\pi^2}\int_0^\infty \beta\xi_p (\omega,\beta)d\beta \\ =\frac{1}{4\pi^2d^2}\int_0^\infty x\xi_p(\omega,x)dx, \end{multline} where we have assumed $k_0 d\ll 1$ and we have made the substitution $\beta d \to x$. By plugging the expression of $\xi_p$ given in Eq. \eqref{Seq:xi_p} into Eq. \eqref{Seq:Phi}, and by making the substitution $e^{-2x}\to y$, we obtain: \begin{multline}\label{Seq:Phi_demon} \displaystyle \Phi =- \frac{\imag{r_p}^2}{4\pi^2d^2} \int_0^{1} \frac{\log y}{|1-r_p^2y|^2}dy\\ =-\frac{1}{8\pi^2d^2}\frac{\imag{r_p}}{\real{r_p}}\text{Im}\left\{\int_0^{r_p^2} \frac{\log y}{1-y}dy\right.\\ \left. -\log {r_p^2}\int_0^{r_p^2} \frac{1}{1-y}dy \right\}. \end{multline} \begin{figure} \centering \includegraphics{contour1.pdf} \caption{Contour $\mathcal{L}$ in the complex plane for the identity in Eq. \eqref{Seq:int_1}. It is a triangle with vertices $0,\,1,\,r_p^2$, where $r_p$ is the Fresnel coefficient for $p$-polarization.} \label{fig:contour1} \end{figure} We now solve the last two integrals in Eq. \eqref{Seq:Phi_demon}. We note that the function $\frac{\log y}{1-y}$ is analytic inside the contour $\mathcal{L}$ in the complex plane depicted in Fig. \ref{fig:contour1}.
Thus, by applying \emph{Cauchy's integral theorem} \cite{walsh_cauchy-goursat_1933}, $\displaystyle \oint_\mathcal{L}\frac{\log y}{1-y}dy=0$, hence the first integral is solved as: \begin{multline}\label{Seq:int_1} \int_0^{r_p^2} \frac{\log y}{1-y}dy = \int_1^{r_p^2} \frac{\log y}{1-y}dy+\int_0^{1} \frac{\log y}{1-y}dy\\ =\Li{1-r_p^2} - \frac{\pi^2}{6}, \end{multline} where $\text{Li}_2$ is the dilogarithm or Spence's function, defined as \cite{lewin_dilogarithms_1958} \begin{equation}\label{Seq:dilog} \text{Li}_2(z) = -\int_0^z \frac{\ln(1-u)}{u}du,\quad\forall z\in\mathbb{C}. \end{equation} The second integral is simply $\displaystyle\int_0^{r_p^2} \frac{\log {r_p^2}}{1-y}dy=-\log{r_p^2}\log{(1-r_p^2)}$. Therefore, by plugging this result, together with that of Eq. \eqref{Seq:int_1}, into Eq. \eqref{Seq:Phi_demon}, we can express the thermal emission spectrum, $\Phi$, as: \begin{multline}\label{Seq:Phi_def} \displaystyle \Phi = \frac{1}{8\pi^2d^2}\frac{\imag{r_p}}{\real{r_p}}\text{Im}\left\{ -\Li{1-r_p^2} \right.\\ \left.+\frac{\pi^2}{6}-\log{r_p^2}\log{(1-r_p^2)} \right\}\\ =\frac{1}{8\pi^2d^2}\frac{\imag{r_p}}{\real{r_p}}\text{Im}\left\{\Li{r_p^2}\right\}, \end{multline} where we have used the dilogarithm identity $\Li{z}+\Li{1-z}=\frac{\pi^2}{6}-\log z\log{(1-z)},\,\forall z\in\mathbb{C}\setminus\{0,1\}$ \cite{zagier_dilogarithm_2007}. Eq. \eqref{Seq:Phi_def} coincides with Eq. (7) in the main manuscript, and applies to \textit{any} material dispersion. In the case of NFRHT between dissimilar polaritonic materials with permittivities $\varepsilon_1,\,\varepsilon_2$, the emission spectrum, $\Phi$, can be derived as in Eq.
\eqref{Seq:Phi_demon}, and has the following expression: \begin{equation}\label{Seq:Phi_dissimilar} \Phi= \frac{1}{4\pi^2d^2}\left[\frac{\real{r_{p,1}}}{\imag{r_{p,1}}}+\frac{\real{r_{p,2}}}{\imag{r_{p,2}}}\right]^{-1}\text{Im}\left\{\Li{r_{p,1} r_{p,2}}\right\}, \end{equation} where $r_{p,1},r_{p,2}$ are the Fresnel coefficients at the interfaces with the media of permittivities $\varepsilon_1,\,\varepsilon_2$, respectively. Eqs. \eqref{Seq:Phi_def} and \eqref{Seq:Phi_dissimilar} agree with \cite{rousseau_asymptotic_2012}. In the low-loss limit, the polariton resonance frequency $\Omega$ can be found as the real solution of $\real{r_p({\Omega})}=0$ (see Eq. \eqref{Seq:resFreq_def}). In this limit, the emission spectrum is maximum at $\omega=\Omega$. Taking the limit $\real{r_p}\to 0$ of Eq. \eqref{Seq:Phi_def}, we find that it simplifies to: \begin{multline}\label{Seq:MillerBound} \Phi(\Omega)= \frac{1}{4\pi^2d^2}\ln{\left[1+\imag{r_p(\Omega)}^2\right]}\\ \approx\frac{1}{4\pi^2d^2}\ln{\left[\left.\frac{|\chi|^4}{4\imag{\chi}^2}\right|_\Omega\right]}, \end{multline} where the identity $\displaystyle\imag{r_p(\Omega)}=\left.\frac{|\chi|^2}{2\imag{\chi}}\right|_{\omega=\Omega}$ was used. Eq. \eqref{Seq:MillerBound} agrees exactly with the result by Miller \textit{et al}. in \cite{miller_shape-independent_2015} (Eq. (10)), derived for planar configurations. By contrast, in the high-loss limit, Eq. \eqref{Seq:resFreq_def} may have no real solutions, $\Omega$, and the maximum of $\Phi$ needs to be calculated by maximizing the right-hand side of Eq. \eqref{Seq:Phi_def}. The range of frequencies for which Eq. \eqref{Seq:MillerBound} is valid and the threshold between the low-loss and high-loss regimes are discussed in the following section. We now showcase the validity of Eq. \eqref{Seq:Phi_def} via comparison with fluctuational electrodynamics.
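As an independent numerical sanity check (a sketch, not the code used in this work), the closed form of Eq. \eqref{Seq:Phi_def} can be compared against a direct quadrature of Eq. \eqref{Seq:Phi}, with the dilogarithm evaluated from its integral definition, Eq. \eqref{Seq:dilog}:

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule (kept explicit for NumPy-version independence)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def Li2(z, n=200001):
    # complex dilogarithm from its integral definition,
    # integrated along the straight path u = z*t, t in (0, 1]
    t = np.linspace(1e-9, 1.0, n)
    return -trap(np.log(1.0 - z * t) / t, t)

# quasistatic Fresnel coefficient for the SiC Lorentz model of the next section
eps_inf, w_TO, w_LO, gam = 6.7, 1.49e14, 1.83e14, 8.97e11
def rp(w):
    e = eps_inf * (1 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gam))
    return (e - 1) / (e + 1)

w, d = 1.75e14, 10e-9          # a frequency inside the Reststrahlen band
r = rp(w)

# closed-form emission spectrum
Phi_cf = (r.imag / r.real) * Li2(r * r).imag / (8 * np.pi**2 * d**2)

# direct quadrature of x * xi_p over x = beta*d
x = np.linspace(1e-5, 40.0, 400001)
xi = 4 * r.imag**2 * np.exp(-2 * x) / np.abs(1 - r**2 * np.exp(-2 * x))**2
Phi_num = trap(x * xi, x) / (4 * np.pi**2 * d**2)

print(f"closed form / quadrature = {Phi_cf / Phi_num:.4f}")
```

The two evaluations agree to numerical-quadrature accuracy, illustrating that the closed form holds without restriction on the magnitude of $r_p$.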
The dielectric function of plasmonic and polar materials can be adequately described by the Drude and Lorentz oscillator models, respectively: \begin{subnumcases}{\varepsilon\bomega=\label{Seq:drude_DL}} \displaystyle\varepsilon_{\text{plasm}} =\varepsilon_\infty \left[1-\frac{\omega_p^2}{\omega(\omega+i\gamma)}\right]\label{Seq:eps_plasm}\\ \displaystyle\varepsilon_{\text{polar}} = \varepsilon_\infty\left[1+\frac{\omega_\text{LO}^2-\omega_\text{TO}^2}{\omega_\text{TO}^2-\omega^2 -i\,\omega \gamma}\right]\label{Seq:eps_pol}. \end{subnumcases} Here, $\varepsilon_\infty$ is the high-frequency relative permittivity, and $\gamma$ is the damping rate \cite{drude_zur_1900,kheirandish_modified_2020} in both plasmonic and polar media. For plasmonic metals, $\omega_p$ is the plasma frequency, near which the SPP mode occurs. For polar materials, $\omega_\text{TO}$ and $\omega_\text{LO}$ are the transverse and longitudinal optical phonon frequencies, respectively \cite{caldwell_low-loss_2015}. The spectral range $[\omega_\text{TO},\omega_\text{LO}]$ is known as the \emph{Reststrahlen} band \cite{kortum_phenomenological_1969}, and the SPhP mode occurs within this range. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Phi_SiC.pdf} \caption{Thermal emission spectrum, $\Phi$, normalized by $d^{-2}$ as a function of frequency, in the neighborhood of the SPhP resonance $\Omega$ for two bulk planar layers made of SiC exchanging thermal radiation in the near-field. $ \Phi_p$, $ \Phi_s$, and $ \Phi$, shown with the red, blue, and black curve, respectively, correspond to the p-polarization, s-polarization, and total spectrum as computed via fluctuational electrodynamics, whereas the analytical prediction in Eq. \eqref{Seq:Phi_def} as well as in \cite{rousseau_asymptotic_2012} is shown with the green dashed line. 
We also show the spectral upper bound $\Phi_\text{opt}$ from \cite{venkataram_fundamental_2020} (orange curve) and the emission spectrum at the resonance frequency from \cite{miller_shape-independent_2015} (purple triangle), given in Eq. \eqref{Seq:MillerBound}.} \label{fig:fig1} \end{figure} Drude metals and polar dielectrics support surface polaritons at the resonance frequency $\bar{\Omega}$ such that \begin{equation}\label{Seq:resFreq_def} \real{r_p(\bar{\Omega})}=0 \quad \mbox{or}\quad \real{\frac{2}{\chi (\bar\Omega)}}=-1. \end{equation} As an example, in Fig. \ref{fig:fig1}, we plot $\Phi(\omega)$ normalized by $d^{-2}$ for silicon carbide (SiC), a widely used polar dielectric in the NFRHT literature \cite{song_near-field_2015,francoeur_electric_2011,mulet_enhanced_2002}. We consider a representative Lorentz model for its permittivity, with $\varepsilon_\infty=6.7$, $\omega_\text{TO}=1.49\times 10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times 10^{14}\,\text{rad/s}$, $\gamma=8.97\times 10^{11}\,\text{rad/s}$ \cite{hong_near-field_2018}. The exact emission spectrum for $p$-polarization ($\Phi_p$), obtained via fluctuational electrodynamics (FE), viz. via numerical integration of Eq. \eqref{Seq:Phi} with no approximations, is shown with the red curve. Its $s$-polarization counterpart as well as their sum, $\Phi = \Phi_p+\Phi_s$, are shown with the blue and black curves, respectively. As anticipated, the $p$-polarization component dominates the emission spectrum in almost the entire frequency range near $\Omega$, and thus coincides with the total $\Phi$. Importantly, the green dashed curve shows $\Phi_p$ obtained with the analytical solution in Eq. \eqref{Seq:Phi_def} and in \cite{rousseau_asymptotic_2012}. This curve overlaps nearly perfectly with FE for frequencies near resonance.
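The Lorentz model and the lossless resonance condition can be cross-checked numerically. The short Python sketch below, using the SiC parameters quoted above, verifies that $\Omega_{\text{polar}}$ from Eq. \eqref{Seq:Omega_pol} satisfies $\varepsilon(\Omega)=-1$ in the $\gamma\to 0$ limit and lies inside the Reststrahlen band; it is an illustration only, not part of the derivation.

```python
import numpy as np

def eps_polar(w, eps_inf, w_TO, w_LO, gamma):
    # Lorentz oscillator model for polar dielectrics, Eq. (eps_pol)
    return eps_inf * (1.0 + (w_LO**2 - w_TO**2) / (w_TO**2 - w**2 - 1j * w * gamma))

# SiC parameters quoted in the text
eps_inf, w_TO, w_LO, gamma = 6.7, 1.49e14, 1.83e14, 8.97e11

# Lossless SPhP resonance frequency, Eq. (Omega_pol)
Omega = np.sqrt((eps_inf * w_LO**2 + w_TO**2) / (1.0 + eps_inf))
print(f"Omega = {Omega:.4e} rad/s")

# The resonance lies inside the Reststrahlen band and satisfies eps = -1
# in the gamma -> 0 limit (surface-mode condition for a vacuum interface)
assert w_TO < Omega < w_LO
assert abs(eps_polar(Omega, eps_inf, w_TO, w_LO, 0.0).real + 1.0) < 1e-9
```

The printed value, roughly $1.79\times 10^{14}\,\text{rad/s}$, matches the position of the SPhP peak in Fig. \ref{fig:fig1}.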
We also display an upper bound to the spectrum of NFRHT, $\Phi_\text{opt}$, as derived by Venkataram \textit{et al.} \cite{venkataram_fundamental_2020}, obtained through singular value decomposition of the Maxwell Green's tensor. This result accurately estimates the response of SiC only on resonance. Similarly, we show with a triangle-shaped marker the upper bound to NFRHT on resonance, derived by Miller \textit{et al.} \cite{miller_shape-independent_2015}, and given in Eq. \eqref{Seq:MillerBound}. \section{Radiative thermal conductance $h$} The radiative thermal conductance for two closely spaced semi-infinite layers is defined as: \begin{equation}\label{seq:HTC_def} h=\int_0^{\infty}k_B \Pi\left(\frac{\omega}{T}\right) \Phi\bomega d\omega, \end{equation} where \begin{equation}\label{Seq:Pi} \Pi\left(\frac{\omega}{T}\right)= \frac{1}{k_B}\left[\frac{\partial}{\partial T}\theta(\omega,T)\right]=\left[\frac{\frac{\hbar }{2k_B} \frac{\omega}{T}}{\sinh{\left(\frac{\hbar }{2k_B} \frac{\omega}{T}\right)}}\right]^2. \end{equation} Here, $\theta(\omega,T)=\hbar\omega / \left[\exp{\left(\hbar\omega/k_B T\right)}-1\right]$ is the mean energy per photon and $\Phi\bomega $ is the emission spectrum, whose closed-form expression has been derived in the previous section and is given in Eq. \eqref{Seq:Phi_def}. We now particularize this derivation to plasmonic and polar media, whose dielectric functions are given in Eqs. \eqref{Seq:eps_plasm} and \eqref{Seq:eps_pol}, respectively. By applying Eq. \eqref{Seq:resFreq_def}, for $\gamma\to 0$, the surface plasmon polariton (SPP) and surface phonon polariton (SPhP) resonance frequencies can be expressed as: \begin{subnumcases}{\Omega=\label{Seq:Omega}} \displaystyle\Omega_{\text{plasm}} = \sqrt{\frac{\varepsilon_\infty}{\varepsilon_\infty+1}}\omega_p\label{Seq:Omega_plasm}\\ \displaystyle \Omega_{\text{polar}} = \sqrt{\frac{\varepsilon_\infty \omega_\text{LO}^2+\omega_\text{TO}^2}{1+\varepsilon_{\infty}}}\label{Seq:Omega_pol}.
\end{subnumcases} We assume that the function $\Pi\left(\frac{\omega}{T}\right)$ (Eq. (9) of the main text) is slowly varying with respect to the emission spectrum $\Phi(\omega)$, which peaks at the polariton resonance frequency $\bar\Omega$. Therefore, we take the first step toward the analytical integration of Eq. \eqref{seq:HTC_def} by sampling $\Pi$ at the frequency $\bar\Omega$, i.e.: \begin{equation}\label{seq:HTC_def2} h=k_B \Pi\left(\frac{\bar\Omega}{T}\right)\int_0^{\infty} \Phi\bomega d\omega. \end{equation} We now carry out the frequency integration of the emission spectrum in Eq. \eqref{seq:HTC_def2}. By using its expression in Eq. \eqref{Seq:Phi_def}, we have \begin{equation}\label{Seq:h_step1} 8\pi^2d^2\int_0^\infty\Phi(\omega)d\omega=\int_0^\infty\frac{\imag{r_p\bomega}}{\real{r_p\bomega}}\text{Im}\left\{\Li{r_p^2\bomega}\right\}d\omega. \end{equation} Since $r_p(\omega)$, defined in Eq. \eqref{Seq:FresnelCoeff}, inherits the reality (Hermitian) symmetry from the permittivity function $\varepsilon(\omega)$, i.e. $r_p^*(\omega)=r_p(-\omega)$ ($^*$ is the complex-conjugate operator), the integrand in Eq. \eqref{Seq:h_step1} is an even function of $\omega$. Therefore, we can extend the integration to include the negative frequency axis, namely \begin{equation}\label{Seq:h_step2} 8\pi^2d^2 \int_0^\infty\Phi(\omega)d\omega=\text{Im}\left\{\int_{-\infty}^{+\infty}f(\omega)d\omega\right\}, \end{equation} where the complex-valued function $f(z)$ is \begin{equation}\label{Seq:f} f(z)= \frac{\imag{r_p}(z)}{2\real{r_p}(z)}\Li{r_p^2(z)}. \end{equation} \begin{figure*}[t] \centering \includegraphics{contour2.pdf} \caption{Contour $\mathcal{L}$ in the complex plane for the complex contour integration in Eq. \eqref{Seq:h_step3}, composed of the real axis and a semicircular contour of radius $R$, which will tend to infinity in order to cover the lower half of the complex plane. The poles of the function $f(z)$, defined in Eq.
\eqref{Seq:f}, are shown for the cases $\varepsilon_\infty=1$ {\bf(a)}, $(\varepsilon_\infty\neq1,Q>Q_\text{th})$ {\bf(b)} and $(\varepsilon_\infty\neq1,Q<Q_\text{th})$ {\bf(c)}.} \label{fig:contour2} \end{figure*} We now tackle the complex integration of $f\bomega$ in Eq. \eqref{Seq:h_step2} by means of contour integration in the complex plane, following a strategy similar to the one employed in \cite{rousseau_asymptotic_2012}. Specifically, we intend to perform the integration on the closed contour $\mathcal{L}$ shown in Fig. \ref{fig:contour2}, composed of the real axis and a semicircular contour of positive radius $R$ lying in the lower half-plane, in the limit $R\to\infty$. Since the integrand vanishes on the semicircular contour in the limit $R\to\infty$, by Jordan's lemma \cite{carrier_functions_2005} the integral on this contour is also zero. Hence, we can rewrite the integrated emission spectrum as: \begin{equation}\label{Seq:h_step3} 8\pi^2d^2 \int_0^\infty\Phi(\omega)d\omega =\text{Im}\left\{\oint_\mathcal{L}f(z)dz\right\}. \end{equation} The function $f(z)$ is the analytical extension of the real-valued function $f\bomega$ to the complex plane. It must be noted that $\imag{r_p}(z),\,\real{r_p}(z)$ are no longer constrained to be real-valued functions, and therefore the notation $\imag{\cdot},\,\real{\cdot}$ no longer refers to the real and imaginary part operators. Nevertheless, we keep using the same notation in the following calculations for the sake of simplicity, while bearing in mind that $\imag{r_p}(z),\,\real{r_p}(z)$ are functions derived for a real variable and extended to the complex plane.
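The parity argument that allows the extension to negative frequencies can be checked numerically. For a Drude dispersion, the reality condition $\varepsilon^*(\omega)=\varepsilon(-\omega)$ implies $r_p(-\omega)=r_p^*(\omega)$, so that $\real{r_p}$ is even and $\imag{r_p}$ is odd in $\omega$, consistent with Eq. \eqref{Seq:imagrp}. The sketch below assumes the quasistatic Fresnel coefficient $r_p=(\varepsilon-1)/(\varepsilon+1)$ and purely illustrative Drude parameters.

```python
import numpy as np

def eps_drude(w, eps_inf=1.0, w_p=1.37e16, gamma=4.1e13):
    # Drude model, Eq. (eps_plasm); parameter values are illustrative only
    return eps_inf * (1.0 - w_p**2 / (w * (w + 1j * gamma)))

def rp(w):
    # Quasistatic p-polarized Fresnel coefficient (assumed form)
    e = eps_drude(w)
    return (e - 1.0) / (e + 1.0)

# Reality (crossing) symmetry inherited from eps(w): r_p(-w) = r_p(w)^*,
# i.e. Re{r_p} is even and Im{r_p} is odd in w, which makes the integrand
# of Eq. (h_step1) an even function of frequency.
w = np.linspace(0.2, 3.0, 50) * 1e16
assert np.allclose(rp(-w), np.conj(rp(w)))
assert np.allclose(rp(-w).imag, -rp(w).imag)
```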
For instance, the step $\imag{r_p\bomega}\to\imag{r_p}(z)$ for plasmonic dispersions is the following: \begin{multline}\label{Seq:imagrp} \imag{r_p\bomega}\\ =\frac{2\varepsilon_\infty \omega\omega_p^2 \gamma}{[(1+\varepsilon_\infty)\omega^2-\varepsilon_\infty \omega_p^2]^2+(1+\varepsilon_\infty )^2\omega^2\gamma^2}\\ \to \frac{2\varepsilon_\infty z\omega_p^2 \gamma}{[(1+\varepsilon_\infty)z^2-\varepsilon_\infty \omega_p^2]^2+(1+\varepsilon_\infty )^2z^2\gamma^2}\\ =\imag{r_p}(z),\qquad z\in\mathbb{C}. \end{multline} We now carry out the complex integration using Cauchy's residue theorem \cite{carrier_functions_2005}. The first step is identifying the poles of the integrand $f(z)$. By inspecting the expression of $f(z)$ in Eq. \eqref{Seq:f}, it is clear that the poles are exactly the frequencies $\bar\Omega$ solving the resonance condition in Eq. \eqref{Seq:resFreq_def}. By solving Eq. \eqref{Seq:resFreq_def}, using the expression of the plasmonic and polar permittivity given in Eq. \eqref{Seq:drude_DL}, we have: \begin{widetext} \begin{subnumcases}{\bar\Omega_{\substack{1 \\ 2}}=\label{Seq:OmegaGeneral}} \displaystyle \Omega\qquad &$\varepsilon_\infty=1$\label{Seq:OmegaGeneral_plasm} \\ \displaystyle \Omega\sqrt{F-\frac{1}{2Q^2}\mp \frac{1}{2} \sqrt{\left(\frac{1}{Q_\text{th}^2}-\frac{1}{Q^2}\right)\left(\frac{1}{Q_2^2}-\frac{1}{Q^2}\right)}}\qquad &$\varepsilon_\infty\neq 1$, \label{Seq:OmegaGeneral_pol} \end{subnumcases} \end{widetext} where $\Omega$ is the polariton resonance frequency in the absence of optical losses, given in Eq. \eqref{Seq:Omega}, and $Q$ is the quality factor of the polaritonic material resonance, evaluated at frequency $\omega=\Omega$, namely \cite{wang_general_2006}: \begin{equation}\label{Seq:Q} \displaystyle Q=\left.\frac{\omega \frac{d\real{\varepsilon}}{d\omega}}{2\imag{\varepsilon}}\right|_\Omega\approx \frac{\Omega}{\gamma}.
\end{equation} The parameter $F$ for plasmonic and polar cases can be written as: \begin{subnumcases}{F=\label{Seq:F}} \displaystyle F_\text{plasm}= 1+\frac{1}{2(B_\text{plasm}-1)} \label{Seq:F_plasm}\\ \displaystyle F_\text{polar} =1+\frac{\varepsilon_\infty+1}{2B_\text{polar}(\varepsilon_\infty-1)} \label{Seq:F_pol}. \end{subnumcases} The parameter $B$, which in the main manuscript we have labeled the \textit{material residue} function, is independent of the material losses (independent of $\gamma$), and is calculated as $\displaystyle B = \frac{Q}{\imag{r_p(\Omega)}}$, in the limit $Q\to\infty$. Therefore, for plasmonic and polar dispersions, respectively, $\displaystyle B$ can be written as: \begin{subequations} \label{Seq:Bplasm_pol} \begin{align}[left={B = \empheqlbrace}] & \displaystyle B_\text{plasm} = \frac{1+\varepsilon_\infty}{2}\label{Seq:Bplasm}\\ & \displaystyle B_\text{polar}=\frac{(1+\varepsilon_\infty)^2}{2\varepsilon_\infty}\frac{\Omega_{\text{polar}}^2}{\omega_{\text{LO}}^2-\omega_\text{TO}^2}\label{Seq:Bpol} \end{align} \end{subequations} Finally, the parameters $Q_\text{th}$ and $Q_2$ have the following expressions: \begin{align} Q_\text{th} &= \frac{1}{\sqrt{2\left(F-\sqrt{2F-1}\right)}}\label{Seq:Qth}\\ Q_2 &= \frac{1}{\sqrt{2\left(F+\sqrt{2F-1}\right)}}\label{Seq:Q2}, \end{align} where $F$ for plasmonic and polar dispersions is given in Eq. \eqref{Seq:F}. From Eq. \eqref{Seq:OmegaGeneral}, in the case $\varepsilon_\infty \neq 1$, it can be shown that $\bar\Omega_{1,2}$ are real numbers only if $Q<Q_2$ or $Q>Q_\text{th}$. It can be proven that $Q_2<1$, and hence we can focus only on the cases $Q<Q_\text{th}$ and $Q>Q_\text{th}$. Therefore, $Q_\text{th}$ represents a threshold for the material quality factor below which the poles of $f(z)$ become complex and move away from the real axis, as shown in Figs. \ref{fig:contour2}b-c. We can now apply the residue theorem to carry out the complex integration in Eq. \eqref{Seq:h_step3}.
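For concreteness, the parameters above can be evaluated for the polar test dispersion considered later in this supplement ($\varepsilon_\infty=4$, $\omega_\text{TO}=1.49\times10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times10^{14}\,\text{rad/s}$). The Python sketch below reproduces the threshold $Q_\text{th}\approx 10.85$ quoted in the caption of Fig. \ref{fig:fig3} and confirms $Q_2<1$.

```python
import numpy as np

# Lorentz parameters of the polar test dispersion used later in the text
eps_inf, w_TO, w_LO = 4.0, 1.49e14, 1.83e14

Omega = np.sqrt((eps_inf * w_LO**2 + w_TO**2) / (1.0 + eps_inf))           # Eq. (Omega_pol)
B = (1.0 + eps_inf)**2 / (2.0 * eps_inf) * Omega**2 / (w_LO**2 - w_TO**2)  # Eq. (Bpol)
F = 1.0 + (eps_inf + 1.0) / (2.0 * B * (eps_inf - 1.0))                    # Eq. (F_pol)

Q_th = 1.0 / np.sqrt(2.0 * (F - np.sqrt(2.0 * F - 1.0)))                   # Eq. (Qth)
Q_2 = 1.0 / np.sqrt(2.0 * (F + np.sqrt(2.0 * F - 1.0)))                    # Eq. (Q2)

print(f"Omega = {Omega:.3e} rad/s, B = {B:.2f}, Q_th = {Q_th:.2f}, Q_2 = {Q_2:.2f}")
assert Q_2 < 1.0                    # only the threshold Q_th is relevant in practice
assert abs(Q_th - 10.85) < 0.05     # value quoted in the text for this dispersion
```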
It is important to note that, since for $\varepsilon_\infty\neq 1$ and $Q>Q_\text{th}$ the poles of $f(z)$ are on the integration contour, we have to include a factor of $\frac{1}{2}$ in the standard residue theorem formula, which applies when the poles are in the interior of the integration contour $\mathcal{L}$ \cite{carrier_functions_2005}. On the other hand, the frequency $\bar\Omega$ in the case $\varepsilon_\infty=1$ does not depend on the $Q$ factor, and the poles are always real.\\ {\bf Case $\varepsilon_\infty=1$. }\\ Via algebraic manipulation, it can be shown that the function $f(z)$ for both plasmonic and polar dispersions in the case $\varepsilon_\infty=1$ has the following expression: \begin{equation} f(z) = \frac{\Omega}{2Q}\frac{z}{z^2-\Omega^2}\Li{r_p^2(z)}. \end{equation} Therefore, $f(z)$ has two first-order real poles, i.e. $\{+\Omega,-\Omega\}$, and the integration of Eq. \eqref{Seq:h_step3} can be carried out through the residue theorem as follows: \begin{multline}\label{Seq:h_step4_einf1} \text{Im}\left\{\oint_\mathcal{L}f(z)dz\right\} \\ = -\text{Im}\left\{ i\pi \left[ \mathcal{R}\text{es} (f,+\Omega)+\mathcal{R}\text{es} (f,-\Omega)\right]\right\}\\ =-\text{Re}\left\{2\pi \mathcal{R}\text{es} (f,+\Omega) \right\}, \end{multline} where $\mathcal{R}\text{es} (f,w)$ is the residue of $f$ at $w$. Here, we have used the parity of $f$, and the minus sign comes from having chosen a clockwise (negative) orientation of the contour $\mathcal{L}$, shown in Fig. \ref{fig:contour2}a. By evaluating the limit $\displaystyle \lim_{z\to\Omega}(z-\Omega)f(z)$, we calculate the residue $\mathcal{R}\text{es} (f,+\Omega)$, which has the following expression: \begin{multline}\label{Seq:residue_einf1} \mathcal{R}\text{es} (f,+\Omega)= \frac{\Omega}{4Q}\Li{-\imag{r_p(\Omega)}^2}\\ =\frac{\Omega}{4Q}\Li{-\left(\frac{Q}{B}\right)^2}. \end{multline} By plugging this result into Eq.
\eqref{Seq:h_step4_einf1}, we can finally write the expression for the radiative thermal conductance as: \begin{equation}\label{Seq:HTC_analytical2} h = h_\text{max} \, \Psi\left(\frac{Q}{B}\right)\Pi\left( \frac{\Omega}{T} \right), \end{equation} where $\displaystyle\Pi\left( \frac{\Omega}{T} \right)$ is defined in Eq. \eqref{Seq:Pi}, and \begin{equation}\label{Seq:hmax_Psi2} h_\text{max} = \frac{1.36 \, k_B}{16\pi d^2} \frac{\Omega}{B}\qquad \Psi\left(\frac{Q}{B}\right)= -\frac{\Li{-(Q/B)^2}}{1.36 (Q/B)}. \end{equation} Eq. \eqref{Seq:HTC_analytical2} coincides with Eq. (8) in the main manuscript. It must be noted that all the redundant scaling factors, e.g. $B$ in the denominators of $h_\text{max}$ and $\Psi$, have been introduced such that $\Psi$ is bounded above by 1. The function $\Psi$ reaches its maximum at: \begin{equation}\label{Seq:Qopt} Q_\text{opt}=4.5\,B. \end{equation} Eq. \eqref{Seq:Qopt} coincides with Eq. (13) in the main manuscript.\\ {\bf Case $\varepsilon_\infty\neq 1$. }\\ The function $f(z)$ for both plasmonic and polar dispersions in the case $\varepsilon_\infty\neq 1$ has the following expression: \begin{equation}\label{Seq:f_einfnot2} f(z) = \frac{(F-1)\Omega^3 z}{(z^2-\Omega_1^2)(z^2-\Omega_2^2)}\frac{\Li{r_p^2(z)}}{Q}, \end{equation} where $F$ is given in Eq. \eqref{Seq:F}. For $\varepsilon_\infty\neq 1$, we have different results according to the position of $Q$ with respect to $Q_\text{th}$. From Eq. \eqref{Seq:OmegaGeneral}, it is clear that if $Q>Q_\text{th}$, then the function $f(z)$ has four first-order real poles $\{\pm \Omega_1,\pm \Omega_2\}$, as shown in Fig. \ref{fig:contour2}b; if $Q<Q_\text{th}$, then the function $f(z)$ has four first-order complex poles $\{\pm\Omega_1,\pm \Omega_2\}$, with $\Omega_2=-\Omega_1^*$, as shown in Fig. \ref{fig:contour2}c. Thus, if $Q<Q_\text{th}$, the only poles contributing to the integral are the two in the interior of the contour $\mathcal{L}$, viz. $\{\Omega_1,\,\Omega_2\}$.
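The optimal-loss condition in Eq. \eqref{Seq:Qopt} can be verified numerically by evaluating $\Psi$ through the dilogarithm. The sketch below uses \texttt{scipy.special.spence}, which computes $\text{Li}_2(1-z)$ for real $z$, so that $\Li{-x^2}=\texttt{spence}(1+x^2)$; it confirms that $\Psi$ peaks near $Q/B=4.5$ with a peak value close to 1.

```python
import numpy as np
from scipy.special import spence   # spence(z) = Li2(1 - z) for real z

def Psi(x):
    # Psi(Q/B) = -Li2(-(Q/B)^2) / (1.36 Q/B), Eq. (hmax_Psi2);
    # Li2(-x^2) = spence(1 + x^2)
    return -spence(1.0 + x**2) / (1.36 * x)

x = np.linspace(0.5, 20.0, 4000)
psi = Psi(x)
x_opt = x[np.argmax(psi)]

print(f"Psi peaks at Q/B = {x_opt:.2f}, peak value = {psi.max():.4f}")
assert abs(x_opt - 4.5) < 0.2   # optimal loss condition, Eq. (Qopt)
assert psi.max() < 1.005        # normalization chosen so that the peak is ~1
```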
Therefore, in both cases $Q>Q_\text{th}$ and $Q<Q_\text{th}$, we can solve Eq. \eqref{Seq:h_step3} by applying the residue theorem as follows: \begin{equation}\label{Seq:int_general_last} \text{Im}\left\{\oint_\mathcal{L}f(z)dz\right\}=-\text{Re}\left\{2\pi \sum_{j=1}^2 \mathcal{R}\text{es} (f,\Omega_j) \right\}. \end{equation} By evaluating the limit $\displaystyle \lim_{z\to\Omega_j}(z-\Omega_j)f(z)$, we calculate the residues $\mathcal{R}\text{es} (f,\Omega_1)$ and $\mathcal{R}\text{es} (f,\Omega_2)$, which have the following expression: \begin{multline}\label{Seq:f_einfnot1} \displaystyle \mathcal{R}\text{es}(f,\Omega_{\substack{1 \\ 2}}) = \mp\frac{(F-1) }{2\sqrt{4(F^2-1)+\frac{1}{Q^4}-4F(2+\frac{1}{Q^2})}}\\ \times\frac{\Li{-\imag{r_p}(\Omega_{\substack{1 \\ 2}})^2}}{Q}, \end{multline} where $F$ is given in Eq. \eqref{Seq:F}, and the complex function $\imag{r_p}(z)$ is the analytical extension of the imaginary part of the Fresnel coefficient in the complex plane (e.g., see Eq. \eqref{Seq:imagrp} for the plasmonic dispersion). By inserting the residues in Eq. \eqref{Seq:f_einfnot1} into Eq. \eqref{Seq:int_general_last}, and in turn plugging this into Eq. \eqref{Seq:h_step3}, one can finally obtain the expression for the heat conductance. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig3S.pdf} \caption{{\bf(a)} Integral in Eq. \eqref{Seq:h_step3}, equivalent to $8\pi^2d^2\int_0^\infty\Phi(\omega)d\omega$, for a polar dispersion with Lorentz parameters $\varepsilon_\infty=4$, $\omega_\text{TO}=1.49\times 10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times 10^{14}\,\text{rad/s}$, and resonance frequency $\Omega=1.77\times 10^{14}\,\text{rad/s}$. We compare the exact solution (black curve), calculated using Eq. \eqref{Seq:int_general_last}, with the approximate one (red dashed line), calculated using Eq. \eqref{Seq:h_step4_einf1} and used as the final result in the main manuscript.
We also show with a green line the threshold $Q_\text{th}=10.85$ between the low- and high-loss regimes, calculated using Eq. \eqref{Seq:Qth}. For the same dispersion parameters, in {\bf (b)} we plot the emission spectrum normalized by $d^{-2}$, for increasing values of $Q=\Omega/\gamma$, namely $Q=Q_1/50, Q_1/45,Q_1/40,\dots,Q_1$, where $Q_1\approx 200$. We also mark the peak of each emission spectrum with a red marker: in every case, the resonance occurs at frequencies very close to $\Omega$, calculated for $Q\to\infty$.} \label{fig:fig3} \end{figure} \section{Approximations for $h$ and the polariton resonance frequency $\bar\Omega$} We now simplify the exact expressions for $h$, derived in the previous section, for the three scenarios, viz. $\{\varepsilon_\infty=1,\forall Q\}$, $\{\varepsilon_\infty\neq1, Q<Q_\text{th}\}$ and $\{\varepsilon_\infty\neq1, Q>Q_\text{th}\}$, aiming to provide a single expression valid in any regime of $Q$ and $\varepsilon_\infty$. We make the following approximations: (\textit{i}) we neglect the contribution from the second pole $\Omega_2$; (\textit{ii}) we assume $Q\gg Q_\text{th}$. Under these assumptions, we can approximate $\Omega_1\approx\Omega$, with $\Omega$ given in Eq. \eqref{Seq:Omega}. It can be shown that the resulting residue $\mathcal{R}\text{es}(f,\Omega)$ has the same form as the case $\varepsilon_\infty=1$ in Eq. \eqref{Seq:residue_einf1}, and the heat transfer conductance expression is the same as in Eq. \eqref{Seq:HTC_analytical2} and Eq. (8) in the main manuscript. Even though this expression is derived in the high-$Q$ limit, it remains a good approximation in the low-$Q$ case, as shown in Fig. \ref{fig:fig3}(a) for a representative case study. Specifically, we show the integral in Eq.
\eqref{Seq:h_step3} for a polar dispersion with Lorentz parameters $\varepsilon_\infty=4$, $\omega_\text{TO}=1.49\times 10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times 10^{14}\,\text{rad/s}$, $\Omega=1.77\times 10^{14}\,\text{rad/s}$ as a function of the quality factor $Q=\Omega/\gamma$, and there is good agreement with the exact solution over the entire considered $Q$ range. It must be noted that the polariton resonance frequency $\bar\Omega_1$ given in Eq. \eqref{Seq:OmegaGeneral} for both plasmonic and polar dispersions is weakly dependent on the optical loss, i.e., on $\gamma$ or the quality factor $Q$. In Fig. \ref{fig:fig3}(b) we show this by monitoring the peak position of the emission spectrum, $\Phi\bomega,$ for the same polar dispersion used in panel (a), for decreasing values of the quality factor, $Q$, or equivalently for increasing values of $\gamma$, starting from $Q_1\approx 200$ and arriving at $Q=Q_1/50\approx 4$. It is clear that, even in the lowest-$Q$ case, the emission spectrum peaks very close to the resonance frequency $\Omega$ calculated in the limit $\gamma\to0$ or $Q\to \infty$, given in Eq. \eqref{Seq:Omega}. Thus, assuming $\Omega$ as the polariton resonance frequency under any material loss condition, as considered in the main manuscript, represents a good approximation. \section{Temperature dependence in the near-field vs. Wien's law} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{h_T_Pi_1panel.pdf} \caption{Optimal temperature $T_\text{opt}$ as a function of the resonance wavelength, calculated using Eq. \eqref{Seq:Topt}. The Drude parameters for Au, Ag, Cu, and Al are taken from \cite{ashcroft_solid_1976}, while the Drude parameters for the other plasmonic materials and the Lorentz parameters for polar materials are taken from \cite{caldwell_low-loss_2015} (tables 1-2). The superscripts ${}^o$ and ${}^e$ stand for the ordinary and extraordinary principal axes of the corresponding material, respectively.
In the inset, we plot $\Pi$ (in Eq. \eqref{Seq:Pi}), which expresses the normalized thermal conductance in the optimal loss condition $Q=Q_\text{opt}=4.5\,B$ (see Eq. \eqref{Seq:Qopt}). We identify the optimal operating temperature for a material by setting $\Pi=0.9$ (green marker).} \label{fig:fig45} \end{figure} We now discuss the temperature dependence of NFRHT in bulk systems. In the inset of Fig. \ref{fig:fig45}, we display $\Pi$, given in Eq. \eqref{Seq:Pi}, which decreases exponentially as a function of the ratio $\Omega/T$. Hence, to avoid a dramatic damping of $h$, a higher resonance frequency should be compensated by a higher operating temperature, as expected. This is well understood in the far-field regime through Wien's displacement law, which estimates the optimal resonance frequency of a thermal emitter at a given temperature for maximizing the power emitted in the far-field. One can similarly estimate the optimal temperature, $T_\mathrm{opt}$, of a polaritonic thermal emitter in a planar near-field configuration, by maximizing $\displaystyle\Pi\left( \frac{\Omega}{T} \right)$. Since $\Pi$ reaches its maximum $\Pi=1$ in the limit of infinite temperature, we compute the optimal temperature of operation as a function of resonance frequency, $\Omega$, by setting the term $\displaystyle\Pi\left( \frac{\Omega}{T} \right)$ to $0.9$, as shown with the green marker in the inset of Fig. \ref{fig:fig45}. The resulting optimal temperature is expressed as: \begin{equation}\label{Seq:Topt} T_\text{opt} = \frac{\hbar \Omega}{2\times 0.57\, k_B} = \frac{b_\text{NF}}{\lambda}, \end{equation} where $b_\text{NF}= 12729\,\mu\text{m}\,\text{K}\approx 4.4\times b_\text{Wien}$ (where the subscript stands for near field) and $\lambda =2\pi c_0/\Omega $ is the resonance wavelength. This dependence of $T_\mathrm{opt}$ on $\Omega$ is shown with the solid line in Fig. \ref{fig:fig45}. As a reference, we also display with the dashed line Wien's displacement law, relevant in the far-field.
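The near-field displacement constant $b_\text{NF}$ can be re-derived numerically: solving $\Pi(x)=0.9$ for $x=\hbar\Omega/(2k_B T)$ and using $T_\text{opt}\lambda=\pi\hbar c_0/(x k_B)$ recovers the value quoted above, up to small differences in the last digits coming from the rounding of the root to 0.57.

```python
import numpy as np
from scipy.constants import hbar, k as k_B, c
from scipy.optimize import brentq

# Pi(x) = [x / sinh(x)]^2 with x = hbar*Omega / (2 k_B T), Eq. (Pi)
Pi = lambda x: (x / np.sinh(x))**2

# Root of Pi(x) = 0.9; the text rounds it to 0.57
x_star = brentq(lambda x: Pi(x) - 0.9, 0.1, 2.0)

# T_opt * lambda = pi * hbar * c / (x_star * k_B) = b_NF, cf. Eq. (Topt)
b_NF = np.pi * hbar * c / (x_star * k_B) * 1e6    # micron * K
b_Wien = 2898.0                                   # Wien's constant, micron * K
print(f"x* = {x_star:.3f}, b_NF = {b_NF:.0f} um K = {b_NF / b_Wien:.2f} x b_Wien")
assert abs(b_NF - 12729.0) / 12729.0 < 0.02
assert 4.2 < b_NF / b_Wien < 4.6
```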
One can therefore see that in the near-field, considerably higher temperatures are required to reach optimal material performance, as compared to the far-field. This is expected, since far-field thermal emission is generally more broadband than near-field thermal emission, especially for polaritonic materials \cite{joulain_surface_2005,song_near-field_2015}. Hence, the maximal spectral overlap between the mean energy per photon, $\theta(\omega,T)$ in Eq. \eqref{seq:HTC_def}, and a blackbody spectrum is achieved at much lower temperatures as compared to the overlap between a narrowband near-field resonance and $\theta(\omega,T)$, since $\theta$ broadens as $T$ increases. We display the optimal temperature, calculated using Eq. \eqref{Seq:Topt}, as a function of the resonance wavelength $2\pi c_0/\Omega$, for polar materials such as Silicon Carbide (SiC), hexagonal Boron Nitride (hBN), and doped semiconductors, e.g. Gallium Arsenide (GaAs), Indium Arsenide (InAs) \cite{cardona_fundamentals_2005,schubert_infrared_2000,caldwell_low-loss_2015}, as well as plasmonic materials such as standard noble metals, e.g. Gold (Au), Silver (Ag), and heavily doped oxides, e.g. IZO and GZO \cite{kim_optimization_2013,kim_plasmonic_2013,caldwell_low-loss_2015}, as in Figs. 1-3 of the main text. It can be seen that most polar media, with resonance frequencies mainly in the mid-IR, will achieve optimal performance at temperatures that are up to two orders of magnitude lower than their plasmonic counterparts, since plasmonic resonances occur mainly in the near-IR, visible, and UV regimes. \section{Comparison with literature \cite{ben-abdallah_fundamental_2010}} We now compare the near-field radiative thermal conductance derived in this work with the expression for polar dielectrics provided by Ben-Abdallah \textit{et al. } in \cite{ben-abdallah_fundamental_2010}.
In \cite{ben-abdallah_fundamental_2010}, the authors considered a polar material dispersion with high-frequency dielectric permittivity $\varepsilon_\infty=1$ (see Eq. \eqref{Seq:eps_pol}). In their derivation (Eq. (14)), the radiative thermal conductance that they obtained, $h'$, can be written as: \begin{equation}\label{Seq:benAbd_HTC} h' = h_\text{max}'\,\Psi'(Q)\,\Pi\left( \frac{\Omega}{T} \right), \end{equation} where $\Pi\left( \frac{\Omega}{T} \right)$ is the same as in Eq. \eqref{Seq:HTC_analytical2}, given in Eq. \eqref{Seq:Pi}, and the functions $h_\text{max}'$ and $\Psi'$ are given by: \begin{equation}\label{Seq:hmax_prime} h_\text{max}' = \frac{0.12 k_B}{d^2}\Omega, \end{equation} and \begin{equation}\label{Seq:Psi_prime} \Psi'(Q)=\frac{\log Q}{0.37\,Q}. \end{equation} In both our result (Eq. \eqref{Seq:HTC_analytical2}) and the result from \cite{ben-abdallah_fundamental_2010} (Eq. \eqref{Seq:benAbd_HTC}), since $(\Psi,\Psi')$ and $\Pi$ are functions bounded above by 1, $h_\text{max}$ and $h_\text{max}'$ represent the maximum heat-transfer conductance achievable in this configuration. As shown in the previous sections, the material residue, $B$, is greater than unity, i.e. $B>1$; therefore, we can compare $h_\text{max}$ and $h_\text{max}'$ as follows: \begin{equation} \frac{h_\text{max}'}{h_\text{max}}=4.44\,B>4.4. \end{equation} Therefore, our analytical estimate for the maximum radiative thermal conductance is at least 4.4 times smaller than the one predicted in \cite{ben-abdallah_fundamental_2010} under the assumption of blackbody-like thermal emission in the near-field ($\xi=1$). We now compare the functions $(\Psi,\Psi')$, taking into account the dependence of heat transfer on optical loss.
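The prefactor comparison can be checked directly: with $h_\text{max}'=0.12\,k_B\Omega/d^2$ and $h_\text{max}=1.36\,k_B\Omega/(16\pi d^2 B)$, the ratio is $0.12\times 16\pi/1.36\approx 4.44$ times $B$. The sketch below verifies this, as well as the fact that $\Psi'(Q)$ is maximized at $Q=e$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Ratio of the two maximum conductances:
#   h'_max = 0.12 k_B Omega / d^2                (Ben-Abdallah et al.)
#   h_max  = 1.36 k_B Omega / (16 pi d^2 B)      (this work)
ratio_over_B = 0.12 * 16.0 * np.pi / 1.36
print(f"h'_max / h_max = {ratio_over_B:.2f} * B")
assert abs(ratio_over_B - 4.44) < 0.01

# Psi'(Q) = ln(Q) / (0.37 Q) is maximized at Q = e, since
# d/dQ [ln(Q)/Q] = (1 - ln(Q))/Q^2 vanishes when ln(Q) = 1
res = minimize_scalar(lambda q: -np.log(q) / (0.37 * q),
                      bounds=(1.0, 20.0), method="bounded")
assert abs(res.x - np.e) < 1e-3
```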
The function $\Psi'$ depends only on the quality factor $Q$ of the polariton resonance, and neglects the dependence on the other features of a material's dispersion, such as the size of the Reststrahlen band ($\omega_\text{LO}-\omega_\text{TO}$) and its position. In our work, these are included in the material residue, $B$. According to the definition of $\Psi'(Q)$, in Eq. \eqref{Seq:Psi_prime}, this function is maximized at the optimal quality factor $Q_\text{opt}^*=e$, where $e\approx2.72$ is Napier's constant, valid for any polar dielectric with a Lorentz dispersion relation (Eq. \eqref{Seq:drude_DL}), with $\varepsilon_\infty=1$. Conversely, from our derivation, the function $\displaystyle\Psi\left(\frac{Q}{B}\right)$ depends on the dispersion's characteristics through the factor $B$, and the optimal quality factor $Q$ is given by $Q_\text{opt}=4.5\,B$. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Psi_PsiP.pdf} \caption{Functions $\Psi$ (black curve) and $\Psi'$ (red curve), accounting for the effect of optical loss on the NFRHT in a polar bulk system with small vacuum gap, calculated using our result in Eq. \eqref{Seq:hmax_Psi2} and the expression from Ben-Abdallah \textit{et al. } \cite{ben-abdallah_fundamental_2010} given in Eq. \eqref{Seq:Psi_prime}, respectively, as a function of the polariton resonance quality factor $Q$. We considered a Lorentz dispersion with parameters $\varepsilon_\infty=1$, $\omega_\text{TO}=1.49\times10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times10^{14}\,\text{rad/s}$, for which $B=4.93$, calculated using Eq. \eqref{Seq:Bpol}. The different optimal quality factors at which the curves peak are also marked with a dashed line.} \label{fig:Psi_PsiP} \end{figure} In Fig. \ref{fig:Psi_PsiP}, we compare the two functions for a Lorentz dispersion with $\varepsilon_\infty=1$, $\omega_\text{TO}=1.49\times10^{14}\,\text{rad/s}$, $\omega_\text{LO}=1.83\times10^{14}\,\text{rad/s}$, for which $B=4.93$, calculated using Eq.
\eqref{Seq:Bpol}. The optimal quality factor predicted by maximizing $\Psi$ in Eq. \eqref{Seq:hmax_Psi2} is $Q_\text{opt}=4.5\,B=22.2$, about an order of magnitude greater than the optimal $Q$ obtained by maximizing $\Psi'$ in Eq. \eqref{Seq:Psi_prime}. \input{supplement.bbl} \end{document}
\section{Introduction} \label{sec:introduction} Your computer is continuously, efficiently, and reliably executing computer programs, but does it really understand them? Artificial intelligence researchers have taken great strides towards teaching machines to understand images, speech, natural text, and other media. The problem of understanding computer code has received far less attention over the last two decades. Yet the growth of computing's influence on society shows no signs of abating, with knowledge workers in all domains increasingly asked to create, maintain, and extend computer programs. For all workers, but especially those outside software engineering roles, programming is a means to achieve practical goals, not an end in itself. Programmers deserve intelligent tools that reveal the connections between their code, their colleagues' code, and the subject-matter concepts to which the code implicitly refers and to which their real enthusiasm belongs. By teaching machines to comprehend code, we could create artificial agents that empower human knowledge workers or perhaps even generate useful programs of their own. One computational domain undergoing particularly rapid growth is data science. Besides the usual problems facing the scientist-turned-programmer, the data scientist must contend with a proliferation of programming languages (like Python, R, and Julia) and frameworks (too numerous to recount). Data science therefore presents an especially compelling target for machine understanding of computer code. An AI agent that simultaneously comprehends the generic concepts of computing and the specialized concepts of data science could prove enormously useful, by, for example, automatically visualizing machine learning workflows or summarizing data analyses as natural text for human readers. Towards this prospect, we develop an AI system that forms semantic representations of computer programs. 
Our system is fully automated, inasmuch as it expects nothing from the programmer besides the program itself and the ability to run it. We have designed our system to handle scripts written by data scientists, which tend to be shorter, more linear, and better defined semantically than the large-scale codebases written by software engineers. Our methodology is not universally applicable. Nevertheless, we think it could be fruitfully extended to other scientific domains with a computational focus, such as bioinformatics or computational neuroscience, by integrating it with existing domain-specific ontologies. We contribute several components that cohere as an AI system but also hold independent interest. First, we define a dataflow graph representation of a computer program, called the \emph{raw flow graph}. We extract raw flow graphs from computer programs using static and dynamic program analysis. We define another program representation, called the \emph{semantic flow graph}, combining dataflow information with domain-specific information about data science. To support the two representations, we introduce an \emph{ontology language} for modeling computer programs, called Monocl, and an \emph{ontology} written in this language, called the Data Science Ontology. Finally, we propose a \emph{semantic enrichment} algorithm for transforming the raw flow graph into the semantic flow graph. The Data Science Ontology is available online\footnote{To browse and search the Data Science Ontology, and to see additional documentation, please visit \url{https://www.datascienceontology.org}.} and our system's source code is available on GitHub under a permissive open source license (see \cref{sec:conclusion}). \subsubsection*{Organization of paper} In the next section, we motivate our method through a pedagogical example (\cref{sec:example}). 
We then explain the method itself, first informally and with a minimum of mathematics (\cref{sec:methods}) and then again with greater precision and rigor (\cref{sec:math}). We divide the exposition in this way because the major ideas of the paper can be understood without the mathematical formalism, which may be unfamiliar to some readers. We then take a step back from technical matters to locate our work within the ongoing movement towards collaborative, open, and reproducible data science (\cref{sec:data-science-viewpoint}). We also demonstrate our method on a realistic data analysis drawn from a biomedical data science challenge. In the penultimate section, we bring out connections to existing work in artificial intelligence, program analysis, programming language theory, and category theory (\cref{sec:related-work}). We conclude with directions for future research and development (\cref{sec:conclusion}). For a non-technical overview of our work, emphasizing motivation and examples, we suggest reading \cref{sec:introduction,sec:example,sec:data-science-viewpoint,sec:conclusion}. \section{First examples} \label{sec:example} We begin with a small, pedagogical example, to be revisited and elaborated later. Three versions of a toy data analysis are shown in \cref{lst:kmeans-scipy,lst:kmeans-sklearn,lst:kmeans-r}. The first is written in Python using the scientific computing packages NumPy and SciPy; the second in Python using the data science packages Pandas and Scikit-learn; and the third in R using the R standard library. The three programs perform the same analysis: they read the Iris dataset from a CSV file, drop the last column (labeling the flower species), fit a $k$-means clustering model with three clusters to the remaining columns, and return the cluster assignments and centroids. The programs are syntactically distinct but semantically equivalent. 
To be more precise, the programs are written in different programming languages---Python and R---and the two Python programs invoke different sets of libraries. Moreover, the programs exemplify different programming paradigms. \cref{lst:kmeans-scipy,lst:kmeans-r} are written in functional style and \cref{lst:kmeans-sklearn} is written in object-oriented style. Thus, at the syntactic level, the programs appear to be very different, and conventional program analysis tools would regard them as very different. However, as readers fluent in Python and R will recognize, the programs perform the same data analysis. They are semantically equivalent, up to possible numerical error and minor differences in the implementation of the $k$-means clustering algorithm. (Implementations differ mainly in how the iterative algorithm is initialized.) Identifying the semantic equivalence, our system furnishes the same semantic flow graph for all three programs, shown in \cref{fig:semantic-kmeans}. The labeled nodes and edges refer to concepts in the Data Science Ontology. The node tagged with a question mark refers to code with unknown semantics.
\begin{figure} \begin{minipage}{\textwidth} \begin{minted}[frame=leftline,rulecolor=\color{gray!50}]{python} import numpy as np from scipy.cluster.vq import kmeans2 iris = np.genfromtxt('iris.csv', dtype='f8', delimiter=',', skip_header=1) iris = np.delete(iris, 4, axis=1) centroids, clusters = kmeans2(iris, 3) \end{minted} \captionof{listing}{$k$-means clustering in Python via NumPy and SciPy} \vspace{\baselineskip} \label{lst:kmeans-scipy} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{minted}[frame=leftline,rulecolor=\color{gray!50}]{python} import pandas as pd from sklearn.cluster import KMeans iris = pd.read_csv('iris.csv') iris = iris.drop('Species', 1) kmeans = KMeans(n_clusters=3) kmeans.fit(iris.values) centroids = kmeans.cluster_centers_ clusters = kmeans.labels_ \end{minted} \captionof{listing}{$k$-means clustering in Python via Pandas and Scikit-learn} \label{lst:kmeans-sklearn} \end{minipage}% \begin{minipage}[t]{0.5\textwidth} \begin{minted}[frame=leftline,rulecolor=\color{gray!50}]{r} iris = read.csv('iris.csv', stringsAsFactors=FALSE) iris = iris[, names(iris) != 'Species'] km = kmeans(iris, 3) centroids = km$centers clusters = km$cluster \end{minted} \captionof{listing}{$k$-means clustering in R} \label{lst:kmeans-r} \end{minipage} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{images/semantic-kmeans} \caption{Semantic flow graph for three versions of $k$-means clustering analysis (\cref{lst:kmeans-scipy,lst:kmeans-sklearn,lst:kmeans-r})} \label{fig:semantic-kmeans} \end{figure} \section{Ideas and techniques} \label{sec:methods} We now explain our method of constructing semantic representations of computer programs. At the highest level, two steps connect a computer program to its representations. First, \emph{computer program analysis} distills the raw flow graph from the program. The \emph{raw flow graph} is a dataflow graph that records the concrete function calls made during the execution of the program. 
This graph is programming language and library dependent. In the second step, a process of \emph{semantic enrichment} transforms the raw flow graph into the semantic flow graph. The \emph{semantic flow graph} describes the same program in terms of abstract concepts belonging to the Data Science Ontology. This graph is programming language and library independent. Thus, both dataflow graphs capture the execution of a computer program doing data analysis, but at different levels of abstraction. The architecture diagram in \cref{fig:architecture} summarizes our method. Semantic enrichment requires a few supporting actors. An \emph{ontology} (or \emph{knowledge base}), called the Data Science Ontology, underlies the semantic content. It contains two types of knowledge: concepts and annotations. \emph{Concepts} formalize the abstract ideas of machine learning, statistics, and computing on data. The semantic flow graph has semantics, as its name suggests, because its nodes and edges are linked to concepts. \emph{Annotations} map code from data science libraries, such as Pandas and Scikit-learn, onto concepts. During semantic enrichment, annotations determine how concrete functions in the raw flow graph are translated into abstract functions in the semantic flow graph. Such, in outline, is our method. Throughout the rest of this section we develop its elements in greater detail, beginning with the Data Science Ontology and the ontology language in which it is expressed. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/architecture} \caption{System architecture} \label{fig:architecture} \end{figure*} \subsection{The Data Science Ontology} \label{sec:dso} We have begun writing an ontology---the Data Science Ontology---about statistics, machine learning, and data processing. It aims to support automated reasoning about data science software. As we have said, the Data Science Ontology comprises concepts and annotations.
\emph{Concepts} catalog and formalize the abstract entities of data science, such as data tables and statistical models, as well as processes that manipulate them, like loading data from a file or fitting a model to data. Reflecting the intuitive distinction between ``things'' and ``processes,'' concepts bifurcate into two kinds: types and functions. The terminology agrees with that of functional programming. Thus, a \emph{type} represents a kind or species of thing in the domain of data science. A \emph{function} is a functional relation or mapping from an input type (the \emph{domain}) to an output type (the \emph{codomain}). In this terminology, the concepts of a data table and of a statistical model are types, whereas the concept of fitting a predictive model is a function that maps an unfitted predictive model, together with predictors and response data, to a fitted predictive model. As a modeling assumption, we suppose that software packages for data science, such as Pandas and Scikit-learn, concretize the concepts. \emph{Annotations} say how this concretization occurs by mapping types and functions in software packages onto type and function concepts in the ontology. To avoid confusion between levels of abstraction, we call the former ``concrete'' and the latter ``abstract.'' Thus, a type annotation maps a concrete type---a primitive type or user-defined class in a language like Python or R---onto an abstract type---a type concept. Likewise, a function annotation maps a concrete function onto an abstract function. We construe ``concrete function'' in the broadest possible sense to include any programming language construct that ``does something'': ordinary functions, methods of classes, attribute getters and setters, etc. The division of the ontology into concepts and annotations on the one hand, and into types and functions on the other, leads to a two-way classification. 
\cref{table:ontology-classification} lists basic examples of each of the four combinations, drawn from the Data Science Ontology. \begin{table} \centering \caption{Example concepts and annotations from the Data Science Ontology} \label{table:ontology-classification} \begin{tabular}{lp{2in}p{2in}} & \textbf{Concept} & \textbf{Annotation} \\ \toprule \textbf{Type} & data table & pandas data frame \\ \cmidrule{2-3} & statistical model & scikit-learn estimator \\ \cmidrule{2-3} \textbf{Function} & reading a tabular data file & \texttt{read\_csv} function in pandas \\ \cmidrule{2-3} & fitting a statistical model to data & \texttt{fit} method of scikit-learn estimators \\ \bottomrule \end{tabular} \end{table} Significant modeling flexibility is needed to accurately translate the diverse APIs of statistical software into a single set of universal concepts. \cref{sec:example} shows, for example, that the concept of $k$-means clustering can be concretized in software in many different ways. To accommodate this diversity, we allow function annotations to map a single concrete function onto an arbitrary abstract ``program'' comprised of function concepts. In \cref{fig:function-annotations}, we display three function annotations relevant to the fitting of $k$-means clustering models in \cref{lst:kmeans-scipy,lst:kmeans-sklearn,lst:kmeans-r}. By the end of this section, we will see how to interpret the three annotations and how the semantic enrichment algorithm uses them to generate the semantic flow graph in \cref{fig:semantic-kmeans}. We have not yet said what kind of abstract ``program'' is allowed to appear in a function annotation. Answering that question is the most important purpose of our ontology language, to which we now turn. 
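To make the annotation idea concrete, here is a toy sketch of a function annotation as a plain data record, loosely modeled on the SciPy \texttt{kmeans2} annotation discussed above. The field names and concept identifiers are hypothetical, and the actual annotation format used by the Data Science Ontology is richer than this sketch suggests:

```python
# Illustrative only: a function annotation as a plain data record.
# The field names and concept identifiers below are hypothetical; the
# Data Science Ontology's real annotation format differs.
annotation = {
    "language": "python",
    "package": "scipy",
    "function": "scipy.cluster.vq.kmeans2",
    "domain": ["data", "number-of-clusters"],
    "codomain": ["cluster-centroids", "cluster-assignments"],
    # Abstract definition: one concrete call expands to several concepts,
    # e.g. create a k-means model, fit it, then read off its slots.
    "definition": [
        "create-k-means",
        "fit",
        "cluster-centroids",
        "cluster-assignments",
    ],
}

def is_annotated(qualified_name, annotations):
    """Check whether a concrete function has an annotation."""
    return any(a["function"] == qualified_name for a in annotations)
```

In this toy representation, semantic enrichment would consult `is_annotated` for each box of the raw flow graph and, on success, splice in the abstract `definition`.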
\begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics{images/annotation-python-scipy-kmeans} \caption{Annotation: \texttt{kmeans2} function in SciPy (cf.\ \cref{lst:kmeans-scipy})} \label{fig:annotation-python-scipy-kmeans} \end{subfigure} \\[0.5\baselineskip] \begin{subfigure}{0.5\textwidth} \centering \includegraphics{images/annotation-python-sklearn-fit} \caption{Annotation: \texttt{fit} method of \texttt{BaseEstimator} class in Scikit-learn (cf.\ \cref{lst:kmeans-sklearn})} \label{fig:annotation-python-sklearn-fit} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics{images/annotation-r-kmeans} \caption{Annotation: \texttt{kmeans} function in R's builtin \texttt{stats} package (cf.\ \cref{lst:kmeans-r})} \label{fig:annotation-r-kmeans} \end{subfigure} \caption{Example function annotations from the Data Science Ontology} \label{fig:function-annotations} \end{figure} \subsection{The Monocl ontology language} \label{sec:monocl} The Data Science Ontology is expressed in an \emph{ontology language} that we call the MONoidal Ontology and Computing Language (Monocl). We find it helpful to think of Monocl as a minimalistic, typed, functional programming language. The analogy usually suggests the right intuitions but is imperfect because Monocl is simpler than any commonly used programming language, being designed for knowledge representation rather than actual computing. The ontology language says how to construct new types and functions from old, for the purposes of defining concepts and annotations. Monocl is written in a point-free textual syntax or equivalently in a graphical syntax of interconnected boxes and wires. The two syntaxes are parallel though not quite isomorphic. In this section, we emphasize the more intuitive graphical syntax. We describe the constructors for types and functions and illustrate them using the graphical syntax. A more formal development is given in \cref{sec:concepts-as-category}. 
Monocl has a minimalistic type system, supporting product and unit types as well as a simple form of subtyping. A \emph{basic type}, sometimes called a ``primitive type,'' is a type that cannot be decomposed into simpler types. Basic types must be explicitly defined. All other types are \emph{composite}. For instance, the \emph{product} of two types $X$ and $Y$ is another type $X \times Y$. It has the usual meaning: an element of type $X \times Y$ is an element of type $X$ \emph{and} an element of type $Y$, in that order. Products of three or more types are defined similarly. Product types are similar to record types in conventional programming languages, such as \texttt{struct} types in C. There is also a \emph{unit type} $1$ inhabited by a single element. It is analogous to the \texttt{void} type in C and Java, the \texttt{NoneType} type in Python (whose sole inhabitant is \texttt{None}), and the \texttt{NULL} type in R. A type can be declared a \emph{subtype} of one or more other types. To a first approximation, subtyping establishes an ``is-a'' relationship between types. In the Data Science Ontology, matrices are a subtype of both arrays (being arrays of rank 2) and data tables (being tables whose columns all have the same data type). As this example illustrates, subtyping in Monocl differs from inheritance in a typical object-oriented programming language. Instead, subtyping should be understood through \emph{implicit conversion}, also known as \emph{coercion} \cite{reynolds1980,pierce1991}. The idea is that if a type $X$ is a subtype of $X'$, then there is a canonical way to convert elements of type $X$ into elements of type $X'$. Elaborating our example, a matrix simply \emph{is} an array (of rank 2), hence can be trivially converted into an array. A matrix is not strictly speaking a data table but can be converted into one (of homogeneous data type) by assigning numerical names to the columns. In the graphical syntax, types are represented by wires. 
A basic type $X$ is drawn as a single wire labeled $X$. A product of $n$ types is a bundle of $n$ wires in parallel. The unit type is an empty bundle of wires (a blank space). This should become clearer as we discuss wiring diagrams for functions. A function $f$ in Monocl has an input type $X$, its \emph{domain}, and an output type $Y$, its \emph{codomain}. We express this in the usual mathematical notation as $f: X \to Y$. Like types, functions are either basic or composite. Note that a basic function may have composite domain or codomain. From the programming languages perspective, a program in the Monocl language is nothing more than a function. Functions are represented graphically by \emph{wiring diagrams} (also known as \emph{string diagrams}). A basic function $f: X \to Y$ is drawn as a box labeled $f$. The top of the box has input ports with incoming wires $X$ and the bottom has output ports with outgoing wires $Y$. A wiring diagram defines a general composite function by connecting boxes with wires according to certain rules. The diagram has an outer box with input ports, defining the function's domain, and output ports, defining the codomain. \cref{fig:semantic-kmeans,fig:function-annotations,fig:raw-kmeans-scipy,fig:raw-kmeans-sklearn,fig:raw-kmeans-r,fig:semantic-dream-ra} are all examples of wiring diagrams. 
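As an informal illustration of subtyping as implicit conversion, the matrix coercions described earlier in this subsection can be written as ordinary functions. This is a hypothetical sketch (representing a matrix as a list of rows and a data table as a dict of named columns), not part of Monocl or our system:

```python
# Hypothetical coercion functions illustrating implicit conversion.
# A matrix (list of equal-length rows) trivially "is" a rank-2 array,
# and becomes a data table by assigning numerical column names.

def matrix_to_array(matrix):
    # The coercion matrix -> array changes nothing: a matrix
    # already is an array of rank 2.
    return matrix

def matrix_to_table(matrix):
    # Coerce a matrix to a column-oriented table whose columns
    # receive numerical names, as described in the text.
    ncols = len(matrix[0]) if matrix else 0
    return {str(j): [row[j] for row in matrix] for j in range(ncols)}
```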
\newsavebox{\composebox} \savebox{\composebox}{\includegraphics{images/function-composition}} \begin{figure} \begin{subfigure}{0.25\textwidth} \centering \usebox{\composebox} \caption{Composition} \label{fig:function-composition} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \centering \vbox to \ht\composebox{% \vfill\includegraphics{images/function-product}\vfill} \caption{Product} \label{fig:function-product} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \centering \vbox to \ht\composebox{% \vfill\includegraphics{images/function-copy}\vfill} \caption{Copying data} \label{fig:function-copy} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \centering \vbox to \ht\composebox{% \vfill\includegraphics{images/function-delete}\vfill} \caption{Deleting data} \label{fig:function-delete} \end{subfigure} \caption{Graphical syntax for operations on functions} \label{fig:function-constructors} \end{figure} The rules for connecting boxes within a wiring diagram correspond to ways of creating new functions from old. The two most fundamental ways are composing functions and taking products of functions. The \emph{composition} of a function $f: X \to Y$ with $g: Y \to Z$ is a new function $f \cdot g: X \to Z$, with the usual meaning. Algorithmically speaking, $f \cdot g$ computes \emph{in sequence}: first $f$ and then $g$. The \emph{product} of functions $f: X \to W$ and $g: Y \to Z$ is another function $f \times g: X \times Y \to W \times Z$. Algorithmically, $f \times g$ computes $f$ and $g$ \emph{in parallel}, taking the inputs, and returning the outputs, of both $f$ and $g$. \cref{fig:function-composition,fig:function-product} show the graphical syntax for composition and products. The graphical syntax implicitly includes a number of special functions. For any type $X$, the \emph{identity} function $1_X: X \to X$ maps every element of type $X$ to itself. 
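The intended semantics of composition, products, and identities can be conveyed by a small Python analogy, in which basic functions are interpreted as callables and $f \cdot g$ runs $f$ first. This is an informal sketch, not an implementation of Monocl:

```python
# Informal Python semantics for Monocl's basic operations on functions.
# Diagrammatic order: compose(f, g) runs f first, then g.

def compose(f, g):
    return lambda x: g(f(x))

def product(f, g):
    # Run f and g "in parallel" on the two components of a pair.
    return lambda pair: (f(pair[0]), g(pair[1]))

def identity(x):
    return x

inc = lambda x: x + 1        # hypothetical basic function
double = lambda x: 2 * x     # hypothetical basic function

assert compose(inc, double)(3) == 8            # double(inc(3))
assert product(inc, double)((3, 4)) == (4, 8)  # componentwise
assert compose(identity, inc)(0) == 1          # identity is a unit
```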
For each pair of types $X$ and $Y$, the \emph{braiding} or \emph{transposition} function $\sigma_{X,Y}: X \times Y \to Y \times X$ exchanges its two inputs. Identities and braidings are drawn as straight wires and pairs of crossed wires, respectively. Any permutation function can be expressed by taking compositions and products of identities and braidings. Diagrammatically, this means that a bundle of wires may be criss-crossed in arbitrarily complex ways (provided that the wires do not bend backwards). For each type $X$, there is also a \emph{copying} function $\Delta_X: X \to X \times X$, which duplicates its input, and a \emph{deleting} function $\lozenge_X: X \to 1$, which discards its input. In the graphical syntax, these functions allow a single output port to have multiple or zero outgoing wires. For instance, given a function $f: X \to Y$, \cref{fig:function-copy,fig:function-delete} display the compositions $f \cdot \Delta_Y: X \to Y \times Y$ and $f \cdot \lozenge_Y: X \to 1$. The analogous situation is not permitted for input ports; in a well-formed wiring diagram, every input port has exactly one incoming wire. Besides serving as the ``is-a'' relation ubiquitous in knowledge representation systems, the subtype relation for types enables ad hoc polymorphism for functions. We extend the definition of function composition to include implicit conversion: to compose a function $f: X \to Y$ with $g: Y' \to Z$, we do not require that $Y$ equal $Y'$, but only that $Y$ be a subtype of $Y'$. Operationally, to compute $f \cdot g$, we first compute $f$, then coerce the result from type $Y$ to $Y'$, and finally compute $g$. Diagrammatically, a wire connecting two boxes has valid types if and only if the source port's type is a subtype of the target port's type. Thus implicit conversions really are implicit in the graphical syntax. Monocl also supports ``is-a'' relations between functions, which we call \emph{subfunctions} in analogy to subtypes.
In the Data Science Ontology, reading a table from a tabular file (call it $f$) is a subfunction of reading data from a generic data source (call it $f'$). That sounds intuitively plausible but what does it mean? The domain of $f$, a tabular file, is a subtype of the domain of $f'$, a generic data source. The codomain of $f$, a table, is a subtype of the codomain of $f'$, generic data. Now consider two possible computational paths that take a tabular file and return generic data. We could apply $f$, then coerce the resulting table to generic data. Alternatively, we could coerce the tabular file to a generic data source, then apply $f'$. The subfunction relation asserts that these two computations are equivalent. The general definition of a subfunction is perfectly analogous. \subsection{Raw and semantic dataflow graphs} \label{sec:graphs} With this preparation, we can attain a more exact understanding of the raw and semantic flow graphs. The two dataflow graphs are both wiring diagrams representing a data analysis. However, they exist at different levels of abstraction. The \emph{raw flow graph} describes the computer implementation of a data analysis. Its boxes are concrete functions or, more precisely, the function calls observed during the execution of the program. Its wires are concrete types together with their observed elements. These ``elements'' are either literal values or object references, depending on the type. To illustrate, \cref{fig:raw-kmeans-scipy,fig:raw-kmeans-sklearn,fig:raw-kmeans-r} show the raw flow graphs for \cref{lst:kmeans-scipy,lst:kmeans-sklearn,lst:kmeans-r}, respectively. Note that the wire elements are not shown. 
\begin{figure} \begin{minipage}{\textwidth} \centering \includegraphics[height=0.2\textheight]{images/raw-scipy-kmeans} \caption{Raw flow graph for $k$-means clustering in Python via NumPy and SciPy (\cref{lst:kmeans-scipy})} \label{fig:raw-kmeans-scipy} \vspace{0.25in} \end{minipage} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{images/raw-sklearn-kmeans} \caption{Raw flow graph for $k$-means clustering in Python via Pandas and Scikit-learn (\cref{lst:kmeans-sklearn})} \label{fig:raw-kmeans-sklearn} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=0.55\textwidth]{images/raw-r-kmeans} \caption{Raw flow graph for $k$-means clustering in R (\cref{lst:kmeans-r})} \label{fig:raw-kmeans-r} \end{minipage} \end{figure} The \emph{semantic flow graph} describes a data analysis in terms of universal concepts, independent of the particular programming language and libraries used to implement the analysis. Its boxes are function concepts. Its wires are type concepts together with their observed elements. The semantic flow graph is thus an abstract function, composed of the ontology's concepts and written in the graphical syntax, but augmented with computed values. \cref{fig:semantic-kmeans} shows the semantic flow graph for \cref{lst:kmeans-scipy,lst:kmeans-sklearn,lst:kmeans-r}. Another semantic flow graph is shown in \cref{fig:semantic-dream-ra} below. Again, the wire elements are not shown. \subsection{Program analysis} \label{sec:program-analysis} We use computer program analysis to extract the raw flow graph from a data analysis. Program analysis therefore plays an essential role in our AI system. It plays an equally important role in our original publication on this topic \cite{patterson-ibm2017}, reviewed below in \cref{sec:related-work}. In the present work, we have extended our program analysis tools to support the R language, but our basic methodology has changed little. 
We review only the major points about our usage of computer program analysis, deferring to our original paper for details. Program analysis can be static or dynamic or both. \emph{Static analysis} consumes the source code but does not execute it. Much literature on program analysis is about static analysis because of its relevance to optimizing compilers \cite{nielson1999,aho2006}. \emph{Dynamic analysis}, in contrast, executes the program without necessarily inspecting the code. Our program analysis is mainly dynamic, for a couple of reasons. Static analysis, especially type inference, is challenging for the highly dynamic languages popular among data scientists. Moreover, we record values computed over the course of the program's execution, such as model parameters and hyperparameters. For this, dynamic analysis is indispensable. Of course, a disadvantage of dynamic analysis is the necessity of running the program. Crucially, our system needs not just the code itself, but also its input data and runtime environment. These are all requirements of scientific reproducibility (see \cref{sec:data-science-viewpoint}), so in principle they ought to be satisfied. In practice they are often neglected. To build the raw flow graph, our program analysis tools record interprocedural data flow during program execution. We begin with the empty wiring diagram and add boxes incrementally as the program unfolds. Besides recording function calls and their arguments and return values, the main challenge is to track the provenance of objects as they are passed between functions. When a new box is added to the diagram, the provenance record says how to connect the input ports of the new box to the output ports of existing boxes. How all this is accomplished depends on the programming language in question. In Python, we register callbacks via \texttt{sys.settrace}, to be invoked whenever a function is called or returns. A table of object provenance is maintained using weak references.
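A minimal sketch of this mechanism is shown below. Our actual tools do considerably more bookkeeping (argument capture, provenance tables, weak references), but the skeleton of a \texttt{sys.settrace} callback recording calls and returns looks like this:

```python
import sys

calls = []  # record of observed call and return events

def tracer(frame, event, arg):
    # 'call' fires when a new frame is entered; returning the tracer
    # from the 'call' event enables per-frame 'return' events, whose
    # arg is the function's return value.
    if event == "call":
        calls.append(("call", frame.f_code.co_name))
    elif event == "return":
        calls.append(("return", frame.f_code.co_name, arg))
    return tracer

def add(x, y):
    return x + y

sys.settrace(tracer)
result = add(2, 3)
sys.settrace(None)  # stop tracing

assert ("call", "add") in calls
assert ("return", "add", 5) in calls
```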
No modification of the abstract syntax tree (AST) is necessary. In R, we add callbacks by rewriting the AST, to be invoked whenever a term is (lazily) evaluated. Care must be taken to avoid breaking functions which use nonstandard evaluation, a kind of dynamic metaprogramming unique to R \cite{wickham2014}. Our usage of program analysis involves conceptual as well as engineering difficulties, because the programming model of Monocl is simpler than that of typical programming languages. We mention just one conceptual problem to give a sense of the issues that arise. Monocl is purely functional, whereas most practical languages allow mutation of objects.\footnote{Mutation is more common in Python than in R because most R objects have copy-on-modify semantics.} Our program analysis tools have limited capabilities for detecting mutations. When a ``function'' mutates an object, the mutated object is represented in the raw flow graph as an extra output of the function. For instance, we interpret the \texttt{fit} method of a Scikit-learn estimator, which modifies the model in-place, as returning a new model (\cref{fig:annotation-python-sklearn-fit}). \subsection{Semantic enrichment} \label{sec:semantic-enrichment} The \emph{semantic enrichment} algorithm transforms the raw flow graph into the semantic flow graph. It proceeds in two independent stages, one of expansion and one of contraction. The expansion stage makes essential use of the ontology's annotations. \subsubsection*{Expansion} In the \emph{expansion} stage, the annotated parts of the raw flow graph are replaced by their abstract definitions. Each annotated box---that is, each box referring to a concrete function annotated by the ontology---is replaced by the corresponding abstract function. Likewise, the concrete type of each annotated wire is replaced by the corresponding abstract type. 
This stage of the algorithm is ``expansionary'' because, as we have seen, a function annotation's definition may be an arbitrary Monocl program. In other words, a single box in the raw flow graph may expand to an arbitrarily large subdiagram in the semantic flow graph. The expansion procedure is \emph{functorial}, to use the jargon of category theory. Informally, this means two things. First, notice that concrete types are effectively annotated twice, explicitly by type annotations and implicitly by the domain and codomain types in function annotations. Functoriality requires that these abstract types be compatible, ensuring the logical consistency of type and function annotations. Second, expansion preserves the structure of the ontology language, including composition and products. Put differently, the expansion of a wiring diagram is completely determined by its action on individual boxes (basic functions). Functoriality is a modeling decision that greatly simplifies the semantic enrichment algorithm, at the expense of imposing restrictions on how the raw flow graph may be transformed. \subsubsection*{Contraction} It is practically infeasible to annotate every reusable unit of data science source code. Most real-world data analyses use concrete types and functions that are not annotated. This unannotated code has unknown semantics, so properly speaking it does not belong in the semantic flow graph. On the other hand, it usually cannot be deleted without altering the dataflow in the rest of the diagram. Semantic enrichment must not corrupt the dataflow record. As a compromise, in the \emph{contraction} stage, the unannotated parts of the raw flow graph are simplified to the extent possible. All references to unannotated types and functions are removed, leaving behind unlabeled wires and boxes.
Semantically, the unlabeled wires are interpreted as arbitrary ``unknown'' types and the unlabeled boxes as arbitrary ``unknown'' functions (which could have known domain and codomain types). The diagram is then simplified by \emph{encapsulating} unlabeled boxes. Specifically, every maximal connected subdiagram of unlabeled boxes is encapsulated by a single unlabeled box. The interpretation is that any composition of unknown functions is just another unknown function. This stage is ``contractionary'' because it can only decrease the number of boxes in the diagram. \subsubsection*{Example revisited} To reprise our original example, semantic enrichment transforms the raw flow graphs of \cref{fig:raw-kmeans-scipy,fig:raw-kmeans-sklearn,fig:raw-kmeans-r} into the same semantic flow graph, shown in \cref{fig:semantic-kmeans}. Let us take a closer look at a few of the expansions and contractions involved. Expansions related to $k$-means clustering occur in all three programs. In the Python program based on SciPy (\cref{fig:raw-kmeans-scipy}), the \texttt{kmeans2} function expands into a program that creates a $k$-means clustering model, fits it to the data, and extracts its cluster assignments and centroids, as described by the annotation in \cref{fig:annotation-python-scipy-kmeans}. The abstract $k$-means clustering model does \emph{not} correspond to any concrete object in the original program. We routinely use this modeling pattern to cope with functions that are not object-oriented with respect to models. By contrast, the Python program based on Scikit-learn (\cref{fig:raw-kmeans-sklearn}) is written in object-oriented style. The \texttt{KMeans} class expands to an abstract $k$-means clustering type. The \texttt{fit} method of the \texttt{KMeans} class is not annotated in the Data Science Ontology. 
However, the \texttt{fit} method of the superclass \texttt{BaseEstimator} \emph{is} annotated (\cref{fig:annotation-python-sklearn-fit}), so the expansion is performed using that annotation. As this case illustrates, subtyping and polymorphism are indispensable when annotating object-oriented code. The R program (\cref{fig:raw-kmeans-r}) is intermediate between these two styles. The \texttt{kmeans} function, annotated in \cref{fig:annotation-r-kmeans}, directly takes the data and the number of clusters, but returns an object of class \texttt{kmeans}. The cluster assignments and centroids are slots of this object, annotated separately. This design pattern is typical in R, due to its informal type system. Now consider the contractions. In the first program (\cref{fig:raw-kmeans-scipy}), the only unannotated box is NumPy's \texttt{delete} function. Contracting this box does not reduce the size of the wiring diagram. A contraction involving multiple boxes occurs in the second program (\cref{fig:raw-kmeans-sklearn}). The subdiagram consisting of the pandas \texttt{NDFrame.drop} method composed with the \texttt{values} attribute accessor is encapsulated into a single unlabeled box. We have left these functions unannotated for the sake of illustration and because the section of the Data Science Ontology dedicated to data manipulation has not yet been developed. We expect this gap to close as the ontology grows. \section{Mathematical foundations} \label{sec:math} To put the foregoing ideas on a firmer footing, we formalize the ontology and the semantic enrichment algorithm in the language of category theory. We are not professional category theorists and we have tried to make this section accessible to other non-category theorists. 
Nevertheless, readers will find it helpful to have a working knowledge of basic category theory, as may be found in the introductory textbooks \cite{spivak2014,awodey2010,leinster2014,riehl2016}, and of monoidal category theory, as in the survey articles \cite{baez2010,coecke2010}. For readers without this background, or who simply wish to understand our method informally, this section can be skipped without loss of continuity. Here, in outline, is our program. We take the ontology's concepts to form a category, with type concepts corresponding to objects and function concepts corresponding to morphisms. Defining the ontology language amounts to fixing a categorical doctrine, which will turn out to be the doctrine of cartesian categories with implicit conversion. Up to this point, we conform to the general scheme of categorical knowledge representation, according to which ontologies are simply categories in a suitable doctrine \cite{spivak2012,patterson-arxiv2017}. Having defined the ontology's concepts as a category $\cat C$, we then interpret the annotations as a partial functor from a category $\cat L$ modeling a software ecosystem to the concept category $\cat C$. Finally, we formalize the raw and semantic flow graphs as morphisms in categories of elements over $\cat L$ and $\cat C$. \subsection{Why category theory?} \label{sec:math-motivation} Because category theory does not yet belong to the basic toolbox of knowledge representation, we pause to motivate the categorical approach before launching into the formal development. Why is category theory an appealing framework for representing knowledge, especially about computational processes? We offer several answers to this question. First, there already exist whole branches of computer science, namely type theory and programming language theory, dedicated to the mathematical modeling of computer programs. To neglect them in knowledge representation would be unfortunate. 
Category theory serves as an algebraic bridge to these fields. Due to the close connection between category theory and type theory \cite{crole1993,jacobs1999}---most famously, the correspondence between cartesian closed categories and simply typed lambda theories \cite{lambek1988}---we may dwell in the syntactically and semantically flexible world of algebra but still draw on the highly developed theory of programming languages. In \cref{sec:concepts-as-category}, we borrow specific notions of subtyping and ad hoc polymorphism from programming language theory \cite{goguen1978,reynolds1980}. Category theory is also useful in its own right, beyond its connection to programming language theory. The essential innovation of category theory over the mainly syntactical theory of programming languages is that programs become \emph{algebraic structures}, analogous to, albeit more complicated than, classical algebraic structures like groups and monoids. Like any algebraic structure, categories of programs are automatically endowed with an appropriate notion of structure-preserving map between them. In this case, the structure-preserving maps are a special kind of \emph{functor}. In \cref{sec:annotations-as-functor}, we formulate the semantic enrichment algorithm as a functor between categories of programs. The structuralist philosophy underlying modern algebra is therefore central to our whole approach. Another advantage of category theory is flexibility of syntax. Unlike the lambda calculus and other type theories, algebraic structures like categories exist independently of any particular system of syntax. Syntactic flexibility is mathematically convenient but also practically important. Monoidal categories admit a graphical syntax of \emph{wiring diagrams}, also known as \emph{string diagrams} \cite{baez2010,selinger2010}. We introduced the graphical syntax informally in \cref{sec:monocl}. 
It offers an intuitive yet rigorous alternative to the typed lambda calculus's textual syntax \cite{selinger2013}, which beginners may find impenetrable. The family of graphical languages based on string diagrams is a jewel of category theory, with applications to such diverse fields as quantum mechanics \cite{coecke2010}, control theory \cite{baez2015a}, and natural language semantics \cite{coecke2013}. Having arrived at the general categorical perspective, the next question to ask is: what kind of category shall we use to model computer programs? We begin our investigation with cartesian categories, which are perhaps the simplest possible model of typed, functional computing. As we recall more carefully in \cref{sec:concepts-as-category}, \emph{cartesian categories} are symmetric monoidal categories with natural operations for copying and deleting data. Morphisms in a cartesian category behave like mathematical functions. As a model of computation, cartesian categories are very primitive. They do not allow for manipulating functions as data (via lambda abstraction) or for recursion (looping), hence they can only express terminating computations of fixed, finite length. Extensions of this computational model abound. \emph{Cartesian closed categories} arise as cartesian categories with a \emph{closed} structure, whereby the whole collection of morphisms $X \to Y$ is representable as an \emph{exponential} object $Y^X$. Closed categories have function types, in programming jargon. According to a famous result, cartesian closed categories are equivalent to the typed lambda calculus \cite{lambek1988}. \emph{Traced symmetric monoidal categories} model looping and other forms of feedback. According to another classic result, a trace on a cartesian category is equivalent to a Conway fixed point operator \cite{hasegawa1997,hasegawa2003}. Fixed points are used in programming language theory to define the semantics of recursion. 
Combining these threads, we find in \emph{traced cartesian closed categories} a Turing-complete model of functional computing, amounting to the typed lambda calculus with a fixed point operator. Relaxing the cartesian or even the monoidal structure is another way to boost modeling flexibility. Starting with the cartesian structure, we interpret morphisms that are unnatural with respect to copying as performing non-deterministic computation, such as random number generation or Monte Carlo sampling. We interpret morphisms unnatural with respect to deleting as partial functions, because they raise errors or are undefined on certain inputs. In a symmetric monoidal category $\cat C$ with diagonals (not necessarily cartesian), the morphisms that \emph{do} satisfy the naturality conditions for copying and deleting data form a cartesian subcategory of $\cat C$, called the \emph{cartesian center} or \emph{focus} of $\cat C$ \cite{selinger1999}. It is also possible to relax the monoidal product itself. \emph{Symmetric premonoidal categories} model side effects and imperative programs, where evaluation order matters even for parallel statements, such as variable access and assignment. Any premonoidal category has a \emph{center} that is a monoidal category \cite{power1997}. Thus, classical computational processes form a three-level hierarchy: a symmetric premonoidal category has a center that is symmetric monoidal, which in turn has a cartesian center \cite{jeffrey1997}. This short survey hardly exhausts the categorical structures that have been used to model computer programs. However, our purpose here is not to define the most general model possible, but rather to adopt the \emph{simplest} model that still captures useful information in practice. For us, that model is the cartesian category. The structures in this categorical doctrine agree with the features currently supported by our program analysis tools (\cref{sec:program-analysis}). 
We expect that over time our software will acquire more features and achieve better fidelity, whereupon we will adopt a more expressive doctrine. The survey above shows that this transition can happen smoothly. In general, modularity is a key advantage of categorical knowledge representation: category theory provides a toolkit of mathematical structures that can be assembled in more or less complex ways to meet different modeling needs. \begin{notation} We compose our maps in diagrammatic (left-to-right) order. In particular, we write the composition of a morphism $f: X \to Y$ with another morphism $g: Y \to Z$ as $f \cdot g: X \to Z$ or simply $fg: X \to Z$. Small categories $\cat{C}, \cat{D}, \cat{E}, \dots$ are written in script font and large categories in bold font. As standard examples of the latter, we write $\CAT{Set}$ for the category of sets and functions and $\CAT{Cat}$ for the category of (small) categories and functors. Other categories will be introduced as needed. \end{notation} \subsection{Concepts as category} \label{sec:concepts-as-category} We formalize the ontology as a category. The type and function concepts in the ontology are, respectively, the objects and morphisms that generate the category. Abstract programs expressed in terms of concepts correspond to general morphisms in the category, assembled from the object and morphism generators by operations like composition and monoidal products. In this subsection, we develop the categorical doctrine where the ontology category will reside, by augmenting cartesian categories, motivated in \cref{sec:math-motivation}, with a form of subtyping based on implicit conversion. Ultimately, we define a Monocl ontology to be a finite presentation of a cartesian category with implicit conversion. The definition of diagonals in a monoidal category is fundamental \cite{selinger1999}. 
In stating it, we take for granted the definition of a \emph{symmetric monoidal category}; see the references at the beginning of this section for further reading. \begin{definition} A \emph{monoidal category with diagonals} is a symmetric monoidal category $(\cat C, \times, 1)$ together with two families of morphisms, \begin{equation*} \Delta_X: X \to X \times X \qquad\text{and}\qquad \lozenge_X: X \to 1, \end{equation*} indexed by objects $X \in \cat C$. The morphisms $\Delta_X$ and $\lozenge_X$, called \emph{copying} and \emph{deleting}, respectively, are required to make $X$ into a cocommutative comonoid (the formal dual of a commutative monoid). Moreover, the families must be \emph{coherent}, or \emph{uniform}, in the sense that $\lozenge_1 = 1_1$ and for all objects $X,Y \in \cat C$, the diagrams commute: \begin{equation*} \begin{tikzcd}[column sep=large] X \times Y \ar{r}{\Delta_X \times \Delta_Y} \ar[swap]{dr}{\Delta_{X \times Y}} & X \times X \times Y \times Y \ar{d}{1_X \times \sigma_{X,Y} \times 1_Y} \\ & X \times Y \times X \times Y \end{tikzcd} \qquad\qquad \begin{tikzcd}[column sep=large] X \times Y \ar{r}{\lozenge_X \times \lozenge_Y} \ar[swap]{dr}{\lozenge_{X \times Y}} & 1 \times 1 \ar{d}{\cong} \\ & 1 \end{tikzcd} \end{equation*} \end{definition} As explained in graphical terms in \cref{sec:monocl}, the copying and deleting morphisms allow data to be duplicated and discarded, a basic feature of classical (but not quantum) computation. Uniformity is a technical condition ensuring that copying and deleting are compatible with the symmetric monoidal structure. So, for example, a uniform copying operation has the property that copying data of type $X \times Y$ is equivalent to simultaneously copying data of type $X$ and copying data of type $Y$, up to the ordering of the outputs. A monoidal category with diagonals is a very general algebraic structure. Its morphisms need not resemble computational processes in any conventional sense. 
However, adding just one additional axiom yields the cartesian category, a classical notion in category theory and a primitive model of functional computing. \begin{definition} A \emph{cartesian category} is a monoidal category with diagonals whose copying and deleting maps, $\Delta_X$ and $\lozenge_X$, are \emph{natural} in $X$, meaning that for any morphism $f: X \to Y$, the diagrams commute: \begin{equation*} \begin{tikzcd} X \ar{r}{f} \ar[swap]{d}{\Delta_X} & Y \ar{d}{\Delta_Y} \\ X \times X \ar{r}{f \times f} & Y \times Y \end{tikzcd} \qquad\qquad \begin{tikzcd} X \ar{r}{f} \ar[swap]{dr}{\lozenge_X} & Y \ar{d}{\lozenge_Y} \\ & 1 \end{tikzcd} \end{equation*} We denote by $\CAT{Cart}$ the category of (small) cartesian categories and cartesian functors (strong monoidal functors preserving the diagonals). \end{definition} \begin{remark} Although it is not obvious, this definition of cartesian category is equivalent to the standard definition via the universal property of finite products \cite{heunen2012}. We prefer the alternative definition given here because it is phrased in the language of monoidal categories and string diagrams. \end{remark} In a cartesian category, the naturality conditions on copying and deleting assert that computation is deterministic and total. In more detail, naturality of copying says that computing a function $f$, then copying the output is the same as copying the input, then computing $f$ on both copies. This means that $f$ always produces the same output on a given input, i.e., $f$ is \emph{deterministic}. Naturality of deleting says that computing the function $f$, then deleting the output is the same as simply deleting the input. This means that $f$ is well-defined on all its inputs, i.e., $f$ is \emph{total}. Together, the naturality conditions establish that the category's morphisms behave like mathematical functions. 
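The correspondence between the naturality conditions and deterministic, total computation can be checked concretely. The following Python sketch is a hypothetical illustration of our own devising (none of these names come from Monocl or our tools): it interprets copying and deleting in $\CAT{Set}$, then shows copy-naturality holding for a deterministic function, failing for a stateful one, and delete-naturality failing for a partial one.

```python
# Copying and deleting interpreted in Set: Delta(x) = (x, x), del(x) = ().
# Hypothetical illustration; not part of Monocl.
def copy(x):       # Delta_X
    return (x, x)

def delete(x):     # lozenge_X
    return ()

def square(x):     # deterministic and total
    return x * x

counter = {"n": 0}
def stamp(x):      # "non-deterministic": output depends on hidden state
    counter["n"] += 1
    return (x, counter["n"])

def reciprocal(x): # partial: undefined at x = 0
    return 1 / x

# Naturality of copying holds for square ...
assert tuple(map(square, copy(3))) == copy(square(3)) == (9, 9)

# ... but fails for stamp: the two "copies" receive different timestamps.
assert tuple(map(stamp, copy(3))) != copy(stamp(3))

# Naturality of deleting fails for reciprocal: deleting the input always
# succeeds, but computing the function first can raise an error.
try:
    delete(reciprocal(0))
    total = True
except ZeroDivisionError:
    total = False
assert delete(0) == () and not total
```

In other words, exactly the morphisms that commute with copying and deleting behave like honest mathematical functions, which is the content of the cartesian axioms.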
Cartesian categories are perhaps the simplest model of typed, functional computing, as we argued in \cref{sec:math-motivation}. We considered there several extensions and relaxations of the cartesian structure, all centered around morphisms. One can also entertain richer constructions on objects. In programming jargon, this amounts to adding a more elaborate type system. A cartesian category has a type system with product and unit types, introduced from the programming languages perspective in \cref{sec:monocl}. In our experience, augmenting the type system with some form of polymorphism is a practical necessity, for the sake of code annotation and also of knowledge representation. We will not try to summarize the large literature on polymorphism. In keeping with the spirit of this paper, our objective is to define the minimal practically useful system. The following definitions are adapted, with relatively minor modifications, from Joseph Goguen and John C.\ Reynolds \cite{goguen1978,goguen1992,reynolds1980}. \begin{definition} A \emph{category with implicit conversion} is a category $\cat C$ with a distinguished wide subcategory $\cat C_0$ containing at most one morphism between any two objects. If there exists a morphism $X \to X'$ in $\cat C_0$, we write $X \leq X'$ and say that $X$ is a \emph{subtype} of $X'$. The morphism $X \to X'$ itself is called an \emph{implicit conversion} or \emph{coercion}. \end{definition} \begin{remark} To be consistent in our usage of categorical and programming terminology, we ought to say that $X$ is a \emph{subobject} of $X'$. However, the term ``subobject'' already has an established meaning in categorical logic, which is related to, but different from, our usage here. \end{remark} We explained the informal interpretation of subtyping and implicit conversion in \cref{sec:monocl}. One subtle point should be noted: even when types are interpreted as sets, implicit conversions are not necessarily interpreted as set inclusions.
In the example from \cref{sec:monocl}, matrices are a subtype of data tables, yet the set of matrices is \emph{not} a subset of the set of data tables. (The implicit conversion function adds names to the columns of the matrix.) Hence the slogan that ``types are not sets'' \cite{morris1973}. Mathematically speaking, the subtype relation defines a preorder on the objects of $\cat C$. Thus, every type $X$ is a subtype of itself. If $X$ is a subtype of $X'$ and $X'$ a subtype of $X''$, then $X$ is a subtype of $X''$. The corresponding implicit conversions are given by identities and by composition, respectively. In what follows, there is no mathematical obstruction to allowing the conversions $\cat C_0$ to form an arbitrary category, not necessarily a preorder. That would, however, defeat the practical purpose: conversions would need to be disambiguated by \emph{names} and hence would cease to be implicit. When $\cat C$ is a monoidal category, we insist that implicit conversions be compatible with the monoidal structure. \begin{definition} A \emph{cartesian category with implicit conversion} is a category $\cat C$ with implicit conversion that is also cartesian. Moreover, the implicit conversions $\cat C_0$ must form a \emph{monoidal} subcategory of $\cat C$. We denote by $\CAT{Cart}_\leq$ the category whose objects are the (small) cartesian categories with implicit conversion and whose morphisms are the cartesian functors that preserve implicit conversions. For brevity, we call these morphisms simply ``functors.'' \end{definition} The definition requires that subtyping be compatible with product types. Specifically, if $X \leq X'$ and $Y \leq Y'$, then $X \times Y \leq X' \times Y'$, with the corresponding implicit conversion given by a product of morphisms. The subtype relation thus makes $\cat C$ into a \emph{monoidal preorder}. 
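The operational content of the subtype preorder can be mimicked in a few lines of Python. This is a hypothetical sketch (the `coercions` table and `matrix_to_table` are invented for illustration, assuming the generator table is acyclic): it exhibits the matrix-to-table coercion discussed above, which invents column names and so is plainly not a set inclusion, and it realizes reflexivity by identities and transitivity by composing generator coercions.

```python
# Hypothetical table of basic subtype generators with their implicit
# conversion functions.  A "matrix" (list of rows) coerces to a "table"
# (dict of named columns) by inventing column names.
def matrix_to_table(m):
    return {f"col{j}": [row[j] for row in m] for j in range(len(m[0]))}

coercions = {("matrix", "table"): matrix_to_table}

def conversion(a, b):
    """Return the implicit conversion a <= b, or None if a is not a
    subtype of b.  Reflexivity gives identities; transitivity composes
    coercions along chains of generators (table assumed acyclic)."""
    if a == b:
        return lambda x: x
    for (c, d), f in coercions.items():
        if c == a:
            rest = conversion(d, b)
            if rest is not None:
                return lambda x, f=f, rest=rest: rest(f(x))
    return None

assert conversion("matrix", "table")([[1, 2], [3, 4]]) == \
    {"col0": [1, 3], "col1": [2, 4]}
assert conversion("table", "matrix") is None   # subtyping is one-way
```

Compatibility with products, $X \times Y \leq X' \times Y'$, would correspond to applying two such conversions componentwise to a pair.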
\begin{remark} Asking $\cat C_0$ to inherit the cartesian or even the symmetric monoidal structure leads to undesirable consequences, such as unwanted implicit conversions and even strictification of the original category $\cat C$. Namely, if $\cat C_0$ is a symmetric monoidal subcategory of $\cat C$, then the braidings $\sigma_{X,Y}: X \times Y \to Y \times X$ in $\cat C$ must satisfy $\sigma_{X,X} = 1_{X \times X}$, which is false under the intended set-theoretic interpretation. \end{remark} Because our notion of subtyping is operationalized by the implicit conversions, we can extend it from objects to morphisms through naturality squares. \begin{definition} Let $\cat C$ be a category with implicit conversion. A morphism $f$ in $\cat C$ is a \emph{submorphism} (or \emph{subfunction}) of another morphism $f'$, written $f \leq f'$, if in the arrow category $\cat C^\to$ there exists a (unique) morphism $f \to f'$ whose components are implicit conversions. Explicitly, if $f: X \to Y$ and $f': X' \to Y'$ are morphisms in $\cat C$, with $X \leq X'$ and $Y \leq Y'$, then $f \leq f'$ if and only if the diagram commutes: \begin{equation*} \begin{tikzcd} X \ar{r}{f} \ar[swap]{d}{\leq} & Y \ar{d}{\leq} \\ X' \ar{r}{f'} & Y' \end{tikzcd} \end{equation*} \end{definition} \begin{remark} In a closed category, subtypes of basic types, $X \leq X'$ and $Y \leq Y'$, canonically induce subtypes of function types, $Y^{X'} \leq (Y')^X$, by ``restricting the domain'' and ``expanding the codomain.'' Be warned that this construction is \emph{not} the same as a submorphism (it is contravariant in $X$, while a submorphism is covariant in both $X$ and $Y$). Indeed, we do not treat cartesian closed categories at all in this paper. \end{remark} Again, see \cref{sec:monocl} for informal interpretation and examples of this notion. Just as subtypes define a preorder on the objects of $\cat C$, submorphisms define a preorder on the morphisms of $\cat C$. 
Moreover, submorphisms respect the compositional structure of $\cat C$. They are closed under identities, i.e., $1_X \leq 1_{X'}$ whenever $X \leq X'$, and under composition, i.e., if $f \leq f'$ and $g \leq g'$ are composable, then $fg \leq f'g'$. All these statements are easy to prove. To illustrate, transitivity and closure under composition are proved by pasting commutative squares vertically and horizontally: \begin{equation*} \begin{tikzcd} X \ar{r}{f} \ar[swap]{d}{\leq} & Y \ar{d}{\leq} \\ X' \ar{r}{f'} \ar[swap]{d}{\leq} & Y' \ar{d}{\leq} \\ X'' \ar{r}{f''} & Y'' \end{tikzcd} \qquad\qquad \begin{tikzcd} X \ar{r}{f} \ar[swap]{d}{\leq} & Y \ar{r}{g} \ar{d}{\leq} & Z \ar{d}{\leq} \\ X' \ar{r}{f'} & Y' \ar{r}{g'} & Z' \end{tikzcd} \end{equation*} When $\cat C$ is a \emph{cartesian} category with implicit conversion, submorphisms are also closed under products: if $f \leq f'$ and $g \leq g'$, then $f \times g \leq f' \times g'$, because, by functoriality, monoidal products preserve commutative diagrams. We now define an ontology to be nothing other than a \emph{finitely presented} cartesian category with implicit conversion. More precisely: \begin{definition} An \emph{ontology in the Monocl language} is a cartesian category with implicit conversion, given by a finite presentation. That is, it is the cartesian category with implicit conversion generated by finite sets of: \begin{itemize} \item \emph{basic types}, or \emph{object generators}, $X$ \item \emph{basic functions}, or \emph{morphism generators}, $f: X \to Y$, where $X$ and $Y$ are objects \item \emph{basic subtypes}, or \emph{subtype generators}, $X \leq X'$, where $X$ and $X'$ are objects \item \emph{basic subfunctions}, or \emph{submorphism generators}, $f \leq f'$, where $f: X \to Y$ and $f': X' \to Y'$ are morphisms satisfying $X \leq X'$ and $Y \leq Y'$ \item \emph{function equations}, or \emph{morphism equations}, $f=g$, where $f,g: X \to Y$ are morphisms with equal domains and codomains.
\end{itemize} If the set of morphism equations is empty, the category is called \emph{free} or \emph{freely generated}. \end{definition} Strictly speaking, a finite presentation of a category is not the same as the category it presents. The former is a finitary object that can be represented on, and manipulated by, a machine. The latter is an algebraic structure of infinite size, convenient for mathematical reasoning. The Monocl language consists of a textual and graphical syntax for defining such presentations on a computer. However, we will abuse terminology by calling both finitely presented categories, and particular presentations thereof, ``ontologies.'' At the time of this writing, the Data Science Ontology is freely generated. Inference in a freely generated ontology is straightforward. Deciding the subtype or subfunction relations amounts to computing a reflexive transitive closure. Deciding equality of objects is trivial. Deciding equality of morphisms is the \emph{word problem} in a free cartesian category. The congruence closure algorithm for term graphs \cite[\S 4.4]{baader1999} can be adapted to solve this problem. In the future, the Data Science Ontology will likely include knowledge in the form of morphism equations, creating a need for new inference procedures. If arbitrary morphism equations are allowed, the word problem becomes undecidable. \subsection{Annotations as functor} \label{sec:annotations-as-functor} If the concepts form a category, then surely the annotations ought to assemble into a functor. Let the ontology be a cartesian category $\cat C$ with implicit conversion. Suppose we have another such category $\cat L$, modeling a programming language and a collection of modules written in that language. The annotations should define a functor $F: \cat L \to \cat C$, saying how to translate programs in $\cat L$ into programs in $\cat C$. This tidy story does not quite survive contact with reality.
We cannot expect a finite set of formal concepts to exhaust the supply of informal concepts found in real-world programs. Therefore any ``functor'' $F: \cat L \to \cat C$ annotating $\cat L$ must be \emph{partial}, in a sense that we will make precise. There will be both objects and morphisms in $\cat L$ on which $F$ cannot be defined, because the category $\cat C$ is not rich enough to fully interpret $\cat L$. We approach partial functors indirectly, by way of partial functions. In accordance with mathematical custom, we reduce the pre-theoretical idea of ``partial function'' to the ubiquitous notion of total function. There are two standard ways to do this, the first based on pointed sets and the second on spans. They are equivalent as far as sets and functions are concerned but suggest different generalizations to categories and functors. Let us consider them in turn. The category of pointed sets leads to one viewpoint on partiality, popular in programming language theory. Given a set $X$, let $X_\bot := X \sqcup \{\bot\}$ be the set $X$ with a freely adjoined base point $\bot$. A \emph{partial function} from $X$ to $Y$ is then a function $f: X_\bot \to Y_\bot$ preserving the base point ($f(\bot) = \bot$). The function $f$ is regarded as ``undefined'' on the points $x \in X$ with $f(x) = \bot$. This notion of partiality can be transported from sets to categories using enriched category theory \cite{kelly1982,riehl2014}. Categories enriched in pointed sets, where each hom-set has a base morphism $\bot$, have been proposed as a qualitative model of incomplete information \cite{marsden2016}. Such categories make partiality an all-or-nothing affair, because their composition laws satisfy $\bot \cdot f = f \cdot \bot = \bot$ for all morphisms $f$. That is far too stringent. If we adopted this composition law, our semantic representations would rarely be anything but the trivial representation $\bot$. 
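To see concretely why pointed-set partiality is all or nothing, consider this hypothetical Python sketch, with `BOT` playing the role of the base point $\bot$ and strict composition obeying $\bot \cdot f = f \cdot \bot = \bot$:

```python
# Partiality via pointed sets: adjoin a base point BOT to every set and
# require functions to preserve it.  Hypothetical illustration.
BOT = object()   # the freely adjoined base point "bottom"

def strict(f):
    """Lift f to a base-point-preserving map on X + {BOT}."""
    return lambda x: BOT if x is BOT else f(x)

def compose(*fs):
    """Diagrammatic (left-to-right) composition."""
    def run(x):
        for f in fs:
            x = f(x)
        return x
    return run

inc   = strict(lambda x: x + 1)
dbl   = strict(lambda x: 2 * x)
undef = lambda x: BOT            # a totally undefined stage

assert compose(inc, dbl)(3) == 8
# One undefined stage poisons the entire composite:
assert compose(inc, undef, dbl)(3) is BOT
```

A single unannotated function would thus collapse an entire program's semantic representation to $\bot$, which is exactly the stringency we wish to avoid.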
Alternatively, a partial function can be defined as a special kind of span of total functions. Now let us say that a \emph{partial function} from $X$ to $Y$ is a span in $\CAT{Set}$ \begin{equation*} \begin{tikzcd}[column sep=small, row sep=small] & I \ar[tail,swap]{dl}{\iota} \ar{dr}{f} & \\ X & & Y \end{tikzcd} \end{equation*} whose left leg $\iota: I \to X$ is monic (injective). The partial function's domain of definition is $I$, which we regard as a subset of $X$. Although we shall not need it here, we note that partial functions, and partial morphisms in general, can be composed by taking pullbacks \cite[\S 5.5]{borceux1994c}. We interpret the span above as \emph{partially} defining a function $f$ on $X$, via a set of equations indexed by $I$: \begin{equation*} f(x_i) := y_i, \qquad i \in I. \end{equation*} It is then natural to ask: what is the most general way to define a \emph{total} function on $X$ obeying these equations? The answer is given by the pushout in $\CAT{Set}$: \begin{equation*} \begin{tikzcd}[column sep=small] & I \ar[tail,swap]{dl}{\iota} \ar{dr}{f} \ar[phantom, very near end]{dd}{\rotatebox{-45}{$\ulcorner$}} & \\ X \ar[swap]{dr}{f_*} & & Y \ar[tail]{dl}{\iota_*} \\ & Y_* & \end{tikzcd} \end{equation*} Because $\iota: I \to X$ is monic, so is $\iota_*: Y \to Y_*$, and we regard $Y$ as a subset of $Y_*$. The commutativity of the diagram says that $f_*$ satisfies the set of equations indexed by $I$. 
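In $\CAT{Set}$, the pushout completing a partial function can be computed directly. The sketch below is a hypothetical helper (with the domain of definition $I$ represented as a subset of $X$): it builds $Y_*$ as $Y$ together with one formal image for each point where $f$ is undefined, along with the totalization $f_*$.

```python
def pushout(X, I, f):
    """Pushout of X <- I -> Y in Set, where I is a subset of X and the
    dict f: I -> Y partially defines a function on X.

    Returns (Y_star, f_star): Y_star is Y plus a formal image ('f', x)
    for each x in X \ I, and f_star is the totalization of f on X."""
    Y = set(f.values())
    formal = {x: ("f", x) for x in X - I}        # freely adjoined images
    Y_star = Y | set(formal.values())
    f_star = {x: f[x] if x in I else formal[x] for x in X}
    return Y_star, f_star

X = {1, 2, 3, 4}
I = {1, 2}
f = {1: "a", 2: "b"}                             # defined only on I
Y_star, f_star = pushout(X, I, f)

assert f_star[1] == "a" and f_star[2] == "b"     # f_star agrees with f on I
assert f_star[3] == ("f", 3)                     # formal image where undefined
assert Y_star == {"a", "b", ("f", 3), ("f", 4)}
```

Note that $Y$ sits inside $Y_*$ (here literally as a subset), matching the monic $\iota_*: Y \rightarrowtail Y_*$ in the diagram.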
The \emph{universal property} defining the pushout says that any other function $f': X \to Y'$ satisfying the equations factors uniquely through $f_*$, meaning that there exists a unique function $g: Y_* \to Y'$ making the diagram commute: \begin{equation*} \begin{tikzcd} & I \ar[swap]{dl}{\iota} \ar{dr}{f} & \\ X \ar{r}{f_*} \ar[swap]{dr}{f'} & Y_* \ar[dashed]{d}{g} & Y \ar[swap]{l}{\iota_*} \ar{dl}{\iota'} \\ & Y' & \end{tikzcd} \end{equation*} The codomain of the function $f_*: X \to Y_*$ consists of $Y$ plus a ``formal image'' $f(x)$ for each element $x$ on which $f$ is undefined. Contrast this with the codomain of a function $X \to Y_\bot$, which consists of $Y$ plus a single element $\bot$ representing \emph{all} the undefined values. This viewpoint on partiality generalizes effortlessly from $\CAT{Set}$ to any category with pushouts. We partially define the annotations as a span in $\CAT{Cart}_\leq$ \begin{equation*} \begin{tikzcd}[column sep=small, row sep=small] & \cat{I} \ar[tail,swap]{dl}{\iota} \ar{dr}{F} & \\ \cat{L} & & \cat{C} \end{tikzcd} \end{equation*} whose left leg $\iota: \cat I \to \cat L$ is monic. We then form the pushout in $\CAT{Cart}_\leq$: \begin{equation*} \begin{tikzcd}[column sep=small] & \cat{I} \ar[tail,swap]{dl}{\iota} \ar{dr}{F} \ar[phantom, very near end]{dd}{\rotatebox{-45}{$\ulcorner$}} & \\ \cat{L} \ar[swap]{dr}{F_*} & & \cat{C} \ar{dl}{\iota_*} \\ & \cat{C}_* & \end{tikzcd} \end{equation*} Given a morphism $f$ in $\cat L$, which represents a concrete program, its image $F_*(f)$ in $\cat{C}_*$ is a partial translation of the program into the language defined by the ontology's concepts. The universal property of the pushout in $\CAT{Cart}_\leq$, stated above in the case of $\CAT{Set}$, gives an appealing intuitive interpretation to program translation. The category $\cat C$ is not rich enough to fully translate $\cat L$ via a functor $\cat L \to \cat C$. 
As a modeling assumption, we suppose that $\cat C$ has some ``completion'' $\overline{\cat C}$ for which a full translation $\overline F: \cat L \to \overline{\cat C}$ \emph{is} possible. We do not know $\overline{\cat C}$, or at the very least we cannot feasibly write it down. However, if we take the pushout functor $F_*: \cat{L} \to \cat{C}_*$, we can at least guarantee that, no matter what the complete translation $\overline{F}$ is, it will factor through $F_*$. Thus $F_*$ defines the most general possible translation, given the available information. The properties of partial functions largely carry over to partial functors, with one important exception: the ``inclusion'' functor $\iota_*: \cat{C} \to \cat{C}_*$ need not be monic, even though $\iota: \cat{I} \to \cat{L}$ is. Closely related is the fact that $\CAT{Cart}_\leq$ (like its cousins $\CAT{Cat}$ and $\CAT{Cart}$, but unlike $\CAT{Set}$) does not satisfy the \emph{amalgamation property} \cite{macdonald2009}. To see how $\iota_*$ can fail to be monic, suppose that the equation $f_1 \cdot f_2 = f_3$ holds in $\cat{L}$ and that the defining equations include $F(f_i) := g_i$ for $i=1,2,3$. Then, by the functorality of $F_*$, we must have $g_1 \cdot g_2 = g_3$ in $\cat{C}_*$, even if $g_1 \cdot g_2 \neq g_3$ in $\cat{C}$. Thus the existence of $F_*$ can force equations between morphisms in $\cat{C}_*$ that do not hold in $\cat{C}$. When the categories in question are finitely presented, the pushout functor also admits a finitary, equational presentation, suitable for computer algebra. Just as we define an ontology to be a finitely presented category, we define an ontology with annotations to be a finitely presented functor. \begin{definition} An \emph{ontology with annotations in the Monocl language} is a functor between cartesian categories with implicit conversion, defined by a finite presentation. 
Explicitly, it is generated by: \begin{itemize} \item a finite presentation of a category $\cat C$ in $\CAT{Cart}_\leq$, the ontology category; \item a finite presentation of a category $\cat L$ in $\CAT{Cart}_\leq$, the programming language category; and \item a finite set of equations partially defining a functor $F$ from $\cat L$ to $\cat C$. \end{itemize} \end{definition} The equations partially defining the functor $F$ may be indexed by a category $\cat I$, in which case they take the form \begin{equation*} F(X_i) := Y_i, \qquad X_i \in \cat L, \quad Y_i \in \cat C, \end{equation*} for each $i \in \cat I$, and \begin{equation*} F(f_k) := g_k, \qquad f_k \in \cat L(X_i,X_j), \quad g_k \in \cat C(Y_i,Y_j), \end{equation*} for each $i,j \in \cat I$ and $k \in \cat I(i,j)$. The equations present a span $\cat{L} \overset{\iota}\leftarrowtail \cat{I} \overset{F}\rightarrow \cat{C}$ whose left leg is monic and the functor generated by the equations is the pushout functor $F_*: \cat{L} \to \cat{C}_*$ described above. \begin{remark} Our two definitions involving finite presentations are not completely rigorous, but can be made so using generalized algebraic theories \cite{cartmell1978,cartmell1986}. There is a generalized algebraic theory of cartesian categories with implicit conversion, whose category of models is $\CAT{Cart}_\leq$, and a theory of functors between them, whose category of models is the arrow category $\CAT{Cart}_\leq^\to$. Cartmell gives as simpler examples the theory of categories, with models $\CAT{Cat}$, and the theory of functors, with models $\CAT{Cat}^\to$ \cite{cartmell1986}. Any category of models of a generalized algebraic theory is cocomplete and admits free models defined by finite presentations. \end{remark} Before closing this subsection, we should acknowledge what we have left unformalized. In construing the annotations as a functor, we model programming languages like Python and R as cartesian categories with implicit conversion. 
We do not attempt to do so rigorously. The formal semantics of Python and R are quite intricate and exist only in fragments \cite{guth2013,morandat2012}. Our program analysis involves numerous simplifications, infidelities, and heuristics, as sketched in \cref{sec:program-analysis}. Even if we could complete it, a formalization would probably be too complicated to illuminate anything about our method. We thus rest content with an informal understanding of the relationship between Monocl and full-fledged programming languages like Python and R. \subsection{Flow graphs and categories of elements} To a first approximation, the raw and semantic flow graphs are morphisms in the categories $\cat L$ and $\cat C_*$, respectively. The expansion stage of the semantic enrichment algorithm simply applies the annotation functor $F_*: \cat L \to \cat C_*$ to a morphism in $\cat L$. The contraction stage, a purely syntactic operation, groups together morphisms in $\cat C_*$ that are not images of $\cat C$ under the inclusion functor $\iota_*: \cat C \to \cat C_*$. To complete the formalization of semantic enrichment, we must account for the observed elements in the raw and semantic flow graphs. As noted in \cref{sec:methods}, flow graphs capture not only the types and functions comprising a program, but also the values computed by the program. In category theory, values can be bundled together with objects and morphisms using a device known as the \emph{category of elements}. We formalize the raw and semantic flow graphs as morphisms in suitable categories of elements. The objects and morphisms in the ontology category $\cat C$ can be, in principle, interpreted as sets and functions. A set-theoretic \emph{interpretation} of $\cat C$ is a cartesian functor $I_{\cat C}: \cat C \to \CAT{Set}$. In programming language terms, $I_{\cat C}$ is a \emph{denotational semantics} for $\cat C$. 
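To make the notion of interpretation concrete, it can be sketched as an assignment of Python types (sets) to generating objects and Python functions to generating morphisms, with formal composites interpreted by function composition. This is only an illustrative sketch; all generator names below (``Vector'', ``mean'', and so on) are hypothetical and are not drawn from the actual Data Science Ontology.

```python
# Sketch of a set-theoretic interpretation I: C -> Set of a finitely
# presented category C. Generating objects map to Python types and
# generating morphisms to functions; a formal composite f1;f2;... is
# interpreted by composing the assigned functions in order.
# All generator names here are hypothetical.
from functools import reduce

class Interpretation:
    def __init__(self, on_objects, on_morphisms):
        self.on_objects = on_objects      # object generator -> type
        self.on_morphisms = on_morphisms  # morphism generator -> function

    def interpret(self, path):
        """Interpret a composite, given as a list of generator names."""
        fns = [self.on_morphisms[name] for name in path]
        return reduce(lambda f, g: (lambda x: g(f(x))), fns)

I_C = Interpretation(
    on_objects={"Vector": list, "Scalar": float},
    on_morphisms={
        "mean": lambda v: sum(v) / len(v),  # Vector -> Scalar
        "half": lambda s: s / 2,            # Scalar -> Scalar
    },
)

# Functoriality: interpreting the composite "mean ; half" agrees with
# composing the interpretations of "mean" and "half".
f = I_C.interpret(["mean", "half"])
print(f([1.0, 2.0, 3.0]))  # 1.0
```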
Suppose the concrete language $\cat L$ also has an interpretation $I_{\cat L}: \cat L \to \CAT{Set}$. Assuming the equations partially defining the annotation functor are true under the set-theoretic interpretations, the diagram below commutes: \begin{equation*} \begin{tikzcd}[column sep=small] & \cat{I} \ar[tail,swap]{dl}{\iota} \ar{dr}{F} & \\ \cat{L} \ar[swap]{dr}{I_{\cat L}} & & \cat{C} \ar{dl}{I_{\cat C}} \\ & \CAT{Set} & \end{tikzcd} \end{equation*} By the universal property of the annotation functor $F_*$, there exists a unique interpretation $I_{\cat C_*}: \cat C_* \to \CAT{Set}$ making the diagram commute: \begin{equation*} \begin{tikzcd} \cat{L} \ar{r}{F_*} \ar[swap]{dr}{I_{\cat L}} & \cat{C}_* \ar[dashed,pos=0.33]{d}{I_{\cat{C}_*}} & \cat{C} \ar[swap]{l}{\iota_*} \ar{dl}{I_{\cat C}} \\ & \CAT{Set} & \end{tikzcd} \end{equation*} Each of these three interpretations yields a category of elements, also known as a ``Grothendieck construction'' \cites[\S 12.2]{barr1990}[\S 2.4]{riehl2016}. \begin{definition} The \emph{category of elements} of a cartesian functor $I: \cat C \to \CAT{Set}$ has as objects, the pairs $(X,x)$, where $X \in \cat C$ and $x \in I(X)$, and as morphisms $(X,x) \to (Y,y)$, the morphisms $f: X \to Y$ in $\cat C$ satisfying $I(f)(x) = y$. \end{definition} The category of elements of a cartesian functor $I: \cat C \to \CAT{Set}$ is itself a cartesian category. Composition and identities are inherited from $\cat C$. Products are defined on objects by \begin{equation*} (X,x) \times (Y,y) := (X \times Y, (x,y)) \end{equation*} and on morphisms exactly as in $\cat C$, and the unit object is $(1,*)$, where $*$ is an arbitrary fixed element. The diagonals are also inherited from $\cat C$, taking the form \begin{equation*} \Delta_{(X,x)}: (X,x) \to (X \times X, (x,x)), \qquad \lozenge_{(X,x)}: (X,x) \to (1,*). \end{equation*} We may at last define a \emph{raw flow graph} to be a morphism in the category of elements of $I_{\cat L}$. 
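The category-of-elements construction can likewise be illustrated with a small code sketch. In this hypothetical Python fragment, an object pairs an object of $\cat C$ with one of its elements, a morphism carries the defining condition $I(f)(x) = y$, and products of elements are formed exactly as in the definition above; the names are illustrative assumptions, not part of our implementation.

```python
# Sketch of the category of elements of a functor I: C -> Set.
# Objects are pairs (X, x) with x in I(X); a morphism (X, x) -> (Y, y)
# is a morphism f of C with I(f)(x) == y. Names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Elem:
    obj: str   # object X of C
    val: Any   # element x in I(X)

@dataclass(frozen=True)
class ElemMorphism:
    name: str                 # morphism f of C
    fn: Callable[[Any], Any]  # its interpretation I(f)
    dom: Elem
    cod: Elem

    def __post_init__(self):
        # Enforce the defining condition I(f)(x) == y.
        assert self.fn(self.dom.val) == self.cod.val

def product(a: Elem, b: Elem) -> Elem:
    """(X, x) x (Y, y) := (X x Y, (x, y))."""
    return Elem(obj=f"{a.obj}*{b.obj}", val=(a.val, b.val))

def compose(f: ElemMorphism, g: ElemMorphism) -> ElemMorphism:
    """Composition and identities are inherited from C."""
    assert f.cod == g.dom
    return ElemMorphism(name=f"{f.name};{g.name}",
                        fn=lambda x: g.fn(f.fn(x)),
                        dom=f.dom, cod=g.cod)

x = Elem("Vector", (1.0, 2.0, 3.0))
s = ElemMorphism("sum", lambda v: sum(v), x, Elem("Scalar", 6.0))
h = ElemMorphism("half", lambda t: t / 2, s.cod, Elem("Scalar", 3.0))
print(compose(s, h).cod)  # Elem(obj='Scalar', val=3.0)
```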
Likewise, a \emph{semantic flow graph} is a morphism in the category of elements of $I_{\cat C_*}$. Note that the interpretations of $\cat L$, $\cat C$, and $\cat C_*$ are conceptual devices; we do not actually construct a denotational semantics for the language $\cat L$ or the ontology $\cat C$. Instead, the program analysis tools observe a \emph{single} computation and produce a \emph{single} morphism $f$ in the category of elements of $I_{\cat L}$. By the definition of the interpretation $I_{\cat C_*}$, applying the annotation functor $F_*: \cat L \to \cat C_*$ to this morphism $f$ yields a morphism $F_*(f)$ belonging to the category of elements of $I_{\cat C_*}$. In summary, semantic enrichment amounts to applying the annotation functor in the category of elements. The expansion stage simply computes the functor. As an aid to human interpretation, the contraction stage computes a new syntactic expression for the expanded morphism, grouping together boxes that do not correspond to morphisms in the ontology category. \section{The view from data science} \label{sec:data-science-viewpoint} Like the code it analyzes, our AI system is a means, not an end. Its impetus is the transformation of science, currently under way, towards greater openness, transparency, reproducibility, and collaboration. As part of this transformation, data and machines will both come to play a more prominent role in science. In this section, we describe the major themes of this evolution of the scientific process and how we hope our work will contribute to it. We also demonstrate our system on a realistic data analysis from the open science community. \subsection{Towards networked science} Although the World Wide Web has already radically changed the dissemination of scientific research, its potential as a universal medium for representing and sharing scientific knowledge is only just beginning to be realized. 
A vast library of scientific books and papers is now available online, accessible instantaneously and throughout the world. That is a remarkable achievement, accomplished in only a few decades. However, this endorsement must be qualified in many respects. Scientific articles are accessible---but only to certain people, due to the prevalence of academic paywalls. Even when articles are freely available, the associated datasets, data analysis code, and supporting software may not be. These research artifacts are, moreover, often not amenable to systematic machine processing. In short, most scientific research is now on the Web, but it may not be accessible, reproducible, or readily intelligible to humans or machines. A confluence of social and technological forces is pushing the scientific community towards greater openness and interconnectivity. The open access movement is gradually eroding paywalls \cite{piwowar2018}. The replication crisis affecting several branches of science has prompted calls for stricter standards about research transparency, especially when reporting data analysis protocols \cite{pashler2012,munafo2017}. A crucial standard of transparency is \emph{reproducibility}: the ability of researchers to duplicate the complete data analysis of a previous study, from the raw data to the final statistical inferences \cite{goodman2016}. Reproducibility demands that all relevant datasets, analysis code, and supporting software be available---the same requirements imposed by our system. Another driving force is the growing size and complexity of scientific data. Traditionally, the design, data collection, data analysis, and reporting for a scientific experiment has been conducted entirely within a single research group or laboratory. That is changing. Large-scale observational studies and high-throughput measurement devices are producing ever larger and richer datasets, making it more difficult for the people who collect the data to also analyze it. 
Creating richer datasets also increases the potential gains from data sharing and reuse. The FAIR Data Principles aim to simplify data reuse by making datasets more ``FAIR'': findable, accessible, interoperable, and reusable \cite{wilkinson2016}. Organizations like the Accelerated Cure Project for Multiple Sclerosis and the Parkinson Progression Marker Initiative are creating integrated online repositories of clinical, biological, and imaging data \cite{marek2011}. In a related development, online platforms like Kaggle, Driven Data, and DREAM Challenges are crowdsourcing data analysis through data science competitions. Science, then, seems to be headed towards a world where all the products of scientific research, from datasets to code to published papers, are fully open, online, and accessible. In the end, we think this outcome is inevitable, even if it is delayed by incumbent interests and misaligned incentives. The consequences of this new ``networked science'' are difficult to predict, but they could be profound \cite{hey2009,nielsen2012}. We and others conjecture that new forms of open, collaborative science, where humans and machines work together according to their respective strengths, will accelerate the pace of scientific discovery. An obstacle to realizing this vision is the lack of standardization and interoperability in research artifacts. Researchers cannot efficiently share knowledge, data, or code, and machines cannot effectively process it, if it is not represented in formats that they readily understand. We aim to address one aspect of this challenge by creating semantic representations of data science code. We will say shortly what kind of networked science applications we hope our system will enable. But first we describe more concretely one particular model of networked science, the data science challenge, and a typical example of the analysis code it produces. 
\subsection{An example from networked science} \label{sec:dream} As a more realistic example, in contrast to \cref{sec:example}, we examine a data analysis conducted for a DREAM Challenge. DREAM Challenges address scientific questions in systems biology and translational medicine by crowdsourcing data analysis across the biomedical research community \cite{stolovitzky2007,stolovitzky2016}. Under the challenge model, teams compete to create the best statistical models according to metrics defined by the challenge organizers. Rewards may include prize money and publication opportunities. In some challenges, the initial competitive phase is followed by a cooperative phase where the best performing teams collaborate to create an improved model \cite[see, for example,][]{dream-mammography-2017,sieberts2016}. The challenge we consider asks how well clinical and genetic covariates predict patient response to anti-TNF treatment for rheumatoid arthritis \cite{sieberts2016}. Of special interest is whether genetic biomarkers can serve as a viable substitute for more obviously relevant clinical diagnostics. To answer this question, each participant was asked to submit two models, one using only genetic covariates and the other using any combination of clinical and genetic covariates. After examining a wide range of models, the challenge organizers and participants jointly concluded that the genetic covariates do not meaningfully increase the predictive power beyond what is already contained in the clinical covariates. We use our system to analyze the two models submitted by a top-ranking team \cite{kramer2014}. The source code for the models, written in R, is shown in \cref{lst:dream-ra}. It has been lightly modified for portability. The corresponding semantic flow graph is shown in \cref{fig:semantic-dream-ra}. The reader need not try to understand the code in any great detail. 
Indeed, we hope that the semantic flow graph will be easier to comprehend than the code and hence will serve as an aid to humans as well as to machines. We grant, however, that the current mode of presentation is far from ideal from the human perspective.\footnote{We would prefer a web-based, interactive presentation, with the boxes and wires linked to descriptions from the ontology. That is, regrettably, outside the scope of this paper.} \begin{listing} \begin{minted}[fontsize=\scriptsize,frame=leftline,rulecolor=\color{gray!50}]{r} library("caret") library("VIF") library("Cubist") merge.p.with.template <- function(p){ template = read.csv("RAchallenge_Q1_final_template.csv") template$row = 1:nrow(template) template = template[,c(1,3)] ids = data.resp$IID[is.na(y)] p = data.frame(ID=ids, Response.deltaDAS=p) p = merge(template, p) p = p[order(p$row), ] p[,c(1,3)] } data = readRDS("pred.rds") resp = readRDS("resp.rds") # non-clinical model data.resp = merge(data, resp[c("FID", "IID", "Response.deltaDAS")]) y = data.resp$Response.deltaDAS y.training = y[!is.na(y)] data.resp2 = data.resp[!(names(data.resp) dummy = predict(dummyVars(~., data=data.resp2), newdata=data.resp2) dummy.training = dummy[!is.na(y),] dummy.testing = dummy[is.na(y),] v = vif(y.training, dummy.training, dw=5, w0=5, trace=F) dummy.training.selected = as.data.frame(dummy.training[,v$select]) dummy.testing.selected = as.data.frame(dummy.testing[,v$select]) m1 = cubist(dummy.training.selected, y.training, committees=100) p1 = predict(m1, newdata=dummy.testing.selected) # clinical model dummy = data.resp[c("baselineDAS", "Drug", "Age", "Gender", "Mtx")] dummy = predict(dummyVars(~., data=dummy), newdata=dummy) dummy.training = dummy[!is.na(y),] dummy.testing = dummy[is.na(y), ] m2 = cubist(dummy.training, y.training, committees=100) p2 = predict(m2, newdata=dummy.testing) ## create csv files p1.df = merge.p.with.template(p1) p2.df = merge.p.with.template(p2) write.csv(p1.df, quote=F, row.names=F,
file="clinical_and_genetic.csv") write.csv(p2.df, quote=F, row.names=F, file="clinical_only.csv") \end{minted} \caption{R source code for two models from the Rheumatoid Arthritis DREAM Challenge} \label{lst:dream-ra} \end{listing} \begin{figure*} \centering \includegraphics[width=\textwidth]{images/semantic-dream-ra} \caption{Semantic flow graph for two models from the Rheumatoid Arthritis DREAM Challenge (\cref{lst:dream-ra})} \label{fig:semantic-dream-ra} \end{figure*} The analysts fit two predictive models, the first including both genetic and clinical covariates and the second including only clinical covariates. The models correspond, respectively, to the first and second commented code blocks and to the left and right branches of the semantic flow graph. Both models use the Cubist regression algorithm \cite[\S 8.7]{kuhn2013}, a variant of random forests based on M5 regression model trees \cite{wang1997}. Because the genetic data is high-dimensional, the first model is constructed using a subset of the genetic covariates, as determined by a variable selection algorithm called VIF regression \cite{lin2011}. The linear regression model created by VIF regression is used only for variable selection, not for prediction. Most of the unlabeled nodes in \cref{fig:semantic-dream-ra}, including the wide node at the top, refer to code for data preprocessing or transformation. It is a commonplace among data scientists that such ``data munging'' is a crucial aspect of data analysis. There is no fundamental obstacle to representing its semantics; it so happens that the relevant portion of the Data Science Ontology has not yet been developed. This situation illustrates another important point. Our system does not need or expect the ontology to contain complete information about the program's types and functions. It is designed to degrade gracefully, producing useful partial results even in the face of missing annotations. 
\subsection{Use cases and applications} \label{sec:applications} Our system is a first step towards an AI assistant for networked, data-driven science. We hope it will enable, or bring us closer to enabling, new technologies that boost the efficiency of data scientists. These technologies may operate at small scales, involving one or a small group of data scientists, or at large scales, spanning a broader scientific community. At the scale of individuals, we imagine an integrated development environment (IDE) for data science that interacts with analysts at both syntactic and semantic levels. Suppose a participant in the rheumatoid arthritis DREAM Challenge fits a random forest regression, using the \texttt{randomForest} package in R. Indeed, the analysts from \cref{sec:dream} report experimenting with random forests, among other popular methods \cite{kramer2014}. By a simple inference within the Data Science Ontology, the IDE recognizes random forests as a tree-based ensemble method. It suggests the sister method Cubist and generates R code invoking the \texttt{Cubist} package. Depending on their expertise, the analysts may learn about new statistical methods or software packages implementing them. Even expert users should benefit from the possibility of more efficient experimentation. As programmers, we are all prone to lapses of discipline in commenting our code. Poorly documented code is difficult to comprehend at a glance. To supplement the explanations written by fallible humans, an AI agent might translate our semantic flow graphs into written descriptions, via natural language generation \cite{gatt2018}. The playful R package \texttt{explainr} does exactly that for a few R functions, in isolation \cite{parker2015}. A more comprehensive system based on our work would span multiple languages and libraries and would document both the individual steps and the high-level design of a data analysis. 
New possibilities emerge at the scale of online platforms for collaborative data science. Online platforms can host coordinated efforts to solve specific scientific problems, under variations of the challenge model. They may also host products of independent scientific experiments, serving as centralized repositories of papers, data, and code. In both situations, statistical meta-analysis is needed to aggregate the results of individual analyses or studies \cite{gurevitch2018}. Today meta-analysis is a laborious and painstaking process, conducted largely by hand. We hope to lay the groundwork for more automated forms of meta-analysis. Consider the challenge model again. Organizers typically want a panoramic view of the participants' activity. We could straightforwardly generate a summary report, given a corpus of semantic flow graphs corresponding to the submitted analyses. In challenges of scientific interest, organizers tend to be interested in more than simply what is the most predictive model. A recent DREAM Challenge, for example, aims to determine which factors affect the progression of amyotrophic lateral sclerosis (ALS), a fatal neurodegenerative disease with heterogeneous progression timelines \cite{kueffner2018}. The organizers stipulate that submitted predictive models may use only a limited number of clinical features. Using consensus clustering \cite{monti2003}, the organizers then aggregate feature sets across the submitted models to stratify the patients into clinically meaningful subgroups. This and other forms of meta-analysis could conceivably be simplified, or even automated, given sufficiently expressive semantic representations of the models. \section{Related work} \label{sec:related-work} In this paper, we extend and refine our previous work on semantic representations of data analyses \cite{patterson-ibm2017}. 
Compared to the original work, we designed a new ontology language for modeling computer programs, along with a new ontology about data science written in this language. We also replaced our original, ad hoc procedure for creating the semantic flow graph with the semantic enrichment algorithm, which is more flexible and rests on a firmer mathematical foundation. We presented our current system as a demonstration at IJCAI 2018 \cite{patterson-ijcai2018}. We have been inspired by a constellation of ideas at the intersection of artificial intelligence, program analysis, programming language theory, and category theory. We now position our work in relation to these areas. \subsection{Knowledge representation and program analysis} The history of artificial intelligence is replete with interactions between knowledge representation and computer program analysis. In the late 1980s and early 1990s, automated planning and rule-based expert systems featured in ``knowledge-based program analysis'' \cite{johnson1985,harandi1990,biggerstaff1994}. Other early systems were based on description logic \cite{devanbu1991,welty2007} and graph parsing \cite{wills1992}. Such projects were intended to help software developers maintain large codebases (exceeding, say, a million lines of code) in specialized industrial domains like telecommunications. Our research goals are less ambitious in scale but also, we hope, more tractable. We focus on knowledge workers who write short, semantically rich scripts, without the endless layers of abstraction found in large codebases. In data science, the code tends to be much shorter, the control flow more linear, and the underlying concepts better defined, than in large-scale industrial software. Our methodology is accordingly quite different from that of the older literature. \subsection{Machine learning and program analysis} Efforts are now underway to marry program analysis with machine learning.
Inspired by an analogy between natural languages and programming languages, AI researchers are transporting successful techniques from natural language processing (NLP), such as Markov models and recurrent neural networks, to program analysis \cite{allamanis2018}. Most program models are based on sequences of syntactic tokens, akin to sequences of words in natural language. Some models use graphical program representations, bringing them closer to our work. For example, a recent method called \texttt{inst2vec} (``instructions to vectors''), inspired by \texttt{word2vec}, fits a skip-gram embedding of program statements, using a notion of statement context which combines data flow and control flow \cite{ben-nun2018}. The logical and statistical paradigms of AI tend to exhibit different performance characteristics and are therefore complementary, not competitive. In the case of program analysis, our method delivers rich, precise, and human-interpretable semantics, at the expense of significant human knowledge engineering. Statistical methods scale better in terms of human effort and degrade more gracefully in the face of incomplete information, but yield semantics that are less precise and harder to interpret. In particular, embedding methods like \texttt{inst2vec}\footnote{\texttt{inst2vec} also differs from our system by operating on LLVM's intermediate representation (IR), not the original code. This choice seems not to be viable for data science because Python and R do not have stable LLVM frontends, among other possible difficulties.} create dense vector representations of statements, whose interpretations are defined only implicitly by their relation to other vectors. The vectors are useful for downstream prediction tasks but are difficult to interpret directly. Moreover, logical and statistical methods tend to understand the slippery notion of ``semantics'' in fundamentally different ways. 
Vector representations capture distributional information about how concepts are used in practice, whereas ontologies express logical constraints on how concepts are related to each other. Both kinds of information are useful and important. In the future, we hope to investigate ways of integrating logical and statistical information in semantic representations of data science code. \subsection{Ontologies for data science} There already exist several ontologies and schemas related to data science, such as STATO, an OWL ontology about basic statistics \cite{gonzalez-beltran2016}; the Predictive Modeling Markup Language (PMML), an XML schema for data mining models \cite{guazzelli2009}; and ML Schema, a schema for data mining and machine learning workflows under development by a W3C community group \cite{lawrynowicz2017}. What does the Data Science Ontology add to the landscape? While we can certainly point to differences in content---STATO focuses on classical statistics, especially hypothesis testing, whereas we are equally interested in machine learning---we prefer to make a more general point, applicable to all the ontologies that we know of. Every ontology is, implicitly or explicitly, designed for some purpose. The purpose of the Data Science Ontology is to define a universal language for representing data science code. Previous ontologies were designed for different purposes, and we cannot see any straightforward way to adapt them to ours. In STATO, concepts representing statistical methods can have inputs and outputs, but they are too imprecisely specified to map onto actual code, among other difficulties. PMML is a purely static format, designed for serializing fitted models. To successfully model computer programs, one must pay attention to the special structure of programs. That is what we have tried to do with the Data Science Ontology. This aspiration also drives our choice of ontology language. 
\subsection{Ontology languages and programming languages} We have designed an ontology language, Monocl, to model computer programs. Although it is the medium of the Data Science Ontology, the ontology language is conceptually independent of data science or any other computational domain. Mathematically, it is founded on category theory and programming language theory. References are given in \cref{sec:math}, where we develop the theory. We hope that our project will advance an emerging paradigm of knowledge representation based on category theory \cite{spivak2012,patterson-arxiv2017}. Due in part to the influence of the Semantic Web, the most popular paradigm for knowledge representation today is description logic, a family of computationally tractable subsystems of first-order logic \cite{baader2007}. Why have we not written the Data Science Ontology in a description logic, like the Semantic Web's OWL? We do not claim this would be impossible. The Basic Formal Ontology, expressible in OWL and underlying many biomedical ontologies, divides the world into \emph{continuants} (persistent objects) and \emph{occurrents} (events and processes) \cite{arp2015}. We might follow STATO in modeling data analyses, or computational processes generally, as occurrents. This leads to some awkward consequences, as occurrents are ascribed a spatiotemporal structure which computer programs lack. A more fundamental objection, independent of the Basic Formal Ontology, is that there already exists a long mathematical tradition of modeling programs, beginning nearly one hundred years ago with Alonzo Church's invention of the lambda calculus. We follow a few threads in this tradition in \cref{sec:math-motivation}. Our work very much belongs to it. To instead ignore it, reinventing a programming model inside description logic, would be, at best, an unnecessary duplication of effort. That said, we understand the value of interoperability with existing systems. 
We are investigating ways to encode the Data Science Ontology in OWL, possibly with some loss of fidelity. \section{Conclusion} \label{sec:conclusion} We have introduced an algorithm, supported by the Monocl ontology language and the Data Science Ontology, for creating semantic representations of data science code. We demonstrated the semantic enrichment algorithm on several examples, pedagogical and practical, and we supplied it with a category-theoretic mathematical foundation. We situated our project within a broader trend towards more open, networked, and machine-driven science. We also suggested possible applications to collaborative data science, at the small and large scales. In future work, we plan to build on the suggestive examples presented in this paper. We will develop methods for automated meta-analysis based on semantic flow graphs and conduct a systematic empirical evaluation on a corpus of data analyses. We also have ambitions to more faithfully represent the mathematical and statistical structure of data science models. Our representation emphasizes computational structure, but the most interesting applications require more. In a similar vein, scientific applications require that statistical models be connected with scientific concepts. Workers across the sciences, and especially in biomedicine, are building large ontologies of scientific concepts. Interoperation with existing domain-specific ontologies is therefore an important research direction. Only by a concerted community effort will the vision of machine-assisted data science be realized. 
To that end, we have released as open source software our Python and R program analysis tools, written in their respective languages, and our semantic enrichment algorithm, written in Julia.\footnote{All source code is available on GitHub under the Apache 2.0 license \cite{patterson-pyflowgraph2018,patterson-rflowgraph2018,patterson-semanticflowgraph2018}.} We are also crowdsourcing the further development of the Data Science Ontology.\footnote{The Data Science Ontology is available on GitHub under the Creative Commons Attribution 4.0 license \cite{patterson-datascienceontology2018}.} We entreat the reader to join us in the effort of bringing artificial intelligence to the practice of data science. \printbibliography \end{document}
\section{Introduction} \begin{figure*} \includegraphics[height=2.3in, width=0.8\textwidth]{img/d2d_general.png} \caption{We assume that $\mathbf{x}_A$, $\mathbf{x}_B$ are click vectors in two domains A and B from user $u$. Also, $\mathbf{x}_A$ and $\mathbf{x}_B$ can be mapped to the same latent code $\mathbf{z}$ in a shared-latent space $\mathcal{Z}$. $E_A$ and $E_B$ are two encoding functions mapping click vectors to latent codes. $G_A$ and $G_B$ are two generating functions, mapping latent codes to click vectors. We represent $E_A$, $E_B$, $G_A$, and $G_B$ using fully connected layers and implement the shared-latent space assumption using a weight-sharing constraint, where the connection weights of the last few layers (high-level layers) in $E_A$ and $E_B$ are tied (shown as dashed lines), and where the connection weights of the first few layers (high-level layers) in $G_A$ and $G_B$ are tied. Therefore, we can learn and generate different features between the two domains from the first encoder layers and last generator layers. We can also learn and generate similar features from the last encoder layers and first generator layers. Here, $\mathbf{x}_{AA}$ and $\mathbf{x}_{BB}$ are self-reconstruction vectors, and $\mathbf{x}_{AB}$ and $\mathbf{x}_{BA}$ are domain-translation vectors. $D_A$ and $D_B$ are adversarial discriminators for the respective domains, responsible for evaluating whether the translated vectors are realistic.} \label{d2d} \end{figure*} During the era of the internet explosion, Recommender Systems (RSs) have assumed an important role of helping users find the items they need and helping items reach the right users. Items can be various products, information, or people, so the RS tends to divide them into small domains in which items have similar attributes \cite{fernandez2012}. Each domain has specific characteristics. Therefore, to capture these characteristics, each domain needs to be considered separately.
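As a minimal numerical illustration of the weight-sharing constraint described in the figure above, the sketch below gives each domain a private low-level encoder layer while tying the high-level encoder layer across domains, so that click vectors from both domains land in the same latent space. The layer sizes and the plain-NumPy formulation are illustrative assumptions, not the configuration used in the actual model.

```python
# Illustrative NumPy sketch of the weight-sharing constraint:
# each domain keeps a private low-level encoder layer, while the
# high-level encoder layer is shared (tied) across both domains.
# Layer sizes are hypothetical, not those used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

n_items_A, n_items_B, hidden, latent = 1000, 800, 64, 32

# Private (domain-specific) first layers of E_A and E_B.
W_A = rng.normal(scale=0.01, size=(n_items_A, hidden))
W_B = rng.normal(scale=0.01, size=(n_items_B, hidden))

# Shared (tied) last encoder layer: the same weights for both domains.
W_shared = rng.normal(scale=0.01, size=(hidden, latent))

def encode_A(x):  # E_A: click vector in domain A -> latent code z
    return relu(x @ W_A) @ W_shared

def encode_B(x):  # E_B: click vector in domain B -> latent code z
    return relu(x @ W_B) @ W_shared

x_A = rng.integers(0, 2, size=n_items_A).astype(float)
x_B = rng.integers(0, 2, size=n_items_B).astype(float)

# Both domains land in the same latent space Z.
z_A, z_B = encode_A(x_A), encode_B(x_B)
print(z_A.shape, z_B.shape)  # (32,) (32,)
```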
For that reason, many studies have specifically examined a single domain \cite{Gong2016, Chu2017, chen2017attentive}. However, single-domain approaches still present numerous difficulties \cite{hu2013personalized}. For example, they cannot work well when a user has no interaction in the considered domain or when companies want to cross-sell their products. Because these problems are solvable using items from multiple domains \cite{cantador2015cross}, we are interested in proposing a multi-domain RS. Algorithms that deal with a single domain can process items from multiple domains easily by aggregating all items into one domain. However, because all items are then learned by a single network or function, such algorithms have difficulty capturing the specific characteristics of the respective domains. On the other hand, some algorithms specifically addressing multiple domains extract the latent features of each domain with a separate network \cite{Lian2017, min2015cross}. Although they can highlight the particular features of each domain, they have less opportunity to capture the features shared among domains. Nevertheless, both similarities and differences exist among domains, so multi-domain systems must capture both to achieve good performance. In addition, some other multi-domain studies specifically examine the transfer of knowledge from a much denser source domain to a target domain, or from external sources such as user search queries or social network information \cite{shapira2013facebook, pan2016mixed, Elkahky2015}. The first is a one-way transfer: knowledge of the target domain is of little help to the source domain. Moreover, many companies cannot implement the second because it is often impossible to obtain such external data.
To address these problems, we propose a multi-domain network structure that can capture both similar and different features among domains and treats every domain equally by taking only implicit feedback inside the system as input. Our model, called the Domain-to-Domain Translation Model (D2D-TM), extends Unsupervised Image-to-Image Translation Networks (UNIT) \cite{Ming-Yu2017} to recommender systems. It is based on generative adversarial networks (GANs), Variational Autoencoders (VAEs), and Cycle-Consistency (CC) with weight sharing. We use the user interaction history of each domain as input and extract its features through a VAE-GAN-CC network. In summary, the main contributions of this paper are the following. \begin{itemize} \item Propose a multi-domain recommender system that can extract both homogeneous and divergent features among domains. \item Translate one domain to another and vice versa simultaneously. \item Propose an end-to-end deep learning approach for a collaborative filtering recommender system that uses only the user interaction history as input. \item Conduct rigorous experiments using two real-world datasets with four domain pairs and demonstrate the effectiveness of the proposed system over state-of-the-art methods by a large margin. \end{itemize} The remainder of this paper is organized as follows. First, in Section 2, we review related approaches and techniques for recommender systems, including VAEs, GANs, and cross-domain recommender systems. Section 3 explains the details of our method, followed by the experiments described in Section 4. We present conclusions in Section 5. \section{Related Work} Recommender systems have been studied extensively, with results presented in a myriad of publications. In this section, we review a representative set of approaches that are closely related to our research.
\subsection{Autoencoders} Autoencoders (AEs) are unsupervised learning models that have been shown to be effective for learning latent variables in many deep-learning problems. Collaborative Deep Learning (CDL) \cite{Wang2015} and Collaborative Variational Autoencoder (CVAE) \cite{Li2017} are two well-known hybrid methods that respectively apply a Denoising Autoencoder and a Variational Autoencoder. Both use autoencoders to extract latent features from item description text and propose joint learning between these latent features and collaborative filtering. More recently, Multi-VAE \cite{Liang2018} uses a VAE to reconstruct the user--item matrix and achieves good results using only rating information. \subsection{Generative Adversarial Networks (GANs)} As comparatively new unsupervised learning networks, GANs can achieve promising results, especially in the realm of computer vision. Nevertheless, few GAN applications have been reported in recommender systems. IRGAN \cite{Wang2017} is the first model to apply a GAN both to information retrieval and to recommender systems. IRGAN extends the discriminator and generator of traditional GANs to discriminative retrieval and generative retrieval. Whereas discriminative retrieval learns to predict the relevance score $r$ given labeled relevant query--document pairs, generative retrieval tries to generate fake documents to deceive discriminative retrieval. Recently, Adversarial Personal Ranking (APR) \cite{He2018}, which enhances Bayesian personal ranking with an adversarial network, and GAN-HBNR \cite{Cai2018}, which proposes a GAN-based representation learning approach for heterogeneous bibliographic networks, have arisen as new applications of GANs to recommender systems.
\begin{figure} \includegraphics[scale=.2]{img/VAE.png} \caption{General Deep Learning Structure of VAE.} \label{VAE} \end{figure} \subsection{Cross-domain Recommender System} Today, companies strive to provide a diversity of products or services to users. For example, Amazon is not only an e-commerce platform; it is also an online movie and music platform. Therefore, cross-domain recommender systems are necessary for such companies. Moreover, cross-domain RSs can alleviate data sparsity and the cold-start problem, which are important issues for single-domain RSs. Several works have explored cross-domain RSs, including Multiview Deep Neural Network (MV-DNN) \cite{Elkahky2015}, Neural Social Collaborative Ranking (NSCR) \cite{Wang2017CD}, and Cross-domain Content-boosted Collaborative Filtering neural NETwork (CCCFNET) \cite{Lian2017}. MV-DNN extracts rich features from the user's browsing and search histories to model the user's interests, whereas item features are extracted from three sources: the title, categories, and contents (for news) or descriptions (for apps). It then calculates a relevance score using a cosine function. NSCR attempts to learn embeddings of bridge users from user--user connections taken from social networks and from user--item interactions. CCCFNET aims to learn content-based embeddings so that the model can transfer both content-based and collaborative filtering knowledge across different domains simultaneously. These methods share the requirement of external information from other sources: MV-DNN requires user search queries, NSCR relies on user social network accounts, and CCCFNET takes content information. Such knowledge is sometimes impossible to obtain. Therefore, we propose a cross-domain model that uses only implicit feedback inside the system.
\section{Method} We use $u \in \{1, \cdots, U\}$ to index users, $i_A \in \{1, \cdots, I_A\}$ to index items belonging to domain A, and $i_B \in \{1, \cdots, I_B\}$ to index items belonging to domain B. In this work, we consider learning from implicit feedback. The user-by-item interaction matrix is the click\footnote{We use the verb ``click'' for concreteness. In fact, this can be any type of interaction, such as ``watch'', ``view'', and ``rating''.} matrix $\mathbf{X} \in \mathbb{N}^{U \times I}$. The lower-case $\mathbf{x}_u = [x_{u1}, x_{u2}, \cdots, x_{uI}]^T \in \mathbb{N}^I$ is a bag-of-words vector holding the number of clicks on each item by user $u$. With two domains, we have the matrix $\mathbf{X}_A \in \mathbb{N}^{U \times I_A}$ with $\mathbf{x}_{A} = [x_{1A}, x_{2A}, \cdots, x_{I_A A}]^T \in \mathbb{N}^{I_A}$ for domain A, and $\mathbf{X}_B \in \mathbb{N}^{U \times I_B}$ with $\mathbf{x}_{B} = [x_{1B}, x_{2B}, \cdots, x_{I_B B}]^T \in \mathbb{N}^{I_B}$ for domain B. For simplicity, we binarize the click matrix; it is straightforward to extend the model to general count data. \subsection{Framework} Our framework, as presented in Figure \ref{d2d}, is based on variational autoencoders (VAEs) and generative adversarial networks (GANs). It comprises six subnetworks: two domain click vector encoders $E_A$ and $E_B$, two domain click vector generators $G_A$ and $G_B$, and two domain adversarial discriminators $D_A$ and $D_B$. We maintain the framework structure of \cite{Ming-Yu2017}. We share the weights of the last few layers in $E_A$ and $E_B$, so that our model not only extracts the distinct characteristics of the two domains in the first layers, but also learns their similarities. In parallel, we also share the weights of the first few layers in $G_A$ and $G_B$ so that our model can generate both similar and divergent features. In Figure \ref{d2d}, shared layers are denoted as {\color{red}S}, whereas distinct layers are denoted as {\color{blue}D}.
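To make the data representation and the weight-sharing constraint concrete, the following is a minimal NumPy sketch, not the authors' implementation: all layer sizes, weight names, and toy click counts are illustrative assumptions. The key point is that the last encoder layer and the first generator layer are literally the same arrays for both domains, which is what ties the high-level representations together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy click counts for 4 users: 6 items in domain A, 5 items in domain B.
# Binarize the click matrices as described in the text.
X_A = (rng.poisson(0.5, size=(4, 6)) > 0).astype(np.float64)
X_B = (rng.poisson(0.5, size=(4, 5)) > 0).astype(np.float64)

def relu(x):
    return np.maximum(x, 0.0)

# Domain-specific first encoder layers (distinct "D" layers) ...
W_enc_A = rng.normal(scale=0.1, size=(6, 8))
W_enc_B = rng.normal(scale=0.1, size=(5, 8))
# ... and one tied last encoder layer (shared "S" layer) into the latent space Z.
W_enc_S = rng.normal(scale=0.1, size=(8, 3))

def E_A(x):
    return relu(x @ W_enc_A) @ W_enc_S

def E_B(x):
    return relu(x @ W_enc_B) @ W_enc_S

# One tied first generator layer, then domain-specific output layers.
W_gen_S = rng.normal(scale=0.1, size=(3, 8))
W_gen_A = rng.normal(scale=0.1, size=(8, 6))
W_gen_B = rng.normal(scale=0.1, size=(8, 5))

def G_A(z):
    return relu(z @ W_gen_S) @ W_gen_A

def G_B(z):
    return relu(z @ W_gen_S) @ W_gen_B

z_A = E_A(X_A)      # latent codes of domain-A click vectors
x_AA = G_A(z_A)     # self-reconstruction stream
x_AB = G_B(z_A)     # domain-translation stream A -> B
```

Because $E_A$ and $E_B$ end in the same tied layer, a click vector from either domain lands in the common latent space $\mathcal{Z}$, and either generator can decode it; this is the property that the translation streams $\mathbf{x}_{AB}$ and $\mathbf{x}_{BA}$ rely on.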
Moreover, the GAN models constrain the generated vectors to be realistic in their respective domains. Furthermore, our framework learns translation in both directions in one shot. \subsection{VAE} Figure \ref{VAE} portrays the general structure of a VAE. In our model, the encoder--generator pair $\{E_A, G_A\}$ constitutes a VAE for domain A, termed $\text{VAE}_A$. For an input click vector $\mathbf{x}_A \in A$, the $\text{VAE}_A$ first maps $\mathbf{x}_A$ to a code in a latent space $\mathcal{Z}$ via the encoder $E_A$. It then decodes a randomly perturbed version of the code to reconstruct the input click vector via the generator $G_A$. We assume that the components in the latent space $\mathcal{Z}$ are conditionally independent and Gaussian with unit variance. In our formulation, the encoder outputs a mean vector $E_{\mu,A}(\mathbf{x}_A)$. The distribution of the latent code $\mathbf{z}_A$ is given as $q_A(\mathbf{z}_A|\mathbf{x}_A) \equiv \mathcal{N}(\mathbf{z}_A|E_{\mu, A}(\mathbf{x}_A), \mathbf{I})$, where $\mathbf{I}$ is an identity matrix. The reconstructed click vector is $\mathbf{x}_{AA} = G_A(\mathbf{z}_A \sim q_A(\mathbf{z}_A|\mathbf{x}_A))$. Similarly, $\{E_{B}, G_{B}\}$ constitutes a VAE for domain B, $\text{VAE}_B$, where the encoder $E_B$ outputs a mean vector $E_{\mu, B}(\mathbf{x}_B)$ and the distribution of the latent code $\mathbf{z}_B$ is given as $q_B(\mathbf{z}_B|\mathbf{x}_B) \equiv \mathcal{N}(\mathbf{z}_B|E_{\mu, B}(\mathbf{x}_B), \mathbf{I})$. The reconstructed click vector is $\mathbf{x}_{BB} = G_B(\mathbf{z}_B \sim q_B(\mathbf{z}_B|\mathbf{x}_B))$. \subsection{Weight-sharing and Cycle-consistency (CC)} We enforce a weight-sharing constraint relating the two VAEs. Specifically, we share the weights of the last few layers of $E_A$ and $E_B$, which are responsible for extracting high-level representations of the input click vectors in the two domains.
Similarly, we share the weights of the first few layers of $G_A$ and $G_B$, which are responsible for decoding the high-level representations used to reconstruct the input click vectors. The shared latent space assumption enables domain-to-domain translation. We can translate a click vector $\mathbf{x}_A$ in domain A to a click vector in domain B by applying $G_B(\mathbf{z}_A\sim q_A(\mathbf{z}_A | \mathbf{x}_A))$. Similarly, a click vector is translated from domain B to domain A as $G_A(\mathbf{z}_B \sim q_B(\mathbf{z}_B | \mathbf{x}_B))$. We also use cycle-consistency to further enforce the shared-latent-space constraint. \subsection{Generative Adversarial Network} \begin{figure} \includegraphics[scale=.2]{img/GAN.png} \caption{General Structure of GAN.} \label{GAN} \end{figure} Figure \ref{GAN} shows the general structure of a GAN. Our framework has two generative adversarial networks: $\text{GAN}_A = \{ D_A, G_A \}$ and $\text{GAN}_B = \{ D_B, G_B \}$. In $\text{GAN}_A$, $D_A$ should output true for real click vectors sampled from the first domain, whereas it should output false for click vectors generated by $G_A$. Click vectors of two types can be generated from $G_A$: $\mathbf{x}_{AA} = G_A(\mathbf{z}_A \sim q_A(\mathbf{z}_A|\mathbf{x}_A))$ and $\mathbf{x}_{BA} = G_A(\mathbf{z}_B \sim q_B(\mathbf{z}_B |\mathbf{x}_B))$. Because the reconstruction stream can be trained in a supervised manner, it is sufficient to apply adversarial training only to click vectors from the translation stream, $\mathbf{x}_{BA}$. We apply a similar procedure to $\text{GAN}_B$, where $D_B$ is trained to output true for real click vectors sampled from the second domain and to output false for click vectors generated from $G_B$. \subsection{Learning} We solve the learning problems of $\text{VAE}_A$, $\text{VAE}_B$, $\text{GAN}_A$, and $\text{GAN}_B$ jointly.
\begin{align} \displaystyle \min_{E_A, E_B, G_A, G_B} \max_{D_A, D_B} &\mathcal{L}_{\text{VAE}_A} (E_A, G_A) + \mathcal{L}_{\text{GAN}_A}(E_B, G_A, D_A) \nonumber \\ & + \mathcal{L}_{\text{CC}_A}(E_A, G_A, E_B, G_B) \nonumber\\ & + \mathcal{L}_{\text{VAE}_B} (E_B, G_B) + \mathcal{L}_{\text{GAN}_B}(E_A, G_B, D_B) \nonumber\\ & + \mathcal{L}_{\text{CC}_B}(E_B, G_B, E_A, G_A) \label{learning} \end{align} \subsubsection{VAE}: VAE training aims to minimize a variational upper bound. In (\ref{learning}), the VAE objectives are the following. \begin{align} \mathcal{L}_{\text{VAE}_A} & = \lambda_1 \textbf{KL}(q_A(\mathbf{z}_A|\mathbf{x}_A)\|p_\eta (\mathbf{z})) - \lambda_2\mathbb{E}_{\mathbf{z}_A \sim q_A(\mathbf{z}_A|\mathbf{x}_A)}[\log p_{G_A}(\mathbf{x}_A|\mathbf{z}_A)] \\ \mathcal{L}_{\text{VAE}_B} & = \lambda_1 \textbf{KL}(q_B(\mathbf{z}_B|\mathbf{x}_B)\|p_\eta (\mathbf{z})) - \lambda_2\mathbb{E}_{\mathbf{z}_B \sim q_B(\mathbf{z}_B|\mathbf{x}_B)}[\log p_{G_B}(\mathbf{x}_B|\mathbf{z}_B)] \end{align} Therein, the hyperparameters $\lambda_1$ and $\lambda_2$ control the weights of the objective terms, and the \textbf{KL} divergence terms penalize deviation of the distribution of the latent code from the prior distribution. The regularization allows an easy means of sampling from the latent space. The prior distribution is a zero-mean Gaussian $p_\eta (\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{0}, \mathbf{I})$. We model $p_{G_A}$ and $p_{G_B}$ as in \cite{Liang2018}. Therefore, for each user, the log-likelihood term reduces to the multinomial log-likelihood of the click vector: \begin{align*} \log p_{G_A}(\mathbf{x}_A|\mathbf{z}_A) &= \sum_i^{I_A} \mathbf{x}_{i,A}\log f (\mathbf{x}_{i,AA}) \\ \log p_{G_B}(\mathbf{x}_B|\mathbf{z}_B) &= \sum_i^{I_B} \mathbf{x}_{i,B}\log f (\mathbf{x}_{i,BB}), \end{align*} where $f(\cdot)$ is a softmax function. \subsubsection{GAN}: In (\ref{learning}), the GAN objective functions are given as shown below.
\begin{align} \mathcal{L}_{\text{GAN}_A}(E_B, G_A, D_A) &= \lambda_0\mathbb{E}_{\mathbf{x}_A\sim P_A}[\log D_A(\mathbf{x}_A)] + \nonumber \\ & \lambda_0\mathbb{E}_{\mathbf{z}_B\sim q_B(\mathbf{z}_B|\mathbf{x}_B)}[\log(1-D_A(G_A(\mathbf{z}_B)))] \label{gan1} \\ \mathcal{L}_{\text{GAN}_B}(E_A, G_B, D_B) &= \lambda_0\mathbb{E}_{\mathbf{x}_B\sim P_B}[\log D_B(\mathbf{x}_B)] + \nonumber\\ & \lambda_0\mathbb{E}_{\mathbf{z}_A\sim q_A(\mathbf{z}_A|\mathbf{x}_A)}[\log(1-D_B(G_B(\mathbf{z}_A)))] \label{gan2} \end{align} The objective functions in (\ref{gan1}) and (\ref{gan2}) are conditional GAN objective functions. They ensure that the translated click vectors resemble click vectors in the target domains. The hyperparameter $\lambda_0$ controls the effect of the GAN objective functions. \subsubsection{CC}: We use a VAE-like objective function to model the cycle-consistency constraint, given below. \begin{align} \mathcal{L}_{\text{CC}_A} (E_A, G_A, E_B, G_B) = &\lambda_3\textbf{KL}(q_A(\mathbf{z}_A|\mathbf{x}_A)\| p_\eta(\mathbf{z})) + \nonumber \\ & \lambda_3\textbf{KL}(q_B(\mathbf{z}_B|\mathbf{x}_{AB})\| p_\eta(\mathbf{z})) - \nonumber\\ & \lambda_4\mathbb{E}_{\mathbf{z}_B \sim q_B(\mathbf{z}_B|\mathbf{x}_{AB})}[\log p_{G_A}(\mathbf{x}_A|\mathbf{z}_B)] \\ \mathcal{L}_{\text{CC}_B} (E_B, G_B, E_A, G_A) = &\lambda_3\textbf{KL}(q_B(\mathbf{z}_B|\mathbf{x}_B)\| p_\eta(\mathbf{z})) + \nonumber \\ & \lambda_3\textbf{KL}(q_A(\mathbf{z}_A|\mathbf{x}_{BA})\| p_\eta(\mathbf{z})) - \nonumber\\ & \lambda_4\mathbb{E}_{\mathbf{z}_A \sim q_A(\mathbf{z}_A|\mathbf{x}_{BA})}[\log p_{G_B}(\mathbf{x}_B|\mathbf{z}_A)] \end{align} Therein, the negative log-likelihood objective terms ensure that a twice-translated click vector resembles the input one, and the \textbf{KL} terms penalize latent codes that deviate from the prior distribution in the cycle-reconstruction stream. Hyperparameters $\lambda_3$ and $\lambda_4$ control the weights of the two objective terms.
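Because the latent covariance is fixed to the identity, the KL terms above have the closed form $\textbf{KL}(\mathcal{N}(\mu, \mathbf{I}) \,\|\, \mathcal{N}(\mathbf{0}, \mathbf{I})) = \frac{1}{2}\|\mu\|^2$, and the likelihood term is a multinomial (softmax) cross-entropy. The sketch below evaluates these individual objective terms in NumPy on toy values; it is an illustration of the formulas only, with hypothetical variable names, not training code.

```python
import numpy as np

def kl_unit_gaussian(mu):
    # KL( N(mu, I) || N(0, I) ) = 0.5 * ||mu||^2 per sample
    # (the trace and log-determinant terms cancel at unit variance).
    return 0.5 * np.sum(mu ** 2, axis=-1)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multinomial_log_lik(x, logits):
    # sum_i x_i * log f(logits)_i, with f the softmax over items.
    return np.sum(x * np.log(softmax(logits)), axis=-1)

def gan_value(d_real, d_fake):
    # E[log D(x_real)] + E[log(1 - D(G(z)))], the bracketed GAN terms.
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

mu = np.array([[0.0, 0.0], [1.0, 2.0]])
print(kl_unit_gaussian(mu))            # [0.0, 2.5]
x = np.array([[1.0, 0.0, 1.0]])        # user clicked items 0 and 2
logits = np.zeros((1, 3))              # uniform softmax: each prob = 1/3
print(multinomial_log_lik(x, logits))  # [2 * log(1/3)]
```

In a full implementation these terms would be weighted by $\lambda_0,\dots,\lambda_4$ and summed as in (\ref{learning}); the closed-form KL is the reason the encoders only need to output mean vectors.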
\subsection{Prediction} \subsubsection{From domain A to domain B} Given a history click vector $\mathbf{x}_A$ of user $u$ in domain A, we predict the click vector $\mathbf{x}_{AB}$ of user $u$ in domain B as $\mathbf{x}_{AB} \sim G_B(E_A(\mathbf{x}_A))$. \subsubsection{From domain B to domain A} Similarly, given a history click vector $\mathbf{x}_B$ of user $u$ in domain B, we predict the click vector $\mathbf{x}_{BA}$ in domain A as $\mathbf{x}_{BA} \sim G_A(E_B(\mathbf{x}_B))$. \section{Experiments} This section presents an evaluation of our proposed method on real-world datasets from Amazon\footnote{http://jmcauley.ucsd.edu/data/amazon/} and Movielens\footnote{https://grouplens.org/datasets/movielens/}, followed by a comparison with other state-of-the-art methods. The experimentally obtained results constitute evidence of significant improvement over competitive baselines. \subsection{Dataset Description} \begin{table} \caption{Summary of datasets after preprocessing: \#user, \#item\_A, and \#item\_B respectively represent the number of users, the number of items in domain A, and the number of items in domain B. Dense\_A and dense\_B respectively refer to the density percentages of the rating matrices of domain A and domain B.} \begin{tabular}{|l|l|l|l|l|l| } \hline Dataset & \#user & \#item\_A & \#item\_B &dense\_A & dense\_B \\ \hline Health\_Clothing & 6557 & 16069 &18226 &0.08 & 0.05 \\ \hline Video\_TV & 5459 & 10072 & 28578 & 0.14 & 0.1 \\ \hline Drama\_Comedy & 6023 & 1490 & 1081 & 3.3 & 3.3 \\ \hline Romance\_Thriller & 5891 & 455 & 475 & 5.27 & 6.4\\ \hline \end{tabular} \label{datasetsum} \end{table} \subsubsection{Amazon} We created two datasets from four Amazon review subsets: Health\_Clothing from Health and Personal Care and Clothing, Shoes and Jewelry; Video\_TV from Video Games and Movies and TV. In each dataset, we kept the users who reviewed in both subsets, as well as the products those users reviewed. We treat ratings as implicit feedback.
\[ r_{ij} = \begin{cases} 1 & \quad \text{if user } i \text{ rated item }j\\ 0 & \quad \text{otherwise } \end{cases} \] \subsubsection{Movielens} From the Movielens 1M dataset, we created two subsets: Drama\_Comedy and Romance\_Thriller. The Drama\_Comedy dataset includes users who rated both Drama and Comedy movies, the Drama and Comedy movies those users rated, and the corresponding rating scores. We prepared Romance\_Thriller similarly, and we consider rating scores as implicit feedback, as with the Amazon datasets. We named the datasets following an A\_B structure. For instance, the dataset designated as Health\_Clothing means that domain A is Health and Personal Care products and domain B is Clothing, Shoes and Jewelry products. Details of the four datasets after preprocessing are presented in Table \ref{datasetsum}. It is apparent that Movielens is much denser than Amazon; hence, our model is tested in both sparse and dense cases. \subsection{Evaluation Scheme} We use two ranking-based metrics: Recall@K and normalized discounted cumulative gain (NDCG@K) \cite{wang2013theoretical}. For each user, we sort the predicted list and take the K highest-scoring items, which we then compare with the ground-truth items. Recall@K is defined as the fraction of items that a user likes which appear in the recommended list: \begin{align*} \text{Recall@K} = \frac{\text{Number of items that a user likes in the top K}}{\text{total number of items that a user likes}} \end{align*} NDCG@K is the most frequently used list evaluation measure that takes into account the positions of correctly recommended items.
First, we consider the discounted cumulative gain (DCG) of a user as \begin{align*} \text{DCG@K} = \sum_{i=1}^K \frac{2^{hit_i} - 1}{\log_2(i+1)} \end{align*} where \[ hit_i = \begin{cases} 1 & \quad \text{if the item at position } i \text{ is in the ground-truth list}\\ 0 & \quad \text{otherwise } \end{cases} \] Because DCG is unequal among users, we normalize it as \begin{align*} \text{NDCG@K} = \frac{DCG@K}{IDCG@K}, \end{align*} where IDCG denotes the ideal discounted cumulative gain, \begin{align*} \text{IDCG@K} = \sum_{i=1}^{|HIT|} \frac{1}{\log_2(i+1)}. \end{align*} Therein, $|HIT|$ is the number of ground-truth items, truncated at position K. The final results are averaged over all users. \subsection{Experimental Settings} We divided all users in each dataset randomly: 70\% for training, 5\% for validation to optimize hyperparameters, and 25\% for testing. We train models using the entire click history of the training users. For validation and testing, we use the click vector of domain A to predict the click vector of domain B, and vice versa. We choose the settings that give the best Recall@50 on the validation sets. The overall structure for the Drama\_Comedy and Romance\_Thriller datasets is [I-200-100-50-100-200-I], where the first [100] is the shared layer in the encoder, the second [100] is the shared layer in the generator, [50] is the latent vector dimension, and $I$ is the number of products in domain A or B. For the Amazon datasets, because the number of products in each domain is much greater than in the Movielens dataset, the overall structure for the Health\_Clothing and Video\_TV datasets is [I-600-200-50-200-600-I], where the first [200] is the shared layer in the encoder, the second [200] is the shared layer in the generator, [50] is the latent vector dimension, and $I$ is the number of products in domain A or B. We also found that with sparse datasets such as Amazon, adding a dropout layer after the input layer gives better results.
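The Recall@K and NDCG@K definitions above translate directly into code. The following NumPy sketch is our own illustration (function names are not from the paper); note that with binary relevance the numerator $2^{hit_i}-1$ is simply 1 at hit positions, so DCG reduces to a sum of $1/\log_2(i+1)$ over the hits.

```python
import numpy as np

def recall_at_k(scores, ground_truth, k):
    """Fraction of the items a user likes that appear in the top-K list."""
    top_k = np.argsort(-scores)[:k]
    hits = len(set(top_k.tolist()) & set(ground_truth))
    return hits / len(ground_truth)

def ndcg_at_k(scores, ground_truth, k):
    """DCG over the top-K list, normalized by the ideal DCG."""
    top_k = np.argsort(-scores)[:k]
    gt = set(ground_truth)
    # Positions are 1-indexed in the text, hence log2(i + 2) for 0-indexed i.
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(top_k) if item in gt)
    n_hits = min(len(gt), k)  # |HIT|: ground-truth items, truncated at K
    idcg = sum(1.0 / np.log2(i + 2) for i in range(n_hits))
    return dcg / idcg if idcg > 0 else 0.0

scores = np.array([0.9, 0.1, 0.8, 0.2])   # predicted scores for 4 items
print(recall_at_k(scores, [0, 3], 2))     # top-2 is {0, 2}: recall = 0.5
```

Per the evaluation scheme, these per-user values would then be averaged over all test users.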
For each hidden layer in the encoder and generator, we apply a leaky ReLU activation function with a negative slope of 0.2. For the discriminator network, we apply the tanh function to each hidden layer except the last. \subsection{Baselines} The models included in our comparison are listed as follows: \begin{itemize} \item \textbf{CDL}: Collaborative Deep Learning \cite{Wang2015} is a probabilistic feedforward model for the joint learning of a stacked denoising autoencoder (SDAE) and collaborative filtering. For item contents, we combined the titles and descriptions in the Health\_Clothing and Video\_TV datasets, and crawled movie descriptions from the IMDb website \footnote{https://www.imdb.com/} for the Drama\_Comedy and Romance\_Thriller datasets. Then we merged the products of the two domains into one set. Subsequently, we followed the same procedure as that explained in \cite{Wang2015} to preprocess the text information. After removing stop words, the top discriminative words according to tf-idf values were chosen to form the vocabulary; we chose 8000 words for each dataset. Next, we used grid search and the validation set to ascertain the optimal hyperparameters, searching $\lambda_u$ in [0.1, 1, 10], $\lambda_v$ in [1, 10, 100], and $\lambda_r$ in [0.1, 1, 10]. Results demonstrated that the two-layer model with architecture '8000-200-50-200-8000' yielded the best results on the validation sets. \item \textbf{Multi-VAE}: Multi-VAE \cite{Liang2018} is a collaborative filtering method that uses a Variational Autoencoder (VAE) to reconstruct the user--item rating matrix. We concatenated the two user--item matrices from the two domains so that the click vector of user $u$ is $[x_{1A}, x_{2A}, \cdots, x_{IA}, x_{1B}, \cdots, x_{IB}]$. Results demonstrated that the structure '\#products-600-200-50-200-600-\#products' with a latent vector dimension of 50 yielded the best results on the validation sets.
\item \textbf{CCCFNET}: Content-Boosted Collaborative Filtering Neural Network \cite{Lian2017} is a state-of-the-art hybrid method for cross-domain recommender systems. For a user, it uses a one-hot encoding vector extracted from the user--item rating matrix; for an item, it combines the one-hot encoding vector from the user--item matrix with item attributes. After learning, the user hidden representation includes Collaborative Filtering (CF) factors and content preferences, whereas the item hidden representation includes CF factors and an item content representation. We combine text attributes, prepared as in CDL, with the user--item matrix, so that for each domain the item input vector is $[x_{u1}, x_{u2}, \cdots, x_{uN}, x_{w1}, x_{w2}, \cdots, x_{wS}]$, where $N$ is the number of users and $S$ is 8000. The best neural network structure is '200-50'. \item \textbf{APR}: Adversarial Personal Ranking \cite{He2018} enhances Bayesian personal ranking with an adversarial network. We use the publicly available source code provided by the authors, but it could not obtain competitive performance on the datasets used in this study.
Therefore, we do not plot the results of APR in Figure \ref{result}. \end{itemize} \begin{figure*} \subfloat[]{\includegraphics[width = 1.7in]{img/RecallHealth.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGHealth.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/RecallClothing.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGClothing.png}} \\ \subfloat[]{\includegraphics[width = 1.7in]{img/RecallVideo.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGVideo.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/RecallTV.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGTV.png}} \\ \subfloat[]{\includegraphics[width = 1.7in]{img/RecallDrama.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGDrama.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/RecallComedy.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGComedy.png}} \\ \subfloat[]{\includegraphics[width = 1.7in]{img/RecallRomance.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGRomance.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/RecallThriller.png}} \subfloat[]{\includegraphics[width = 1.7in]{img/NDCGThriller.png}} \caption{Recall and NDCG in four datasets with four methods.} \label{result} \end{figure*} \begin{figure} \subfloat[]{\includegraphics[width = 1.6in]{img/RecallHealthcompare.png}} \subfloat[]{\includegraphics[width = 1.6in]{img/RecallClothingcompare.png}} \caption{Comparison of recall for model components on the Health\_Clothing dataset.} \label{component} \end{figure} \begin{figure} \subfloat[]{\includegraphics[width = 1.6in]{img/RecallHealthloss.png}} \subfloat[]{\includegraphics[width = 1.6in]{img/RecallClothingloss.png}} \caption{Comparison of recall for different reconstruction loss functions on the Health\_Clothing dataset.} \label{loss} \end{figure} \subsection{Performance Comparison} Figure \ref{result} presents the Recall and NDCG results of Multi-VAE, CDL, CCCFNET, and D2D-TM for each domain in the four datasets.
In light of these results, we have made the following observations. \begin{itemize} \item \textbf{Multi-VAE} shares some characteristics with our model, such as using user interaction vectors as input and learning features through a VAE. The main difference is that our model can learn the differences between domains. It is apparent that when the two domains differ in few attributes (the Romance\_Thriller and Drama\_Comedy datasets), our model is only 2.9\%--7.8\% higher than Multi-VAE in Recall@50. However, with two domains that differ in many attributes, such as Health and Personal Care versus Clothing, Shoes and Jewelry in the Health\_Clothing dataset, our model outperforms Multi-VAE by 44.8\% in Recall@50. Another reason is that using only a VAE might let the system overfit while extracting features. In such cases, the GAN discriminator helps the system avoid overfitting and learn latent features better. These results demonstrate that learning the specific features of each domain and integrating VAE-GAN can enhance performance. \item Although \textbf{CDL} is a hybrid method, our model still outperforms it by 17.9\% (Thriller) to 129\% (Health) in Recall@50. The first reason is similar to that for Multi-VAE: single-domain methods do not work well across multiple domains. The second reason is that, unlike CDL, our model only needs to be trained on users who have many interactions in both domains, yet it can infer for all users. This not only reduces sparsity problems; it also suits real systems, where no retraining is needed when a new user arrives. \item With \textbf{APR}, we are unable to obtain competitive performance. In addition to the reasons given for Multi-VAE and CDL, another possible reason is that a GAN may work well for generation but not for feature extraction, unlike a VAE.
In our model, the VAE is the main model for learning features, and the GAN supports the VAE in obtaining good features of the two domains by trying to distinguish their generated vectors. \item Our model outperforms \textbf{CCCFNET}, a hybrid cross-domain method, by 52.7\% (Health) to 88.8\% (Thriller) in Recall@50. A possible reason is that the VAE-GAN model can learn latent features better than a simple multilayer perceptron. \end{itemize} All four baseline algorithms assume that a user's behavior does not change across domains. Even in CCCFNET, user behavior is modeled by a single network. However, because of the special characteristics of each domain, user behavior differs among domains. For example, a user may be a saver who only buys inexpensive clothes, but with health care products he must follow a doctor's advice and might purchase based on effectiveness rather than price. Our model has the ability to capture both the similar and the different features of user behavior; therefore, it is reasonable that it outperforms the baselines. Figure \ref{component} and Figure \ref{loss} respectively present the effectiveness of each component of our model and the results for different reconstruction likelihoods. \subsubsection{Component} Because the VAE is the key model for learning latent features, we keep the VAE and try ignoring CC, GAN, or both. We designate D2D-TM full, D2D-TM VAE\_CC, D2D-TM VAE\_GAN, and D2D-TM VAE respectively as our original model, the model without GAN, the model without CC, and the model without both CC and GAN. The experiments in Figure \ref{component} show that both CC and GAN are important for achieving high performance. However, the results of D2D-TM VAE\_GAN are slightly better than those of D2D-TM VAE\_CC. A possible reason is that the GAN creates a strong constraint that prevents the VAE from overfitting, so the VAE can extract latent features better.
Weight sharing and CC are important parts by which the similarity between the two domains can be learned, as shown by D2D-TM VAE\_CC being 8.1\% higher than D2D-TM VAE on Health and Personal Care. The result that D2D-TM VAE is slightly better than Multi-VAE also demonstrates that learning different domains separately can improve performance. \subsubsection{Reconstruction Loss Function} The UNIT framework uses an L1 loss for reconstruction. That is suitable for image data, but for click data the multinomial log loss is more appropriate. In addition, many RS studies use the log likelihood (log loss) or the Gaussian likelihood (square loss). Therefore, we experimented with four types of loss. For the L1 loss, log loss, and square loss, the tanh activation function achieves the best results. Figure \ref{loss} shows that the multinomial log likelihood outperforms the other types. A possible reason is that with click data, each element of the input vector is 0 or 1, so the square loss and L1 loss are unsuitable. Furthermore, the click input is assumed to be generated from a multinomial distribution, which demonstrably works better than the log likelihood. \section{Conclusion} This paper presents the D2D-TM network structure, which can extract both homogeneous and divergent features among domains. This is the first model reported to apply a VAE-GAN to multi-domain recommender systems. Moreover, our network can infer items in both domains simultaneously. Experiments have demonstrated that our proposed model significantly outperforms state-of-the-art methods for recommendation, with more robust performance. Moreover, because our network uses only implicit feedback, it can be easily adopted by many companies.
\section{Introduction, definitions and notations} The classical Hermite-Hadamard inequality \cite{Had} states that if a function $f\colon[a,b]\to\mathbb{R}$ is convex, then \begin{equation*} f\left(\frac{a+b}{2}\right)\leq \frac{1}{b-a}\int_a^b f(t)\d{t}\leq \frac{f(a)+f(b)}{2}. \label{eq:HH-dim1} \end{equation*} This inequality has been discussed by many mathematicians. We refer to \cite{DP,NP} and the references therein. In the last few decades, several generalizations of the Hermite-Hadamard inequality have been established and studied. One of them (\cite{Bes}) says that if $\Delta\subset\mathbb{R}^n$ is a simplex with barycenter $\mathbf{b}$ and vertices $\mathbf{x}_0,\dots,\mathbf{x}_n$ and $f\colon\Delta\to\mathbb{R}$ is convex, then \begin{equation}\label{eq:HH-Bess} f(\mathbf{b})\leq \frac{1}{\Vol(\Delta)}\int_\Delta f(\mathbf{x})\d{\mathbf{x}}\leq \frac{f(\mathbf{x}_0)+\dots f(\mathbf{x}_n)}{n+1}. \end{equation} Wąsowicz and Witkowski in \cite{WW} and Mitroi and Spiridon in \cite{MS} investigated the relationship between the left and right-hand sides of \eqref{eq:HH-Bess}. \\ Interesting refinement of both inequalities in \eqref{eq:HH-Bess} was obtained by Ra{\"\i}ssouli and Dragomir in \cite{RD}. In this paper we use their method to obtain another refinement of the left-hand side of Hermite-Hadamard inequality on simplices. Before we formulate the main theorem of this paper, we first give some definitions and notations. For a fixed natural number $n\geq 1$ let $N=\{0,1,\dots,n\}$. Suppose $\mathbf{x}_0,\dots, \mathbf{x}_n\in\mathbb{R}^n$ are such that the vectors $\vv{\mathbf{x}_0\mathbf{x}_i},\ i=1,\dots,n$ are linearly independent. The set $\Delta=\conv\left\{\mathbf{x}_i\colon i\in N \right\}$ is called a \textit{simplex}. Such simplex is an $n$-dimensional object and we shall call it sometimes an $n$-simplex if we would like to emphasize its dimension. 
The point \begin{equation*} \mathbf{b}=\frac{1}{n+1}(\mathbf{x}_0+\dots+\mathbf{x}_n) \end{equation*} is called the \textit{barycenter} of $\Delta$. For any subset $K$ of $N$ of cardinality $k\leq n$ we define an $(n-k)$-simplex $\Delta^{[K]}$ as follows. For each $j\in N\setminus K$ let \begin{equation} \mathbf{x}^{[K]}_j=\frac{1}{n+1}\sum_{i\in K} \mathbf{x}_i + \frac{n+1-k}{n+1}\mathbf{x}_j \label{eq:vertices of Delta[K]} \end{equation} and \begin{equation} \Delta^{[K]}=\conv\left\{\mathbf{x}^{[K]}_j\colon j\in N\setminus K \right\}. \label{eq:definition of Delta[K]} \end{equation} Obviously $\Delta^{[\emptyset]}=\Delta$ and $\Delta^{[K]}=\{\mathbf{b}\}$ if $\card N\setminus K=1$. The integration over a $k$-dimensional simplex will always be with respect to the $k$-dimensional Lebesgue measure denoted by $\d{\mathbf{x}}$, and the $k$-dimensional volume will be denoted by $\Vol$. There will be no ambiguity, as the dimension will be obvious from the context. By $H(\mathbf{a},\lambda)\colon\mathbb{R}^n\to \mathbb{R}^n$ we denote the homothety with center $\mathbf{a}$ and scale $\lambda$, given by the formula $$H(\mathbf{a},\lambda)(\mathbf{x})=\mathbf{a}+\lambda(\mathbf{x}-\mathbf{a}).$$ \section{Refinement of the left-hand side} This is the main result of our paper. \begin{theorem} If $f\colon\Delta\to\mathbb{R}$ is a convex function and $K\subset L\subsetneq N$, then $$\frac{1}{\Vol \Delta^{[L]}}\int_{\Delta^{[L]}}f(\mathbf{x})\d{\mathbf{x}}\leq \frac{1}{\Vol \Delta^{[K]}}\int_{\Delta^{[K]}}f(\mathbf{x})\d{\mathbf{x}}.$$ \label{theorem:main} \end{theorem} Given the remark stated after the formula \eqref{eq:definition of Delta[K]}, it is clear that Theorem \ref{theorem:main} refines the LHS of \eqref{eq:HH-Bess}. Let us begin with two observations, which will make clear the nature of the simplices $\Delta^{[K]}$. The first observation follows immediately from \eqref{eq:vertices of Delta[K]}. \begin{obs} All simplices $\Delta^{[K]}$ have a common barycenter. 
\end{obs} \begin{obs} If $K\subset L\subsetneq N$ and $\card L=\card K+1$, then $\Delta^{[L]}$ arises from $\Delta^{[K]}$ in the following way:\\ let $l\in L\setminus K$ and let $\Delta^{[K]}_l$ be the face of $\Delta^{[K]}$ opposite to $\mathbf{x}_l^{[K]}$. Then \begin{equation*} \Delta^{[L]}=H\left(\mathbf{x}_l^{[K]},\frac{n-\card K}{n+1-\card K}\right)\left(\Delta^{[K]}_l\right). \end{equation*} \label{obs:2} \end{obs} \begin{proof} Assume, without loss of generality, that $K=\{1,\dots,k\}$ and $L=\{0\}\cup K$. Let $k<s\leq n$. By \eqref{eq:vertices of Delta[K]} the vertices of $\Delta^{[L]}$ are $$\mathbf{x}^{[L]}_{s}=\frac{1}{n+1}\sum\limits_{i=0}^k\mathbf{x}_{i}+\frac{n-k}{n+1}\mathbf{x}_s.$$ Then \begin{align*} \mathbf{x}^{[L]}_{s}&=\tfrac{1}{n+1}\sum\limits_{i=1}^k\mathbf{x}_{i}+\tfrac{1}{n+1}\mathbf{x}_0+\tfrac{n-k}{n+1}\mathbf{x}_s\\ &=\tfrac{1}{n+1}\sum\limits_{i=1}^k\mathbf{x}_{i}+\tfrac{n+1-k}{n+1}\mathbf{x}_0+\tfrac{n-k}{n+1}\mathbf{x}_s-\tfrac{n-k}{n+1}\mathbf{x}_0\\ &=\tfrac{1}{n+1}\sum\limits_{i=1}^k\mathbf{x}_{i}+\tfrac{n+1-k}{n+1}\mathbf{x}_0+\tfrac{n-k}{n+1-k}\left( \tfrac{n+1-k}{n+1}\mathbf{x}_s-\tfrac{n+1-k}{n+1}\mathbf{x}_0\right)\\ &=\mathbf{x}_0^{[K]}+\tfrac{n-k}{n+1-k}\left(\mathbf{x}_s^{[K]}-\mathbf{x}_0^{[K]}\right) =H\left(\mathbf{x}_0^{[K]},\tfrac{n-k}{n+1-k}\right)\left(\mathbf{x}_s^{[K]}\right).\qedhere \end{align*} \end{proof} Let us briefly recall the approach proposed by Dragomir and Ra{\"\i}ssouli in \cite{RD}. They constructed a sequence of sets of subsimplices of $\Delta$ as follows:\\ Let $\mathbf{b}$ be the barycenter of $\Delta$. One can divide $\Delta$ into $n+1$ subsimplices $$D_i=\conv\{\mathbf{x}_0,\dots,\mathbf{x}_{i-1},\mathbf{b},\mathbf{x}_{i+1},\dots,\mathbf{x}_n\},\quad i=0,1,\dots,n.$$ It is important to note that all these simplices have the same volume.\\ Denote by $\mathcal{D}_1$ the set of simplices created this way. The set $\mathcal{D}_{p+1}$ is constructed by applying the above procedure to all simplices in $\mathcal{D}_p$. 
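As an aside, the construction \eqref{eq:vertices of Delta[K]}, the common-barycenter observation, and the inequality of Theorem \ref{theorem:main} can be sanity-checked numerically. The following Python sketch (our illustration only, not part of the argument) does so for a random $3$-simplex, using the fact that the mean of the convex function $f(\mathbf{x})=\|\mathbf{x}\|^2$ over a simplex with vertices $\mathbf{v}_1,\dots,\mathbf{v}_m$ equals $\bigl(\sum_i\|\mathbf{v}_i\|^2+\|\sum_i\mathbf{v}_i\|^2\bigr)/\bigl(m(m+1)\bigr)$:

```python
import numpy as np

def simplex_K_vertices(X, K):
    """Vertices x_j^[K] of Delta^[K]; the rows of X are x_0, ..., x_n."""
    n = X.shape[0] - 1
    k = len(K)
    base = X[sorted(K)].sum(axis=0) / (n + 1)
    rest = [j for j in range(n + 1) if j not in K]
    return np.array([base + (n + 1 - k) / (n + 1) * X[j] for j in rest])

def mean_sqnorm(V):
    """Exact mean of f(x) = ||x||^2 over the simplex with vertex rows V."""
    m = V.shape[0]
    return (np.sum(V ** 2) + np.sum(V.sum(axis=0) ** 2)) / (m * (m + 1))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))      # vertices of a 3-simplex in R^3
b = X.mean(axis=0)                   # barycenter of Delta

# Observation: all simplices Delta^[K] share the barycenter b.
for K in [set(), {1}, {0, 2}, {1, 2, 3}]:
    assert np.allclose(simplex_K_vertices(X, K).mean(axis=0), b)

# Theorem: K subset of L implies mean over Delta^[L] <= mean over Delta^[K].
m_K = mean_sqnorm(simplex_K_vertices(X, {1}))
m_L = mean_sqnorm(simplex_K_vertices(X, {0, 1}))
assert m_L <= m_K + 1e-12
```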
Dragomir and Ra{\"\i}ssouli proved that for a convex function $f:\Delta\to\mathbb{R}$ one has \begin{align} f(\mathbf{b})\leq \frac{1}{\card \mathcal{D}_p}\sum_{\delta\in\mathcal{D}_p} f(\mathbf{b}_\delta)&\leq \frac{1}{\card \mathcal{D}_{p+1}}\sum_{\delta\in\mathcal{D}_{p+1}} f(\mathbf{b}_\delta)\notag\\ \intertext{and} \lim_{p\to\infty} \frac{1}{\card \mathcal{D}_p}\sum_{\delta\in\mathcal{D}_p} f(\mathbf{b}_\delta)&= \frac{1}{\Vol \Delta}\int_\Delta f(\mathbf{x})\d{\mathbf{x}},\label{eq:DR2} \end{align} where $\mathbf{b}_\delta$ denotes the barycenter of $\delta$. We shall use the above procedure to prove the main result of this paper. \begin{proof}[Proof of Theorem~\ref{theorem:main}.] Obviously it is enough to prove the result in case $\card K+1=\card L$. As above we may assume $K=\{1,\dots,k\}$ and $L=\{0\}\cup K$. Let $\Sigma=\Delta^{[K]}_0$ denote the face of $\Delta^{[K]}$ opposite to $\mathbf{x}_0^{[K]}$. For simplicity denote by $H$ the homothety with center $\mathbf{x}_0^{[K]}$ and scale $\frac{n-k}{n+1-k}$. Then, by Observation \ref{obs:2} we see that $\Delta^{[L]}=H(\Sigma)$. Let us apply the Dragomir-Ra\"\i ssouli process to $\Sigma$. Thus we obtain a sequence of sets of subsimplices of $\Sigma$ denoted by $\mathcal{D}_p$. \\ Fix $p\geq 1$. For every $\sigma\in \mathcal{D}_p$ let $\Sigma_\sigma=\conv \left(\sigma\cup\left\{x_0^{[K]}\right\}\right)$. 
Clearly the simplices $\Sigma_\sigma$ form a partition of $\Delta^{[K]}$ into simplices of the same height; thus $\Vol \Sigma_\sigma=\Vol\Delta^{[K]} / \card\mathcal{D}_p$.\\ Now we apply the left-hand side of the Hermite-Hadamard inequality to all simplices $\Sigma_\sigma$ to obtain \begin{align}\label{eq:sum in barycenters} \frac{1}{\card \mathcal{D}_p}\sum_{\sigma\in \mathcal{D}_p} f(\mathbf{b}_{\Sigma_\sigma})&\leq \frac{1}{\card \mathcal{D}_p}\sum_{\sigma\in \mathcal{D}_p}\frac{1}{\Vol \Sigma_\sigma}\int\limits_{\Sigma_\sigma} f(\mathbf{x}) \d{\mathbf{x}} =\frac{1}{\Vol \Delta^{[K]} }\int\limits_{\Delta^{[K]}} f(\mathbf{x}) \d{\mathbf{x}}. \end{align} Since $\Delta^{[L]}$ is the image of $\Sigma$ under $H$, the sets $$H(\mathcal{D}_p)=\{H(\sigma)\colon \sigma\in \mathcal{D}_p\}$$ form the Dragomir-Ra\"\i ssouli sequence for $\Delta^{[L]}$. Moreover, \textit{comme par miracle} \cite{Prev}, the barycenter of $\Sigma_\sigma$ and that of $H(\sigma)$ coincide, i.e. \begin{equation} \mathbf{b}_{\Sigma_\sigma}=\mathbf{b}_{H(\sigma)}. \label{eq:equality of barycenters} \end{equation} From \eqref{eq:sum in barycenters} and \eqref{eq:equality of barycenters} we conclude \begin{align}\label{eq:sum in barycenters L} \frac{1}{\card \mathcal{D}_p}\sum_{\sigma\in \mathcal{D}_p} f(\mathbf{b}_{H(\sigma)})&\leq \frac{1}{\Vol \Delta^{[K]} }\int_{\Delta^{[K]}} f(\mathbf{x}) \d{\mathbf{x}}, \end{align} and applying \eqref{eq:DR2} we get \begin{align}\label{eq:limit sum in barycenters L} &\lim_{p\to\infty} \frac{1}{\card \mathcal{D}_p}\sum_{\sigma\in \mathcal{D}_p} f(\mathbf{b}_{H(\sigma)}) = \frac{1}{\Vol \Delta^{[L]} }\int_{\Delta^{[L]}} f(\mathbf{x}) \d{\mathbf{x}}. \end{align} Now the assertion follows immediately from \eqref{eq:limit sum in barycenters L} and \eqref{eq:sum in barycenters L}. \end{proof} From Theorem \ref{theorem:main} we obtain the following corollary. 
\begin{corollary} Let $K_0, K_1,\ldots, K_{n}$ be a sequence of subsets of $N$ such that $$K_0\subset K_1 \subset \ldots \subset K_{n} \ \text{and} \ \card K_i=i, \ i=0,1, \ldots, n.$$ If $f:\Delta\to\mathbb{R}$ is convex, then \begin{align*} f(\mathbf{b})&=\frac{1}{\Vol(\Delta^{[K_{n}]})}\int_{\Delta^{[K_{n}]}}f(\mathbf{x})\d{\mathbf{x}}\leq \frac{1}{\Vol(\Delta^{[K_{n-1}]})}\int_{\Delta^{[K_{n-1}]}}f(\mathbf{x})\d{\mathbf{x}}\\ &\leq \ldots\leq\frac{1}{\Vol(\Delta^{[K_1]})}\int_{\Delta^{[K_1]}}f(\mathbf{x})\d{\mathbf{x}}\leq\frac{1}{\Vol(\Delta^{[K_0]})}\int_{\Delta^{[K_0]}}f(\mathbf{x})\d{\mathbf{x}}\\ &=\frac{1}{\Vol(\Delta)}\int_{\Delta}f(\mathbf{x})\d{\mathbf{x}} \end{align*} (note that $\Vol(\Delta^{[K_i]})$ denotes $(n-i)$-dimensional volume and $\int_{\Delta^{[K_{i}]}}\dots\d{\mathbf{x}}$ denotes integration with respect to $(n-i)$-dimensional Lebesgue measure). \end{corollary} Applying Theorem \ref{theorem:main} to all possible proper subsets of $N$ of the same cardinality and summing the obtained inequalities, we obtain the following result. \begin{corollary} If $f\colon \Delta\to\mathbb{R}$ is a convex function, then $$\frac{1}{\Vol \Delta}\int_{\Delta}f(\mathbf{x})\d{\mathbf{x}}\geq \frac{1}{\binom {n+1} {k}}\sum_{\substack{\displaystyle{K\subsetneq N}\\{\card K=k}}}\frac{1}{\Vol \Delta^{[K]}}\int_{\Delta^{[K]}}f(\mathbf{x})\d{\mathbf{x}}\,.$$ \end{corollary} By Theorem \ref{theorem:main} we get the following corollary. \begin{corollary} Let $f\colon\Delta\to\mathbb{R}$ be a convex function and let $k< l \leq n$. Then $$\frac{1}{\binom {n+1} {l}}\sum_{\mathclap{\substack{{L}\\{\card L=l}}}}\frac{1}{\Vol \Delta^{[L]}}\int_{\Delta^{[L]}}f(\mathbf{x})\d{\mathbf{x}}\leq \frac{1}{\binom {n+1} {k}}\sum_{\mathclap{\substack{{K}\\{\card K=k}}}}\frac{1}{\Vol \Delta^{[K]}}\int_{\Delta^{[K]}}f(\mathbf{x})\d{\mathbf{x}}.$$ \end{corollary} \begin{proof} Clearly it is sufficient to prove the corollary only in case $l=k+1$. Fix $K=\{1,\dots,k\}$. 
We have $n+1-k$ supersets of $K$ of cardinality $k+1$. Applying Theorem \ref{theorem:main} to $K$ and all such supersets and summing the obtained inequalities, we deduce \begin{align*} \sum_{\mathclap{\substack{{L\supset K}\\{\card L=k+1}}}}\frac{1}{\Vol \Delta^{[L]}}\int_{\Delta^{[L]}}f(\mathbf{x})\d{\mathbf{x}}\leq (n+1-k) \frac{1}{\Vol \Delta^{[K]}}\int_{\Delta^{[K]}}f(\mathbf{x})\d{\mathbf{x}}. \end{align*} Summing this over all possible $K$, we obtain \begin{equation*} (k+1)\sum_{\mathclap{\substack{{L}\\{\card L=k+1}}}}\frac{1}{\Vol \Delta^{[L]}}\int_{\Delta^{[L]}}f(\mathbf{x})\d{\mathbf{x}} \leq(n+1-k)\sum_{\mathclap{\substack{{K}\\{\card K=k}}}}\frac{1}{\Vol \Delta^{[K]}}\int_{\Delta^{[K]}}f(\mathbf{x})\d{\mathbf{x}}, \end{equation*} since every $L$ has $k+1$ subsets of cardinality $k$. We complete the proof by multiplying both sides by $\frac{k!(n-k)!}{(n+1)!}$. \end{proof} \bigskip \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} AW came up with the idea, and MN extended it and performed all necessary calculations. All authors read and approved the final manuscript.
\section*{ACKNOWLEDGMENT} We are grateful for the support of the Swiss Drone and Robotics Centre, the Swiss Rescue Troops, our sponsors who gave us the opportunity to develop our prototype, and the helpful advice and assistance of Michael Riner-Kuhn, Matthias Müller, Cornelia Della Casa, Luciana Borsatti, and Nicholas Lawrance. \bibliographystyle{IEEEtran} \section{Introduction} \label{sec:introduction} \subsection{Motivation} Whenever natural catastrophes such as earthquakes or landslides strike near populated areas, chances are high that people get buried in destroyed buildings. If victims are not found, rescued, and cared for within 72 hours, their chances of survival are low \cite{10.1145/1582379.1582514}. Today, rescue workers are mostly limited to rudimentary tools and techniques. In a typical scenario, rescue forces first try to identify the rough location of potentially trapped people using rescue dogs. Subsequently, a squad assembles at the site and tries to contact victims by alternately knocking and listening for any answers. The success of this method depends on whether the victim is conscious or not. Consequently, the people requiring rescue most urgently cannot be located with this method. Since the debris is structurally highly unstable, the rescue forces risk their lives with little information about the dangers they are about to face. To overcome these challenges, we propose the vine-like \ac{sar} robot \textit{RoBoa} shown in Fig.\,\ref{fig:overview}. The robot is able to maneuver in three-dimensional rubble fields, allowing rescue workers to find buried victims in a safer, faster, and more reliable way through a unique combination of locomotion and steering. \begin{figure} \centering \includegraphics[width=0.48 \textwidth]{images/Roboa_overview.jpg} \caption{The RoBoa system consists of the supply box from which the everting robot operates. 
The components of the everting robot are an inflatable fabric tube with an internal steerable robot, and a head with a camera and sensors. The internal robot is hidden inside the tube, right behind the head.} \label{fig:overview} \end{figure} \subsection{Related Work} Several different kinds of \ac{sar} robots can be found in the literature. Drones can detect people and give an overview of the whole situation by mapping the area\,\cite{millane2018c} or detecting people on the surface\,\cite{drone_stateof}, but they cannot get into the debris to find buried victims. Track-based robots can easily cross rough terrain on the surface but have trouble getting deeper into the debris since they need to be connected to a base either by a cable or wirelessly\,\cite{yamauchi2004packbot, soryu, inachus}. A cable introduces considerable amounts of friction, while wireless signals cannot pass through the iron reinforcements of the debris. The same issue occurs with wheel-based\,\cite{6386291} or legged robots\,\cite{7758092, cockroachrobot}. Worm-like robots can be smaller than wheel-based robots, but they still need to be connected to a base by a cable or wirelessly\,\cite{activescope}. In contrast, growing robots have the advantage that the outside hull remains static with respect to the environment. Therefore, the friction to the surroundings is insignificant when moving. Growing can, for example, be achieved by 3D printing\,\cite{sadeghi2017toward}. However, this approach suffers from overheating: after \SI{15}{\minute}, the heat transfers to other parts of the system, which causes problems in the electronics and leads to softening of the unused filament. The system is also rather slow, with a maximum growing speed of \SI{4}{\milli\metre\per\minute}. Another approach to growing is taken by vine robots. Vine robots do not have to overcome the friction from the outside hull to the environment but only the internal friction within the robot, which is simpler to control. 
These robots thus have the capability to carry cables to the tip by controlling the friction inside of the robot. Hawkes et al.\,\cite{vinepaper} describe an everting tube, which is characteristic of vine robots, and provide application ideas such as acting as a fire hose or a pneumatic jack, exploring new places with a camera at the tip, or use as a \ac{sar} robot. Another variant of vine robots consists of multiple smaller tubes\,\cite{Tsukagoshi2011TipGA}. Current methods for steering vine robots attach a soft structure such as series pouch motors to the whole body of the vine robot\,\cite{7989648, doi:10.1089/soro.2018.0034, Coad_2020, niiyama2015pouch}, in the following referred to as \emph{body-steered}. Because the pouch motors are present along the full length of the robot's tube, manufacturing such a vine robot becomes more difficult the longer it gets. Jeong et al.\,\cite{jeong2020tip} show a retraction mechanism for vine robots that pulls the material back right at the tip in order to avoid kinks. Different methods for mounting sensors to the tip of a vine robot have been explored in the literature: a form fit\,\cite{jeong2020tip} is preferred over a magnetic attachment\,\cite{luong2019eversion}, since the latter detaches more easily in rough environments. The steering mechanism of the robot described in this paper includes 3D-printed pneumatic actuators that consist of three strands that elongate when put under pressure. This actuator principle is based on\,\cite{Hadzic2018, festoproboscis}. \subsection{Contributions} \begin{itemize} \item A new steering concept for vine robots: By putting pneumatic actuators, which always stay at the front, inside the vine robot, the robot is able to change its shape and thus steer through debris. In comparison with a previous mechanism design\,\cite{7989648}, the bending radius is reduced to \SI{20}{\centi\metre}. 
\item The longest steerable vine robot (to the best of our knowledge): Being able to steer when fully extended at \SI{17}{\metre} length, the robot outgrows previous steerable robots\,\cite{7989648} by \SI{70}{\%}. Previously, the whole tube had to change its shape, so steering and manufacturing the tube became more challenging with increasing length. The robot described here only steers the tip, making the steering mechanism independent of the length and simplifying manufacturing. \item Resistance against adverse conditions encountered in real-world environments: The robot was not only tested in a laboratory environment but also multiple times in the debris of a destroyed house. \item A moving pressure supply: The valve terminal is mounted onto the pneumatic actuators. Those actuators always stay at the tip of the robot, and thus the valve terminals move with the speed of the robot's tip. Consequently, only one pressure tube has to be carried to the tip. \end{itemize} \section{Design} \label{sec:design} In this section we present the design of our robot and the decisions made during its construction. An overview of the system is shown in Fig.\,\ref{fig:overview}. \subsection{Requirements} \label{requirements} The main goal of this work was to develop a concept for a \ac{sar} robot that supports rescue workers in their missions. Thanks to the collaboration with the Swiss Rescue Troops, the requirements for use in a \ac{sar} environment (e.g., fields of debris after an earthquake) were determined: \begin{enumerate} \item The robot has to operate under debris and perform sustained operations in this adverse environment. It has to deal with dust, darkness, small holes and spaces, and obstacles like stones or collapsed structures. Thus, the \emph{locomotion} principle of the robot must minimize friction within such environments and navigate through tight spaces. The locomotion principle should also enable backward movement. 
\item According to the Swiss Rescue Troops' experience, the system would already be advantageous at a length of more than \SI{3}{\meter}. At a \emph{length} of \SI{15}{\metre} or more, almost every victim could be reached. Therefore, this value was set as the required length of the robot. \item Because the Swiss Rescue Troops use drills with a diameter of up to \SI{112}{\milli\meter} to enter destroyed buildings, the maximum \emph{diameter} of the robot is constrained by this value. \item To allow for sufficient \emph{maneuverability} in debris, the robot has to be bendable 90$^\circ$ horizontally as well as 45$^\circ$ vertically at a turning radius of \SI{25}{\centi\metre}. \item Different sensors need to be implemented at the tip of the robot to allow \emph{localization} of victims under challenging lighting conditions and to enable audio \emph{communication}. \item The robot can be \emph{operated} by a trained user. \end{enumerate} The work towards the final prototype was then guided by these requirements to realize a system that addresses the needs of rescue workers as completely as possible. \Cref{sec:results} discusses how well the requirements were met. \subsection{Everting Tube} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{images/overview_ir.jpg} \caption{(a) The internal robot with the attached head is shown. Highlighted are the sensing head, the pneumatic actuator with valve terminal for lateral movements, and the cases containing the \ac{sbc} and \ac{poe} splitter. (b) Maximum achievable deflection of a single actuator segment without the tube.} \label{fig:IR} \end{figure} As a first step during the development, different propulsion and robot types were analyzed for their suitability in the aforementioned environment. This choice was strongly influenced by the need for communication between the robot's tip (in the debris) and its base station (outside of the debris). 
For this, we tested a wireless connection with an agent located under the debris using a miniaturized computer (Raspberry Pi 4). During the experiment, a stable connection could only be achieved up to a distance of \SI{5}{\metre} with conventional WiFi. Since a requirement is operation with lengths of up to \SI{15}{\metre}, the only viable alternative is a tether between tip and base station. However, this raises the problem of friction between the environment and the cable being dragged along by the tip. Experience from the predecessor project called Proboscis\,\cite{Hadzic2018} showed that tracked robots with a diameter of \SI{112}{\milli\metre} (as per the requirements) have difficulties overcoming this friction. In order to control the friction between cables and environment, we opted for locomotion based on the eversion principle used by vine robots\,\cite{vinepaper}. The main component is a furled tube made of a \SI{20}{Denier} coated ripstop nylon with welded seams that is everted by pressurizing it with air. The tube material exposed to the outside (referred to as \emph{outer tube}) does not move forward relative to the environment, but is elongated by transporting more tube material on the inside (referred to as \emph{inner tube}) to the front and everting it there. The decisive advantage of this approach is that the robot has to overcome hardly any friction with the rough and unpredictable surroundings. For a detailed view of how our design interacts with the tube, see Fig.\,\ref{fig:tube}. However, the friction does not disappear; instead, it occurs on the inside of the tube. Specifically, the friction comes about due to velocity differences between inner and outer tube, as well as between inner tube, internal robot, and cables. Since the inside is fully subject to our design, it is easier to control friction there. 
For example, the smoothness of the surface of the tube can be influenced through a favourable material pairing or the use of lubricants. Unique properties of the everting principle for locomotion are: \begin{itemize} \item The system is not stationary in itself but changes constantly, which means that the part of the tube being at the front changes while everting. Thus, it is more challenging to mount something (e.g. sensors) to the tip on the outside of the tube. Our solution to the head mounting is described in \Cref{sec:sensors}. \item The tube is soft and deformable, which renders the analysis, control, and simulation of the locomotion more challenging. Our solution to controlled lateral steering is described in \Cref{sec:lateral_movement}. \item Fully controlled backward movements of the tube by “everting inwards” lead to kinks in the tube, a modeling and controls challenge we will address with future designs. \end{itemize} Despite the challenges associated with these properties, this approach to locomotion is still the most promising when it comes to achieving operation ranges of more than \SI{10}{\metre} and high maneuverability. \subsection{Supply Box} \begin{figure} \centering \includegraphics[width=\linewidth]{images/box.jpg} \caption{Supply box with removed lid.} \label{fig:box} \end{figure} The tube is stored in a supply box (shown in Fig.\,\ref{fig:box}), which also houses the cable (see \Cref{sec:lateral_movement}) and the onboard computer. The box is pressurized up to \SI{20}{\kilo\pascal} relative to ambient air pressure to inflate the tube. This box always remains outside of the debris; thus, its weight and dimensions are not a limiting factor for the movement of the robot, but for convenient operation we aimed for a weight that two adults can carry. The weight of the box ended up being around \SI{50}{\kilo\gram}. 
The majority of the weight stems from the aluminium plates with \SI{27}{\kilo\gram}; the aluminium profiles weigh \SI{10.5}{\kilo\gram} and the electronic parts \SI{6}{\kilo\gram}. The remaining weight is from screws, sealant, tube material, buttons, and air valves. The rescue troops are equipped with pneumatic tools such as jack hammers and therefore have a compressor with an integrated generator on site. This simplified the design requirements, since the robot can rely on existing equipment. In the field, the robot's theoretical maximum power draw of \SI{350}{\watt} can be provided by the current equipment. The tube and cable are stored on two spools which are driven by separate Nema 23 stepper motors with \SI{3}{\newton\meter} torque. The spools cannot be driven by the same motor without a transmission because the supply cable moves at the velocity of the head, which is only half of the speed of the tube. Slip rings are necessary between the stationary supply box and the rotating spools. Aluminium profiles form a cubic structure. The grooves in the profiles are used to mount all axles and electronic parts inside the box. The aluminium plates sealing the box against air leakage are also mounted to them. \subsection{Lateral Movement} \label{sec:lateral_movement} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{images/tube.jpg} \caption{(a) Two exterior views and a cross-sectional view of the interlocking mechanism between the head and the tube. (b) Cross section of the internal robot within the tube is shown. Note that the second segment of the pneumatic actuator has been omitted for simplicity.} \label{fig:tube} \end{figure} To control the forward locomotion of the tip of the everting tube, we propose a structure that can actively be bent at the front inside of the tube. This structure is henceforth referred to as the \textit{internal robot}. An overview of the system is shown in Fig.\,\ref{fig:IR}. The internal robot mainly consists of actuator segments and valve terminals. 
The segments are made of 3D-printed nylon (FS3200PA-F) and allow the robot to deflect laterally when pressurized with up to \SI{400}{\kilo\pascal}. The valve terminals regulate the airflow from and to the soft actuators. Each terminal consists of proportional flow valves and a custom \ac{pcb} with a microcontroller. To minimize the volume of air whose pressure has to be controlled, the valve terminals are located directly at their associated actuators. The internal robot requires pressurized air and a communication interface. To achieve this, the internal robot is connected to the supply box with a slim air hose as well as a single \ac{poe} cable, combining Ethernet and power lines. To split the data and power and manage communication between the low-level microcontrollers, the head, and the computer in the supply box, the robot carries a \ac{poe} splitter as well as an \ac{sbc} packaged in two small cases. \subsection{Sensors} \label{sec:sensors} The ability to steer the robot remotely without direct line of sight makes a good sensing payload mandatory. The head, which contains all necessary sensors, is located at the tip of the robot. It is attached to the internal robot with an interlocking mechanism similar to \cite{Coad_2020}. Fig.\,\ref{fig:tube}a shows RoBoa's interlocking mechanism. The interlocking wheels of the head and those on the internal robot slightly engage each other to prevent the head from detaching. A small gap between the wheels allows the tube to slip through. All interlocking wheels are supported by small ball bearings to minimize friction. The head contains a greyscale 640$\times$480 pixel camera and an \ac{imu} for navigation, as well as a speaker and microphone to communicate with located victims. The head is attached to the internal robot in such a way that they always share the same orientation. Therefore, steering to the left for the operator on the screen corresponds to the same motion of the robot in the field. 
Since the head is isolated and unreachable by a cable (see Fig.\,\ref{fig:tube}) -- due to the everting tube between the internal robot and the head -- it is battery powered and transmits its readings to the internal robot wirelessly. The battery of the head lasts for approximately one hour. The on-board \ac{sbc} of the internal robot then transmits the data back to the operator. Due to the small distance between the head and the receiver, the connection proved to be reliable even when operating at the maximum range. \subsection{Control} For control of the lateral motion, the robot has a draw-wire sensor on each of the three pneumatic actuators, which calculates the deflection by measuring the elongation along three directions parallel to the actuator. The measured values are compared against desired reference values which are computed using a piecewise constant curvature model\,\cite{webster2010design}. The measured deviation is then used by a PID controller to adjust the pressure in the corresponding actuator strand. The reference values for the actuator deflection are mapped from inputs given using a conventional joystick. The mapping prioritizes movements in the first, frontal segment before actuating the second one since, as observed in the field, bending the first actuator is usually more desirable for the operator. Additionally, the second actuator has to move the first one, which can be difficult because of the added weight. The unique principle of deflection among vine robots allows precise angular control of the deflection of the tip, which is unprecedented for vine robots of this size. The throttle lever of the joystick corresponds to forward motion of the tube. Forward motion is produced by the two stepper motors, which are controlled by an Arduino. Currently, the pressure in the box is only controlled manually by a proportional valve. 
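The per-strand pressure loop described above can be sketched as follows. This is our own illustrative reduction: the gains, the elongation units, and the \SI{0}{\kilo\pascal}--\SI{400}{\kilo\pascal} clamp (taken from the actuators' stated supply pressure) are placeholders, not RoBoa's tuned values.

```python
class StrandPID:
    """PID loop mapping elongation error to actuator pressure (kPa).

    Illustrative sketch only; gains and limits are placeholders.
    """
    def __init__(self, kp, ki, kd, p_min=0.0, p_max=400.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max
        self.integral = 0.0
        self.prev_err = None

    def step(self, ref_elong, meas_elong, dt):
        """One control step: reference vs. measured draw-wire elongation."""
        err = ref_elong - meas_elong
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        p = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(p, self.p_min), self.p_max)   # clamp to valve range

pid = StrandPID(kp=2000.0, ki=50.0, kd=10.0)
p = pid.step(ref_elong=0.05, meas_elong=0.0, dt=0.01)  # 5 cm reference step
assert 0.0 < p <= 400.0
```

Each of the three strands would run one such loop against its draw-wire measurement, with the references supplied by the piecewise constant curvature model.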
\section{Experimental Evaluation} \label{sec:results} \begin{figure}[hbp] \centering \includegraphics[width=0.48\textwidth]{images/debris_o.jpg} \caption{(a) Approximate path followed to find a team member. The time to reach the goal was on average \SI{20}{\minute}. (b) Picture of the robot inside the debris. (c) Debris in which RoBoa was tested. On the right side of the image are the robot and the operator.} \label{fig:debris} \end{figure} \subsection{Real-world Evaluation} To test RoBoa's capabilities under realistic conditions, RoBoa was taken to the Swiss Rescue Troops' training site in Wangen an der Aare. The debris in which RoBoa was tested and the approximate path followed are shown in Fig.\,\ref{fig:debris}. The goal of this test was to find a person hidden inside the debris by remote-controlling RoBoa and getting feedback solely through the camera feed. On its way into the debris, RoBoa's pilot had to maneuver over obstacles and through openings too small for a person. Eventually, the robot found the hidden person and was able to communicate with her. The test on the training ground proved the operational capability of RoBoa in a realistic environment. It was able to follow a \SI{10}{\metre} long path with several obstacles, two curves, and one 90$^\circ$ turn without getting stuck inside the debris. This test was performed by a pilot teleoperating the robot at the ground station with the provided camera feed. The results of this first test in a realistic environment were very positively received by the Swiss Rescue Troops. \subsection{Characterization of Capabilities} The robot uses the everting principle and moves forward at a maximum velocity of \SI{6}{\metre\per\minute}, depending on the environment. Backward movement is not possible in the current state. It is a much greater challenge than expected at first sight because of the kinking characteristic of the tube. When RoBoa is fully deployed, it has a length of \SI{17}{\metre}. 
The length can be adjusted by using a longer or shorter tube. The tube can be used to pave a way to the victim and to pull the victim out of the debris. However, the tube has to be replaced after four to five deployments because it can get damaged in the debris and the compressor can only compensate for a certain amount of leakage. The part of the robot with the largest diameter is the head, with a diameter of \SI{101}{\milli\metre}. Lateral movement is possible by up to 120$^\circ$ left and right. Due to the weight of the head attachment and the cylindrical shape of the internal robot, it is able to bend upwards by 50$^\circ$ when supported by the environment. Without support it is able to lift its head about \SI{20}{\centi\metre} from the ground, achieving a bending angle of 20$^\circ$ before it loses its balance and rolls over due to its circular shape. At high angles the internal robot obstructs the eversion process, which prohibits simultaneous forward motion with large deflections. Instead, large deflections can be used to look around before straightening the actuators again to continue moving forwards. In general, the maneuverability of RoBoa is limited in open spaces because the part of the tube that is behind the internal robot cannot hold shapes by itself and needs external structures to remain bent. Only turns of about \SI{15}{\degree/\metre} are possible on frictional surfaces such as grass or gravel. RoBoa's head provides the pilot with a camera feed, which makes steering with a joystick very intuitive. Its built-in speaker and microphone allow the pilot to communicate with the victim inside the debris. In addition, hardware components for precise localization are implemented, but the necessary software is not yet operational. However, sensing in the head is easily adjustable independently of the rest of the robot. Setting up the system is not possible for a person who has not been instructed. 
In summary, RoBoa fulfills most of the set requirements in regards to locomotion, dimension, steering and communication. However, there is still potential to improve the robot (e.g. backward movement, localization). \section{Conclusion} \subsection{Discussion} The key capabilities of the realized prototype are a length at maximum tested extension of up to \SI{17}{\metre}, a maximum diameter of \SI{101}{\milli\metre}, and a maneuverability of up to 120$^\circ$ bending angle and down to \SI{20}{\centi\metre} turning radius. This steerable everting tube robot can reach hard-to-access areas many meters into a field of rubble, making it particularly suitable for \ac{sar} applications in confined and unstructured environments. Simplified localization is enabled by the positioning of the internal robot and by the tube connection from the supply box to the robot's head. The camera, microphone and speakers allow steering the robot and communicating with victims without seeing the whole robot. Besides the actual deployment as a \ac{sar} robot in test scenarios, the novelty of the presented design lies in the combination of the everting vine robot concept for locomotion with a multi-segment, pneumatic actuator for steering. This decouples locomotion from lateral movement: The vine robot principle allows for more versatile forward locomotion in debris fields than other locomotion approaches because it minimizes friction. The internal robot offers advantages for navigating in narrow environments. Its attached valve terminals enable a portable and decentralized pressure supply. Compared to a body-steered vine robot, our actuated internal robot allows for a high bending angle and a low turning radius, can lift payloads of more than \SI{1}{\kilogram} at the tip, and enables S-shapes. As it is always located at the front, the unused actuator length is minimized and the robot only steers where necessary.
This is particularly useful for the debris environment's unique nature with many constraints due to obstacles and small spaces. The steerable front decides on the path and the rest of the robot follows due to the constraints. However, this means that the robot requires interaction with the environment to steer successfully and cannot direct its motion properly in open spaces. Using feedback and state estimation at the internal robot, precise steering at the tip can be achieved. The decoupling enables the development of such a long vine robot, as the tube does not need to contain any actuators itself. This simplifies both tube manufacturing and everting. Furthermore, the tube can be easily replaced if damaged during deployment. Nevertheless, the decoupling also comes with some drawbacks. The internal robot adds manufacturing complexity, internal friction, and weight to the system. The last point makes it more difficult to overcome longer cracks in comparison to a body-steered vine robot. Moreover, the rigid structure of the head and the internal robot prohibits the vine robot's natural capability to shrink. To sum up, the prototype showed that this concept has the potential to successfully serve as a \ac{sar} robot that can navigate through debris and locate trapped victims. The feedback received during the evaluation of the system together with the Swiss Rescue Troops at their training site was that RoBoa would provide an immediate utility for their operations. \subsection{Future Work} The demonstrated utility of the system led to a follow-up project funded by the Swiss Drone and Robotics Centre. To soon support rescue workers in day-to-day training and in active deployment of the system, the goal is to provide them with a device at technology readiness level 7.
The focus lies on increasing the system's reliability and capability and taking the current subsystems from a prototype stage to a reproducible level, particularly the internal robot and the supply box. To do so, we would like to explore alternative cross-sections for the internal robot to overcome problems with moving upwards in certain situations. We intend to reduce the weight the internal robot adds to the system, for example by designing a lighter valve terminal. Special 3D-printing technologies will minimize friction in the everting mechanism, and a new interconnection system will increase the internal robot's modularity and thus simplify its application. A smaller and lighter supply box will enable faster deployment of the robot. Furthermore, although the tube material is already resistant against adverse environmental conditions, its lifetime is limited. We would like to investigate other materials and ways to further improve this. New functionalities that we are planning to add are backward movement of the tip, a water supply line for victims, and a more capable sensor system to localize and identify victims, for example \ac{slam} and chemical noses.
\section{Introduction} The first direct detections of gravitational waves by the advanced LIGO detectors \cite{GW_2016, GW_2016b, GW_2017, GW_2017b} have opened the era of gravitational wave astronomy. These fascinating detection results are a consequence of technology breakthroughs allowing us to measure tiny displacements of $\sim 10^{-18}$\,m of macroscopic test masses \cite{aLIGO2013,aLIGO2014,aLIGO2015,aLIGO2015b, aVirgo2015, grote2010}. It is the thermal noise of the test masses that sets a severe limitation for the detector sensitivity in their most sensitive frequency band from $50$~Hz to $2000$~Hz \cite{99a1BrLeVy,99a1BrGoVy,00a1BrGoVy,HarryBook2012}. In current detectors, a major source for thermal noise is Brownian structural noise \cite{nawrodt2011} in mirror coatings. The thermally induced random stresses in the coatings and the substrates of the test masses produce random deformations of their surfaces, which are detected as thermal noise at the interferometer output. A gravitational wave induces a variation of the frequency of the main mode in the arm cavity of the interferometer, which is registered as a phase shift of a monochromatic optical wave reflected from the cavity. Concurrently, the thermal noise of the mirror's surface also randomly changes the eigenfrequency of the mode, masking the gravitational wave signal. The same considerations hold true for the thermal noise of the beam splitter (BS) in the interferometer. Usually, the beam splitter is a cylindrical plate made of an optically transparent material. One surface is covered by a reflecting (R) coating and the other by an anti-reflecting (AR) one. The thermal fluctuations of \textit{both} surfaces also change the interferometer's eigenfrequency. In previous works, BS noise was estimated by approximating the BS as an infinitely thin plate \cite{harmsPRD2004} and under the simplifying assumptions of light beam radii small compared to the BS size and of light at normal incidence \cite{somiya2009b}.
In this paper, we present the accurate computation of BS thermal noise in gravitational wave detectors from first principles, following the approach formulated in \cite{18a1TuLeVyPLA}. In combination with the fluctuation-dissipation theorem (FDT) \cite{Callen1951PR, LL5,Levin1998}, this approach allows the accurate computation of the thermal noise resulting from thermal fluctuations of both R and AR surfaces of the BS with light beams of finite size and oblique incidence. A similar approach has been used in \cite{17a1KrDiHeNaLeVy,18plaDiHuNaKr} for computing thermal noise in reflective gratings and in \cite{15a1DeGo} for evaluating the influence of an absorbing layer on the resonant frequencies and Q-factors of spherical microresonators. Here we apply our approach to the calculation of Brownian noise in the substrate and coating of the BS for arbitrary polarizations of the light. Our estimates show that the contributions of thermoelastic \cite{99a1BrGoVy} and thermorefractive \cite{00a1BrGoVy, benthem2009} noise are subdominant to the Brownian noise computed here for frequencies between 100 and 4000 Hz. \begin{figure}[b] \includegraphics[width=0.45\textwidth]{figure1.pdf} \caption{Schematic view of the BS plate in GW interferometers. Fluctuations of the R and AR surfaces (see inset) produce noise in the detecting dark port. The end mirrors are denoted by EM. SR and PR are the signal and power recycling mirrors with the intensity transmissions $T_s$ and $T_w$, respectively. In the case of the advanced LIGO interferometers, additional input mirrors (IM) in the arms (shown by dashed lines) are illustrated. In GEO600 these additional mirrors are absent.}\label{DCsimple} \end{figure} \section{Model and statement of the problem} We consider the simplified model of the GEO600 interferometer as shown in Fig.~\ref{DCsimple}.
We assume that the length of the north (east) arm differs from a multiple of the laser wavelength $\lambda$ by a small displacement $x_n$ ($x_e$): \begin{align} L_n &= n\lambda +x_n, \quad L_e= \ell\lambda +x_e,\quad L_n\simeq L_e\simeq L, \end{align} where $n$ and $\ell$ are integer numbers. We also assume that the length $L$ of the north and east arms is much larger than the length $\ell_w$ of the west arm and the length $\ell_s$ of the south arm (see notations in Fig.~\ref{DCsimple}): \begin{align} \label{Lell} L \gg \ell_s,\ \ell_w. \end{align} To begin with, we assume that the power recycling (PR) and signal recycling (SR) mirrors are perfectly reflecting. In the case of perfectly tuned arms, we have two optical modes: the ``west'' mode (the standing wave is in the \textit{west} arm and is absent in the south arm) and the ``south'' mode (the standing wave is in the \textit{south} arm and absent in the west arm). The displacements $x_e,\ x_n$ of the end mirrors produce a coupling between the two modes. Thus, the Hamiltonian $\mathcal H$ for the two modes is written in the following form (for details see \cite{Law1995}): \begin{align} \mathcal H &= \hslash \omega_w a_w^*a_w\left(1-\frac{x_+}{L}\right) + \hslash \omega_s a_s^*a_s \left(1-\frac{x_+}{L}\right)\nonumber\\ \label{Hcross} &\quad + \hslash \sqrt{\omega_s \omega_w}\big(a^*_wa_s+a_wa^*_s\big)\frac{x_-}{L},\\ \label{xpm} &\boxed{x_\pm \equiv \frac{x_e\pm x_n}{2}.} \end{align} The cross term of the Hamiltonian is responsible for the occurrence of a gravitational wave signal at the photo detector. It provides information on small variations of the differential coordinate $x_-$. Deformations of the BS surfaces also result in a coupling of the two modes. Therefore, we must calculate the coupling coefficients, and particularly the cross term, to translate the BS fluctuations into effective fluctuations $x_-^\text{eff}$ of the differential coordinate.
Let us consider the eigenfrequencies $\omega_w$, $\omega_s$ and $\omega_0$ \eqref{Hcross} for this particular case: \begin{align} \label{omega0} \omega_w=\omega_s,\ \omega_0 = \omega_s \left(1 - \frac{x_+}{L}\right). \end{align} Instead of the partial field coordinates $a_s,\ a_w$ for the two coupled modes (oscillators), we introduce eigen (normal) coordinates $b_\pm$ and rewrite the Hamiltonian as follows: \begin{subequations} \label{H2} \begin{align} \mathcal H &= \mathcal H_+ + \mathcal H_-,\quad \mathcal H_\pm \equiv \hslash \omega_\pm b_\pm^* b_\pm ,\\ \label{bpm} & b_\pm \equiv \frac{a_w \pm a_s}{\sqrt 2},\quad \omega_\pm = \omega_0\left(1 \pm \frac{x_-}{L}\right). \end{align} \end{subequations} The $b_+$ mode in the east arm is called the ``east'' mode (it is absent in the north arm) and the $b_-$ mode is called the ``north'' mode. In the relations \eqref{H2}, there are two independent oscillators; for each of them we calculate the adiabatic invariants $\mathcal I_\pm$, which relate the frequency variations to the variations of the energies $\mathcal E_\pm$: \begin{subequations} \label{modespm} \begin{align} \mathcal I_+ &= \frac{\mathcal E_+}{ \omega_+},\quad \mathcal I_- = \frac{\mathcal E_-}{ \omega_-},\\ \label{Deltaomega} \frac{\Delta \omega_+}{\omega_+} & = \frac{\Delta \mathcal E_+}{\mathcal E_+} , \quad \frac{\Delta \omega_-}{\omega_-} = \frac{\Delta \mathcal E_-}{\mathcal E_-}. \end{align} The adiabatic invariant of an oscillator is conserved if its frequency changes very slowly compared to its oscillation period (here: the period of the optical oscillations). We are now interested in the effective small changes of $x_-$ and $x_+$ created by small perturbations of the BS surface.
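As a cross-check (ours, not part of the original derivation), the decoupled form \eqref{H2} follows from \eqref{Hcross} by direct substitution of \eqref{bpm}, using $\omega_w=\omega_s$ and keeping terms to first order in $x_\pm/L$:

```latex
\begin{align*}
a_w^*a_w + a_s^*a_s &= b_+^*b_+ + b_-^*b_-\,,\qquad
a_w^*a_s + a_w a_s^* = b_+^*b_+ - b_-^*b_-\,,\\
\mathcal H &= \hslash\omega_s\left(1-\frac{x_+}{L}\right)
\big(b_+^*b_+ + b_-^*b_-\big)
+ \hslash\omega_s\,\frac{x_-}{L}\,\big(b_+^*b_+ - b_-^*b_-\big)\\
&\simeq \hslash\,\omega_0\left(1+\frac{x_-}{L}\right)b_+^*b_+
+ \hslash\,\omega_0\left(1-\frac{x_-}{L}\right)b_-^*b_-
= \hslash\omega_+ b_+^*b_+ + \hslash\omega_- b_-^*b_-\,.
\end{align*}
```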
In our case, variations of the eigenfrequencies above can be written using (\ref{omega0}, \ref{bpm}): \begin{align} \label{setpm} \frac{\Delta \omega_+}{\omega_+} & = -\frac{x_+}{L} + \frac{x_-}{L},\quad \frac{\Delta \omega_-}{\omega_-} = -\frac{x_+}{L} - \frac{x_-}{L}. \end{align} Using Eqs. \eqref{setpm} and \eqref{Deltaomega}, we find the effective relative displacements: \begin{align} \label{Epm} - \frac{x_+}{L} &= \frac{1}{2}\left(\frac{\Delta \mathcal E_+}{\mathcal E_+} + \frac{\Delta \mathcal E_-}{\mathcal E_-}\right),\\ \label{xminus2} \frac{x_-}{L} & = \frac{1}{2}\left(\frac{\Delta \mathcal E_+}{\mathcal E_+} - \frac{\Delta \mathcal E_-}{\mathcal E_-}\right) . \end{align} \end{subequations} Let us assume that a small perturbation of the BS surface appears slowly on the initially flat surface. This will change $\Delta \mathcal E_+$ and $\ \Delta \mathcal E_-$, which can be computed from the elastic energy stored in the BS due to the applied ponderomotive light pressures. As the surface perturbations are small, we can apply the presented approximations and calculate the pressures with the Maxwell stress tensor of the \textit{unperturbed} field in the region around the surface deformation. \subsection{Calculations of the effective relative displacement $(x_-/L)$} For the calculation of the terms in \eqref{xminus2} it is convenient to use the eigenamplitudes $b_\pm$ \eqref{bpm}. The work performed against the pressure is equal to the change of energy in the cavity taken with opposite sign. The light pressure $p_i$ acting on the surface of a dielectric medium may be calculated by means of the Maxwell stress tensor $\sigma_{ij}$ inside and outside the material: \begin{align} \label{sigmaD} \sigma_{ij}^\epsilon &= \frac{1}{4\pi}\left(\epsilon E_iE_j +H_iH_j -\frac{\epsilon E^2+H^2}{2}\,\delta_{ij}\right). \end{align} The pressure in \eqref{sigmaD} is directed along the \textit{outer} normal of the surface.
For example, for the reflecting surface of the BS, the stress tensor is equal to the pressure acting along the $\eta$-axis (see Fig.~\ref{DCsimple}). In contrast, the stress tensor calculated for the fields directly \textit{beneath} the reflecting surface (inside the BS) is equal to the pressure acting along the $(-\eta)$-axis (compare Fig.~\ref{Mstress}). \begin{figure}[b] \includegraphics[width=0.25\textwidth]{figure2.pdf} \caption{The Maxwell stress tensor $\sigma_{nn}$ leads to the pressure $p_n$ along the \textit{outer} normal $\vec n$ to the boundary.}\label{Mstress} \end{figure} The total normal pressure is equal to the difference of the pressures inside and outside the BS at the surface: \begin{align} \label{pi} p_i=p_i^\text{outer} - p_i^\text{inside}. \end{align} The relative energy variations $\Delta \mathcal E_\pm $ can be calculated as the work of the ponderomotive light forces resulting from the total normal pressure: \begin{align} \label{DeltaE} \frac{\Delta \mathcal E_\pm }{\mathcal E_\pm} &= - \int F_\pm (\vec r_\bot)\,u(\vec r_\bot)\, d\vec r_\bot,\quad F_\pm = \frac{p_\pm (\vec r_\bot)}{\mathcal E_\pm }, \end{align} where $ u(\vec r_\bot)$ is a small perturbation of the BS surface in the normal direction, depending on the coordinate $\vec r_\bot$ on the BS surfaces. The averaging functions $F_\pm$ are defined by the pressures $p_\pm$ calculated for the corresponding modes and have to be applied to \textit{both} sides of the BS, as will be shown later. For small perturbations $\Delta \mathcal E \ll\mathcal E$, the functions $F_\pm$ depend only on geometric factors, i.e., on the surface perturbations, the radius of the light beam, and the refractive index and thickness of the BS. They do not depend on the mode amplitudes $b_\pm$, because the energies $\mathcal E_\pm$ and the pressures $p_\pm$ are both proportional to $|b_\pm|^2$.
\subsection{Brownian Noise Power Spectral Density}\label{SD} Following the FDT \cite{Callen1951PR, LL5,Levin1998}, we have to apply virtual pressures $p(\vec r) =F_0 p_\pm(\vec r)\sin\omega t$, oscillating with an angular frequency $\omega$, to calculate the mean dissipated power $W(\omega)$ in the BS and finally the Brownian noise power spectral density $S_{BS}(\omega)$: \begin{align}\label{SBS} S_{BS}(\omega) &= \frac{8 k_BT W(\omega)}{\omega^2 F_0^2}, \end{align} \noindent where $k_B$ is the Boltzmann constant and $T$ is the absolute temperature. In this paper, we are interested in structural Brownian noise \cite{Saulson1990PRD}. In the structural loss model, the dissipated power $W$ is defined by the phenomenological loss angle $\phi$: \begin{align} \label{Energy} W = \mathbb E \omega\phi, \end{align} where $\mathbb E$ is the mean elastic energy stored in the BS. The pressures on the beam splitter $p_\pm$ have striped patterns \cite{14a1HeCrGrHiLuNaSiVaVyWi,18a1TuLeVyPLA}, i.e., they are proportional to $\cos^2 (k\xi/\sqrt 2)$ ($k\equiv \omega_0/c$, $c$ is the speed of light). Hence, the applied pressure can be divided into two parts: 1) a smooth (non-striped) pressure $P_\text{sm}$ and 2) a striped pressure $P_\text{str}\sim \cos \sqrt 2 k\xi$. The elastic energies for problems 1 and 2 can be calculated separately.
Indeed, the total energy can be calculated as an integral over both sides of the BS: \begin{subequations} \begin{align} \mathbb E&=\frac{1}{2}\int \left(P_\text{sm}(\vec r_\bot)+P_\text{str}(\vec r_\bot)\right)\\ &\qquad \times \left(u_\text{sm}(\vec r_\bot)+u_\text{str}(\vec r_\bot)\right)d\vec r_\bot\nonumber\\ &= \mathbb E_\text{sm} + \mathbb E_\text{str} + \mathbb E_\times ,\\ \mathbb E_\text{sm} &= \frac{1}{2}\int P_\text{sm}(\vec r_\bot)u_\text{sm}(\vec r_\bot)\, d\vec r_\bot,\\ \mathbb E_\text{str} &= \frac{1}{2}\int P_\text{str}(\vec r_\bot)u_\text{str}(\vec r_\bot)\, d\vec r_\bot,\\ \mathbb E_\times &=\frac{1}{2}\int \left[ P_\text{sm}(\vec r_\bot)u_\text{str}(\vec r_\bot) + P_\text{str}(\vec r_\bot)u_\text{sm}(\vec r_\bot)\right] d\vec r_\bot\,, \end{align} \end{subequations} where $u_\text{sm}(\vec r_\bot)$ and $u_\text{str}(\vec r_\bot)$ are the surface displacements caused by the smooth and striped pressures, respectively. Obviously, the cross-term energy $\mathbb E_\times$ will be negligibly small after integration over the surface, due to the fast oscillating multiplier $\cos \sqrt 2 k\xi$. To the best of our knowledge, there is no approach to solve problem 1 analytically. Hence, we solve it numerically using the finite element tool COMSOL Multiphysics \cite{COMSOL}. In contrast, problem 2 can be solved analytically using the well-known solution for a semi-infinite elastic medium \cite{14a1HeCrGrHiLuNaSiVaVyWi}. This method can be applied because the pressure contribution with fast spatial oscillation, characterized by the wave vector $\sqrt 2 k$, leads only to deformations located close to the surface. In particular, the dilatation $\Theta=u_{xx} + u_{yy} + u_{zz}$ decreases along the $z$-axis normal to the surface as $\sim e^{-\sqrt 2 kz}$.
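The suppression of the cross term can be illustrated numerically. The sketch below is our illustration, in arbitrary units and with a much smaller scale separation ($Kr_0=7$) than the physical one ($\sqrt 2\,k r_0\sim 10^5$, which would push the overlap far below machine precision); it compares the overlap integral of a Gaussian envelope with and without the fast factor $\cos K\xi$:

```python
import numpy as np

# Overlap of a Gaussian envelope of width r0 with a fast oscillation
# cos(K*xi): for K*r0 >> 1 it is suppressed by exp(-(K*r0)^2/2).
r0 = 1.0        # envelope width (arbitrary units)
K = 7.0 / r0    # modest scale separation, for illustration only

xi = np.linspace(-12 * r0, 12 * r0, 200_001)
h = xi[1] - xi[0]
env = np.exp(-xi**2 / (2 * r0**2))

def trapz(f):
    # composite trapezoidal rule; spectrally accurate here because the
    # integrand has decayed to ~0 at both interval ends
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

smooth = trapz(env)                    # ~ sqrt(2*pi)*r0
striped = trapz(env * np.cos(K * xi))  # exponentially suppressed

analytic = np.sqrt(2 * np.pi) * r0 * np.exp(-(K * r0)**2 / 2)
print(striped / smooth)  # ratio ~ 2.3e-11 already at K*r0 = 7
```

At the physical value $\sqrt 2\,k r_0$, the suppression factor $e^{-(Kr_0)^2/2}$ is zero for all practical purposes, which is why $\mathbb E_\times$ can be dropped.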
\section{Calculations of pressures} We assume all light beams in the interferometer to have a Gaussian amplitude distribution over the cross section: \begin{align} \label{f0} f_0(r_\bot) = \frac{e^{-r_\bot^2/2r_0^2}}{\sqrt{\pi r_0^2}},\quad \int|f_0|^2 \, d\vec r_\bot =1, \end{align} where $d\vec r_\bot \equiv r_\bot \, dr_\bot\, d\phi$. For gravitational wave detectors we can assume that the beam radius $r_0$ is \textit{large}: \begin{align} \label{condr0} r_0\gg \lambda=\frac{2\pi}{k}, \end{align} where $\lambda$ is the wavelength. Hence, we can consider the wave in the cavity as a plane wave with an amplitude multiplied by the Gaussian factor $f_0$, omitting terms of higher order $\sim 1/(kr_0)$ (see for example \cite{kogelnik1966, davis1979, davis1981}). For the calculations, we introduce the following relations between the coordinates $(x_e,\ x_n)$ and $(\xi,\ \eta)$ (see Fig.~\ref{DCsimple}): \begin{align} x_e &=\frac{\xi -\eta}{\sqrt 2},\quad x_n = \frac{\xi +\eta}{\sqrt 2},\\ \xi &= \frac{x_e + x_n}{\sqrt 2},\quad \eta =\frac{- x_e + x_n}{\sqrt 2}\,. \end{align} Inside the BS the light propagates at an angle $\alpha$ with respect to the BS axis (see Fig.~\ref{BSplusS}): \begin{align} \label{sin} \sin \alpha =\frac{1}{n\sqrt 2} ,\quad a= h \tan \alpha. \end{align} On the AR surface there are two centers of Gaussian distributions separated by a distance $2a$ (see Fig.~\ref{BSplusS}). Inside the BS, the beams propagate along the axes $y_e,\ y_n$. These coordinates may be expressed in terms of the coordinates $(\xi,\, \eta)$: \begin{align} y_e & =\xi \sin \alpha + \eta \cos\alpha,\\ y_n &= \xi\sin\alpha - \eta \cos \alpha. \end{align} We calculate the fields and pressures for different polarizations of the traveling waves: s-polarization (the electric field vector is normal to the plane of the figure) and p-polarization (the magnetic field vector is normal to the plane of the figure).
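The normalization in \eqref{f0} and the refraction geometry behind Table~\ref{table1} can be checked with a few lines of code (our sketch; the beam radius is an illustrative value):

```python
import math

# 1) Normalization of |f0|^2 over the transverse plane, in polar
#    coordinates (midpoint rule on a fine grid).
r0 = 0.042  # illustrative beam radius in m (r0 = w0/sqrt(2))
N = 200_000
dr = (10 * r0) / N
norm = sum(
    math.exp(-r * r / (r0 * r0)) / (math.pi * r0 * r0) * 2 * math.pi * r * dr
    for r in (dr * (i + 0.5) for i in range(N))
)
print(norm)  # -> 1.0 up to discretization error

# 2) Refraction angle inside the BS: sin(alpha) = 1/(n*sqrt(2)),
#    which implies tan(alpha) = 1/sqrt(2*n^2 - 1).
n = 1.45
alpha = math.asin(1.0 / (n * math.sqrt(2)))
tan_alpha = 1.0 / math.sqrt(2 * n * n - 1)

# 3) Beam-centre offset a = h*tan(alpha) = h/sqrt(2*n^2 - 1),
#    reproducing the Table 1 values from the BS thicknesses.
a_aligo = 0.064 * tan_alpha  # h = 0.064 m -> a ~ 0.036 m (aLIGO)
a_geo = 0.08 * tan_alpha     # h = 0.08 m  -> a ~ 0.045 m (GEO600)
print(round(a_aligo, 3), round(a_geo, 3))
```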
As a result, after simple but cumbersome calculations presented in Appendix \ref{app1}, we obtain the averaging functions on both surfaces of the BS and for each polarization orientation. We present these functions in the following two subsections. \subsection{S-polarization summary}\label{secS} For the GEO600 interferometer and s-polarization, we obtain the following equation for the eigenfrequency shift, i.e., the effective relative displacement $x_-/L$. For the reflecting surface R of the BS we obtain, using \eqref{xminus2} and (\ref{prS+}, \ref{prSmb}), \begin{align} \left.\frac{x_-}{L}\right|_{rS} &=\int \frac{F_r^s(\vec r_\bot)}{L}\,u_\bot(\vec r_\bot)\, d\vec r_\bot,\\ \frac{F_r^s(\vec r_\bot)}{L} &\equiv \left(\frac{p^{rS+}}{\mathcal E_+} - \frac{p^{rS-}}{\mathcal E_-}\right)\\ \label{Rsmode} & = \frac{f_\bot^2}{2L}\left\{\left[2n -\frac{1}{n} +1\right]\right.\\ &\qquad \left. -\left[1 + \frac{1}{n} +2\sqrt 2(n-1) \right] \cos 2 k_{bs}\xi\right\}, \nonumber \end{align} where we use the notations of \eqref{sin} and \begin{align} \label{kbs} k_{bs} & = \frac{k}{\sqrt 2},\\ \label{fbot} f_\bot &\equiv \frac{1}{\sqrt{\pi r_0^2}} \exp\left(-\frac{z^2 + 0.5\xi^2}{2r_0^2}\right). \end{align} For the anti-reflecting surface AR of the BS we obtain, using \eqref{xminus2} and (\ref{parS+E}, \ref{parS-}), \begin{align} \left.\frac{x_-}{L}\right|_{arS} &=\int \frac{F_{ar}^s(\vec r_\bot)}{L}\,u_\bot(\vec r_\bot)\, d\vec r_\bot, \\ \frac{F_{ar}^s(\vec r_\bot)}{L} &\equiv \left(\frac{p^{arS+}}{\mathcal E_+} - \frac{p^{arS-}}{\mathcal E_-}\right)\\ \label{ARsmode} &= \frac{1}{2L} \left\{ \left( f_+^2\right)\left[1 - 2n +\frac{1}{n}\right]\right.\\ &\quad \left. + 2\sqrt 2(n-1)f_- f_+ \cos k_{bs}(\xi_+ + \xi_-) \right.\nonumber\\ &\qquad \left.
- f_+^2 \left(1 - \frac{1}{n}\right) \cos 2 k_{bs} \xi_+ \right\}.\nonumber \end{align} Here, we use the following notations (\ref{sin}, \ref{kbs}, \ref{fbot}) and definitions: \begin{align} \label{fpm} f_{\pm} & = \sqrt\frac{1}{\pi r_0^2}\exp\left(-\frac{z^2+0.5(\xi\mp a)^2}{2r_0^2}\right),\\ & \label{xipm} \xi_\pm \equiv \xi \mp a\,. \end{align} \subsection{P-polarization summary}\label{secP} For the GEO600 interferometer and p-polarization, we obtain the following equations for the eigenfrequency shift, i.e., the effective $x_-/L$. For the R surface we obtain, using \eqref{xminus2} and (\ref{FrP+}, \ref{prPm}), \begin{align} \left.\frac{x_-}{L}\right|_{rP} &=\int \frac{F_{r}^p(\vec r_\bot)}{L}\,u_\bot(\vec r_\bot)\, d\vec r_\bot,\\ \frac{F_{r}^p(\vec r_\bot)}{L} &\equiv \left(\frac{p^{rP+}}{\mathcal E_+} - \frac{p^{rP-}}{\mathcal E_-}\right)\\ \label{Rpmode} &= \frac{f_\bot^2}{2L}\left\{\left[2n - \frac{1}{n} +1\right] \right.\\ &\quad \left.+\left[1+ \frac{1}{n} + 2\sqrt 2\big(n-1\big)\right] \cos 2 k_{bs}\xi\right\}. \nonumber \end{align} For the AR surface we obtain, using \eqref{xminus2} and (\ref{parP+}, \ref{parP-}), \begin{align} \left.\frac{x_-}{L}\right|_{arP}& =\int \frac{F_{ar}^p(\vec r_\bot)}{L}\,u_\bot(\vec r_\bot)\, d\vec r_\bot, \\ \frac{F_{ar}^p(\vec r_\bot)}{L} &\equiv \left(\frac{p^{arP+}}{\mathcal E_+} - \frac{p^{arP-}}{\mathcal E_-}\right)\\ \label{ARpmode} &= \frac{1}{2L} \left\{ \left( f_+^2\right)\left[1 - 2n + \frac{1}{n}\right]\right.\\ &\qquad \left. - 2\sqrt 2(n-1)f_- f_+ \cos k_{bs}(\xi_+ +\xi_-) \right.\nonumber\\ &\qquad \left. +\left( f_+^2\right) \left(1 - \frac{1}{n}\right) \cos 2k_{bs}\xi_+ \right\}. \nonumber \end{align} We see that the ``smooth'' parts of the pressure acting on the R and AR surfaces are the same for both s- and p-polarizations (compare \eqref{Rsmode} with \eqref{Rpmode} and \eqref{ARsmode} with \eqref{ARpmode}).
However, the ``striped'' contributions have opposite signs for s- and p-polarization while being equal in absolute value. Hence, for non-polarized light and for the case of equal s- and p-polarized field amplitudes, the ``striped'' term vanishes. \subsection{Generalization for the aLIGO interferometers} The results in subsections \ref{secS} and \ref{secP} are obtained for the GEO600 configuration \textit{without} input mirrors (IM), shown by dashed rectangles in Fig.~\ref{DCsimple}. For the calculation of the mode energies $\mathcal E_\pm$, we use the fact that the amplitudes of the fields on the BS are nearly the same as in the arms, and in addition we utilize the approximation \eqref{Lell}. For advanced LIGO, we have to recalculate the energies $\mathcal E_\pm$ by taking into account that the mean amplitudes in the arms are larger than on the BS by a factor of approximately $2/\sqrt{T_\mathrm{IM}}$. Here $T_\mathrm{IM}$ represents the power transmittance of the IM. Hence, we can generalize, for example, formula \eqref{Rsmode} for s-polarization on the R surface by the transformation: \begin{align} \label{rule} F_r^s|_\text{aLIGO} &= F_r^s|_\text{GEO}\times \frac{T_\mathrm{IM}}{4}. \end{align} Obviously, the other formulas (\ref{ARsmode}, \ref{Rpmode}, \ref{ARpmode}) can be generalized for advanced LIGO using the same transformation \eqref{rule}.
\begin{table}[b] \caption{Parameters of the BS for GEO600 and aLIGO.}\label{table1} \begin{tabular}{|p{0.25\textwidth} | r|r|} \hline \hline Parameters & aLIGO & GEO600 \\ \hline \hline Radius $R$ of BS, m & $0.1875$ & $0.13$ \\ Height $h$ of BS, m & $0.064$ & $0.08$\\ Refractive index $n_{\text{SiO}_2}$ & $1.45$ & $1.45$\\ $ a =\frac{h}{\sqrt{2n^2 -1}}$, m & $0.036$ & $0.045$\\ Radius $w_0=\sqrt 2 r_0$ of light beam on BS, m \footnote{$r_0$ is the beam radius for the \textit{intensity} distribution, which is proportional to $\exp[-r^2/r_0^2]$, whereas $w_0$ is the radius for the \textit{amplitude} distribution, which is proportional to $\exp[-r^2/w_0^2]$. So $w_0 = \sqrt 2\, r_0$.} & $ 0.06$ & $0.0088$\\ Young's modulus $E_{\text{SiO}_2}$, GPa & 73.1 & 73.1\\ Poisson's ratio $\nu_{\text{SiO}_2}$ &0.17 &0.17 \\ Density $\rho_{\text{SiO}_2}$, kg/m$^3$ & 2203 & 2203\\ Loss angle $\phi_{\text{SiO}_2}$ & $10^{-8}$ & $10^{-8}$\\ Power transmittance of input mirrors in arms of aLIGO & $5\times 10^{-3}$ & --- \\ \hline \hline \end{tabular} \end{table} \section{Calculation of the elastic energy} To calculate the spectral density of the Brownian BS noise in terms of relative displacements $x_-/L$ for s-polarization, we have to apply virtual pressures $p_{rs}$ to the R surface of the BS using \eqref{Rsmode} and $p_{ars}$ to the AR surface using \eqref{ARsmode}: \begin{align}\label{press} p_{rs}= F_0 F_r^s(\vec r) \sin\omega t,\quad p_{ars}= F_0 F_{ar}^s(\vec r) \sin\omega t, \end{align} where $F_0$ is a constant, see \eqref{SBS}. For the p-polarization we should use (\ref{Rpmode}, \ref{ARpmode}). Then we calculate the mean elastic energy $\mathbb E$ stored in the BS and substitute this result into \eqref{SBS} by taking \eqref{Energy} into account. Since the elastic problem cannot be solved analytically, we perform the computations numerically using COMSOL \cite{COMSOL}. We use the parameters for the BS of GEO600 and advanced LIGO listed in Table~\ref{table1}.
For the following considerations, we account only for the smooth contributions of the pressures \eqref{press}, which are equal for both polarizations (see Eqs. (\ref{Rsmode}, \ref{ARsmode}, \ref{Rpmode}, \ref{ARpmode})). For the solution of the elastic problem, we have to fulfill two conditions: a) the sum of all external forces and b) the total torque of all external forces must both vanish. However, the pressures integrated over the R and the AR surfaces do not cancel, i.e., the total force acting on the BS is not equal to zero. Hence, in analogy to inertial forces we have to apply an additional volume force $f$, which is \textit{homogeneously distributed} over the BS such that the total force is equal to zero \cite{Liu2000}: \begin{align} f &= - \frac{1}{\pi R^2 h}\int \left[ p_{r}(\vec r_\bot) + p_{ar}(\vec r_\bot)\right] d S\\ &= -\frac{\sqrt 2 F_0}{\pi R^2h}\times (1-\epsilon_F) , \end{align} where the integration is performed over the area of the BS, having radius $R$ and height $h$. In the approximation $R\to \infty$, the coefficient $\epsilon_F$ is zero. For the parameters of advanced LIGO and GEO600 in Table \ref{table1}, the numerical computations yield: \begin{align} \epsilon_F^\text{LIGO} &\simeq -0.000142,\\ \epsilon_F^\text{GEO} &\simeq 3.3\times 10^{-16}. \end{align} The total torque of the external forces is not zero, because the centre of $p_{ar}$ is shifted from the cylinder's symmetry axis by the distance $a$. Hence, we have to introduce an additional volume force $f_\text{add}\sim y/R$ in order to eliminate the effective torque: \begin{align} f_\text{add} &= (1-\epsilon_T) \frac{\sqrt 2\, F_0}{\pi R^2 h}\left(1 - 2n +\frac{1}{n} \right) \times \frac{2a y}{R^2}, \end{align} where the small coefficient $\epsilon_T$ results from the finite dimensions of the BS. In the approximation $R\to \infty$, we retrieve $\epsilon_T\to 0$.
For the parameters in Table~\ref{table1}, numerical computations lead to: \begin{align} \epsilon_T^\text{LIGO} &\simeq 0.00576962,\\ \epsilon_T^\text{GEO} &\simeq 1.19\times 10^{-13}. \end{align} \subsubsection{Substrate thermal Brownian noise of the BS} In order to calculate the Brownian noise from thermal fluctuations in the BS substrate, we calculate the mean elastic energy stored in the BS. Using COMSOL, we performed numerical calculations of the mean elastic energy for the parameters in Table~\ref{table1}: \begin{align} \label{Eligo} \mathbb E_\text{aLIGO} &= 3.98 \times 10^{-10} \ J\times \frac{F_0^2}{N^2}, \\ \label{Egeo} \mathbb E_\text{GEO} &= 1.97 \times 10^{-9} \ J\times \frac{F_0^2}{N^2}. \end{align} Using \eqref{SBS}, we compute the power spectral density $S_\text{BS-}$ of the BS Brownian noise, recalculated to the differential coordinate $x_-$: \begin{align} \label{estLIGO} \left.\frac{\sqrt{S_\text{BS-}(\omega)}}{L}\right|_\text{LIGO} &\simeq 4.5\times 10^{-27}\, \frac{1}{\sqrt\text{Hz}}, \\ \label{estGEO} \left.\frac{\sqrt{S_\text{BS-}(\omega)}}{L}\right|_\text{GEO} &\simeq 2.7\times 10^{-23}\, \frac{1}{\sqrt\text{Hz}}, \end{align} at the angular frequency $\omega= 2\pi\times 100 \, \text{s}^{-1} $ for advanced LIGO and GEO600, respectively. For advanced LIGO we have taken the transformation \eqref{rule} into account. The current sensitivity of advanced LIGO \cite{aLIGO2015b} is about $\sqrt{S_x}/L \simeq 5\times 10^{-23}\, 1/\sqrt\text{Hz}$; for the future cryogenic LIGO Voyager \cite{Voyager2014} the planned sensitivity is about $\sqrt{S_x}/L \simeq 8\times 10^{-25}\, 1/\sqrt\text{Hz}$. Since the substrate BS noise is substantially smaller than this sensitivity, LIGO Voyager will not be limited by it. The current sensitivity of GEO600 \cite{grote2010} is about $\sqrt{S_x}/L \simeq 3\times 10^{-22}\, 1/\sqrt\text{Hz}$. It is about 10 times larger than the BS noise evaluated here.
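These estimates can be reproduced from \eqref{SBS} and \eqref{Energy} together with the energies \eqref{Eligo} and \eqref{Egeo}. In the sketch below, the temperature ($T=300$~K) and the effective arm lengths (4~km for aLIGO, 1.2~km for the folded 600~m arms of GEO600) are our assumptions, as they are not stated explicitly above:

```python
import math

kB = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
phi = 1e-8                  # substrate loss angle (Table 1)
omega = 2 * math.pi * 100   # angular frequency, rad/s

# Mean elastic energies from the COMSOL runs, in J per (F_0/N)^2
E = {"aLIGO": 3.98e-10, "GEO": 1.97e-9}

# Assumed effective arm lengths, m: 4 km (aLIGO) and 1.2 km for
# the folded 600 m arms of GEO600
L = {"aLIGO": 4000.0, "GEO": 1200.0}

T_IM = 5e-3                              # input-mirror power transmittance
scale = {"aLIGO": T_IM / 4, "GEO": 1.0}  # aLIGO rescaling of the kernels

amp = {}
for det in ("aLIGO", "GEO"):
    # S_BS = 8 kB T W / (omega^2 F_0^2) with W = E omega phi
    S = 8 * kB * T * E[det] * phi / omega
    amp[det] = math.sqrt(S) * scale[det] / L[det]
    print(det, f"{amp[det]:.1e}")  # aLIGO ~ 4.5e-27, GEO ~ 2.7e-23
```

Both values land within a percent of the quoted estimates, which supports the assumed temperature and length conventions.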
\begin{table}[b] \caption{Parameters of the reflecting and anti-reflecting coatings of the BS for advanced LIGO \cite{coyne2005, billingsley2018}.}\label{table2} \begin{tabular}{|p{0.28\textwidth} | r|r|} \hline \hline Parameters & SiO$_2$ & Ta$_2$O$_5 $ \\ \hline \hline Refractive index & 1.45 & 2.1\\ Young's modulus, GPa & 73.1 & 140\\ Poisson's ratio &0.17 &0.23 \\ Loss angle & $1\times 10^{-4}$ & $4\times 10^{-4}$\\ R coating total thickness $h_r$, nm & $ 523$ & $320$\\ AR coating total thickness $h_{ar}$, nm & $517$ & $ 359$\\ \hline \hline \end{tabular} \end{table} \subsubsection{Coating thermal Brownian noise of the BS} The Brownian noise of the R and AR coatings has to be calculated separately, because the loss angles of the alternating layers of Ta$_2$O$_5$ and SiO$_2$ are much larger than the loss of the substrate. These coatings are required to provide the optical function of the reflective and anti-reflective BS surfaces. For advanced LIGO, the parameters of the BS coatings are listed in Table~\ref{table2}. The parameters of the GEO600 coatings are not published. Therefore, we assume the optical coating design to be the same as for LIGO (see Table~\ref{table2}). We calculate the energy stored in the coatings with the assumption that the coatings are thin compared to the BS thickness. Thus, the strains in the layers are approximately the same as on the upper side of the substrate. We use the following expression for the volume density $w$ of the elastic energy in the layer \cite{Saulson1990PRD, 14a1HeCrGrHiLuNaSiVaVyWi}: \begin{align} \label{wn2} w_n & = \frac{1}{2} \frac{(1+\nu)(1-2\nu) \sigma_{zz}^2}{Y(1-\nu)} \nonumber\\ &\quad + \frac{Y \,(u_{xx}+u_{yy})^2}{4(1-\nu)}+ \frac{Y \,(u_{xx}-u_{yy})^2}{4(1+\nu)} \nonumber\\ &\qquad + \frac{Y}{1+\nu}\times u_{xy}^2.
\nonumber \end{align} Here $\sigma_{zz}$ is the normal component of the stress tensor, $Y$ and $\nu$ are Young's modulus and Poisson's ratio of the layers (made of Ta$_2$O$_5$ or SiO$_2$), and $u_{ij}$ are the tangential components of the strain tensors, which are the same as on the substrate's surface. The total energy stored in the SiO$_2$ and Ta$_2$O$_5$ layers of the R and AR coatings can be calculated as: \begin{align} \mathbb E_\text{coat}&= h_{r\, \text{SiO}_2} \int w_n|_{r\, \text{SiO}_2}\, dS_r \nonumber\\ &\qquad +h_{r\, \text{Ta}_2\text{O}_5} \int w_n|_{r\, \text{Ta}_2\text{O}_5}\, dS_r \nonumber\\ &\qquad + h_{ar\, \text{SiO}_2} \int w_n|_{ar\, \text{SiO}_2}\, dS_{ar} \nonumber\\ &\qquad +h_{ar\, \text{Ta}_2\text{O}_5} \int w_n|_{ar\, \text{Ta}_2\text{O}_5}\, dS_{ar}, \end{align} where the integration is carried out over the R and AR surfaces, respectively. Performing a numerical integration using the $u_{ij}$ obtained for the substrate and the parameters listed in Table~\ref{table2}, we obtain at $\omega=2\pi\times 100\,\text{s}^{-1}$: \begin{align} \label{est2GEO} \left.\frac{\sqrt{S_\text{BS-}(\omega)}}{L}\right|_\text{GEO} &\simeq 6.2\times 10^{-23}\, \frac{1}{\sqrt\text{Hz}},\\ \label{est2LIGO} \left.\frac{\sqrt{S_\text{BS-}(\omega)}}{L}\right|_\text{LIGO} &\simeq 0.98\times 10^{-26}\, \frac{1}{\sqrt\text{Hz}}. \end{align} \noindent Fig.~\ref{fig:GEO600sensitivity} shows the BS Brownian noise in comparison to the other major limitations of the GEO600 sensitivity, namely the mirror coating Brownian noise and the beam-splitter thermorefractive noise. Over the whole detection bandwidth of GEO600, the BS Brownian noise is about 35$\%$ lower than the mirror coating Brownian noise. For frequencies above 100\,Hz, the BS Brownian noise exceeds the BS thermorefractive noise. At GEO600's most sensitive frequency of 500\,Hz, the BS noise \eqref{est2GEO} accounts for approximately $10\%$ of the current sensitivity. Fig.
\ref{fig:GEO600sensitivity} also shows the sensitivity curve of the proposed GEO-HF upgrade \cite{Grote2010CQG}. This curve does not yet take the BS Brownian noise into account. As illustrated in Fig. \ref{fig:GEO600sensitivity}, the BS Brownian noise impairs the feasible sensitivity of GEO-HF by about $50\%$ in the frequency range between $50$ and $1000$ Hz. \begin{figure} \includegraphics[width=0.5\textwidth]{figure3.pdf} \caption{Noise $\sqrt{S}/L$ of GEO600 in $1/\sqrt{\text{Hz}}$ versus mechanical readout frequency $f$ for the following thermal noise contributions: the mirror coating Brownian noise \cite{Lueck2010JOP}, the BS thermorefractive noise \cite{benthem2009}, and the BS Brownian noise computed in this work. Additionally, the measured sensitivity of 2009 \cite{Lueck2010JOP} and the design sensitivity of GEO-HF \cite{Grote2010CQG} are shown.}\label{fig:GEO600sensitivity} \end{figure} \section{Conclusion} In this contribution we applied the direct method of thermal noise calculation from first principles formulated in \cite{18a1TuLeVyPLA} to the computation of Brownian thermal noise of beam splitters in gravitational wave interferometers. We have demonstrated how the light pressure on both the reflective and anti-reflective surfaces contributes to the total thermal noise of the BS. To this end, we took finite-sized BS substrates and the coating contributions into account. An important new ingredient in our calculations is taking into account the striped pattern of the form factor that represents the sensitivity of the interferometer's readout to the BS surface displacement. The pattern is due to the standing waves inside each of the arms.\footnote{We note that similar standing-wave patterns are important in computations of thermoelastic \cite{99a1BrGoVy} and thermorefractive \cite{00a1BrGoVy, benthem2009} noise of the BS.} The striped contribution of the pressure turns out to be negligibly small for the substrate Brownian noise.
However, it increases the total spectral density of the coating Brownian noise by about 50\% (see Sec. IVB of \cite{14a1HeCrGrHiLuNaSiVaVyWi}). The results show that the BS noise is negligibly small for advanced LIGO. For GEO600, however, it accounts for about 10\% of the current noise budget in the most sensitive frequency band. Furthermore, the BS noise impairs the feasible sensitivity of the proposed GEO-HF design by about 50\%. The reason is that the additional Fabry-Perot arm cavities in LIGO lead to a smaller light power at the BS compared to the power circulating in the arms; hence, the BS noise is suppressed relative to the test mass noise. In GEO600, by contrast, the contribution of the BS noise is much larger because the light power at the BS is the same as at the test masses. Overall, the Brownian BS noise is not a critical issue for the current sensitivity of gravitational wave detectors. However, the presented approach is based on a first-principles method with the Hamiltonian as starting point. It is thus useful for noise computations in other interferometer topologies, for example Sagnac interferometers or other complex optical devices. \acknowledgments We are grateful to Garilynn Billingsley for providing us with the information about the thickness of the coating layers on the aLIGO beamsplitters. S.K. acknowledges partial support from EURAMET within project 17FUN05 PhotOQuant. S.V. acknowledges partial support from the Russian Science Foundation (Grant No. 17-12-01095) and from the TAPIR GIFT MSU Support of the California Institute of Technology. This document has LIGO number P1800211-v1.
\section{Conclusion} \label{sec:conclusion} In this paper we demonstrated the efficacy of VQA datasets generated using 3D computer graphics to incorporate new skills into existing VQA models trained on real data. In particular, we showed that we can teach a VQA model to count objects in the real world using only synthetic data, without decreasing model performance on other types of questions. This is challenging since real and synthetic datasets often exhibit a large domain gap. We further proposed \fswap{} as a simple yet effective technique for domain adaptation that matches and surpasses previous methods in our experiments. \section{Broader Impact} The main ethical aspects of this work have to do with data privacy and mitigating implicit biases in existing VQA models. In this work, we explored 3D simulation platforms to generate realistic synthetic data as a promising direction to augment or replace existing datasets, effectively avoiding the exposure of potentially sensitive information. However, our approach is not able to generate a broad diversity of animated objects (e.g., people or animals interacting in the scenes), given that Hypersim only contains indoor scenes and TDW provides a limited number of these types of model assets. While we leverage this information in our generated data, a more complex 3D system would be ideal to craft a more diverse set of animated objects. Therefore, this work is a stepping stone for further explorations to address this issue in the future. \vspace{0.04in} \textbf{Acknowledgments} This work was supported in part by the National Science Foundation under Grants No.~\#2221943 and \#2040961. \clearpage {\small \bibliographystyle{ieee_fullname} \section{Introduction} \label{sec:intro} \begin{figure}[tbh] \centering \includegraphics[width=0.96\linewidth]{Images/Figure1.pdf} \vspace{-0.05in} \caption{Training samples for VQA from real and synthetic datasets.
The first row shows existing examples from the {VQA~2.0} dataset. The second row shows examples from Hypersim~\cite{Roberts2020HypersimAP}, a hyper-realistic synthetic dataset we extend for VQA. The third row shows some examples we generate using ThreeDWorld~\cite{Gan2020ThreeDWorldAP}. We show type-specific questions for each dataset, i.e., counting questions, color-related questions, and yes/no questions. } \vspace{-0.25in} \label{fig:lead} \label{fig:num_labeled} \end{figure} Data augmentation is an effective way to achieve better generalization on several visual recognition and natural language understanding tasks. Existing work on Visual Question Answering (VQA) has explored augmenting the pool of questions and answers, e.g.~by perturbing or masking some parts of the images~\cite{kafle2017data,tang2020semantic,agarwal2020towards, PatchMix_2021_BMVC}. Moreover, curating large-scale datasets is a laborious task, and sourcing images is an expensive process that needs to account for practical issues such as copyright and privacy. Augmenting existing datasets with synthetically generated data offers a path to enhance our existing data-driven models at a lower cost. Our work focuses on leveraging synthetically generated data through the use of modern 3D computer graphics, using two novel resources -- Hypersim~\cite{Roberts2020HypersimAP} and ThreeDWorld~\cite{Gan2020ThreeDWorldAP}. In the past, leveraging synthetic data has proven challenging due to the particularly wide domain gap between synthetic images and real images. However, there have been some successes in tasks such as eye gaze estimation~\cite{shrivastava2017learning}, embodied agent navigation~\cite{savva2017minos,deitke2020robothor,savva2019habitat}, and autonomous driving~\cite{prakash2019structured, richter2021enhancing}.
There have also been synthetic datasets for visual question answering, such as CLEVR~\cite{Johnson2017CLEVRAD}, CLEVRER~\cite{Yi2020CLEVRERCE}, and VQA Abstract~\cite{antol2015vqa}. However, these VQA datasets build a closed world that is not designed to generalize to real-world images. Remarkably, some recent work has managed to show domain transfer from cartoon images to real images~\cite{Zhang2021DomainrobustVW}, but there is still a limit on how much can be learned from these existing resources. Our proposed Hypersim-VQA and ThreeDWorld-VQA datasets provide a promising alternative that more realistically captures real-world settings and offers a path forward in this direction. Figure~\ref{fig:lead} shows synthetic image samples along with samples from the VQA 2.0 dataset~\cite{goyal2017making}. Our work also proposes feature swapping (\fswap{}) as a simple yet effective method to augment an existing VQA dataset with computer-graphics-generated examples. Existing methods for domain adaptation rely on the assumption that adaptation can be addressed by making the out-of-domain samples match the distribution of the in-domain samples. However, current work often operationalizes this assumption by making the input images themselves look more like the real images, e.g.~\cite{tzeng2017adversarial,hoffman2018cycada,rodriguez2019domain}. While these techniques have succeeded in a number of domain adaptation scenarios, we argue that adapting the input space is a harder problem than needs to be solved for effective domain adaptation. Feature swapping instead swaps random object-level intermediate feature representations. We posit that, unless realistic style transfer from the input domain to the target domain is explicitly desired, domain adaptation can take place as long as the two domains are matched at the feature level.
We explain and compare our \fswap{} approach with other methods such as adversarial domain adaptation and demonstrate superior results. \vspace{0.04in} \noindent Our contributions can be summarized as follows: \begin{compactitem} \item Dataset generation: We provide an extension of the Hypersim dataset for VQA and automatically create a synthetic VQA dataset using ThreeDWorld. \vspace{0.04in} \item Feature swapping (F-SWAP): We propose a surprisingly simple yet effective new technique for incorporating synthetic images in our training while mitigating domain shift. Our method does not rely on GANs or adversarial losses, which can be difficult to train. \vspace{0.04in} \item Experimental results: We provide an empirical analysis comparing well-known techniques for alleviating the visual domain gap, such as adversarial augmentation, domain-independent fusion, and maximum mean discrepancy matching, against our proposed approach, as well as an analysis of knowledge transfer between skills. \vspace{0.04in} \end{compactitem} We first introduce related work (Sec.~\ref{sec:related}), then we describe our proposed synthetic dataset generation process (Sec.~\ref{sec:dataset}), then we explain the motivation and details of our feature swapping method (Sec.~\ref{sec:fswap}), then we describe and discuss our experiments (Sec.~\ref{sec:exp}), and finally we conclude the paper (Sec.~\ref{sec:conclusion}). Our synthetic datasets and code are available at \href{https://simvqa.github.io}{https://simvqa.github.io}. \section{Related Work} \label{sec:related} Our work is related to both general efforts at improving visually-grounded question-answering models, and efforts targeting data augmentation for VQA and the use of synthetic data for other visual reasoning tasks.
\vspace{0.04in} \noindent{\bf Visual Question Answering (VQA).} There has been much progress on the task of VQA, where the goal is to answer a question conditioned on both an image and a question text input~\cite{ren2015exploring,antol2015vqa}. Progress in VQA is commonly measured using the {VQA~2.0} benchmark~\cite{goyal2017making}. Much of the work in this area focuses on exploring new architectural designs which can effectively model the interaction between the image and the text modalities, such as bilinear pooling~\cite{fukui2016multimodal}, bottom-up-top-down attention~\cite{anderson2018bottom}, neural module networks~\cite{hu2018explainable, hu2017learning}, and most recently transformer architectures~\cite{chen2020uniter,li2019unicoder,li2019visualbert}. However, most work assumes that models are trained on real image-question-answer triplets, and that they will be applied to settings with similar data distributions. Our paper instead investigates a setting where we leverage synthetic training data to learn certain skills so that the model generalizes to real images at test time. \vspace{0.04in} \noindent{\bf Data augmentation for VQA.} Data augmentation in VQA has often been studied within the context of model robustness. Chen~et~al.~\cite{chen2020counterfactual} select visual objects in images and words in questions which are critical for answer prediction and synthesize new samples by masking out critical visual regions or words. Whitehead~et~al.~\cite{whitehead2020learning} leverage existing linguistic resources to create word substitution rules for paraphrases, synonyms, and antonyms, which are then used to generate question perturbations for VQA. Gokhale~et~al.~\cite{gokhale2020mutant} explore VQA data synthesis via a combination of semantic manipulations on image content and questions.
However, previous work in this direction generates new samples via perturbations on top of the original real-image VQA dataset, which limits the diversity and range of variations of the generated samples. In contrast, our work leverages photo-realistic, multi-physics synthetic environments and is able to generate rich image-question pairs parameterized by scene/room types, camera view, object dimensions, object counts, and object orientations. In concurrent work, Gupta~et~al.~\cite{gupta2022swapmix} propose feature swapping for avoiding contextual bias in visual question answering. \vspace{0.04in} \noindent{\bf Synthetic data using simulated environments.} The value of leveraging simulated environments to augment training has been explored in various vision tasks, such as object detection, semantic segmentation, and pose estimation~\cite{tremblay2018falling, Salas2020TrainingWS, Zanella2021AutogeneratedWD, Meloni2021SAILenvLI, JohnsonRoberson2017DrivingIT,Le2021EDENMS}. Synthetic environments have also been applied to vision and language problems, such as embodied agent learning~\cite{Duan2021ASO, Savva2019HabitatAP, Kolve2017AI2THORAI, GarciaGarcia2018TheRA, Szot2021Habitat2T, Ehsani2021ManipulaTHORAF}, using platforms such as the Unreal Engine \cite{MartinezGonzalez2019UnrealROXAE, Qiu2017UnrealCVVW}, and using existing scenes and spaces manually created by specialized designers and content creators~\cite{UE4Arch}. Within the task of VQA, to train and diagnose model performance on compositional questions, synthetic datasets such as CLEVR~\cite{Johnson2017CLEVRAD} and CLEVRER~\cite{Yi2020CLEVRERCE} have been proposed. However, models trained on such synthetic datasets typically do not generalize to real images, as they were designed under a closed-world assumption.
In this paper, we explore approaches which leverage both real-image VQA data, for its richness in visual concepts and question types, and synthetic datasets generated from controllable, configurable 3D environments. We found that using this approach we can generate arbitrarily large amounts of high-quality data for type-specific questions. \section{Synthetic Dataset Generation} \label{sec:dataset} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Images/Fig_TDW_Pipeline.pdf} \vspace{-0.15in} \caption{Sample pipeline for generating VQA data using ThreeDWorld. a) Manually select scenes from a set of random camera walks. b) Select one of the generated scene graphs containing object information such as positions, number, color, and materials. c) Generate question-answer pairs following a template based on the scene graph. d) Finally, generate images by placing objects and modifying characteristics of the scene based on steps b and c.} \label{fig:TDW_pipeline} \vspace{-0.1in} \end{figure} First, we describe the generation of a VQA dataset by extending the existing Hypersim dataset~\cite{Roberts2020HypersimAP}~(section~\ref{extending_hypersim_for_vqa}). We name this dataset Hypersim-VQA, or H-VQA, for short. Then we explore the automatic creation of a VQA dataset using ThreeDWorld~\cite{Gan2020ThreeDWorldAP} (section~\ref{automatic_generation_with_tdw}). We name this dataset ThreeDWorld-VQA, or W-VQA, for short. \subsection{Extending Hypersim for VQA} \label{extending_hypersim_for_vqa} Hypersim~\cite{Roberts2020HypersimAP} is an existing 3D-graphics-generated dataset with high image quality that displays a diverse array of scenes and objects. Hypersim metadata includes the complete geometry information per scene, dense per-pixel semantic instance segmentations for every image, and instance-level NYU40 label annotations. We extend these data by manually annotating objects in all images given their dimensions and positions in the scene.
Additionally, we add questions and answers based on the number of appearances of an object in an image and their location with respect to other objects in the same frame. Since we have the 3D bounding box coordinates for each object in a scene, we can calculate the distance $d$, plunge $p$, and azimuth $a$ for two objects located at positions $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ as \begin{small} \begin{eqnarray} d &=& \sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2 -z_1)^2}, \\ p &=& \text{deg}\left(\arcsin \left(\frac{z_2-z_1}{d}\right)\right), \\ a &=& \text{deg}\left(\arctan \left(\frac{y_2-y_1}{x_2-x_1}\right)\right), \end{eqnarray} \end{small} where $d$ is the amount of space between two objects, $p$ is the angle of inclination measured from the horizontal axis formed by aligning two objects, and $a$ is the angle between two objects, measured clockwise with respect to north. Once we have the distance between all objects in a scene, we define clusters of objects and use the azimuth and plunge to define the position of one object with respect to the other objects in the same scene. Finally, we generate yes/no question and answer pairs based on the visibility of an object in a scene frame. We call this new set Hypersim-VQA. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Images/Fig_TDW_Samples.pdf} \vspace{-0.15in} \caption{In our VQA dataset generation pipeline, we can automatically manipulate the scene composition, object materials, and colors, allowing our grammar to generate more challenging questions and answers. } \vspace{-0.1in} \label{fig:TDW_samples} \end{figure} \subsection{Automatic VQA Generation} \label{automatic_generation_with_tdw} ThreeDWorld (TDW)~\cite{Gan2020ThreeDWorldAP} is a platform for interactive multi-modal physical simulation that we use to generate images. We follow the steps shown in Figure~\ref{fig:TDW_pipeline} to generate the image $I$, question $Q$, and answer $A$ triplets for our W-VQA dataset.
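Returning to the relative-position quantities defined in Sec.~\ref{extending_hypersim_for_vqa}, the distance, plunge, and azimuth formulas translate directly into code; a minimal sketch (we use \texttt{atan2} rather than the bare arctangent so that the azimuth is quadrant-correct, a detail the $\arctan$ form leaves ambiguous):

```python
import math

def relative_position(p1, p2):
    """Distance d, plunge p, and azimuth a between two distinct 3D object centers."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
    plunge = math.degrees(math.asin((z2 - z1) / d))
    # atan2 resolves the quadrant; for x2 > x1 it reduces to the
    # arctan((y2 - y1) / (x2 - x1)) form given in the text.
    azimuth = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return d, plunge, azimuth

d, p, a = relative_position((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
# d = 5.0, plunge = 0.0, azimuth ≈ 53.13 degrees
```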
In this section, we provide a detailed description of the synthetic data generation pipeline. \input{04-data_tdw} \vspace{-0.1in} \paragraph{Synthetic Image Generation} We placed multiple cameras with random configurations in the $44$ scenes, by randomly generating $x_c$, $y_c$, and $z_c$ coordinates for camera positions, and $\theta_c$ for the directions in which the cameras look. We then manually selected a set of camera configurations that have good views of an empty room, to later place objects in front of them. For example, Figure~\ref{fig:TDW_pipeline} illustrates a scene in which we placed a random table at the left of the image, a random small object (backpack or lamp) on the table, and another random small object on the ground, which follows the scene graph depicted in Step B of the figure. We also change the material of objects at this stage, and place a random number of objects in the image following the scene graph configuration. An example of how changing materials and colors visually affects a generated image is showcased in Figure~\ref{fig:TDW_samples}. We calculated the positions of these objects relative to the camera using \vspace{-0.05in} \begin{eqnarray} x &=& x_c + \, r \cos{\theta}, \\ y &=& y_0 + \, h, \\ z &=& z_c + \, r \sin{\theta}, \end{eqnarray} where $x$, $y$, and $z$ are the position coordinates of the placed object, $\theta$ is the direction of the object with respect to the camera, typically within $30$ degrees of $\theta_c$, $r$ is the distance between the object and the camera, and $y_0$ is the height coordinate of the floor level in the scene. In addition, since $h$ is an estimated height with respect to the object size, we waited 25 frames for these objects to fall to their natural stationary positions using the TDW physics engine.
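The placement equations above can be sketched as a small helper; the sampling of $\theta$ within $30$ degrees of $\theta_c$ follows the description, while the function name and argument order are illustrative:

```python
import math
import random

def place_object(cam_pos, theta_c, r, h, y_floor, max_offset_deg=30):
    """Place an object at distance r from the camera, roughly in its view."""
    x_c, _, z_c = cam_pos
    # Sample a direction within +/- max_offset_deg of where the camera looks.
    theta = theta_c + math.radians(random.uniform(-max_offset_deg, max_offset_deg))
    x = x_c + r * math.cos(theta)
    y = y_floor + h  # initial drop height; the physics engine settles the object
    z = z_c + r * math.sin(theta)
    return x, y, z

# With a zero angular offset, the object lands exactly r units from the
# camera along its viewing direction.
pos = place_object((1.0, 0.0, 1.0), 0.0, 2.0, 0.5, 0.0, max_offset_deg=0)
```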
Finally, TDW allows us to capture RGB images from the camera view along with per-pixel id and category semantic masks, which we later use to verify the number of objects in the image and to avoid object occlusions. \input{04-data_grammar} \begin{figure}[t!] \centering \fbox{\includegraphics[width=0.98\linewidth]{Images/fig-grammar.pdf}} \vspace{-0.1in} \caption{Template-based grammar we use to generate question/answer pairs given a generated image and its corresponding scene graph. } \vspace{-0.1in} \label{fig:TDW_grammar} \end{figure} \section{Feature Swapping} \label{sec:fswap} Given a triplet of images $I$, questions $Q$, and answers $A$, we have access to three datasets from different domains, where $(I_R, Q_R, A_R) \in R$ correspond to a Real-VQA dataset consisting of real images (we use {VQA~2.0}~\cite{goyal2017making}), $(I_H, Q_H, A_H) \in H$ correspond to the Hypersim-VQA dataset, and $(I_W, Q_W, A_W) \in W$ correspond to the TDW-VQA dataset. We assume that the images and their corresponding questions are inputs to a VQA model, and the objective is to predict as output the corresponding ground-truth answers. Our goal with feature swapping is to train a model that is generic and as domain invariant as possible. The motivation for feature swapping rests on the observation that in all three datasets we can find similar types of objects and configurations, but the appearance of the objects might differ. Feature swapping therefore randomly replaces, during training, the object-level features of some objects with the features of an equivalent object from another domain. Given an image $I$, we use a pre-trained model $G$ to extract the image region features $G_f(I) = \{f_1, f_2, ..., f_n\}$ along with their corresponding pseudo-labels $G_{sl}(I) = \{sl_1, sl_2, ..., sl_n\}$, which are labels predicted by Faster R-CNN corresponding to annotations from Visual Genome~\cite{Krishna2016VisualGC}.
Pseudo-labeling has proven effective in semi-supervised learning, where only a portion of the training data is annotated~\cite{PseudoLabel,arazo2019pseudolabeling,curriculum2021}. Since we have access to all images from the three sets, we create a dictionary $D_{\textit{type}}$ per dataset with $\textit{type} \in \{R, H, W\}$, where each key of a dictionary $D_\textit{type}$ is a pseudo-label $(sl_i)_\textit{type}$ and the corresponding value is the list of all region features $[(f_i)_\textit{type}, ..., (f_m)_\textit{type}]$ to which the model $G$ assigns the pseudo-label $(sl_i)_\textit{type}$. Once we have built the dictionaries $D_R, D_H, D_W$, we use them to swap features from one dataset to the other. During training, when sampling datapoints from $R$, we randomly select an image $I_R$ and get all its region features $G_f(I_R)$ and corresponding pseudo-labels $G_{sl}(I_R)$. Since we have access to all dictionaries, we look up the pseudo-labels that also exist in $D_H$ or $D_W$ (for simplicity, we denote this dictionary $D_S$). We then randomly select a fraction $\lambda$ of the features in $G_f(I_R)$ whose pseudo-labels appear in $D_S$, and replace them with matching features drawn from $D_S$. In all of our experiments, $\lambda = 0.2$. Figure~\ref{fig:Methods} shows pseudo-code for this algorithm. In VQA, the model takes as input pre-computed region features that come from a pretrained object detection model, but the prediction scores are often ignored. VQA models assume that these region features contain enough information for vision and language reasoning. Here we assume that the pseudo-labels associated with region features are good predictions; thus, we augment the feature space of input images from the real domain by perturbing it with synthetic-domain features that $G$ scores as similar.
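The swapping step described above can be sketched as follows; toy feature lists stand in for Faster R-CNN region features, and the exact dictionary interface is an assumption of this sketch:

```python
import random

def f_swap(features, pseudo_labels, D_S, lam=0.2):
    """Replace a fraction lam of an image's region features with
    same-pseudo-label features drawn from the synthetic-domain dictionary D_S."""
    # Indices whose pseudo-label also exists in the synthetic domain.
    candidates = [i for i, sl in enumerate(pseudo_labels) if sl in D_S]
    k = int(lam * len(features))
    swapped = list(features)  # copy; feature vectors are replaced whole
    for i in random.sample(candidates, min(k, len(candidates))):
        swapped[i] = random.choice(D_S[pseudo_labels[i]])
    return swapped

# Toy example: 10 region features, all pseudo-labeled "chair"; with
# lam = 0.2, exactly two of them are replaced by synthetic features.
real_feats = [[float(i)] for i in range(10)]
synthetic_dict = {"chair": [[-1.0]]}
mixed = f_swap(real_feats, ["chair"] * 10, synthetic_dict, lam=0.2)
```

If no pseudo-label of the image appears in the synthetic dictionary, the features pass through unchanged, so the perturbation degrades gracefully.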
In contrast to other methods that rely on adversarial augmentation, our approach requires no additional training or adaptation procedure. Importantly, since we perform feature replacements in latent representations, we also bypass the need to perform anything resembling style transfer in the input pixel domain. \section{Experiments} \label{sec:exp} First, we describe our experimental settings~(Sec.~\ref{dataset_settings}), then we describe our data augmentation experiments~(Sec.~\ref{sec:aug}), then we describe our experiments using various domain alignment techniques including \fswap{}~(Sec.~\ref{sec:domain-alignment}), and finally we show how certain types of questions beyond counting can influence the accuracy of counting questions~(Sec.~\ref{sec:questionshelp}). \subsection{Experimental Settings} \label{dataset_settings} \paragraph{Datasets.} \noindent \textbf{Real-VQA.} Following the skill-concept separation for compositional analysis of Whitehead~et~al.~\cite{Whitehead2021SeparatingSA}, we take the VQA 2.0 dataset~\cite{goyal2017making} and separate the counting questions for a detailed analysis of how synthetic data may affect model performance. For training, we create two different splits: R-VQA$_C$, which corresponds to the training set with only counting questions, and R-VQA$_{NC}$, which corresponds to the VQA 2.0 training set without counting questions. R-VQA$_C$ contains $48,431$ datapoints, and R-VQA$_{NC}$ contains $378,018$ datapoints. For testing, we use the standard {VQA~2.0} validation set and report our results on {\it Numeric} questions (where ${\sim}85\%$ of the questions correspond to counting questions), {\it Others}, and {\it Overall} for the general accuracy. \textbf{Hypersim-VQA.} The Hypersim dataset~\cite{Roberts2020HypersimAP} comes with annotations corresponding to the NYU40 labels.
Additionally, we manually annotate $460$ scenes and add $1,250$ extra labels for objects whose semantic masks have generic annotations, such as \textit{otherstructure}, \textit{otherfurniture} and \textit{otherprop}. We then generate $254,174$ counting questions for $41,551$ images. In our experiments, we use a subset of $20,000$ questions that only contain NYU40 labels (excluding \textit{otherstructure}, \textit{otherfurniture} and \textit{otherprop}) and include $10,000$ questions randomly selected from the extra annotated labels. We also generate $40,000$ yes/no questions probing whether an object is present in an image, following Section~\ref{extending_hypersim_for_vqa}. \textbf{TDW-VQA.} We generate $33,264$ counting-related datapoints and add $30,000$ yes/no questions to the same images. Additionally, we generate $12,000$ extra images and add color and material questions following Section~\ref{automatic_generation_with_tdw}, for a total of $87,264$ automatically generated datapoints using the ThreeDWorld simulation platform. \vspace{-0.1in} \paragraph{Base VQA model.} Multi-modal transformer-based architectures currently hold the state-of-the-art results on the VQA Challenge~\cite{Yu2019DeepMC,chen2020uniter,li2019unicoder}. For our experiments, we select the top-performing model without large-scale pre-training~\cite{Yu2019DeepMC} as our base model. Our base code follows the hyper-parameter selection included in their publicly available implementation\footnote{https://github.com/MILVLG/mcan-vqa}. \vspace{-0.1in} \subsubsection{Image Features.} We experiment with three types of input image features: \vspace{0.05in} \noindent \textbf{Region Features.} We extract intermediate features from a Faster R-CNN model~\cite{Ren2015FasterRT} with ResNet-101 as the backbone, pretrained on the Visual Genome dataset. Following~\cite{Anderson2018BottomUpAT, Yu2019DeepMC}, we obtain a dynamic number of objects $m \in [10, 100]$ by setting a confidence threshold.
If the number of objects is lower than $100$, we use zero-padding to fill the final matrix of shape $100 \times 2048$. \vspace{0.05in} \noindent \textbf{Grid Features from CLIP ResNet-50.} Following \cite{Jiang2020InDO, Shen2021HowMC}, we use the CLIP model~\cite{Radford2021LearningTV} with the ResNet-50 visual backbone and extract the features from the RoI Pooling layer without any additional fine-tuning. With this approach, we can extract an image representation matrix of size $558 \times 2048$. \vspace{0.05in} \noindent \textbf{Grid Features from CLIP ViT-B.} We also divide the raw input images into grids of $2 \times 2$, $4 \times 4$ and $8 \times 8$, and use them as inputs for the CLIP ViT-B model~\cite{Radford2021LearningTV}. We then aggregate the outputs and use them as an image representation matrix of size $85 \times 512$. \subsubsection{Textual Features.} Following~\cite{Yu2019DeepMC}, we tokenize the input questions into words and transform them into feature vectors using pre-trained 300-dimensional GloVe word embeddings~\cite{Pennington2014GloVeGV}. These embeddings are passed through a one-layer LSTM~\cite{Hochreiter1997LongSM}. We then use all the output features for all corresponding words. \vspace{-0.1in} \subsubsection{Domain alignment methods.} For comparisons to earlier work, we consider the following domain alignment methods that have been proposed in the past either for VQA or for other similar visual recognition problems.
\begin{table}[t] \newcolumntype{Y}{>{\raggedright\arraybackslash}X} \newcolumntype{Z}{>{\centering\arraybackslash}X} \newcolumntype{W}[1]{>{\centering\arraybackslash\hspace{0pt}}p{#1}} \centering \footnotesize \setlength\tabcolsep{0.1pt} \renewcommand{\arraystretch}{1.0} \begin{tabularx}{0.98\columnwidth}{l c c c W{0.35in} c W{0.35in} W{0.35in} c Y} \toprule {\multirow{3}{*}{\bf Feature backbone}} &~~~& {\multirow{3}{*}{\bf Feature size}} &~~~& \multicolumn{4}{c}{\bf Training data} &~~~& {\bf R-VQA}\\ &&&& {\bf Real}&& \multicolumn{2}{c}{\bf Synthetic} && {\bf Accuracy}\\ \cmidrule{5-5}\cmidrule{7-8}\cmidrule{10-10} &&&& {\scriptsize \textbf{R}} && {\scriptsize \textbf{H}} & {\scriptsize \textbf{W}} && {\it Numeric}\\ \midrule {\multirow{3}{*}{FRCNN – RN101}} && {\multirow{3}{*}{100$\times$2048}} && \cellcolor{gray!20} \checkmark &\cellcolor{gray!20}&\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 42.73\\ &&&& \checkmark && \checkmark & && 44.70\Rise{1.97}\\ &&&& \checkmark && & \checkmark && 42.86\rise{0.13}\\ \cmidrule{1-10} {\multirow{3}{*}{CLIP - RN50}} && {\multirow{3}{*}{558$\times$2048}} && \cellcolor{gray!20} \checkmark &\cellcolor{gray!20}&\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 42.83\\ &&&& \checkmark && \checkmark & && 43.61\rise{0.78}\\ &&&& \checkmark && & \checkmark && 42.91\rise{0.08}\\ \cmidrule{1-10} {\multirow{3}{*}{CLIP – ViT-B }} && {\multirow{3}{*}{85$\times$512}} && \cellcolor{gray!20}\checkmark &\cellcolor{gray!20}&\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 41.93\\ &&&& \checkmark && \checkmark & && 43.98\Rise{2.05}\\ &&&& \checkmark && & \checkmark && 41.35\drop{0.58}\\ \bottomrule \end{tabularx} \caption{Data augmentation using synthetic data improves Real-VQA performance (R-VQA) on numeric questions, especially when using Hypersim-VQA (H). 
In all these experiments only counting questions were used for training from both the existing Real-VQA dataset, VQA$_{C}$ (\textbf{R}) and our synthetic dataset variants: Hypersim-VQA (\textbf{H}) and TDW-VQA (\textbf{W}).} \label{tab:countingonly} \vspace{-0.05in} \end{table} \begin{table*}[t] \newcolumntype{Y}{>{\raggedright\arraybackslash}X} \newcolumntype{Z}{>{\centering\arraybackslash}X} \centering \footnotesize \setlength\tabcolsep{1pt} \renewcommand{\arraystretch}{1.1} \begin{tabularx}{0.78\textwidth}{l c c c ZcZZ c YYY} \toprule {\multirow{3}{*}{\bf Feature backbone}} &~~~& {\multirow{3}{*}{\bf Feature size}} &~~~& \multicolumn{4}{c}{\bf Training data} &~~~& \multicolumn{3}{c}{\bf\multirow{2}{*}{R-VQA Accuracy}}\\ &&&& {\bf Real}&& \multicolumn{2}{c}{\bf Synthetic} &&&\\ \cmidrule{5-5}\cmidrule{7-8}\cmidrule{10-12} &&&& {\scriptsize R-VQA$_{NC}$} && {\scriptsize H-VQA$_C$} & {\scriptsize W-VQA$_C$} && {\it Numeric} & {\it Others} & {\it Overall} \\ \midrule {\multirow{4}{*}{FasterRCNN – RN101}} && {\multirow{4}{*}{100$\times$2048}} && \cellcolor{gray!20} \checkmark &\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 6.08 &\cellcolor{gray!20} 68.94 &\cellcolor{gray!20} 60.69\\ &&&& \checkmark && \checkmark & && 15.99\Rise{9.91} & 68.97 & 62.02\Rise{1.33}\\ &&&& \checkmark && & \checkmark && 21.18\Rise{15.1} & 68.91 & 62.65\Rise{1.96} \\ &&&& \checkmark && \checkmark & \checkmark && 24.96\Rise{18.8} & 68.91 & 63.14\Rise{2.45} \\ \cmidrule{1-12} {\multirow{4}{*}{CLIP - RN50}} && {\multirow{4}{*}{558$\times$2048}} &&\cellcolor{gray!20} \checkmark &\cellcolor{gray!20}&\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 4.55 &\cellcolor{gray!20} 69.70 &\cellcolor{gray!20} 61.15\\ &&&& \checkmark && \checkmark & && 10.24\Rise{5.69} & 69.63 & 61.84\rise{0.69}\\ &&&& \checkmark && & \checkmark && 14.67\Rise{10.12} & 69.45 & 61.76\rise{0.61}\\ &&&& \checkmark && \checkmark & \checkmark 
&& 17.81\Rise{13.26} & 69.81 & 62.98\Rise{1.83} \\ \cmidrule{1-12} {\multirow{4}{*}{CLIP – ViT-B }} && {\multirow{4}{*}{85$\times$512}} &&\cellcolor{gray!20} \checkmark &\cellcolor{gray!20}&\cellcolor{gray!20} &\cellcolor{gray!20} &\cellcolor{gray!20}&\cellcolor{gray!20} 5.06 &\cellcolor{gray!20} 70.12 &\cellcolor{gray!20} 61.58\\ &&&& \checkmark && \checkmark & && 14.06\Rise{9.00} & 70.06 & 62.71\Rise{1.13}\\ &&&& \checkmark && & \checkmark && 17.25\Rise{12.19} & 70.06 & 63.12\Rise{1.54}\\ &&&& \checkmark && \checkmark & \checkmark && 20.72\Rise{15.66} & 70.05 & 63.57\Rise{1.99}\\ \bottomrule \end{tabularx} \vspace{-0.05in} \caption{Learning counting skill on real data using data augmentation with our synthetic datasets. In all these experiments only counting questions were used from our synthetic dataset variants: Hypersim-VQA (H-VQA$_{C}$) and TDW-VQA (W-VQA$_{C}$).} \vspace{-0.1in} \label{tab:nocounting} \end{table*} \vspace{0.04in} \noindent\textbf{Adversarial adaptation.} This approach is a modification of the unsupervised domain adaptation of Ganin~et~al.~\cite{Ganin2015UnsupervisedDA}. Since our goal is to minimize the feature gap between real $I_R$ and synthetic $(I_W \cup I_H)$ images, instead of using the question or answer ground truth to predict the class labels (as the label-predictor block), we use an auto-encoder $D(E(\cdot))$ to reconstruct the input features $X$ of the images, and a domain classification model $DC$ that is trained to distinguish the domain of each input. This domain classifier is then connected to the underlying input features $X$, but its gradients are multiplied by a negative constant during training. This gradient reversal layer encourages the features of both domains to remain indistinguishable.
This process, commonly known as adversarial domain adaptation, is optimized in an alternating fashion as follows: \begin{small} \begin{equation} L_D = \sum {(\log(DC(X)) + \log(1 - \hat{DC}(X)))}, \end{equation} \begin{equation} L_R = \sum (D(E(X))-\hat{D}(\hat{E}(X)))^2, \end{equation} \begin{equation} L_{total} = L_R + \alpha L_D, \end{equation} \end{small} where $L_{total}$ is the loss function to be optimized, which encourages a good reconstruction while discouraging the features from encoding any domain-specific information. \vspace{0.04in} \noindent \textbf{Distribution alignment adaptation.} In this approach, we use $N$ auto-encoder architectures $D(E(\cdot))$ corresponding to the datasets we want to align (e.g., VQA 2.0 as $R$, TDW as $W$, and Hypersim as $H$), and then compute the Maximum Mean Discrepancy (MMD) loss~\cite{MMDgretton12a} between the intermediate layers of these models. By doing this, the real and synthetic feature distributions are encouraged to move closer, similar to the domain adaptation approaches of~\cite{Tzeng2014DeepDC, Saito2018MaximumCD}. This is performed as follows: \begin{small} \begin{equation} L_{D\diamond} = \text{MMD}(E(X_R), \hat{E}(X_\diamond)), \end{equation} \begin{equation} L_{R\diamond} = \sum (D(E(X_{\diamond}))-\hat{D}(\hat{E}(X_{\diamond})))^2, \end{equation} \begin{equation} L_{total} = L_{R} + \alpha L_{DW} + \beta L_{DH}, \end{equation} \end{small} \\ where $\diamond$ is replaced by $W$ for the TDW features or $H$ for our extended Hypersim dataset features, and $R$ denotes the Real-VQA features. Unlike the adversarial domain adaptation approach, here the adversary is not a classifier but a loss that tries to match the distribution of the features across each pair of domains. \vspace{0.06in} \noindent \textbf{Domain independent fusion.} Inspired by the work of Wang~et~al.~\cite{Wang2020TowardsFI} on bias mitigation, we perform domain independent training, where we treat the real and synthetic output spaces as separate.
To do so, we create a new set of classes that contains tokens from the synthetic set only, and extend the answer token space of the real set with these new tokens, as shown in the third method of Figure~\ref{fig:Methods}. This approach can be viewed as two classifiers with a shared backbone that has access to the decision boundary of both the real and synthetic domains. \begin{table*}[ht] \newcolumntype{Y}{>{\raggedright\arraybackslash}X} \newcolumntype{Z}{>{\centering\arraybackslash}X} \centering \footnotesize \setlength\tabcolsep{1pt} \renewcommand{\arraystretch}{1.2} \begin{tabularx}{\textwidth}{l l c YYY c YYY c YYY} \toprule {\multirow{2}{*}{\bf Data}} & {\multirow{2}{*}{\bf Method}} &~~~& \multicolumn{3}{@{\hskip 0.22in}c}{\bf +$\textbf{0\%}$ R-VQA$_\textbf{C}$} &~~~& \multicolumn{3}{@{\hskip 0.15in}c}{\bf +$\textbf{1\%}$ R-VQA$_\textbf{C}$} &~~~& \multicolumn{3}{@{\hskip 0.15in}c}{\bf +$\textbf{10\%}$ R-VQA$_\textbf{C}$} \\ \cmidrule{4-6}\cmidrule{8-10}\cmidrule{12-14} &&& \textit{Numeric} & \textit{Others} & \textit{Overall} && \textit{Numeric} & \textit{Others} & \textit{Overall} && \textit{Numeric} & \textit{Others} & \textit{Overall} \\ \midrule \rowcolor{gray!20} H-VQA$_C$ & Simple Augmentation && 15.99 & 68.97 & 62.02 && 29.64 & 68.45 & 63.34 && 35.72 & 68.61 & 64.29\\ H-VQA$_C$ & Adversarial && 16.07\rise{0.08} & 66.01\Drop{2.96} & 59.46\Drop{2.56} && 28.31\Drop{1.33} & 66.89\Drop{1.56} & 61.83\Drop{1.51} && 35.01\drop{0.71} & 66.91\Drop{1.7} & 62.71\Drop{1.58} \\ H-VQA$_C$ & MMD && 24.79\Rise{8.80}& 67.13\Drop{1.84} & 61.58\drop{0.44} && 31.61\Rise{1.97} & 67.78\drop{0.67} & 63.04\drop{0.30} && 38.87\Rise{3.15} & 68.36\drop{0.25} & 64.49\rise{0.2} \\ H-VQA$_C$ & Domain Independent && 22.87\Rise{6.88} & 68.65\drop{0.32} & 62.64\rise{0.62} && 29.05\drop{0.59} & 68.73\rise{0.28} & 63.52\rise{0.18} && 37.67\Rise{1.95} & 69.34\rise{0.73} & 65.17\rise{0.88}\\ \midrule H-VQA$_C$ & Feature Swapping {\footnotesize (\fswap{})} && 23.38\Rise{7.39} & 69.07\rise{0.10} &
63.07\Rise{1.05} && 31.64\Rise{2.00} & 69.08\rise{0.63} & 64.15\rise{0.81} && 39.71\Rise{3.99} & 69.13\rise{0.52} & 65.26\rise{0.97}\\ \midrule \rowcolor{gray!20} W-VQA$_C$ & Simple Augmentation && 21.18 & 68.91 & 62.65 && 31.18 & 68.97 & 64.01 && 38.47 & 68.86 & 64.87 \\ \midrule W-VQA$_C$ & Feature Swapping {\footnotesize (\fswap{})} && 26.84\Rise{5.66} & 68.89\drop{0.02} & 63.67\Rise{1.02} && 31.21\rise{0.03} & 68.82\drop{0.15} & 63.89\drop{0.12} && 38.54\rise{0.07} & 68.97\rise{0.11} & 64.97\rise{0.10}\\ \bottomrule \end{tabularx} \caption{Counting skill learning under different low-regime settings for Real VQA counting questions (R-VQA$_C$). All models share the basic training set: VQA$_{NC}$ (the non-counting subset of VQA v2 training data). } \label{tab:alignment} \vspace{-0.1in} \end{table*} \subsection{Data augmentation experiments} \label{sec:aug} First, we evaluate the effect of augmenting Real-VQA data with the proposed synthetic datasets. We are interested in testing whether the ability of VQA models to answer counting questions on synthetic data can improve the counting performance on real VQA data. We experiment with two different settings for data augmentation. The first setting tests a scenario where real and synthetic data contain the same question type (in this case, counting questions). Table~\ref{tab:countingonly} shows that, under different feature backbones, the performance on counting questions on real data is improved when R-VQA$_{C}$ is augmented with the proposed H-VQA dataset. The second setting targets a more challenging case, where the real data does not overlap with the synthetic data in terms of question types. Specifically, in this setting, for real data, we use R-VQA$_{NC}$, which does not contain counting questions. So the model needs to learn the skill for counting questions from the augmented synthetic data alone.
Table~\ref{tab:nocounting} shows that, for all feature backbones and different combinations of synthetic data augmentations, the model learns to answer counting questions. In this case, augmenting with ThreeDWorld-VQA seems to outperform augmenting with Hypersim-VQA, perhaps due to the greater controllability of the generated scenes. Lastly, the best results are obtained by data augmentation using both synthetic datasets. \subsection{Domain alignment.} \label{sec:domain-alignment} As demonstrated in Section~\ref{sec:aug}, counting skills learned from our synthetic datasets can effectively transfer to real VQA data, even when the real training data does not contain counting questions. Here, we explore to what extent skill learning using synthetic data can be helped by explicit alignment of visual features between the two domains. The real data used in this experiment includes R-VQA$_{NC}$, as well as R-VQA$_{C}$ under three different regimes ($0\%$, $1\%$, $10\%$). Table~\ref{tab:alignment} summarizes the experimental results when using different domain alignment approaches. Compared to the baseline method of simple data augmentation, we do not observe an overall improvement with Adversarial Adaptation. Compared to Domain Independent, MMD seems to generate more consistent gains on counting questions across various regimes for R-VQA$_C$; however, this gain is also accompanied by decreased performance on the \textit{Others} split and sometimes on the overall evaluation data. Finally, the results suggest that Feature Swapping outperforms the baseline and the other domain alignment methods, producing consistent gains on counting questions as well as on overall accuracy, across different regimes of VQA$_C$. \subsection{Effect of question distribution.} \label{sec:questionshelp} In previous experiments, we focused on augmenting the real dataset with synthetic data of a specific skill type.
In this section, we experiment with increased diversity of questions on synthetic data, and how it may affect the performance on different subsets of the real data. As shown in Table~\ref{tab:dist}, on the \textit{Others} category, we observe increased performance when adding more question types with the TDW-VQA dataset but not with Hypersim-VQA, likely due to the richer object repository and the more controllable environment of TDW. Interestingly, for both datasets, adding other question types results in a noticeable gain on the counting questions. We hypothesize that these additional questions help with visual concept learning (on color, object existence, etc.), which consequently benefits counting skill learning, since visual concept learning is a prerequisite for answering counting questions. \begin{table}[ht] \centering \small \begin{tabularx}{\columnwidth}{lccc} \toprule \multirow{2}{*}{\textbf{Training data: R-VQA$_{\textbf{NC}}$}} & \multicolumn{3}{c}{\bf R-VQA Accuracy} \\ \cmidrule{2-4} & {\it Numeric} & {\it Others} & {\it Overall} \\ \midrule H & 15.99 & 68.97 & 62.02 \\ H + Yes/No Questions & 22.11 & 68.38 & 63.17\\ \midrule W & 21.18 & 68.91 & 62.65 \\ W + Yes/No Questions & 25.43 & 70.10 & 64.24\\ W + Color Questions & 26.98 & 70.24 & 64.56\\ \bottomrule \end{tabularx} \caption{Effect of the distribution of synthetic data. We add other type-specific questions to our synthetic data and evaluate its effect on real data.} \label{tab:dist} \vspace{-0.1in} \end{table} \section{Supplementary Material} First, we show a list of hyper-parameters and implementation details for all of our adaptation methods in Section~\ref{sec:hyper_param}. Then we show some samples of images and their corresponding per-pixel masks, along with the verification algorithm for counting and occlusions in Section~\ref{sec:id_cat_masks}.
Then we show some graph samples from our pool of manually designed scenes for the W-VQA dataset and describe their functionality for our automatic triplet (IQA) generation in Section~\ref{sec:scene_graphs_samples}. Finally, in Sections~\ref{sec:wvqa_samples} and \ref{sec:hvqa_samples} we show some samples from W-VQA and H-VQA, randomly selected from a diverse set of scenes with different backgrounds, camera positions, and illumination. \subsection{Hyper-parameter Selection} \label{sec:hyper_param} The following are the hyper-parameter selections for all of our algorithms: $lr$ refers to the learning rate, $E$ to the number of training epochs, $O$ to the optimizer type, $O_{wd}$ is the optimizer weight decay, $O_\epsilon$ is the term added to the denominator to improve numerical stability, and $O_\beta$ is a tuple of coefficients used for computing running averages of the gradient and its square. For the Adversarial and MMD methods, the auto-encoder network ($AE$) is trained separately, in a two-step format following the Two-stage DA of Zhang~et~al.~\cite{Zhang2021DomainrobustVW}; in both cases the first number in $E$ refers to the training epoch parameter for the $AE$. For Domain Independent, $di_{\text{tokens}}$ is the additional output we use for the synthetic answer tokens.
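The adaptive weight $\alpha$ listed for the Adversarial method follows the schedule of Ganin~et~al.~\cite{Ganin2015UnsupervisedDA}, ramping from 0 towards 1 as training progresses; a minimal sketch (assuming $p$ denotes the training progress in $[0,1]$ and the constant $10$ from the hyper-parameter table):

```python
import math

def grl_alpha(p, gamma=10.0):
    """Gradient-reversal weight: alpha = 2 / (1 + exp(-gamma * p)) - 1.

    p is the training progress in [0, 1]; alpha ramps smoothly from 0
    towards 1, phasing in the (reversed) domain-classifier gradient.
    """
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

Starting with a small $\alpha$ keeps the noisy early domain-classifier signal from dominating the reconstruction loss.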
\begin{table}[h] \vspace{-0.05in} \setlength{\tabcolsep}{6.4pt} \begin{center} \scalebox{0.75}{ \begin{tabular}{l|l?l|l} \multicolumn{2}{c}{Adversarial} & \multicolumn{2}{c}{MMD} \\ \hline $lr = 15e-4$ & $E = 100+13$ & $lr = 1e-3$ & $E = 150+13$ \\ $O_{wd} = 1e-6$ & $O =$ Adam & $O_{wd} = 1e-4$ & $O =$ Adam \\ $O_\epsilon=1e-4$ & $O_\beta=(0.8, 0.8)$ & $O_\epsilon=1e-4$ & $O_\beta=(0.8, 0.8)$ \\ $\alpha = \frac{2}{1 + \exp(-10 p)} - 1$ & & $\alpha = 0.4$ & $\beta = 0.6$ \\ \midrule \multicolumn{2}{c}{Domain Independent} & \multicolumn{2}{c}{F-SWAP} \\ \hline $lr = 15e-4$ & $E = 13$ & $lr = 15e-4$ & $E = 13$ \\ $O_{wd} = 0.2$ & $O =$ Adam & $O_{wd} = 1e-1$ & $O =$ Adam \\ $O_\epsilon=1e-9$ & $O_\beta=(0.9, 0.9)$ & $O_\epsilon=1e-9$ & $O_\beta=(0.9, 0.98)$ \\ $di_{\text{tokens}} = 100$ & & $\beta=1.$ & $\lambda=0.2$ \\ \end{tabular} } \end{center} \vspace{-0.15in} \caption{Hyper-parameter selection details for all methods.} \label{tab:class_results1} \vspace{-0.15in} \end{table} \subsection{RGB and Mask Samples} \label{sec:id_cat_masks} ThreeDWorld (TDW) \cite{Gan2020ThreeDWorldAP} allows capturing the RGB images from the camera view along with the id and category per-pixel semantic masks, which we later use to verify the number of objects in the image and avoid object occlusions. Figure~\ref{fig:imgs_and_masks} shows some samples we randomly select from our generated W-VQA set. The first column corresponds to the RGB image, and the second and third columns correspond to the category and id masks, respectively. We verify whether an object overlaps another and assess the object counts by computing the intersection over union. \begin{figure}[h!] \centering \includegraphics[width=0.96\linewidth]{Images/RandomSamples_Masks.pdf} \vspace{-0.05in} \caption{Random samples from the images we generate using TDW along with their category masks (second row) and id masks (third row).
} \vspace{-0.05in} \label{fig:imgs_and_masks} \end{figure} \subsection{Scene-Graph Samples} \label{sec:scene_graphs_samples} Let $E$ denote the set of scene entities and consider the set of binary relations $R$. Then a scene graph $SG \subseteq E \times R \times E$ is a collection of ordered triplets $(o, p, o')$ of object, position, and object. For example, as shown in the first sample in Figure~\ref{fig:scene_graphs}, with $A$=lamp, $B$=table, $C$=backpack, the triplet $(A, position, B)$ indicates that a \textcolor{red}{lamp} is \textcolor{orange}{on top of} the \textcolor{violet}{table}, or the \textcolor{violet}{table} is \textcolor{Mahogany}{under} the \textcolor{red}{lamp}. Similarly, the triplet $(B, position, C)$ indicates that the \textcolor{blue}{backpack} is \textcolor{orange}{to the left of} the \textcolor{violet}{table}, or the \textcolor{violet}{table} is \textcolor{Mahogany}{to the right of} the \textcolor{blue}{backpack}. In this way, each relationship yields at least two possible positions, $p$ and $p^{-1}$, e.g., $p =$ left and $p^{-1} =$ right. When sampling from these graphs, each node in $E$ can also be assigned three different attributes: the number of objects to appear in the same scene, $n = randrange(20)$; the color; and the material type, the latter two selected from the list of available materials and colors in the set of Records in TDW~\footnote{\url{https://github.com/threedworld-mit/tdw/blob/master/Documentation/misc_frontend/materials_textures_colors.md}}. \begin{figure}[h] \centering \includegraphics[width=0.96\linewidth]{Images/Fig_SceneGraphs.pdf} \vspace{-0.05in} \caption{Some of the scene graphs designed for our automated synthetic dataset generation. While generating images, we select one graph and randomly select the number of objects per position $node := [A, B, C, D]$, its color, and materials. Then we use the grammar introduced in Section~\ref{automatic_generation_with_tdw} to generate the questions and corresponding answers.
} \vspace{-0.05in} \label{fig:scene_graphs} \end{figure} \onecolumn \subsection{W-VQA Generated Samples} \label{sec:wvqa_samples} We show some random samples we generate for our W-VQA dataset in Figure~\ref{fig:fig_W_VQA_extra}, following Section~\ref{automatic_generation_with_tdw}. \begin{figure*}[h!] \centering \includegraphics[width=0.75\linewidth]{Images/Fig_TDW_extra_samples.pdf} \vspace{-0.05in} \caption{Additional samples of our W-VQA dataset. The first row showcases simple configurations using the same background. The second row shows diverse compositions using indoor scenes. The third row shows compositions of challenging counting questions. The fourth row shows outdoor objects and scenes. Finally, the fifth row shows material- and color-related questions using the same object in different camera positions. Best viewed in color.} \label{fig:fig_W_VQA_extra} \end{figure*} \subsection{H-VQA Generated Samples} \label{sec:hvqa_samples} We show some random samples we generate for our H-VQA dataset in Figure~\ref{fig:fig_H_VQA_extra}. \begin{figure*}[h!] \centering \captionsetup{justification=centering,margin=3cm} \includegraphics[width=0.77\linewidth]{Images/Fig_HYPER_extra_samples.pdf} \vspace{-0.05in} \caption{Additional samples of our H-VQA dataset. We generate questions and answers from manual and existing semantic annotations from Hypersim~\cite{Roberts2020HypersimAP}. Best viewed in color. } \label{fig:fig_H_VQA_extra} \end{figure*}
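The triplet-based sampling behind the scene graphs of Section~\ref{sec:scene_graphs_samples} can be sketched as follows (a minimal illustration; the object names, the inverse-relation map, and the question templates are our own assumptions, not the exact grammar of Section~\ref{automatic_generation_with_tdw}):

```python
import random

# Each scene graph is a set of (object, position, object) triplets;
# every relation p has an inverse p^{-1} yielding the mirrored phrasing.
INVERSE = {"on top of": "under", "left of": "right of"}

def sample_scene(graph, seed=0):
    """Assign each node a random object count in [0, 20), as in n = randrange(20)."""
    rng = random.Random(seed)
    nodes = {o for (a, _, b) in graph for o in (a, b)}
    return {o: rng.randrange(20) for o in sorted(nodes)}

def questions(graph, counts):
    """Emit one counting question/answer pair per triplet, in both directions."""
    qa = []
    for a, p, b in graph:
        qa.append((f"How many {a}s are {p} the {b}?", counts[a]))
        qa.append((f"How many {b}s are {INVERSE[p]} the {a}?", counts[b]))
    return qa
```

For instance, the graph `[("lamp", "on top of", "table")]` yields both the "lamp on top of table" and the mirrored "table under the lamp" questions, with answers read off the sampled counts.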
\section{Introduction} In the interval constrained 3-coloring problem, we are given a set $\I$ of intervals defined on $[n] := \{1,\ldots,n\}$ and a \emph{requirement} function $r : \I \to \setZ^3_{\ge 0}$, which maps each interval to a triple of non-negative integers. The objective is to determine a coloring $\chi : [n] \to \{1,2,3\}$ such that each interval gets the proper colors as specified by the requirements, i.e.~$\sum_{i \in I} e_{\chi(i)} = r(I)$, where $e_1,e_2,e_3$ are the three unit vectors of $\setZ^3$. This problem is motivated by an application in biochemistry to investigate the tertiary structure of proteins, as shown in the following illustration. \begin{figure}[hb] \centering \includegraphics[height=30mm,page=1]{figures.pdf} \caption{Coloring of the residues of a protein chain according to their exchange rates.} \label{fig:bio} \end{figure} More precisely, in Hydrogen-Deuterium-Exchange (HDX) experiments proteins are put into a solvent of heavy water ($D_2O$) for a certain time, after which the amount of residual hydrogen atoms that have exchanged with deuterium atoms is measured~\cite{LEHMP02}. Performing this experiment for several time steps, one can determine the exchange rates of the residues. These exchange rates indicate the solvent accessibility of the residues and hence provide information about the spatial structure of the protein. Mass spectroscopy is one of the methods for measuring these exchange rates. To this end, the proteins are digested, i.e.~cut into parts, which can be considered intervals of the protein chain, and the mass uptake of each interval is measured. Thereby only bulk information about each interval can be obtained. Since there is not just one protein in the solvent but millions, and they are not always cut in the same manner, we have this bulk information on overlapping fragments.
That is, we are given the number of slow, medium, and fast exchanging residues for each of these intervals, and our goal is to find a feasible assignment of these three exchange rates to residues such that for each interval the numbers match the bulk information. Though the interval constrained 3-coloring problem is motivated by a particular application, its mathematical abstraction appears quite simple and ostensibly more general. In terms of integer linear programming, the problem can be equivalently formulated as follows. Given a matrix $A \in \{0,1\}^{m \times n}$ with the \emph{row-wise consecutive-ones property} and three vectors $b_{1,2,3} \in \setZ_{\ge 0}^m$, the constraints \begin{equation} \label{eq} \begin{pmatrix} A & 0 & 0 \\ 0 & A & 0 \\ 0 & 0 & A \\ I & I & I \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ 1 \end{pmatrix} \end{equation} have a binary solution, i.e.~$x_{1,2,3} \in \{0,1\}^n$, if and only if the corresponding interval constrained 3-coloring problem has a feasible solution. We may assume w.l.o.g.~that the requirements are consistent with the interval lengths, i.e.~$A\cdot 1 = b_1 + b_2 + b_3$, since otherwise we can easily reject the instance as infeasible. Hence, we could treat $x_3$ as slack variables and reformulate the constraints as \begin{equation}\label{eq:packing} A x_1 = b_1, \qquad A x_2 = b_2, \qquad x_1 + x_2 \leq 1. \end{equation} It is known that if the matrix $A$ has the \emph{column-wise} consecutive-ones property (instead of \emph{row-wise}), then there is a reduction from the two-commodity integral flow problem, which has been proven to be NP-complete in~\cite{EIS76}. However, the NP-completeness w.r.t.~row-wise consecutive-ones matrices has been an open problem in a series of papers, as outlined in the following subsection. \subsection{Related Work} The problem of assigning exchange rates to single residues was first considered in~\cite{SAC08}.
In that paper, the authors presented a branch-and-bound framework for solving the corresponding coloring problem with $k$ color classes. They showed that there is a combinatorial polynomial time algorithm for the case of $k=2$. Moreover, they asked the question about the complexity for $k > 2$. In~\cite{SWAT08}, the problem has been called \emph{interval constrained coloring}. It has been shown that the problem is NP-hard if the parameter $k$ is part of the input. Moreover, approximation algorithms have been presented that allow violations of the requirements: a quasi-polynomial time algorithm that computes a solution in which all constraints are $(1+\varepsilon)$-satisfied, and a polynomial time rounding scheme, based on a technique introduced in~\cite{GKPS06}, which satisfies every requirement within $\pm 1$. The latter implies that if the LP relaxation of~\eqref{eq} is feasible, then there is a coloring satisfying at least $\tfrac{5}{16}$ of the requirements. APX-hardness of finding the maximum number of simultaneously satisfiable intervals has been shown in~\cite{Can08} for $k \ge 2$, provided that intervals may be counted with multiplicities. But still, the question about the complexity of the decision problem for fixed $k \geq 3$ has been left open. In~\cite{KNU09}, several fixed parameter tractability results have been given. However, the authors state that they do not know whether the problem is tractable for fixed $k$.
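To keep the decision problem concrete, it can be decided on tiny instances by exhaustively enumerating all $3^n$ colorings (a sketch for illustration only, exponential in $n$; the encoding of intervals as pairs $(i,j)$ with requirement triples follows the problem definition above, and the function names are ours):

```python
from itertools import product

def feasible(n, intervals):
    """Decide interval constrained 3-coloring by brute force.

    `intervals` maps each interval (i, j), 1 <= i <= j <= n, to a
    requirement triple (r1, r2, r3) with r1 + r2 + r3 == j - i + 1.
    Returns True iff some coloring chi : [n] -> {1,2,3} meets all triples.
    """
    for chi in product((1, 2, 3), repeat=n):
        if all(tuple(chi[i - 1:j].count(c) for c in (1, 2, 3)) == tuple(req)
               for (i, j), req in intervals.items()):
            return True
    return False
```

For example, a single interval $(1,3)$ with requirement $(1,1,1)$ is feasible (any permutation of the three colors works), whereas requiring node 1 to be both color 1 and color 2 via overlapping intervals is not.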
Moreover, we even show the stronger result that it is still difficult to satisfy almost all of the constraints for a feasible instance. More precisely, we prove that there is a constant $\epsilon > 0$ such that it is NP-hard to distinguish between instances where all constraints can be satisfied and those where only a $(1-\epsilon)$ fraction of the constraints can be simultaneously satisfied. To this end, we extend our reduction using expander graphs. This gap hardness result implies APX-hardness of the problem of maximizing the number of satisfied constraints. It is important to note that our construction relies neither on multiple copies of intervals nor on inconsistent requirements for an interval, i.e.~in our construction every interval $(i,j)$ has unique requirements that sum up to the length of the interval. \section{NP-hardness} \begin{theorem} It is NP-hard to decide whether there exists a feasible coloring $\chi$ for an instance $(\I, r)$ of the interval constrained 3-coloring problem. \end{theorem} \begin{proof} The proof is by reduction from the 3-SAT problem. \smallskip \noindent Suppose we are given an instance of the 3-SAT problem, defined by $q$ clauses $C_1, \dots, C_q$ and $p$ variables $x_1, \dots, x_p$. Each clause $C_i$ $(i=1, \dots, q)$ contains 3 literals, namely $y_1(i),y_2(i),y_3(i)$. Each literal $y_h(i)$ $(i=1, \dots, q$ and $h=1,2,3)$ \emph{refers to} a variable $x_j$, that is, it is equal to either $x_j$ or $\bar x_j$ for some $j$ in $1, \dots, p$. A truth assignment for the variables $x_1, \dots, x_p$ satisfies the 3-SAT instance if and only if, for each clause, at least one literal takes the value $true$. \begin{figure}[htb] \centering \includegraphics[width = 0.875 \textwidth,page=2]{figures.pdf} \caption{The sequence of nodes in an instance of the interval constrained 3-coloring problem.} \label{fig:1} \end{figure} We now construct an instance of the interval constrained 3-coloring problem.
For each clause $C_i$ we introduce a sequence of consecutive nodes. This sequence is, in turn, the union of three subsequences, one for each of the three literals (see Fig.~\ref{fig:1}). In the following, for clarity of presentation, we drop the index $i$ if it is clear from the context. We denote color 1 by RED, color 2 by BLACK and color 3 by WHITE. \paragraph{Literal $y_1(i)$.} The subsequence representing literal $y_1$ is composed of 8 nodes. Among them, there are three special nodes, namely $t_1,f_1$ and $a_1$, that play a key role since they encode the information about the truth value of the literal and of the variable $x_j$ it refers to. The basic idea is to achieve the following two goals: 1) given a feasible coloring, if $\chi(t_1)$ is BLACK, we want to be able to construct a truth assignment setting $x_j$ to $true$, while if $\chi(f_1)$ is BLACK, we want to be able to construct a truth assignment setting the variable $x_j$ to $false$; 2) given a feasible coloring, if $\chi(a_1)$ is RED, we want to be able to construct a truth assignment where $y_1$ is $true$. To achieve the first goal, we will impose the following property: \begin{property}\label{(i)} In any feasible coloring, exactly one among $t_1$ and $f_1$ will be BLACK. \end{property} \noindent To achieve the second goal, while being consistent with the first one, we must have the property that: \begin{property}\label{(ii)} In any feasible coloring, if $\chi(a_1) = RED$, then $\chi(t_1) = BLACK$ if $y_1 = x_j$, while $\chi(f_1) = BLACK$ if $y_1 = \bar x_j$. \end{property} \noindent To guarantee properties~\eqref{(i)} and \eqref{(ii)}, we introduce a suitable set $\mathcal I(y_1)$ of six intervals\footnote{In principle, interval $I_5$ and the node it contains are not needed. However, this allows us to have the same number of WHITE and BLACK colored nodes for the sake of exposition.}, shown in Fig.~\ref{fig:2}a.
\begin{figure}[htb] \centering \subfigure[]{\includegraphics[width = 0.42 \textwidth,page=3]{figures.pdf}} \hspace*{0.05 \textwidth} \subfigure[]{\includegraphics[width = 0.42 \textwidth,page=4]{figures.pdf}} \caption{Literal $y_1$. The picture on the right shows the three feasible colorings. On a black and white printout, red appears as grey.} \label{fig:2} \end{figure} \noindent The requirement function for such intervals changes depending on whether $y_1=x_j$ or $y_1=\bar x_j$. If $y_1=x_j$, we let $r(I_1)= (1,1,1)$; $r(I_2)= (1,1,1)$; $r(I_3)= (1,0,1)$; $r(I_4)= (1,1,2)$; $r(I_5)= (0,1,0)$; $r(I_6)= (2,3,3)$. For any feasible coloring there are only three possible outcomes for such a sequence, reported in Fig.~\ref{fig:2}b. Observe that properties \eqref{(i)} and \eqref{(ii)} are enforced. Now suppose that $y_1 = \bar x_j$: then we switch the requirement function with respect to WHITE and BLACK, i.e.~we define it as follows: $r(I_1)= (1,1,1)$; $r(I_2)= (1,1,1)$; $r(I_3)= (1,1,0)$; $r(I_4)= (1,2,1)$; $r(I_5)= (0,0,1)$; $r(I_6)= (2,3,3)$. Trivially, the possible outcomes for such a sequence are exactly the ones in Fig.~\ref{fig:2}b but with the BLACK and WHITE colors exchanged. \paragraph{Literal $y_3(i)$.} The sequence of nodes representing literal $y_3$ is similar to the one representing $y_1$. We still have a sequence of 8 nodes, and three special nodes $t_3,f_3$ and $a_3$. As before, we let $t_3$ and $f_3$ encode the truth value of the variable $x_j$ that is referred to by $y_3$, while $a_3$ encodes the truth value of the literal $y_3$ itself. Therefore, we introduce a set $\mathcal I(y_3)$ of intervals in order to enforce the following properties: \begin{property}\label{(iii)} In any feasible coloring, exactly one among $t_3$ and $f_3$ will receive color BLACK. \end{property} \begin{property}\label{(iv)} In any feasible coloring, if $\chi(a_3) = RED$, then $\chi(t_3) = BLACK$ if $y_3 = x_j$, while $\chi(f_3) = BLACK$ if $y_3 = \bar x_j$.
\end{property} \noindent Fig.~\ref{fig:3}a shows the nodes and the six intervals that belong to $\mathcal I(y_3)$: observe that the sequence is similar to the one representing $y_1$, but the position of node $a_3$ and the intervals are now ``mirrored''. If $y_3 = \bar x_j$, we let $r(I_1)= (1,1,1)$; $r(I_2)= (1,1,1)$; $r(I_3)= (1,0,1)$; $r(I_4)= (1,1,2)$; $r(I_5)= (0,1,0)$; $r(I_6)= (2,3,3)$. Fig.~\ref{fig:3}b reports the three possible outcomes for such sequence in a feasible coloring. Note that properties \eqref{(iii)} and \eqref{(iv)} hold. Now suppose that $y_3 = x_j$: once again, we switch the requirement function with respect to WHITE and BLACK. \begin{figure}[htb] \centering \subfigure[]{\includegraphics[width = 0.425 \textwidth,page=5]{figures.pdf}} \hspace*{0.05 \textwidth} \subfigure[]{\includegraphics[width = 0.425 \textwidth,page=6]{figures.pdf}} \caption{Literal $y_3$} \label{fig:3} \end{figure} \paragraph{Literal $y_2(i)$.} The sequence of nodes representing literal $y_2$ is slightly more complicated. It is composed of 36 nodes, and among them there are 4 special nodes, namely $t_2,f_2,a_2^{\ell}$ and $a_2^r$ (see Fig.~\ref{fig:4}). Still, we let $t_2$ and $f_2$ encode the truth value of the variable $x_j$ that is referred to by $y_2$, while $a_2^{\ell}$ and $a_2^r$ encode the truth value of the literal. \begin{figure}[htb] \centering \includegraphics[width = 0.9 \textwidth,page=7]{figures.pdf} \caption{Literal $y_2$} \label{fig:4} \end{figure} Similarly to the previous cases, we want to achieve the following goals: 1) given a feasible coloring, if $\chi(t_2)$ is BLACK, we want to be able to construct a truth assignment setting the variable $x_j$ to $true$, while if $\chi(f_2)$ is BLACK, we want to be able to construct a truth assignment setting the variable $x_j$ to $false$; 2) given a feasible coloring, if $\chi(a_2^{\ell})=\chi(a_2^r)=$ RED, we want to be able to construct a truth assignment where the literal $y_2$ is $true$. 
We are therefore interested in the following properties: \begin{property}\label{(v)} In any feasible coloring, exactly one among $t_2$ and $f_2$ will receive color BLACK. \end{property} \begin{property}\label{(vi)} In any feasible coloring, if $\chi(a_2^{\ell}) = RED$ and $\chi(a_2^r) = RED$, then $\chi(t_2) = BLACK$ if $y_2 = x_j$, and $\chi(f_2) = BLACK$ if $y_2 = \bar x_j$. \end{property} \noindent In this case, we introduce a set $\mathcal I(y_2)$ of 14 suitable intervals, shown in Fig.~\ref{fig:4}. The requirements for the case $y_2 = \bar x_j$ are given in the following table. \[ \setlength{\arraycolsep}{1.75mm} \begin{array}{c|cccccccccccccc} & I_1 & I_2 & I_3 & I_4 & I_5 & I_6 & I_7 & I_8 & I_9 & I_{10} & I_{11} & I_{12} & I_{13} & I_{14} \\ \hline RED & 1 & 1 & 1 & 1 & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 2 & 0 & 4 \\ BLACK & 1 & 1 & 0 & 1 & 1 & 3 & 1 & 1 & 0 & 1 & 1 & 3 & 2 & 7 \\ WHITE & 1 & 1 & 1 & 2 & 0 & 3 & 1 & 1 & 1 & 2 & 0 & 3 & 1 & 7 \end{array} \] \noindent Observe that the set of intervals $\{I_1, \dots, I_6 \}$ is defined exactly as the set $\mathcal I(y_3)$, therefore the possible outcomes for the sequence of 8 nodes covered by such intervals are as in Fig.~\ref{fig:3}b. Similarly, the set of intervals $\{I_7, \dots, I_{12} \}$ is defined exactly as the set $\mathcal I(y_1)$, therefore the possible outcomes for the sequence of 8 nodes covered by such intervals are as in Fig.~\ref{fig:2}b. Combining $r(I_6)$ and $r(I_{12})$ with $r(I_{14})$, it follows that in any feasible coloring $\chi$, exactly one node among $t_2$ and $f_2$ has WHITE (resp. BLACK) color, enforcing Property $\eqref{(v)}$. Still, note that if $\chi(a_2^{\ell}) = RED$ and $\chi(a_2^r) = RED$, then both the leftmost node and the rightmost node covered by interval $I_{13}$ have color BLACK, therefore $t_2$ must have color WHITE otherwise $r(I_{13})$ is violated. Together with Property \eqref{(v)}, this enforces Property \eqref{(vi)}. 
In case $y_2 = x_j$, once again we switch the requirement function with respect to WHITE and BLACK. \smallskip \noindent It remains to describe the role played by the first 13 nodes and the last 15 nodes of the sequence, which we have not considered so far; we do so in the next paragraph. \paragraph{Intervals encoding truth values of literals.} For each clause $C_i$, we add another set $\mathcal I(C_i)$ of intervals, in order to link the nodes encoding the truth values of its three literals. The main goal we pursue is the following: given a feasible coloring, we want to be able to construct a truth assignment such that at least one of the three literals is $true$. To achieve this, already having properties \eqref{(ii)}, \eqref{(iv)} and \eqref{(vi)}, we only need the following property: \begin{property}\label{(vii)} For any feasible coloring, if $\chi(a_1) \neq RED$ and $\chi(a_3) \neq RED$, then $\chi(a_2^{\ell}) = \chi(a_2^r) = RED$. \end{property} \noindent Fig.~\ref{fig:5} shows the six intervals that belong to $\mathcal I(C_i)$. The requirement function is: $r(I_1)=(1,2,2);$ $r(I_2)=(1,2,2);$ $r(I_3)=(1,6,6);$ $r(I_4)=(1,3,3);$ $r(I_5)=(1,2,2);$ $r(I_6)=(1,7,7)$. We now show that Property \eqref{(vii)} holds. Suppose $\chi$ is a feasible coloring, and let $v_1, \dots, v_{13}$ be the first 13 nodes of the sequence introduced for literal $y_2$. By construction, if $\chi(a_1) \neq RED$, then there is a node $v_j$ with $\chi(v_j) = RED$ and $j \in \{1,2,3\}$, otherwise $r(I_1)$ is violated. Similarly, if $\chi(a_2^{\ell}) \neq RED$, then there is a node $v_j$ with $\chi(v_j) = RED$ and $j \in \{11,12,13\}$, otherwise $r(I_2)$ is violated. On the other hand, the subsequence $v_1, \dots, v_{13}$ contains exactly one node with RED color, otherwise $r(I_3)$ is violated. It follows that at least one among $a_1$ and $a_2^{\ell}$ has RED color. The same conclusions can be stated for nodes $a_2^r$ and $a_3$. Putting everything together, it follows that Property \eqref{(vii)} holds.
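To make the feasibility checks used throughout this construction concrete, the following brute-force sketch enumerates all colorings of a small instance and keeps those meeting every interval requirement exactly. The instance below is a toy example of our own; the actual gadget intervals are specified in the figures and are not reproduced here.

```python
from itertools import product

COLORS = ("RED", "BLACK", "WHITE")

def feasible_colorings(n, intervals):
    """Enumerate all feasible colorings of n nodes.

    intervals: list of ((l, r), (red, black, white)) with 1-based,
    inclusive endpoints; a coloring is feasible if every interval
    contains exactly the required number of nodes of each color."""
    result = []
    for chi in product(COLORS, repeat=n):
        ok = True
        for (l, r), req in intervals:
            segment = chi[l - 1 : r]
            if tuple(segment.count(c) for c in COLORS) != req:
                ok = False
                break
        if ok:
            result.append(chi)
    return result

# Toy instance (not one of the gadgets): 4 nodes, two intervals.
toy = [((1, 4), (1, 2, 1)),  # all four nodes: one RED, two BLACK, one WHITE
       ((1, 2), (1, 1, 0))]  # first two nodes: one RED, one BLACK
print(len(feasible_colorings(4, toy)))  # 4
```

The exhaustive search runs in time $3^n$, so it is only meant to illustrate the notion of a feasible coloring on gadget-sized sequences.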
\begin{figure}[htb] \centering \includegraphics[width = 0.9 \textwidth,page=8]{figures.pdf} \caption{Set of intervals $\mathcal I(C_i)$.} \label{fig:5} \end{figure} \paragraph{Intervals encoding truth value of variables (later also called: variable intervals).} Our last set of intervals forces different nodes to take the same color if they encode the truth value of the same variable. In particular, we aim at having the following property: \begin{property}\label{(viii)} In any feasible coloring, $\chi(t_h(i) ) = \chi(t_k(i'))$ if both literals $y_h(i)$ and $y_k(i')$ refer to the same variable $x_j$. \end{property} \noindent To achieve this, for each pair of such literals we add a big interval $I(y_h(i), y_k(i'))$ from $f_k(i')$ to $t_h(i)$ (assuming $i' < i$ without loss of generality). Note that, by construction, there is a subset of intervals that partitions all the internal nodes covered by the interval. That is, we know exactly how many of these nodes must be colored RED, BLACK and WHITE (say $z_1,z_2,z_3$ respectively). Then, we let the requirement function be $r(I(y_h(i), y_k(i'))) = (z_1, z_2+1, z_3+1)$. Under these assumptions, if $\chi$ is a feasible coloring then $\chi(t_h(i)) \neq \chi(f_k(i'))$, and in particular one node will have WHITE color and the other one BLACK color. Combining this with properties \eqref{(i)},\eqref{(iii)} and \eqref{(v)}, the result follows. \medskip Notice that such an interval constrained 3-coloring instance can clearly be constructed in polynomial time. Now we discuss the following claim in more detail. \begin{claim} There exists a truth assignment satisfying the 3-SAT instance if and only if there exists a feasible coloring $\chi$ for the interval constrained 3-coloring instance. \end{claim} First, suppose there exists a feasible coloring. We construct a truth assignment as follows.
We set a variable $x_j$ to $true$ if $\chi(t_h(i))=BLACK$, and to $false$ otherwise, where $y_h(i)$ is any literal referring to $x_j$. Note that, by Property \eqref{(viii)}, the resulting truth value does not depend on the literal we take. Moreover, combining Property \eqref{(vii)} with properties \eqref{(ii)},\eqref{(iv)} and \eqref{(vi)}, we conclude that, for each clause, at least one literal will be $true$. By construction, we therefore end up with a truth assignment satisfying the 3-SAT instance. The result follows. \smallskip Now suppose that there is a truth assignment satisfying the 3-SAT instance. The basic idea is to construct a coloring $\chi$ such that the following property holds for all literals: \begin{property}\label{(viv)} $\chi(t_h(i)) =$ BLACK (resp. WHITE) if and only if $y_h(i)$ refers to a $true$-variable (resp. $false$-variable). \end{property} \smallskip \noindent Consider the sequence of nodes representing literal $y_1(i)$, and suppose $y_1(i) = x_j$ for some $j$. We color these nodes as in Fig.~\ref{fig:2}b-\emph{(1)} if the literal is $true$ in the truth assignment, and as in Fig.~\ref{fig:2}b\emph{-(3)} otherwise. If $y_1(i) = \bar {x}_j$, we switch the BLACK and WHITE colors in both of the previous cases. Now focus on the sequence of nodes representing literal $y_3(i)$. If $y_3(i) =\bar x_j$ for some $j$, we color these nodes as in Fig.~\ref{fig:3}b-\emph{(1)} if the literal is $true$, and as in Fig.~\ref{fig:3}b\emph{-(3)} otherwise. If $y_3(i) = {x}_j$, we switch the BLACK and WHITE colors in both of the previous cases. Finally, consider the sequence of nodes representing literal $y_2(i)$. Suppose $y_2(i) = \bar x_j$. We color the 18 nodes in the middle of the sequence as in Fig.~\ref{fig:8}\emph{-(1)} if $y_2(i)$ is $true$, as in Fig.~\ref{fig:8}\emph{-(2)} if both $y_2(i)$ and $y_1(i)$ are $false$, and as in Fig.~\ref{fig:8}\emph{-(3)} otherwise. Once again, if $y_2(i) = {x}_j$, we switch the BLACK and WHITE colors in all three previous cases.
Notice that, by construction, Property \eqref{(viv)} holds, and none of the requirements for the intervals in $\mathcal I(y_h(i))$ ($i=1, \dots, q$ and $h=1, 2, 3$) is violated. \begin{figure}[htb] \centering \includegraphics[width = 0.9 \textwidth,page=9]{figures.pdf} \caption{Coloring of nodes representing literal $y_2$} \label{fig:8} \end{figure} Now we show how to color the first 13 nodes $(v_1, \dots, v_{13})$ and the last 15 nodes $(w_1, \dots, w_{15})$ of the sequence representing literal $y_2(i)$, in such a way that the requirements of the intervals $I_1, \dots, I_6$ in $\mathcal I(C_i)$ are not violated ($i=1, \dots, q$). Note that, by construction, at least one node among $a_1$ and $a_2^{\ell}$ is colored with RED. In fact, if $y_1(i)$ is $true$ then $\chi(a_1) =$ RED, while if $y_1(i)$ is $false$ then $a_2^{\ell}$ is colored with RED. Similarly, at least one node among $a_3$ and $a_2^r$ is colored with RED, since $\chi(a_2^r) \neq RED$ only if both literals $y_1(i)$ and $y_2(i)$ are $false$: then, necessarily $y_3(i)$ is $true$, and therefore $\chi(a_3) = RED$. Let us focus on the nodes $v_1, \dots, v_{13}$, and let $u$ be the node in between $v_{13}$ and $a_2^{\ell}$. In the following, we refer to WHITE as the \emph{opposite} color of BLACK and vice versa. As already discussed, only two cases can occur: Case 1: $\chi(a_1) = \chi(a_2^{\ell}) = RED$. We color $v_1$ with the opposite color of $f_1$, and the nodes $v_2$ and $v_3$ with BLACK and WHITE. Note that $r(I_1)$ is not violated. We then color $v_4,v_5,v_6$ with the opposite color of $v_1,v_2,v_3$ respectively. Similarly, we color $v_{13}$ with the opposite color of $u$. Then, we color $v_{12}$ and $v_{11}$ with BLACK and WHITE, so that $r(I_2)$ is not violated. Once again, we assign to $v_{10},v_9,v_8$ the opposite color of $v_{13},v_{12},v_{11}$ respectively. Finally, we let $\chi(v_7) = RED$. Note that $r(I_3)$ is not violated. Case 2: $\chi(a_1) \neq RED$ and $\chi(a_2^{\ell}) = RED$, or vice versa.
Suppose $\chi(a_1) \neq RED$ (the other case is similar). Both nodes $a_1$ and $f_1$ can only have BLACK or WHITE colors. Then, we can color $v_1$ and $v_2$ with the opposite color of $a_1$ and $f_1$ respectively, and $v_3$ with color RED, so that $r(I_1)$ is not violated. Next, we color $v_4$ and $v_5$ with the opposite colors of $v_1$ and $v_2$. Finally, we color $v_6$ and $v_7$ with BLACK and WHITE. To the remaining nodes $v_8, \dots, v_{13}$ we assign the same colors as in Case 1. One can check that the requirements of intervals $I_2$ and $I_3$ are not violated. One can prove in a similar manner that the nodes $(w_1, \dots, w_{15})$ can be properly colored, without violating the requirements of intervals $I_4,I_5,I_6$. \smallskip Finally, since Property \eqref{(viv)} holds, it is easy to see that, for each pair of literals $y_h(i), y_k(i')$, the requirement $r(I(y_h(i), y_k(i')))$ is also not violated. The result then follows. \end{proof} \section{Gap hardness} We will now argue that not only the interval constrained 3-coloring problem but also its gap version is NP-hard, i.e., it is hard to distinguish between satisfiable instances and those where only up to a $(1-\epsilon)$ fraction of the constraints may be simultaneously satisfied. For the purpose of our argument we will use the following, rather restricted, definition of gap hardness. We will only talk about maximization versions of constraint satisfaction problems. Think of an instance of the problem as being equipped with an additional parameter $t$ called the threshold. We ask for a polynomial time algorithm which, given the instance, answers: \begin{itemize} \item ``YES'' if all the constraints can be satisfied, \item ``NO'' if there is no solution satisfying more than $t$ constraints. \end{itemize} Note that for instances where more than $t$ but not all constraints can be simultaneously satisfied, any answer is acceptable.
We will now restrict our attention to the case where the threshold is a fixed fraction of the total number of constraints in the instance. We call a problem A \emph{gap NP-hard} if there exists a positive $\epsilon$ such that there is no polynomial time algorithm to separate feasible instances from those where only at most a $(1-\epsilon)$ fraction of the constraints can be simultaneously satisfied unless $P = NP$. Observe that gap NP-hardness implies APX-hardness, but not vice versa. For example, the linear ordering problem (also known as max-subdag) is APX-hard~\cite{papa_apx}, but is not gap NP-hard, since feasible instances may be found by topological sorting. Let us first note that the 3-SAT problem, which we used in the reduction from the previous section, has the gap hardness property. It is the essence of the famous PCP theorems that problems with such gap hardness exist. For a proof of the gap hardness of 3-SAT see~\cite{gap_amp}. Before we show how to modify our reduction to prove gap hardness of the interval constrained coloring problem, we need to introduce the notion of \emph{expander graphs}. For brevity we will only give the following extract from~\cite{gap_amp}. \begin{definition} Let $G = (V,E)$ be a $d$-regular graph. Let $E(S,\overline{S}) = | (S\times\overline{S}) \cap E |$ equal the number of edges from a subset $S \subseteq V$ to its complement. The \emph{edge expansion} of $G$ is defined as \[ h(G)= \min_{S:|S|\leq |V|/2}\frac{E(S,\overline{S})}{|S|}. \] \end{definition} \begin{lemma} There exists $d_0 \in \setN$ and $h_0 > 0$, such that there is a polynomial-time constructible family $\{ X_n \} _{n \in \setN}$ of $d_0$-regular graphs $X_n$ on $n$ vertices with $h(X_n) \geq h_0$. (Such graphs are called expanders). \end{lemma} Let us now give a ``gap preserving'' reduction from gap 3-SAT to gap interval constrained 3-coloring. Consider the reduction from the previous section.
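On small graphs, the edge expansion $h(G)$ defined above can be evaluated directly from the definition. The following sketch of ours does so by exhaustive enumeration of subsets (exponential time, for illustration only).

```python
from itertools import combinations

def edge_expansion(vertices, edges):
    """h(G) = min over nonempty S with |S| <= |V|/2 of E(S, S-bar)/|S|.

    edges: set of frozensets {u, v} of an undirected simple graph."""
    best = float("inf")
    for k in range(1, len(vertices) // 2 + 1):
        for subset in combinations(vertices, k):
            S = set(subset)
            cut = sum(1 for e in edges if len(e & S) == 1)  # crossing edges
            best = min(best, cut / len(S))
    return best

# K4 is 3-regular: every singleton has cut 3, every pair has cut 4,
# so h(K4) = min(3/1, 4/2) = 2.
V = [0, 1, 2, 3]
E = {frozenset(e) for e in combinations(V, 2)}
print(edge_expansion(V, E))  # 2.0
```

For comparison, the path on 4 vertices has expansion $1/2$ (cut the middle edge), which illustrates why a path-like connection gives too little connectivity for the gap reduction.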
Observe that the number of intervals in each literal gadget, and therefore also in each clause gadget, is constant. The remaining intervals are the variable intervals. While it is sufficient for the NP-hardness proof to connect occurrences of the same variable in a ``clique'' fashion with variable intervals, this produces a potentially quadratic number of intervals. Alternatively, one could connect these occurrences in a ``path'' fashion, but this would give too little connectivity for the gap reduction. The path-like connection has the desired property of using only a linear number of intervals, since each occurrence of a variable is linked with at most two other ones. We aim at providing more connectivity while not increasing the number of intervals too much. A perfect tool to achieve this goal is a family of expander graphs. Consider the instance of the interval coloring problem obtained by the reduction from the previous section, but without any variable intervals yet. Consider the literal gadgets corresponding to occurrences of a particular variable $x$. Think of these occurrences as vertices of a graph $G$. Take an expander graph $X_{|V(G)|}$ and connect two occurrences of $x$ if the corresponding vertices in the expander are connected. For each such connection use a pair of intervals. These intervals should be the original variable interval and an interval that is one element shorter on each of the sides. We will call this pair of intervals a variable link. Repeat this procedure for each of the variables. Observe that the number of variable links that we added is linear, since all the used expander graphs are $d_0$-regular. In contrast to the simple path-like connection, we now have the property that different occurrences of the same variable have high edge connectivity. This can be turned into a high penalty for inconsistent valuations of literals in an imperfect solution. \begin{theorem} Interval constrained 3-coloring is gap NP-hard.
\end{theorem} \begin{proof} We will argue that the above described reduction is a gap-preserving reduction from the gap 3-SAT problem to the gap interval 3-coloring problem. We need to prove that there exists a positive $\epsilon$ such that feasible instances are hard to separate from those that are less than $(1-\epsilon)$ satisfiable. Let $\epsilon_0$ be the constant in the gap hardness of gap 3-SAT. We need to show two properties: that the ``YES'' instances of the gap 3-SAT problem are mapped to ``YES'' instances of our problem, and also that the ``NO'' instances are mapped to ``NO'' instances. The first property is simple: already in the NP-hardness proof in the previous section it was shown that feasible instances are mapped by our reduction into feasible ones. To show the second property, we will take the reverse direction and argue that an almost feasible solution to the coloring instance can be transformed into an almost feasible solution to the SAT instance. Suppose we are given a coloring $\chi$ that violates at most an $\epsilon$ fraction of the constraints. Suppose the original 3-SAT instance has $q$ clauses; then our interval coloring instance has at most $c \cdot q$ intervals for some constant $c$. The number of unsatisfied intervals in the coloring $\chi$ is then at most $\epsilon q c$. We will say that a clause is \emph{broken} if at least one of the intervals encoding it is not satisfied by $\chi$. We will say that a variable link is broken if one of its intervals is not satisfied or one of the clauses it connects is broken. An unsatisfied variable link interval contributes a single broken link; an unsatisfied interval within a clause breaks at most $3 d_0$ variable links connected to the clause. Therefore, there are at most $3 d_0 \epsilon q c$ broken variable links in total. Recall that each variable link that is not broken connects occurrences of the same variable in two different not broken clauses.
Moreover, by the construction of the variable link, these two occurrences display the same logical value of the variable. Consider the truth assignment $\phi$ obtained as follows. For each variable consider its occurrences in the not broken clauses. Each occurrence associates a logical value to the variable. Take for this variable the value that is displayed in the larger set of not broken clauses, breaking ties arbitrarily. We will now argue that $\phi$ satisfies a large fraction of the clauses. Call a clause \emph{bad} if it is not broken, but it contains a literal that the coloring $\chi$ displays as $true$ while $\phi$ evaluates it to $false$. Observe that if a clause is neither broken nor bad, then it is satisfied by $\phi$. It remains to bound the number of bad clauses. Consider the clauses that become bad from the choice of the value that $\phi$ assigns to a particular variable $x$. Let $b_x$ be the number of such clauses. By the connectivity property of expanders, the number of variable links connecting these occurrences of $x$ with other occurrences is at least $h_0 b_x$. As we observed above, all these variable links are broken. Since there are in total at most $3 d_0 \epsilon q c$ broken links, we obtain that there are at most $\tfrac{3}{h_0} d_0 \epsilon q c$ bad clauses. Hence, there are at most $(\tfrac{3}{h_0} d_0 +1)\epsilon q c$ clauses that are either bad or broken, and they cover all the clauses not satisfied by $\phi$. It remains to fix $\epsilon = \tfrac{h_0}{(3 d_0 + h_0)c} \epsilon_0$ to obtain the property that more than $\epsilon_0$ unsatisfiable instances of 3-SAT are mapped to more than $\epsilon$ unsatisfiable instances of the interval constrained 3-coloring problem. \end{proof} \smallskip \noindent {\bf Acknowledgment} \noindent We thank Steven Kelk for valuable discussions. \bibliographystyle{splncs}
\section{Introduction} The algebraic theory of differential equations, also known as differential algebra~\cite{Ritt}, aims at studying nonlinear differential equations using methods of algebra and algebraic geometry. For doing this, one typically abstracts from functions (analytic, meromorphic, etc) to elements of differential fields (fields equipped with a derivation or several commuting derivations). This approach turned out to be fruitful yielding interesting results from theoretical and applied perspectives (see, e.g., \cite{Fliess1987, Boulier, vanderPut2003,Pila2016, jfunc}). Furthermore, one can additionally use powerful tools from model theory to study differential fields (see, e.g., \cite{Marker, Nagloo2021}). In this context, a fundamental question is how to transfer results about differential fields back to the realm of analysis. There are two classical theorems in differential algebra typically used for this purpose: \begin{itemize} \item \emph{Ritt's theorem of zeroes}~\cite[p. 176]{Ritt} which can be viewed as an analogue of Hilbert's Nullstellensatz. The theorem implies that any system of nonlinear PDEs having a solution in some differential field has a solution in a field of meromorphic functions on some domain. \item \emph{Seidenberg's embedding theorem}~\cite{Seid1958} which is often used as a differential analogue of the Lefschetz principle (e.g.~\cite{jfunc, Gauchman1989,Buium1995, Binyamini2017, Hardouin2008}). The theorem says that any countably generated differential field with several commuting derivations can be embedded into a field of meromorphic functions on some domain. \end{itemize} In~\cite{Seid1958}, Seidenberg gave a complete proof of his theorem for the case of a single derivation (see also~\cite[Appendix~A]{Marker}). For the PDE case, he gave a sketch which reuses substantial parts of Ritt's proof of Ritt's zero theorem from~\cite{Ritt}. 
The latter proof concludes the whole monograph and heavily relies on the techniques developed there. In particular, Ritt's proof uses the machinery of characteristic sets~\cite[Chapter V]{Ritt}, which is a fundamental tool in differential algebra but not so well-known in the broader algebra community, and a quite technical existence theorem for PDEs due to Riquier~\cite[Chapter VIII]{Ritt} (see also~\cite{Riquier}) which, to the best of our knowledge, is not discussed in the standard PDE textbooks. Due to the importance of the theorems of Ritt and Seidenberg as bridges between the algebraic and analytic theories of nonlinear PDEs, we think that it is highly desirable to have short proofs of these theorems accessible to people with some general knowledge in algebra and PDEs. In the present paper, we give such proofs. Our proofs rely only on some basic facts from differential algebra and the classical Cauchy-Kovalevskaya theorem for PDEs. Our proof strategy is inspired by the argument from~\cite[Theorem~3.1]{GRP} for the case of one derivation. However, the techniques from~\cite{GRP} had to be substantially developed in order to tackle the PDE case (which is quite subtle~\cite{Lemaire2003}) and to prove both Ritt's and Seidenberg's theorems (not only Ritt's, as in~\cite{GRP}). The key ingredients of the argument are an auxiliary change of derivations (Lemma~\ref{lemmacoef}), which helps us to bring a system of PDEs into the form required by the Cauchy-Kovalevskaya theorem, Taylor homomorphisms (Definition~\ref{deftaylor}), which allow one to build formal power series solutions, and a characterization of differentially simple algebras (Lemma~\ref{lemmasimple}). The paper is organized as follows. Section~\ref{sec:preliminaries} contains the basic definitions used to state the main results in Section~\ref{sec:main}. Section~\ref{sec:proofs_notions} contains relevant notions and facts from algebra and analysis used in the proofs. The proofs are located in Section~\ref{sec:proofs}.
Section~\ref{sec:spec} contains a remark on the special case of algebras over $\mathbb{C}$. \section{Preliminaries}\label{sec:preliminaries} \subsection{Algebra} Throughout the paper, all algebras are assumed to be \emph{unital} (that is, with a multiplicative identity element). \begin{notation}[Multi-indices] For every $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{Z}_{\geqslant 0}^m$ and for every tuple $t = (t_1, \ldots, t_m)$ of elements of a ring, we denote \[ t^{\alpha} := t_1^{\alpha_1}\cdot \ldots \cdot t_m^{\alpha_m} \quad\text{ and }\quad \alpha! := \alpha_1!\cdot \ldots \cdot \alpha_m!. \] \end{notation} \begin{definition}[Differential rings and algebras] Let $\Delta = \{ \delta_1, \ldots, \delta_m\}$ be a set of symbols. \begin{itemize} \item Let $R$ be a commutative ring. An additive map $\delta \colon R \to R$ is called a \emph{derivation} if $\delta(ab) = \delta(a) b + a \delta(b)$ for any~$a,b\in R$. \item A commutative ring $R$ is called a \emph{$\Delta$-ring} if $\delta_1, \ldots, \delta_m$ act on $R$ as pairwise commuting derivations. If $R$ is a field, it is called a \emph{$\Delta$-field}. \item Let $A$ be a commutative algebra over a ring $R$. If $A$ and $R$ are $\Delta$-rings and the action of $\Delta$ on $R$ coincides with the restriction of the action of $\Delta$ on $R\cdot 1_A \subseteq A$, then $A$ is called a \emph{$\Delta$-algebra} over $R$. \end{itemize} \end{definition} \begin{definition}[Differential generators] Let $A$ be a $\Delta$-algebra over a $\Delta$-ring $R$. A set $S \subseteq A$ is called a set of \emph{$\Delta$-generators} of $A$ over $R$ if the set \[ \{ \delta^{\alpha} s \mid s \in S,\; \alpha \in \mathbb{Z}_{\geqslant 0}^m\} \] of all the derivatives of all the elements of $S$ generates $A$ as an $R$-algebra. A $\Delta$-algebra is said to be finitely $\Delta$-generated if it has a finite set of $\Delta$-generators. $\Delta$-generators for $\Delta$-fields are defined analogously.
\end{definition} \begin{definition}[Differential homomorphisms] Let~$A$ and~$B$ be $\Delta$-algebras over $\Delta$-ring~$R$. A map~$f\colon A \rightarrow B$ is called~\emph{$\Delta$-homomorphism} if~$f$ is a homomorphism of commutative~$R$-algebras and~$f(\delta a) = \delta f(a)$ for all~$\delta \in \Delta$ and~$a\in A$. An injective~$\Delta$-homomorphism is called \emph{$\Delta$-embedding}. \end{definition} \begin{definition}[Differential algebraicity] Let $A$ be a $\Delta$-algebra over a $\Delta$-ring $R$. An element~$a\in A$ is said to be~\emph{$\Delta$-algebraic} over~$R$ if the set~$\{ \delta^{\alpha}a \mid \alpha \in \mathbb{Z}_{\geqslant 0}^m \}$ of all the derivatives of $a$ is algebraically dependent over~$R$. In other words, $a$ satisfies a nonlinear PDE with coefficients in $R$. \end{definition} \subsection{Analysis} \begin{definition} [Multivariate holomorphic functions] Let $U \subseteq \mathbb{C}^m$ be a domain. A function~$f: U \rightarrow \mathbb{C}$ is called a \emph{holomorphic} function in~$m$ variables on $U$ if it is holomorphic on~$U$ with respect to each individual variable. The set of all holomorphic functions on $U$ will be denoted by $\mathcal{O}_m(U)$ \end{definition} \begin{notation} Let~$f$ be a holomorphic function on~$U \subseteq \mathbb{C}^m$. By~$V(f)$ we denote the set of zeroes of~$f$. \end{notation} \begin{definition}[{Multivariate meromorphic functions, \cite[Chapter IV, Definition 2.1]{FL}}] Let $U\subseteq \mathbb{C}^m$ be a domain. A \emph{meromorphic} function on $U$ is a pair $(f, M)$, where~$M$ is a thin set in~$U$ and~$f \in \mathcal{O}_m(U\setminus M)$ with the following property: for every~$z_0\in U$, there is a neighbourhood~$U_0$ of~$z_0$ and there are functions $g, h \in \mathcal{O}_m(U_0)$, such that~$V(h)\subseteq M$ and \[ f(z) = \dfrac{g(z)}{h(z)}~\text{ for every }~z\in U_0 \setminus M. \] The set of meromorphic functions on a domain~$U$ is denoted~$\mathcal{M}er_m(U)$. 
By convention we define $\mathcal{M}er_0(U) = \mathcal{O}_0(U) = \mathbb{C}$. For every domain $U \subseteq \mathbb{C}^m$, the field $\mathcal{M}er_m(U)$ has a natural structure of $\Delta$-field with $\delta_i \in \Delta$ acting as $\frac{\partial}{\partial z_i}$, where $z_1, \ldots, z_m$ are the coordinates in $\mathbb{C}^m$. Furthermore, if $U \subseteq V$, then there is a natural $\Delta$-embedding $\mathcal{M}er_m(V) \subseteq \mathcal{M}er_m(U)$. \end{definition} \section{Main Results}\label{sec:main} \begin{theorem}[Seidenberg's embedding theorem] Let $W \subseteq \mathbb{C}^m$ be a domain and let~$K \subseteq \mathcal{M}er_m(W)$ be an at most countably $\Delta$-generated~$\Delta$-field (over $\mathbb{Q}$). Let $L \supset K$ be a $\Delta$-field finitely $\Delta$-generated over~$K$. Then there exists a domain $U \subseteq W$ and a~$\Delta$-embedding~$f\colon L \rightarrow \mathcal{M}er_m(U)$ over $K$. \end{theorem} \begin{theorem}[Ritt's theorem of zeroes] Let $W \subseteq \mathbb{C}^m$ be a domain and let~$K \subseteq \mathcal{M}er_m(W)$ be a~$\Delta$-field. Let~$A$ be a finitely $\Delta$-generated~$\Delta$-algebra over~$K$. Then there exists a non-trivial $\Delta$-homomorphism~$f: A \rightarrow \mathcal{M}er_m(U)$ for some domain ${U \subseteq W \subseteq \mathbb{C}^m}$ such that $f(a)$ is~$\Delta$-algebraic over~$K$ for any~$a\in A$. \end{theorem} \begin{corollary}\label{cor:holomorphic} Let~$A$ be a finitely $\Delta$-generated~$\Delta$-algebra over~$\mathbb{C}$. Then there exists a non-trivial $\Delta$-homomorphism~$f\colon A \rightarrow \mathcal{O}_m(U)$ for some domain ${U \subseteq \mathbb{C}^m}$. \end{corollary} \begin{proof} Ritt's theorem yields the existence of a $\Delta$-homomorphism $f\colon A\to \mathcal{M}er_m(W)$. Let $a_1, \ldots, a_n$ be a set of $\Delta$-generators of $A$. There is a domain $U \subseteq W$ such that $f(a_1), \ldots, f(a_n)$ are holomorphic in $U$.
Therefore, the restriction of $f$ to $U$ yields a $\Delta$-homomorphism $A \to \mathcal{O}_m(U)$. \end{proof} \section{Notions and results used in the proofs}\label{sec:proofs_notions} \subsection{Algebra} \begin{notation} Let $R$ be a $\Delta$-ring. By~$R[[z_1,\ldots,z_m]]$ we denote the ring of formal power series over~$R$ in variables~$z_1,\ldots,z_m$. It has a natural structure of $\Delta$-algebra over $R$ with $\delta_i \in \Delta$ acting as $\frac{\partial}{\partial z_i}$. \end{notation} \begin{definition}[Taylor homomorphisms]\label{deftaylor} Let $A$ be a $\Delta$-algebra over a $\Delta$-field $K$, let $L \supseteq K$ be a~$\Delta$-field, and let the action of $\Delta$ on $L$ be trivial. Let $\psi\colon A \to L$ be a (not necessarily differential) homomorphism of $K$-algebras. Let~$w\in L^m$. Then we define a map called the \emph{Taylor homomorphism} $T_{\psi, w} \colon A \to L[[t_1,\ldots,t_m]]$ by the formula \[ T_{\psi, w}(a) := \sum\limits_{\alpha \in \mathbb{Z}^m_{\geqslant 0}} \psi(\delta^\alpha a)\dfrac{(t-w)^\alpha}{\alpha!} \quad \text{ for every } a\in A. \] A direct computation shows~\cite[\S 44.3]{YuPbook} that $T_{\psi, w}$ is a $\Delta$-homomorphism. \end{definition} \begin{notation} Let $R$ be a $\Delta$-ring. For every subset $S \subseteq R$, by~$\Delta^\infty S$ we denote the set $\{\delta^\alpha s | \alpha\in \mathbb{Z}^m_{\geqslant 0}, s\in S\}$ of all derivatives of the elements of~$S$. \end{notation} \begin{definition}[Differential polynomials] Let $R$ be a $\Delta$-ring. Consider an algebra of polynomials \[ R[\Delta^\infty x_1,\ldots, \Delta^\infty x_n] := R[\delta^\alpha x_i| \alpha\in\mathbb{Z}^m_{\geqslant 0}, i=1,\ldots,n] \] in infinitely many variables $\delta^\alpha x_i$. We define the structure of $\Delta$-algebra over $R$ by \[ \delta_i (\delta^\alpha x_j) := (\delta_i \delta^\alpha) x_j \text{ for every } 1\leqslant i \leqslant m,\; 1\leqslant j \leqslant n,\; \alpha\in \mathbb{Z}_{\geqslant 0}^m.
\] The resulting algebra is called the \emph{algebra of $\Delta$-polynomials} in $x_1, \ldots, x_n$ over $R$. \end{definition} \begin{definition}[Separants] \label{defsepinit} Let $R$ be a $\Delta$-ring. Let~$P(x) \in R[\Delta^\infty x]$. We introduce an~ordering on the derivatives of~$x$ as~follows: \begin{equation}\label{eq:ord} \delta^\alpha x < \delta^\beta x\iff \alpha <_{\grlex} \beta, \end{equation} where~$\grlex$ is the graded lexicographic ordering of~$\mathbb{Z}^m_{\geqslant 0}$. Let $\delta^\mu x$ be the highest (w.r.t. the introduced ordering) derivative appearing in~$P$. Consider~$P$ as a univariate polynomial in~$\delta^{\mu}x$ over~$R[\delta^\alpha x| \alpha <_{\grlex} \mu]$. We define the \emph{separant} of $P$ by \[ \sep_x^{\Delta}(P) := \frac{\partial}{\partial (\delta^\mu x)} P. \] \end{definition} \begin{remark} Throughout the rest of the paper, we assume that the ordering of a set of derivatives of an element of a~$\Delta$-algebra is the one defined in~\eqref{eq:ord}. \end{remark} \begin{definition}[Differential algebraicity and transcendence] Let $R$ be a $\Delta$-ring and let $A$ be a $\Delta$-algebra over $R$. \begin{itemize} \item A subset~$S\subseteq A$ is said to be \emph{$\Delta$-dependent} over~$R$ if~$\Delta^\infty S$ is algebraically dependent over~$R$. Otherwise, $S$ is called \emph{$\Delta$-independent} over~$R$. \item An element~$a\in A$ is said to be~\emph{$\Delta$-algebraic} over~$R$ if the set~$\{a\}$ is $\Delta$-dependent over~$R$. Otherwise, $a$ is called $\Delta$-transcendental over $R$. \end{itemize} \end{definition} \begin{definition}[Differential transcendence degree] Let~$A$ be a~$\Delta$-algebra over a field~$K$. Any maximal subset of~$A$ that is $\Delta$-independent over~$K$ is called a \emph{$\Delta$-transcendence basis} of~$A$ over~$K$.
The cardinality of a $\Delta$-transcendence basis does not depend on the choice of the basis~\cite[II.9, Theorem~4]{Kolchin} and is called the~\emph{$\Delta$-transcendence degree} of~$A$ over~$K$ (denoted by~$\operatorname{difftrdeg}_K^\Delta A$). \end{definition} \begin{definition}[Differential ideals] Let~$R$ be a $\Delta$-ring. A subset~$I \subseteq R$ is called a \emph{differential ideal} if it is an ideal of~$R$ considered as a commutative ring and~$\delta a \in I$ for any~$\delta\in \Delta$ and~$a\in I$. \end{definition} \begin{notation} Throughout the rest of the paper, we use the notation $\Delta_0 := \Delta \setminus \{\delta_1\}$. \end{notation} \subsection{Analysis} The following is a special case of the Cauchy-Kovalevskaya theorem~\cite[Chapter V, \textsection 94]{Goursat}, which is sufficient for our purposes. \begin{theorem}[Cauchy-Kovalevskaya] Consider holomorphic functions in variables $z_1, \ldots, z_m$. The operator of differentiation with respect to $z_i$ will be denoted by $\delta_i$ for $i = 1, \ldots, m$. For a positive integer $r$, we introduce a set of multi-indices $M_r := \{\alpha \in \mathbb{Z}_{\geqslant 0}^m \mid |\alpha| \leqslant r, \alpha_1 < r\}$. Consider a~PDE in an unknown function $u$ \begin{equation} \label{syskov} \delta_1^{r} u = F(z_1, \ldots, z_m; \delta^{\alpha}u \mid \alpha \in M_r), \end{equation} where $F$ is a rational function over $\mathbb{C}$ in $z_1, \ldots, z_m$ and derivatives $\{\delta^\alpha u \mid \alpha \in M_r\}$. Consider complex numbers $a_1, \ldots, a_m$ and functions $\varphi_0, \ldots, \varphi_{r - 1}$ in variables $z_2, \ldots, z_m$ holomorphic in a neighborhood of $(a_2, \ldots, a_m)$ such that $F$ is well-defined under the substitution: \begin{enumerate} \item $a_i$ for $z_i$ for every $1 \leqslant i \leqslant m$, \item and $(\delta^{(\alpha_2, \ldots, \alpha_m)}\varphi_{\alpha_1})(a_2, \ldots, a_m)$ for $\delta^\alpha u$ for every $\alpha \in M_r$.
\end{enumerate} Then there is a unique function $u$ holomorphic in a neighborhood of $(a_1, \ldots, a_m)$ satisfying~\eqref{syskov} and \[ (\delta_1^i u)|_{z_1 = a_1} = \varphi_i \quad\text{ for every }\quad 0 \leqslant i < r. \] \end{theorem} \section{Proofs}\label{sec:proofs} This section is structured as follows. In Section~\ref{sec:dintegral}, we introduce the notion of $\Delta$-integral elements, which is an algebraic way of saying that an element satisfies a PDE as in the Cauchy-Kovalevskaya theorem. We prove that there always exists a linear change of derivations making a fixed element $\Delta$-integral (Lemma~\ref{lemmacoef}) and prove Lemma~\ref{lemmafinite}, which is a key tool for reducing the problem to the same problem in fewer derivations. Section~\ref{sec:seidenberg} contains the proof of Seidenberg's embedding theorem, which proceeds by induction on the number of derivations using Lemma~\ref{lemmafinite}. We deduce Ritt's theorem of zeroes in Section~\ref{sec:ritt} from Seidenberg's theorem and Lemma~\ref{lemmasimple} characterizing $\Delta$-simple algebras. \subsection{Differentially integral generators}\label{sec:dintegral} \begin{definition}[$\Delta$-integral elements] Let $R$ be a $\Delta$-ring and let $A$ be a $\Delta$-algebra over $R$. An element~$a\in A$ is said to be~\emph{$\Delta$-integral} over~$R$ if there exists $P(x) \in R[\Delta^\infty x]$ such that \begin{itemize} \item $P(a)=0$, $\sep_x^\Delta (P) (a) \neq 0$; \item the highest (w.r.t. the ordering~\eqref{eq:ord}) derivative in~$P$ is of the form~$\delta_1^r x$. \end{itemize} \end{definition} \begin{remark} \label{eqremark} If~$a\in A$ is~$\Delta$-integral over~$R$, then the equality~$\delta_1 (P(a)) = 0$ can be rewritten as \[ \sep_x^\Delta P (a) \cdot \delta_1^{r+1}a = q(a), \quad \text{where } q\in R[\delta^\alpha x \mid \alpha <_{\grlex} (r + 1, 0, \ldots, 0)]. \] Therefore, if $\sep_x^\Delta P (a)$ is invertible in $A$, we have~$\delta_1^{r+1}a = \dfrac{q(a)}{\sep_x^\Delta(P) (a)}$.
\end{remark} \begin{lemma} \label{lemmacoef} Let $R$ be a $\Delta$-ring and let $A$ be a $\Delta$-algebra over $R$. Let $A$ be $\Delta$-generated over $R$ by elements $a_1, \ldots, a_n$ that are $\Delta$-algebraic over $R$. Then there exists an~invertible $\mathbb{Z}$-linear change of derivations transforming~$\Delta$ to~$\Delta^\ast$ such that~$a_1,\ldots,a_n$ are~$\Delta^\ast$-integral over~$R$. \end{lemma} \begin{proof} Fix $1 \leqslant i \leqslant n$. Since $a_i$ is $\Delta$-algebraic over $R$, there exists nonzero $f_i \in R[\Delta^\infty x]$ such that $f_i(a_i) = 0$. We will choose this $f_i$ so that its highest (w.r.t.~\eqref{eq:ord}) derivative is minimal and, among such polynomials, the degree is minimal. We will call such $f_i$ a \emph{minimal} polynomial for~$a_i$. We introduce variables~$\lambda_2, \ldots, \lambda_m$ algebraically independent over~$A$ and extend the derivations from~$A$ to~$A[\lambda_2,\ldots,\lambda_m]$ by $\delta_i\lambda_j = 0$ for all~$i=1,\ldots,m$ and $j=2,\ldots,m$. Consider a set of derivations $D := \{ \dd_1, \dd_2, \ldots, \dd_m\}$ defined by \[ \dd_1 := \delta_1, \quad \dd_j := \delta_j + \lambda_j \delta_1 \text{ for } j = 2, \ldots, m. \] Consider any $1 \leqslant i \leqslant n$. We rewrite $f_i$ in terms of $D$ replacing $\delta_1$ with $\dd_1$ and $\delta_j$ with $\dd_j - \lambda_j \dd_1$ for $j = 2, \ldots, m$. We denote the order of the highest derivative appearing in $f_i$ by $r_i$ and the partial derivative $\frac{\partial}{\partial (\dd_1^{r_i}x)}f_i$ by $s_i$. We will show that $s_i(a_i) \neq 0$. We write \[ s_i(x) = \dfrac{\partial f_i}{\partial (\dd_1^{r_i} x)} = \sum\limits_{q_1 + \ldots + q_m = r_i} (-1)^{q_2 + \ldots + q_m}\lambda_2^{q_2}\ldots\lambda_m^{q_m}\dfrac{\partial f_i}{\partial(\delta_1^{q_1}\ldots \delta_m^{q_m}x)}. \] Due to the minimality of $f_i$ as a vanishing polynomial of $a_i$ and the algebraic independence of~$\lambda_j$, the latter expression does not vanish at $x = a_i$. So, $s_i(a_i) \neq 0$.
Since, for every $1 \leqslant i \leqslant n$, $s_i(a_i)$ is a nonzero polynomial in $\lambda_2, \ldots, \lambda_m$ over $A$, it is possible to choose the values~$\lambda^\ast_2, \ldots, \lambda_m^\ast \in \mathbb{Z} \subset R$ so that none of the~$s_i(a_i)$ vanishes at $(\lambda_2^\ast, \ldots, \lambda_m^\ast)$. Let $\Delta^\ast = \{\delta_1^\ast,\ldots,\delta_m^\ast\}$ be the result of plugging these values into $D$. Then we have $\operatorname{sep}_x^{\Delta^\ast}f_i(a_i) = \dfrac{\partial f_i}{\partial ((\delta_1^{\ast})^{r_i} x)}(a_i) \neq 0$ for every $i = 1, \ldots, n$, so $a_1, \ldots, a_n$ are $\Delta^\ast$-integral over $R$. \end{proof} \begin{lemma} \label{lemmafinite} Let $R$ be a $\Delta$-ring and let $A$ be a $\Delta$-algebra over $R$. Assume that $A$ is a domain and is $\Delta$-generated over $R$ by elements $a_1, \ldots, a_n$ that are $\Delta$-integral over $R$. Then there exists~$a\in A$ such that~$A[1/a]$ is finitely $\Delta_0$-generated over~$R$. \end{lemma} \begin{proof} We will prove the lemma by induction on the number~$n$ of~$\Delta$-generators of~$A$. If~$n=0$, then~$A=R$ and~$A$ is clearly finitely $\Delta_0$-generated. Assume that the lemma is proved for all extensions $\Delta$-generated by fewer than~$n$ elements. Applying the induction hypothesis to the $\Delta$-algebra~$A_0 := R[\Delta^\infty a_1, \ldots, \Delta^\infty a_{n - 1}]$, we obtain~$b_1 \in A_0$ such that~$A_0[1/b_1]$ is a finitely $\Delta_0$-generated~$R$-algebra. Since $a_n$ is $\Delta$-integral over $R$, there exists~$P(x)\in R[\Delta^\infty x]$ such that $P(a_n) = 0$, the highest derivative in~$P$ is~$\delta_1^r x$, and~$b_2:=\sep^\Delta_x(P)(a_n) \neq 0$. We claim that \begin{equation}\label{eq:fingen} A\left[ \frac{1}{b_1b_2}\right] = A_0\left[ \frac{1}{b_1}, \Delta_0^\infty \left(\frac{1}{b_2}\right), \Delta_0^{\infty} (\delta_1^{\leqslant r}a_n)\right], \end{equation} where $\delta_1^{\leqslant r} a_n := \{a_n, \delta_1 a_n, \ldots, \delta_1^{r}a_n\}$.
Since $A_0[1/b_1]$ is finitely $\Delta_0$-generated over~$R$, this would imply that~$A[1/(b_1b_2)]$ is finitely $\Delta_0$-generated over~$R$ as~well. In order to prove~\eqref{eq:fingen}, it is sufficient to show that the images of~$\{\delta_1^{\leqslant r} a_n, 1/b_2\}$ under $\delta_1$ belong to $B := A_0[1/b_1, \Delta_0^\infty (1/b_2), \Delta_0^\infty (\delta_1^{\leqslant r}a_n)]$. This is clear for~$\delta_1^{< r}a_n$, so it remains to show that~$\delta_1^{r+1}a_n, \delta_1(1/b_2) \in B$: \begin{itemize} \item For~$\delta_1^{r+1}a_n$ we use Remark~\ref{eqremark} to write \[ \delta_1^{r+1}a_n = \dfrac{q(a_n)}{\operatorname{sep}^{\Delta}_x (P) (a_n)} = \dfrac{q(a_n)}{b_2} \in B, \text{ where } q \in R[\Delta_0^\infty (\delta_1^{\leqslant r}x)]. \] \item For~$\delta_1(1/b_2)$, we observe that \[ \delta_1 \left(\dfrac{1}{b_2}\right) \in \dfrac{1}{b_2^2}A_0[\Delta_0^\infty (\delta_1^{\leqslant r} a_n)] \subseteq B.\qedhere \] \end{itemize} \end{proof} \subsection{Proof of Seidenberg's Theorem}\label{sec:seidenberg} \begin{lemma} \label{lemmamer} Let~$W\subseteq \mathbb{C}^m$ be a domain and~$K$ be a countably $\Delta$-generated subfield of~$\mathcal{M}er_m(W)$. Then there exist~$c\in \mathbb{C}$ and a~domain~$V \subseteq W \cap \{z_1 = c\}$ such that, for every $f \in K$, $f|_{\{z_1 = c\}}$ is a well-defined element of $\mathcal{M}er_{m - 1}(V)$ and, therefore, the restriction to $\{z_1 = c\}$ defines a $\Delta_0$-embedding $K \to \mathcal{M}er_{m - 1}(V)$. \end{lemma} \begin{proof} Let~$K$ be $\Delta$-generated by~$\{b_i\}_{i=1}^\infty$. For every $i \geqslant 1$, denote by~$S_i$ the set of singularities of~$b_i$. By definition, $S_i$ is a nowhere dense subset of~$\mathbb{C}^m$. Therefore, the union~$S = \bigcup\limits_{i=1}^{\infty} S_i$ is~a~meagre set. As~$W$ is a domain in~$\mathbb{C}^m$, the difference~$W \setminus S$ is non-empty. Choose any point $(w_1, \ldots, w_m) \in W \setminus S$.
Then all the restrictions of~$b_i$ to~$\{z_1 = w_1\}$ are holomorphic at~$(w_2,\ldots,w_m)$ and meromorphic in some vicinity~$V \subseteq W\cap \{z_1 = w_1\}$ of $(w_2,\ldots,w_m)$. Since every element $f \in K$ is a rational function in the $b_i$'s and their partial derivatives, its restriction to $\{z_1 = w_1\}$ is also a well-defined meromorphic function on $V$. \end{proof} \begin{lemma}\label{lemmatransc} Let $U \subseteq \mathbb{C}^m$ be a domain. For every countably $\Delta$-generated $\Delta$-field $K \subseteq \mathcal{M}er_m(U)$, $\operatorname{difftrdeg}_K^\Delta \mathcal{M}er_m(U)$ is infinite. \end{lemma} \begin{proof} Suppose~$\operatorname{difftrdeg}_K^\Delta \mathcal{M}er_m(U) = l < \infty$. Let~$\tau_1,\ldots,\tau_l$ be a~$\Delta$-transcendence basis of~$\mathcal{M}er_m(U)$ over~$K$. Let~$L$ be the field~$\Delta$-generated by~$K$ and~$\tau_1,\ldots,\tau_l$. Note that~$L$ is still at most countably~$\Delta$-generated and~$\mathcal{M}er_m(U)$ is $\Delta$-algebraic over~$L$. Choose an arbitrary point~$c \in U$ and denote by~$F$ the subfield of~$\mathbb{C}$ generated by the values at~$c$ of those elements of~$L$ that are holomorphic at~$c$. Clearly~$F$ is at most countably generated and the transcendence degree of~$\mathbb{C}$ over~$F$ is infinite. Now any function in~$\mathcal{M}er_m(U)$ that is holomorphic at~$c$ and whose derivatives take values at~$c$ that are algebraically independent over~$F$ is~$\Delta$-transcendental over~$L$, which contradicts the assumption that~$\mathcal{M}er_m(U)$ is $\Delta$-algebraic over~$L$. \end{proof} \begin{notation} Let~$A$ be a~$\Delta$-algebra without zero divisors. By~$\operatorname{Frac}(A)$ we denote the field of fractions of~$A$. \end{notation} We are now ready to prove Seidenberg's theorem.
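Before giving the proof, we illustrate on a toy example (added for illustration only and not used in the sequel) how a Taylor homomorphism realizes an abstract $\Delta$-algebra by concrete analytic functions. Let $m = 1$, let $K = \mathbb{Q}$ carry the trivial derivation, and let $A$ be the $\Delta$-algebra $\Delta$-generated over $\mathbb{Q}$ by an element $x$ subject to $\delta_1 x = x$, so that $A \cong \mathbb{Q}[x]$ with $\delta_1(x^k) = k x^k$. For the evaluation homomorphism $\psi\colon A \to \mathbb{C}$ sending $x$ to $1$ and the point $w = 0$, Definition~\ref{deftaylor} gives \[ T_{\psi, 0}(x) = \sum\limits_{k = 0}^{\infty} \psi(\delta_1^k x)\dfrac{z_1^k}{k!} = \sum\limits_{k = 0}^{\infty} \dfrac{z_1^k}{k!} = e^{z_1}, \] so $x \mapsto e^{z_1}$ is a $\Delta$-embedding of $A$ (and hence of $\operatorname{Frac}(A)$) into $\mathcal{M}er_1(\mathbb{C})$: the abstract relation $\delta_1 x = x$ is realized by an entire function. The proof below produces such realizations in general by combining Taylor homomorphisms with the Cauchy-Kovalevskaya theorem.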
\setcounter{theorem}{0} \begin{theorem}[Seidenberg's embedding theorem]\label{main} Let $W \subseteq \mathbb{C}^m$ be a domain and let~$K \subseteq \mathcal{M}er_m(W)$ be an at most countably $\Delta$-generated~$\Delta$-field (over $\mathbb{Q}$). Let $L \supset K$ be a $\Delta$-field finitely $\Delta$-generated over~$K$. Then there exist a domain $U \subseteq W$ and a~$\Delta$-embedding~$f\colon L \rightarrow \mathcal{M}er_m(U)$ over $K$. \end{theorem} \begin{proof} We will first reduce the theorem to the case when $L$ is $\Delta$-algebraic over $K$. Assume that it is not, and let $u_1, \ldots, u_\ell$ be a $\Delta$-transcendence basis of $L$ over $K$. Lemma~\ref{lemmatransc} implies that there exist functions $f_1, \ldots, f_\ell \in \mathcal{M}er_m(W)$ that are $\Delta$-independent over $K$. Let $\Tilde{K}$ be the $\Delta$-field generated by $K$ and $f_1, \ldots, f_\ell$. The embedding $K \to L$ can be extended to an embedding $\Tilde{K} \to L$ by sending $f_i$ to $u_i$ for every $1\leqslant i \leqslant \ell$. Therefore, by replacing $K$ with $\Tilde{K}$ we will further assume that $L$ is $\Delta$-algebraic over $K$. We~will prove the theorem by induction on~the number~$m$ of~derivations. If~$m=0$, then~$L$ can be embedded into~$\mathbb{C}$ by \cite[Chapter V, Theorem 2.8]{Lang}. Suppose~$m > 0$. Let~$a_1, \ldots, a_n$ be a set of $\Delta$-generators of $L$ over $K$. Let $A := K[\Delta^\infty a_1, \ldots, \Delta^\infty a_n]$. Since~$A$ is $\Delta$-algebraic over $K$, by Lemma~\ref{lemmacoef}, there exists an invertible $m\times m$ matrix $M$ over $\mathbb{Q}$ such that, for a new set of derivations \[ \Delta^\ast = \{\delta_1^\ast, \ldots, \delta_m^\ast\} := M\Delta, \] $a_1,\ldots,a_n$ are~$\Delta^\ast$-integral over~$K$. Due to the invertibility of $M$, every $\Delta^\ast$-embedding $L \to \mathcal{M}er_m(U)$ over $K$ yields a $\Delta$-embedding.
Therefore, by changing the coordinates in the space $\mathbb{C}^m$ from $(z_1, \ldots, z_m)$ to $M^{-1}(z_1, \ldots, z_m)$, we can further assume that $\Delta = \Delta^\ast$, so $a_1, \ldots, a_n$ are $\Delta$-integral over $K$. Lemma \ref{lemmafinite} implies that there exists~$a \in A$ such that~$B:= A[a^{-1}]$ is finitely $\Delta_0$-generated. By the $\Delta$-integrality of~$a_1,\ldots,a_n$ and Remark~\ref{eqremark}, for every $1 \leqslant i \leqslant n$, there exist a positive integer $r_i$ and a rational function $g_i \in K(\delta^\alpha y \mid \alpha <_{\grlex} (r_i, 0, \ldots, 0))$ such that $a_i$ satisfies \begin{equation}\label{sys1} \delta_1^{r_i} a_i = g_i(a_i). \end{equation} Since $K$ is at most countably $\Delta$-generated, Lemma \ref{lemmamer} implies that there exist~$w_1 \in \mathbb{C}$ and~$V\subseteq W \cap \{z_1 = w_1\} \subseteq \mathbb{C}^{m-1}$ such that the restriction to $\{z_1 = w_1\}$ induces a $\Delta_0$-embedding~$\rho\colon K \to \mathcal{M}er_{m-1}(V)$. We apply the induction hypothesis to the $\Delta_0$-fields $\rho(K)$ and $\Frac(B) = L$. This yields a $\Delta_0$-embedding~$h\colon L \rightarrow \mathcal{M}er_{m - 1} (\widetilde{V})$ for some~$\widetilde{V} \subseteq V$. Choose a point~$v = (w_2,\ldots,w_m)\in \widetilde{V}$ such that all the $h(a_i)$ are holomorphic at~$v$ and all the~$g_i(a_i)$ are holomorphic at~$w = (w_1,w_2,\ldots,w_m) \in W$. Consider the Taylor homomorphism $T_{h, w}\colon A\rightarrow \mathcal{M}er_{m-1}(\widetilde{V})[[z_1]]$ defined as follows (see Definition~\ref{deftaylor}): \[ T_{h, w}(a) := \sum\limits_{k=0}^\infty h(\delta_1^k a)\dfrac{(z_1-w_1)^k}{k!} \quad\text{ for every }a \in A. \] Note that~$T_{h, w}$ is a~$\Delta$-homomorphism. Fix $1 \leqslant i \leqslant n$.
Since $a_i$ is a solution of~\eqref{sys1} and $T_{h, w}$ is a $\Delta$-homomorphism, $T_{h, w}(a_i)$ is a formal power series solution of~$\delta_1^{r_i}y = g_i(y)$ corresponding to the holomorphic initial conditions \[ y|_{z_1 = w_1} = h(a_i),\; (\delta_1 y)|_{z_1 = w_1} = h(\delta_1 a_i),\; \ldots, \; (\delta_1^{r_i - 1} y)|_{z_1 = w_1} = h(\delta_1^{r_i - 1} a_i). \] By the Cauchy-Kovalevskaya theorem, this solution is holomorphic in some vicinity~$U_i$ of~$w$. We set $U := \bigcap_{i = 1}^n U_i$. Thus,~$T_{h, w}$ induces a non-trivial~$\Delta$-homomorphism from~$A$ to~$\mathcal{M}er_m(U)$. Since~$h$ is injective,~$T_{h, w}$ is also injective, so it can be extended to a $\Delta$-embedding $L \to \mathcal{M}er_m(U)$ over $K$. \end{proof} \subsection{Proof of Ritt's theorem}\label{sec:ritt} \begin{definition}[Differentially simple rings] A~$\Delta$-ring~$R$ is called~\emph{$\Delta$-simple} if it contains no nonzero proper~$\Delta$-ideals. \end{definition} \begin{lemma} \label{lemmasimple} Let~$A$ be a $\Delta$-simple $\Delta$-algebra $\Delta$-generated by~$a_1,\ldots,a_n$ over a $\Delta$-field $K$. Then $A$ does not contain zero divisors. Furthermore, assume that there exists an integer $\ell$ such that \begin{itemize} \item $a_1, \ldots, a_\ell$ form a $\Delta$-transcendence basis of $A$ over $K$; \item $a_{\ell + 1}, \ldots, a_n$ are $\Delta$-integral over $K[\Delta^\infty a_1, \ldots, \Delta^\infty a_\ell]$.
\end{itemize} Then~$A$ has finite $\Delta_0$-transcendence degree over~$K$. In particular, $\ell = 0$. \end{lemma} \begin{proof} Consider any non-zero (not necessarily differential) homomorphism~$\psi \colon A \rightarrow F$ ($F\supseteq K$ is a field) and the corresponding Taylor homomorphism~$T_{\psi, 0}\colon A \rightarrow F[[z_1,\ldots,z_m]]$, which is a~$\Delta$-homomorphism. Since~$A$ is~$\Delta$-simple, the kernel of~$T_{\psi, 0}$ is zero. Therefore,~$T_{\psi, 0}$ is a~$\Delta$-embedding of~$A$ into~$F[[z_1, \ldots, z_m]]$. Since~the latter does not contain zero divisors, the same is true for~$A$. Assume that~$A$ has infinite $\Delta_0$-transcendence degree over~$K$, that is, $\ell > 0$. Since $a_{\ell + 1}, \ldots, a_n$ are $\Delta$-integral over $R := K[\Delta^\infty a_1, \ldots, \Delta^\infty a_\ell]$, Lemma~\ref{lemmafinite} implies that there exists an element~$b\in A$ such that~$A_0 := A[1/b]$ is a finitely $\Delta_0$-generated algebra over~$R$. Note that~$A_0$ is also $\Delta$-simple. Let $A_0 = R[\Delta_0^\infty b_1, \ldots, \Delta_0^\infty b_s]$. For every $j \geqslant 0$, consider the $\Delta_0$-algebra \[ B_j := K[\Delta_0^\infty (\delta_1^{(<j)}a_1), \ldots, \Delta_0^\infty (\delta_1^{(<j)}a_\ell), \Delta_0^\infty b_1,\ldots, \Delta_0^\infty b_s]. \] For every~$j \geqslant 0$, we have \[ j\ell \leqslant \operatorname{difftrdeg}^{\Delta_0}_K B_j \leqslant j\ell+s. \] This inequality implies that there exists~$N$ such that, for every~$j>N$,~$\delta_1^j a_1,\ldots,\delta_1^j a_\ell$ are~$\Delta_0$-independent over~$B_j$. Consider any non-zero $\Delta_0$-homomorphism \[ \Tilde{\varphi}\colon B_N \rightarrow L, \] where~$L \supseteq K$ is a $\Delta_0$-field. Due to the~$\Delta_0$-independence of the remaining elements~$\delta^j_1 a_i$ for~$1\leqslant i\leqslant \ell$ and $j > N$, $\Tilde{\varphi}$ can be extended to a~homomorphism~$\varphi\colon A_0 \rightarrow L$ so that~$\varphi(\delta_1^j a_i) = 0$ for every~$1\leqslant i \leqslant \ell$ and~$j > N$.
Consider the Taylor homomorphism $T_{\varphi, 0} \colon A_0 \to L[[z]]$ with respect to $\delta_1$. Since $\varphi$ is a $\Delta_0$-homomorphism, $T_{\varphi, 0}$ is a $\Delta$-homomorphism. It remains to observe that the kernel of~$T_{\varphi, 0}$ contains~$\delta_1^{N+1}a_1,\ldots,\delta_1^{N+1}a_\ell$, contradicting the fact that~$A_0$ is~$\Delta$-simple. \end{proof} \begin{theorem}[Ritt's theorem of zeroes]\label{Ritt} Let $W \subseteq \mathbb{C}^m$ be a domain and let~$K \subseteq \mathcal{M}er_m(W)$ be a~$\Delta$-field. Let~$A$ be a finitely $\Delta$-generated~$\Delta$-algebra over~$K$. Then there exists a non-trivial $\Delta$-homomorphism~$f: A \rightarrow \mathcal{M}er_m(U)$ for some domain ${U \subseteq W \subseteq \mathbb{C}^m}$ such that $f(a)$ is~$\Delta$-algebraic over~$K$ for any~$a\in A$. \end{theorem} \begin{proof} We can represent $A$ as $A = R / J$, where~$R := K[\Delta^\infty x_1,\ldots,\Delta^\infty x_n]$ and $J \subseteq R$ is a differential ideal. Since $R$ is a countable-dimensional $K$-space, $J$ can be generated by an at most countable set of generators. Pick any such set and denote by $K_0$ the $\Delta$-field generated by the coefficients of the generators. Then $K_0$ is countably $\Delta$-generated. Let $R_0 := K_0[\Delta^\infty x_1,\ldots,\Delta^\infty x_n]$. Since $J$ is defined over $K_0$, for $A_0 := R_0/(J \cap R_0)$, we have~$A = K \otimes_{K_0} A_0$. Let~$I$ be a maximal differential ideal of~$A_0$, and consider the canonical projection~$\pi \colon A_0\rightarrow A_0/I$. Let~$a_1,\ldots,a_n$ be a set of $\Delta$-generators of $A_0 / I$. Since~$A_0/I$ is differentially simple, by Lemma~\ref{lemmasimple}, $A_0/I$ does not have zero divisors and is $\Delta$-algebraic over $K_0$. We apply Theorem~\ref{main} to the $\Delta$-fields $K_0 \subseteq \Frac(A_0 / I)$ and obtain a $\Delta$-embedding $h \colon A_0/I \rightarrow \mathcal{M}er_m(U)$ over $K_0$.
Since~$A_0/I$ is~$\Delta$-algebraic over~$K_0$,~$h(a)$ is~also~$\Delta$-algebraic over~$K_0$ for any~$a \in A_0/I$. Let~$f_0 := h\circ\pi$; then $f_0(a)$ is~$\Delta$-algebraic over~$K_0$ for any~$a\in A_0$. Since~$K \subseteq \mathcal{M}er_m(U)$, we can construct a nontrivial $\Delta$-homomorphism $f \colon K\otimes_{K_0} A_0 \to \mathcal{M}er_m(U)$ as the tensor product of the embedding $K \to \mathcal{M}er_m(U)$ and $f_0$. The $\Delta$-algebraicity of the image of $f_0$ over $K_0$ implies the $\Delta$-algebraicity of the image of $f$ over $K$. \end{proof} \section{Remarks on the analytic spectrum}\label{sec:spec} \begin{definition}[Analytic spectrum] Consider $\mathbb{C}$ as a $\Delta$-field with the zero derivations. Let $A$ be a finitely $\Delta$-generated $\Delta$-algebra over $\mathbb{C}$. A homomorphism (not necessarily differential) $\psi \colon A \to \mathbb{C}$ of $\mathbb{C}$-algebras is called \emph{analytic} if, for every $a \in A$, the formal power series $T_{\psi, 0}(a)$ has a positive radius of convergence. The set of the kernels of analytic $\mathbb{C}$-homomorphisms is called the \emph{analytic spectrum of}~$A$. \end{definition} Corollary~\ref{cor:holomorphic} implies the following. \begin{corollary} Let~$A$ be a finitely $\Delta$-generated~$\Delta$-algebra with identity over~$\mathbb{C}$. Then the analytic spectrum of~$A$ is a Zariski-dense subset of its maximal spectrum. \end{corollary} \begin{proof} Assume that the analytic spectrum of~$A$ is not Zariski-dense in $\operatorname{spec} A$. Then it is contained in a proper closed subset of the form~$F = \{M \in \operatorname{spec}A | a\in M\}$ for some non-nilpotent~$a\in A$. Consider the localization~$A[a^{-1}]$ and a non-trivial~$\Delta$-homomorphism~$f\colon A[a^{-1}] \rightarrow \mathcal{O}_m(U)$ given by Corollary~\ref{cor:holomorphic}. Then~$f(a)\neq 0$.
We fix $u \in U$ such that $f(a)(u) \neq 0$ and consider the homomorphism~$g\colon A \rightarrow \mathbb{C}$ defined by $g(b) := f(b)(u)$ for every~$b\in A$. Note that~$I := \operatorname{Ker}(g)$ is an analytic ideal (for every $b \in A$, the series $T_{g, 0}(b)$ is the Taylor expansion of the holomorphic function $f(b)$ at $u$, hence convergent) such that~$a\not\in I$, so we have arrived at a contradiction. \end{proof} \subsection*{Acknowledgements} GP was partially supported by NSF grants DMS-1853482, DMS-1760448, and DMS-1853650 and by the Paris Ile-de-France region. \bibliographystyle{abbrvnat}
\section{Introduction} Tl-doped PbTe (Pb$_{1-x}$Tl$_{x}$Te) is a degenerate semiconductor with a small carrier concentration of $\sim10^{20}$ holes/cm$^{3}$ or less. However, for Tl concentrations $x$ beyond a critical value $x_c \sim 0.3 \%$ it is observed to superconduct,\cite{Matsushita_2005} with a maximum $T_c$ of 1.5 K for the highest Tl concentrations,\cite{Nemov_1998} comparable to more metallic systems. Furthermore, thallium is the only impurity known to cause superconductivity in PbTe, even though other impurities are able to dope to similar carrier concentrations and similar densities of states. Given the anomalously high maximum $T_c$ value, combined with the unusual concentration dependence, there has been considerable discussion as to the role that the Tl impurities play in the superconductivity of this material.\cite{Moizhes_1983, Schuttler_1989, Hirsch_1985, Krasinkova_1991, Dzero_2005} PbTe has a rocksalt structure and has been treated with reasonable success using ionic models (i.e., Pb$^{2+}$Te$^{2-}$).\cite{Weiser_1981} Thallium impurities substitute on the Pb site, and calculations have shown that Tl$^{+}$ is more stable than Tl$^{3+}$ in the PbTe lattice.\cite{Weiser_1981} This implies that Tl impurities will act as acceptors, and indeed Hall measurements confirm that for small doping levels the hole concentration increases by one hole for every Tl impurity.\cite{Kaidanov_1985, Dzero_2005} Carrier freeze-out is not observed to the lowest temperatures, indicating that the dopant atoms do not behave as hydrogen-like impurities due to the large static dielectric constant of the host PbTe.\cite{Nimtz} However, for concentrations beyond a characteristic value the Hall number is observed to rise at a much slower rate with $x$ and does not increase beyond $\sim 10^{20}$ cm$^{-3}$,\cite{Matsushita_2005, paper3} suggesting that the additional impurities act in a self-compensating manner. 
Significantly, within the uncertainty of these measurements, this characteristic concentration is the same as $x_c$, the critical concentration required for superconductivity. It is remarkable that as $x$ is increased beyond $x_c$, $T_c$ rises linearly over two orders of magnitude from 15 mK for $x \sim 0.3\%$ to 1.5 K for $x \sim 1.5\%$, while the hole concentration varies by less than a factor of two. This behavior, combined with the absence of any detectable magnetic impurity contribution to the diamagnetic susceptibility, has been interpreted as evidence that the Tl impurities are present in a mixed valence state comprising both Tl$^{+}$ and Tl$^{3+}$ valences for $x > x_c$. We recently argued that anomalies in the normal state resistivity of Tl-doped PbTe, which are present only for superconducting samples ($x>x_c$) and not for nonsuperconducting samples ($x<x_c$),\cite{Matsushita_2005, Fisher_2005} might be associated with a charge Kondo effect involving these degenerate Tl valence states. Within such a scenario, the quantum valence fluctuations associated with the Tl impurities also provide a possible pairing mechanism for holes in the valence band of the host PbTe.\cite{Dzero_2005} In light of the anomalous behavior of Tl-doped PbTe we have investigated the superconducting properties of single crystal samples for a range of Tl concentrations up to the solubility limit of approximately 1.5$\%$. In this paper, we present measurements of the heat capacity and $H_{c2}$ and the resulting estimates for coherence length, penetration depth, Ginzburg-Landau parameter, and critical fields. Our measurements show that the material is a Type II, weak-coupled BCS superconductor in the dirty limit. We discuss implications of these observations for the charge Kondo model. \section{\label{sec:ii}Sample preparation and experimental methods} Single crystals of Pb$_{1-x}$Tl$_x$Te were grown by an unseeded physical vapor transport method.
Polycrystalline source material was synthesized by combining PbTe, Te, and either Tl$_2$Te or elemental Tl in appropriate ratios and sintering at 600$^\circ$C, regrinding between successive sintering steps. For the crystal growth, broken pieces of source material were sealed in an evacuated quartz ampoule and placed in a horizontal tube furnace held at 750$^\circ$C for 7--10 days. A small temperature gradient of approximately 1--2$^\circ$C/cm allowed crystals to nucleate and grow at one or both of the cooler ends of the ampoule. Each vapor growth produced several well-formed crystals up to a few millimeters in size that could be cut and cleaved to prepare bars for thermodynamic and transport measurements. The thallium content was measured by Electron Microprobe Analysis (EMPA) using PbTe, Te, and Tl$_2$Te standards. Errors in Tl content $x$ shown in subsequent figures reflect the uncertainty of the microprobe method for such low dopant concentrations. The Tl concentration for individual samples was observed to be homogeneous within the uncertainty of this measurement. The heat capacity of single crystal samples was measured using a thermal relaxation technique in a Quantum Design Physical Property Measurement System. Crystals with a mass of approximately 10--15 mg were prepared with a flat surface for good thermal contact to the sample platform. Measurements were made in zero applied field and in a field $H > H_{c2}$ (typically $H$ = 0.5--1 T). The field was oriented at an arbitrary angle to the crystal axes, depending on the orientation of the flat surface. The upper critical field $H_{c2}$ was measured for several values of the Tl content $x$ by following resistive transitions as a function of temperature for different applied magnetic fields. 
The resistivity was measured using geometric bars cleaved from the larger as-grown crystals, such that the current flowed along the [100] direction while the magnetic field was oriented parallel to the equivalent [001] direction. Electrical contact was made using Epotek H20E silver epoxy on sputtered or evaporated gold pads and showed typical contact resistances of 1--4 $\Omega$. Resistivity measurements were made at 16 Hz and with current densities in the range of 25 mA/cm$^2$ (corresponding to a current of 100 $\mu$A for low-temperature measurements) to 1 A/cm$^2$ at higher temperatures. To check for heating effects, resistivity data were taken for different current densities and for warming and cooling cycles for each sample. \section{\label{sec:iii}Results} \begin{figure}[!] \includegraphics{fig1.eps} \caption{\label{fig:Fig1} Heat capacity of Pb$_{1-x}$Tl$_x$Te single crystals, shown as $C_p/T$ versus $T^2$, for representative Tl concentrations. For superconducting samples, data were taken in an applied field $H > H_{c2}$. Inset shows electronic contribution $\gamma$ (left axis) and density of states at the Fermi level $N(0)$ (right axis) as a function of Tl concentration $x$. Dashed line shows values calculated from known PbTe band parameters and measured values of the Hall number, as described in the main text.} \end{figure} Heat capacity data for representative Tl concentrations are shown in Fig.~\ref{fig:Fig1} as $C_p/T$ versus $T^2$ for applied fields that totally suppress the superconductivity. For all samples there is considerable curvature in the data even at low temperatures, presumably due to the relatively low Debye temperature $\Theta_D$ of PbTe. Data were fit to $C/T = \gamma + \beta T^2 + \delta T^4$ from the base temperature (0.3 K) up to 1 K. 
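The fit just described, $C/T = \gamma + \beta T^2 + \delta T^4$ over 0.3--1 K, is an ordinary linear least-squares problem in the variable $T^2$. The following sketch illustrates the procedure; it is not the authors' analysis code, and all numerical values in it are hypothetical.

```python
import numpy as np

# Hypothetical coefficients (roughly the magnitudes discussed in the text):
# gamma in J/(mol K^2), beta in J/(mol K^4), delta in J/(mol K^6).
gamma_true, beta_true, delta_true = 1.0e-3, 8.2e-4, 1.0e-5

# Synthetic C/T data over the fit window used in the text (0.3 K to 1 K).
T = np.linspace(0.3, 1.0, 30)
C_over_T = gamma_true + beta_true * T**2 + delta_true * T**4

# Because C/T is a polynomial in x = T^2, a linear least-squares fit suffices;
# np.polyfit returns coefficients from the highest degree down.
x = T**2
delta_fit, beta_fit, gamma_fit = np.polyfit(x, C_over_T, 2)

print(gamma_fit, beta_fit, delta_fit)
```

With real (noisy) data one would weight the points and propagate the fit uncertainties into $\gamma$ and $\beta$, but the structure of the fit is unchanged.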
From $\beta=(12\pi^4/5)NR\Theta_D^{-3}$, where $R=8.314$ J/(mol K) and $N=2$ for PbTe, we estimate $\Theta_D = 168 \pm 4$ K for $x=0\%$, which is consistent with previous reports.\cite{Ravich,Chernik_1981a} Thallium substitution does not substantially affect this value but causes a clear increase in the electronic contribution $\gamma$, suggesting a rapid rise in the density of states with $x$. Values of $\gamma$ obtained from the above fits are shown in the inset to Figure 1 as a function of Tl concentration $x$ and are in broad agreement with previously published values for polycrystalline samples.\cite{Chernik_1981b} \begin{figure}[t!] \includegraphics{fig2.eps} \caption{\label{fig:Fig2}$C_p$ versus $T$ in zero applied field showing the superconducting anomaly for several Tl concentrations $x$. Dashed lines show the geometric construction used to obtain $\Delta C$ and $T_c$ for $x=1.4\%$. Inset shows $\Delta C$/$\gamma T_c$ as a function of $x$. Uncertainty in $\Delta C$/$\gamma T_c$ is derived principally from errors in the geometric construction used to estimate $\Delta C$.} \end{figure} Heat capacity data in zero field are shown in Figure 2 for representative Tl concentrations with $T_c$ above 0.3 K. $T_c$ values were obtained from the midpoint of the heat capacity anomaly and agree well with data obtained from resistive transitions (Figure 3). The jump at $T_c$, $\Delta C$, can be estimated using a standard geometric construction extrapolating normal state and superconducting state behaviors towards $T_c$, as indicated by dashed lines for $x$ = 1.4$\%$ in Figure 2. Resulting estimates of $\Delta C / \gamma T_c$ are shown in the inset to Figure 2 as a function of Tl concentration $x$. The value for the highest Tl concentration, $x$ = 1.4$\%$, is $\Delta C/\gamma T_c$ = 1.55 $\pm$ 0.12, which is close to the weak coupling BCS result of 1.43.
As $x$ is reduced, the data show a small but significant systematic variation, tending towards a smaller value for smaller Tl concentrations. The smallest value, 1.00 $\pm$ 0.20, is recorded for $x=0.8\%$, which is the lowest Tl concentration for which we can confidently extract $\Delta C$ given the base temperature of our instrument. \begin{figure} \includegraphics{fig3.eps} \caption{\label{fig:Tcvx}Superconducting transition temperatures $T_c$ as a function of Tl concentration $x$. Solid symbols are obtained from heat capacity measurements, and open symbols are from resistivity data. Line is drawn to guide the eye.} \end{figure} The upper critical field $H_{c2}(T)$ was determined from resistivity measurements for several Tl concentrations. Representative data, showing the uniform suppression of the superconducting transition in an applied field, are shown in Fig.~\ref{fig:trans_vs_field} for $x$ = 1.4$\%$. An estimate of $T_c$ was obtained from the midpoint of the resistive transition for each applied field. Resulting $H_{c2}$ curves are shown in Fig.~\ref{fig:hc2} for $x$ = 1.1$\%$ and 1.4$\%$. Error bars indicate the width of the superconducting transition measured by the difference in temperature between $10\%$ and $90\%$ of the resistive transition. The upper critical field at zero temperature $H_{c2}(T=0)$ can be estimated from these data using the Werthamer-Helfand-Hohenberg approximation \cite{Werthamer_1966} \begin{equation} H_{c2}(0)=0.69(dH_{c2}/dT)_{T_c}T_c. \end{equation} Resulting values for $x$ = 1.1$\%$ and 1.4$\%$ are approximately 3900 Oe and 6000 Oe respectively (Table~\ref{tab:table1}) and are consistent with reasonable extrapolations of the lowest temperature data in Fig. 5. The errors in $H_{c2}(0)$ listed in Table~\ref{tab:table1} are estimated from the difference between the above approximation and a parabolic fit to the observed data. 
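Both numerical estimates above are straightforward to reproduce. The sketch below (not the paper's analysis code) recovers $\Theta_D$ from the cubic fit coefficient and applies the WHH formula; the value of $\beta$ and the $H_{c2}$ slope are assumed inputs back-computed from the quoted results rather than measured numbers.

```python
import math

# Debye temperature from the cubic fit coefficient beta in
# C/T = gamma + beta*T^2 + delta*T^4, using
# beta = (12*pi^4/5)*N*R/Theta_D^3 with N = 2 atoms per formula unit.
# beta = 0.82 mJ mol^-1 K^-4 is an assumed value back-computed from the
# quoted Theta_D, not a number read off the fits.
R = 8.314          # J mol^-1 K^-1
N = 2              # atoms per PbTe formula unit
beta = 0.82e-3     # J mol^-1 K^-4 (assumed)
theta_D = (12 * math.pi**4 * N * R / (5 * beta)) ** (1.0 / 3.0)
print(f"Theta_D ~ {theta_D:.0f} K")   # ~168 K

# Werthamer-Helfand-Hohenberg estimate Hc2(0) = 0.69*(dHc2/dT)_Tc * Tc.
# The slope is a hypothetical value chosen to reproduce the quoted
# Hc2(0) for x = 1.4%; the measured slopes are not listed in the text.
Tc = 1.38          # K, for x = 1.4%
slope = 6300.0     # Oe/K (hypothetical magnitude of dHc2/dT at Tc)
Hc2_0 = 0.69 * slope * Tc
print(f"Hc2(0) ~ {Hc2_0:.0f} Oe")     # ~6000 Oe
```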
\begin{figure} \includegraphics{fig4.eps} \caption{\label{fig:trans_vs_field}Representative resistivity data for $x=1.4\%$, showing the superconducting transition as a function of temperature for different magnetic fields (0, 108, 1083, 2166, 3791, and 4875 Oe) applied parallel to the [001] direction.} \end{figure} \begin{figure} \includegraphics{fig5.eps} \caption{\label{fig:hc2}Temperature dependence of $H_{c2}$ for $x=1.1\%$ and $1.4\%$ with the field parallel to the [001] direction. Error bars were determined as described in the main text. Lines are drawn to guide the eye.} \end{figure} Superconducting parameters such as the coherence length and penetration depth are dependent on the electron mean free path $l=v_F\tau=v_F\mu m^*/e$, where $v_F$ is the Fermi velocity and $\mu$ is the hole mobility. From Hall effect measurements at 1.8 K, we find that the Hall number $p_H$ is $\sim9 \times 10^{19}$ cm$^{-3}$ for $x = 1.1\%$ and $\sim1 \times 10^{20}$ cm$^{-3}$ for $x = 1.4\%$.\cite{paper3} Combining these data with measured values of the residual resistivity,\cite{Matsushita_2005} we find the hole mobility $\mu$ is approximately 100 cm$^2$V$^{-1}$s$^{-1}$ for $x = 1.1\%$ and 60 cm$^2$V$^{-1}$s$^{-1}$ for $x = 1.4\%$. Taking into account the existence of both light and heavy holes at the $L$ and $\Sigma$ points in the Brillouin zone respectively,\footnote{\label{test}The valence band maximum is centered at the $L$ points of the Brillouin zone and consists of relatively light holes characterized by $m_l =0.31m_0$ and $m_t =0.022m_0$ [R. Dornhaus, G. Nimtz, and B. Schlicht, {\it Narrow-Gap Semiconductors}, vol. 98 of {\it Springer Tracts in Modern Physics} (Springer-Verlag, New York, 1983)]. Small deviations from parabolicity can be safely ignored in estimating approximate values for the Fermi level. Somewhat less is known of the secondary band maximum centered at the $\Sigma$ points in the Brillouin zone.
Reasonable estimates were reported close to $m_{\Sigma} \sim m_0$ with an anisotropy of $\sim$10 [B.~F. Gruzinov, I.~A. Drabkin, and Yu.~I. Ravich, Sov. Phys. Semicond. {\bf 13}, 315 (1979)]. These holes are substantially more massive and therefore dominate the density of states. In estimating the Fermi energy and other electronic parameters from Hall effect measurements, we have assumed that the band offset (170 meV) does not vary with $x$.} we assume the elastic scattering limit holds at low temperatures such that $l_L = l_{\Sigma}=l$. The Fermi level then lies in the range 190--210 meV below the top of the valence band. Consequently, average values of $v_F$ are $\sim 1.4\times 10^6$ m/s for the $L$ holes and $1 \times 10^5$ m/s for the $\Sigma$ holes. The resulting values for $l$ are listed in Table \ref{tab:table1}. The principal contribution to the uncertainty in this quantity arises from errors in the geometric factor used to calculate the resistivity of samples. Propagation of this error is the dominant effect in the uncertainties of subsequent derived quantities, including $\xi_0$ and $\lambda_{\mathrm{eff}}$. The Ginzburg-Landau coherence length $\xi(0)$ is calculated from $H_{c2}(0)$ by \begin{equation} H_{c2}(0)=\frac{\Phi_0}{2\pi\xi^2(0)}, \end{equation} where $\Phi_0=2.0678\times10^{-15}$ T$\,$m$^2$. Estimates for $\xi(0)$ are 290 \AA{} for $x=1.1\%$ and 240 \AA{} for $x=1.4\%$ (Table~\ref{tab:table1}) and should be independent of orientation for this cubic material. The small values of $l$ imply that the material is in the dirty limit with $l < \xi_0$. Therefore, the intrinsic coherence length $\xi_0$ can be extracted from the approximation $\xi(0) \sim (l\xi_0)^{1/2}$; the resulting values are listed in Table~\ref{tab:table1}. In comparison, the BCS expression for $\xi_0$ is \begin{equation} \xi_0=\frac{\alpha\hbar v_F}{k_B T_c}, \end{equation} where the BCS value of $\alpha$ is 0.18.
Using values of $\xi_0$ derived from the dirty limit approximation, we find $v_F$ estimated from this formula (given in Table~\ref{tab:table1}) is between those calculated separately for the $L$ and $\Sigma$ holes. This is consistent with a mixed contribution from both carrier types due to the substantial scattering implied by the short mean free path. The London penetration depth for two carrier types is given by \begin{equation} \frac{1}{\lambda_L^2}=\frac{\mu_0 n_L e^2}{m_L}+\frac{\mu_0 n_{\Sigma} e^2}{m_\Sigma}, \end{equation} where the superfluid densities $n_L$ and $n_\Sigma$ are approximated as the carrier densities for each carrier type, and $m_L$ and $m_\Sigma$ are the effective masses of each band. The corresponding values of $\lambda_L$ are listed in Table~\ref{tab:table1} and are almost independent of orientation. In the dirty limit, we can estimate the effective penetration depth from \begin{equation} \lambda_{\mathrm{eff}}=\lambda_L(\xi_0/l)^{1/2}, \end{equation} values of which are given in Table~\ref{tab:table1}. These estimates are in good agreement with microwave conductivity measurements that show $\lambda(0) \sim 3$ $\mu$m for $x=1.4\%$.\cite{Ormeno} Finally, we find the Ginzburg-Landau parameter using $\kappa=\lambda_{\mathrm{eff}}/\xi(0)$ and estimate $H_c$ and $H_{c1}$ from the relationships $H_{c2}=\sqrt{2}\kappa H_c$ and $H_{c1}=\frac{H_c}{\sqrt{2}\kappa}\ln\kappa$ (Table~\ref{tab:table1}).
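The chain of derived quantities just described can be reproduced from the measured inputs ($T_c$, $H_{c2}(0)$, $l$, and $\lambda_L$). The following minimal sketch uses central values only and is intended as a consistency check against the tabulated parameters, not as the original analysis code.

```python
import math

# Reconstruct the derived superconducting parameters from the measured
# inputs (central values only; uncertainties are discussed in the text).
phi0  = 2.0678e-15    # flux quantum, T m^2
hbar  = 1.0546e-34    # J s
kB    = 1.3807e-23    # J/K
alpha = 0.18          # BCS coefficient in xi0 = alpha*hbar*vF/(kB*Tc)

def derived(Tc, Hc2_0, l, lambda_L):
    """Tc in K, Hc2(0) in T, mean free path l and London depth lambda_L
    in Angstrom; returns the derived parameters."""
    xi_GL = math.sqrt(phi0 / (2 * math.pi * Hc2_0)) * 1e10  # GL coherence length, A
    xi0 = xi_GL**2 / l                            # dirty limit: xi(0) ~ sqrt(l*xi0)
    vF = (xi0 * 1e-10) * kB * Tc / (alpha * hbar)           # m/s, from BCS xi0
    lam_eff = lambda_L * math.sqrt(xi0 / l)                 # effective depth, A
    kappa = lam_eff / xi_GL                                 # GL parameter
    Hc = (Hc2_0 * 1e4) / (math.sqrt(2) * kappa)             # thermodynamic field, Oe
    Hc1 = Hc * math.log(kappa) / (math.sqrt(2) * kappa)     # lower critical field, Oe
    return xi_GL, xi0, vF, lam_eff, kappa, Hc, Hc1

for x, Tc, Hc2_0, l, lam_L in [(1.1, 1.16, 0.39, 32, 1600),
                               (1.4, 1.38, 0.60, 19, 1500)]:
    xi_GL, xi0, vF, lam_eff, kappa, Hc, Hc1 = derived(Tc, Hc2_0, l, lam_L)
    print(f"x={x}%: xi(0)={xi_GL:.0f} A, xi0={xi0:.0f} A, vF={vF:.1e} m/s, "
          f"lambda_eff={lam_eff/1e4:.1f} um, kappa={kappa:.0f}, "
          f"Hc={Hc:.0f} Oe, Hc1={Hc1:.1f} Oe")
```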
\begin{table} \caption{\label{tab:table1}Superconducting parameters of Pb$_{1-x}$Tl$_x$Te for two representative Tl concentrations.} \begin{ruledtabular} \begin{tabular}{ccc} &$x=1.1$ at.$\%$ &$x=1.4$ at.$\%$\\ \hline $T_c$ &1.16 $\pm$ 0.01 K &1.38 $\pm$ 0.03 K\\ $H_{c2}(0)$ &0.39 $\pm$ 0.04 T &0.60 $\pm$ 0.07 T\\ $l$ &32 $\pm$ 8 \AA &19 $\pm$ 5 \AA\\ $\xi(0)$ &290 $\pm$ 15 \AA &240 $\pm$ 14 \AA\\ $\xi_0$ &2600 $\pm$ 700 \AA &3000 $\pm$ 850 \AA\\ $v_F$ &$(2.2 \pm 0.6) \times10^5$ m/s &$(3.0 \pm 0.8) \times10^5$ m/s\\ $\lambda_L$ &1600 $\pm$ 80 \AA &1500 $\pm$ 120 \AA\\ $\lambda_{\mathrm{eff}}$ &1.4 $\pm$ 0.4 $\mu$m &1.9 $\pm$ 0.5 $\mu$m\\ $\kappa$ &48 $\pm$ 12 &79 $\pm$ 20\\ $H_c$ &57 $\pm$ 14 Oe &54 $\pm$ 13 Oe\\ $H_{c1}$ &3 $\pm$ 0.8 Oe &2 $\pm$ 0.5 Oe\\ \end{tabular} \end{ruledtabular} \end{table} \section{\label{sec:iv}Discussion} The above results indicate that Tl-doped PbTe is a Type II, BCS superconductor in the dirty limit, which is not too surprising given that the material is a doped semiconductor. To a large extent this observation rules out the possibility of more exotic scenarios for the superconductivity, such as condensation of preformed pairs, at least for the highest Tl concentrations. Here, we discuss some implications for the charge Kondo model that we have previously proposed for this material and consider alternative explanations. First, we briefly reiterate the salient features of the charge Kondo model relevant to understanding these data.
The idea of a charge Kondo effect associated with degenerate valence (charge) states of a valence-skipping element was first discussed by Taraphder and Coleman,\cite{Taraphder_1991} and was later re-examined in the limit of dilute impurities for the case $T_c \sim T_K$ by Dzero and Schmalian.\cite{Dzero_2005} Weak hybridization of these degenerate impurity states with conduction electrons (or, in the case of Tl-doped PbTe, with valence band holes) results in a Kondo-like effect with various parallels to the more common magnetic case. Here, the pseudospins correspond to zero or double occupancy of an impurity orbital, which can be described in terms of a negative effective $U$. The degeneracy of the two valence states in PbTe is not accidental but emerges naturally from the doping effect of the Tl impurities themselves.\cite{Dzero_2005} For values of the chemical potential less than a characteristic value $\mu^*$, the impurities are all present as one valence (Tl$^+$), which act to dope the material. As more impurities are added, eventually a value of the chemical potential $\mu^*$ is reached for which the two valence states are degenerate, and at which value the chemical potential is then pinned.\cite{Dzero_2005} The resulting charge Kondo effect, if present, clearly requires that hybridization between the impurity states and the host material is relatively weak. The semiconducting nature of the host PbTe would naturally provide an environment in which the local density of states at the impurity sites is rather small. Now, we discuss the origin of the enhanced electronic contribution to the heat capacity seen in Figure 1. The density of states at the Fermi energy, $N(0)$, can be estimated from the linear term $\gamma$ in the heat capacity. The resulting values of $N(0)$ are shown on the right axis of the inset to Figure 1.
Part of the observed enhancement can be attributed to band filling effects, since the Hall number continues to rise slowly with $x$ even for $x > x_c$.\cite{paper3} However, as has been discussed elsewhere,\cite{Chernik_1981b} the observed heat capacity is larger than expected from the band structure of PbTe alone (dashed line in the inset to Fig. 1), implying the presence of additional states associated with the Tl impurities. Within the charge Kondo model, the additional contribution would arise from the Abrikosov-Suhl resonance that develops at $E_F$ for temperatures below $T_K$. If this is the case, then in principle we can estimate the concentration of Kondo impurities using the crude approximation $\gamma T_K \sim R\ln 2$ per mole of impurities. Unfortunately, uncertainty in the band parameters describing the $\Sigma$ band$^{\ref{test}}$ means that it is difficult to confidently extract the magnitude of the additional contribution to the heat capacity over and above the band-filling effect. Nevertheless, we can make a rough estimate to at least put limits on the applicability of this model. Using the measured Hall coefficient,\cite{paper3} published band parameters,$^{\ref{test}}$ and the assumption that the band offset does not change with Tl doping, the band contribution to $\gamma$ can be estimated as shown by the dashed line in the inset to Fig. 1. For $x=1.4\%$, the observed $\gamma$ is 0.46 mJ$\,$mol$^{-1} \,$K$^{-2}$ larger than the expected band contribution. If this enhancement is due to Kondo physics, then for $T_K \sim 6$ K (the value estimated in Ref.~\onlinecite{Matsushita_2005} for $x=0.3\%$), the concentration of Kondo impurities is approximately $7\times10^{18}$ cm$^{-3}$. In comparison, $x$ = 1.4$\%$ corresponds to a Tl concentration of $2\times10^{20}$ cm$^{-3}$.
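The impurity-concentration estimate above follows from $\gamma T_K \sim R\ln 2$ per mole of impurities. A short sketch of the arithmetic, assuming a PbTe lattice constant of $a = 6.46$~\AA{} (rocksalt structure, four formula units per cubic cell) to convert the mole fraction into a volume density; this lattice constant is a literature value not quoted in the text.

```python
import math

# Kondo-impurity concentration from the heat capacity enhancement,
# using gamma_imp * T_K ~ R*ln2 per mole of impurities.  The lattice
# constant a = 6.46 A (rocksalt PbTe, 4 formula units per cubic cell)
# is an assumed literature value used only for the unit conversion.
R = 8.314            # J mol^-1 K^-1
d_gamma = 0.46e-3    # J mol^-1 K^-2, excess gamma for x = 1.4%
T_K = 6.0            # K, Kondo temperature estimated for x = 0.3%
a = 6.46e-8          # cm, assumed PbTe lattice constant
n_fu = 4 / a**3      # formula units per cm^3, ~1.5e22

frac = d_gamma * T_K / (R * math.log(2))  # Kondo impurities per formula unit
n_kondo = frac * n_fu                     # cm^-3
n_Tl = 0.014 * n_fu                       # total Tl density at x = 1.4%
print(f"n_Kondo ~ {n_kondo:.1e} cm^-3 of n_Tl ~ {n_Tl:.1e} cm^-3 "
      f"({n_kondo / n_Tl:.1%})")
```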
Hence, if a charge Kondo description is appropriate for this material, and if the Kondo temperature is $\sim 6$ K, then only a small fraction ($\sim3\%$) of the Tl impurities are contributing to this effect. Within the charge Kondo model outlined above, this would imply that the Tl impurities in PbTe must be characterized by a range of $\mu^*$ values, such that only the subset of impurities for which $\mu = \mu^*$ have degenerate valence states. There are other observations that appear to support this tentative conclusion. As we had previously noted,\cite{Matsushita_2005} the magnitude of the resistivity anomaly is also less than what would be expected if all of the Tl impurities were contributing to the Kondo effect. Data for lower Tl concentrations, for which a reasonable fit of the low-temperature data can be made over an extended temperature range, indicated that approximately 1$\%$ of the Tl impurities contribute to the Kondo behavior.\cite{Matsushita_2005} This is in broad agreement with the value deduced above from the heat capacity enhancement. In addition, the Hall number is observed to continue to rise for $x > x_c$,\cite{paper3} implying that the chemical potential is not pinned at one precise value, but rather is slowed in its progress as $x$ increases, also consistent with a distribution of $\mu^*$ values. Invoking Kondo physics of course implies a temperature dependence to the enhancement of $\gamma$ for temperatures above $T_K$. Our measurements (Fig. 1) show that the enhancement to $\gamma$ is temperature independent for temperatures between 0.3 K and 1 K. However, uncertainty in this difference grows rapidly at higher temperatures due to the increasingly large phonon contribution to the heat capacity. As a result, we cannot unambiguously extract the temperature dependence of any heat capacity enhancement beyond the estimated Kondo temperature of 6 K. 
Within a BCS scenario, $T_c$ varies as $\exp[-1/N(0)V]$, where $V$ is the pairing interaction. Figure 6 shows ln($T_c$) versus $1/\gamma$ for samples with $x > x_c$. For samples with $T_c > 0.5$ K, both parameters were extracted from the same physical crystal. However, for samples with a lower critical temperature, $T_c$ was determined from resistivity measurements on different crystals from the same growth batch, introducing additional errors due to uncertainty in the Tl concentration. As can be seen, ln($T_c$) scales approximately linearly with $1/\gamma$ within the uncertainty of the measurements. For a constant $V$, this would imply that the observed trend in $T_c$ with $x$ (Fig.~\ref{fig:Tcvx}) is due to the increasing density of states (inset to Fig. 1). However, the situation is less clear if the charge Kondo picture is applicable, in which case $V$ depends strongly on the Tl concentration,\cite{Dzero_2005} and the enhancement in $N(0)$ derives from Kondo physics. In the case of a superconductor with magnetic impurities, although $N(0)$ is enhanced by this effect, the superconductivity is nevertheless suppressed for $T \sim T_K$ due to the pair-breaking effect associated with the rapid fluctuations in the magnetic moment.\cite{Maple_1972, Muller-Hartmann_1971} In the case of the charge Kondo model, the situation is slightly more complex because the impurities now provide both a local pairing mechanism as well as a pair-breaking effect close to $T_K$. Consequently, the range of temperatures over which it is anticipated that $T_c$ will be suppressed is predicted to be much lower than $T_K$,\cite{Dzero_2005} in contrast to the case of magnetic impurities. Hence, for the case $T_c \sim T_K$, the superconductivity can in principle benefit from the enhancement in $N(0)$ due to the charge Kondo effect in a way that it cannot for magnetic impurities. The observed trend shown in Fig.
6 may reflect this effect, but it is difficult to obtain quantitative estimates of the relative contributions to $T_c$ from the enhancement in $N(0)$ and the pairing interaction itself in this crossover regime of $T_c \sim T_K$.\cite{Dzero_2005} \begin{figure} \includegraphics{fig6.eps} \caption{\label{fig:lnTc}Plot of ln($T_c$) vs. $1/\gamma$. Dashed line is a guide for the eye.} \end{figure} In the charge Kondo model, if $T_c$ is large compared to $T_K$, then the pseudospin moment is unscreened at $T_c$, in which case the superconductivity is born from preformed pairs. In this limit, one would anticipate a much smaller anomaly in the heat capacity $\Delta C/\gamma T_c$ than the BCS result of 1.43. As noted in Section~\ref{sec:iii}, this is clearly not the case for the highest Tl concentrations, consistent with our previous observation that $T_c \sim T_K$ for this material.\cite{Matsushita_2005} However, it is difficult to understand the observed $x$ dependence of $\Delta C/\gamma T_c$ within this same picture. Since $T_c$ decreases with decreasing $x$ (Fig. 3), one would expect the superconductivity to become more BCS-like at lower Tl concentrations. Instead, we find that $\Delta C/\gamma T_c$ becomes substantially smaller as $x$ is reduced (inset to Figure 2). Experiments are in progress to measure the heat capacity of samples with yet smaller Tl concentrations to even lower temperatures to see whether this trend continues. Could the superconductivity in Tl-doped PbTe have its origin in more mundane physics after all? While the data presented here enable us to characterize this material as a BCS superconductor, they do not allow us to distinguish between different pairing mechanisms. As we have previously argued,\cite{Matsushita_2005} many aspects of the observed thermodynamic and transport properties are suggestive of charge Kondo physics. 
Moreover, the uniqueness of the Tl impurities, being the only dopant to cause superconductivity in this material, cannot be ignored. Nevertheless, in the absence of experiments directly probing the Tl valence (which are currently in progress), we cannot rule out less exciting possibilities, including the formation of a narrow impurity band with a relatively large density of states. In such a case, the pairing interaction would most likely be phonon mediated, though the substantial residual resistance might argue that strong Coulomb scattering also plays a role. The observed low-temperature resistivity anomaly would then presumably have its origin in some form of weak localization, though the temperature and field dependence of this feature appear to argue against such a scenario.\cite{Matsushita_2005} \section{Conclusions} In summary, we have shown that Tl-doped PbTe is a Type II, BCS superconductor in the dirty limit. None of these observations is in disagreement with the charge Kondo model previously described, though they do put some limitations on its applicability. Specifically, the relatively small enhancement of the electronic contribution to the heat capacity implies that if a charge Kondo description is appropriate then only a small fraction of the Tl impurities can be participating in the Kondo physics. Within the model described in Ref.~\onlinecite{Dzero_2005}, this can be understood in terms of a distribution of $\mu^*$ values, such that only a subset of the Tl impurities have both valence states exactly degenerate for a particular value of the chemical potential within this range, though this has yet to be experimentally verified. \begin{acknowledgments} We gratefully thank J.~Schmalian, M.~Dzero, M.~R. Beasley, and B.~Moyzhes for numerous helpful discussions. We also acknowledge Robert E. Jones for technical assistance with EMPA measurements and A.~T. Sommer for help with Hall effect measurements.
This work is supported by the DOE, Office of Basic Energy Sciences, under contract number DE-AC02-76SF00515. I.~R.~F. was also supported by the Alfred P.~Sloan and Terman Foundations. \end{acknowledgments}
\section{Introduction} One of the most crucial questions for a policy maker is how to assign a treatment to an individual or a group. For example, during the COVID-19 pandemic, each government has tried to find an effective order of vaccination. Recently, statistical treatment rules based on the decision theoretic framework have received much attention in treatment evaluation studies (for a general review, see \citet{manski2004statistical,manski2021econometrics} and \citet{hirano2020asymptotic}). Compared to the conventional approaches based on point estimation and inference procedures, statistical treatment rules make it possible to evaluate a broader range of treatment rules, including rules that map data directly to an action. Despite active research in this area, most studies focus on individualistic treatment responses, and results are limited for the case where treatment outcomes depend on each other. However, as the vaccination example shows, it is important in many empirical settings to consider dependent treatment outcomes. In this paper we study a treatment assignment rule in the presence of treatment outcome dependency. In addition to the problem of vaccination, there are many applications in which a policy maker has to weigh dependent treatment outcomes. \citet*{heckman1999human} evaluate the effect of a tuition reduction policy in a general equilibrium framework. They show that ignoring the outcome dependency overestimates the effect of the policy on college enrollment by more than a factor of ten. \citet{duflo2004scaling} also argues that even a randomized field experiment faces a challenge in scaling up to a larger level because of general equilibrium effects or, more generally, dependent treatment outcomes.
Using Danish data on a large job assistance program, \citet{gautier2018estimating} show that the unemployed who are not selected in the program spend even more time in job search than those who look for a job in provinces without such a program. Thus, the outcome of the untreated depends on that of the treated, and treatment evaluations assuming independent treatment outcomes can mislead a policy maker.\footnote{See also \cite{beaman2012social}, \cite{bursztyn2014understanding}, and \cite{duflo2003role} for additional examples.} We investigate this problem in the framework of statistical decision theory. Treatment outcomes are allowed to depend on each other in a flexible way. We aim to construct a treatment assignment rule under a certain criterion and to characterize it. Thus, a treatment choice using sample data, i.e.\ a statistical decision rule, is the main object of interest in this paper. Having in mind a large-scale policy implementation, we do not assume that any individual network information is available. Instead, we impose a shape restriction on treatment response functions following \cite{manski2013identification}. Specifically, we assume \emph{anonymous interactions}, which implies that the treatment response of an individual depends on the treatment status of others but is invariant to the identity of those individuals. In other words, it is independent of permutations of the treatment assignments of others. For example, in the job assistance program above, this condition implies that the negative effect of the policy on the untreated depends only on the total number of people who receive the benefit of the job assistance program. This assumption provides a good approximation of the world with a large-scale policy implementation, and it makes both theoretical and empirical analyses feasible by reducing the domain of the response function substantially. We define the sampling process carefully following the statistical decision theory framework.
It contrasts with the standard \emph{individualistic} treatment effect model in that our process represents both the treatment status variable and the outcome variables as a vector. The dimension of the vector is the same as the number of different treatment ratios in the target population. We adopt the minimax regret approach to handle the underlying ambiguity of the data generating process. We propose an intuitive decision rule called the multinomial empirical success (MES) rule that extends the empirical success rule in \citet{manski2004statistical} to the current setup. We investigate the properties of the MES rule, followed by possible applications. The main contributions of this paper are summarized as follows. First, we prove that the MES rule achieves asymptotic optimality for the minimax regret criterion. Using the structure of the finite action problem in the statistics literature, it extends the seminal optimality result in \citet{hirano2009asymptotics} to multiple treatments. Second, we derive non-asymptotic bounds on the expected welfare and the maximum regret under the MES rule. Obtaining these bounds is challenging since outcomes are correlated under social interaction. We also provide two applications showing how these bounds can be used: (i) designing an optimal sampling procedure, and (ii) computing the sample size sufficient to allow additional covariates in the treatment rule. The rest of the paper is organized as follows. We finish this section by reviewing related literature. In section \ref{sec:framework} we provide the main framework of the analysis. In section \ref{sec:multinomial} we define the MES rule and derive the upper bounds of the maximum regret. We also provide two applications of these bounds. In section \ref{sec:optimality} we show the asymptotic optimality of the MES rule. We provide some concluding remarks in section \ref{sec:conclusion}. All proofs and technical details are deferred to the appendix.
\subsection{Related Literature} In his seminal work, \citet{manski2004statistical} considers statistical decision theory in the context of heterogeneous treatment rules; he proposes the empirical success rule and derives finite sample bounds on the minimax regret. \citet{stoye2009minimax} characterizes the minimax regret rule using the game theoretic approach and shows that the empirical success rule is a good approximation of the minimax regret rule under certain sampling processes. \citet{hirano2009asymptotics} apply the limit experiment framework to develop large sample approximations to statistical treatment rules. \citet{kitagawa2018should} propose the empirical welfare maximization (EWM) method that selects the treatment rule maximizing the sample analogue of the social average welfare. \citet{athey2021policy} propose a doubly robust estimation procedure for the EWM problem and show rate-optimal regret bounds. \cite{mbakop2021model} consider a large class of admissible rules and propose a penalized EWM method that chooses the optimal size of the policy class. \citet{manski2016sufficient,manski2019trial} argue for designing clinical trials based on the goal of statistical treatment rules rather than on the statistical power of a hypothesis test. Motivated by a risk-averse policy maker, \citet{manski2007admissible} and \citet*{kitagawa2022treatment} propose nonlinear transformations of welfare and regret. \citet{manski2013identification} studies identification of treatment effects with social interaction. To make the problem feasible, he proposes possible approximation methods including \emph{anonymous interaction}, which will be explained in detail later. \citet{manski2009identification} analyzes statistical treatment rules under the anonymous interaction assumption and a shape restriction on the mean welfare function.
\citet{viviano2019policy} proposes the network empirical welfare maximization method under the anonymous interaction assumption among first-degree neighbors. Our approach differs from his in that we propose an intuitive estimation procedure that does not require heavy computation; moreover, the fact that the proposed multinomial empirical success rule achieves asymptotic optimality in the sense of \citet{hirano2009asymptotics} is new. \section{Framework}\label{sec:framework} We consider the following framework based on \cite{manski2004statistical} and \cite{stoye2009minimax}. Consider a social planner who assigns a binary treatment $T \in \{0,1\}$ to each individual $j$ in a heterogeneous population $J$. The population is divided into mutually exclusive and exhaustive groups based on observed characteristics (e.g.\ high school graduate vs. college graduate). Let $g \in \{1,2,\ldots,G\}$ be the index of a group and $n_{g}$ be the (population) size of group $g$. Individual $j$ in group $g$ has a response function $y_{jg} :\{0,1\}\times\{0,1\}^{n_{g} -1} \mapsto [0,1]$ that maps each possible group treatment vector $\mathbf{t} = (t_1,\dots,t_{n_{g}}) \in \{0,1\}^{n_{g}}$ into an outcome in $[0,1]$. Thus, we can write $ y_{jg}(\mathbf{t}) = y_{jg}(t_j, \mathbf{t}_{-j}) $, where $t_j$ is the treatment assigned to individual $j$ and $\mathbf{t}_{-j}$ represents the treatment vector for individuals in the same group excluding person $j$'s treatment assignment. This response function generalizes the individualistic treatment in that spillover effects are allowed inside the same group (e.g.\ segmented labor markets). Note that the model allows the most flexible interactions when the whole population is categorized as a single group. The range of $[0,1]$ is a simple normalization and any bounded outcome space can be allowed. For notational simplicity, we consider a single group from now on and drop the subscript $g$ unless it causes any confusion.
We consider a probability space $(J, \Sigma, P_J)$. The population $J$ is dense in the sense that $ P_J(\{j\})=0$, for all $j\in J$. The social planner cannot distinguish members of $J$. Therefore, we can consider the model as an induced random process, $Y(\mathbf{t})$, which is a potential outcome depending not only on individual treatment status, $t_j$, but also on possible treatments of other members, $\mathbf{t}_{-j}$. Given the large size of the population $J$, this random process in its most general structure is intractable. Following the social interaction literature, we impose the following assumption. \begin{assum}[Anonymous Interactions, \cite{manski2013identification}]\label{assump:anonymous} The outcome of individual $j$ is invariant with respect to permutations of the treatments received by other members of the group. \end{assum} Assumption \ref{assump:anonymous} implies that the treatment ratio is a sufficient statistic for $\mathbf{t}_{-j}$. Let $\pi(\mathbf{t})$ be the treatment ratio of treatment vector $\mathbf{t}$. Then, for two treatment vectors $\mathbf{t} \neq \mathbf{t}'$ such that $\mathbf{t}= (t_j, \mathbf{t}_{-j})$ and $\mathbf{t}'= (t_j, \mathbf{t}'_{-j})$, Assumption \ref{assump:anonymous} implies that \eqs{ y_j(\mathbf{t})=y_j(\mathbf{t}') \mbox{ if }\pi(\mathbf{t})=\pi(\mathbf{t}'). } Therefore, the outcome of a treatment $\mathbf{t}$ depends on the individual's treatment status $t_j$ and $\pi(\mathbf{t})$, and we can rewrite the response function $y_j(\mathbf{t})$ as $y_j(t_j,\pi(\mathbf{t}_{-j})): \{0,1\} \times \Pi \mapsto [0,1]$, where $\Pi:=[0,1]$. The potential outcome processes now become $(Y_0(\pi),Y_1(\pi))$ whose distribution is $P_Y(Y_0(\pi),Y_1(\pi))$. Note that the induced measure $P_Y$ can be constructed from $P_J$ given the response function $y_j(\cdot)$. The distribution $P_Y$ is identified with a state of the world $\theta \in \Theta$ that is unknown to the policy maker.
Note that $\{P_{Y,\theta}(Y_0(\pi),Y_1(\pi)): \theta \in \Theta\}$ is composed of all possible distributions on the outcome space $[0,1]^2$ for each $\pi \in \Pi$. To make the main arguments clear, we impose an additional assumption that the set $\Pi$ is discrete. \begin{assum}[Discrete Choice Set]\label{assump:discrete Pi} Let $\pi$ be the fraction of treated individuals in a group. The support of $\pi$ denoted by $\mathbf{\Pi}$ is a discrete set of finite elements. \end{assum} Assumption \ref{assump:discrete Pi} is suitable for many applied settings since the treatment ratio set may be constrained exogenously for ethical, budgetary, equity, legislative or political reasons. In addition, this is a practical assumption when experiments are costly to implement at all feasible treatment ratios. The assumption could also provide a good approximation if $\mathbf{\Pi}$ is a continuous interval but the outcome function $y_j$ is smooth in $\pi$. We provide the following examples. \begin{exmp}[Job placement assistantship program] \label{Job placement assistantship program} \citet{crepon2013labor} design a two-stage randomized experiment to evaluate the direct and displacement impacts of job placement assistance (JPA) on the labor market outcomes of young, educated job seekers in France. Individuals are organized in segmented labour markets (e.g. cities) and five treatment ratios (0\%, 25\%, 50\%, 75\%, and 100\%) are considered. An individual's labor market outcome depends not only on his/her treatment status but on the treatment ratio (fraction of individuals who received the JPA in their labor market). \end{exmp} \begin{exmp} [Cholera vaccine coverage] \cite{root2011role} analyze data from a field trial in Bangladesh to assess the evidence of indirect protection from cholera vaccines when vaccination coverage rates vary across the social network. Households are organized into independent groups using kinship connections.
Vaccine coverage rates are discretized into the following ranges: $(0\%,27.2\%]$, $(27.2\%,40.0\%]$, $(40.0\%,50.0\%]$, $(50.0\%,62.5\%]$, and $(62.5\%, 100\%]$. \end{exmp} We now turn our attention to a random sample that helps the policy maker infer the state of the world $\theta$. Let $\mathbf{\Pi}= \{\pi_1, \pi_2,\dots, \pi_K\}$. The experiment generates a sample space $\Omega := ( \{0,1\} \times [0, 1])^{n}$, where $n :=\sum_{k=1}^K n_{k}$ and $n_{k}$ is the subgroup size of an experiment with treatment ratio $\pi_k$. A typical element of $\Omega$ is represented by \eqs{ \omega^n := \{(t_{i_1}(\pi_1) ,y_{i_1}(\pi_1))_{i_1=1}^{n_{1}},(t_{i_2}(\pi_2) ,y_{i_2}(\pi_2))_{i_2=1}^{n_{2}}, \dots , (t_{i_K}(\pi_K) ,y_{i_K}(\pi_K))_{i_K=1}^{n_{K}} \}. } Conditional on the treatment $t_{n_k}(\pi_k)$, $y_{n_k}(\pi_k)$ is an independent realization of $Y_{t}(\pi_k)$ for $t=0,1$. Therefore, the sample helps the policy maker infer the state of the world $\theta$. To simplify notation, we assume equal subgroup sizes, $n_1=\cdots=n_K = n/K$, so that $\omega^n$ is composed of $n/K$ independent copies of $ \omega_i := \{(t_{i}(\pi_1) ,y_{i}(\pi_1)), \dots , (t_{i}(\pi_K) ,y_{i}(\pi_K)) \}. $ The policy maker constructs a statistical treatment rule $\delta : \Omega \mapsto \mathbf{\Pi}$\, that maps a sample realization $\omega^n$ onto a treatment assignment ratio $\pi \in \mathbf{\Pi}$. Recall that we restrict our attention to a single group in this framework, but the statistical treatment rule can be group-specific when there are multiple groups. In section \ref{sec:extensions}, we extend the current framework to the multiple-group case.
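As a concrete illustration of this sampling framework, the sketch below simulates a sample $\omega^n$ with $K=5$ ratios and equal subgroup sizes, then applies a simple treatment rule $\delta$ mapping the sample to a ratio in $\mathbf{\Pi}$. The outcome distribution and the particular rule (pick the ratio with the highest average outcome) are our own hypothetical choices, used only to make the objects $\omega^n$ and $\delta$ concrete.

```python
import random

random.seed(0)
Pi = [0.0, 0.25, 0.5, 0.75, 1.0]   # discrete set of treatment ratios
n = 100                            # total sample size
n_k = n // len(Pi)                 # equal subgroup sizes n/K

def draw_subsample(pi_k, size):
    """Treat a fraction pi_k of the subgroup and draw outcomes in [0, 1]
    (a hypothetical data-generating process, for illustration only)."""
    sample = []
    for i in range(size):
        t = 1 if i < round(pi_k * size) else 0
        y = min(1.0, max(0.0, 0.5 + 0.1 * t + random.gauss(0.0, 0.1)))
        sample.append((t, y))
    return sample

omega = {pi_k: draw_subsample(pi_k, n_k) for pi_k in Pi}

def delta(omega):
    """A simple statistical treatment rule: map the realized sample to the
    ratio in Pi whose subsample shows the highest average outcome."""
    return max(omega, key=lambda pi_k: sum(y for _, y in omega[pi_k]) / n_k)

assert delta(omega) in Pi
```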
The expected outcome (or social welfare) given the statistical treatment rule $\delta$ and the state $\theta$ is \eq{ u(\delta,\theta) & := \int U(\delta(\omega^n),\theta) dQ^n_{\theta}(\omega^n) \label{eq:expected-outcome-general} \\ & = \sum_{k=1}^K U(\pi_k,\theta) \Pr(\delta(\omega^n)=\pi_k ; \theta), \label{eq:expected-outcome} } where $Q^n_{\theta}$ is the distribution of $\omega^n$ given state $\theta$, $U(\pi,\theta):= (1-\pi) \cdot E_{\theta}[Y_0(\pi)] + \pi \cdot E_{\theta}[Y_1(\pi)]$ is the expected outcome (or social welfare) for any given treatment ratio $\pi$ in state $\theta$, and $E_{\theta}[Y_t(\pi)]$ is the mean potential outcome given $\theta$ and $\pi$. Note that the potential outcome variable $Y_t(\pi)$ depends on the treatment of others through $\pi$. This point becomes clearer if we compare the expected outcome in \eqref{eq:expected-outcome-general} with that of the individualistic treatment model (e.g.\ \citet{stoye2009minimax}). When there is no social interaction, the mean potential outcome is independent of the group treatment ratio $\pi$, i.e.\ $E_{\theta}[Y_t(\pi)]=E_{\theta}[Y_t]$. Then, the expected outcome in \eqref{eq:expected-outcome-general} becomes \eqs{ \int U(\delta(\omega^n),{\theta}) dQ^n_{\theta}(\omega^n) & = \int \left( (1-\delta(\omega^n)) E_{\theta}[Y_0] + \delta(\omega^n) E_{\theta}[Y_1] \right) dQ^n_{\theta}(\omega^n) \\ & = E_{\theta}[Y_0] \left( 1 - \int \delta(\omega^n) dQ^n_{\theta}(\omega^n) \right) + E_{\theta}[Y_1] \int \delta(\omega^n) dQ^n_{\theta}(\omega^n) \\ & \equiv \mu_0 (1- E_{\theta}[\delta(\omega)]) + \mu_1 E_{\theta}[\delta(\omega)], }where the last line is equal to the expected outcome in \citet{stoye2009minimax} using his notation. It is interesting to compare our framework to the individualistic multiple-treatment design.
Given the finite number of treatment ratios, one might want to interpret the framework in terms of $K$ different individual treatments without any social interaction: e.g.\ define $Y_1 := Y_1(\pi_1), Y_2 := Y_1(\pi_2), \ldots, Y_K:= Y_1(\pi_K)$ and set $(Y_0,Y_1,\ldots,Y_K)$ as a vector of potential outcomes. However, this multiple-treatment design does not capture the feedback effect of the social interaction on non-treated individuals. Note that $Y_0(\pi)$ still depends on the treatment ratio $\pi$ in our framework, which is not embedded in the potential outcome vector $(Y_0,Y_1,\ldots,Y_K)$ of the standard multiple-treatment design. The decision problem is to find a statistical treatment rule that maximizes the expected outcome function $u(\delta,\theta)$. However, there exists ambiguity about the state $\theta$, and we need a decision criterion that accounts for this ambiguity. In this paper, we adopt the minimax regret rule following \citet{manski2004statistical} and \citet{stoye2009minimax}. The regret function of $\delta$ given state $\theta$ is defined as \eq{ R(\delta,{\theta}) := \max_{d \in D} u(d,{\theta}) - u(\delta,{\theta}), \label{eq:regret} }where $D$ is the set of all possible statistical treatment rules. The minimax regret solution of the decision problem is defined as \eq{ \delta^* := \argmin_{\delta \in D} \sup_{{\theta} \in \Theta} R(\delta,{\theta}). \label{eq:optimal-solution} } \section{Multinomial Empirical Success Rule}\label{sec:multinomial} In this section we propose a feasible statistical decision rule and characterize it by non-asymptotic bounds on the maximum regret. To convey the main idea, we continue to focus on the single-group case. The results are extended to the multiple-group case in section \ref{sec:extensions}, where we show how they can be used to determine the proper level of grouping.
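To make the welfare and regret objects concrete, the following sketch evaluates $U(\pi,\theta)$ and the regret of deterministic rules for one hypothetical state $\theta$. The mean potential outcomes below are invented for illustration; they are chosen so that, because $E_\theta[Y_t(\pi)]$ varies with $\pi$, full treatment need not be welfare-maximizing.

```python
# Hypothetical state theta: mean potential outcomes E_theta[Y_t(pi)] by ratio.
EY0 = {0.0: 0.50, 0.5: 0.55, 1.0: 0.58}   # untreated outcomes rise with coverage
EY1 = {0.0: 0.60, 0.5: 0.70, 1.0: 0.60}   # treated outcomes peak at pi = 0.5

def U(pi):
    """Expected welfare U(pi, theta) = (1 - pi) E[Y0(pi)] + pi E[Y1(pi)]."""
    return (1 - pi) * EY0[pi] + pi * EY1[pi]

best = max(EY0, key=U)                         # the oracle ratio pi_{M*}
regret = {pi: U(best) - U(pi) for pi in EY0}   # regret of always choosing pi
assert best == 0.5 and regret[best] == 0.0     # an interior ratio wins here
```

A statistical rule cannot attain zero regret in every state because it must infer $\theta$ from a finite sample; the bounds below quantify how far the proposed rule can fall short.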
In most examples, it is intractable to achieve the optimal statistical treatment rule by solving \eqref{eq:optimal-solution} directly as $R(\delta,\theta)$ involves integration over finite sample distributions. As an alternative, researchers propose various statistical treatment rules and analyze whether they are close to the optimal solution. One of the popular rules is an empirical success rule that substitutes empirical success rates for their population counterparts. We propose such an empirical success rule suitable for the proposed setup. To focus on our main arguments, we restrict our attention to samples with a strict ordering of the estimates of ${U}(\pi,\theta)$ for all $\pi \in \mathbf{\Pi}$. We define our multinomial empirical success (MES) rule as follows: \begin{align} \delta^{MES}(\omega) := \sum_{k=1}^K \pi_k \cdot I\left(\hat{U}(\pi_k)>\max_{\pi \in \mathbf{\Pi}_{-k} } \hat{U}(\pi)\right), \label{eq:MES} \end{align} where $\mathbf{\Pi}_{-k} := \mathbf{\Pi} \setminus \{\pi_k\}$ and \begin{align} \hat{U}(\pi_k) & := (1 - \pi_k) \cdot \hat{E}_{P_{\theta}}[Y_0(\pi_k)] + \pi_k \cdot \hat{E}_{P_{\theta}}[Y_1(\pi_k)] \nonumber \\ & = (1-\pi_k)\cdot \frac{\sum_{n_k=1}^{N_{k}} y_{n_k}(\pi_k)\cdot I(t_{n_k}(\pi_k)=0)}{\sum_{n_k=1}^{N_{k}} I(t_{n_k}(\pi_k)=0)} + \pi_k \cdot \frac{\sum_{n_k=1}^{N_{k}} y_{n_k}(\pi_k)\cdot I(t_{n_k}(\pi_k)=1)}{ \sum_{n_k=1}^{N_{k}} I(t_{n_k}(\pi_k)=1)}. \label{eq:empirical_U_hat} \end{align} Note that, using the convention $0\cdot \infty =0$, we define $\hat{U}(0)=N_1^{-1} \sum_{n_1=1}^{N_1} y_{n_1}(0)$ when $\pi_1=0$. Similarly, $\hat{U}(1)=N_K^{-1} \sum_{n_K=1}^{N_K} y_{n_K}(1)$ when $\pi_K=1$. We have a few remarks on the proposed statistical decision rule. First, we call the rule in \eqref{eq:MES} a Multinomial Empirical Success (MES) rule to emphasize the multinomial choice set in the setting. Second, we estimate $E_{P_{\theta}}[Y_t(\pi)]$ by using the empirical measure that depends on the unknown state $\theta$ of the world.
Thus, both $\hat{U}(\pi_k)$ and the outcome of $\delta^{MES}(\cdot)$ depend on $\theta$ although it is not included as an argument explicitly. Third, the MES rule encompasses the (unconditional) empirical success rule in \citet{manski2004statistical}. Let $\Pi=\{0,1\}$ with $\pi_1=0$ and $\pi_2=1$. Then, the MES rule becomes \begin{align*} \delta^{MES}(\omega) & = 0 \cdot I\left(\hat{U}(0)>\hat{U}(1)\right) + 1 \cdot I\left(\hat{U}(0)< \hat{U}(1)\right)\\ & = I\left( \frac{1}{N_1} \sum_{n_1=1}^{N_1} y_{n_1}(0) < \frac{1}{N_2} \sum_{n_2=1}^{N_2} y_{n_2}(1) \right), \end{align*} which is the empirical success rule in \citet{manski2004statistical}. We next evaluate the expected outcome in \eqref{eq:expected-outcome} using the MES rule in \eqref{eq:MES}: \begin{align*} u(\delta^{MES},\theta) &= \sum_{k=1}^K \Pr(\delta^{MES}(\omega^n)=\pi_k; \theta) \cdot U(\pi_k, {\theta}) \\ &= \sum_{k=1}^K \Pr\left(\hat{U}(\pi_k)>\max_{\pi \in \mathbf{\Pi}_{-k} } \hat{U}(\pi); \theta\right)\cdot U(\pi_k, {\theta})\\ &= \sum_{k=1}^K \Pr\left(\bigcap^K_{j=1, j\neq k}\{\hat{U}(\pi_k)> \hat{U}(\pi_j) \}; \theta\right)\cdot U(\pi_k, {\theta}). \end{align*} As we discussed above, $u(\delta^{MES}, {\theta})$ is intractable since it involves all possible finite sample distributions. However, building on \citet{manski2004statistical}, we can construct bounds for the expected outcome of the MES rule as follows: \begin{theorem}\label{thm:bounds} Fix $\theta\in\Theta$. Let $\mathbf{\Pi} = \{\pi_1, \dots, \pi_K \}$, $\Delta_{kl} :=|U(\pi_k, {\theta})- U(\pi_l, {\theta})|$ for $k,l=1,\ldots,K$, and $\pi_{M^*}:=\argmax_{\pi \in \mathbf{\Pi}} U(\pi,{\theta})$.
Then, the following inequality holds: \begin{align} U(\pi_{M^*}, {\theta})- \sum_{k=1}^{K} \exp \Bigg[-2\Delta_{{M^*}k}^2 \{A_{k}+A_{M^*} \}^{-1} \Bigg]\cdot \Delta_{{M^*}k} \leq u(\delta^{MES}, {\theta}) \leq U(\pi_{M^*}, {\theta}), \label{eq:main-theorem} \end{align} where $A_{k}:= (1-\pi_k)^{2}N_{k0}^{-1} + \pi_{k}^{2}N_{k1}^{-1} $ and $A_{{M^*}}:=(1-\pi_{M^*})^{2}N_{{M^*} 0}^{-1} + \pi_{M^*}^{2}N_{{M^*} 1}^{-1}$ with $N_{kt}$ representing the number of individuals with $\pi=\pi_{k}$ and $T=t$. \end{theorem} It is worth comparing these bounds with those in Proposition 1 of \citet{manski2004statistical}. Note that both frameworks allow the potential outcome distributions to vary across some indexing variables. For example, the potential outcomes in \citet{manski2004statistical} depend on exogenous conditioning variables $X$, i.e.\ heterogeneous treatment effects over $X$. However, we focus on the dependence of the potential outcomes on the choice variable $\pi$. They look similar from a mathematical perspective, but the implications are quite different since the result in this paper allows for the effect of social interaction. This point becomes clearer when we extend the model to the case that includes additional conditioning variables. We further investigate the finite sample penalty of the lower bound in \eqref{eq:main-theorem}, which measures the possible difference of $u(\delta^{MES}, \theta)$ from the ideal solution $U(\pi_{M^*}, \theta)$. First, the penalty converges to zero at an exponential rate as $N_{kt}$ increases for all $t\in \{0,1\}$ and $k \in \{1, \dots, K\}$. Second, the penalty is maximized when $\Delta_{M^*k}=\{A_{k}+A_{M^*}\} ^{1/2}/2$ for each $k \neq M^*$. Thus, we can compute the upper bound of the penalty as follows: \begin{align} \sum_{k=1}^{K} \exp \Bigg[-2\Delta_{{M^*}k}^2 \{A_{k}+A_{{M^*}}\}^{-1} \Bigg] \cdot \Delta_{{M^*}k} \leq \frac{1}{2} \cdot e^{-\frac{1}{2}}\sum_{k=1, k \neq M^*}^{K} \{A_{k}+A_{{M^*}}\}^{\frac{1}{2}}.
\label{eq:ub-of-penalty} \end{align} Third, it is interesting to investigate the relationship between the penalty size and the cardinality of $\Pi$, denoted by $K$. Consider the following example of two possible choice sets $\Pi_1$ and $\Pi_2$ such that $\Pi_1 \subset \Pi_2$. Let $\pi_{M^*}$ be the optimal solution in $\Pi_1$. If $\pi_{M^*}$ is also the optimal solution in $\Pi_2$, then $\Pi_2$ has a larger penalty than $\Pi_1$. However, if the optimal solution in $\Pi_2$, denoted by $\pi_{M^{**}}$, is different from $\pi_{M^{*}}$, then $\Pi_2$ may have a smaller penalty than $\Pi_1$. Note that $\Delta_{M^{**}k} > \Delta_{M^{*}k}$ for all $\pi_k \in \Pi_1$ and that there may exist some $\pi_k \in \Pi_1$ such that $\exp [-2\Delta_{{M^{**}}k}^2 \{A_{k}+A_{{M^{**}}} \}^{-1} ] < \exp [-2\Delta_{{M^*}k}^2 \{A_{k}+A_{{M^*}} \}^{-1} ]$. Therefore, a larger choice set may improve the finite sample lower bound only if it contains a better welfare outcome. Finally, we investigate the uniform bound of the regret function over $\theta$. The upper bound of the regret function for $\delta^{MES}$ is expressed in terms of the penalty: \eqs{ 0 \leq R(\delta^{MES}, \theta) \leq \sum_{k=1}^{K} \exp \Bigg[-2\Delta_{{M^*}k}^2 \{A_k + A_{M^*} \}^{-1} \Bigg] \cdot \Delta_{{M^*}k} \leq \frac{1}{2} \cdot e^{-\frac{1}{2}}\sum_{k=1, k \neq M^*}^{K} \{A_{k}+A_{{M^*}}\}^{\frac{1}{2}}. } Different from the result in \citet{manski2004statistical}, $A_{{M^*}}$ on the right hand side depends on $\theta$ since $\pi_{M^*}$ is defined in terms of $U(\pi,{\theta})$. Therefore, we need an additional step to achieve the uniform bound. Let $\overline{A} := \max_{k \in \{1,\dots, K\}}A_k$. Note that $\overline{A} \ge A_{M^*}$ and that $\overline{A}$ is independent of $\theta$.
Then, the desired uniform bound is achieved as follows: \begin{align} 0 \leq \sup_{{\theta} \in \Theta}R(\delta^{MES}, {\theta})\leq \frac{1}{2} \cdot e^{-\frac{1}{2}}\sum_{k=1, A_k \neq \overline{A}}^{K} \{A_{k}+\overline{A}\}^{\frac{1}{2}}.\label{eq:upper_bound_regret} \end{align} These finite sample bounds give us two useful applications. First, we apply this bound to solve the quasi-optimal experiment design problem. Second, we can extend the bound to covariate-dependent treatment rules and determine the minimum sample size that justifies adopting a finer covariate set, as in \citet{manski2004statistical}. We provide these applications in the following two subsections. \subsection{Application 1: Quasi-optimal Experiment Design} We study the optimal experiment design problem under interference using the upper bound of the maximum regret. Specifically, we focus on the randomized saturation design, which consists of a two-stage randomized experiment (for example, see \cite{baird2018optimal}). Suppose that we are given many clusters. In the first stage, we assign different treatment ratios in $\Pi$ to each cluster \emph{randomly} according to a probability distribution $f$. In the second stage, a binary treatment is assigned to each member of a cluster according to the treatment ratio assigned in the previous stage. Therefore, the randomized saturation design is fully characterized by the pair $(\Pi,f)$, and it encompasses other designs commonly employed under interference, such as clustered, blocked, and partial population designs. We now consider an experiment design problem that minimizes the maximum regret. We cannot compute the exact regret function because of the ambiguity in $\theta$. Instead, we reformulate the problem as minimizing the feasible upper bound of the regret in \eqref{eq:upper_bound_regret}. Recall that $N$ denotes the total sample size over all clusters and let $\Pi=\{\pi_1, \pi_2, \ldots, \pi_K\}$ be a finite set of treatment ratios.
Since $\Pi$ is a finite set, we can write $f=(\alpha_1, \alpha_2, \ldots, \alpha_K)$ with $\sum_{k=1}^K \alpha_k = 1$, where $\alpha_k$ is the probability mass of assigning $\pi_k$. The subsample sizes can be written in terms of the treatment ratios and their corresponding probabilities: $N_{k0}= (1-\pi_k)\alpha_k N$ and $N_{k1}= \pi_k\alpha_kN$ for all $k= 1, 2,\ldots, K.$ Then, for each $A_k$, we have \eqs{ A_k & = \frac{(1-\pi_k)^2}{N_{k0}} + \frac{\pi_k^2}{N_{k1}} \\ & = \frac{(1-\pi_k)}{\alpha_k N} + \frac{\pi_k}{\alpha_k N} \\ & = \frac{1}{\alpha_k N}, }which makes the optimization problem simple. Without loss of generality, let $\overline{A}=A_1$. We substitute $A_k$ in \eqref{eq:upper_bound_regret} and drop all irrelevant variables to get \eqs{ &\hskip-60pt \min_{\{\alpha_k\}_{k=1}^K} \sum_{k=2}^K \left(\frac{1}{\alpha_1N} + \frac{1}{\alpha_k N}\right)^{1/2} \\ \mbox{subject to }~~ &\sum_{k=1}^K \alpha_k = 1 \\ & \alpha_1 \le \alpha_k \mbox{ for } k=2,\ldots,K. } Solving this optimization problem, we find that the quasi-optimal design assigns equal probabilities, $\alpha^*_k=1/K$, only when $K=2$. It is worth noting that \citet{baird2018optimal} derive the optimal randomized saturation design based on statistical power, whereas we target the maximum regret directly (see \cite{manski2016sufficient} for further discussion). \subsection{Application 2: Covariate-dependent Treatment Rules}\label{sec:extensions} In this section, we extend the model and consider covariate-dependent treatment rules. We first introduce new notation. Let $X$ be a vector of covariates. In the same spirit as Assumption \ref{assump:discrete Pi}, we restrict our attention to discrete and finite covariates. Then, we can vectorize the possible outcomes of $X$ and partition the population into $L$ different subsets denoted by $\mathcal{X}:=\{x_1,\ldots,x_L\}$.
To avoid complicated notation, we assume a common domain of treatment ratios $\Pi$ for each $x_l$.\footnote{We can allow different assignment ratio sets at the cost of extra notation, e.g.\ $\Pi:=\cup_{l=1}^L \Pi_l$, where $\Pi_l:=\{\pi_1,\ldots,\pi_{K_l}\}$ is the set of assignment ratios for $x_l$.} We define the statistical treatment rule as $\delta(x,\omega^n):\mathcal{X}\times \Omega \mapsto \Pi$. Let $\boldsymbol{\pi}:=(\pi_1,\ldots,\pi_L)'$ be a vector of treatment assignment ratios, where $\pi_l$ is applied to subgroup $x_l \in \mathcal{X}$. Let $\boldsymbol{p}$ be the vector of population subgroup proportions. Then, $\bar{\pi}:= \boldsymbol{p}'\boldsymbol{\pi}$ becomes the unconditional treatment ratio. Under Assumption \ref{assump:anonymous}, the response function can be rewritten as $y_j(t_j, \bar{\pi})$. Given $\boldsymbol{\pi}$ and $\theta$, the outcome of the subgroup with covariate $x_l$ is \begin{align} U_l(\boldsymbol{\pi},\theta) := (1-\pi_l) \cdot E_{\theta}\left[ Y_0(\bar{\pi}) \vert X=x_l \right] + \pi_l \cdot E_{\theta} \left[Y_1 (\bar{\pi}) \vert X=x_l \right]. \end{align} Note that $U_l$ is affected by the treatment ratios of other covariate types through $\bar{\pi}$ as well as by its own ratio $\pi_l$. Let $\boldsymbol{\delta}(\omega^n):=(\delta(x_1,\omega^n), \ldots, \delta(x_L,\omega^n) )$ be a vector of statistical treatment rules over $\mathcal{X}$ when sample $\omega^n$ is realized, i.e.\ $\boldsymbol{\delta}(\omega^n): \Omega \mapsto \Pi^L$. The expected outcome of the whole population is defined by the weighted sum of $U_l$: \begin{align} u(\boldsymbol{\delta},\theta) := \sum_{l=1}^L \left[\int U_l(\boldsymbol{\delta}(\omega^n),\theta) dQ^n_{\theta}(\omega^n)\right] \Pr(X = x_l). \end{align} If $\Pr(X=x_l;\theta)=1$ for some $l$, then $\pi_l=\bar{\pi}\equiv \pi$, $L=1$ and $u(\delta,\theta)=\int U(\delta(\omega^n),\theta)dQ^n_{\theta}(\omega^n)$.
Therefore, the expected outcome becomes equation \eqref{eq:expected-outcome-general}, where there exists a single population type. Similarly to \eqref{eq:optimal-solution}, we can define the minimax regret solution of the decision problem as \eqs{ \boldsymbol{\delta}^* := \argmin_{\boldsymbol{\delta} \in D} \sup_{\theta \in \Theta} R(\boldsymbol{\delta}, \theta), }where $R(\boldsymbol{\delta}, \theta):= \max_{ \boldsymbol{d} \in D} u(\boldsymbol{d},\theta) - u(\boldsymbol{\delta},\theta)$ is the regret function. Since the expected welfare with covariate $x_l$ is affected by the treatment assignment ratios of other covariates $x_m\neq x_l$, we need to find the decision rule simultaneously over all elements in $\mathcal{X}$, i.e.\ the decision rule vector $\boldsymbol{\delta}$. It is worth noting that, when we consider each $x_l$ as a single group, this extension can be interpreted as multiple groups with interaction between groups via $\bar{\pi}$. We now construct the multinomial empirical success rule conditional on covariate $x_l$. Note that $\Pi^L$ contains $K^L$ elements, $\vert \Pi^L \vert = K^L < \infty$. Let $\boldsymbol{\pi}_k$ be a generic element of $\Pi^L$. Then, the population (unconditional) treatment ratio is $\bar{\pi}_k=\boldsymbol{p}'\boldsymbol{\pi}_k$. The empirical mean of $Y_t(\bar{\pi}_k)$ conditional on $x_l$ is \eqs{ \hat{E}_{\theta}[Y_t(\bar{\pi}_{k})|X=x_l]:=\frac{\sum_{n_k=1}^{N_{k}} y_{n_k}(\bar{\pi}_k)\cdot \mathbbm{1}(t_{n_k}(\bar{\pi}_k)=t, X=x_l)}{\sum_{n_k=1}^{N_{k}} \mathbbm{1}(t_{n_k}(\bar{\pi}_k)=t, X=x_l)}~~\mbox{for}~~k=1,\ldots,K^L~\mbox{and}~t=0,1.
} Finally, the conditional multinomial empirical success rule (CMES) is defined as follows: \begin{align} \boldsymbol{\delta}^{CMES}(\omega^n) := \sum_{k=1}^{K^L} \boldsymbol{\pi}_k \cdot \mathbbm{1}[\hat{U}(\boldsymbol{\pi}_k)>\max_{\boldsymbol{\pi} \in \Pi^L_{-k} } \hat{U}(\boldsymbol{\pi})] \label{eq:CMES} \end{align} where $\Pi^L_{-k} := \Pi^L \setminus \{\boldsymbol{\pi}_k\}$ and \begin{align*} \hat{U}(\boldsymbol{\pi}_k) & := \sum_{l=1}^L \Pr(X=x_l)\cdot \hat{U}_l(\boldsymbol{\pi}_k) \\ & = \sum_{l=1}^L \Pr(X=x_l)\Bigg[(1 - \pi_{kl}) \cdot \hat{E}_{\theta}[Y_0(\bar{\pi}_{k})|X=x_l] + \pi_{kl} \cdot \hat{E}_{\theta}[Y_1(\bar{\pi}_k)|X=x_l] \Bigg], \end{align*} where $\pi_{kl}$ is the $l$-th element of the $L$-dimensional vector $\boldsymbol{\pi}_k$. The CMES rule $\boldsymbol{\delta}^{CMES}(\omega^n)$ in \eqref{eq:CMES} looks similar to the (unconditional) MES rule in Section \ref{sec:multinomial}. However, $\boldsymbol{\pi}_k$ is now an $L$-dimensional vector and the rule itself is an $L$-dimensional vector-valued function. Let $U(\boldsymbol{\pi}_k, \theta)$ be the population counterpart of $\hat{U}(\boldsymbol{\pi}_k)$, obtained by replacing $\hat{E}_{\theta}$ with $E_{\theta}$. Then, we can define the expected outcome given the CMES rule $\boldsymbol{\delta}^{CMES}$ as follows: \begin{align*} u(\boldsymbol{\delta}^{CMES}, \theta) &= \sum_{k=1}^{K^L} \Pr(\boldsymbol{\delta}^{CMES}(\omega^n)=\boldsymbol{\pi}_k; \theta) \cdot U(\boldsymbol{\pi}_k, \theta) = \sum_{k=1}^{K^L} \Pr\left(\bigcap^{K^L}_{j=1, j\neq k}\{\hat{U}(\boldsymbol{\pi}_k)> \hat{U}(\boldsymbol{\pi}_j) \}; \theta \right)\cdot U(\boldsymbol{\pi}_k, \theta). \end{align*} We are now ready to extend the bounds of the expected outcome in (\ref{eq:main-theorem}) to the CMES rule. \begin{theorem}\label{thm:bounds-cmes} Fix $\theta\in\Theta$.
Let $\Pi^L = \{\boldsymbol{\pi}_1, \dots, \boldsymbol{\pi}_{K^L} \}$, $\Delta_{kj} :=|U(\boldsymbol{\pi}_k, \theta)- U(\boldsymbol{\pi}_{j}, \theta)|$ for $k,j=1,\ldots,K^L$, and $\boldsymbol{\pi}_{M^*}:=\argmax_{\boldsymbol{\pi} \in \Pi^L} U(\boldsymbol{\pi},\theta)$. Then, the following inequality holds: \begin{align*} U(\boldsymbol{\pi}_{M^*}, \theta)- \sum_{k=1}^{K^{L}} \exp & \left( -2\Delta_{M^{*}k}^2\cdot\left\{\sum_{l=1}^L \Pr(X=x_l)^2 (A_{kl}+A_{M^{*}l})\right\}^{-1} \right)\cdot \Delta_{M^{*}k} \\ & \hskip140pt \leq u(\boldsymbol{\delta}^{CMES}, \theta)\leq U(\boldsymbol{\pi}_{M^*}, \theta), \label{eq:main-corollary} \end{align*} where $A_{kl}:= (1-\pi_{kl})^{2}N_{k0l}^{-1} + \pi_{kl}^{2}N_{k1l}^{-1} $ and $A_{{M^*}l}:=(1-\pi_{M^*l})^{2}N_{M^*0l}^{-1} + \pi_{M^*l}^{2}N_{M^*1l}^{-1},$ with $N_{ktl}$ representing the number of individuals with $\boldsymbol{\pi}_{k}$, $T=t$, and $X=x_l$. \end{theorem} Using similar arguments to those in Section \ref{sec:multinomial}, we define the non-negative finite sample penalty: \eqs{ D(\boldsymbol{\delta}^{CMES}, \theta):= \sum_{k=1}^{K^L} \exp \left( -2\Delta_{M^{*}k}^2\cdot\left\{\sum_{l=1}^L \Pr(X=x_l)^2 (A_{kl}+A_{M^{*}l})\right\}^{-1} \right)\cdot \Delta_{M^{*}k}, } and derive the following inequality: \begin{align} D(\boldsymbol{\delta}^{CMES}, \theta) \leq \frac{1}{2}\cdot e^{-\frac{1}{2}} \sum_{k=1, k\neq M^{*}}^{K^L} \left\{\sum_{l=1}^L \Pr(X=x_l)^2 (A_{kl}+A_{M^{*}l})\right\}^{\frac{1}{2}}.
\end{align} Then, we can derive the uniform bound of the regret function, which can be computed from observables: \begin{align} \label{bound on maximum regret 1} 0\leq \sup_{\theta \in \Theta}R(\boldsymbol{\delta}^{CMES}, \theta) \leq \frac{1}{2}\cdot e^{-\frac{1}{2}} \sum_{k=1, A_{kl}\neq \bar{A}_l}^{K^L} \left\{\sum_{l=1}^L \Pr(X=x_l)^2 (A_{kl}+\bar{A}_l)\right\}^{\frac{1}{2}}, \end{align} where $\bar{A}_l:= \max_{k \in \{1,\dots, K^L \}}A_{kl}$ for all $l \in \{1,\dots,L\}$. We investigate the relationship between the sample size and the proper conditioning level of covariates. Recall that using all available covariates may reduce statistical precision in practice. Let $\mathcal{Z} := \{z_1,\dots, z_{L'}\}$ be a partitioning of the covariate space that is coarser than $\mathcal{X}$. Thus $L'< L$ and there exists a mapping $z(\cdot) :\mathcal{X} \mapsto \mathcal{Z}.$ Slightly abusing notation, we use the same $\boldsymbol{\pi}$ and $\boldsymbol{p}$ for assignment ratios and proportions.
Finally, if $\boldsymbol{\pi}_{k'}$ is a generic element of $\Pi^{L'}$ and $\boldsymbol{\delta}_Z^{CMES}$ is the MES rule conditional on $Z$, then the population expected outcome becomes: \begin{align*} u(\boldsymbol{\delta}_Z^{CMES}, \theta) &= \sum_{k'=1}^{K^{L'}} \Pr\left(\bigcap^{K^{L'}}_{j=1, j\neq k'}\{\hat{U}(\boldsymbol{\pi}_{k'})> \hat{U}(\boldsymbol{\pi}_j) \}\right)\cdot U(\boldsymbol{\pi}_{k'}, \theta), \end{align*} where $ U(\boldsymbol{\pi}_{k'}, \theta) := \sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{k'},\theta)= \sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot[(1 - \pi_{k'l'}) \cdot E_{\theta}[Y_0(\bar{\pi}_{k'})|Z=z_{l'}] + \pi_{k'l'} \cdot E_{\theta}[Y_1(\bar{\pi}_{k'})|Z=z_{l'}] ]$ and $\bar{\pi}_{k'}:= \boldsymbol{p}'\boldsymbol{\pi}_{k'}$ for $ k' \in \{1, \dots, K^{L'}\} $ and $ l' \in \{1, \dots, L'\}.$ Similar to the results in Theorem \ref{thm:bounds-cmes}, we can bound the expected outcome in the following corollary: \begin{cor}\label{cor:bounds-zcmes} Fix $\theta\in\Theta$. Let $\Pi^{L'} = \{\boldsymbol{\pi}_1, \dots, \boldsymbol{\pi}_{K^{L'}} \}$, $\Delta_{k'j} :=|U(\boldsymbol{\pi}_{k'}, \theta)- U(\boldsymbol{\pi}_{j}, \theta)|$ for $k',j=1, \dots, K^{L'}$, and $\boldsymbol{\pi}_{M^*}:=\argmax_{\boldsymbol{\pi} \in \Pi^{L'}} U(\boldsymbol{\pi},\theta)$.
Then, the following inequality holds: \begin{align} &\sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta)- \sum_{k'=1}^{K^{L'}} \exp \left( -2\Delta_{M^{*}k'}^2\cdot\left\{\sum_{l'=1}^{L'} \Pr(Z=z_{l'})^2 (A_{k'l'}+A_{M^{*}l'})\right\}^{-1} \right)\cdot \Delta_{M^{*}k'} \nonumber\\ & \hskip140pt\leq u(\boldsymbol{\delta}_Z^{CMES}, \theta) \leq \sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta) \end{align} where $A_{k'l'}:= (1-\pi_{k'l'})^{2}N_{k'0l'}^{-1} + \pi_{k'l'}^{2}N_{k'1l'}^{-1} $ and $A_{{M^*}l'}:=(1-\pi_{M^*l'})^{2}N_{{M^*}0l'}^{-1} + \pi_{M^*l'}^{2}N_{{M^*}1l'}^{-1}$, with $N_{k'tl'}$ representing the number of individuals with $\boldsymbol{\pi}_{k'}$, $Z=z_{l'}$, and $T=t$. \end{cor} We provide the bounds on the regret function and maximum regret function in the following inequalities: \begin{align} \sum_{l=1}^L & \Pr(X=x_l)\cdot U_l(\boldsymbol{\pi}_{M^*}, \theta)-\sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta) \nonumber\\ &\leq R(\boldsymbol{\delta}_Z^{CMES}, \theta)\nonumber\\ &\leq \sum_{l=1}^L \Pr(X=x_l)\cdot U_l(\boldsymbol{\pi}_{M^*}, \theta)-\sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta)+D(\boldsymbol{\delta}_Z^{CMES}, \theta) .
\end{align} and \begin{align}\label{bound on maximum regret for zcmes} L_G \leq\sup_{\theta \in \Theta} R(\boldsymbol{\delta}_Z^{CMES}, \theta)\leq H(\boldsymbol{\delta}_Z^{CMES}), \end{align} where $$D(\boldsymbol{\delta}_Z^{CMES}, \theta):=\sum_{k'=1}^{K^{L'}} \exp \left( -2\Delta_{M^{*}k'}^2\cdot\left\{\sum_{l'=1}^{L'} \Pr(Z=z_{l'})^2 (A_{k'l'}+A_{M^{*}l'})\right\}^{-1} \right)\cdot \Delta_{M^{*}k'},$$ $$L_G:=\sup_{\theta \in \Theta} \left\{\sum_{l=1}^L \Pr(X=x_l)\cdot U_l(\boldsymbol{\pi}_{M^*}, \theta)-\sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta)\right\},$$ and $$H(\boldsymbol{\delta}_Z^{CMES}) :=\sup_{\theta \in \Theta} \left\{\sum_{l=1}^L \Pr(X=x_l)\cdot U_l(\boldsymbol{\pi}_{M^*}, \theta)-\sum_{l'=1}^{L'} \Pr(Z=z_{l'})\cdot U_{l'}(\boldsymbol{\pi}_{M^*}, \theta) + D(\boldsymbol{\delta}_Z^{CMES}, \theta)\right\}.$$ Let $\boldsymbol{N}_{KTL}:= \left(N_{ktl}: k=1,\ldots,K, t=0,1,\mbox{ and } l=1,\ldots,L\right)$ be a 3-dimensional array of stratum sample sizes. Recall that the upper bound of the maximum regret conditional on $X$ decreases as each $N_{ktl}$ increases. Therefore, we can find sufficient sample sizes that justify conditioning on $X$ rather than conditioning on $Z$: \begin{align}\label{sufficient sample size problem} &\min \boldsymbol{N}_{KTL} \\ & \mbox{subject to } L_G> \frac{1}{2}\cdot e^{-\frac{1}{2}} \sum_{k=1, A_{kl}\neq \bar{A}_l}^{K^L} \left\{\sum_{l=1}^L \Pr(X=x_l)^2 (A_{kl}+\bar{A}_l)\right\}^{\frac{1}{2}}, \end{align} where we minimize each component of the array $\boldsymbol{N}_{KTL}$. Similar to the results in \citet{manski2004statistical}, the solution may not be unique. Also, it provides a sufficient condition and not a necessary one. \subsection{Numerical Experiments} In this subsection, we conduct numerical experiments in which we determine a sufficient sample size for using covariate-dependent treatment rules. Suppose that we have a binary covariate $X \in \{low,high\}$ available in the sample.
We now construct a treatment rule with or without the covariate. The sufficient sample size guarantees that the maximum regret from a covariate-dependent rule is smaller than that from a rule that ignores the covariate. Thus, we can focus on covariate-dependent rules if the sample size is larger than the sufficient one. In this experiment, the sample is partitioned into 2 groups ($X=low$, $X=high$), and $L = \vert \mathcal{X} \vert= 2$. Therefore, the covariate-dependent assignment ratio $\boldsymbol{\pi}$ is also a 2-dimensional vector, $\boldsymbol{\pi}=(\pi_{low},\pi_{high})$. Suppose that we consider two possible treatment rules, $\{\boldsymbol{\pi}_1= (0.5, 0.5), \boldsymbol{\pi}_2= (0.7, 0.3) \}.$ The corresponding unconditional treatment ratios are: \eqs{ \bar{\pi}_1 & = \Pr(X=low)\cdot0.50 + \Pr(X=high)\cdot0.50,\\ \bar{\pi}_2 & = \Pr(X=low)\cdot0.70 + \Pr(X=high)\cdot0.30. } We let $\Pr(X=low)$ vary over $\{0.1, 0.5, 0.9, 0.99\}$, so that $\Pr(X=high)=1-\Pr(X=low)$ takes values in $\{0.9, 0.5, 0.1, 0.01\}$. Recall that $(N_{ktx}: k=1,2, t=0,1, \mbox{ and } x= low, high)$ denotes the sample size of each partition separated by treatment rule $k$, treatment status $t$, and covariate $x$. In addition, $N$ denotes the total sample size. $N_1$ and $N_2$ denote the sample sizes of the clusters where we apply $\boldsymbol{\pi_1}$ and $\boldsymbol{\pi_2}$, respectively. Assuming that all states of nature are feasible, we compute the lower bound of the maximum regret for the MES rule that does not depend on covariate $X$. We also compute the upper bounds of the maximum regret for covariate-dependent MES rules as the sample size increases. In Tables \ref{table0.1}--\ref{table0.99}, we summarize the experiment results. The sufficient sample size is as low as $N=21$ when $\Pr(X=low)= 0.1$, $N=18$ when $\Pr(X=low)= 0.5,$ $N=68$ when $\Pr(X=low)= 0.9,$ and $N=5875$ when $\Pr(X=low)= 0.99.$ In each table, we also provide a breakdown of the sample sizes in each partition.
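The bound computations behind Table \ref{table0.1} can be sketched as follows. This is our reading of the upper bound in \eqref{bound on maximum regret 1}, evaluated at the first row ($N=21$, $\Pr(X=low)=0.1$); the choice of which rule to exclude from the sum, and the table's rounding of stratum sizes, are assumptions on our part.

```python
import math

p = {"low": 0.1, "high": 0.9}                     # Pr(X = x)
pi = {1: {"low": 0.5, "high": 0.5},               # candidate rule pi_1
      2: {"low": 0.7, "high": 0.3}}               # candidate rule pi_2
# Stratum sample sizes N_{ktl} as reported in the first row of Table 1 (N = 21).
N = {(1, 0, "low"): 1, (1, 1, "low"): 1, (2, 0, "low"): 1, (2, 1, "low"): 1,
     (1, 0, "high"): 4, (1, 1, "high"): 4, (2, 0, "high"): 6, (2, 1, "high"): 3}

def A(k, l):
    # A_{kl} = (1 - pi_{kl})^2 / N_{k0l} + pi_{kl}^2 / N_{k1l}
    return (1 - pi[k][l]) ** 2 / N[(k, 0, l)] + pi[k][l] ** 2 / N[(k, 1, l)]

Abar = {l: max(A(k, l) for k in pi) for l in p}   # stratum-wise maxima
# In our reading, rule 2 attains Abar in the "low" stratum and is excluded,
# so only rule 1 contributes a term to the sum.
upper = 0.5 * math.exp(-0.5) * math.sqrt(
    sum(p[l] ** 2 * (A(1, l) + Abar[l]) for l in p))
# upper comes out around 0.14, in line with the 0.144 reported for this row,
# and well below the 0.450 lower bound for the rule that ignores the covariate.
assert upper < 0.450
```

Rows further down the table shrink this upper bound as the stratum sizes grow, which is the sense in which $N=21$ is already sufficient for this configuration.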
This numerical study shows that covariate-dependent treatment rules can be justified with relatively small sample sizes. \begin{table}[H] \caption{Sufficient Sample Sizes: $\Pr(X=low)=0.10$} \label{table0.1} \centering \resizebox{\textwidth}{!} {\begin{tabular}{rrrrrrrrrrrrrr} \\ \hline $N$ & $N_1$ & $N_2$ & $N_{10low}$ & $N_{11low}$ & $N_{20low}$ & $N_{21low}$ & $N_{10high}$ & $N_{11high}$ & $N_{20high}$ & $N_{21high}$ & Upper bound & Lower bound \\ \hline 21 & 10 & 11 & 1 & 1 & 1 & 1 & 4 & 4 & 6 & 3 & 0.144 & 0.450 \\ 37 & 18 & 19 & 1 & 1 & 1 & 2 & 8 & 8 & 11 & 5 & 0.100 & 0.450 \\ 52 & 26 & 26 & 2 & 2 & 1 & 2 & 11 & 11 & 16 & 7 & 0.085 & 0.450 \\ 68 & 34 & 34 & 2 & 2 & 1 & 3 & 15 & 15 & 21 & 9 & 0.074 & 0.450 \\ 82 & 40 & 42 & 2 & 2 & 2 & 3 & 18 & 18 & 26 & 11 & 0.067 & 0.450 \\ 100 & 50 & 50 & 3 & 3 & 2 & 4 & 22 & 22 & 31 & 13 & 0.061 & 0.450 \\ 116 & 58 & 58 & 3 & 3 & 2 & 4 & 26 & 26 & 36 & 16 & 0.056 & 0.450 \\ 132 & 66 & 66 & 4 & 4 & 2 & 5 & 29 & 29 & 41 & 18 & 0.053 & 0.450 \\ 149 & 74 & 75 & 4 & 4 & 3 & 6 & 33 & 33 & 46 & 20 & 0.050 & 0.450 \\ 162 & 80 & 82 & 4 & 4 & 3 & 6 & 36 & 36 & 51 & 22 & 0.048 & 0.450 \\ \hline \end{tabular}} \end{table} \begin{table}[H] \centering \caption{Sufficient Sample Sizes: $\Pr(X=low)=0.50$} \resizebox{\textwidth}{!} {\begin{tabular}{rrrrrrrrrrrrrr} \\ \hline $N$ & $N_1$ & $N_2$ & $N_{10low}$ & $N_{11low}$ & $N_{20low}$ & $N_{21low}$ & $N_{10high}$ & $N_{11high}$ & $N_{20high}$ & $N_{21high}$ & Upper bound & Lower bound \\ \hline 18 & 8 & 10 & 2 & 2 & 2 & 3 & 2 & 2 & 3 & 2 & 0.145 & 0.250 \\ 34 & 16 & 18 & 4 & 4 & 3 & 6 & 4 & 4 & 6 & 3 & 0.104 & 0.250 \\ 50 & 24 & 26 & 6 & 6 & 4 & 9 & 6 & 6 & 9 & 4 & 0.086 & 0.250 \\ 66 & 32 & 34 & 8 & 8 & 5 & 12 & 8 & 8 & 12 & 5 & 0.075 & 0.250 \\ 81 & 40 & 41 & 10 & 10 & 7 & 14 & 10 & 10 & 14 & 6 & 0.067 & 0.250 \\ 98 & 48 & 50 & 12 & 12 & 8 & 17 & 12 & 12 & 17 & 8 & 0.061 & 0.250 \\ 114 & 56 & 58 & 14 & 14 & 9 & 20 & 14 & 14 & 20 & 9 & 0.057 & 0.250\\ 130 & 64 & 66 & 16 & 16 & 10 & 23 & 
16 & 16 & 23 & 10 & 0.053 & 0.250 \\ 146 & 72 & 74 & 18 & 18 & 11 & 26 & 18 & 18 & 26 & 11 & 0.050 & 0.250 \\ 161 & 80 & 81 & 20 & 20 & 13 & 28 & 20 & 20 & 28 & 12 & 0.048 & 0.250 \\ \hline \end{tabular}} \label{table0.5} \end{table} \begin{table}[H] \centering \caption{Sufficient Sample Sizes: $\Pr(X=low)=0.90$} \resizebox{\textwidth}{!} {\begin{tabular}{rrrrrrrrrrrrrr} \\ \hline $N$ & $N_1$ & $N_2$ & $N_{10low}$ & $N_{11low}$ & $N_{20low}$ & $N_{21low}$ & $N_{10high}$ & $N_{11high}$ & $N_{20high}$ & $N_{21high}$ & Upper bound & Lower bound \\ \hline 21 & 10 & 11 & 4 & 4 & 3 & 6 & 1 & 1 & 1 & 1 & 0.136 & 0.072 \\ 37 & 18 & 19 & 8 & 8 & 5 & 11 & 1 & 1 & 2 & 1 & 0.100 & 0.072 \\ 52 & 26 & 26 & 11 & 11 & 7 & 16 & 2 & 2 & 2 & 1 & 0.085 & 0.072 \\ 68 & 34 & 34 & 15 & 15 & 9 & 21 & 2 & 2 & 3 & 1 & 0.074 & 0.072 \\ 82 & 40 & 42 & 18 & 18 & 11 & 26 & 2 & 2 & 3 & 2 & 0.067 & 0.072 \\ 100 & 50 & 50 & 22 & 22 & 13 & 31 & 3 & 3 & 4 & 2 & 0.061 & 0.072 \\ 116 & 58 & 58 & 26 & 26 & 16 & 36 & 3 & 3 & 4 & 2 & 0.056 & 0.072 \\ 132 & 66 & 66 & 29 & 29 & 18 & 41 & 4 & 4 & 5 & 2 & 0.053 & 0.072 \\ 149 & 74 & 75 & 33 & 33 & 20 & 46 & 4 & 4 & 6 & 3 & 0.050 & 0.072 \\ 162 & 80 & 82 & 36 & 36 & 22 & 51 & 4 & 4 & 6 & 3 & 0.048 & 0.072 \\ \hline \end{tabular}} \label{table0.9} \end{table} \begin{table}[H] \centering \caption{Sufficient Sample Sizes: $\Pr(X=low)=0.99$} \resizebox{\textwidth}{!} {\begin{tabular}{rrrrrrrrrrrrrr} \\ \hline $N$ & $N_1$ & $N_2$ & $N_{10low}$ & $N_{11low}$ & $N_{20low}$ & $N_{21low}$ & $N_{10high}$ & $N_{11high}$ & $N_{20high}$ & $N_{21high}$ & Upper bound & Lower bound \\ \hline 21 & 10 & 11 & 4 & 4 & 3 & 6 & 1 & 1 & 1 & 1 & 0.14609 & 0.00792 \\ 37 & 18 & 19 & 8 & 8 & 5 & 12 & 1 & 1 & 1 & 1 & 0.10463 & 0.00792 \\ 53 & 26 & 27 & 12 & 12 & 8 & 17 & 1 & 1 & 1 & 1 & 0.08590 & 0.00792 \\ 69 & 34 & 35 & 16 & 16 & 10 & 23 & 1 & 1 & 1 & 1 & 0.07455 & 0.00792 \\ 84 & 42 & 42 & 20 & 20 & 12 & 28 & 1 & 1 & 1 & 1 & 0.06721 & 0.00792 \\ 5764 & 2882 & 2882 & 1426 & 1426 & 856
& 1996 & 15 & 15 & 21 & 9 & 0.00799 & 0.00792 \\ 5780 & 2890 & 2890 & 1430 & 1430 & 858 & 2002 & 15 & 15 & 21 & 9 & 0.00798 & 0.00792 \\ 5796 & 2898 & 2898 & 1434 & 1434 & 861 & 2007 & 15 & 15 & 21 & 9 & 0.00797 & 0.00792 \\ 5812 & 2906 & 2906 & 1438 & 1438 & 863 & 2013 & 15 & 15 & 21 & 9 & 0.00796 & 0.00792 \\ 5828 & 2914 & 2914 & 1442 & 1442 & 865 & 2019 & 15 & 15 & 21 & 9 & 0.00795 & 0.00792 \\ 5844 & 2922 & 2922 & 1446 & 1446 & 868 & 2024 & 15 & 15 & 21 & 9 & 0.00793 & 0.00792 \\ 5860 & 2930 & 2930 & 1450 & 1450 & 870 & 2030 & 15 & 15 & 21 & 9 & 0.00792 & 0.00792 \\ 5875 & 2938 & 2937 & 1454 & 1454 & 872 & 2035 & 15 & 15 & 21 & 9 & 0.00791 & 0.00792 \\ 5892 & 2946 & 2946 & 1458 & 1458 & 875 & 2041 & 15 & 15 & 21 & 9 & 0.00790 & 0.00792 \\ 5907 & 2954 & 2953 & 1462 & 1462 & 877 & 2046 & 15 & 15 & 21 & 9 & 0.00789 & 0.00792 \\ 5924 & 2962 & 2962 & 1466 & 1466 & 880 & 2052 & 15 & 15 & 21 & 9 & 0.00788 & 0.00792 \\ \hline \end{tabular}} \label{table0.99} \end{table} \section{Asymptotic Optimality}\label{sec:optimality} In this section, we study the asymptotic optimality of the multinomial empirical success (MES) rule. We first transform the multivariate decision problem into a vector-valued binary decision problem. Then, we show the asymptotic optimality of MES by extending the limit experiment framework in \citet{hirano2009asymptotics} to the vector-valued binary decision problem. We first define the $K(K-1)/2$-dimensional vector \eqs{ \delta^{VMES}_n := \left( \delta_{n,(1,2)},\ldots, \delta_{n,(k,k')},\ldots, \delta_{n,(K-1,K)} \right)', }where $\delta_{n,(k,k')}=I(\hat{U}(\pi_k)>\hat{U}(\pi_{k'}))$. We will call $\delta^{VMES}_n$ the vectorized multinomial empirical success (VMES) rule.\footnote{We use the subscript $n$ hereafter to distinguish a finite sample decision rule from the corresponding asymptotic one.} Note that the VMES rule has $2^{K (K-1)/2}$ different actions while the MES rule has only $K$ actions.
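As a concrete illustration (a minimal Python sketch with hypothetical estimated welfares, not part of the formal analysis), the VMES vector and the underlying MES choice can be computed as follows.

```python
from itertools import combinations

def vmes(U_hat):
    """delta_{(k,k')} = I( U_hat[k] > U_hat[k'] ) for every pair k < k'."""
    return {(k, kp): int(U_hat[k] > U_hat[kp])
            for k, kp in combinations(range(len(U_hat)), 2)}

def mes_from_vmes(delta, K):
    """Recover the MES action: with no ties, the MES choice is the index
    that wins all of its pairwise comparisons."""
    wins = [0] * K
    for (k, kp), d in delta.items():
        wins[k] += d          # k wins the comparison when d = 1
        wins[kp] += 1 - d     # k' wins the comparison when d = 0
    return max(range(K), key=wins.__getitem__)

U_hat = [0.2, 0.5, 0.3]   # hypothetical estimated welfares for K = 3 rules
delta = vmes(U_hat)
```

With $K=3$ the VMES rule nominally has $2^{3}=8$ actions, but only the action vectors consistent with a total order of the estimated welfares are ever realized, which is why the two rules carry the same information.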
However, they are equivalent in the sense that we can always construct one from the other given $\{\hat{U}(\pi_k):k=1,\ldots,K\}.$ To the best of our knowledge, this is the first paper to investigate the asymptotic optimality of a multiple decision problem by transforming it into a vector-valued binary decision problem.\footnote{A similar idea has long been used in the multiple \emph{hypothesis testing} literature, where a $K$-multiple hypothesis problem is converted into a $2^K$-finite action problem (see, e.g.\ \citet{lehmann1952testing,lehmann1957theory} and \citet{cohen2005decision}). } We now have $J:=K (K-1) /2$ binary decision problems. Following \citet{van1991asymptotic} and \citet{hirano2009asymptotics}, we investigate the asymptotic optimality around local alternatives. We first restrict our attention to the parametric class of $Q$; the extension to the semiparametric class follows immediately. Let $\mathcal{E}_n:=\{ Q^n_{{\theta}}:{\theta} \in \Theta \subset \mathbb{R}^d \}$ be a sequence of experiments, where $\Theta$ is an open subset of $\mathbb{R}^d$. We define a vector of welfare contrasts \eqs{ g({\theta}) := \left(g_1({\theta}), \ldots, g_J({\theta}) \right)', }where $g_j(\theta):=U(\pi_k,{\theta}) - U(\pi_{k'},{\theta})$ is the welfare contrast between $\pi_k$ and $\pi_{k'}$. For notational simplicity, we use $j$ for a generic combination $(k,k')$. We consider local alternatives around $\theta_0$, where $g({\theta}_0) = 0$. Note that this is the most difficult case in the parameter space. If $g_j({\theta}_0)\neq 0$ for a given $\theta_0$, one action is strictly dominated by the other around $\theta_0$ and the decision between $(k,k')$ becomes trivial asymptotically. We next define a loss function.
We consider a loss function that is additively separable for each binary decision problem $j$: \eq{ L(\delta, \theta) = \sum_{j=1}^J L_j(\delta_j, \theta), \label{eq:additive loss} }where $L_j$ is a loss function for a binary decision rule $\delta_j$ between $\pi_k$ and $\pi_{k'}$. Specifically, we use the regret loss in this analysis: \eqs{ L_j(\delta_j,\theta) &:= g_j(\theta)\left[I(g_j(\theta)>0) - \delta_j \right]. }Using the loss function in \eqref{eq:additive loss} and experiment $Q_{\theta}^n$, we define a risk function as usual: \eq{ R(\delta,\theta) & := \int_{\Omega} L(\delta(\omega^n),\theta) dQ_{\theta}^n (\omega^n)\label{eq:risk_function} \\ & = \sum_{j=1}^J \int_{\Omega} L_j(\delta_j(\omega^n),\theta) dQ_{\theta}^n (\omega^n) \nonumber \\ & \equiv \sum_{j=1}^J R_j(\delta_j,\theta). \nonumber }Note that the risk function is also additively separable. To achieve a tractable asymptotic experiment, we assume that $Q_{\theta}$ is differentiable in quadratic mean (DQM) at $\theta_0$. For the formal definition, let $q_{\theta}$ be the density of $Q_{\theta}$ with respect to Lebesgue measure $\mu$. Then, there exists a measurable function $s(\omega)$ such that, as $h \to 0$, \eqs{ \int \left[ \sqrt{q_{\theta_0+h}(\omega)} - \sqrt{q_{\theta_0}(\omega)} - \frac{1}{2} h's(\omega) \sqrt{q_{\theta_0}(\omega)} \right]^2 d\mu(\omega) = o(\Vert h \Vert^2). } We can usually compute $s(\omega)$ as $s = \frac{\partial \log q_{\theta}}{\partial \theta}\vert_{\theta=\theta_0}$, and the Fisher information matrix is defined as $I_0=E_{\theta_0}[ss']$. Applying the standard local asymptotic normality arguments, we can show that the limit experiment becomes $N(\Delta|h,I_0^{-1})$, i.e.~the multivariate normal distribution with mean $h$ and variance $I_{0}^{-1}$ (see Proposition 3.1 in \citet{hirano2009asymptotics}). We next define the corresponding loss and risk functions in the asymptotic experiment.
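Before doing so, we note that the DQM condition is easy to verify numerically in simple models. The following Python sketch (purely illustrative) checks it for a single Bernoulli($\theta$) observation, whose score is $s(1)=1/\theta$ and $s(0)=-1/(1-\theta)$: the squared remainder divided by $h^2$ vanishes as $h \to 0$.

```python
import math

def dqm_residual(theta, h):
    """Integral (here a sum over omega in {0,1}) of the squared DQM remainder
    [ sqrt(q_{theta+h}) - sqrt(q_theta) - (1/2) h s sqrt(q_theta) ]^2
    for the Bernoulli(theta) model."""
    total = 0.0
    for q, q_h, s in [(theta, theta + h, 1.0 / theta),
                      (1.0 - theta, 1.0 - theta - h, -1.0 / (1.0 - theta))]:
        rem = math.sqrt(q_h) - math.sqrt(q) - 0.5 * h * s * math.sqrt(q)
        total += rem ** 2
    return total

# The remainder is o(h^2): the ratio residual / h^2 shrinks as h shrinks.
r1 = dqm_residual(0.4, 1e-2) / 1e-4
r2 = dqm_residual(0.4, 1e-4) / 1e-8
```

For this model one can also confirm that $E_{\theta_0}[s^2] = 1/\{\theta_0(1-\theta_0)\}$, the Bernoulli Fisher information.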
Recall that $g(\theta)$ is a $J\times 1$ vector of welfare contrasts with $g(\theta_0)=0$. Let $\triangledown_\theta g$ be the $J\times d$ matrix of partial derivatives of $g$ at $\theta_0$. Then, under some smoothness assumption on $g$, we have $\sqrt{n}g(\theta_0+h/\sqrt{n}) \to (\triangledown_{\theta} g)h$. It follows that \eqs{ \sqrt{n}L_j\left(\delta_j, \theta_0 + \frac{h}{\sqrt{n}}\right) &\to (\triangledown_{\theta} g_j)h\left[ I\left( (\triangledown_{\theta} g_j)h >0 \right) - \delta_j \right] \\ & \equiv L_{j,\infty} (\delta_j, h), } where $\triangledown_{\theta} g_j$ is the $j$-th row of matrix $\triangledown_{\theta} g$. Using the additive separability, we can define the asymptotic loss function as \eqs{ L_{\infty}(\delta,h) := \sum_{j=1}^J L_{j,\infty} (\delta_j, h). }Similarly, we can define the corresponding asymptotic risk function as \eqs{ R_{j,\infty}(\delta_j,h) & := \lim_{n\to\infty} \sqrt{n} R_j\left(\delta_j,\theta_0+\frac{h}{\sqrt{n}}\right)\\ & = \int L_{j,\infty}(\delta_j(\Delta),h)dN(\Delta|h,I_0^{-1})\\ R_{\infty}(\delta, h) & := \sum_{j=1}^J R_{j,\infty}(\delta_j,h)\\ R_{\infty}(\delta) & := \sup_{h \in \mathbb{R}^d} R_{\infty}(\delta,h). }Abusing notation slightly, we use the same $\delta_j$ for both $R_{j}(\delta_j,\theta)$ and $R_{j,\infty}(\delta_j,h)$. However, notice that the one in $R_j$ is defined on the sample space ($\omega^n \in \Omega$) while the one in $L_{j,\infty}$ is defined on the simpler asymptotic experiment space ($\Delta \in \mathbb{R}^d$). We collect all the regularity conditions. \begin{assum}\label{ass:function_g} Let $g$ be a vector-valued welfare contrast function whose dimension is $J\times 1$. Then, it satisfies that $g(\theta_0)=0$ and $g(\theta)$ is differentiable at $\theta_0$. \end{assum} \begin{assum}\label{ass:LAN} The sequence of experiments $\mathcal{E}_n:=\{Q^n_{\theta}: \theta\in\Theta, n=1,\ldots\}$ is differentiable in quadratic mean at $\theta_0 \in \Theta \subset \mathbb{R}^d$ with nonsingular $I_0$.
\end{assum} \begin{assum}\label{ass:estimator} (i) There exists the best regular estimator $\hat{\theta}$ such that \eqs{ \sqrt{n}(\hat{\theta}_n-\theta_0 -h/\sqrt{n}) \overset{h}{\rightsquigarrow} N(0,I_0^{-1})~~ \forall h \in \mathbb{R}^d, }where $\overset{h}{\rightsquigarrow}$ denotes the convergence in distribution under the sequence of $Q_{\theta_0 + h/\sqrt{n}}^n$.\\ (ii) Let $\sigma_{g_j}^2:=(\triangledown_\theta g_j) I_0^{-1} (\triangledown_\theta g_j)'$. Then, there exists an estimator $\hat{\sigma}_{g_j}$ such that \eqs{ \hat{\sigma}_{g_j} \overset{p}{\to} \sigma_{g_j}~~\forall j =1,\ldots,J }under $\theta_0$. \end{assum} These regularity conditions are similar to those in \citet{hirano2009asymptotics}. Assumption \ref{ass:function_g} is a mild extension of the welfare contrast to a vector-valued function. We impose that the smoothness assumption holds element-by-element. Assumption \ref{ass:LAN} is the standard condition for the local asymptotic normality. Therefore, the asymptotic experiment can be approximated by the multivariate normal distribution for each $j$. Finally, Assumption \ref{ass:estimator} assures the existence of an efficient estimator for $\theta_0$ and a consistent estimator for $\sigma_{g_j}$ for each $j$. \begin{theorem}\label{thm:asymp_opt_parametric} Suppose that Assumptions \ref{ass:function_g}--\ref{ass:estimator} hold. 
Let $\delta_n^R$ be a $J \times 1$ dimensional decision rule whose $j$-th component is defined as \eqs{ \delta_{n,j}^R :=I \left(\frac{\sqrt{n}g_j(\hat{\theta}_n)}{\hat{\sigma}_{g_j}}>0\right). } Then, it holds that \eq{ \sup_{H \in \mathcal{H}} \liminf_{n\to \infty}\sup_{h \in H} \sqrt{n} R\left(\delta_n^{R}, \theta_0 + \frac{h}{\sqrt{n}} \right) = \inf_{\delta_n\in\mathcal{D}} \sup_{H \in \mathcal{H}} \liminf_{n \to \infty} \sup_{h \in H} \sqrt{n} R \left(\delta_n, \theta_0 +\frac{h}{\sqrt{n}}\right), }where $\mathcal{H}$ is the collection of all finite subsets of $\mathbb{R}^d$ and $\mathcal{D}$ is the set of all sequences of decision rules that converge to the asymptotic decision problem. \end{theorem} This theorem is a modest extension of Theorem 3.5 of \citet{hirano2009asymptotics} to the finite action framework with an additively separable loss function. In this paper, we focus on the statistical decision problem under social interaction, which is transformed into choosing the fraction of the treatment. However, the result of this theorem is applicable to any case where the decision problem is represented as a choice among multiple actions. An extension to semiparametric models is straightforward. We have restricted our attention to the class of parametric models $\Theta$ in this section, but we can extend it to a class of distributions $\mathcal{P}$ at the cost of more complicated notation. Instead of repeating the same arguments with messier notation, we refer to \citet[section 4]{hirano2009asymptotics} and \citet{van1991asymptotic} for the extension. The main difference is that the multivariate Gaussian limit experiment is now replaced by an infinite Gaussian sequence. Since the optimal decision rule has the same threshold constant in both parametric and semiparametric models, we can claim the asymptotic optimality of the MES rule based on the finite action framework.
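As an implementation note, the rule $\delta_{n,j}^R$ in Theorem \ref{thm:asymp_opt_parametric} depends only on the sign of $g_j(\hat{\theta}_n)$, since $\sqrt{n}/\hat{\sigma}_{g_j}>0$. A minimal Python sketch with hypothetical inputs:

```python
import math

def plug_in_rule(g_hat, sigma_hat, n):
    """delta_{n,j}^R = I( sqrt(n) * g_j(theta_hat) / sigma_hat_j > 0 ), j = 1..J.
    Because sqrt(n)/sigma_hat_j > 0, this reduces to I( g_j(theta_hat) > 0 ),
    i.e. the (V)MES rule applied to estimated welfare contrasts."""
    return [int(math.sqrt(n) * g / s > 0) for g, s in zip(g_hat, sigma_hat)]

# Hypothetical estimated contrasts and standard deviations for J = 3 pairs:
decision = plug_in_rule([0.12, -0.05, 0.0], [1.0, 2.0, 0.5], n=400)
```

In particular, the decision vector is invariant to the estimated scale $\hat{\sigma}_{g_j}$ and the sample size, which is why the simple sign-based MES rule inherits the optimality of $\delta_n^R$.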
Suppose that we have a random sample $(y_t(\pi_k), y_t(\pi_{k'}))$ for the binary decision problem between $\pi_k$ and $\pi_{k'}$. Let $F_{t,k}$ and $F_{t,k'}$ be the distributions of the sample, which are unknown but included in the class $\mathcal{P}$. We assume that $\mathcal{P}$ is the largest class of distributions satisfying \eqs{ \sup_{F\in\mathcal{P}} \int \vert y \vert^2 dF(y) < \infty. }Recall that the welfare contrast function in this binary decision problem becomes \eqs{ g_j & = U(\pi_k) - U(\pi_{k'}) \\ & = (1-\pi_k) \int y dF_{0,k}(y) + \pi_k \int y dF_{1,k}(y) -(1-\pi_{k'}) \int y dF_{0,k'}(y) - \pi_{k'} \int y dF_{1,k'}(y). }Note that the MES rule can be written as \eqs{ \delta_{n,(k,k')} & = I ( \hat{g}_{n,(k,k')} > 0 ), }where \eqs{ \hat{g}_{n,j} & := \hat{U}(\pi_k) - \hat{U}(\pi_{k'}) \\ & = (1-\pi_k)\cdot \frac{\sum_{n_k=1}^{N_{k}} y_{n_k}(\pi_k)\cdot I(t_{n_k}(\pi_k)=0)}{\sum_{n_k=1}^{N_{k}} I(t_{n_k}(\pi_k)=0)} + \pi_k \cdot \frac{\sum_{n_k=1}^{N_{k}} y_{n_k}(\pi_k)\cdot I(t_{n_k}(\pi_k)=1)}{ \sum_{n_k=1}^{N_{k}} I(t_{n_k}(\pi_k)=1)} \\ & \hskip30pt - (1-\pi_{k'})\cdot \frac{\sum_{n_{k'}=1}^{N_{k'}} y_{n_{k'}}(\pi_{k'})\cdot I(t_{n_{k'}}(\pi_{k'})=0)}{\sum_{n_{k'}=1}^{N_{k'}} I(t_{n_{k'}}(\pi_{k'})=0)} - \pi_{k'} \cdot \frac{\sum_{n_{k'}=1}^{N_{k'}} y_{n_{k'}}(\pi_{k'})\cdot I(t_{n_{k'}}(\pi_{k'})=1)}{ \sum_{n_{k'}=1}^{N_{k'}} I(t_{n_{k'}}(\pi_{k'})=1)}. }Since $\hat{g}_{n,j}$ is an asymptotically efficient estimator of $g_j$ \citep{bickel1993efficient}, we can conclude that $\delta_{n,(k,k')}$ is asymptotically minimax optimal for the regret loss function and that the MES rule is asymptotically optimal for the additively separable loss function. \section{Conclusion}\label{sec:conclusion} In this paper, we study statistical treatment rules under social interaction. We impose the anonymous interaction assumption and consider a treatment decision problem where we choose the treatment ratio for each cluster.
We propose a simple but intuitive rule called the multinomial empirical success (MES) rule. We construct the finite sample regret bound of the MES rule and show how it can be applied in treatment decision problems. Finally, we show that the proposed MES rule achieves asymptotic optimality in the sense of \citet{hirano2009asymptotics}. We may consider a few possible extensions. First, it would be interesting to investigate the finite sample optimality of the MES rule; a direct combination of the finite action framework, which we adopt in the asymptotic optimality analysis, with the game theoretic approach in \citet{stoye2009minimax} does not work immediately in the finite sample case. Second, it would be interesting to relax the anonymous interaction assumption. We then have to ask what kind of additional information helps reduce the dimension of the action space; network information is one such example. We leave these questions for future research. \newpage
\section{Conclusions and Future Work} Flow-based algorithms for local graph clustering exhibit very strong cut improvement and runtime guarantees. In our work we have exploited efficient warm-start and push-relabel heuristics to provide practitioners with a very simple yet fast flow-based method for real-world data mining applications. In addition to outperforming related flow-based clustering algorithms in runtime, our method is able to better incorporate domain-specific semi-supervised information about ground truth target clusters in a large network, by giving users the option to specify penalties and strict constraints for excluding specific seed nodes from the output set. Given the success of seed exclusion penalties for flow-based methods, in future work we will continue to explore how similar penalties may be incorporated in other well-known clustering approaches including spectral and random-walk based techniques. \section{Background and Related Work} We begin with an overview of notation and then provide a technical review of important concepts. Let $G = (V,E)$ represent an undirected and unweighted graph.\footnote{Following previous results, we will prove runtime and cut improvement guarantees for unweighted graphs, though in practice our implementations accommodate graphs with arbitrary floating point weights.} For each node $v \in V$ let $d_v$ be its degree, i.e.\ the number of edges that have $v$ as an endpoint. For any set $S \subset V$ let $|E_S|$ be the number of interior edges in $S$, $\textbf{vol}(S) = \sum_{v \in S} d_v$ be the \emph{volume} of $S$, and $\textbf{cut}(S) = \textbf{vol}(S) - 2|E_S|$ denote the number of edges crossing from $S$ to $\bar{S} = V\backslash S$. Each set $S$ uniquely identifies a set of edges crossing from $S$ to $\bar{S}$, so we will frequently refer to a set of nodes $S$ as a \emph{cut} in a graph.
One way to quantify the community structure of a set $S$ is by measuring its conductance: \[ \phi(S) = \frac{\textbf{cut}(S)} {\min\{ \textbf{vol}(S), \textbf{vol}(\bar{S})\}}. \] A small value for $\phi(S)$ indicates that $S$ is well-connected internally but only loosely connected to the rest of the graph, and therefore represents a ``good'' cluster from a topological perspective. \subsection{Local Variants of Conductance} In local graph clustering we are given a seed (or \emph{reference}) set $R \subset V$ that is small with respect to the size of the graph. If we fix some value of a \emph{locality} parameter $\varepsilon \in \left[\frac{\textbf{vol}(R)}{\textbf{vol}(\bar{R})}, \infty \right)$, then the following objective function is a modification of the conductance score biased towards the set $R$, which we call the \emph{local conductance} measure: \begin{equation} \label{local-cond} \phi_{R,\varepsilon}(S) = \frac{\textbf{cut}(S)}{\textbf{vol}(R\cap S) - \varepsilon \textbf{vol}(\bar{R} \cap S)}. \end{equation} One approach to localized community detection is to optimize the above objective over all sets $S$ such that the denominator is positive. This function was first introduced specifically for $\varepsilon = \frac{\textbf{vol}(R)}{\textbf{vol}(\bar{R})}$ by Andersen and Lang~\cite{AndersenLang2008}. Orecchia and Zhou later considered larger values of $\varepsilon$, which effectively restricts the search space to sets $S$ that overlap significantly with $R$, leading to algorithms that minimize the objective in time independent of the size of the input graph $G$~\cite{OrecchiaZhu2014}. Both of these algorithms generalize the earlier Max-flow Quotient-cut Improvement (MQI) algorithm~\cite{LangRao2004}, which computes the minimum conductance subset of $R$, and fits the above paradigm if we allow $\varepsilon = \infty$. 
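These quantities are straightforward to compute directly from an adjacency list. The following Python sketch (for illustration only; it is not our actual implementation) evaluates $\phi(S)$ and $\phi_{R,\varepsilon}(S)$ on a toy graph consisting of two triangles joined by a single edge.

```python
def cut_and_vol(adj, S):
    """cut(S) and vol(S) for an unweighted, undirected adjacency list."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol = sum(len(adj[u]) for u in S)
    return cut, vol

def conductance(adj, S):
    """phi(S) = cut(S) / min(vol(S), vol(complement of S))."""
    cut, vol = cut_and_vol(adj, S)
    vol_total = sum(len(adj[u]) for u in adj)
    return cut / min(vol, vol_total - vol)

def local_conductance(adj, S, R, eps):
    """phi_{R,eps}(S); only meaningful when the denominator is positive."""
    cut, _ = cut_and_vol(adj, S)
    S, R = set(S), set(R)
    vol_RS = sum(len(adj[u]) for u in S & R)       # vol(R ∩ S)
    vol_RbarS = sum(len(adj[u]) for u in S - R)    # vol(Rbar ∩ S)
    return cut / (vol_RS - eps * vol_RbarS)

# Two triangles {0,1,2} and {3,4,5} joined by the edge {2,3}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
```

On this graph the set $S=\{0,1,2\}$ has $\textbf{cut}(S)=1$, $\textbf{vol}(S)=7$, and $\phi(S)=1/7$, as one can check by hand.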
\subsection{Minimizing Local Conductance} \label{minlocalcond} Although it is NP-hard to find the minimum conductance set of a graph $G$, one can minimize~\eqref{local-cond} efficiently by repeatedly solving a sequence of related minimum $s$-$t$ cut problems. First fix a parameter $\alpha \in (0,1)$ and construct a new graph $G_{st}$ using the following steps: \begin{compactitem} \item Keep original nodes in $G$ and edges with weight 1 \item Introduce source node $s$ and sink node $t$ \item For every $r\in R$, connect $r$ to $s$ with weight $\alpha d_r$ \item For every $j \in \bar{R}$, connect $j$ to $t$ with weight $\alpha \varepsilon d_j$. \end{compactitem} The minimum $s$-$t$ cut problem seeks the minimum weight set of edges in $G_{st}$ that, when removed, will separate the source node $s$ from the sink node $t$. Every subset of nodes $S \subset V$ in $G$ induces an $s$-$t$ cut in $G_{st}$ where the two sides of the cut are $\{s\}\cup S$ and $\{t\} \cup \bar{S}$. The weight of this $s$-$t$ cut can be given entirely in terms of cuts and volumes of sets in $G$: \begin{align*} {\alg{STcut}}(S) &= \textbf{cut}(S) + \alpha \varepsilon \textbf{vol}(\bar{R} \cap S) + \alpha \textbf{vol}(R \cap \bar{S})\\ &= \textbf{cut}(S) + \alpha \varepsilon \textbf{vol}(\bar{R} \cap S) - \alpha \textbf{vol}(R \cap {S}) + \alpha \textbf{vol}(R). \end{align*} If there exists some $S$ such that $\alg{STcut}(S) < \alpha \textbf{vol}(R)$, one can show with a few steps of algebra that this implies $\phi_{R,\varepsilon}(S) < \alpha$. Therefore, $\phi_{R,\varepsilon}(S)$ can be minimized by finding the smallest $\alpha$ such that the minimum $s$-$t$ cut of $G_{st}$ is exactly $\alpha\textbf{vol}(R)$. This can be accomplished by performing binary search over $\alpha$ or simply starting with $\alpha = \phi_{R,\varepsilon}(R)$ and iteratively finding minimum $s$-$t$ cuts in $G_{st}$ for increasingly smaller values of $\alpha$ until no more improvement is possible.
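As a sanity check of the identity above, the following Python sketch (on a hypothetical toy graph) computes $\alg{STcut}(S)$ both by enumerating the cut edges of $G_{st}$ and via the closed-form expression, and confirms that $\alg{STcut}(S) < \alpha \textbf{vol}(R)$ holds exactly when $\phi_{R,\varepsilon}(S) < \alpha$ (for a positive denominator).

```python
def stcut_direct(adj, S, R, alpha, eps):
    """Weight of the s-t cut ({s} with S, {t} with the rest) in G_st, edge by
    edge: unit-weight edges of G cut by S, terminal edges s-r for r in R
    outside S (weight alpha*d_r), and terminal edges j-t for j in S outside R
    (weight alpha*eps*d_j)."""
    S, R = set(S), set(R)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    from_s = sum(alpha * len(adj[r]) for r in R - S)       # s-side edges cut
    to_t = sum(alpha * eps * len(adj[j]) for j in S - R)   # t-side edges cut
    return cut + from_s + to_t

def stcut_formula(adj, S, R, alpha, eps):
    """cut(S) + alpha*eps*vol(Rbar ∩ S) - alpha*vol(R ∩ S) + alpha*vol(R)."""
    S, R = set(S), set(R)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol = lambda T: sum(len(adj[u]) for u in T)
    return cut + alpha * eps * vol(S - R) - alpha * vol(S & R) + alpha * vol(R)

# Two triangles joined by the edge {2,3}; hypothetical seed set and parameters.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
R, S, alpha, eps = {0, 1, 2}, {0, 1, 2, 3}, 0.5, 0.5
```

Iterating minimum $s$-$t$ cut solves with $\alpha$ set to the current best $\phi_{R,\varepsilon}$ value then yields the minimizer, as described above.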
For sufficiently large $\varepsilon$, there is no need to explicitly construct $G_{st}$ in order to solve the min-cut objective. Instead, localized techniques can be used which repeatedly solve flow and cut problems on small subgraphs until a global solution is reached~\cite{OrecchiaZhu2014,VeldtGleichMahoney2016}. For more details on the flow-based framework presented here and its relationship to regularized optimization problems and random-walk based methods, we refer to related papers~\cite{Fountoulakis2017,pmlr-v32-gleich14}. \subsection{Maximum $s$-$t$ Flows} \label{maxflow} Flow-based methods which operate by repeatedly solving minimum $s$-$t$ cut problems in the above manner include \alg{MQI}~\cite{LangRao2004}, \alg{FlowImprove}~\cite{AndersenLang2008}, \alg{LocalImprove}~\cite{OrecchiaZhu2014}, and \alg{SimpleLocal}~\cite{VeldtGleichMahoney2016}. These methods all obtain small $s$-$t$ cuts by solving the dual maximum $s$-$t$ flow problem. A major contribution in our work is to show that strongly-local algorithms for minimizing localized conductance measures can be obtained without any explicit computation of maximum $s$-$t$ flows. However, an efficient algorithm we develop later will apply explicit \emph{preflow} computations, so we briefly review key flow concepts here. Let $G_A = (V\cup \{s,t\}, A)$ be a directed graph with distinguished source and sink nodes $s$ and $t$ and capacities $c_{ij} > 0$ for each directed edge $(i,j)$ (called an \emph{arc}) in a set $A$. We can turn any undirected graph $G = (V\cup \{s,t\}, E)$ into a directed graph by replacing each edge $\{i,j\} \in E$ with two arcs $(i,j)$ and $(j,i)$.
A valid $s$-$t$ flow on $G_A$ is a function $F : A \rightarrow \mathbb{R}_{\geq 0}$ which assigns flow values $f_{ij}$ satisfying \begin{align} & f_{ij} \leq c_{ij} \text{ for $(i,j) \in A$} \\ & {\sum_{(j,i) \in A} f_{ji} = \sum_{(i,k) \in A} f_{ik} \text{ for $i \in V$} } \end{align} which are referred to as \emph{capacity} and \emph{conservation} constraints respectively. The flow $F$ is defined to be skew-symmetric, i.e. $f_{ij} = -f_{ji}$. The maximum $s$-$t$ flow problem seeks the flow $F$ which routes a maximum amount of flow from $s$ to $t$. Given a flow $F$ for a graph $G_A$, the residual graph $G_F = (V\cup\{s,t\}, A_F)$ is defined to be the directed network in which arc $(i,j) \in A_F$ has capacity $c_{ij}^F = c_{ij} - f_{ij}$. If an arc has nonzero residual capacity, it means that one can push more flow across it in search of new ways to route flow from $s$ to $t$. An arc $(i,j)$ is saturated if it has a residual capacity of zero. A flow $F$ is a maximum $s$-$t$ flow if and only if there exists no path of unsaturated arcs from $s$ to $t$. In this case, the set of nodes $S$ reachable from $s$ via a path of unsaturated arcs defines the minimum $s$-$t$ cut. \subsection{Random Walks and Other Diffusion Based Clustering Algorithms} Spectral methods are another widely popular approach to local graph clustering. Among these methods, the Andersen-Chung-Lang \alg{Push} procedure for computing an approximate personalized PageRank vector is well-known for its strongly-local runtime and good cut improvement guarantees~\cite{AndersenChungLang2006}. Random-walk based spectral methods typically find local cuts in a graph by running a localized diffusion from a small set of seed nodes. This diffusion produces an embedding with limited support over the nodes in the graph, which can then be rounded using some form of a sweep cut procedure to produce a cut.
In contrast, flow-based methods solve biased minimum-cut computations and directly produce a cut rather than an embedding which must be rounded. Another key distinction between random-walk and flow-based approaches is the type of seed set these methods require. Random-walk diffusions are typically able to grow a single seed node or a small seed set into a larger cluster with good conductance. For example, Andersen et al.~\cite{AndersenChungLang2006} showed that if one starts from any one of a large number of individual seed nodes in a target cluster $T$, the \alg{Push} algorithm will return a localized cluster with conductance at most $O(\sqrt{\phi(T)})$. Flow-based methods are able to provide stronger cut improvement guarantees, but can only do so if they begin with a large seed set that has significant overlap with the target cluster $T$ (see e.g.\ the results in~\cite{AndersenLang2008,OrecchiaZhu2014,VeldtGleichMahoney2016}). In practice, flow-based methods may perform poorly if the seed set $R$ is too small. One approach for obtaining a large enough seed set is to first run a spectral diffusion from a small number of starting nodes and then refine the output using the flow-based method. Another approach is to take the starting seed nodes and grow them by a neighborhood with a small radius to produce a localized seed set that is sufficiently large for flow-based methods to output meaningful results. \paragraph{Other Diffusion Based Methods} In addition to random-walk based methods, there also exist other localized community detection algorithms that operate by computing a diffusion and then performing a sweep cut on the resulting embedding. Among others, Kloster and Gleich~\cite{Kloster-2014-hkrelax} developed a fast method for computing local communities based on the heat kernel diffusion. 
The runtime of their algorithm, \alg{hk-relax}, depends on the parameters of the diffusion but is independent of the size of the input graph; hence the method is strongly-local. More recently, Wang et al.~\cite{pmlr-v70-wang17b} introduced the Capacity Releasing Diffusion (CRD), another strongly local algorithm, which spreads mass around nodes in a graph using a flow-like mechanism. Although CRD incorporates flow-based dynamics, we note that it does not compute biased minimum $s$-$t$ cuts on the input graph. For clarity, in this paper we reserve the term \emph{flow-based} to refer to methods that fit the paradigm outlined in Section~\ref{minlocalcond}. \section{Experiments} We demonstrate the performance of \alg{FlowSeed} in several community detection experiments and large scale 3D image segmentation problems. Code for our method and experiments is available online at~\url{https://github.com/nveldt/FlowSeed}. \subsection{Local Community Detection} \label{cd} Our first experiment demonstrates the robustness of \alg{FlowSeed} in local community detection, thanks to its ability to penalize the exclusion of certain seed nodes from the output. \paragraph{Datasets} We consider four graphs from the SNAP repository~\cite{snapnets}: DBLP, Amazon, LiveJournal, and Orkut. Each network comes with sets of nodes representing so-called ``functional communities''~\cite{Yang2015}. Communities in these networks specifically represent user groups in a social network (LiveJournal and Orkut), product categories (Amazon), or academic publication venues (DBLP). For each graph we select the ten largest communities out of the top 5000 communities identified by Yang and Leskovec~\cite{yangComm}, which come with the data on the SNAP website. These communities range in size from a few hundred to a few thousand nodes.
The size of each network in terms of nodes and edges is given in Table~\ref{snap-stats}, along with the average community size and conductance among the ten largest communities in each network. \begin{table}[h] \caption{Number of nodes and edges for SNAP networks, along with target community size $|T|$ and target community conductance $\phi(T)$, averaged over the largest 10 communities.} \label{snap-stats} \centering \begin{tabular}{lllll} \toprule Graph & $|V|$ & $|E|$ & $|T|$ & $\phi(T)$ \\ \midrule DBLP & 317,080 & 1,049,866 & 3902 & 0.4948 \\ Amazon & 334,863 & 925,872 & 190 & 0.0289 \\ LiveJournal & 3,997,962 & 34,681,189 & 988 & 0.4469 \\ Orkut & 3,072,441 & 117,185,083 & 3877 & 0.6512 \\ \bottomrule \end{tabular} \end{table} \paragraph{Strongly-Local Algorithms} We compare our Julia implementation of \alg{FlowSeed} against several other standard local graph clustering algorithms that come with strong locality guarantees:\newline \noindent \alg{Push}: The random-walk diffusion method of Andersen et al.~\cite{AndersenChungLang2006}. We use a highly optimized C++ implementation of the algorithm with a MATLAB interface. This method relies on a PageRank teleportation parameter $\alpha_{pr}$ and a tolerance parameter $\varepsilon_{pr}$. The latter controls the accuracy to which the underlying PageRank problem has been solved, and implicitly controls how wide of a region is explored in the graph by the method.\newline \noindent \alg{HK-relax}: The heat kernel diffusion method of Kloster and Gleich~\cite{Kloster-2014-hkrelax}. This comes with diffusion parameter $t$ and a tolerance parameter $\varepsilon_{hk}$. We use the C++ implementation (with MATLAB interface) provided by the original authors, available online at~\url{https://github.com/kkloste/hkgrow}. 
\newline \noindent \alg{CRD}: The Capacity Releasing Diffusion of Wang et al.~\cite{pmlr-v70-wang17b}, implemented as a part of the LocalGraphClustering package~\url{https://github.com/kfoynt/LocalGraphClustering}. For this method we must set parameters $U$, $h$, and $w$ (see the original work for details). \newline \noindent \alg{SimpleLocal}: Our previous strongly-local flow-based method, which optimizes the localized conductance objective~\eqref{local-cond} for a seed set $R$ and locality parameter $\varepsilon$. We use the fast C++ implementation available from the LocalGraphClustering package. \newline \paragraph{Seed Set and Algorithm Parameters} For each target community in each network, we randomly select 5\% of the target nodes, which we refer to as the \emph{starter} nodes. We grow the starter nodes by their neighborhood to produce a seed set $R$ that we use as input for each algorithm. For \alg{HK-relax}, \alg{Push}, and \alg{CRD}, we also tried using the starter set as the seed set, but this was not as effective in practice. Similarly, for these three methods we tried using each one of the starter nodes one at a time as an individual seed node, taking the best conductance output as the result, but this was also ineffective. Therefore, we only report results for each method using the full seed set $R$. For both \alg{SimpleLocal} and \alg{FlowSeed} we use a locality parameter of $\varepsilon = 0.1$. We require \alg{FlowSeed} to strictly include the known 5\% of the target community, but do not add any soft penalties on excluding other seed nodes. For \alg{Push}, we set $\alpha_{pr} = 0.99$, and test a range of tolerance parameters: $\varepsilon_{pr} \in \{10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7} \}$, returning the output with the best conductance.
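As a point of reference for these parameter sweeps, the quantity being compared is the conductance of a set $S$: $\textbf{cut}(S)$ divided by the smaller of $\textbf{vol}(S)$ and $\textbf{vol}(\bar{S})$. A minimal sketch on a toy graph follows; the graph, candidate sets, and helper names here are our own illustration and not part of the released code.

```python
def conductance(adj, S):
    """phi(S) = cut(S) / min(vol(S), vol(V \\ S)) on an unweighted graph
    given as an adjacency dict {node: set(neighbors)}."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    denom = min(vol_S, vol_rest)
    return float("inf") if denom == 0 else cut / denom

# Toy example: two triangles joined by the single edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}

# Mirroring the sweeps above: each run of a method with one parameter
# setting plays the role of a candidate, and the sweep keeps whichever
# output scores lowest.
candidates = [{0, 1}, {2, 3}, {0, 1, 2}]
best = min(candidates, key=lambda S: conductance(adj, S))
```

Here the triangle $\{0,1,2\}$ wins, since its only boundary edge is the bridge between the two triangles.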
For \alg{HK-relax}, following the experiments of Kloster and Gleich~\cite{Kloster-2014-hkrelax}, we test several pairs of parameter settings: $(t,\varepsilon_{hk})$ = $(5, 10^{-4})$; $(10, 10^{-4} )$; $(20, 10^{-3} )$; $(40, 10^{-3} )$; $(80, 10^{-2} )$, again returning the lowest conductance output. We also experimented with smaller values for locality and tolerance parameters~$\varepsilon$, $\varepsilon_{pr}$, and $\varepsilon_{hk}$, all of which control how much of the graph is explored by their respective method. However, this only increased the runtime of these methods without yielding improved results. This is consistent with previous research that has shown that real-world networks often exhibit very good community structure at small scales, but do not tend to possess large sets with good topological community structure~\cite{leskovec2008statistical}. Thus exploring the graph using \alg{SimpleLocal}, \alg{FlowSeed}, \alg{HK-relax}, or \alg{Push} with a smaller tolerance or locality parameter is unlikely to return better results. Finally, for \alg{CRD} we use the default parameters $U = 3$, $h = 10$, and $w = 2$; we were unable to determine other parameter settings that led to consistently improved output in practice. \paragraph{Results} In Table~\ref{tab:snap}, we report conductance, cluster size, precision, recall, F1 scores, and runtimes for each method, averaged over the 10 communities for each network. \alg{FlowSeed} returns the best result among all methods on three of four datasets. Furthermore, it always outperforms \alg{SimpleLocal}, which solves a very similar objective but does not penalize the exclusion of seed nodes. The relative performance of all methods other than \alg{FlowSeed} varies significantly depending on the dataset. As expected, in many cases \alg{SimpleLocal} discards too many seed nodes, returning sets with very good conductance and precision, at the expense of poor recall. 
This is exhibited most clearly on the DBLP dataset, and to a lesser extent on Amazon. On DBLP, \alg{HK-relax} and \alg{Push} also exhibit a tendency to overemphasize conductance and output tiny sets with low recall. On Orkut, \alg{Push} grows sets that are too large, a tendency of the method that has also been documented in other work~\cite{Kloster-2014-hkrelax,VeldtGleichMahoney2016}. \alg{CRD} outperforms all other methods on LiveJournal and does reasonably well (relative to the other methods) on both Orkut and DBLP. However, it returns results that are significantly worse than all other algorithms on Amazon. \begin{table}[h] \caption{Average set size, conductance $\phi$, runtime (in seconds), precision, recall, and F1 score for five methods on four networks. Best F1 scores are shown in bold.} \label{tab:snap} \vspace{-.5\baselineskip} \centering \begin{tabular}{llllllll} \toprule Graph & method & size & $\phi$ & runtime & prec. & recall & F1 \\ \midrule DBLP & \alg{HK-relax} & 280 & 0.100 & 0.110 & 0.609 & 0.036 & 0.044 \\ & \alg{Push} & 80 & 0.130 & 0.168 & 0.607& 0.011 & 0.022 \\ & \alg{CRD} & 1460 & 0.255 & 3.569 & 0.468& 0.190 & 0.263 \\ & \alg{SimpleLocal} & 31 & 0.046 & 24.540 & 0.632& 0.006 & 0.011 \\ & \alg{FlowSeed} & 2789 & 0.254 & 9.491 & 0.414& 0.302 & \textbf{0.345} \\ \midrule Amazon & \alg{HK-relax} & 156 & 0.007 & 0.020 & 0.952 & 0.804 & 0.843 \\ & \alg{Push} & 180 & 0.007 & 0.225 & 0.953& 0.889 & 0.904 \\ & \alg{CRD} & 70 & 0.208 & 1.629 & 0.958& 0.374 & 0.521 \\ & \alg{SimpleLocal} & 154 & 0.007 & 0.096 & 0.906& 0.772 & 0.814 \\ & \alg{FlowSeed} & 214 & 0.018 & 0.332 & 0.892& 0.970 & \textbf{0.924} \\ \midrule LiveJournal & \alg{HK-relax} & 1373 & 0.144 & 0.206 & 0.432 & 0.593 & 0.406 \\ & \alg{Push} & 1867 & 0.363 & 0.172 & 0.444& 0.650 & 0.489 \\ & \alg{CRD} & 3230 & 0.098 & 69.584 & 0.464& 0.782 & \textbf{0.520} \\ & \alg{SimpleLocal} & 4485 & 0.035 & 17.932 & 0.371& 0.813 & 0.440 \\ & \alg{FlowSeed} & 4931 & 0.070 & 22.780
& 0.395& 0.896 & 0.484 \\ \midrule Orkut & \alg{HK-relax} & 3540 & 0.648 & 3.748 & 0.103 & 0.198 & 0.084 \\ & \alg{Push} & 16790 & 0.749 & 0.767 & 0.165& 0.706 & 0.267 \\ & \alg{CRD} & 4006 & 0.355 & 451.092 & 0.442& 0.457 & 0.428 \\ & \alg{SimpleLocal} & 3726 & 0.339 & 327.118 & 0.468& 0.451 & 0.439 \\ & \alg{FlowSeed} & 4049 & 0.379 & 439.327 & 0.505& 0.507 & \textbf{0.494} \\ \bottomrule \end{tabular} \end{table} \paragraph{Runtime Comparison of Flow-Based Methods} In Table~\ref{tab:snap} we see that \alg{Push} and \alg{HK-relax} are by far the fastest local clustering algorithms, taking only a fraction of a second in almost all cases. However, although these methods sometimes return good outputs, they do not consistently perform well across all datasets. Focusing next on the two flow-based methods, we see that \alg{SimpleLocal} and \alg{FlowSeed} trade off in runtime for the experiments summarized in Table~\ref{tab:snap}. However, the difference in runtime is greatly influenced by the fact that \alg{FlowSeed} solves a slightly different objective in order to ensure certain seed nodes are included in the output. In order to provide a clearer runtime comparison between these two related methods, we run both algorithms again, this time without any seed exclusion penalties for \alg{FlowSeed}. In this case the algorithms solve exactly the same objective and return the same output. This time we use a locality parameter that depends on the size of the seed set relative to the graph: $\varepsilon =5\textbf{vol}(R)/\textbf{vol}(\bar{R})$. This means that for the larger datasets, computations will not be as local. Therefore, the bottleneck for both methods will be their underlying flow subroutines, which are what we are most interested in comparing. The average runtimes for the two algorithms are given in Table~\ref{tab:runtimes}.
\begin{table}[t] \caption{Average runtimes (in seconds) for \alg{SimpleLocal} and \alg{FlowSeed} with no seed exclusion penalties. In this case the two algorithms optimize the same objective, but \alg{FlowSeed} is faster thanks to our warm-start heuristic and push-relabel flow subroutine. } \label{tab:runtimes} \centering \begin{tabular}{l c c c c} & DBLP & Amazon & LiveJournal & Orkut \\ \toprule \alg{FlowSeed}&5.4 & 1.3 & 107.8 & 229.3 \\ \alg{SimpleLocal} & 17.6& 3.5 & 134.9 & 632.2 \\ \bottomrule \end{tabular} \end{table} From these results we see that, thanks to our push-relabel implementation and warm start procedure, our Julia implementation outperforms the optimized C++ code for \alg{SimpleLocal}, which relies on Dinic's maximum flow algorithm as a subroutine and makes no use of warm starts. Thus, while \alg{HK-relax} and \alg{Push} maintain superior runtime performance in local graph clustering experiments, our work constitutes an improvement in running times for flow-based methods, which in some cases can provide the best community detection results. Before moving on we make one important comment distinguishing the implementations of \alg{SimpleLocal} and \alg{FlowSeed}. In theory both algorithms, at each step, try to find whether there exists some $S$ with $\phi_{R,\varepsilon}(S) < \alpha$ (or in the case of \alg{FlowSeed}: $\pi_R(S) < \alpha$) for some $\alpha \in (0,1)$. If they succeed, they update $\alpha \leftarrow \phi_{R,\varepsilon}(S)$ (respectively: $\alpha \leftarrow \pi_R(S)$) and repeat the process with a new $\alpha$ (see Algorithm~\ref{alg:wrapper}). However, in practice, the implementation of \alg{SimpleLocal} in the LocalGraphClustering package updates $\alpha$ by computing the standard conductance: $\alpha \leftarrow \phi(S)$.
This has the advantages that it sometimes leads to fewer iterations and guarantees that the final output set will have a \emph{standard} conductance score less than or equal to the minimum \emph{local} conductance score, though the output set may not actually minimize \emph{local} conductance~\eqref{local-cond}. In order to accurately compare the two algorithms, for our runtime experiment we also use the update $\alpha \leftarrow \phi(S)$ in our implementation of \alg{FlowSeed}. However, in all other experiments we do not make this change, since one of the key features of \alg{FlowSeed} is that it looks for sets that not only have low conductance, but also agree as much as possible with the semi-supervised information provided. \subsection{3D Image Segmentation on a Brain Scan} Next we turn to detecting target regions in a large graph constructed from a brain MRI. The data is made up of a labeled $256\times 287 \times 256$ MRI obtained from the MICCAI 2012 challenge~\cite{Marcus-2007-oasis}. In previous work~\cite{VeldtGleichMahoney2016} we demonstrated how to convert the image into a nearest-neighbor graph on the 3D voxels. Specifically, the MRI has $256 \times 287 \times 256 \approx 18$ million voxels, with each voxel represented by an integer between 0 and 4010. For each voxel we considered its 26 spatially adjacent neighbors, i.e.\ voxels whose indices differed by at most 1 in each of the three spatial dimensions. For adjacent voxels $u$ and $v$, we computed similarities between scan intensities $I_u$ and $I_v$ using the function $e^{-(\sqrt{I_u} - \sqrt{I_v})^2/(0.05)^2}$, similar to the approach of Shi and Malik~\cite{ShiMalik2000}. These similarities were then thresholded at 0.1 and multiplied by 10 to produce a set of weighted edges in the graph with minimum weight 1. In our experiments we therefore perform calculations in terms of weighted degrees and volumes in the graph.
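A compact sketch of this construction on a toy intensity volume follows; the function and variable names are ours, the array sizes and intensity values are toy stand-ins, and treating the 0.1 threshold as inclusive is our assumption.

```python
import math
from itertools import product

def voxel_graph(I):
    """Sketch of the voxel-graph construction described above:
    26-connectivity, similarity exp(-(sqrt(I_u)-sqrt(I_v))^2 / 0.05^2),
    thresholded at 0.1 and scaled by 10.  `I` is a nested list
    I[x][y][z] of nonnegative scan intensities."""
    nx, ny, nz = len(I), len(I[0]), len(I[0][0])
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    edges = {}
    for u in product(range(nx), range(ny), range(nz)):
        for d in offsets:
            v = (u[0] + d[0], u[1] + d[1], u[2] + d[2])
            if not (0 <= v[0] < nx and 0 <= v[1] < ny and 0 <= v[2] < nz):
                continue
            if v <= u:  # record each undirected edge once
                continue
            sim = math.exp(-(math.sqrt(I[u[0]][u[1]][u[2]])
                             - math.sqrt(I[v[0]][v[1]][v[2]])) ** 2 / 0.05 ** 2)
            if sim >= 0.1:
                edges[(u, v)] = 10 * sim
    return edges

# 2x2x2 toy volume: seven voxels of intensity 1 and one of intensity 4.
# All voxel pairs are 26-adjacent here; only the pairs with matching
# intensity survive the threshold, each with weight 10.
I = [[[1, 1], [1, 1]], [[1, 1], [1, 4]]]
edges = voxel_graph(I)
```

On the full MRI the same loop runs over all 18 million voxels, and the surviving weights serve as the edge weights for the degree and volume computations described above.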
The resulting graph has 18 million nodes and around 234 million undirected weighted edges. The data provided by MICCAI 2012 came with 95 manually labeled regions of the brain (e.g.\ ventricles, amygdalas, brain stem, hippocampi, and lobules). Each of these maps to a ground truth region of the brain graph. These regions range from 3104 to over 250,000 nodes in size, with (weighted) conductance scores between 0.04 and 0.25. In our past research we showed results for detecting a single target ventricle of low conductance with around 4000 nodes~\cite{VeldtGleichMahoney2016}. Here we run semi-supervised clustering experiments on all 95 regions. More specifically, we select a set of 17 example regions out of the 95 identified regions of the brain, spanning the full range of sizes. We run extensive experiments using both the \alg{Push} algorithm~\cite{AndersenChungLang2006} and \alg{FlowSeed}. We use results from experiments on the 17 regions to observe the behavior of penalizing the exclusion of seed nodes, and to guide our choice of locality parameters in experiments on the remaining 78 evaluation regions. \paragraph{Benefit of Seed Exclusion Penalties.} \begin{figure}[t] \subfloat[F1 score] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_2perc_F1.pdf}\label{bter}}\hfill \subfloat[Recall] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_2perc_Recall.pdf}\label{cagrqcb}} \hfill \vspace{-.5\baselineskip} \caption{When detecting target regions of a large brain graph, enforcing no penalties on excluding seed nodes (blue) allows the method to discard too many seeds, often leading to very poor recall. If we enforce strict penalties on excluding known target nodes (red), this significantly improves recall and hence overall detection of the target cluster in terms of F1 score.
We see even greater improvement by additionally including a soft penalty of $p_r = 1$ for excluding any other node in the seed set (green).} \vspace{-.5\baselineskip} \label{fig:penalties} \end{figure} We test a range of parameters on the 17 example regions to observe how different locality parameters and seed exclusion penalties behave for different-sized regions and seed sets. We test locality parameters $\varepsilon \in \{0.5, 0.25, 0.1, 0.05\}$ and construct seed sets by taking very small subsets of the target cluster and growing them by their neighborhood. We find that, with almost no exceptions, including strict and soft seed exclusion penalties leads to significant benefits in ground truth recovery across all region sizes. In Figure~\ref{fig:penalties} we plot, for the 17 example regions, the recall and F1 scores for region recovery with locality parameter $\varepsilon = 0.1$, in the case where the seed set is made up of a random sample of $2\%$ of the target region, plus the immediate neighbors of these nodes. We run our method with (1) no penalties on excluding seed nodes, (2) strict penalties on excluding the initial $2\%$ of nodes, and (3) strict penalties on the $2\%$ and additionally a soft penalty of $p_r = 1$ for excluding any other seed nodes. As expected, when we include no penalties, the flow-based approach often shrinks the seed set into a small cluster with good conductance and very good precision, but almost no recall. As we increase the strength of seed exclusion penalties, the precision decreases slightly but the recall improves considerably, leading to a much better overall ground truth recovery. We confirmed in numerous experiments that the same behavior holds for different locality parameters and seed sizes. For each region we form four types of seed sets. For the first we select a random set of 100 nodes from the target region, and for the remaining three we select $1\%$, $2\%$, and $3\%$ of the target region.
In all cases, we grow these nodes by a one-hop neighborhood. In Figure~\ref{fig:eps1} we show F1 scores achieved by \alg{FlowSeed} on all four types of seed sets when $\varepsilon = 0.1$. \begin{figure}[t!] \subfloat[100 Target Nodes] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_SeedType_1_epsilon_01_F1}}\hfill \subfloat[1\% of Target] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_SeedType_2_epsilon_01_F1}}\hfill \subfloat[2\% of Target] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_SeedType_3_epsilon_01_F1}}\hfill \subfloat[3\% of Target] {\includegraphics[width=.5\linewidth]{Figures/SEPCI_SeedType_4_epsilon_01_F1}}\hfill \caption{F1 scores for \alg{FlowSeed} when $\varepsilon = 0.1$ on 17 brain regions using four different types of seed sets. The green curve shows results with both soft and strict penalties, the red curve with only strict penalties on nodes known to be in the target cluster, and the blue curve with no penalties. } \label{fig:eps1} \end{figure} \begin{figure}[t!] \subfloat[100 Target Nodes] {\includegraphics[width=.45\linewidth]{Figures/JustPush_SeedType_1alpha60}}\hfill \subfloat[1\% of Target] {\includegraphics[width=.45\linewidth]{Figures/JustPush_SeedType_2alpha60}}\hfill \subfloat[2\% of Target] {\includegraphics[width=.45\linewidth]{Figures/JustPush_SeedType_3alpha60}}\hfill \subfloat[3\% of Target] {\includegraphics[width=.45\linewidth]{Figures/JustPush_SeedType_4alpha60}}\hfill \caption{F1 scores on 17 example regions of the brain graph using the PageRank \alg{Push} method with teleportation parameter $\alpha_{pr} = 0.6$ and a range of tolerance parameters $\varepsilon_{pr}$. Seed sets are 100 random nodes from the target region plus neighbors, or 1\%, 2\%, or 3\% of the region plus neighbors.
As the region size increases, it becomes necessary to use smaller values of $\varepsilon_{pr}$ to accurately identify the target cluster.} \label{fig:ppr} \end{figure} \paragraph{Comparison with Random-Walk Methods} We additionally run the PageRank \alg{Push} algorithm~\cite{AndersenChungLang2006} with teleportation parameters $\alpha_{pr}$ from $0.5$ to $0.9$, and approximate PageRank tolerance parameters $\varepsilon_{pr}$ from $10^{-11}$ to $10^{-7}$. In practice we find that smaller values of $\alpha_{pr}$ perform better, with little difference among values between 0.5 and 0.7. The values of $\varepsilon_{pr}$ we use here are significantly smaller than the ones we used for experiments in Section~\ref{cd}. This is because the MRI graph is much more structured and geometric than the real-world networks we considered in Section~\ref{cd}, and thus there are large sets of nodes with good topological community structure we wish to find in the MRI graph. To find these, we need to explore a wider region of the graph, and hence we must use small tolerance parameters. We show results for all seed sizes for $\alpha_{pr} = 0.6$ in Figure~\ref{fig:ppr}. For both \alg{Push} and \alg{FlowSeed}, we use observations from the experiments on the 17 example regions to inform our choice of parameter settings for different-sized regions and seed set sizes. We then use these parameters to test the performance of each method on the remaining 78 regions, which we refer to as the evaluation set. We run experiments for the case where we know exactly 100 of the target nodes, and where $1\%, 2\%,$ and $3\%$ of the target region is given. We run \alg{Push} with a teleportation parameter of $\alpha_{pr} = 0.6$, and run \alg{FlowSeed} with both strict and soft penalties.
In each experiment we identify which of the 17 example regions is closest in size to the target region from the evaluation set, and then set $\varepsilon$ and $\varepsilon_{pr}$ to be the values that led to the best F1-score recovery for this comparable example region. We plot results for all types of seed sets on a subset of the 78 regions (for easier display) in Figure~\ref{fig:test}. \begin{figure}[t!] \centering \subfloat[100 Target Nodes] {\includegraphics[width=.45\linewidth]{Figures/Test_SeedType_1_F1}}\hfill \subfloat[1\% of Target] {\includegraphics[width=.45\linewidth]{Figures/Test_SeedType_2_F1}}\hfill \subfloat[2\% of Target] {\includegraphics[width=.45\linewidth]{Figures/Test_SeedType_3_F1}}\hfill \subfloat[3\% of Target] {\includegraphics[width=.45\linewidth]{Figures/Test_SeedType_4_F1}}\hfill \caption{F1 scores for both \alg{FlowSeed} and \alg{Push} on half of the 78 evaluation regions of the brain graph. Most region sizes are omitted from the $x$-axis for easier display. When enough of the target nodes are known (e.g.\ panels (c) and (d), and the first part of (a)), \alg{FlowSeed} is able to outperform \alg{Push} in identifying target regions. When only a small amount of the target set is known, \alg{Push} performs better as it is able to quickly grow the seed set into a large enough set to capture most of the target region. } \label{fig:test} \end{figure} Our experiments highlight a tradeoff in the performance of the two algorithms. For small seed sets (100 target nodes plus their neighborhood, or 1\% of the target region plus neighbors), \alg{Push} typically outperforms \alg{FlowSeed} in ground truth recovery, as it is able to grow a very small seed set into a sizable cluster. However, given sufficient information regarding the target cluster, we see a distinct benefit in applying our flow-based approach. When 2\% (resp. 3\%) of the target is known, \alg{FlowSeed} obtains a higher F1 score for 64 (resp.
68) of the 78 target regions, and the scores are on average 6.2\% (resp. 6.1\%) higher than those returned by \alg{Push}. In terms of runtime, the highly optimized \alg{Push} implementation is faster: most experiments run in under 1 second, with the largest taking several seconds. Our method takes up to 15 minutes for the largest region, but typically runs in 10--60 seconds for small and medium-sized regions. \subsection{Detecting an Atrial Cavity} In our last experiment we demonstrate that random-walk and flow-based methods can be viewed as complementary approaches rather than competing algorithms. We combine the strengths of \alg{Push} and \alg{FlowSeed} to provide good-quality 3D segmentations of a manually labeled left atrial cavity in a whole-body MRI scan. The dataset was provided as a part of the 2018 Atrial Segmentation Challenge, which sought efficient methods for automatic segmentation of the atrial cavity for clinical usage~\cite{xiong2018fully}. We convert one such MRI into a graph with 29.2 million nodes and 390 million edges using the same technique as for the brain graph. The cavity in the MRI corresponds to a target cluster with 252,364 nodes and a conductance of 0.0414 in the graph. We begin from a small set of 100 randomly selected nodes from the atrial cavity, constituting less than 0.04\% of the target region. We grow these nodes by a one-hop and a two-hop neighborhood to produce two different seed sets to use as input to \alg{Push}. The algorithm's performance is very similar using both seed sets, so we just report results using the two-hop neighborhood. We again set $\alpha_{pr} = 0.6$ and test a range of tolerance parameters $\varepsilon_{pr}$ from $10^{-14}$ to $10^{-8}$. Looking at results in Figure~\ref{fig:cavity_push}, we see that the \alg{Push} algorithm simply grows circular regions around seed nodes. Many of the output sets are not connected.
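Growing a seed set by a $k$-hop neighborhood, as done for the seed sets above, is a simple breadth-first expansion; a small sketch (the adjacency format and names are ours):

```python
def grow_by_hops(adj, seeds, k):
    """Grow a seed set by its k-hop neighborhood, as done above for the
    one- and two-hop growth of the sampled starter nodes.
    `adj` maps each node to the set of its neighbors."""
    grown = set(seeds)
    frontier = set(seeds)
    for _ in range(k):
        # Nodes one step beyond the current frontier that are not yet included.
        frontier = {v for u in frontier for v in adj[u]} - grown
        grown |= frontier
    return grown

# Toy path graph 0-1-2-3-4: one hop around node 2 adds its neighbors,
# and two hops reach the whole path.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

On the atrial cavity graph, the same expansion turns the 100 sampled cavity nodes into the one- and two-hop seed sets used as input to \alg{Push}.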
From Table~\ref{tab:cavity}, we note that the best F1 score achieved by \alg{Push} is 0.714 when $\varepsilon_{pr} = 10^{-12}$. \begin{figure}[t] \subfloat[\alg{Push} $\varepsilon_{pr} = 10^{-9}$] {\includegraphics[width=.25\linewidth]{Figures/PPR_heart9alpha60}\label{del9}}\hfill \subfloat[\alg{Push} $\varepsilon_{pr} = 10^{-10}$] {\includegraphics[width=.25\linewidth]{Figures/PPR_heart10alpha60}\label{del10}}\hfill \subfloat[\alg{Push} $\varepsilon_{pr} = 10^{-11}$] {\includegraphics[width=.25\linewidth]{Figures/PPR_heart11alpha60}\label{del11}}\hfill \subfloat[\alg{Push} $\varepsilon_{pr} = 10^{-12}$] {\includegraphics[width=.25\linewidth]{Figures/PPR_heart12alpha60}\label{del12}}\hfill \caption{The yellow region indicates the target cavity in a 29 million node graph constructed from a full-body MRI scan. Purple regions indicate sets returned by the \alg{Push} algorithm. Starting from 100 random nodes in the target set plus their neighborhood, \alg{Push} grows circular regions that expand as $\varepsilon_{pr}$ decreases. } \label{fig:cavity_push} \end{figure} \begin{figure}[t] \subfloat[{\alg{FlowSeed} refinement of \alg{Push} ($\varepsilon_{pr} = 10^{-9}$)}] {\includegraphics[width=.275\linewidth]{Figures/SEPCI_heart1Refine_alpha60}\label{sep8}}\hfill \subfloat[\alg{FlowSeed} refinement of \alg{Push} ($\varepsilon_{pr} = 10^{-10}$)] {\includegraphics[width=.275\linewidth]{Figures/SEPCI_heart2Refine_alpha60}\label{sep9}}\hfill \subfloat[\alg{FlowSeed} refinement of \alg{Push} ($\varepsilon_{pr} = 10^{-11}$)] {\includegraphics[width=.275\linewidth]{Figures/SEPCI_heart3Refine_alpha60}\label{sep10}}\hfill \caption{On the atrial cavity dataset, we refine the output of \alg{Push} (see Figure~\ref{fig:cavity_push}) using \alg{FlowSeed} with a locality parameter $\varepsilon = 0.1$. \alg{FlowSeed} (purple) fills in the interior of the target region (yellow) and more closely identifies the region boundary.} \label{fig:cavity_fs} \end{figure} \begin{table}[t!]
\caption{Results for detecting a target atrial cavity in a graph constructed from a full-body MRI. Letting \alg{Push} expand a small seed set and then refining the output with \alg{FlowSeed} leads to better F1 scores than simply running \alg{Push} with smaller $\varepsilon_{pr}$.} \label{tab:cavity} \centering \begin{tabular}{lrrrrrr} \toprule method & $\varepsilon_{pr}$ & size & pr & re & F1 & time \\ \midrule \alg{Push} & $10^{-9}$ & 72337 & 0.849& 0.243 & 0.378 & 1\\ +\alg{FlowSeed} & ($\varepsilon = .1$)& 160951 & 0.924 & 0.590 & 0.720 &1410 \\ \midrule \alg{Push} & $10^{-10}$ & 133618 & 0.792 & 0.419 & 0.549 & 3 \\ +\alg{FlowSeed} & ($\varepsilon = .1$) & 224842 & 0.850& 0.757 &0.801 & 3573\\ \midrule \alg{Push} & $10^{-11}$ & 211571 & 0.732 & 0.614 & 0.668 & 4 \\ +\alg{FlowSeed} & ($\varepsilon = .1$) & 296192 & 0.690 & 0.809 & 0.745 & 9800\\ \midrule \alg{Push} & $10^{-12}$ & 290937 & 0.666 & 0.768 & 0.714 & 5 \\ \alg{Push} & $10^{-13}$ & 367011 & 0.599 & 0.871 & 0.710 & 6 \\ \bottomrule \end{tabular} \end{table} We next take the output of \alg{Push} and refine it using \alg{FlowSeed} with locality parameter $\varepsilon = 0.1$. We set a strict penalty on excluding the original 100 nodes from the cavity, a soft penalty of 1 on excluding their neighbors, and a penalty of 0.5 on excluding any node in the set returned by \alg{Push}. We see a significant improvement in the quality of the segmentation, in the best case leading to a precision of 0.8498, recall of 0.7571, and F1 score of 0.8008 when refining the output of \alg{Push} with $\varepsilon_{pr} = 10^{-10}$ (see Table~\ref{tab:cavity}). In Figure~\ref{fig:cavity_fs} we see that \alg{FlowSeed} smooths out the circular regions returned by \alg{Push} to return a connected region that better identifies the boundary of the target cavity. Regarding runtime, \alg{Push} quickly grows circular regions within a few seconds, and the \alg{FlowSeed} refinement procedure takes just under an hour.
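The tiered penalty scheme just described can be sketched as a small dictionary construction; representing the strict constraint as an infinite penalty, and the dictionary interface itself, are our conventions for illustration rather than \alg{FlowSeed}'s actual API.

```python
def build_penalties(core, neighbors, push_output):
    """Sketch of the tiered seed penalties described above: a strict
    constraint on the original core nodes, a soft penalty of 1 on their
    neighbors, and a penalty of 0.5 on the remaining nodes returned by
    Push.  Later updates override earlier, weaker tiers."""
    p = {v: 0.5 for v in push_output}          # weakest tier: Push output
    p.update({v: 1.0 for v in neighbors})      # stronger: neighbors of the core
    p.update({v: float("inf") for v in core})  # strict: must stay in the output
    return p

# Hypothetical node ids purely for illustration.
core = {10, 11}
neighbors = {12, 13}
push_output = {12, 13, 14, 15}
penalties = build_penalties(core, neighbors, push_output)
```

Ordering the updates from weakest to strongest tier ensures that a node appearing in several tiers receives the strictest applicable penalty.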
Together these methods produce a significantly better output than either method could have accomplished alone. \section{Introduction} Local graph clustering is the task of finding tightly connected clusters of nodes near a set of seed vertices in a large graph. This task has been applied to solve problems in information retrieval~\cite{LangRao2004}, image segmentation~\cite{Mahoney:2012:LSM:2503308.2503318,VeldtGleichMahoney2016}, and community detection~\cite{AndersenLang2006,KloumannKleinberg2014}, among many other applications. In practice, seed nodes represent semi-supervised information about a hidden target cluster, and the goal is to recover or detect this cluster by combining knowledge of the seed set with observations about the topological structure of the network. One popular approach for graph clustering is to apply flow-based algorithms, which repeatedly solve regionally biased minimum cut and maximum flow problems on the input graph. These methods come with strong theoretical cut improvement guarantees with respect to quotient-style clustering objectives such as conductance~\cite{AndersenLang2008,LangRao2004,OrecchiaZhu2014,VeldtGleichMahoney2016}. Additionally, some of these methods are strongly local, i.e.\ their runtime depends only on the size of the seed set and not the entire input graph. Despite these attractive theoretical properties, existing flow-based methods exhibit drawbacks when it comes to solving real-world graph clustering problems. For example, in some cases these methods tend to discard important semi-supervised information in favor of optimizing a quotient-style clustering objective. More specifically, existing methods either shrink a seed set of nodes into a subset with better cut-to-size ratio~\cite{LangRao2004}, or try to find a good output cluster that overlaps well with the seed set but may not include all or even a majority of the seed nodes~\cite{AndersenLang2008,OrecchiaZhu2014,VeldtGleichMahoney2016}.
While this is beneficial for obtaining theoretically good graph cuts, it is not always desirable in label propagation and community detection applications where the goal is to grow a set of seed nodes into a larger community. In addition to this, the previously cited flow-based methods treat all seed nodes equally, whereas in practice there may be varying levels of confidence for whether or not a seed node is a true representative of the undetected target cluster. Although recently developed strongly-local methods constitute a major advancement in flow-based clustering, these also exhibit drawbacks in terms of implementation and practical performance. The \alg{LocalImprove} algorithm of Orecchia and Zhu~\cite{OrecchiaZhu2014} is known to have an extremely good theoretical runtime but relies on a complicated variation of Dinic's max-flow algorithm~\cite{Dinitz-1970-max-flow} that is difficult to implement in practice. In more recent work we developed an algorithm called \alg{SimpleLocal}, which provides a simplified framework for optimizing the same objective~\cite{VeldtGleichMahoney2016}. While this method is easy to implement and reasonably fast in practice, it still relies on repeatedly solving numerous exact maximum flow problems, and takes no advantage of warm-start solutions between consecutive flow problems that are closely related. \paragraph{Our Contributions} In this paper we improve the practical performance of flow-based methods for local clustering in two major ways. We first develop a generalized framework which takes better advantage of semi-supervised information about target clusters, and avoids the tendency of other methods to contract a large seed set into a small subcluster. Our approach allows users to place strict constraints and soft penalties on excluding specified seed nodes from the output set, depending on the user's level of confidence for whether or not each node should belong to the output set.
Our second major contribution is a fast algorithm for minimizing our generalized objective function. We begin by showing that this objective can be minimized in strongly-local time using a meta-procedure that repeatedly solves localized minimum $s$-$t$ cut problems, and does not require any explicit computation of maximum $s$-$t$ flows. This simultaneously generalizes and simplifies the meta-procedure we developed in previous work, which solves a more restricted objective function and requires the explicit computation of maximum flows as an intermediate step to obtaining minimum cuts~\cite{VeldtGleichMahoney2016}. We then implement our meta-procedure using a fast variant of the push-relabel algorithm~\cite{goldberg1988new}, which computes minimum cuts using preflows rather than maximum flows. We make our algorithm extremely efficient using two key heuristics: a known global-relabeling scheme for the push-relabel algorithm~\cite{Cherkassky1997}, and a novel warm-start procedure which allows us to quickly solve consecutive minimum cut problems. We validate our approach in several community detection experiments in real-world networks, and in several large-scale 3D image segmentation problems on graphs with hundreds of millions of edges. In practice our algorithm is faster than existing implementations of related flow-based methods, and allows us to more accurately detect ground truth clusters by better incorporating available knowledge of the target set. \section{Generalized Local Clustering Objective} In order to develop a flow-based method that places a higher emphasis on agreeing with the seed set, we begin by presenting a generalization of the local conductance objective~\eqref{local-cond}. After introducing the objective, we prove cut improvement guarantees that can be achieved if this objective is minimized in practice.
In Section~\ref{sl-meta}, we prove that the objective can be solved in strongly-local time using a meta-procedure that repeatedly applies minimum $s$-$t$ cut solvers as subroutines with no explicit calculation of maximum $s$-$t$ flows. In Section~\ref{pr-implementation}, we provide details for how to implement the meta-procedure using a fast variant of the push-relabel method with two key heuristics. \subsection{The Seed-Penalized Conductance Score} Let $G = (V,E)$ be an undirected and unweighted graph, and $R$ a small set of nodes that we wish to grow into a larger cluster that we will call $S$. Unlike other methods, we assume there exists a designated set of nodes $R_s \subseteq R$ which must be included in the output set, and a weight $p_i \geq 0$ for every other node $r_i \in R$ which indicates our level of confidence that $r_i$ should also be included in the output. We start by introducing the following new \emph{overlap score} between $R$ and $S$: \begin{equation*} \mathcal{O}_{R}(S) = \textbf{vol}(R\cap S) - \varepsilon \textbf{vol}(S\cap \bar{R}) - \sum_{r \in R} p_r d_r \chi_{\bar{S}}(r) \end{equation*} where $\vp = (p_i)$ is the vector of penalty weights for nodes in the seed set, $\chi_{\bar{S}}$ is the indicator function for nodes in $\bar{S}$, and $\varepsilon$ is a locality parameter controlling how much we allow the output set to include nodes outside $R$. The first term rewards a high intersection between $S$ and $R$, the second term penalizes the inclusion of nodes outside $R$, and the third term introduces a penalty for nodes in $R$ that are not in $S$.
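As a concrete reference for the definition above, the overlap score can be computed directly from node degrees and set membership. The following sketch is our own illustration (the function name and the adjacency-list graph representation are not from the paper), assuming an unweighted graph:

```python
def overlap_score(adj, S, R, p, eps):
    """O_R(S) = vol(R∩S) - eps*vol(S∩R̄) - sum over excluded seeds of p_r*d_r.

    adj: dict mapping each node to its list of neighbors (unweighted graph)
    S, R: candidate output set and seed set; p: seed exclusion penalties
    eps: locality parameter controlling growth outside R
    """
    S, R = set(S), set(R)
    deg = {v: len(adj[v]) for v in adj}
    vol = lambda A: sum(deg[v] for v in A)
    reward = vol(R & S)                                    # intersection with seeds
    spill = eps * vol(S - R)                               # nodes included outside R
    excluded = sum(p.get(r, 0.0) * deg[r] for r in R - S)  # seeds left out of S
    return reward - spill - excluded
```

For instance, on a four-node path with seeds $R = \{0,1\}$, including node 2 lowers the score by $\varepsilon d_2$ and excluding seed 0 lowers it by $p_0 d_0$.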
Given this definition of overlap, our goal is to minimize the following objective, which we refer to as seed-penalized conductance score: \begin{equation} \label{eq:sc} \pi_R(S) = \begin{cases} \frac{\textbf{cut}(S)}{\mathcal{O}_{R}(S)} & \mbox{ if } \mathcal{O}_{R}(S) > 0, R_s \subseteq S \\ \infty & \mbox{ otherwise} \end{cases} \end{equation} To keep notation simple we only include the set $R$ in the subscript of $\mathcal{O}_R(S)$ and $\pi_R(S)$, though we note that these also depend on $R_s$, $\varepsilon$, and $\vp$, which are fixed parameters chosen at the outset of a problem. \subsection{Cut Improvement Guarantee} Despite the differences between~\eqref{eq:sc} and the standard conductance measure, we can prove that minimizing the former will give strong cut improvement guarantees in terms of the latter. This result is closely related to similar cut improvement guarantees for variants of the local conductance objective~\cite{AndersenLang2008,VeldtGleichMahoney2016}, but here we focus on the case where $R$ is completely contained in some ground truth target cluster $T$, since in our work we are especially concerned with approaches that grow a seed set into a larger cluster. Note that the following result is in fact independent of $R_s$ and $\vp$, so it holds regardless of how strict the seed node exclusion penalties are. \begin{theorem} Let $G = (V,E)$ be connected and $R$ be a seed set. Let $T$ be any set of nodes containing $R$ with $\textbf{vol}(T) \leq \textbf{vol}(\bar{T})$, and assume that $\textbf{vol}(R) = \gamma \textbf{vol}(T)$ for some $\gamma \in (0,1)$. 
For any $\varepsilon \in \Big[ \frac{2 \textbf{vol}(R)}{\textbf{vol}(G) - 2\textbf{vol}(R) }, \frac{\gamma}{1-\gamma}\Big)$, if $S^*$ is the set of nodes minimizing objective~\eqref{eq:sc}, then $\phi(S^*) \leq C \phi(T)$ where $C = \frac{1}{\gamma + \varepsilon \gamma - \varepsilon}$. \end{theorem} \begin{proof} Note the following bound on the volume of $S^*$: \begin{align*} & 0 < \mathcal{O}_{R}(S^*) \leq \textbf{vol}{(R \cap S^*)} - \varepsilon \textbf{vol}{(\bar{R} \cap S^*)}\\ \implies & 0 < (1+\varepsilon) \textbf{vol}{(R \cap S^*)} - \varepsilon \textbf{vol}(S^*)\\ \implies & \textbf{vol}(S^*) < \left( 1 + \frac{1}{\varepsilon}\right) \textbf{vol}(R). \end{align*} Combining this with the lower bound on $\varepsilon$ given in the theorem statement, we see that the volume of $S^*$ is less than $\textbf{vol}(G)/2$. Next observe that $\textbf{vol}(S^*) \geq \mathcal{O}_{R}(S^*)$, so \[\phi(S^*) = \textbf{cut}(S^*)/\textbf{vol}(S^*)\leq \textbf{cut}(S^*)/ \mathcal{O}_{R}(S^*) =\pi_R(S^*).\] Because $R$ is contained in $T$, we have $\sum_{r \in R} p_r d_r \chi_{\bar{T}}(r) = 0$ and $\textbf{vol}{(T \cap R)} = \textbf{vol}(R)$. Therefore, \begin{align*} \mathcal{O}_{R}(T) = \textbf{vol}(R)-\varepsilon \textbf{vol}{(T \cap \bar{R})} = (1+\varepsilon) \textbf{vol}(R) - \varepsilon \textbf{vol}(T) = \textbf{vol}(T)((1 + \varepsilon)\gamma -\varepsilon ). \end{align*} Since $S^*$ minimizes~\eqref{eq:sc}, \begin{align*} \phi(S^*) &\leq \pi_R(S^*) \leq \pi_R(T) = \frac{\textbf{cut}(T)}{\mathcal{O}_{R}(T)} =\frac{1}{(1+\varepsilon)\gamma - \varepsilon } \frac{\textbf{cut}(T)}{\textbf{vol}(T)} = C \phi(T). \end{align*} \end{proof} If we select $\varepsilon$ to be at its lower bound defined above, the approximation ratio will be $C = \frac{\textbf{vol}(G) - 2\textbf{vol}(R)}{\gamma \textbf{vol}(G) - 2\textbf{vol}(R)}$.
If $\textbf{vol}(R)$ is very small compared to the overall size of the graph, then the approximation factor goes to $1/\gamma$ as the size of the graph increases for a fixed seed set. \subsection{Minimizing Seed-Penalized Conductance} \label{minspc} As is the case for local conductance, objective~\eqref{eq:sc} can be minimized in polynomial time by solving a sequence of minimum $s$-$t$ cut problems. Fix $\alpha \in (0,1)$ and assume we wish to find whether there exists some $S$ such that $\pi_R(S) < \alpha$. We construct a new version of the \emph{cut graph} $G_{st}$ which includes all nodes in $G$ and an additional source $s$ and sink $t$. For any node $r \in R_s$, we will assign a penalty variable $p_r = \textbf{vol}(G)/\alpha$. We add an edge from $s$ to each $r \in R$ with weight $\alpha(1+p_r)d_r$. The chosen weight for nodes in $R_s$ is large enough to guarantee that a minimum cut will never separate any of these nodes from $s$. Then, for each node $w \in \bar{R}$, we add an edge from $w$ to $t$ with weight $\alpha \varepsilon d_w$. For any set of non-terminal nodes $S \subseteq V$, the $s$-$t$ cut associated with that set can be expressed in terms of cuts and volumes in the original graph $G$: \begin{equation*} {\textstyle \textbf{cut}(S) + \alpha\varepsilon \textbf{vol}(\bar{R} \cap {S}) + \alpha \sum_{r\in R} d_r (1+p_r) \chi_{\bar{S}}(r)}. \notag \end{equation*} Using the observation that $\alpha\textbf{vol}(R \cap \bar{S}) = \alpha \textbf{vol}(R) - \alpha \textbf{vol}(R\cap S)$, we can rearrange this into the following objective function: \begin{equation} \label{mincut} f_{R,\varepsilon}^\alpha(S) = \textbf{cut}(S) - \alpha \mathcal{O}_{R}(S) + \alpha \textbf{vol}(R), \end{equation} so $f_{R,\varepsilon}^\alpha(S) < \alpha \textbf{vol}(R)$ if and only if $\textbf{cut}(S)/\mathcal{O}_{R}(S) < \alpha$ (provided $\mathcal{O}_{R}(S) > 0$). Thus, solving the minimum $s$-$t$ cut objective on $G_{st}$ will tell us whether there exists some $S$ with seed-penalized conductance less than $\alpha$.
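The equivalence between the min-cut objective and the conductance threshold check can be verified by brute force on a toy graph. The sketch below is our own illustration (the helper names are not from the paper); it confirms that $f_{R,\varepsilon}^\alpha(S) < \alpha\,\textbf{vol}(R)$ exactly when $\textbf{cut}(S)/\mathcal{O}_R(S) < \alpha$, whenever the overlap score is positive:

```python
from itertools import combinations

def cut_and_overlap(adj, S, R, p, eps):
    # cut(S) and O_R(S) for an unweighted graph given as a dict of neighbor lists
    S, R = set(S), set(R)
    deg = {v: len(adj[v]) for v in adj}
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    O = (sum(deg[v] for v in R & S)
         - eps * sum(deg[v] for v in S - R)
         - sum(p.get(r, 0.0) * deg[r] for r in R - S))
    return cut, O

def f_alpha(adj, S, R, p, eps, alpha):
    # f_{R,eps}^alpha(S) = cut(S) - alpha*O_R(S) + alpha*vol(R)
    cut, O = cut_and_overlap(adj, S, R, p, eps)
    volR = sum(len(adj[r]) for r in R)
    return cut - alpha * O + alpha * volR

# Two triangles joined by a bridge; seeds sit in the left triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
R, p, eps, alpha = {0, 1}, {0: 0.2, 1: 0.2}, 0.3, 0.5
volR = sum(len(adj[r]) for r in R)
for k in range(1, len(adj) + 1):
    for S in map(set, combinations(adj, k)):
        cut, O = cut_and_overlap(adj, S, R, p, eps)
        if O > 0:  # pi_R(S) is finite only when the overlap score is positive
            assert (f_alpha(adj, S, R, p, eps, alpha) < alpha * volR) == (cut / O < alpha)
```

The loop simply checks the algebraic identity $f_{R,\varepsilon}^\alpha(S) - \alpha\,\textbf{vol}(R) = \textbf{cut}(S) - \alpha\,\mathcal{O}_R(S)$ on every candidate set.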
Given any procedure for minimizing objective~\eqref{mincut} (e.g.\ a generic minimum $s$-$t$ cut solver), we can minimize seed-penalized local conductance using Algorithm~\ref{alg:wrapper}. \begin{algorithm}[t] \caption{Minimizing seed-penalized conductance } \label{alg:wrapper} \begin{algorithmic} \STATE \textbf{Input:} $G$, $R$, $\varepsilon$, $\vp$ \STATE $\alpha := 2$ \STATE $\alpha_{new} = \pi_R(R) = \phi(R)$ \STATE $S = R$ \WHILE{$\alpha_{new} < \alpha$} \STATE $S_{best} \leftarrow S$ \STATE $\alpha \leftarrow \alpha_{new}$ \STATE $S \leftarrow \arg \min f_{\varepsilon,R}^\alpha(S)$ \STATE $\alpha_{new} \leftarrow \pi_R(S)$ \ENDWHILE \STATE \textbf{Return:} $S_{best}$ \end{algorithmic} \end{algorithm} We end this section by showing a bound on the number of iterations for Algorithm~\ref{alg:wrapper}, by slightly adapting the techniques Andersen and Lang~\cite{AndersenLang2008} used to prove a similar bound for a more restrictive objective function. \begin{theorem} \label{thm:cutR} Algorithm~\ref{alg:wrapper} will need to solve min-cut objective~\eqref{mincut} at most $\textbf{cut}(R)$ times. \end{theorem} \begin{proof} Since $R$ and $\varepsilon$ and $\vp$ are fixed at the outset of the algorithm we will write $f^\alpha$ instead of $f_{R,\varepsilon}^\alpha$ and $\mathcal{O}$ rather than $\mathcal{O}_{R}$. Consider two consecutive iterations in which Algorithm~\ref{alg:wrapper} successfully finds sets with improved seed-penalized conductance and therefore doesn't terminate. Let $S_i$ be the set returned after the $(i-1)$st iteration, so $S_{i} = \arg\min f^{\alpha_{i-1}}(S)$ for some $\alpha_{i-1}$, and set $\alpha_i = \pi_R(S_i) = \textbf{cut}(S_i)/ \mathcal{O}(S_i) < \alpha_{i-1}$. Similarly, $S_{i+1} = \arg\min f^{\alpha_{i}}(S)$ and $\alpha_{i+1} = \pi_R(S_{i+1}) < \alpha_i$. 
Note that \begin{align*} f^{\alpha_{i-1}}(S_i) &= \alpha_{i-1}\textbf{vol}(R) + \textbf{cut}(S_i) - \alpha_{i-1} \mathcal{O}(S_i)\\ &= \alpha_{i-1}\textbf{vol}(R) + \mathcal{O}(S_i) ( \pi_R(S_i) - \alpha_{i-1}) \\ &= \alpha_{i-1}\textbf{vol}(R) + \mathcal{O}(S_i) (\alpha_i - \alpha_{i-1}) \end{align*} and similarly \[ f^{\alpha_{i-1}}(S_{i+1}) = \alpha_{i-1}\textbf{vol}(R) + \mathcal{O}(S_{i+1}) (\alpha_{i+1} - \alpha_{i-1}). \] Because $S_{i}$ minimizes $f^{\alpha_{i-1}}$ we know that $f^{\alpha_{i-1}} (S_i) \leq f^{\alpha_{i-1}} (S_{i+1})$, which implies that \[ \mathcal{O}(S_{i})(\alpha_{i} - \alpha_{i-1}) \leq \mathcal{O}(S_{i+1})(\alpha_{i+1} - \alpha_{i-1})\] and since $(\alpha_{i+1} - \alpha_{i-1}) < (\alpha_{i} - \alpha_{i-1}) < 0$ we see that $\mathcal{O}(S_{i+1}) < \mathcal{O}(S_i)$. Thus both $\pi_R(S)$ and its denominator are strictly decreasing during the course of the algorithm, so the numerator $\textbf{cut}(S_i) = \pi_R(S_i)\,\mathcal{O}(S_i)$ must strictly decrease at each step as well. Since we assume the graph is unweighted, the cut values are integers, and the numerator starts at no more than $\textbf{cut}(R)$, so there are at most $\textbf{cut}(R)$ iterations in total. \end{proof} \section{The Strongly-Local Meta-Procedure} \label{sl-meta} The results of the previous section imply that Algorithm~\ref{alg:wrapper} can be run in polynomial time using any black-box min $s$-$t$ cut solver. In this section we will prove a much stronger result by showing that objective~\eqref{mincut} can be minimized in strongly-local time using a very simple two-step meta-procedure. A significant feature of this meta-procedure is that it requires no explicit computation of maximum flows; it relies only on repeatedly solving localized minimum $s$-$t$ cut problems. \paragraph{Local Graph Operations.} In order to minimize~\eqref{mincut} without touching all of $G = (V,E)$, we will repeatedly solve a variant of objective~\eqref{mincut} on a growing subgraph $L = (V,E_L)$ called the \emph{local graph}, which contains a restricted edge set $E_L \subset E$.
In theory $L$ is assumed to have the same node set $V$, but many of these nodes will have degree zero in $L$, so we will not need to explicitly perform computations with all nodes in practice. We consider the following localized variant of~\eqref{mincut}, which corresponds to a minimum $s$-$t$ cut problem on a subgraph $L_{st}$ of the cut graph $G_{st}$: \begin{equation} \label{localmincut} f_{L}^\alpha(S) = \textbf{cut}_L(S) - \alpha \mathcal{O}_{R}(S) + \alpha \textbf{vol}(R), \end{equation} where the only difference from~\eqref{mincut} is that $\textbf{cut}_L(S)$ is defined to be the number of edges in $E_L$ between $S$ and $\bar{S}$, rather than the number of edges in $E$; thus $f_L^\alpha(S) \leq f_G(S) = f_{R,\varepsilon}^\alpha(S)$ for all $S \subset V$ and for any such subgraph $L$ of $G$. We will use the notation $d_i^L$ to denote the degree of node $i$ in $L$, which is always less than or equal to $d_i$. We then distinguish between the set of \emph{edge-complete} nodes $L_C= \{i \in V : d_i = d_i^L \} $ and the set of \emph{edge-incomplete} nodes $L_I = \{i \in V : d_i > d_i^L \}$ in $L$. Let $S_L$ be the minimizer of~\eqref{localmincut} for a fixed subgraph $L$. The following lemma shows that if $S_L$ is made up entirely of edge-complete nodes, then this set also minimizes the global objective function $f_G = f_{R,\varepsilon}^\alpha$~\eqref{mincut}. \begin{lemma} \label{lem:local} Let $S_L = \arg \min f_L(S)$. If $S_L \subseteq L_C$ then $S_L = \arg \min f_G(S)$. \end{lemma} \begin{proof} Because $E_L \subseteq E$, $\textbf{cut}_L(S) \leq \textbf{cut}(S)$ for all $S\subset V$, and therefore $f_L(S) \leq f_G(S)$ for all $S\subset V$, which implies that $\min_S \, f_L(S) \leq \min_S \, f_G(S)$. For the specified set $S_L$, since $S_L \subseteq L_C$, all nodes in $S_L$ have the same degree in $L$ as in $G$, implying that $\textbf{cut}_L(S_L) = \textbf{cut}(S_L)$.
Therefore: % \[ f_L(S_L) = f_G(S_L) \geq \min_S f_G(S) \geq \min_S f_L(S) = f_L(S_L) \] % so equality holds throughout and $S_L$ is optimal for both $f_L$ and $f_G$. \end{proof} Our meta-procedure for minimizing~\eqref{mincut} in strongly local time operates by repeatedly solving objective~\eqref{localmincut} over a sequence of growing local subgraphs. This proceeds until an iteration in which the current subgraph $L$ is large enough so that the set minimizing~\eqref{localmincut} is made up of edge-complete nodes, at which point we know by Lemma~\ref{lem:local} that we have globally solved objective~\eqref{mincut}. The full procedure is given in Algorithm~\ref{alg:meta}. \begin{algorithm} \caption{\alg{Local Min-Cut Meta-Procedure}} \label{alg:meta} \begin{algorithmic} \STATE {\bfseries Input:} graph $G$, seed set $R$, parameters $\alpha, \varepsilon$, $\vp$ \STATE Initialize $L$: $L_C = R$, $L_I = \bar{R}$ \STATE $E_L$: all edges in $E$ with at least 1 endpoint in $R$. \REPEAT \STATE \textbf{1. Solve Local Objective on $L$} \STATE $S_L = \arg\min_S \, f_L^\alpha(S)$ \STATE $N = S_L\cap L_I$ (new nodes to explore around) \STATE \textbf{2. Expand $L$ around $N$} \FORALL{$v \in N$} \STATE $E_v = \text{edges incident to node $v$ in $G$}$ \STATE $E_L \leftarrow E_L \cup E_v$ \STATE $L_C \leftarrow L_C \cup\{v\}$ \STATE $L_I \leftarrow L_I - v$. \ENDFOR \STATE $L \leftarrow (V,E_L)$ \UNTIL{$N = \emptyset$} \end{algorithmic} \end{algorithm} The following result proves that the size of the largest subgraph formed by Algorithm~\ref{alg:meta} will be bounded in terms of $\textbf{vol}(R)$, and $\varepsilon$. This mirrors a result for our previously developed meta-procedure~\cite{VeldtGleichMahoney2016}, which required explicit computation of flows in order to solve an objective related to~\eqref{mincut}. The proof technique is very similar, though in order to avoid explicit computation of flows, more analysis is needed to prove the theoretical bound. 
\begin{theorem} \label{thm:volbound} Let $\alpha$ be chosen so that $\pi_R(S_0) = \alpha$ for some $S_0 \subset V$. The largest subgraph $L$ formed by Algorithm~\ref{alg:meta} satisfies the following volume bound: \[ \textbf{vol}(L) = 2|E_L| \leq \textbf{vol}(R) \left(1 + 2/\varepsilon\right) + \textbf{cut}(R). \] \end{theorem} \begin{proof} Recall that the global objective~\eqref{mincut} we wish to solve is exactly the minimum $s$-$t$ objective on an auxiliary graph $G_{st}$, whose construction we outlined in Section~\ref{minspc}. The localized objective~\eqref{localmincut} is the minimum $s$-$t$ cut objective on a subgraph $L_{st}$ of $G_{st}$ that contains the same set of terminal edges but only a subset of edges between non-terminal nodes. Although Algorithm~\ref{alg:meta} does not require an explicit computation of a maximum $s$-$t$ flow, we will show a bound on $\textbf{vol}(L)$ by considering an implicit maximum $s$-$t$ flow with special properties that exists by the max-flow/min-cut duality theorem. \paragraph{Notation for Proof.} Let $L^{(i)}$ denote the local graph at the $i$th iteration of Algorithm~\ref{alg:meta}, let $f_i = f_{L^{(i)}}^\alpha$ be shorthand for objective function~\eqref{localmincut}, and let $S_i = \arg\min f_i(S)$. Use $N_i$ to denote the set of nodes which become edge-complete in the $i$th iteration, which by design are all in $\bar{R}$. $L^{(i)}$ itself is a subgraph of $G$, and we use $L_{st}^{(i)}$ to denote the subgraph of $G_{st}$ whose minimum $s$-$t$ cut we compute at iteration $i$. Let $C^{(i)}$ denote the set of edges in $L_{st}^{(i)}$ that are cut at iteration $i$. \paragraph{Constructing Implicit Flows.} By the min-cut/max-flow theorem, the value of the maximum flow on the graph $L_{st}^{(i)}$ equals the weight of the cut $C^{(i)}$ which we compute in practice. Furthermore, any maximum flow which we could compute will saturate all of the edges in $C^{(i)}$.
Using this observation we will show by construction that when Algorithm~\ref{alg:meta} terminates after some iteration $k$, there exists a maximum flow $F$ on $L_{st}^{(k)}$ which saturates all edges between edge-complete nodes and the sink. In the first iteration, $N_1$ is exactly the set of nodes whose edge to the sink is cut. Let $F_1 = (f_{ij})$ be some maximum $s$-$t$ flow on $L_{st}^{(1)}$, and note that it will saturate all edges from $N_1$ to $t$. In the next iteration we compute a new minimum cut $C^{(2)}$. The set $N_2$ represents all nodes whose edge to $t$ was included in $C^{(2)}$ but not $C^{(1)}$. Since $L_{st}^{(1)}$ is a strict subgraph of $L_{st}^{(2)}$, we could in theory find a maximum $s$-$t$ flow $F_2$ on $L_{st}^{(2)}$ by starting with the previous flow $F_1$ and continually finding new augmenting flow paths until no more flow can be routed from $s$ to $t$. Since $F_2$ is a maximum flow, it must saturate all edges from $N_2$ to $t$, since these were cut by $C^{(2)}$. Furthermore, we can assume that in the construction of $F_2$, all the edges from $N_1$ to $t$, which were saturated by $F_1$, will remain saturated, since no improvement can be gained by rerouting flow from the sink back to $N_1$. Proceeding by induction, we see that at iteration $i$ we can find some maximum $s$-$t$ flow which saturates all edges from $N_i$ to the sink (since these were cut by $C^{(i)}$), and also saturates all terminal edges of previous edge-complete nodes in $\bar{R}$. We conclude that when Algorithm~\ref{alg:meta} terminates, the maximum flow, and hence the minimum cut, will be bounded below by the total weight of edges from edge-complete nodes in $\bar{R}$ to $t$. Each such edge has weight $\alpha \varepsilon d_v$ for an edge-complete node $v \in \bar{R}$. Thus $ \alpha \varepsilon \textbf{vol}(\mathcal{N}) \leq M$, where $M$ is the minimum cut value and $\mathcal{N}$ is the set of edge-complete nodes in $\bar{R}$. \paragraph{Bounding Min-Cut Above.} Next we bound $M$ from above.
In the statement of the theorem we assumed that $\pi_R(S_0) = \alpha$ for some $S_0 \subset V$ (which will always be true if we use Algorithm~\ref{alg:meta} as a subroutine for Algorithm~\ref{alg:wrapper}). This is equivalent to the statement that % \[ f_{R,\varepsilon}^\alpha(S_0) = \textbf{cut}(S_0) - \alpha \mathcal{O}_{R}(S_0) + \alpha \textbf{vol}(R) = \alpha \textbf{vol}(R), \] so we have an upper bound of $\alpha\textbf{vol}(R)$ on $\min_S f_{R,\varepsilon}^\alpha(S)$. Combining upper and lower bounds, we see that % \[\alpha \varepsilon \textbf{vol}(\mathcal{N}) \leq \alpha \textbf{vol}(R) \implies \textbf{vol}(\mathcal{N}) \leq \textbf{vol}(R)/\varepsilon. \] % \paragraph{Bounding Volume of $L$.} The largest local graph $L$ that we form is made up of $\mathcal{N}$, all of $R$, and a few additional nodes in $\bar{R}$ that have non-zero degree in $L$ but remained edge-incomplete during the entire course of the algorithm. Use $P$ to denote these edge-incomplete nodes, and note that they share no edges with each other, but only share edges with $R$ and $\mathcal{N}$. Thus, \begin{align*} \textbf{vol}(P) &\leq (\text{number of edges from $R$ to $P$}) + (\text{number of edges from $\mathcal{N}$ to $P$}) \\ & \leq \textbf{cut}(R) + \textbf{vol}(\mathcal{N}). \end{align*} Thus the full volume bound follows: \begin{align*} \textbf{vol}(L) &\leq \textbf{vol}(R) + \textbf{vol}(\mathcal{N}) + \textbf{vol}(P) \\ &\leq \textbf{vol}(R) + 2 \textbf{vol}(\mathcal{N}) + \textbf{cut}(R) \\ & \leq \textbf{vol}(R) ( 1 + 2/\varepsilon) + \textbf{cut}(R). \end{align*} \end{proof} The volume bound given here is the same as the bound shown for our previous method \alg{SimpleLocal}. Thus, using Algorithm~\ref{alg:meta} as a subroutine in Algorithm~\ref{alg:wrapper} produces an algorithm with the same theoretical runtime as \alg{SimpleLocal}, despite solving a much more general objective function and completely avoiding any explicit maximum flow computations.
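For reference, the whole meta-procedure can be sketched compactly. The following is our own dependency-free Python illustration, using a basic Edmonds–Karp routine as the black-box minimum $s$-$t$ cut solver (the implementation described in the next section uses push-relabel instead); all function names and the adjacency-list graph format are ours, and strict seed constraints for $R_s$ can be imposed by setting $p_r = \textbf{vol}(G)/\alpha$ before calling.

```python
from collections import defaultdict, deque

def min_st_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (cut value, source-side node set)."""
    res, nbrs = defaultdict(float), defaultdict(set)
    for (u, v), c in cap.items():
        res[(u, v)] += c
        nbrs[u].add(v)
        nbrs[v].add(u)
    value = 0.0
    while True:
        parent, q = {s: None}, deque([s])   # BFS for a shortest augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v in nbrs[u]:
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return value, set(parent)        # residual-reachable nodes = source side
        b, v = float('inf'), t               # bottleneck capacity along the path
        while parent[v] is not None:
            b = min(b, res[(parent[v], v)])
            v = parent[v]
        v = t
        while parent[v] is not None:         # push b units of flow along the path
            u = parent[v]
            res[(u, v)] -= b
            res[(v, u)] += b
            v = u
        value += b

def local_mincut_meta(adj, R, alpha, eps, p):
    """Algorithm 2 sketch: minimize f_{R,eps}^alpha(S) over a growing local graph."""
    deg = {v: len(adj[v]) for v in adj}
    R = set(R)
    edges_L = {tuple(sorted((r, u))) for r in R for u in adj[r]}
    complete = set(R)                        # edge-complete nodes L_C
    while True:
        cap = {}
        for u, v in edges_L:                 # non-terminal edges, unit capacity
            cap[(u, v)] = cap[(v, u)] = 1.0
        touched = {v for e in edges_L for v in e} | R
        for r in R:                          # terminal capacities use degrees in G
            cap[('s', r)] = alpha * (1 + p.get(r, 0.0)) * deg[r]
        for w in touched - R:
            cap[(w, 't')] = alpha * eps * deg[w]
        value, side = min_st_cut(cap, 's', 't')
        S = side - {'s'}
        new = S - complete                   # edge-incomplete nodes in the cut set
        if not new:                          # by the lemma, S is globally optimal
            return S, value
        for v in new:                        # expand L around the new nodes
            complete.add(v)
            edges_L |= {tuple(sorted((v, u))) for u in adj[v]}
```

On a small example the returned cut value matches a brute-force minimization of the global objective over all subsets, while the solver only ever touches nodes near the seed set.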
Each flow problem takes at most $O(\textbf{vol}(R)^3/\varepsilon)$ operations if fast flow subroutines are used~\cite{Orlin-2013-max-flow}, and Theorem~\ref{thm:cutR} guarantees it will be run at most $\textbf{cut}(R)$ times. This runtime bound is very conservative and the empirical performance will typically be significantly better. \section{The Push-Relabel Implementation} \label{pr-implementation} We implement Algorithm~\ref{alg:meta} using a new method for computing minimum $s$-$t$ cuts based on a variant of the push-relabel algorithm of Goldberg and Tarjan~\cite{goldberg1988new}. The full push-relabel algorithm can be separated into two phases: the first phase computes a maximum preflow which can be used to solve the minimum $s$-$t$ cut problem, and the second phase performs additional computations to turn the preflow into a maximum $s$-$t$ flow. Because we only require minimum $s$-$t$ cuts, our method simply applies Phase 1. \subsection{Push-Relabel Overview} Section~\ref{maxflow} provides a basic overview of flow computations. The push-relabel algorithm is specifically a \emph{preflow} algorithm for maximum flows, meaning that during the course of the algorithm, all arcs satisfy capacity constraints, but each node $i$ is allowed to have more incoming flow than outgoing flow, i.e.\ a preflow satisfies a relaxation of the flow constraints: \begin{align*} { \sum_{(j,i) \in A} f_{ji} \geq \sum_{(i,k) \in A} f_{ik} \text{ for $i \in V \setminus \{s, t\}$}} \end{align*} where $F = (f_{ij})$ is a flow assignment for a directed graph $G$ with node set $V$ and arc set $A$. Push-relabel maintains a labeling function $\ell : V \rightarrow \{0,1,2, \hdots , n \}$ where $n = |V|$ is the number of nodes in a graph $G_{st}$ with distinguished source and sink. The algorithm can be initialized using any preflow and a labeling that gives a lower bound on the distance from each node to the sink in the residual graph.
The standard initialization is to set $\ell(s) = n$ and the label of all other nodes to zero. The preflow is initialized to be zero on all edges, and afterwards all edges from $s$ to its neighbors are saturated. This creates a positive \emph{excess} at these neighbors, i.e.\ more flow goes into the nodes than out. After initialization, the algorithm repeatedly visits \emph{active} nodes, which are nodes that have a label less than $n$ and a positive excess. For a selected active node $u$, the algorithm locally pushes flow across admissible edges, which are defined to be edges $(u,v)$ with nonzero residual capacity for which $\ell(u) = \ell(v) + 1.$ If no admissible edges exist, the label of the node is increased to be the minimum label such that an admissible arc is created. During the course of the algorithm, it can be shown that $\ell(u) \leq \ell(v) + 1$ for any arc $(u,v)$ with nonzero residual capacity, and furthermore $\ell(v)$ is a lower bound on the distance from node $v$ to the sink $t$, if there still exists a path of unsaturated edges from $v$ to $t$. Phase 1 of the algorithm is complete when there are no more active nodes to process. At this point the preflow is at a maximum, and the set of nodes with label $n$ forms the minimum cut set. \subsection{Label Selection Variants and Relabeling Heuristics} The generic push-relabel algorithm simply requires one to push flow across admissible edges whenever there still exist active nodes. This procedure is guaranteed to converge to the solution to the minimum cut problem, but better runtimes can be obtained by more carefully selecting the order in which to process active nodes. One approach is the first-in-first-out (FIFO) method, which begins by pushing all initial active nodes into a queue, and adding new nodes to the queue as they become active. Another approach is to continually select the highest-labeled node at each step. The push-relabel method can be made very fast in practice using efficient relabeling heuristics~\cite{Cherkassky1997}.
One simple but very effective heuristic is to periodically run a breadth-first search from the sink node $t$ and update the labels of each node to equal the distance from that node to $t$. Another heuristic is the gap relabeling heuristic, which checks whether there exist certain types of gaps in the labels that can be used to prove when certain nodes are no longer connected to the sink node $t$. \subsection{Implementation Details and Warm-Start Heuristic} In practice we implement the FIFO push-relabel algorithm in the Julia programming language and make use of the global relabeling heuristic. Although implementations of push-relabel in other languages have made efficient use of the highest-label variant and the gap relabeling heuristic~\cite{Cherkassky1997}, these require slightly more sophisticated data structures that are more challenging to maintain in Julia. Our implementation choices yield a very simple but efficient method for implementing Algorithm~\ref{alg:meta}. Running this procedure for various $\alpha$ using Algorithm~\ref{alg:wrapper} provides a fast local graph clustering algorithm. Because our method is flow-based and puts a higher emphasis on including seed nodes, we refer to it as \alg{FlowSeed}. An important part of our implementation of Algorithm~\ref{alg:meta} is a warm-start heuristic for computing consecutive minimum $s$-$t$ cuts on the growing subgraph. Each local subgraph $L$ corresponds to a local cut graph $L_{st}$ with added source and sink nodes. For the first local cut graph, we use the standard initialization for push-relabel, i.e.\ start with a preflow of zero and saturate all edges from $s$ to its neighbors. Applying push-relabel will return a maximum preflow $F$ on $L_{st}$, and thus a minimum $s$-$t$ cut which we use to update $L$ as outlined in Section~\ref{sl-meta}. After $L$ and $L_{st}$ are updated, the goal is to find an updated minimum cut, which can be accomplished by finding an updated maximum preflow.
Note that $F$ is no longer a maximum preflow on the updated $L_{st}$, since we have added new nodes and edges to $L_{st}$ and hence there are new ways to route flow from $s$ to $t$. However, $F$ will still be a valid preflow. Our warm-start procedure therefore initializes the next run of the push-relabel method with the preflow $F$, and sets the label of each node to be its distance to the sink in the corresponding residual graph. Initializing each consecutive maximum preflow computation in this way will be much more efficient than re-constructing $L_{st}$ from $L$ at each step and starting with a preflow of zero.
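For reference, Phase 1 of FIFO push-relabel can be sketched in a few dozen lines. This is our own simplified Python illustration rather than the authors' Julia implementation; it omits the global-relabeling and warm-start machinery described above, and recovers the cut side by a reverse breadth-first search from the sink in the residual graph rather than by reading off labels.

```python
from collections import defaultdict, deque

def pushrelabel_mincut(cap, s, t):
    """FIFO push-relabel, Phase 1 only: a maximum preflow yields the min s-t cut.

    cap maps directed arcs (u, v) to capacities. Returns (cut value,
    source-side node set). No relabeling heuristics, for clarity.
    """
    nodes = {u for arc in cap for u in arc}
    n = len(nodes)
    res, nbrs = defaultdict(float), defaultdict(set)
    for (u, v), c in cap.items():
        res[(u, v)] += c
        nbrs[u].add(v)
        nbrs[v].add(u)
    label = {v: 0 for v in nodes}
    label[s] = n                            # standard initialization
    excess = defaultdict(float)
    q = deque()
    for v in nbrs[s]:                       # saturate all arcs out of the source
        d = res[(s, v)]
        if d > 0:
            res[(s, v)] -= d
            res[(v, s)] += d
            excess[v] += d
            if v != t:
                q.append(v)
    while q:                                # discharge active nodes in FIFO order
        u = q.popleft()
        while excess[u] > 1e-12 and label[u] < n:
            pushed = False
            for v in nbrs[u]:               # push along admissible residual arcs
                if res[(u, v)] > 1e-12 and label[u] == label[v] + 1:
                    d = min(excess[u], res[(u, v)])
                    res[(u, v)] -= d
                    res[(v, u)] += d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and v not in q:
                        q.append(v)
                    pushed = True
                    if excess[u] <= 1e-12:
                        break
            if not pushed:                  # relabel: smallest label creating an admissible arc
                lifts = [label[v] + 1 for v in nbrs[u] if res[(u, v)] > 1e-12]
                label[u] = min([n] + lifts)
    # Leftover excess sits at label-n nodes; Phase 2 would route it back to s.
    # A reverse BFS from t over residual arcs gives the sink side of the cut.
    sink_side, bq = {t}, deque([t])
    while bq:
        v = bq.popleft()
        for u in nbrs[v]:
            if u not in sink_side and res[(u, v)] > 1e-12:
                sink_side.add(u)
                bq.append(u)
    return excess[t], nodes - sink_side
```

In the warm-started setting described above, consecutive calls on the growing cut graphs would reuse the residual capacities and excesses from the previous solve and reinitialize only the labels, by a BFS from $t$ in the residual graph.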
\section{Introduction} Face and kinship verification are two popular facial analysis tasks in computer vision. Face verification attempts to determine whether or not a given pair of face samples belong to the same subject\cite{face_verification}. Kinship verification aims to predict whether or not there is a kin relation between a given pair of face samples\cite{kinship1}. Face and kinship verification tasks based on deep learning technologies have achieved remarkable performance under controlled conditions in the past decade. However, the two tasks in the wild are still challenging due to large intra-class variations, such as complex background, occlusion, and a variety of variations in illumination, pose and facial expression. Prior works on face and kinship verification are generally devoted to improving prediction accuracy, while paying little attention to confidence estimation. We argue that reliability is also a key measure for evaluating the performance of these verification algorithms, and it becomes even more crucial for verification systems deployed in high-risk scenarios. In this paper, we focus on modeling confidence estimation and calibration for face and kinship verification. Accurate confidence estimation for face and kinship verification is often difficult. One reason is that labels describing the uncertainty of samples (or sample pairs) are usually not provided in most face or kinship verification datasets. To address this issue, existing approaches often train a separate network\cite{face_uncertainty_1} or an extra network branch\cite{face_uncertainty_2,probabilistic_face_embeddings,relative_uncertainty} to model data uncertainty. However, they are devoted to uncertainty estimation of individual face images rather than confidence estimation for given face pairs. On the other hand, modern neural networks tend to be over- or under-confident in their predictions.
Hence, the similarity score of a face pair may not correctly reflect the verification confidence. For these reasons, in this paper we propose a simple yet effective confidence measure for face and kinship verification. Our approach is inspired by the following observation: If the pair similarity is close to the decision threshold $\tau$, the model is less likely to make a correct prediction. On the other hand, if the pair similarity is far from the threshold, the model's prediction is more likely to be correct. The proposed measure allows face and kinship verification models to transform the similarity score into a confidence score for a given face pair. We further develop a new algorithm to adjust the similarity of face pairs in angular space, so that the calibrated confidence can well quantify the predictive confidence while maintaining the verification accuracy. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig1} \centering \caption{The pipeline of our proposed confidence-calibrated face and kinship verification. Different from previous methods, our method gives additional well-calibrated confidence estimation on the prediction, by introducing a simple confidence measure and a flexible calibrator that can be directly applied to most existing face and kinship verification models without modifications.} \label{Fig_1} \end{figure} The pipeline of our confidence-calibrated face and kinship verification is illustrated in \textcolor{blue}{Fig. 1}. The main contributions of this work can be outlined below: 1) We introduce a new confidence measure for probabilistic face and kinship verification, allowing any off-the-shelf verification models to efficiently estimate the prediction confidence. 2) We develop a confidence-calibrated algorithm via angle scaling, which can be directly applied to existing face and kinship verification models without model modifications. 
To the best of our knowledge, this is the first general confidence-calibrated solution to face and kinship verification in modern context. 3) Extensive experiments are conducted on both face and kinship datasets, and the results demonstrate that our proposed method achieves state-of-the-art calibration performance. The rest of the paper is organized as follows. In section II, we first review prior works on face and kinship verification, and then discuss related works on confidence estimation in face analysis. In section III, we elaborate our proposed confidence measure for face and kinship verification, followed by our confidence-calibrated approach. In section IV, we present experimental results and analysis. Finally, we conclude the paper in section V. \section{Related Works} \subsection{Face and Kinship Verification} Face verification is a well-studied research topic in computer vision. In early studies, the low-level handcrafted features were used for verification and identification. The development of deep neural networks in the past decade has greatly improved the performance of face verification. DeepFace\cite{deepface} and DeepID\cite{deepid,deepid2,deepid2+,deepid3} were the first studies to introduce deep CNNs into face recognition, which explored deep models to learn effective high-level features, thus boosting the face recognition performance. FaceNet\cite{facenet} proposed the triplet loss to learn a direct mapping from face images to a compact Euclidean space, with distances directly related to a measure of face similarity. Wen et al.\cite{center} proposed a center loss to enhance the compactness of intra-class samples by minimizing the Euclidean distance between deep feature vectors and their corresponding class centers, while combining a softmax loss to guarantee inter-class differences. 
Large-Margin Softmax\cite{lsoftmax} inspired new ideas for face verification, and many excellent studies have emerged since then, such as SphereFace\cite{sphereface}, CosFace\cite{cosface}, and ArcFace\cite{arcface}. These methods penalize the angles between deep features and their corresponding class weights in angular space, which effectively improves the discriminative ability of the feature embeddings. Note that the angular distance measure has gradually replaced the Euclidean distance measure as the most popular choice for face verification, since the cosine of the angle and the softmax loss are inherently consistent. The facial kinship verification problem dates back to the pioneering work by Fang et al.\cite{kinship1}. Since then, a variety of approaches have been proposed. These approaches can be roughly divided into two categories: traditional methods based on feature or metric learning, and deep learning-based approaches developed in recent years. The former seeks to learn a feature encoder or distance metric from pairwise kinship samples; the representative work of this line is neighborhood repulsed metric learning (NRML)\cite{kinship2}, whose goal is to learn a distance metric that maximizes the distance between negative pairs while minimizing the distance between positive pairs. Following this, \cite{kinship3,kinship4,kinship5,kinship6,kinship7,kinship8} expanded on this concept by developing novel approaches that combine multiple features\cite{kinship3,kinship6}, multiple views\cite{kinship5,kinship6,kinship7}, multiple similarities\cite{kinship4,kinship7}, and denoising\cite{kinship8}.
Recently, these traditional methods have been renovated with deep learning techniques\cite{kinship9,kinship10,kinship11,kinship12,kinship13,kinship14,kinship15,kinship16,kinship17,kinship18,kinship19}, such as deep metric learning, deep feature representation, or their combination\cite{kinship9,kinship12,kinship15}. In addition, a variety of kinship recognition solutions based on generative modeling\cite{kinship20,kinship21,kinship23} or graph representation\cite{kinship22} have been developed to further improve the robustness of kinship verification. As discussed above, most existing face and kinship verification methods focus on accuracy, while ignoring confidence estimation for their predictions. Even though a few attempts have been made recently to estimate the prediction confidence\cite{bmvc2022}, the estimated confidence can be inaccurate due to the poor calibration of modern DNNs\cite{uncertainty_3}. In contrast, our method provides well-calibrated predictive confidence while maintaining the verification accuracy. \subsection{Uncertainty and Confidence Estimation in Face Analysis} In recent years, a variety of computer vision tasks, ranging from object detection\cite{uncertainty_detection_1,uncertainty_detection_2} and semantic segmentation\cite{uncertainty_segmentation_1,uncertainty_segmentation_2} to image retrieval\cite{uncertainty_retrieval_1,uncertainty_retrieval_2}, have introduced uncertainty estimation into deep models to improve the robustness and interpretability of the system. Uncertainty can generally be classified into two types: aleatoric uncertainty and epistemic uncertainty\cite{uncertainty_1}. The former relates to noise in the data, whereas the latter relates to uncertainty in the parameters of the prediction model.
Gal and Ghahramani\cite{uncertainty_2} proposed to model predictive uncertainty by dropout training in deep neural networks as approximate Bayesian inference. Kendall and Gal\cite{uncertainty_1} designed a Bayesian deep learning framework that combines input-dependent aleatoric uncertainty with epistemic uncertainty. Guo et al.\cite{uncertainty_3} investigated various factors affecting the calibration of deep models, and evaluated the performance of multiple calibration methods for classification tasks. There are a few works on uncertainty analysis in face and kinship verification. Xie et al.\cite{face_uncertainty_1} proposed to train an independent network to measure the quality of face images. Shi et al.\cite{probabilistic_face_embeddings} proposed to learn the variance of the feature embedding, and to measure the likelihood of a face pair belonging to the same latent distribution. In \cite{relative_uncertainty}, the relative uncertainty of a face pair is learned through an additional network branch, and the uncertainty is used as a weight to fuse the features of samples with different labels. In addition, various attempts have been made to incorporate uncertainty analysis into face representation learning\cite{face_uncertainty_2,face_uncertainty_3,face_uncertainty_4}. Existing methods for modeling uncertainty in facial feature learning can only provide an uncalibrated uncertainty estimate at prediction time, which does not reflect the probability of the prediction being correct. For verification tasks, the confidence measure and calibration approach we propose provide well-calibrated prediction confidence in the decision-making process, which directly represents the probability that the prediction is correct and aids in risk assessment. The most closely related work to ours is the approach proposed in \cite{bmvc2022}, where the uncertainty of the similarity score is estimated and propagated to the predictive confidence in face verification.
However, this approach does not take confidence calibration into account, leading to inaccurate confidence estimates. Experimental results on four widely-used datasets demonstrate that our proposed post-hoc calibration method works well for face and kinship verification in terms of the calibration metric. \section{Our Approach} \subsection{Preliminaries: Face and Kinship Verification} Given a face pair $(X_1,X_2)$, existing face and kinship verification methods typically compute a cosine similarity $s(X_1,X_2)=\tfrac{\langle f(X_1), f(X_2)\rangle}{\lVert f(X_1)\rVert\,\lVert f(X_2)\rVert}$ and then apply a decision threshold $\tau$ for the final prediction, where $f(\cdot)$ is the feature embedding, often represented by a modern DNN. Formally, the prediction function $g$ for face and kinship verification is defined as: \begin{equation} \label{equation 1} g(s,\tau)=\left\{ \begin{aligned} 1, & \qquad s \geq \tau \\ -1, & \qquad s < \tau \\ \end{aligned} \right. \end{equation} where $\tau$ is a predefined threshold, often set empirically based on the ROC curve of a held-out validation set. Recent works on face and kinship verification show that verification systems built even on popular DNNs may not work reliably, especially when the face images are partially occluded or of low resolution. Therefore, confidence estimation plays a key role in such safety-critical tasks. However, most existing verification methods based on similarity measures fail to quantify the prediction confidence of face pairs, as the similarity score itself does not exactly reflect the prediction confidence. To address this issue, we propose a simple and flexible confidence measure to quantify the predictive confidence of any face and kinship verification model.
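To make the preliminaries concrete, the similarity computation and the decision rule of Eq. (1) can be sketched in a few lines of NumPy; the embedding vectors below are hypothetical stand-ins for the output of $f(\cdot)$, and the threshold value is illustrative only:

```python
import numpy as np

def cosine_similarity(x1, x2):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2)))

def predict(s, tau):
    """Decision rule of Eq. (1): 1 = same identity/kin, -1 = otherwise."""
    return 1 if s >= tau else -1

# Hypothetical embeddings (stand-ins for f(X1), f(X2)) and threshold.
x1 = np.array([0.8, 0.6, 0.1])
x2 = np.array([0.7, 0.7, 0.0])
s = cosine_similarity(x1, x2)
print(predict(s, tau=0.3))  # -> 1 (s is about 0.985)
```

Note that only the two scalars $s$ and $\tau$ enter the decision; this is exactly why the raw score carries no calibrated notion of confidence.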
Specifically, we estimate the predictive confidence based on the similarity $s$ and threshold $\tau$, and then calibrate the confidence in angular space so that the calibrated confidence relates directly to the probability of the prediction being correct. \subsection{Confidence Measure} Intuitively, if the similarity score of a face pair equals one of the boundary values ($-1$ or $1$), the predictive confidence reaches its maximum value of 1. Clearly, the predictive confidence relates not only to the similarity score but also to the decision threshold. \begin{figure}[!t] \centering \includegraphics[width=2.8in]{Fig2} \centering \caption{An illustration of the relation between similarity score and predictive confidence. The closer the similarity score $s$ is to the decision threshold $\tau$, the lower the confidence $c$ becomes.} \label{Fig_2} \end{figure} To model the relation between the similarity score and the predictive confidence for verification problems, we first define a confidence function $\varphi(s,\tau)$ based on the similarity $s$ and threshold $\tau$: \begin{equation} \label{equation 2} \varphi(s,\tau)=\left\{ \begin{aligned} \frac{s-\tau}{1-\tau}, & \qquad g(s,\tau)=1 \\ \frac{\tau-s}{1+\tau}, & \qquad g(s,\tau)=-1 \\ \end{aligned} \right. \end{equation} where $s \in [-1,1]$ and $\tau \in (-1,1)$. The similarity threshold $\tau$ divides the cosine similarity interval $[-1,1]$ into a positive part and a negative part, of sizes $1-\tau$ and $1+\tau$, respectively, as shown in \textcolor{blue}{Fig. 2}. Note that face or kinship verification is a binary decision problem; hence, the probabilistic confidence $c$ for a prediction $g$ based on Eq. (1) should be greater than or equal to 0.5. As such, the confidence measure $c(s,\tau)$ on the prediction $g$ can be written as: \begin{equation} \label{equation_3} c(s,\tau)=\frac{1}{2}\varphi(s,\tau)+\frac{1}{2} \end{equation} As shown in \textcolor{blue}{Fig. 2}, if $s$ takes a value farther away from $\tau$ (e.g., $s=s_2$), the predictive confidence $c$ becomes much higher (e.g., $c=c_2$). On the contrary, if the value of $s$ is closer to $\tau$ (e.g., $s=s_1$), the confidence $c$ becomes lower (e.g., $c=c_1$), indicating that the model is more likely to make an incorrect prediction. So far, we have proposed a flexible confidence measure that can be directly applied to any off-the-shelf verification model to yield a probabilistic estimate of prediction confidence. \subsection{Confidence Calibration via Angular Scaling} In practice, there is a mismatch between the verification accuracy and the predictive confidence in face and kinship verification problems. This can be due to the fact that modern DNNs are often miscalibrated\cite{uncertainty_3}. Consequently, the proposed confidence measure may not produce confidence estimates that match the expected accuracy of the model. In classification tasks, it is common to calibrate the model by scaling the logits\cite{uncertainty_3}. However, if a similar process is used to directly scale the similarity $s$ in the verification task, the similarity $s$, the threshold $\tau$, and the similarity interval are scaled equally, resulting in no change to the prediction confidence according to Eq. (2). Inspired by the work of \cite{arcface,sphereface,cosface}, we propose a post-hoc calibration method via angular scaling, which calibrates the prediction confidence well without retraining the model. We propose to calibrate the prediction confidence by adjusting the angle $\theta$ between face pairs on a recalibration dataset $D_c=\{(X_i,Z_i),y_i\}_{i=1}^N$, given a feature encoder $f(\cdot)$. For a sample pair $(X_i,Z_i)$ with label $y_i\in \{-1,1\}$ in $D_c$, its feature representations and cosine similarity are denoted by $x_i=f(X_i)$, $z_i=f(Z_i)$, and $s(x_i,z_i)$, respectively.
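Before detailing the calibration, note that the confidence measure of Eqs. (2) and (3) is simply a piecewise linear map from the similarity interval onto $[0.5,1]$; a minimal sketch (with hypothetical values of $s$ and $\tau$) is:

```python
def confidence(s, tau):
    """Confidence measure of Eqs. (2)-(3): maps a cosine similarity
    s in [-1, 1] and a threshold tau in (-1, 1) to a value in [0.5, 1]."""
    if s >= tau:                       # predicted positive, g(s, tau) = 1
        phi = (s - tau) / (1.0 - tau)  # Eq. (2), upper branch
    else:                              # predicted negative, g(s, tau) = -1
        phi = (tau - s) / (1.0 + tau)  # Eq. (2), lower branch
    return 0.5 * phi + 0.5             # Eq. (3)

# Hypothetical scores: confidence grows as s moves away from tau.
tau = 0.3
print(confidence(0.31, tau))  # just above tau -> close to 0.5
print(confidence(0.95, tau))  # far above tau  -> close to 1.0
```

At $s=\tau$ the confidence bottoms out at 0.5 (a coin flip), and it reaches 1 at either boundary $s=\pm 1$.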
Our angular scaling calibration (ASC) method learns to adjust the angle between face pairs based on the similarity set $S_c=\{s(x_i,z_i)\}_{i=1}^N$ and its labels $Y_c=\{y_i\}_{i=1}^N$ (derived from the recalibration set $D_c$). Let $\theta_i$ denote the angle between the two feature representations $x_i$ and $z_i$: \begin{equation} \label{equation_4} \theta_i=\arccos(s(x_i,z_i)) \end{equation} ASC learns to adjust the angle $\theta_i$ using two learnable scalar parameters $w$ and $b$: \begin{equation} \label{equation_5} \theta_i'=w\theta_i+b \end{equation} where $w>0$ and $\theta_i' \in [0,\pi]$, and the calibrated similarity is: \begin{equation} \label{equation_6} s'(x_i,z_i)=\cos(\theta_i') \end{equation} Accordingly, the calibrated threshold becomes: \begin{equation} \label{equation_7} \tau'=\cos(w\arccos(\tau)+b) \end{equation} To learn the calibration parameters $w$ and $b$, we minimize the following objective function: \begin{equation} \label{equation_8} L=\frac{1}{N}\sum_{i=1}^{N}(s'(x_i,z_i)-y_{i})^2 \end{equation} where $y_{i}=1$ if $(x_i,z_i)$ is a positive pair, and $y_{i}=-1$ otherwise. The calibration procedure via angular scaling is summarized in \textbf{Algorithm 1}. In the prediction stage, we use the learned $w$ and $b$ to compute the calibrated similarity $s'$ and threshold $\tau'$, and then update the predictive confidence according to Eq. (3). \begin{algorithm}[b] \caption{Angular Scaling Calibration (ASC)}\label{alg:alg1} \begin{algorithmic} \STATE \STATE \textbf{Input: }\text{Recalibration set} $D_c=\{(X_i,Z_i),y_i\}_{i=1}^N$. \STATE \textbf{Output: }\text{Calibration parameters} $w,b$. \STATE \text{Set} $w=1,b=0$; \STATE \text{Compute the similarity set $S_c$ of feature pairs;} \STATE \text{Compute the angle set $\theta_c$ according to Eq. (4);} \STATE \textbf{while}\text{ not converged}\textbf{ do} \STATE \hspace{0.2cm}\text{1. Update the angle set according to Eq. (5);} \STATE \hspace{0.2cm}\text{2. Update the similarity set according to Eq. (6);} \STATE \hspace{0.2cm}\text{3. Update $w$, $b$ by optimizing the objective function (8);} \STATE\textbf{end} \STATE \textbf{return} $w,b$ \end{algorithmic} \end{algorithm} Now, we briefly explain why angular scaling calibration ensures that the model maintains its prediction accuracy. Let $\tau$ denote the uncalibrated threshold, and consider a positive pair $(X_p,Z_p)$ and a negative pair $(X_n,Z_n)$ with features $(x_p,z_p)$ and $(x_n,z_n)$, respectively. Clearly, $s(x_n,z_n) < \tau \leq s(x_p,z_p)$, and since $\arccos$ decreases monotonically on the interval $[-1,1]$, we have $\theta_n > \theta_{\tau} \geq \theta_p$. We then use $w,b$ to adjust the angles between the feature vectors. Because $w>0$, we have $\theta_n'>\theta_{\tau}' \geq \theta_{p}'$, and $\theta_{n}',\theta_{\tau}',\theta_{p}' \in [0,\pi]$ are satisfied. Likewise, we conclude that $\cos(\theta_{n}')<\cos(\theta_{\tau}') \leq \cos(\theta_{p}')$, and hence $s'(x_n,z_n)<\tau' \leq s'(x_p,z_p)$. This indicates that our proposed ASC calibrates the predictive confidence while maintaining the verification accuracy. To evaluate the calibration performance of our method for face and kinship verification, we use the Expected Calibration Error (ECE)\cite{bayesian_binning} as the metric. ECE is defined as the weighted average difference across bins between the expected accuracy and the predictive confidence: \begin{equation} \label{equation_9} ECE=\sum_{m=1}^{M}\frac{\vert B_m \vert}{n}\vert acc(B_m)-conf(B_m) \vert \end{equation} where $B_m$ denotes the set of predictions falling into the $m$th bin, $M$ is the number of bins partitioning the confidence interval $[0.5,1]$, $n$ is the total number of samples, and $acc(B_m)$ and $conf(B_m)$ are the accuracy and confidence of the $m$th bin, calculated by Eq. (10) and Eq. (11), respectively.
\begin{equation} \label{equation_10} acc(B_m)=\frac{1}{\vert B_m \vert}\sum_{i \in B_m}1(g(s_i,\tau)=y_i) \end{equation} \begin{equation} \label{equation_11} conf(B_m)=\frac{1}{\vert B_m \vert}\sum_{i \in B_m}c(s_i,\tau) \end{equation} In general, there are two bin-partition schemes\cite{binning_schema}: one uses equal-width bins, so that the width of each bin is $1/(2M)$ (the interval $[0.5,1]$ divided into $M$ equal parts); the other assigns an equal number of samples to each bin. Note that a well-calibrated verification model has a small ECE. In particular, when the accuracy of every bin equals its confidence, the ECE is zero, indicating that the model is perfectly calibrated. \section{Experiments} To validate the effectiveness of our proposed method for face and kinship verification, we conduct extensive experiments on four face and kinship datasets: FIW, KinFaceW, LFW, and IJB-C. In this section, we report the accuracy, the mean confidence, and the ECE before and after calibration on these datasets, and present the experimental analysis in detail.
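Before presenting results, the ECE metric of Eqs. (9)--(11) can be made concrete with a short sketch using equal-width bins over $[0.5,1]$; the confidence scores and correctness labels below are hypothetical, and the `correct` array directly encodes the indicator $1(g(s_i,\tau)=y_i)$ of Eq. (10):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error, Eq. (9), with M equal-width bins
    over the confidence interval [0.5, 1]."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.5, 1.0, n_bins + 1)
    n = len(conf)
    total = 0.0
    for m in range(n_bins):
        lo, hi = edges[m], edges[m + 1]
        # bins are right-open except the last, so conf == 1.0 is counted
        in_bin = (conf >= lo) & ((conf < hi) if m < n_bins - 1 else (conf <= hi))
        if in_bin.any():
            acc_m = corr[in_bin].mean()    # acc(B_m), Eq. (10)
            conf_m = conf[in_bin].mean()   # conf(B_m), Eq. (11)
            total += in_bin.sum() / n * abs(acc_m - conf_m)
    return total

# Hypothetical predictions from a slightly overconfident model.
scores  = [0.95, 0.92, 0.88, 0.62, 0.55]
correct = [1, 0, 1, 1, 0]
print(round(ece(scores, correct, n_bins=5), 3))  # -> 0.384
```

The equal-frequency variant replaces the fixed edges with sample quantiles but leaves the weighted sum of Eq. (9) unchanged.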
\begin{table*}[htbp] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Accuracy (\%) and Calibration (\%) Performance of Different Models and Losses on the FIW Dataset \label{tab:table1}} \centering \begin{tabular}{c|c|ccccccccccc|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {12}{c|}{Accuracy} & \multirow{2}{*}{ECE} \\ \cline{3-14} &&SS & BB & \ SIBS & FD & MD & FS & MS & GFGD & GMGD & GFGS & GMGS & AVG \\ \hline InfoNCE&ResNet101&82.84&82.06&80.32&76.69&80.59&82.81&76.47&78.10&71.38&71.02&60.34&80.19&22.85\\ InfoNCE&ResNet50&79.00&74.81&75.92&71.82&74.15&78.60&72.10&73.36&70.26&63.67&61.45&75.01&16.51\\ InfoNCE&ResNet34&76.49&73.46&72.99&70.60&72.42&75.88&69.23&67.27&62.83&62.86&56.98&72.87&12.72\\ InfoNCE&VGG16&67.20&66.32&62.27&61.61&63.93&66.90&62.08&67.95&61.71&63.67&56.98&64.68&5.97\\ \hline Triplet&ResNet101&80.49&79.50&78.79&73.34&79.04&80.53&74.34&77.88&77.32&68.98&59.22&77.94&20.93\\ Triplet&ResNet50&78.63&75.80&75.92&72.51&74.05&76.49&67.17&68.40&66.17&66.94&55.31&74.18&16.97\\ Triplet&ResNet34&75.17&73.95&71.29&72.26&71.03&76.26&69.39&69.75&64.68&62.04&59.78&72.86&10.46\\ Triplet&VGG16&66.14&64.80&64.32&63.76&62.13&67.01&64.02&65.69&68.03&66.53&50.84&64.65&7.87\\ \hline ArcFace&ResNet101&80.23&79.36&79.38&75.62&77.52&81.19&73.54&76.75&75.84&71.02&61.45&78.02&19.39\\ ArcFace&ResNet50&76.53&73.13&73.23&72.79&73.63&78.20&70.11&65.91&57.25&71.43&63.13&73.87&16.00\\ ArcFace&ResNet34&74.67&74.06&70.94&71.12&71.00&75.66&67.25&64.55&62.83&58.37&61.45&72.16&10.04\\ ArcFace&VGG16&67.98&67.11&61.86&60.86&63.93&68.00&61.60&61.63&65.43&55.92&58.10&64.83&7.69\\ \hline Softmax&ResNet101&78.51&76.60&75.57&75.48&76.64&81.28&76.57&73.81&67.66&68.98&65.92&77.25&16.24\\ Softmax&ResNet50&75.90&75.21&73.29&69.57&73.49&78.06&70.48&66.37&60.97&77.14&64.25&73.74&15.77\\ Softmax&ResNet34&76.34&74.18&72.47&70.76&72.76&76.75&68.43&65.46&64.31&60.00&59.22&73.06&8.90\\ 
Softmax&VGG16&67.76&67.80&61.45&61.75&64.35&66.32&60.59&60.05&68.40&56.73&56.42&64.71&5.40\\ \hline \end{tabular} \end{table*} \subsection{Datasets and Experimental Settings} \textsl{1) FIW \cite{fiw_1}:} FIW (Families In the Wild)\cite{fiw_1,fiw_2,fiw_3,fiw_4} is the largest visual kinship recognition dataset to date, consisting of over 13,000 facial images from 1,000 families with 11 pairwise kin relationships, which can be divided into three sub-groups: sibling types (i.e., sister-sister, brother-brother, and sister-brother), parent-child types (i.e., father-son, father-daughter, mother-son, and mother-daughter), and grandparent-grandchild types (i.e., grandfather-grandson, grandfather-granddaughter, grandmother-grandson, and grandmother-granddaughter). FIW is a challenging dataset because all images are captured in the wild, with partial occlusion and large variations in background, pose, expression, and illumination. InfoNCE\cite{simclr}, Triplet\cite{facenet}, Softmax\cite{deepface}, and ArcFace\cite{arcface} are four widely-used losses in face and kinship verification tasks. We use these four loss functions to train models with different backbones pre-trained on MS-Celeb-1M\cite{msceleb}. We follow the data partitioning and evaluation protocol of RFIW 2021\cite{fiw_5}. More specifically, we employ RetinaFace\cite{retinaface} for face alignment, and each sample is cropped to $112\times112$ pixels. SGD is used as the optimizer, with a momentum of 0.9 and a learning rate of 0.0001. \textsl{2) KinFaceW\cite{kinship2}:} The KinFaceW-I and KinFaceW-II datasets are widely used kinship datasets comprising Internet-collected images of public figures and celebrities. The two datasets contain four kin relations: Father-Son, Father-Daughter, Mother-Son, and Mother-Daughter. For these four relations, KinFaceW-I has 156, 134, 116, and 127 image pairs with a kin relationship, respectively. In KinFaceW-II, there are 250 image pairs for each relationship.
KinFaceW-I differs from KinFaceW-II in that, in most cases, the face pairs in the former are from distinct photos, while those in the latter are from the same photo. Each image in the two datasets is aligned and resized to $64\times64$ pixels for feature extraction. We evaluate two traditional kinship verification methods and one deep learning-based method on the KinFaceW datasets: 1) NRML (LBP)\cite{kinship2}: for each face image, we extract 3776-dimensional uniform-pattern LBP features for kinship verification. 2) NRML (HOG)\cite{kinship2}: we first partition each image into non-overlapping blocks, then extract a 9-dimensional HOG feature for each block and concatenate them to generate a 2880-dimensional feature vector. 3) InfoNCE (ResNet18): we use the InfoNCE loss to fine-tune a ResNet-18 model pre-trained on MS1MV2\cite{arcface}. More specifically, the model is optimized using SGD, and the temperature parameter is set to 0.1. In this experiment, we use a five-fold cross-validation strategy in the image-unrestricted setting. \textsl{3) LFW\cite{lfw1,lfw2}:} LFW contains 13,233 face images collected from 5,749 individuals, of which 1,680 have two or more face images. We use CASIA-WebFace\cite{casia} as the training set to train networks of different depths with the ArcFace loss, and evaluate the verification performance of these models on the LFW dataset. In the experiments, we set the initial learning rate and momentum of the SGD optimizer to 0.1 and 0.9, respectively. We calibrate the confidence of the four models using ASC in the 10-fold cross-validation scheme. In addition, the maximum number of iterations and the learning rate for ASC are set to 1000 and 0.01, respectively. \textsl{4) IJB-C \cite{IJB-C}:} The IJB-C (IARPA Janus Benchmark-C) dataset contains 31,334 images and 11,779 videos captured under unconstrained conditions from 3,531 subjects.
There are a total of 23,124 templates, with 19,557 positive matches and 15,638,932 negative matches, enabling performance evaluation at low FAR. In this experiment, we adopt the 1:1 Mixed Verification protocol; that is, a single feature vector is constructed by taking a weighted average of the frames in the template, so that all frames of a video belonging to the same subject have the same cumulative weight as a single still image. We perform face verification on the IJB-C dataset using different deep models trained on MS1MV2\cite{arcface} with the ArcFace and Triplet loss functions. To perform confidence estimation and calibration, we use five-fold cross-validation with stratified sampling of positive and negative samples, with four folds serving as the recalibration set and the remaining one as the test set. \begin{figure*}[!htbp] \centering \includegraphics[width=6.0in]{Fig3} \centering \caption{The trend of verification accuracy (\%) vs. ECE (\%) of different verification models and losses during training on the FIW dataset.} \label{Fig_3} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=6.0in]{Fig4} \centering \caption{Similarity distributions before calibration (a) and after calibration with ASC (b). From top to bottom are results of ResNet101, ResNet50, ResNet34, and VGG16, respectively.} \label{Fig_4} \end{figure*} \subsection{Experimental Results on FIW} In \textcolor{blue}{Table I}, we show the verification accuracy, the weighted average accuracy, and the ECE on the FIW dataset for different models and losses. From these results, we make the following observations: \begin{enumerate} \item For all four loss functions, models with deeper architectures have better feature representation abilities. However, we also observe that increasing depth has a negative influence on model calibration. \item Compared with the Softmax and ArcFace losses, the InfoNCE and Triplet losses achieve better verification accuracy.
As the model gets deeper, the performance improvement becomes more noticeable; however, the calibration error also increases. \item The ArcFace loss leverages an angular margin penalty to enforce extra intra-class compactness and inter-class discrepancy, achieving better verification performance but a larger calibration error than the Softmax loss. \end{enumerate} In \textcolor{blue}{Fig. 3}, we plot the accuracy-calibration curves during the training of ResNet34 and ResNet101, intuitively showing the trend of the models' ECE and verification accuracy with the four loss functions. It can be seen that the ECE of both models generally increases with their verification accuracy. Also, if the model uses a loss that optimizes the feature representation, such as the InfoNCE or Triplet loss, it has a larger ECE at the same accuracy. In addition, we conduct experiments on the FIW dataset to evaluate the effects of angular scaling calibration. \textcolor{blue}{Table II} shows the ECE and thresholds before and after calibration. Specifically, we adopt L-BFGS\cite{lbfgs} as the calibration optimizer, with a learning rate of 0.01 and a maximum of 1000 iterations. The calibrated thresholds and calibration parameters ($w$ and $b$) are used to evaluate the ECE on the test data. From \textcolor{blue}{Table II}, we observe that our ASC achieves good calibration across different models and loss functions. We also find that the calibrated threshold tends toward 0, so that positive and negative pairs are equally divided in the cosine similarity interval. \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{Fig5} \centering \caption{Accuracy vs. Confidence plots of different models and losses before and after calibration (with ASC) on the FIW dataset.} \label{Fig_5} \end{figure} \textcolor{blue}{Fig. 4} plots the cosine similarity distributions of different models with the InfoNCE loss before and after calibration on the FIW dataset. We observe that the similarity before calibration is distributed over a small interval, leading to poor calibration, whereas the similarity after calibration is distributed uniformly over the different similarity levels. \textcolor{blue}{Fig. 5} shows the accuracy-confidence plots of different models with the Triplet and Softmax losses on the FIW dataset before and after calibration. The plots show how well the predictive confidence is calibrated via angular scaling for all models, where perfect calibration corresponds to the line $y=x$. We can see that our ASC obtains well-calibrated predictions, as the recalibrated confidence correctly reflects the prediction accuracy. \textcolor{blue}{Table III} summarizes the average verification accuracy and the average confidence before and after calibration of different models on the three sub-groups of FIW. It can be seen that the confidence before calibration is significantly lower or higher than the actual verification accuracy, while the recalibrated confidence matches the true accuracy of the model well. Therefore, verification models calibrated by ASC can provide reliable confidence estimates to support decision-making in face and kinship verification systems.
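For reference, the ASC recalibration of Algorithm 1 (Eqs. (4)--(8)) admits a compact sketch. The version below uses plain gradient descent on $w$ and $b$ rather than the L-BFGS optimizer adopted in our experiments, and the recalibration scores are hypothetical:

```python
import numpy as np

def fit_asc(similarities, labels, lr=0.01, n_iter=1000):
    """Fit the angular scaling parameters (w, b) of Eqs. (4)-(8) by
    gradient descent on L = mean((cos(w*theta + b) - y)^2)."""
    theta = np.arccos(np.clip(similarities, -1.0, 1.0))   # Eq. (4)
    y = np.asarray(labels, dtype=float)
    w, b = 1.0, 0.0                                       # init as in Algorithm 1
    for _ in range(n_iter):
        t = w * theta + b                                 # Eq. (5)
        s_cal = np.cos(t)                                 # Eq. (6)
        grad_t = -2.0 * (s_cal - y) * np.sin(t) / len(y)  # dL/dt_i
        w -= lr * np.sum(grad_t * theta)                  # dL/dw = sum_i dL/dt_i * theta_i
        b -= lr * np.sum(grad_t)                          # dL/db = sum_i dL/dt_i
    return w, b

# Hypothetical recalibration scores: positives above, negatives below tau = 0.3.
s = np.array([0.9, 0.6, 0.4, 0.2, 0.0, -0.5])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = fit_asc(s, y)
tau_cal = np.cos(np.arccos(0.3) * w + b)                  # Eq. (7)
```

Eq. (7) then maps the original threshold to its calibrated counterpart, after which the confidence of Eq. (3) is recomputed from $s'$ and $\tau'$.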
\begin{table}[!tbp] \renewcommand\arraystretch{1.2} \scriptsize \caption{ ECE(\%) and Threshold $\tau$ before and after calibration (with ASC) on the FIW Dataset \label{tab:table2}} \centering \begin{tabular}{c|c|c|c|c|c} \hline Loss & Model& ECE & \makecell{ECE\\w/ ASC} & $\tau$ & \makecell{$\tau$\\w/ ASC} \\ \hline InfoNCE&ResNet101&22.85&2.08&0.13&-0.01\\ InfoNCE&ResNet50&16.51&1.81&0.32&-0.03\\ InfoNCE&ResNet34&12.72&2.11&0.37&0.02\\ InfoNCE&VGG16&5.97&1.39&0.39&0.01\\ \hline Triplet&ResNet101&20.93&2.83&0.11&0.02\\ Triplet&ResNet50&16.97&2.83&0.32&-0.07\\ Triplet&ResNet34&10.46&1.87&0.26&0.06\\ Triplet&VGG16&7.87&1.54&0.71&0.08\\ \hline ArcFace&ResNet101&19.39&2.62&0.13&0.07\\ ArcFace&ResNet50&16.00&2.45&0.39&-0.09\\ ArcFace&ResNet34&10.04&3.40&0.14&-0.02\\ ArcFace&VGG16&7.69&2.90&0.20&-0.07\\ \hline Softmax&ResNet101&16.24&2.79&0.13&-0.04\\ Softmax&ResNet50&15.77&2.80&0.64&-0.05\\ Softmax&ResNet34&8.90&3.56&0.17&-0.01\\ Softmax&VGG16&5.40&1.81&0.37&0.01\\ \hline \end{tabular} \end{table} \begin{table*}[!htbp] \renewcommand\arraystretch{1.1} \scriptsize \caption{ Accuracy (\%), Confidence (\%) before and after calibration (with ASC) for different age groups on the FIW dataset \label{tab:table3}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn {3}{c|}{BB, SS, SIBS}& \multicolumn {3}{c|}{FS, MS, FD, MD} & \multicolumn {3}{c}{GFGS, GMGS, GFGD, GMGD} \\ \cline{2-10} &Conf.& Conf. w/ ASC & Acc. & Conf. & Conf. w/ ASC & Acc. & Conf. & Conf. w/ ASC & Acc. 
\\ \hline ResNet101&58.20&81.98&79.80&56.71&78.50&77.01&55.78&75.25&72.89\\ ResNet50&59.02&75.83&76.91&56.41&71.68&72.89&54.49&67.66&65.49\\ ResNet34&63.10&72.88&74.12&62.02&71.09&72.44&60.31&66.52&65.32\\ VGG16&70.34&64.11&65.26&69.20&63.63&64.29&67.86&63.22&64.08\\ \hline \end{tabular} \end{table*} \begin{table*}[!htbp] \renewcommand\arraystretch{1.1} \scriptsize \caption{ ACCURACY (\%), CONFIDENCE (\%), and ECE (\%) before and after calibration (with ASC) for different models on the KinFaceW datasets \label{tab:table4}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{DataSet} & \multicolumn {2}{c|}{\multirow{2}{*}{Methods}} & \multicolumn {3}{c|}{FS} & \multicolumn {3}{c|}{FD} & \multicolumn {3}{c|}{MS} & \multicolumn {3}{c|}{MD} & \multicolumn {3}{c}{Mean} \\ \cline{4-18} &\multicolumn {2}{c|}{}& Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE \\ \hline \multirow{6}{*}{KFW-I} & \multirow{2}{*}{\makecell{NRML\\(LBP)}} & w/o ASC &\multirow{2}{*}{81.43}&57.01&27.03&\multirow{2}{*}{69.42}&55.40&19.24&\multirow{2}{*}{67.23}&55.94&18.04&\multirow{2}{*}{72.87}&56.08&19.48&\multirow{2}{*}{72.74}&56.11&20.95 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&79.94&1.97&\multirow{2}{*}{}&68.43&1.39&\multirow{2}{*}{}&64.64&2.49&\multirow{2}{*}{}&71.05&2.70&\multirow{2}{*}{}&71.02&2.14 \\ \cline{2-18} & \multirow{2}{*}{\makecell{NRML\\(HOG)}} & w/o ASC &\multirow{2}{*}{83.68}&57.52&25.91&\multirow{2}{*}{74.64}&56.88&19.35&\multirow{2}{*}{71.56}&56.32&20.26&\multirow{2}{*}{79.96}&57.16&23.69&\multirow{2}{*}{77.46}&56.97&22.30 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&81.10&3.24&\multirow{2}{*}{}&72.20&4.19&\multirow{2}{*}{}&68.40&3.98&\multirow{2}{*}{}&76.02&6.95&\multirow{2}{*}{}&74.43&4.59 \\ \cline{2-18} & \multirow{2}{*}{\makecell{InfoNCE\\(ResNet18)}} & w/o ASC 
&\multirow{2}{*}{83.34}&55.24&27.29&\multirow{2}{*}{82.88}&54.99&27.75&\multirow{2}{*}{81.01}&54.86&25.17&\multirow{2}{*}{85.04}&55.93&28.31&\multirow{2}{*}{83.07}&55.26&27.13 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&82.03&1.47&\multirow{2}{*}{}&81.50&1.86&\multirow{2}{*}{}&79.13&1.97&\multirow{2}{*}{}&83.91&1.25&\multirow{2}{*}{}&81.64&1.64 \\ \hline \multirow{6}{*}{KFW-II} & \multirow{2}{*}{\makecell{NRML\\(LBP)}} & w/o ASC &\multirow{2}{*}{79.20}&54.96&24.16&\multirow{2}{*}{71.60}&54.53&18.34&\multirow{2}{*}{72.20}&54.51&18.90&\multirow{2}{*}{68.40}&54.70&16.10&\multirow{2}{*}{72.85}&54.68&19.38 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&78.65&1.22&\multirow{2}{*}{}&70.88&1.57&\multirow{2}{*}{}&71.82&1.67&\multirow{2}{*}{}&67.25&1.67&\multirow{2}{*}{}&72.15&1.53 \\ \cline{2-18} & \multirow{2}{*}{\makecell{NRML\\(HOG)}} & w/o ASC &\multirow{2}{*}{80.80}&55.75&24.90&\multirow{2}{*}{72.80}&55.14&19.16&\multirow{2}{*}{74.80}&55.41&20.57&\multirow{2}{*}{70.40}&55.24&18.48&\multirow{2}{*}{74.70}&55.39&20.78 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&79.76&1.06&\multirow{2}{*}{}&72.24&1.86&\multirow{2}{*}{}&74.27&1.68&\multirow{2}{*}{}&68.82&1.89&\multirow{2}{*}{}&73.77&1.62 \\ \cline{2-18} & \multirow{2}{*}{\makecell{InfoNCE\\(ResNet18)}} & w/o ASC &\multirow{2}{*}{84.00}&55.37&28.13&\multirow{2}{*}{78.20}&54.50&23.20&\multirow{2}{*}{82.20}&55.15&26.55&\multirow{2}{*}{84.00}&55.16&28.34&\multirow{2}{*}{82.10}&55.05&26.56 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&83.48&0.60&\multirow{2}{*}{}&78.16&0.81&\multirow{2}{*}{}&80.55&1.54&\multirow{2}{*}{}&83.17&1.06&\multirow{2}{*}{}&81.34&1.00 \\ \hline \end{tabular} \end{table*} \subsection{Experimental Results on KinFaceW} In \textcolor{blue}{Table IV}, we report the accuracy, 
the mean confidence, and the ECE before and after calibration on the KinFaceW-I and KinFaceW-II datasets. We observe that models on the same-gender kin relationships (FS, MD) achieve higher verification accuracy and confidence than those on the different-gender ones (FD, MS), with the gain in accuracy being more pronounced. For NRML, HOG features achieve better verification accuracy and confidence than LBP, but at the cost of a larger ECE. Compared to the traditional verification methods, the deep-learning methods are less confident, with the smallest confidence and largest ECE on most kin relationships. Moreover, the accuracy fluctuates across the bins due to the small size of the KinFaceW datasets and the limited number of samples in each bin. Nevertheless, the models calibrated by our ASC consistently show improved calibration performance in terms of both ECE and mean ECE. \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{Fig6} \centering \caption{Accuracy vs. Confidence plots before and after calibration (with ASC) for different models on the LFW dataset.} \label{Fig_6} \end{figure} \begin{table}[!h] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Accuracy (\%), Confidence (\%), and ECE (\%) before and after calibration (with ASC) on the LFW dataset \label{tab:table5}} \centering \begin{tabular}{c|c|c|c|c|c} \hline Model &Acc.&Conf.&\makecell{Conf.\\w/ ASC}&ECE&\makecell{ECE\\w/ ASC} \\ \hline ResNet101&99.55&70.65&98.79&28.48&0.78\\ ResNet50&99.23&71.68&98.33&27.84&0.93\\ ResNet34&98.88&73.14&97.81&25.43&1.13\\ VGG16&93.57&70.89&93.31&24.67&1.54\\ \hline \end{tabular} \end{table} \begin{table*}[!ht] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Threshold $\tau$, TAR (\%), Accuracy (\%), Confidence (\%), ECE (\%), and AUC (\%) before calibration on the IJB-C dataset \label{tab:table6}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {5}{c|}{FAR=1e-2} & \multicolumn {5}{c|}{FAR=1e-4} &
\multirow{2}{*}{AUC} \\ \cline{3-12} &&$\tau$ & TAR & Acc. & Conf. & ECE & $\tau$ & TAR & Acc. & Conf. & ECE& \\ \hline \multirow{3}{*}{ArcFace}& ResNet101 & 0.03&97.17&99.92&65.54&34.89&0.22&93.92&99.94&70.76&29.64&99.25 \\ \cline{2-13} & ResNet50 & -0.17&96.69&99.92&68.62&31.36&0.09&90.16&99.93&75.47&23.97&99.25 \\ \cline{2-13} & ResNet34 & -0.20&96.02&99.89&71.99&27.98&0.02&90.56&99.93&77.56&21.93&99.24 \\ \hline Triplet&ResNet50& 0.15&93.35&99.71&64.73&35.10&0.28&86.31&99.90&68.31&31.36&99.07 \\ \hline \end{tabular} \end{table*} \begin{table*}[!ht] \renewcommand\arraystretch{1.2} \scriptsize \caption{Accuracy (\%), Confidence (\%), and ECE (\%) after calibration (with ASC) on the IJB-C dataset \label{tab:table7}} \centering \begin{tabular}{c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {3}{c|}{FAR=1e-2} & \multicolumn {3}{c}{FAR=1e-4} \\ \cline{3-8} && Acc. & Conf. & ECE & Acc.& Conf. & ECE \\ \hline \multirow{3}{*}{ArcFace}& ResNet101 &99.92&98.77&1.30&99.94&99.52&0.47 \\ \cline{3-8} & ResNet50 &99.92&98.80&1.18&99.93&99.46&0.53\\ \cline{3-8} & ResNet34 &99.89&98.06&1.90&99.93&99.36&0.63 \\ \hline Triplet&ResNet50&99.71&97.72&2.24&99.90&97.95&2.19 \\ \hline \end{tabular} \end{table*} \begin{table*}[!hb] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Comparison of ECE (\%) for different post-calibration methods on four datasets \label{tab:table8}} \centering \begin{tabular}{c|c|c|c|c|c|c} \hline Dataset & Loss (Method) & Model (Feature) & Uncalibrated & Hist. 
Binning & Isotonic & ASC \\ \hline \multirow{16}{*}{FIW} & \multirow{4}{*}{InfoNCE}&ResNet101&22.85&2.47&2.51&\textbf{2.08}\\ \cline{3-7} &&ResNet50&16.51&\textbf{1.57}&1.62&1.81\\ \cline{3-7} &&ResNet34&12.72&2.89&2.60&\textbf{2.11}\\ \cline{3-7} &&VGG16&5.97&2.14&2.17&\textbf{1.39}\\ \cline{2-7} & \multirow{4}{*}{Triplet}&ResNet101&20.93&\textbf{1.22}&1.36&2.83\\ \cline{3-7} &&ResNet50&16.97&2.83&\textbf{2.82}&2.83\\ \cline{3-7} &&ResNet34&10.46&2.80&2.57&\textbf{1.87}\\ \cline{3-7} &&VGG16&7.87&\textbf{1.51}&4.70&1.54\\ \cline{2-7} &\multirow{4}{*}{ArcFace}&ResNet101&19.39&3.14&2.95&\textbf{2.62}\\ \cline{3-7} &&ResNet50&16.00&4.75&4.78&\textbf{2.45}\\ \cline{3-7} &&ResNet34&10.04&3.45&3.76&\textbf{3.40}\\ \cline{3-7} &&VGG16&7.69&1.66&\textbf{1.40}&2.90\\ \cline{2-7} & \multirow{4}{*}{Softmax}&ResNet101&16.24&1.37&\textbf{1.20}&2.79\\ \cline{3-7} &&ResNet50&15.77&2.92&2.85&\textbf{2.80}\\ \cline{3-7} &&ResNet34&8.90&3.82&3.65&\textbf{3.56}\\ \cline{3-7} &&VGG16&5.40&2.05&1.84&\textbf{1.81}\\ \hline \multirow{3}{*}{KinFaceW-I}& \multirow{2}{*}{NRML} &LBP &20.95&\textbf{1.90}&3.06&2.14 \\ \cline{3-7} &&HOG &22.30&4.98&\textbf{4.56}&4.59\\ \cline{2-7} &InfoNCE& ResNet18& 27.13&1.69&2.92&\textbf{1.64}\\ \hline \multirow{3}{*}{KinFaceW-II}& \multirow{2}{*}{NRML} &LBP &19.38&\textbf{1.08}&1.34&1.53\\ \cline{3-7} &&HOG &20.78&2.22&\textbf{1.37}&1.62\\ \cline{2-7} &InfoNCE& ResNet18& 26.56&2.35&2.22&\textbf{1.00}\\ \hline \multirow{4}{*}{LFW}& \multirow{4}{*}{ArcFace}&ResNet101&28.48&1.22&\textbf{0.16}&0.78\\ \cline{3-7} &&ResNet50&27.84&0.98&\textbf{0.70}&0.93\\ \cline{3-7} &&ResNet34&25.43&2.35&1.76&\textbf{1.13}\\ \cline{3-7} &&VGG16&24.67&2.56&1.58&\textbf{1.54}\\ \hline \multirow{4}{*}{IJB-C}& \multirow{3}{*}{ArcFace}&ResNet101&29.64&1.18&0.97&\textbf{0.47} \\ \cline{3-7} &&ResNet50&23.97&1.49&1.00&\textbf{0.53}\\ \cline{3-7} &&ResNet34&21.93&1.72&1.14&\textbf{0.63}\\ \cline{2-7} &Triplet&ResNet50&31.36&2.24&\textbf{2.07}&2.19\\ \hline \end{tabular} \end{table*} 
\begin{figure*}[!htbp] \centering \includegraphics[width=5.5in]{Fig7} \centering \caption{ Visualization of confidence calibration by our ASC method. From top to bottom are examples from FIW, LFW, KinFaceW-I, and KinFaceW-II, respectively. The captions below each sample pair show the confidence adjustment before and after calibration. For better visualization, sample pairs with different confidence levels are shown in three groups: \textbf{low} confidence (a), \textbf{medium} confidence (b), and \textbf{high} confidence (c).} \label{Fig_7} \end{figure*} \subsection{Experimental Results on LFW} To further evaluate our method on the face verification task, we also conduct experiments on the LFW dataset. As can be seen from \textcolor{blue}{Table V}, deeper architectures may achieve better verification accuracy, but their calibration performance gets worse, which is consistent with our earlier observations on the FIW dataset. Moreover, \textcolor{blue}{Fig. 6} plots the correlation between the confidence and accuracy of the four models before and after calibration. We observe that ASC improves the calibration of all four models. Due to the high accuracy of ResNet34, ResNet50, and ResNet101 on the LFW dataset, the calibration methods tend to map the predictive confidence to a high level to match the expected accuracy. However, as shown in \textcolor{blue}{Fig. 6}, the calibrated confidence of all four models does not degenerate to a constant solution; instead, it is well matched to the prediction accuracy at different levels. \subsection{Experimental Results on IJB-C} We present in \textcolor{blue}{Table VI} the threshold $\tau$, the TAR (True Accept Rate), the accuracy, the mean confidence, and the ECE before calibration on the IJB-C dataset. The threshold increases as the FAR (False Accept Rate) drops from 1e-2 to 1e-4, resulting in a lower TAR and ECE. Also, it can be seen that the verification accuracy of all groups is above 99.7\%.
Four groups of experiments show that the ECE decreases with decreasing TAR, which is consistent with the correlation between ECE and accuracy. In these experiments, deeper models also obtain a larger ECE while achieving a better TAR, which is in line with the observations from the previous experiments. Moreover, evaluating the effect of the loss function on the ECE reveals that the ResNet50 trained with the Triplet loss has a larger ECE than the models trained with ArcFace, even surpassing the ResNet101 model trained with ArcFace. \textcolor{blue}{Table VII} presents the accuracy, the mean confidence, and the ECE with ASC on the IJB-C dataset. The calibrated confidence is closer to the true accuracy, and the ECE is closer to zero. This shows that our proposed ASC yields good calibration performance for models with varying numbers of layers and loss functions, and it maintains a small calibration error even when the threshold changes. \subsection{Comparison with Previous Post-calibration Methods} To validate the superiority of our ASC for confidence calibration in face and kinship verification, two widely used post-calibration methods are chosen for performance comparison: Histogram binning\cite{hist} and Isotonic regression\cite{iso}. While there are other post-calibration methods, such as Temperature scaling\cite{uncertainty_3}, Platt scaling\cite{platt}, Bayesian binning\cite{bayesian_binning}, and Matrix and vector scaling, they cannot be directly applied to confidence calibration for face and kinship verification tasks. For a fair comparison, we adopt the same experimental setting for all compared methods. \textbf{Histogram binning} is a non-parametric calibration method. In this paper, Histogram binning first divides the cosine similarity interval $[-1,1]$ evenly into $M$ bins $B_1$, $B_2$, ..., $B_M$, with bin boundaries $-1=a_{1}<a_{2}<...<a_{M}<a_{M+1}=1$, where bin $B_m$ refers to $\left( a_{m}, a_{m+1}\right]$.
We assign each $B_m$ a calibrated score $\eta_m$, which is optimized by: \begin{equation} \label{equation_12} \mathop{\arg\min}\limits_{\eta_{1},...,\eta_{M}}\sum\limits_{m=1}^{M}\sum\limits_{i=1}^{N} 1(a_{m}<s_{i} \leq a_{m+1})(\eta_{m}-y_{i})^2 \end{equation} where $s_i$ and $y_{i}$ are the uncalibrated similarity score and the label of the $i$th sample pair, respectively. If the $i$th sample pair falls into bin $B_m$, its calibrated similarity is set to $\eta_m$. Additionally, the threshold $\tau'$ is updated with $\eta_t$, where $\tau \in \left( a_{t}, a_{t+1}\right]$. \textbf{Isotonic regression} is another commonly used non-parametric calibration method that can be applied to similarity score calibration by learning a piecewise constant function $f$, that is, $s'=f(s)$, where $f(s_{i})\leq f(s_{j})$ if $s_{i} \leq s_{j}$. The goal of Isotonic regression is to minimize the squared loss $\sum_{i=1}^{N}(f(s_{i})-y_{i})^2$, which is optimized to learn the boundaries and calibrated scores of each interval. We employ the PAVA\cite{pava} algorithm to optimize the function $f$, and the calibrated threshold $\tau'$ is set to $f(\tau)$. \textcolor{blue}{Table VIII} compares the calibration performance of the three methods (ASC, Histogram binning, and Isotonic regression) on the FIW, KinFaceW, LFW, and IJB-C datasets in terms of the ECE metric. We observe that ASC achieves the best calibration performance in most cases. Our method is also easy to implement, as it involves only two parameters to be optimized, and they can be efficiently learned by most gradient-based optimizers. \subsection{Visualization Analysis} In \textcolor{blue}{Fig. 7}, we present confidence calibration results on sample pairs from the four datasets. We divide the sample pairs into three groups according to the recalibrated confidence value: low confidence, medium confidence, and high confidence.
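As an aside on the Histogram binning baseline described above, the minimizer of Eq. (12) is simply the mean of the pair labels within each bin, which the following NumPy sketch makes explicit. The function names are ours, for illustration only, and are not taken from any released implementation:

```python
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=15):
    """Solve Eq. (12): eta_m is the mean of the labels y_i in bin B_m.

    scores : uncalibrated cosine similarities s_i in [-1, 1]
    labels : pair labels y_i in {-1, +1}
    """
    # equal-width bin boundaries a_1 = -1 < a_2 < ... < a_{M+1} = 1
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    etas = np.zeros(n_bins)
    for m in range(n_bins):
        # bin B_m = (a_m, a_{m+1}]; also catch scores exactly at the left end
        in_bin = (scores > edges[m]) & (scores <= edges[m + 1])
        if m == 0:
            in_bin |= scores == edges[0]
        # the squared-loss minimizer over the bin is the bin mean of the labels
        etas[m] = labels[in_bin].mean() if in_bin.any() else 0.0
    return edges, etas

def apply_histogram_binning(scores, edges, etas):
    """Map each score to the calibrated value eta_m of its bin."""
    idx = np.clip(np.searchsorted(edges, scores, side="left") - 1, 0, len(etas) - 1)
    return etas[idx]
```

A sample pair is then declared positive when its calibrated score exceeds the calibrated threshold, obtained by passing $\tau$ through the same mapping.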
As expected, the visualization results show that samples with low confidence are usually accompanied by facial occlusions, pose variations, age discrepancies, or image blurring, whereas samples with high confidence tend to have the same pose, background, and illumination conditions. Consistent with the previous experiments, we observe that models usually make incorrect predictions on low-confidence samples and correct predictions on high-confidence samples. The samples with medium confidence usually exhibit similar issues to those with low confidence, but the facial region is complete, so there is a higher probability that they are correctly predicted by the face or kinship verification models. In addition, ASC calibrates well both over-confident samples (the first row) and under-confident samples (the second, third, and fourth rows), bringing the calibrated confidence close to the average accuracy. All in all, our proposed confidence calibration method via angular scaling does not require retraining of the verification model while maintaining its verification performance. \section{Conclusions} In this work, we propose a simple yet effective confidence measure for face and kinship verification tasks, allowing any off-the-shelf verification model to estimate the decision confidence in an efficient and flexible way. We further introduce a confidence calibration method using angular scaling for face and kinship verification, which is retraining-free and accuracy-preserving. We perform comprehensive experiments on four widely used face and kinship verification datasets to investigate the calibration of popular face/kinship verification models, and the effectiveness of our ASC is validated in these experiments. Experimental comparisons with two popular post-calibration methods demonstrate that our ASC achieves superior calibration performance.
In future work, incorporating uncertainty modeling into confidence estimation and confidence calibration appears to be an interesting and promising direction for face and kinship verification. \section{Introduction} Face and kinship verification are two popular facial analysis tasks in computer vision. Face verification attempts to determine whether or not a given pair of face samples belongs to the same subject\cite{face_verification}, while kinship verification aims to predict whether or not there is a kin relation between a given pair of face samples\cite{kinship1}. Over the past decade, face and kinship verification based on deep learning technologies have achieved remarkable performance under controlled conditions. However, both tasks remain challenging in the wild due to large intra-class variations, such as complex backgrounds, occlusion, and variations in illumination, pose, and facial expression. Prior works on face and kinship verification are generally devoted to improving prediction accuracy, while paying little attention to confidence estimation. We argue that reliability is also a key measure for evaluating the performance of these verification algorithms, and it becomes even more crucial for verification systems deployed in high-risk scenarios. In this paper, we focus on modeling confidence estimation and calibration for face and kinship verification. Accurate confidence estimation for face and kinship verification is often difficult. One reason is that labels describing the uncertainty of sample pairs are usually not provided in most face or kinship verification datasets. To address this issue, existing approaches often train a separate network\cite{face_uncertainty_1} or an extra network branch\cite{face_uncertainty_2,probabilistic_face_embeddings,relative_uncertainty} to model data uncertainty.
However, these methods are devoted to uncertainty estimation for individual face images rather than confidence estimation for face pairs. On the other hand, modern neural networks tend to be over- or under-confident in their predictions. Hence, the similarity score of a face pair may not correctly reflect the verification confidence. For these reasons, in this paper we propose a simple yet effective confidence measure for face and kinship verification. Our approach is inspired by the following observation: if the pair similarity is close to the decision threshold $\tau$, the model is less likely to make a correct prediction; conversely, if the pair similarity is far from the threshold, the model's prediction is more likely to be correct. The proposed measure allows face and kinship verification models to transform the similarity score into a confidence score for a given face pair. We further develop a new algorithm to adjust the similarity of face pairs in angular space, so that the calibrated confidence well quantifies the predictive confidence while maintaining the verification accuracy. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{Fig1} \centering \caption{The pipeline of our proposed confidence-calibrated face and kinship verification. Different from previous methods, our method provides an additional well-calibrated confidence estimate for the prediction, by introducing a simple confidence measure and a flexible calibrator that can be directly applied to most existing face and kinship verification models without modifications.} \label{Fig_1} \end{figure} The pipeline of our confidence-calibrated face and kinship verification is illustrated in \textcolor{blue}{Fig. 1}. The main contributions of this work are as follows: 1) We introduce a new confidence measure for probabilistic face and kinship verification, allowing any off-the-shelf verification model to efficiently estimate the prediction confidence.
2) We develop a confidence calibration algorithm via angular scaling, which can be directly applied to existing face and kinship verification models without model modifications. To the best of our knowledge, this is the first general confidence-calibrated solution for face and kinship verification in the modern deep learning context. 3) Extensive experiments are conducted on both face and kinship datasets, and the results demonstrate that our proposed method achieves state-of-the-art calibration performance. The rest of the paper is organized as follows. In section II, we first review prior works on face and kinship verification, and then discuss related works on confidence estimation in face analysis. In section III, we elaborate on our proposed confidence measure for face and kinship verification, followed by our confidence calibration approach. In section IV, we present experimental results and analysis. Finally, we conclude the paper in section V. \section{Related Works} \subsection{Face and Kinship Verification} Face verification is a well-studied research topic in computer vision. Early studies used low-level handcrafted features for verification and identification. The development of deep neural networks over the past decade has greatly improved the performance of face verification. DeepFace\cite{deepface} and DeepID\cite{deepid,deepid2,deepid2+,deepid3} were the first studies to introduce deep CNNs into face recognition, exploring deep models to learn effective high-level features and thus boosting face recognition performance. FaceNet\cite{facenet} proposed the triplet loss to learn a direct mapping from face images to a compact Euclidean space, where distances directly correspond to a measure of face similarity.
Wen et al.\cite{center} proposed a center loss to enhance the compactness of intra-class samples by minimizing the Euclidean distance between deep feature vectors and their corresponding class centers, while combining a softmax loss to guarantee inter-class differences. Large-Margin Softmax\cite{lsoftmax} inspired new ideas for face verification, and many notable studies have emerged since then, such as SphereFace\cite{sphereface}, CosFace\cite{cosface}, and ArcFace\cite{arcface}. These methods penalize the angles between deep features and their corresponding weights in angular space, effectively improving the discriminative ability of the feature embeddings. Note that angular distance measures have gradually replaced the Euclidean distance as the most popular choice for face verification, since the cosine of the angle and the softmax loss are inherently consistent. The facial kinship verification problem dates back to the pioneering work of Fang et al.\cite{kinship1}. Since then, a variety of approaches have been proposed, which can be roughly divided into two categories: traditional methods based on feature or metric learning, and deep learning-based approaches developed in recent years. The former seeks to learn a feature encoder or distance metric from pairwise kinship samples; the representative work of this line is neighborhood repulsed metric learning (NRML)\cite{kinship2}, whose goal is to learn a distance metric that maximizes the distance between negative pairs while minimizing the distance between positive pairs. Following this, \cite{kinship3,kinship4,kinship5,kinship6,kinship7,kinship8} expanded on this concept by developing novel approaches combining multiple features\cite{kinship3,kinship6}, multiple views\cite{kinship5,kinship6,kinship7}, multiple similarities\cite{kinship4,kinship7}, and denoising\cite{kinship8}.
Recently, these traditional methods have been renovated with deep learning techniques\cite{kinship9,kinship10,kinship11,kinship12,kinship13,kinship14,kinship15,kinship16,kinship17,kinship18,kinship19}, such as deep metric learning, deep feature representation, or their combination, to solve the kinship recognition problem\cite{kinship9,kinship12,kinship15}. Besides, a variety of kinship recognition solutions based on generative modeling\cite{kinship20,kinship21,kinship23} or graph representation\cite{kinship22} have been developed to further improve the robustness of kinship verification. As discussed above, most existing face and kinship verification methods focus on accuracy, while ignoring confidence estimation for their predictions. Even though a few attempts have been made recently to estimate the prediction confidence\cite{bmvc2022}, the estimated confidence can be inaccurate due to the poor calibration of modern DNNs\cite{uncertainty_3}. In contrast, our method provides well-calibrated predictive confidence while maintaining the verification accuracy. \subsection{Uncertainty and Confidence Estimation in Face Analysis} In recent years, a variety of computer vision tasks, ranging from object detection\cite{uncertainty_detection_1,uncertainty_detection_2} and semantic segmentation\cite{uncertainty_segmentation_1,uncertainty_segmentation_2} to image retrieval\cite{uncertainty_retrieval_1,uncertainty_retrieval_2}, have introduced uncertainty estimation into deep models to improve the robustness and interpretability of the system. Uncertainty can generally be classified into two types: aleatoric uncertainty and epistemic uncertainty\cite{uncertainty_1}. The former relates to the noise-related uncertainty in the data, whereas the latter relates to the parameter uncertainty in the prediction model.
Gal and Ghahramani\cite{uncertainty_2} proposed to model predictive uncertainty by dropout training in deep neural networks as approximate Bayesian inference. Kendall and Gal\cite{uncertainty_1} designed a Bayesian deep learning framework that combines input-dependent aleatoric uncertainty with epistemic uncertainty. Guo et al.\cite{uncertainty_3} investigated various factors affecting the uncertainty of deep models, and evaluated the performance of multiple calibration methods for classification tasks. There are a few works on uncertainty analysis in face and kinship verification. Xie et al.\cite{face_uncertainty_1} proposed to train an independent network to measure the quality of face images. Shi et al.\cite{probabilistic_face_embeddings} proposed to learn the variance of the feature embedding and to measure the likelihood of a face pair belonging to the same latent distribution. In \cite{relative_uncertainty}, the relative uncertainty of a face pair is learned through an additional network branch, and the uncertainty is used as a weight to fuse the features with two different labels. In addition, various attempts have been made to incorporate uncertainty analysis into the process of face representation learning\cite{face_uncertainty_2,face_uncertainty_3,face_uncertainty_4}. Existing methods for modeling uncertainty in facial feature learning can only provide an uncalibrated uncertainty estimate at the prediction stage, which does not reflect the probability of the prediction being correct. For verification tasks, the confidence measure and calibration approach we propose provide well-calibrated prediction confidence in the decision-making process, which directly represents the probability that the prediction is correct and aids in risk assessment. The most closely related work to ours is the approach proposed in \cite{bmvc2022}, where the uncertainty of the similarity score is estimated and propagated to the predictive confidence in face verification.
However, this approach does not take confidence calibration into account, leading to inaccurate confidence estimates. Experimental results on four widely used datasets demonstrate that our proposed post-calibration method works well for face and kinship verification in terms of the calibration metric. \section{Our Approach} \subsection{Preliminaries: Face and Kinship Verification} Given a face pair $(X_1,X_2)$, existing face and kinship verification methods typically compute a cosine similarity $s(X_1,X_2)=\tfrac{\langle f(X_1), f(X_2)\rangle}{\Vert f(X_1)\Vert \, \Vert f(X_2)\Vert}$ and then use a decision threshold $\tau$ for the final prediction, where $f(\cdot)$ is the feature embedding, often represented by modern DNNs. Formally, the prediction function $g$ for face and kinship verification can be defined as: \begin{equation} \label{equation 1} g(s,\tau)=\left\{ \begin{aligned} 1, & \qquad s \geq \tau \\ -1, & \qquad s < \tau \\ \end{aligned} \right. \end{equation} where $\tau$ is a predefined threshold, often set empirically based on the ROC curve on a held-out validation set. Recent works on face and kinship verification show that verification systems built even on popular DNNs may not work reliably, especially when the face images are partially occluded or of low resolution. Therefore, confidence estimation for face and kinship verification plays a key role in such safety-critical tasks. However, most existing verification methods based on similarity measures fail to quantify the prediction confidence of face pairs, as the similarity score itself does not exactly reflect the prediction confidence. To address this issue, we propose a simple and flexible confidence measure to quantify the predictive confidence of any face and kinship verification model.
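To make Eq. (1) concrete, the similarity computation and the decision rule amount to only a few lines. This is a generic sketch of our own, independent of any specific embedding network:

```python
import numpy as np

def cosine_similarity(x1, x2):
    """Cosine similarity s(X1, X2) between two embedding vectors f(X1), f(X2)."""
    return float(np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2)))

def predict(s, tau):
    """Decision rule g(s, tau) of Eq. (1): 1 (same subject / kin) if s >= tau, else -1."""
    return 1 if s >= tau else -1
```

In practice, the threshold $\tau$ would be set empirically from the ROC curve on a held-out validation set, as noted above.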
Specifically, we estimate the predictive confidence based on the similarity $s$ and threshold $\tau$, and then calibrate the confidence in angular space so that the calibrated confidence relates directly to the probability of the prediction being correct. \subsection{Confidence Measure} Intuitively, if the similarity score of a face pair is equal to the left or right boundary value ($-1$ or $1$), the predictive confidence reaches its maximum value of 1. Evidently, the predictive confidence relates not only to the similarity score but also to the decision threshold. \begin{figure}[!t] \centering \includegraphics[width=2.8in]{Fig2} \centering \caption{An illustration of the relation between the similarity score and the predictive confidence. The closer the similarity score $s$ is to the decision threshold $\tau$, the lower the confidence $c$ becomes.} \label{Fig_2} \end{figure} To model the relation between the similarity score and the predictive confidence for verification problems, we first define a confidence function $\varphi(s,\tau)$ based on the similarity $s$ and threshold $\tau$: \begin{equation} \label{equation 2} \varphi(s,\tau)=\left\{ \begin{aligned} \frac{s-\tau}{1-\tau}, & \qquad g(s,\tau)=1 \\ \frac{\tau-s}{1+\tau}, & \qquad g(s,\tau)=-1 \\ \end{aligned} \right. \end{equation} where $s \in [-1,1]$ and $\tau \in (-1,1)$. The similarity threshold $\tau$ divides the cosine similarity interval $[-1,1]$ into a positive part and a negative part, with sizes $1-\tau$ and $1+\tau$, respectively, as shown in \textcolor{blue}{Fig. 2}. Note that face or kinship verification is a binary decision problem. Hence, the probabilistic confidence $c$ for a prediction $g$ based on Eq. (1) should be greater than or equal to 0.5. As such, the confidence measure $c(s,\tau)$ on the prediction $g$ can be written as: \begin{equation} \label{equation_3} c(s,\tau)=\frac{1}{2}\varphi(s,\tau)+\frac{1}{2} \end{equation} As shown in \textcolor{blue}{Fig. 2}, if $s$ takes a value farther away from $\tau$ (e.g., $s=s_2$), the predictive confidence $c$ becomes much higher (e.g., $c=c_2$). On the contrary, if the value of $s$ is closer to $\tau$ (e.g., $s=s_1$), the confidence $c$ becomes lower (e.g., $c=c_1$), indicating that the model is more likely to make an incorrect prediction. So far, we have proposed a flexible confidence measure that can be directly applied to any off-the-shelf verification model to yield a probabilistic estimate of the prediction confidence. \subsection{Confidence Calibration via Angular Scaling} In practice, there is a gap between the verification accuracy and the predictive confidence in face and kinship verification problems. This can be due to the fact that modern DNNs are often miscalibrated\cite{uncertainty_3}. The proposed confidence measure may therefore not produce confidence well matched to the expected accuracy of the model. In classification tasks, it is common to calibrate the model by scaling the logits\cite{uncertainty_3}. However, if a similar process were used to directly scale the similarity $s$ in the verification task, the similarity $s$, the threshold $\tau$, and the similarity interval would be scaled equally, resulting in no change to the prediction confidence according to Eq. (2). Inspired by the work of \cite{arcface,sphereface,cosface}, we propose a post-calibration method via angular scaling, which can well calibrate the prediction confidence without retraining the model. We propose to calibrate the prediction confidence by adjusting the angle $\theta$ between face pairs on a recalibration dataset $D_c=\{(X_i,Z_i),y_i\}_{i=1}^N$, with a feature encoder denoted as $f(\cdot)$. For a sample pair $(X_i,Z_i)$ with label $y_i\in \{-1,1\}$ in $D_c$, its feature representations and cosine similarity are denoted by $x_i=f(X_i), z_i=f(Z_i)$, and $s(x_i,z_i)$, respectively.
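The two-branch confidence measure just defined is easy to implement; the following sketch (illustrative code of ours, not part of any released package) mirrors Eqs. (2)-(3):

```python
def confidence(s, tau):
    """Confidence c(s, tau) of Eqs. (2)-(3).

    phi measures how far the similarity s lies from the threshold tau,
    normalized by the size of the positive part (1 - tau) or the negative
    part (1 + tau) of the interval [-1, 1]; c rescales phi to [0.5, 1].
    """
    if s >= tau:                       # prediction g(s, tau) = 1
        phi = (s - tau) / (1.0 - tau)
    else:                              # prediction g(s, tau) = -1
        phi = (tau - s) / (1.0 + tau)
    return 0.5 * phi + 0.5
```

For example, with $\tau = 0.3$, a pair at $s = 0.3$ receives the minimum confidence 0.5, while $s = 1$ and $s = -1$ both receive confidence 1.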
Our angular scaling calibration (ASC) method learns to adjust the angle between the face pairs based on the similarity set $S_c=\{s(x_i,z_i)\}_{i=1}^N$ and its labels $Y_c=\{y_i\}_{i=1}^N$ (derived from the recalibration set $D_c$). Let $\theta_i$ denote the angle between the two face representation features $x_i$ and $z_i$: \begin{equation} \label{equation_4} \theta_i=\arccos(s(x_i,z_i)) \end{equation} Our ASC learns to adjust the angle $\theta_i$ using two learnable scalar parameters $w$ and $b$: \begin{equation} \label{equation_5} \theta_i'=w\theta_i+b \end{equation} where $w>0$ and $\theta_i' \in [0,\pi]$, and the calibrated similarity is: \begin{equation} \label{equation_6} s'(x_i,z_i)=\cos(\theta_i') \end{equation} Accordingly, the calibrated threshold becomes: \begin{equation} \label{equation_7} \tau'=\cos(w\arccos(\tau)+b) \end{equation} To learn the calibration parameters $w$ and $b$, we minimize the following objective function: \begin{equation} \label{equation_8} L=\frac{1}{N}\sum_{i=1}^{N}(s'(x_i,z_i)-y_{i})^2 \end{equation} where $y_{i}=1$ if $(x_i,z_i)$ is a positive pair, and $y_{i}=-1$ otherwise. The details of the calibration procedure via angular scaling are summarized in \textbf{Algorithm 1}. In the prediction stage, we use the learned $w$ and $b$ to update the calibrated similarity $s'$ and the threshold $\tau'$, and then update the predictive confidence according to Eq. (3). \begin{algorithm}[b] \caption{Angular Scaling Calibration (ASC)}\label{alg:alg1} \begin{algorithmic} \STATE \STATE \textbf{Input: }\text{Recalibration set} $D_c=\{(X_i,Z_i),y_i\}_{i=1}^N$. \STATE \textbf{Output: }\text{Calibration parameters} $w,b$. \STATE \text{Set} $w=1,b=0$; \STATE \text{Compute the similarity set $S_c$ of feature pairs;} \STATE \text{Compute the angle set $\theta_c$ according to Eq. (4);} \STATE \textbf{while}\text{ not converged}\textbf{ do} \STATE \hspace{0.2cm}\text{1. Update the angle set according to Eq. (5);} \STATE \hspace{0.2cm}\text{2.
Update the similarity set according to Eq. (6);} \STATE \hspace{0.2cm}\text{3. Update $w$,$b$ by optimizing the objective function (8);} \STATE\textbf{end} \STATE \textbf{return} $w,b$ \end{algorithmic} \end{algorithm} Now, we briefly explain why the angular scaling calibration ensures that the model maintains its prediction accuracy. Let $\tau$ denote the uncalibrated threshold, and suppose we are given a positive pair $(X_p,Z_p)$ and a negative pair $(X_n,Z_n)$, whose features are $(x_p,z_p)$ and $(x_n,z_n)$, respectively. It is clear that $s(x_n,z_n) < \tau \leq s(x_p,z_p)$, and since $\arccos$ decreases monotonically on the interval $[-1,1]$, we have $\theta_n > \theta_{\tau} \geq \theta_p$. We then use $w,b$ to adjust the angle between the two feature vectors. Because $w>0$, we have $\theta_n'>\theta_{\tau}' \geq \theta_{p}'$, with $\theta_{n}',\theta_{\tau}',\theta_{p}' \in [0,\pi]$. Since $\cos$ decreases monotonically on $[0,\pi]$, it follows that $\cos(\theta_{n}')<\cos(\theta_{\tau}') \leq \cos(\theta_{p}')$, and hence $s'(x_n,z_n)<\tau' \leq s'(x_p,z_p)$. This shows that our proposed ASC calibrates the predictive confidence while maintaining the verification accuracy. To evaluate the calibration performance of our method for face and kinship verification, we use the Expected Calibration Error (ECE) \cite{bayesian_binning} as a metric. ECE is defined as the weighted average difference across bins between the expected accuracy and the predictive confidence: \begin{equation} \label{equation_9} ECE=\sum_{m=1}^{M}\frac{\vert B_m \vert}{n}\vert acc(B_m)-conf(B_m) \vert \end{equation} where $B_m$ represents all predictions that fall in the $m$th bin, $M$ denotes the number of bins partitioning the confidence interval $[0.5,1]$, $n$ is the total number of samples, and $acc(B_m)$ and $conf(B_m)$ are the accuracy and confidence of the $m$th bin, calculated by Eq. (10) and Eq. (11), respectively.
\begin{equation} \label{equation_10} acc(B_m)=\frac{1}{\vert B_m \vert}\sum_{i \in B_m}1(g(s_i,\tau)=y_i) \end{equation} \begin{equation} \label{equation_11} conf(B_m)=\frac{1}{\vert B_m \vert}\sum_{i \in B_m}c(s_i,\tau) \end{equation} In general, there are two schemes for bin partitioning \cite{binning_schema}: one uses equal-width intervals, i.e., the size of each bin is $1/(2M)$; the other requires each interval to contain an equal number of samples. Note that a well-calibrated verification model has a small ECE. In particular, when the accuracy of each bin equals its confidence, the ECE of the model is zero, indicating that it is perfectly calibrated. \section{Experiments} To validate the effectiveness of our proposed method for face and kinship verification, we conduct extensive experiments on four face and kinship datasets: FIW, KinFaceW, LFW, and IJB-C. In this section, we report the accuracy, the mean confidence, and the ECE before and after calibration on these datasets, and present the experimental analyses in detail.
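Before turning to the experiments, the pieces above can be made concrete in a short NumPy sketch. This is our own illustration, not the authors' code: the `confidence` mapping below is one plausible instance of the distance-to-threshold measure of Eqs. (2)-(3), and plain gradient descent stands in for the LBFGS optimizer used later in the experiments; `asc_fit`, `asc_apply`, and the equal-width ECE follow Eqs. (4)-(11).

```python
import numpy as np

def asc_fit(s, y, lr=0.01, iters=3000):
    """Learn scalars (w, b) minimizing mean((cos(w*theta + b) - y)^2), Eq. (8)."""
    w, b = 1.0, 0.0
    theta = np.arccos(np.clip(s, -1.0, 1.0))      # Eq. (4)
    for _ in range(iters):
        theta_c = w * theta + b                   # Eq. (5)
        s_c = np.cos(theta_c)                     # Eq. (6)
        g = 2.0 * (s_c - y) * (-np.sin(theta_c))  # d(loss_i)/d(theta_c)
        w -= lr * np.mean(g * theta)              # chain rule: d(theta_c)/dw = theta
        b -= lr * np.mean(g)                      # d(theta_c)/db = 1
    return w, b

def asc_apply(s, tau, w, b):
    """Calibrated similarities (Eq. 6) and calibrated threshold (Eq. 7)."""
    s_c = np.cos(w * np.arccos(np.clip(s, -1.0, 1.0)) + b)
    tau_c = np.cos(w * np.arccos(tau) + b)
    return s_c, tau_c

def confidence(s, tau):
    """Map similarity to [0.5, 1]: the farther s is from tau, the higher c."""
    return np.where(s >= tau,
                    0.5 + 0.5 * (s - tau) / (1.0 - tau),
                    0.5 + 0.5 * (tau - s) / (tau + 1.0))

def ece(s, tau, y, M=10):
    """Expected Calibration Error with equal-width bins on [0.5, 1], Eqs. (9)-(11)."""
    c = confidence(s, tau)
    correct = np.where(s >= tau, 1, -1) == y
    edges = np.linspace(0.5, 1.0, M + 1)
    total, n = 0.0, len(s)
    for m in range(M):
        lo, hi = edges[m], edges[m + 1]
        in_bin = (c >= lo) & (c <= hi) if m == 0 else (c > lo) & (c <= hi)
        if in_bin.any():
            total += in_bin.sum() / n * abs(correct[in_bin].mean() - c[in_bin].mean())
    return total
```

Because $w$ stays positive and the adjusted angles remain in $[0,\pi]$ for a reasonable fit, the calibrated similarities keep their ordering relative to the calibrated threshold, so the verification accuracy is unchanged while the confidence is rescaled (note the sketch does not explicitly enforce the constraint $w>0$).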
\begin{table*}[htbp] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Accuracy (\%) and Calibration (\%) Performance of Different Models and Losses on the FIW Dataset \label{tab:table1}} \centering \begin{tabular}{c|c|ccccccccccc|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {12}{c|}{Accuracy} & \multirow{2}{*}{ECE} \\ \cline{3-14} &&SS & BB & \ SIBS & FD & MD & FS & MS & GFGD & GMGD & GFGS & GMGS & AVG \\ \hline InfoNCE&ResNet101&82.84&82.06&80.32&76.69&80.59&82.81&76.47&78.10&71.38&71.02&60.34&80.19&22.85\\ InfoNCE&ResNet50&79.00&74.81&75.92&71.82&74.15&78.60&72.10&73.36&70.26&63.67&61.45&75.01&16.51\\ InfoNCE&ResNet34&76.49&73.46&72.99&70.60&72.42&75.88&69.23&67.27&62.83&62.86&56.98&72.87&12.72\\ InfoNCE&VGG16&67.20&66.32&62.27&61.61&63.93&66.90&62.08&67.95&61.71&63.67&56.98&64.68&5.97\\ \hline Triplet&ResNet101&80.49&79.50&78.79&73.34&79.04&80.53&74.34&77.88&77.32&68.98&59.22&77.94&20.93\\ Triplet&ResNet50&78.63&75.80&75.92&72.51&74.05&76.49&67.17&68.40&66.17&66.94&55.31&74.18&16.97\\ Triplet&ResNet34&75.17&73.95&71.29&72.26&71.03&76.26&69.39&69.75&64.68&62.04&59.78&72.86&10.46\\ Triplet&VGG16&66.14&64.80&64.32&63.76&62.13&67.01&64.02&65.69&68.03&66.53&50.84&64.65&7.87\\ \hline ArcFace&ResNet101&80.23&79.36&79.38&75.62&77.52&81.19&73.54&76.75&75.84&71.02&61.45&78.02&19.39\\ ArcFace&ResNet50&76.53&73.13&73.23&72.79&73.63&78.20&70.11&65.91&57.25&71.43&63.13&73.87&16.00\\ ArcFace&ResNet34&74.67&74.06&70.94&71.12&71.00&75.66&67.25&64.55&62.83&58.37&61.45&72.16&10.04\\ ArcFace&VGG16&67.98&67.11&61.86&60.86&63.93&68.00&61.60&61.63&65.43&55.92&58.10&64.83&7.69\\ \hline Softmax&ResNet101&78.51&76.60&75.57&75.48&76.64&81.28&76.57&73.81&67.66&68.98&65.92&77.25&16.24\\ Softmax&ResNet50&75.90&75.21&73.29&69.57&73.49&78.06&70.48&66.37&60.97&77.14&64.25&73.74&15.77\\ Softmax&ResNet34&76.34&74.18&72.47&70.76&72.76&76.75&68.43&65.46&64.31&60.00&59.22&73.06&8.90\\ 
Softmax&VGG16&67.76&67.80&61.45&61.75&64.35&66.32&60.59&60.05&68.40&56.73&56.42&64.71&5.40\\ \hline \end{tabular} \end{table*} \subsection{Datasets and Experimental Settings} \textsl{1) FIW \cite{fiw_1}:} FIW (Families In the Wild) \cite{fiw_1,fiw_2,fiw_3,fiw_4} is the largest visual kinship recognition dataset to date, consisting of over 13,000 facial images from 1,000 families with 11 pairwise kin relationships, which can be divided into three sub-groups: sibling types (i.e., sister-sister, brother-brother, and sister-brother), parent-child types (i.e., father-son, father-daughter, mother-son, and mother-daughter), and grandparent-grandchild types (i.e., grandfather-grandson, grandfather-granddaughter, grandmother-grandson, and grandmother-granddaughter). FIW is a challenging dataset because all the images are captured in the wild with partial occlusion and large variations in background, pose, expression, and illumination. InfoNCE \cite{simclr}, Triplet \cite{facenet}, Softmax \cite{deepface}, and ArcFace \cite{arcface} are four widely used losses in face and kinship verification tasks. We use these four loss functions to train models with different backbones pre-trained on MS-Celeb-1M \cite{msceleb}. We follow the data partitioning and evaluation protocol of RFIW 2021 \cite{fiw_5}. Specifically, we employ RetinaFace \cite{retinaface} for face alignment, and each face is cropped to $112\times112$ pixels. SGD is used as the optimizer with a momentum of 0.9 and a learning rate of 0.0001. \textsl{2) KinFaceW \cite{kinship2}:} The KinFaceW-I and KinFaceW-II datasets are widely used kinship datasets composed of Internet-collected images of public figures and celebrities. Both datasets contain four kin relations: Father-Son, Father-Daughter, Mother-Son, and Mother-Daughter. For these four relations, KinFaceW-I has 156, 134, 116, and 127 pairs of kinship images, respectively, while KinFaceW-II has 250 pairs of images for each relation.
KinFaceW-I differs from KinFaceW-II in that, in most cases, the face pairs in the former are cropped from distinct photos, while those in the latter come from the same photo. Each image in the two datasets is aligned and resized to $64\times64$ pixels for feature extraction. We evaluate two traditional methods and one deep-learning method on the KinFaceW datasets: 1) NRML (LBP) \cite{kinship2}: for each face image, we extract 3776-dimensional uniform-pattern LBP features for kinship verification. 2) NRML (HOG) \cite{kinship2}: we first partition each image into non-overlapping blocks, then extract a 9-dimensional HOG feature for each block and concatenate them into a 2880-dimensional feature vector. 3) InfoNCE (ResNet18): we use the InfoNCE loss to fine-tune a ResNet-18 model pre-trained on MS1MV2 \cite{arcface}. More specifically, the model is optimized using SGD, and the temperature parameter is set to 0.1. In this experiment, we use a five-fold cross-validation strategy in the image-unrestricted setting. \textsl{3) LFW \cite{lfw1,lfw2}:} LFW contains 13,233 face images collected from 5,749 individuals, of whom 1,680 have two or more face images. We use CASIA-WebFace \cite{casia} as the training set to train networks of different depths with the ArcFace loss, and evaluate the verification performance of these models on the LFW dataset. In the experiments, we set the initial learning rate and momentum of the SGD optimizer to 0.1 and 0.9, respectively. We calibrate the confidence of the four models using ASC in a 10-fold cross-validation scheme, with the maximum number of iterations and the learning rate for ASC set to 1000 and 0.01, respectively. \textsl{4) IJB-C \cite{IJB-C}:} The IJB-C (IARPA Janus Benchmark-C) dataset contains 31,334 images and 11,779 videos captured under unconstrained conditions from 3,531 subjects.
There are a total of 23,124 templates, with 19,557 positive matches and 15,638,932 negative matches, enabling performance evaluation at low FAR. In this experiment, we adopt the 1:1 Mixed Verification protocol; that is, a single feature vector is constructed by taking a weighted average of the frames in each template, so that all frames of a video belonging to the same subject have the same cumulative weight as a single still image. We perform face verification on the IJB-C dataset using different deep models trained on MS1MV2 \cite{arcface} with the ArcFace and Triplet loss functions. To perform confidence estimation and calibration, we use five-fold cross-validation with stratified sampling of positive and negative samples, with four of the folds serving as the recalibration set and the remaining one as the test set. \begin{figure*}[!htbp] \centering \includegraphics[width=6.0in]{Fig3} \centering \caption{The trend of verification accuracy (\%) vs. ECE (\%) of different verification models and losses during training on the FIW dataset. } \label{Fig_3} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=6.0in]{Fig4} \centering \caption{Similarity distributions before calibration (a) and after calibration with ASC (b). From top to bottom are results of ResNet101, ResNet50, ResNet34, and VGG16, respectively.} \label{Fig_4} \end{figure*} \subsection{Experimental Results on FIW} In \textcolor{blue}{Table I}, we show the verification accuracy, the weighted average accuracy, and the ECE on the FIW dataset for different models and losses. From these results, we make the following observations: \begin{enumerate} \item For all four loss functions, models with deeper architectures have better feature representation abilities. However, increasing depth has a negative influence on model calibration. \item Compared with the Softmax and ArcFace losses, the InfoNCE and Triplet losses achieve better verification accuracy.
This improvement becomes more noticeable as the model gets deeper; however, the calibration error also increases. \item The ArcFace loss leverages an angular margin penalty to enforce extra intra-class compactness and inter-class discrepancy, achieving better verification performance but a larger calibration error than the Softmax loss. \end{enumerate} In \textcolor{blue}{Fig. 3}, we plot accuracy versus calibration error during the training of ResNet34 and ResNet101, intuitively showing the trend of each model's ECE and verification accuracy under the four loss functions. It can be seen that the ECE of both models generally increases with their verification accuracy. Also, if the model uses a loss that optimizes the feature representation, such as the InfoNCE or Triplet loss, it has a larger ECE at the same accuracy. In addition, we conduct experiments on the FIW dataset to evaluate the effects of the angular scaling calibration. \textcolor{blue}{Table II} shows the ECE and thresholds before and after calibration. Specifically, we adopt LBFGS \cite{lbfgs} as the calibration optimizer with a learning rate of 0.01 and a maximum of 1000 iterations. The calibrated thresholds and calibration parameters ($w$ and $b$) are used to evaluate the ECE on the test data. From \textcolor{blue}{Table II}, we observe that our ASC achieves good calibration across different models and loss functions. We also find that the calibrated threshold tends toward 0, so that positive and negative pairs are equally divided in the cosine similarity interval. \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{Fig5} \centering \caption{Accuracy vs. Confidence plots of different models and losses before and after calibration (with ASC) on the FIW dataset.} \label{Fig_5} \end{figure} \textcolor{blue}{Fig. 4} plots the cosine similarity distributions of different models with the InfoNCE loss before and after calibration on the FIW dataset. We observe that the similarity before calibration is distributed over a narrow interval, leading to poor calibration, whereas the similarity after calibration is distributed more uniformly across similarity levels. \textcolor{blue}{Fig. 5} shows the accuracy-confidence plots of different models with the Triplet and Softmax losses on the FIW dataset before and after calibration. The plots show how well the predictive confidence is calibrated via angular scaling for all models, where perfect calibration corresponds to the line $y=x$. We can see that our ASC obtains well-calibrated predictions, as the recalibrated confidence correctly reflects the prediction accuracy. \textcolor{blue}{Table III} summarizes the average verification accuracy and the average confidence before and after calibration for different models on the three sub-groups of FIW. It can be seen that the confidence before calibration is significantly lower or higher than the actual verification accuracy, while the recalibrated confidence matches the true accuracy of the model well. Therefore, verification models calibrated by ASC can provide reliable confidence estimates to support decision-making in face and kinship verification systems.
\begin{table}[!tbp] \renewcommand\arraystretch{1.2} \scriptsize \caption{ ECE(\%) and Threshold $\tau$ before and after calibration (with ASC) on the FIW Dataset \label{tab:table2}} \centering \begin{tabular}{c|c|c|c|c|c} \hline Loss & Model& ECE & \makecell{ECE\\w/ ASC} & $\tau$ & \makecell{$\tau$\\w/ ASC} \\ \hline InfoNCE&ResNet101&22.85&2.08&0.13&-0.01\\ InfoNCE&ResNet50&16.51&1.81&0.32&-0.03\\ InfoNCE&ResNet34&12.72&2.11&0.37&0.02\\ InfoNCE&VGG16&5.97&1.39&0.39&0.01\\ \hline Triplet&ResNet101&20.93&2.83&0.11&0.02\\ Triplet&ResNet50&16.97&2.83&0.32&-0.07\\ Triplet&ResNet34&10.46&1.87&0.26&0.06\\ Triplet&VGG16&7.87&1.54&0.71&0.08\\ \hline ArcFace&ResNet101&19.39&2.62&0.13&0.07\\ ArcFace&ResNet50&16.00&2.45&0.39&-0.09\\ ArcFace&ResNet34&10.04&3.40&0.14&-0.02\\ ArcFace&VGG16&7.69&2.90&0.20&-0.07\\ \hline Softmax&ResNet101&16.24&2.79&0.13&-0.04\\ Softmax&ResNet50&15.77&2.80&0.64&-0.05\\ Softmax&ResNet34&8.90&3.56&0.17&-0.01\\ Softmax&VGG16&5.40&1.81&0.37&0.01\\ \hline \end{tabular} \end{table} \begin{table*}[!htbp] \renewcommand\arraystretch{1.1} \scriptsize \caption{ Accuracy (\%), Confidence (\%) before and after calibration (with ASC) for different age groups on the FIW dataset \label{tab:table3}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn {3}{c|}{BB, SS, SIBS}& \multicolumn {3}{c|}{FS, MS, FD, MD} & \multicolumn {3}{c}{GFGS, GMGS, GFGD, GMGD} \\ \cline{2-10} &Conf.& Conf. w/ ASC & Acc. & Conf. & Conf. w/ ASC & Acc. & Conf. & Conf. w/ ASC & Acc. 
\\ \hline ResNet101&58.20&81.98&79.80&56.71&78.50&77.01&55.78&75.25&72.89\\ ResNet50&59.02&75.83&76.91&56.41&71.68&72.89&54.49&67.66&65.49\\ ResNet34&63.10&72.88&74.12&62.02&71.09&72.44&60.31&66.52&65.32\\ VGG16&70.34&64.11&65.26&69.20&63.63&64.29&67.86&63.22&64.08\\ \hline \end{tabular} \end{table*} \begin{table*}[!htbp] \renewcommand\arraystretch{1.1} \scriptsize \caption{ ACCURACY (\%), CONFIDENCE (\%), and ECE (\%) before and after calibration (with ASC) for different models on the KinFaceW datasets \label{tab:table4}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{DataSet} & \multicolumn {2}{c|}{\multirow{2}{*}{Methods}} & \multicolumn {3}{c|}{FS} & \multicolumn {3}{c|}{FD} & \multicolumn {3}{c|}{MS} & \multicolumn {3}{c|}{MD} & \multicolumn {3}{c}{Mean} \\ \cline{4-18} &\multicolumn {2}{c|}{}& Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE & Acc. & Conf. & ECE \\ \hline \multirow{6}{*}{KFW-I} & \multirow{2}{*}{\makecell{NRML\\(LBP)}} & w/o ASC &\multirow{2}{*}{81.43}&57.01&27.03&\multirow{2}{*}{69.42}&55.40&19.24&\multirow{2}{*}{67.23}&55.94&18.04&\multirow{2}{*}{72.87}&56.08&19.48&\multirow{2}{*}{72.74}&56.11&20.95 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&79.94&1.97&\multirow{2}{*}{}&68.43&1.39&\multirow{2}{*}{}&64.64&2.49&\multirow{2}{*}{}&71.05&2.70&\multirow{2}{*}{}&71.02&2.14 \\ \cline{2-18} & \multirow{2}{*}{\makecell{NRML\\(HOG)}} & w/o ASC &\multirow{2}{*}{83.68}&57.52&25.91&\multirow{2}{*}{74.64}&56.88&19.35&\multirow{2}{*}{71.56}&56.32&20.26&\multirow{2}{*}{79.96}&57.16&23.69&\multirow{2}{*}{77.46}&56.97&22.30 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&81.10&3.24&\multirow{2}{*}{}&72.20&4.19&\multirow{2}{*}{}&68.40&3.98&\multirow{2}{*}{}&76.02&6.95&\multirow{2}{*}{}&74.43&4.59 \\ \cline{2-18} & \multirow{2}{*}{\makecell{InfoNCE\\(ResNet18)}} & w/o ASC 
&\multirow{2}{*}{83.34}&55.24&27.29&\multirow{2}{*}{82.88}&54.99&27.75&\multirow{2}{*}{81.01}&54.86&25.17&\multirow{2}{*}{85.04}&55.93&28.31&\multirow{2}{*}{83.07}&55.26&27.13 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&82.03&1.47&\multirow{2}{*}{}&81.50&1.86&\multirow{2}{*}{}&79.13&1.97&\multirow{2}{*}{}&83.91&1.25&\multirow{2}{*}{}&81.64&1.64 \\ \hline \multirow{6}{*}{KFW-II} & \multirow{2}{*}{\makecell{NRML\\(LBP)}} & w/o ASC &\multirow{2}{*}{79.20}&54.96&24.16&\multirow{2}{*}{71.60}&54.53&18.34&\multirow{2}{*}{72.20}&54.51&18.90&\multirow{2}{*}{68.40}&54.70&16.10&\multirow{2}{*}{72.85}&54.68&19.38 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&78.65&1.22&\multirow{2}{*}{}&70.88&1.57&\multirow{2}{*}{}&71.82&1.67&\multirow{2}{*}{}&67.25&1.67&\multirow{2}{*}{}&72.15&1.53 \\ \cline{2-18} & \multirow{2}{*}{\makecell{NRML\\(HOG)}} & w/o ASC &\multirow{2}{*}{80.80}&55.75&24.90&\multirow{2}{*}{72.80}&55.14&19.16&\multirow{2}{*}{74.80}&55.41&20.57&\multirow{2}{*}{70.40}&55.24&18.48&\multirow{2}{*}{74.70}&55.39&20.78 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&79.76&1.06&\multirow{2}{*}{}&72.24&1.86&\multirow{2}{*}{}&74.27&1.68&\multirow{2}{*}{}&68.82&1.89&\multirow{2}{*}{}&73.77&1.62 \\ \cline{2-18} & \multirow{2}{*}{\makecell{InfoNCE\\(ResNet18)}} & w/o ASC &\multirow{2}{*}{84.00}&55.37&28.13&\multirow{2}{*}{78.20}&54.50&23.20&\multirow{2}{*}{82.20}&55.15&26.55&\multirow{2}{*}{84.00}&55.16&28.34&\multirow{2}{*}{82.10}&55.05&26.56 \\ \cline{3-3}\cline{5-6}\cline{8-9}\cline{11-12}\cline{14-15}\cline{17-18} && w/ ASC &\multirow{2}{*}{}&83.48&0.60&\multirow{2}{*}{}&78.16&0.81&\multirow{2}{*}{}&80.55&1.54&\multirow{2}{*}{}&83.17&1.06&\multirow{2}{*}{}&81.34&1.00 \\ \hline \end{tabular} \end{table*} \subsection{Experimental Results on KinFaceW} In \textcolor{blue}{Table IV}, we report the accuracy, 
the mean confidence, and the ECE before and after calibration on the KinFaceW-I and KinFaceW-II datasets. We observe that models on the same-gender kin relationships (FS, MD) achieve higher verification accuracy and confidence than on the different-gender ones (FD, MS), with the gain in accuracy being more significant. For NRML, HOG features achieve better verification accuracy and confidence than LBP, but with a larger ECE. Compared to the traditional verification methods, the deep-learning method is less confident, with the smallest confidence and largest ECE on most kin relationships. Moreover, the accuracy fluctuates randomly across the bins due to the small size of the KinFaceW datasets and the limited number of samples in each bin. Nevertheless, the models calibrated by our ASC consistently show improved calibration performance in terms of both per-relation ECE and mean ECE. \begin{figure}[!htbp] \centering \includegraphics[width=3.5in]{Fig6} \centering \caption{Accuracy vs. Confidence plots before and after calibration (with ASC) for different models on the LFW dataset.} \label{Fig_6} \end{figure} \begin{table}[!h] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Accuracy (\%), Confidence (\%), and ECE (\%) before and after calibration (with ASC) on the LFW dataset \label{tab:table5}} \centering \begin{tabular}{c|c|c|c|c|c} \hline Model &Acc.&Conf.&\makecell{Conf.\\w/ ASC}&ECE&\makecell{ECE\\w/ ASC} \\ \hline ResNet101&99.55&70.65&98.79&28.48&0.78\\ ResNet50&99.23&71.68&98.33&27.84&0.93\\ ResNet34&98.88&73.14&97.81&25.43&1.13\\ VGG16&93.57&70.89&93.31&24.67&1.54\\ \hline \end{tabular} \end{table} \begin{table*}[!ht] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Threshold $\tau$, TAR (\%), Accuracy (\%), Confidence (\%), ECE (\%), and AUC (\%) before calibration on the IJB-C dataset \label{tab:table6}} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {5}{c|}{FAR=1e-2} & \multicolumn {5}{c|}{FAR=1e-4} &
\multirow{2}{*}{AUC} \\ \cline{3-12} &&$\tau$ & TAR & Acc. & Conf. & ECE & $\tau$ & TAR & Acc. & Conf. & ECE& \\ \hline \multirow{3}{*}{ArcFace}& ResNet101 & 0.03&97.17&99.92&65.54&34.89&0.22&93.92&99.94&70.76&29.64&99.25 \\ \cline{2-13} & ResNet50 & -0.17&96.69&99.92&68.62&31.36&0.09&90.16&99.93&75.47&23.97&99.25 \\ \cline{2-13} & ResNet34 & -0.20&96.02&99.89&71.99&27.98&0.02&90.56&99.93&77.56&21.93&99.24 \\ \hline Triplet&ResNet50& 0.15&93.35&99.71&64.73&35.10&0.28&86.31&99.90&68.31&31.36&99.07 \\ \hline \end{tabular} \end{table*} \begin{table*}[!ht] \renewcommand\arraystretch{1.2} \scriptsize \caption{Accuracy (\%), Confidence (\%), and ECE (\%) after calibration (with ASC) on the IJB-C dataset \label{tab:table7}} \centering \begin{tabular}{c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Loss} & \multirow{2}{*}{Model} & \multicolumn {3}{c|}{FAR=1e-2} & \multicolumn {3}{c}{FAR=1e-4} \\ \cline{3-8} && Acc. & Conf. & ECE & Acc.& Conf. & ECE \\ \hline \multirow{3}{*}{ArcFace}& ResNet101 &99.92&98.77&1.30&99.94&99.52&0.47 \\ \cline{3-8} & ResNet50 &99.92&98.80&1.18&99.93&99.46&0.53\\ \cline{3-8} & ResNet34 &99.89&98.06&1.90&99.93&99.36&0.63 \\ \hline Triplet&ResNet50&99.71&97.72&2.24&99.90&97.95&2.19 \\ \hline \end{tabular} \end{table*} \begin{table*}[!hb] \renewcommand\arraystretch{1.2} \scriptsize \caption{ Comparison of ECE (\%) for different post-calibration methods on four datasets \label{tab:table8}} \centering \begin{tabular}{c|c|c|c|c|c|c} \hline Dataset & Loss (Method) & Model (Feature) & Uncalibrated & Hist. 
Binning & Isotonic & ASC \\ \hline \multirow{16}{*}{FIW} & \multirow{4}{*}{InfoNCE}&ResNet101&22.85&2.47&2.51&\textbf{2.08}\\ \cline{3-7} &&ResNet50&16.51&\textbf{1.57}&1.62&1.81\\ \cline{3-7} &&ResNet34&12.72&2.89&2.60&\textbf{2.11}\\ \cline{3-7} &&VGG16&5.97&2.14&2.17&\textbf{1.39}\\ \cline{2-7} & \multirow{4}{*}{Triplet}&ResNet101&20.93&\textbf{1.22}&1.36&2.83\\ \cline{3-7} &&ResNet50&16.97&2.83&\textbf{2.82}&2.83\\ \cline{3-7} &&ResNet34&10.46&2.80&2.57&\textbf{1.87}\\ \cline{3-7} &&VGG16&7.87&\textbf{1.51}&4.70&1.54\\ \cline{2-7} &\multirow{4}{*}{ArcFace}&ResNet101&19.39&3.14&2.95&\textbf{2.62}\\ \cline{3-7} &&ResNet50&16.00&4.75&4.78&\textbf{2.45}\\ \cline{3-7} &&ResNet34&10.04&3.45&3.76&\textbf{3.40}\\ \cline{3-7} &&VGG16&7.69&1.66&\textbf{1.40}&2.90\\ \cline{2-7} & \multirow{4}{*}{Softmax}&ResNet101&16.24&1.37&\textbf{1.20}&2.79\\ \cline{3-7} &&ResNet50&15.77&2.92&2.85&\textbf{2.80}\\ \cline{3-7} &&ResNet34&8.90&3.82&3.65&\textbf{3.56}\\ \cline{3-7} &&VGG16&5.40&2.05&1.84&\textbf{1.81}\\ \hline \multirow{3}{*}{KinFaceW-I}& \multirow{2}{*}{NRML} &LBP &20.95&\textbf{1.90}&3.06&2.14 \\ \cline{3-7} &&HOG &22.30&4.98&\textbf{4.56}&4.59\\ \cline{2-7} &InfoNCE& ResNet18& 27.13&1.69&2.92&\textbf{1.64}\\ \hline \multirow{3}{*}{KinFaceW-II}& \multirow{2}{*}{NRML} &LBP &19.38&\textbf{1.08}&1.34&1.53\\ \cline{3-7} &&HOG &20.78&2.22&\textbf{1.37}&1.62\\ \cline{2-7} &InfoNCE& ResNet18& 26.56&2.35&2.22&\textbf{1.00}\\ \hline \multirow{4}{*}{LFW}& \multirow{4}{*}{ArcFace}&ResNet101&28.48&1.22&\textbf{0.16}&0.78\\ \cline{3-7} &&ResNet50&27.84&0.98&\textbf{0.70}&0.93\\ \cline{3-7} &&ResNet34&25.43&2.35&1.76&\textbf{1.13}\\ \cline{3-7} &&VGG16&24.67&2.56&1.58&\textbf{1.54}\\ \hline \multirow{4}{*}{IJB-C}& \multirow{3}{*}{ArcFace}&ResNet101&29.64&1.18&0.97&\textbf{0.47} \\ \cline{3-7} &&ResNet50&23.97&1.49&1.00&\textbf{0.53}\\ \cline{3-7} &&ResNet34&21.93&1.72&1.14&\textbf{0.63}\\ \cline{2-7} &Triplet&ResNet50&31.36&2.24&\textbf{2.07}&2.19\\ \hline \end{tabular} \end{table*} 
\begin{figure*}[!htbp] \centering \includegraphics[width=5.5in]{Fig7} \centering \caption{ Visualization of confidence calibration by our ASC method. From top to bottom are examples from FIW, LFW, KinFaceW-I, and KinFaceW-II, respectively. The captions below each sample pair show the confidence adjustment before and after calibration. For better visualization, sample pairs with different confidence levels are shown in three groups: \textbf{low} confidence (a), \textbf{medium} confidence (b), and \textbf{high} confidence (c).} \label{Fig_7} \end{figure*} \subsection{Experimental Results on LFW} To further evaluate our method on the face verification task, we also conduct experiments on the LFW dataset. As can be seen from \textcolor{blue}{Table V}, deeper architectures may achieve better verification accuracy, but their calibration performance gets worse, consistent with our observations on FIW. Moreover, \textcolor{blue}{Fig. 6} plots the correlation between the confidence and accuracy of the four models before and after calibration. We observe that ASC helps improve the calibration of all four models. Due to the high accuracy of ResNet34, ResNet50, and ResNet101 on the LFW dataset, the calibration methods tend to map the predictive confidence to a high level to match the expected accuracy. However, as shown in \textcolor{blue}{Fig. 6}, the calibrated confidence of the four models does not degenerate to a constant solution; instead, it matches the prediction accuracy well at different levels. \subsection{Experimental Results on IJB-C} We present in \textcolor{blue}{Table VI} the threshold $\tau$, the TAR (True Accept Rate), the accuracy, the mean confidence, and the ECE before calibration on the IJB-C dataset. The threshold increases as the FAR (False Accept Rate) drops from 1e-2 to 1e-4, resulting in a lower TAR and ECE. Also, it can be seen that the verification accuracy in all settings is above 99.7\%.
The four groups of experiments show that the ECE decreases with decreasing TAR, which is consistent with the correlation between ECE and accuracy. Deeper models also obtain a larger ECE while achieving a better TAR, in line with the observations from the previous experiments. Moreover, evaluating the effect of the loss function on the ECE reveals that ResNet50 trained with the Triplet loss has a larger ECE than the same model trained with ArcFace, even surpassing the ResNet101 model trained with ArcFace. \textcolor{blue}{Table VII} presents the accuracy, the mean confidence, and the ECE with ASC on the IJB-C dataset. The calibrated confidence is closer to the true accuracy, and the ECE is closer to zero. This shows that our proposed ASC yields good calibration performance for models with varying numbers of layers and loss functions, and it maintains a small calibration error even when the threshold changes. \subsection{Comparison with Previous Post-calibration Methods} To validate the superiority of our ASC for confidence calibration in face and kinship verification, two widely used post-calibration methods are chosen for comparison: Histogram binning \cite{hist} and Isotonic regression \cite{iso}. While there are other post-calibration methods, such as Temperature scaling \cite{uncertainty_3}, Platt scaling \cite{platt}, Bayesian binning \cite{bayesian_binning}, and Matrix and vector scaling, they cannot be directly applied to confidence calibration for face and kinship verification tasks. For a fair comparison, we adopt the same experimental setting for all compared methods. \textbf{Histogram binning} is a non-parametric calibration method. In this paper, Histogram binning first divides the cosine similarity range evenly into $M$ bins $B_1$, $B_2$, ..., $B_M$, with bin boundaries $-1=a_{1}<a_{2}<...<a_{M}<a_{M+1}=1$, where bin $B_m$ refers to $\left( a_{m}, a_{m+1}\right]$.
We assign each $B_m$ a calibrated score $\eta_m$, which is optimized by: \begin{equation} \label{equation_12} \mathop{\arg\min}\limits_{\eta_{1},...,\eta_{M}}\sum\limits_{m=1}^{M}\sum\limits_{i=1}^{N} 1(a_{m}<s_{i} \leq a_{m+1})(\eta_{m}-y_{i})^2 \end{equation} where $s_i$ and $y_{i}$ are the uncalibrated similarity score and the label of the $i$th sample pair, respectively. If the $i$th sample falls into bin $B_m$, its calibrated similarity is set to $\eta_m$. Additionally, the threshold $\tau'$ is updated to $\eta_t$, where $\tau \in \left( a_{t}, a_{t+1}\right]$. \textbf{Isotonic regression} is another commonly used non-parametric calibration method, which calibrates similarity scores by learning a piecewise constant function $f$, that is, $s'=f(s)$, where $f(s_{i})\leq f(s_{j})$ if $s_{i} \leq s_{j}$. The goal of Isotonic regression is to minimize the squared loss $\sum_{i=1}^{N}(f(s_{i})-y_{i})^2$, learning both the interval boundaries and the calibration score of each interval. We employ the PAVA \cite{pava} algorithm to optimize the function $f$, and the calibrated threshold $\tau'$ is set to $f(\tau)$. \textcolor{blue}{Table VIII} compares the three calibration methods (ASC, Histogram binning, and Isotonic regression) on the FIW, KinFaceW, LFW, and IJB-C datasets in terms of the ECE metric. We observe that ASC achieves the best calibration performance in most cases. Our method is also easy to implement, as it involves only two parameters, which can be efficiently learned by most gradient-based optimizers. \subsection{Visualization Analysis} In \textcolor{blue}{Fig. 7}, we present confidence calibration results on sample pairs from the four datasets. We divide the sample pairs into three groups according to the recalibrated confidence value: low confidence, medium confidence, and high confidence.
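Returning briefly to the histogram-binning baseline of Eq. (12): its minimization has a closed form, since the squared error is minimized by setting each $\eta_m$ to the mean label of the pairs whose similarity falls in bin $B_m$. A minimal NumPy sketch of this baseline (our own illustrative code and naming, not the implementation used in the experiments):

```python
import numpy as np

def histogram_binning_fit(s, y, M=15):
    """Closed-form minimizer of Eq. (12): eta_m is the mean of the labels
    y_i in {-1, +1} whose similarity s_i falls in bin (a_m, a_{m+1}]."""
    edges = np.linspace(-1.0, 1.0, M + 1)
    # bin index such that edges[idx] < s <= edges[idx+1] (s = -1 clipped into bin 0)
    idx = np.clip(np.searchsorted(edges, s, side="left") - 1, 0, M - 1)
    eta = np.array([y[idx == m].mean() if np.any(idx == m) else 0.0
                    for m in range(M)])
    return edges, eta

def histogram_binning_apply(s, tau, edges, eta):
    """Replace each similarity (and the threshold) by its bin's eta value."""
    M = len(eta)
    idx = np.clip(np.searchsorted(edges, s, side="left") - 1, 0, M - 1)
    t = int(np.clip(np.searchsorted(edges, tau, side="left") - 1, 0, M - 1))
    return eta[idx], eta[t]
```

Unlike ASC, the calibrated scores are piecewise constant, so all similarities in a bin collapse to one value; the threshold is mapped to its own bin's $\eta_t$ as described above.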
As expected, the visualization results show that samples with low confidence are usually accompanied by facial occlusion, pose variation, age discrepancy, or image blurring, whereas samples with high confidence tend to share the same pose, background, and illumination conditions. Consistent with the previous experiments, we observe that models usually make incorrect predictions on low-confidence samples and correct predictions on high-confidence samples. Samples with medium confidence usually exhibit similar issues to those with low confidence, but the facial region is complete, so there is a higher probability that they are correctly predicted by the face or kinship verification models. In addition, ASC calibrates both over-confident samples (the first row) and under-confident samples (the second, third, and fourth rows) well, bringing the calibrated confidence close to the average accuracy. All in all, our proposed confidence calibration method via angular scaling requires no retraining of the verification model while maintaining its verification performance. \section{Conclusions} In this work, we propose a simple yet effective confidence measure for face and kinship verification tasks, allowing any off-the-shelf verification model to estimate its decision confidence in an efficient and flexible way. We further introduce a confidence calibration method based on angular scaling, which is retraining-free and accuracy-preserving. We perform comprehensive experiments on four widely used face and kinship verification datasets to investigate the calibration of popular verification models, and these experiments validate the effectiveness of our ASC. Experimental comparisons with two popular post-calibration methods demonstrate that our ASC achieves superior calibration performance.
In future work, incorporating uncertainty modeling into confidence estimation and confidence calibration appears to be an interesting and promising direction for face and kinship verification.
\section{Coarse-graining via the fluctuation-dissipation theorem} A fluctuation-dissipation theorem of the second kind (FDT), according to the terminology of Kubo \cite{KTH85}, gives a one-to-one relationship between the noise and the friction properties of a diffusion process, and is formulated even far from equilibrium \cite{hcO05}. Our first aim is to generalize the FDT to the class of Markov processes. The motivation comes from a theory of coarse-graining, and our second aim is to generalize this, too. The goal of a theory of \emph{coarse-graining} is to derive more macroscopic from more microscopic models of a physical system. A general class of nonequilibrium-thermodynamical models can be written in the form of metriplectic systems \cite{pjM86} or the GENERIC \cite{GO97,OG97}, where the dynamics has the mathematical structure of a ``force'' times a ``phenomenological matrix'', which is directly related to the ``cometric'' field in the language of metriplectic systems, or the ``friction matrix'' in the language of the GENERIC. In this framework, the goal of coarse-graining is to resolve the mathematical structure of a macroscopic model in terms of the properties of the microscopic one, that is: (i) to find the thermodynamic potential~$s$ giving rise to the ``force'' and (ii) to compute the friction matrix~$M$. The classical setup is shown in \figurename\ref{fig:FDT}: \begin{itemize} \item A ``microscopic'' model (level~2), identified by the variables~$y$, shows a separation of time scales, such that the dynamics may be decomposed into a ``slow'' and a ``fast'' component. The first challenge is to identify a set of ``slow'' variables $x = \Pi(y)$ at the ``macroscopic'' level~1. \item One considers a stochastic extension of level 1, called ``level $1^+$'', which is assumed to be a diffusion process controlled by an extensive parameter~$n$. 
The noise term reproduces, in an approximate fashion, the fast dynamics that has been neglected in the transition operated by the map~$\Pi$. In the deterministic limit, when the parameter~$n$ is very large, fluctuations vanish and we recover the macroscopic model at level~$1$. \item The entropy function is found by looking at the stationary distribution of the diffusion process, and the friction matrix is defined by the diffusion tensor through $2 M = D$. The latter definition expresses the FDT, which is a one-to-one relation between the diffusion matrix and the drift term of a diffusion equation, as a consequence of detailed balance with respect to an invariant measure of the Boltzmann form $e^{n s}$. This implies that the friction matrix can be computed by simulation of the microscopic dynamics and estimation of the second moments of the approximating stochastic process, which is the basis of \emph{Green-Kubo relations} \cite{KTH85,hcO05}. \end{itemize} This scheme has been advocated by various authors in the framework of the \emph{projection-operator} technique \cite{rwZ01,hG82,hcO05,pE04,EV02}, which allows us to derive the diffusion process at the level $1^+$ and the expression of the friction matrix in terms of the microscopic data in a formal manner. In the literature \cite{PS08,ZHS16} there exist other mathematical techniques that also produce diffusion processes as effective dynamics and lead to similar conclusions. Many fluctuating systems, however, are not well approximated by diffusion processes, but require more general Markov processes \cite{FVEW15,DLLN16}. A typical example is represented by chemical reactions, which are characterized by rare and large events and are described by Markov jump processes. 
These are substantially different from diffusion processes in that the latter evolve continuously in time as infinitesimally small movements in state space, while -- for the former -- it is always possible to find a time scale at which the dynamics appears to consist of sudden jumps at discrete instants of time. As announced before, the second aim of this work is to extend the above scheme of coarse-graining to the setting where fluctuations are assumed to take the form of general Markov processes, and the generalized FDT serves exactly this purpose. In the context of Markov processes, the generalized FDT gives friction no longer in terms of a matrix, but in terms of a dissipation potential \cite{dgbE72,mG93,CV90}. The mathematical ingredients that we need are (i) \emph{generalized gradient flows} \cite{MPR14} or the (purely dissipative) GENERIC \cite{GO97} to formulate friction through dissipation potentials, and (ii) large-deviation theory \cite{hT09} to characterize fluctuations. Following \cite{MPR14}, we identify a correspondence between Markov processes describing fluctuations and the generalized gradient structures of their deterministic limit: we call this connection a generalized fluctuation-dissipation theorem of the second kind (generalized FDT). Thanks to the powerful tools of large deviations, in particular a numerical implementation of the Feng-Kurtz scheme \cite{FK06}, we propose a novel coarse-graining method that aims at computing dissipation potentials. We test the newly devised method on the example of a very simple chemical reaction. All constructions are restricted to purely dissipative systems and Markov processes with detailed balance. The proper extension including reversible dynamics has not been established yet and represents one of the major open issues.
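To make the jump picture concrete, a Gillespie-type simulation of an assumed reversible isomerization $A \rightleftharpoons B$ (illustrative rate constants, not a system studied in this work) can be sketched as follows:

```python
import random

def gillespie_ab(n_a, n_b, k1, k2, t_max, seed=0):
    """Simulate A <-> B as a Markov jump process (Gillespie's direct method).

    The state changes only by sudden unit jumps at random instants of time,
    in contrast with the continuous paths of a diffusion process.
    """
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, n_a)]
    while t < t_max:
        r_fwd, r_bwd = k1 * n_a, k2 * n_b  # propensities of A->B and B->A
        r_tot = r_fwd + r_bwd
        if r_tot == 0.0:
            break
        t += rng.expovariate(r_tot)        # exponential waiting time
        if rng.random() < r_fwd / r_tot:   # pick which reaction fires
            n_a, n_b = n_a - 1, n_b + 1
        else:
            n_a, n_b = n_a + 1, n_b - 1
        path.append((t, n_a))
    return path

path = gillespie_ab(n_a=80, n_b=20, k1=1.0, k2=1.0, t_max=5.0)
```

For large total particle numbers the fraction of $A$ concentrates around the deterministic kinetics, while at small numbers the discrete jumps remain clearly visible.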
Before giving our reformulation of an FDT, we introduce the concept of a generalized gradient flow in Sec.~\!\ref{sec:GGF} and, in Sec.~\!\ref{sec:LD}, we give a short intuitive account of large-deviation theory and its use in statistical mechanics. In Sec.~\!\ref{sec:FDT}, we first illustrate the usual formulation of an FDT in the language of gradient flows and large deviations, thus providing the starting point for the intended generalization. Then, we formulate the generalized FDT for Markov processes and generalized gradient flows. In Sec.~\!\ref{sec:CME}, we study an example in the context of chemical reactions with both analytic and numerical instruments. Our conclusions and perspectives may be found in Sec.~\!\ref{sec:conclusions}. \begin{figure*}[t] \begin{tikzpicture}[node distance=1.75cm] \tikzstyle{every node}=[font=\normalsize, fill=white] \draw[very thick] (0, 0) -- (5, 0) node[pos=.5, above] {$\dot{y}_t = \operatorname{slow}(y_t) + \operatorname{fast}(y_t)$} node[at start] (A) {2} node[right] {microscopic level}; \draw[very thick] (0, 5) -- (6, 5) node[pos=.49, above] {$\overbrace{\dot{x}_t = M_{x_t} \mathrm{d} s_{x_t}}^\text{slow}$} node[at start] (B) {1} node[right] {macroscopic level} node[midway, below, fill=gray!10] {FDT: $2 M_x := D(x)$}; \draw[very thick] (1.9, 3.25) -- (11, 3.25) node[midway, above] {$\mathrm{d} X^n_t = \dfrac{1}{2} D(X^n_t) \mathrm{d} s_{X^n_t} \, \mathrm{d} t + n^{-1/2} B(X^n_t) \diamond \mathrm{d} W_t$} node[at start] (C) {$1^+$} node[right] {macroscopic + fluctuations} node[midway, below] {$D(x) = B(x) B(x)^T = n \lim\limits_{\tau \to 0} \dfrac{1}{\tau} \mathbb{E}\!\left[ \left( X^n_\tau - x \right)^2 \Big| X^n_0 = x \right]$}; \draw[->] (A) -- (B) node[pos=.49, right] {\footnotesize{$x = \Pi(y)$}}; \draw[->] (A) -- (C); \draw[->] (C) -- (B) node[midway, above, sloped] {\footnotesize{$n \to \infty$}}; \end{tikzpicture} \caption{Coarse-graining via the FDT. 
A microscopic level of description (2) is described by the variables $y$ and its dynamics may be decomposed into a slow and a fast component. Level $1^+$ is a stochastic approximation of level $2$: its dynamics is governed by an SDE that has $e^{n s}$ as a stationary distribution. At the macroscopic level (1), the reduced set of variables $x$ accounts for the slow dynamics only, and the dynamics is a gradient flow where the friction matrix is defined via the FDT. Hence, the friction matrix $M$ may be calculated via the evaluation of the diffusion coefficient $D$, which is done through the estimation of a second moment of the process: this procedure is the basis of the Green-Kubo relations.} \label{fig:FDT} \end{figure*} \section{Dissipation: generalized gradient flows\label{sec:GGF}} Many purely dissipative systems may be expressed in the mathematical language of \emph{generalized gradient flows}, a nonlinear generalization of \emph{gradient flows}. Such structures have recently been attracting growing interest among mathematicians and physicists. The mathematician uses them to prove existence and stability of solutions~\cite{AGS08}, or convergence of evolution equations in the limit of some parameter~\cite{aM16a}. The physicist benefits from geometric structures that express thermodynamics \cite{pjM86,GO97,OG97,gpB14}. A \emph{(standard) gradient structure} on the space $\mathcal{X}$ is a pair $\left( M, s \right)$, where $s \colon \mathcal{X} \to \mathbb{R}$ is a smooth function and $M$ is a symmetric and non-negative definite two-tensor field on $\mathcal{X}$, which is called a \emph{cometric} in \cite{pjM86} and a \emph{friction matrix} in \cite{hcO05}. A gradient structure induces the evolution equation \begin{equation}\label{GF} \dot{x}_t = M_{x_t} \mathrm{d} s_{x_t} \, , \end{equation} which we call a \emph{(standard) gradient flow}.
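As an elementary illustration (our own, not drawn from the cited works), take $\mathcal{X} = \mathbb{R}$, the driving function $s_x = -x^2/2$, and a constant friction coefficient $M_x = m > 0$. Eq.~\!\eqref{GF} then reads
\begin{equation*}
\dot{x}_t = -m \, x_t \, , \qquad x_t = x_0 \, e^{-m t} \, ,
\end{equation*}
an exponential relaxation toward the maximizer $x = 0$ of $s$, along which $s$ increases monotonically.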
In the framework of nonequilibrium thermodynamics, gradient structures find inspiration in the linear relationships between fluxes and forces proposed by Onsager~\cite{lO31a}. When the friction matrix is directly related to the phenomenological (or Onsager) matrix, symmetry is a manifestation of Onsager's reciprocal relations, and positive semidefiniteness expresses the non-negativity of the entropy production. However, Eq.~\!\eqref{GF} has been shown to accommodate several nonlinear force-flux relations, too, such as chemical reactions~\cite{aM11,hcO15}. \emph{Generalized gradient structures}~\cite{MPR14} were advanced in various contexts, prompted by considerations of mathematical structure~\cite{DGMT80,CV90,eDG93}, and geometrical and physical meaning~\cite{mG93,MPR15}, and chemical reactions are the prototypical example \cite{mG12}. As outlined in Sec.~\!\ref{sec:CME}, such structures find a further motivation in the light of the generalized FDT, where they arise from the properties of some underlying level of description through the form of the fluctuations. The generalization is based on the observation that Eq.~\!\eqref{GF} can also be written as \begin{equation}\label{GGF} \boxed{\dot{x}_t = \partial_\xi \Psi^*_{x_t}\!(\mathrm{d} s_{x_t})} \, , \end{equation} where the \emph{dissipation potentials} \footnote{The quadratic form $v \cdot M_x^{-1} v$, $M_x$ possibly being degenerate, does not exist in the ordinary sense.
In the context of convex analysis, however, it should be interpreted as $\xi \cdot M_x \xi$ if $v \in \mathcal{R}(M_x)$ and $M_x \xi = v$, and $+ \infty$ otherwise, where $\mathcal{R}$ is the range.} \begin{equation*} \Psi_x(v) := \dfrac{1}{2} v \cdot M_x^{-1} v \quad \text{ and } \quad \Psi^*_x(\xi) := \dfrac{1}{2} \xi \cdot M_x \xi \end{equation*} are dual to each other in the sense of Legendre-Fenchel transforms \cite{rtR70,hT14}: \begin{subequations} \begin{align}\label{dual} \Psi^*_x(\xi) &= \sup\limits_v \left[ \xi \cdot v - \Psi_x(v) \right] \qquad \text{and} \\ \Psi_x(v) &= \sup\limits_\xi \left[ \xi \cdot v - \Psi^*_x(\xi) \right] \, . \end{align} \end{subequations} A \emph{generalized gradient flow} is of the form \eqref{GGF}, but with $\Psi$ and $\Psi^*$ not necessarily quadratic. More precisely, \begin{center} \noindent\fbox{\parbox{.92\linewidth}{the pair $\left( \Psi, s \right)$ is a \emph{generalized gradient structure} (GGS) on $\mathcal{X}$ if, for all $x \in \mathcal{X}$, \begin{enumerate} \item $\Psi_x(v)$ is convex in the variable $v$, \item $\Psi_x(0) = 0$, \item $\min\limits_v \Psi_x(v) = 0$. \end{enumerate}\vspace{-3.5mm}}} \end{center} A dissipation potential is called \emph{symmetric} if $\Psi_x(v)=\Psi_x(-v)$ for all $(x, v)$. It can be verified that $\Psi^*$ inherits exactly the same properties and that, when symmetry is satisfied, properties 1 and 2 imply 3. There is another formulation of the generalized gradient flow \eqref{GGF}, as a minimization problem, that is particularly useful to our work. Indeed, from Eq.~\!\eqref{dual}, the \emph{Fenchel-Young inequality} follows, \begin{equation} \Psi_{x}(v) + \Psi^*_{x}(\xi) - \xi \cdot v \geq 0 \, , \end{equation} and this inequality holds with an equal sign when $v$ is a solution $\bar{v}(\xi)$ of the maximization problem in Eq.~\!\eqref{dual}.
Then, let us define the function \begin{equation}\label{FFunction} \mathcal{F}(x, v) := \Psi_{x}(v) + \Psi^*_{x}\big(\mathrm{d} s_x\big) - \mathrm{d} s_x \cdot v \, , \end{equation} which is convex in its second argument and, by the Fenchel-Young inequality, is always non-negative. \begin{center} \noindent\fbox{\parbox{.92\linewidth}{The generalized gradient flow \eqref{GGF} has the equivalent characterization \begin{equation}\label{evolution} \mathcal{F}(x_t, \dot{x}_t) = 0 \, , \end{equation}\vspace{-7mm}}} \end{center} namely, the trajectories $x \colon [0, T] \to \mathcal{X}$ minimize $\mathcal{F}$ at value zero. This formulation also extends Eq.~\!\eqref{GGF} to the case of non-differentiable dissipation potentials. Now we are able to make two important remarks on the definition of a GGS: first, conditions 2 and 3 imply that stationary points of $s$ are also stationary solutions of the evolution equation \eqref{evolution}; furthermore, $s$ is a Lyapunov function of the evolution because, along a solution $x \colon [0, T] \to \mathcal{X}$ of Eq.~\!\eqref{evolution}, \begin{equation*} \dfrac{\mathrm{d} (s \circ x)(t)}{\mathrm{d} t} = \mathrm{d} s_{x_t} \cdot \dot{x}_t = \Psi_{x_t}(\dot{x}_t) + \Psi^*_{x_t}\!\big(\mathrm{d} s_{x_t}\big) \geq 0 \, , \end{equation*} since $\Psi$ and $\Psi^*$ are both non-negative. These two features constitute essential reasons for the use of such structure in thermodynamics, where the driving function $s$ is identified with the thermodynamic entropy and the dissipation potential provides the relationship between nonequilibrium forces and fluxes. \section{Fluctuations: the theory of large deviations\label{sec:LD}} In the previous section we have introduced a mathematical structure that expresses dissipation. The trajectories of the system minimize the function $\mathcal{F}$ at every instant of time. 
In this section, we study fluctuations around trajectories, and realize that a similar minimization feature arises in the framework of large-deviation theory. \subsection{Three notions of convergence} In order to describe fluctuations, we use random variables and stochastic processes on $\mathcal{X}$. In particular, since we are interested in systems typical of statistical mechanics, characterized by many degrees of freedom, we consider sequences of random variables and stochastic processes indexed by some large parameter $n$. We will use the symbol $X^n$ to denote both sequences of random variables and of stochastic processes: the latter, indeed, can be seen as random variables taking values in some space of curves in $\mathcal{X}$. In the limit $n \to \infty$, different limit theorems, corresponding to distinct notions of convergence, come into play. \paragraph{The law of large numbers.} If the probability distribution concentrates onto a single point $z \in \mathcal{X}$, we say that we have a \emph{law of large numbers}, that is \begin{equation} X^n \to z \qquad \text{almost surely as } n \to \infty \, . \end{equation} The point $z$ is called the \emph{deterministic limit} of $X^n$. When $X^n$ is a stochastic process, $z$ is also called the \emph{deterministic evolution}. \paragraph{The central limit theorem.} Now, let us suppose we have a law of large numbers, consider a \emph{fluctuation} \begin{equation*} X^n - z \, , \end{equation*} and rescale it by a factor $n^{1/2}$: \begin{equation}\label{CLT} W^n := n^{1/2} \left( X^n - z \right) \, . \end{equation} When a central limit theorem holds, it tells us that $W^n$ converges in distribution to a normal random variable. This represents a first characterization of fluctuations around the deterministic limit, one that describes small, relatively probable deviations. 
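As a hedged numerical sketch of this scaling (fair coin flips, an illustration of ours rather than an example from the text): with $X^n$ the empirical mean of $n$ fair coin flips and $z = 1/2$, the spread of the rescaled fluctuation \eqref{CLT} stabilizes as $n$ grows:

```python
import random

def rescaled_fluctuation_std(n, trials=4000, seed=1):
    """Empirical standard deviation of W^n = sqrt(n) * (X^n - 1/2),
    where X^n is the mean of n fair coin flips; the CLT predicts
    that it approaches the Bernoulli standard deviation 1/2."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        samples.append(n ** 0.5 * (mean - 0.5))
    mu = sum(samples) / trials
    return (sum((w - mu) ** 2 for w in samples) / trials) ** 0.5

# The spread of W^n is essentially independent of n (about 0.5 here),
# even though the unscaled fluctuation X^n - 1/2 shrinks like n^(-1/2).
spreads = {n: rescaled_fluctuation_std(n) for n in (16, 64, 256)}
```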
\paragraph{Large deviations.} A further characterization is provided by the theory of \emph{large deviations}, which studies untypical occurrences contained in the tail of the distribution. As $n \to \infty$ the probability of such an occurrence tends to zero, and large-deviation theory searches for a decay of the form \begin{equation}\label{LDP} \mathbb{P} \!\left( X^n \approx x \right) \asymp e^{- n I(x)} \, , \end{equation} which is called a \emph{large deviation principle} (LDP) and reads: the probability of the random variable $X^n$ to be close (here vaguely denoted by ``$\approx$'' ) to some $x \in \mathcal{X}$ decays exponentially with a rate given by $n$ times a \emph{rate function}. A rate function is a lower-semicontinuous function $I \colon \mathcal{X} \to [0, \infty]$. The symbol ``$\asymp$'' defines the notion of large-deviation convergence for random variables: a detailed account of the mathematical theory can be found in the book \cite{DZ10}, and an excellent presentation for physicists is contained in the review article \cite{hT09}. Given a certain physical setup represented by a sequence of random variables, the convergence in the large-deviation sense is a result that needs to be proven with the appropriate mathematical tools. We give a short account of one of these tools at the end of this section and use it in our numerical experiment. We remark that, whenever the rate function has a unique minimum $0$ at $z \in \mathcal{X}$, the LDP \eqref{LDP} automatically yields a (strong) law of large numbers. Indeed, this property implies that \begin{equation*} X^n \to z \qquad \text{almost surely as } n \to \infty \, . \end{equation*} Moreover, in good cases, the quadratic approximation of the rate function around $z$ reproduces the central limit theorem \cite{wB93}. Similar definitions exist for stochastic processes. Now, consider curves and stochastic processes in $\mathcal{X}$. 
Then, a large-deviation principle reads \begin{equation*} \mathbb{P} \!\left( \left. X^n_t \right\rvert_{[0, T]} \approx \left. x_t \right\rvert_{[0, T]} \right) \asymp e^{- n I_{[0, T]}(x)} \qquad \text{as } n \to \infty \, , \end{equation*} which means: the probability of the stochastic process to be close to some curve $x \colon [0, T] \to \mathcal{X}$ decays exponentially with a rate given by $n$ times a rate function, which is again a non-negative lower-semicontinuous function on the set of curves in $\mathcal{X}$. If the rate function has a unique minimum 0, we may again identify an element in the space of curves that is the deterministic limit of the stochastic process. \subsection{Large deviations and statistical mechanics} The language of large-deviation theory is central to statistical mechanics. As recognized in \cite{hT09}, any statement that connects probabilities of microstates to a specific thermodynamic potential is an LDP. For instance, the definition of Boltzmann's entropy \begin{equation}\label{Boltzmann} S^n_\text{eq}(u) := k_B \ln \mathbb{P}(H^n/n \approx u) \, , \end{equation} where $H^n$ is the total energy and $u$ is a possible realization of the energy per particle $H^n/n$, leads to an LDP in the following sense. Let us ask whether the specific entropy \begin{equation*} s_\text{eq}(u) := \lim\limits_{n \to \infty} \dfrac{S^n_\text{eq}(u)}{n} \end{equation*} exists. If we have Eq.~\!\eqref{Boltzmann}, then \begin{equation*} s_\text{eq}(u) = \lim\limits_{n \to \infty} \dfrac{1}{n} k_B \ln \mathbb{P}(H^n/n \approx u) \, , \end{equation*} or \begin{equation*} \mathbb{P}(H^n/n \approx u) \asymp e^{n s_\text{eq}(u)} \, . \end{equation*} The LDP tells us that the entropy $S^n_\text{eq}$ becomes extensive ($S^n_\text{eq} \simeq n s_\text{eq}$) in the limit of many degrees of freedom. 
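The same exponential scaling can be observed numerically in the textbook case of Cram\'er's theorem (our illustration, with assumed fair coins): for $X^n$ the mean of $n$ fair coin flips, $-\frac{1}{n} \ln \mathbb{P}(X^n \geq a)$ converges to the relative entropy $I(a) = a \ln(2a) + (1-a) \ln(2(1-a))$:

```python
import math

def log_tail_prob(n, a):
    """Exact log P(X^n >= a) for X^n the mean of n fair coin flips."""
    k_min = math.ceil(n * a)
    prob = sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2.0 ** n
    return math.log(prob)

def rate_function(a):
    """Cramer rate function for fair coins: the relative entropy I(a)."""
    return a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))

a = 0.7
# -log P / n approaches I(0.7) ~ 0.082; the O(log(n)/n) prefactor
# correction is visible at finite n and vanishes as n grows.
estimates = {n: -log_tail_prob(n, a) / n for n in (50, 200, 800)}
```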
Similar relations hold for other thermodynamic potentials, and the structure of Legendre-Fenchel transforms among them completely reflects the same structure arising in large-deviation theory \cite{hT09}. The equilibrium states are defined as the most probable states in the limit $n \to \infty$, namely, the minimizers of the rate function. The rate function describes the fluctuations around the equilibrium states. What happens in nonequilibrium statistical mechanics? Can similar statements be established? In the present paper, we will give a characterization of macroscopic dynamics as minimizers of large-deviation rate functions for stochastic processes. The rate functions, then, will describe the fluctuation paths around the deterministic evolution. Moreover, the characterization will provide the deterministic dynamics with a precise structure. \subsection{The Feng-Kurtz method}\label{Feng-Kurtz} We conclude this section with a concise description of the Feng-Kurtz method \cite{FK06}, which represents both a useful tool to derive explicit expressions for the rate functions and a well-developed theory to rigorously prove corresponding mathematical statements. Given a sequence of time-homogeneous Markov processes with infinitesimal generator \begin{equation} (\mathcal{Q}^n f)(x) := \lim\limits_{t \to 0} \dfrac{\mathbb{E}\!\left[ f(X^n_t) \big| X^n_0 = x \right] - f(x)}{t} \, , \end{equation} we want to establish an LDP. To start with, we define the \emph{nonlinear} or \emph{Fleming generator} \cite{whF78} \begin{equation} (H_n f)(x) := \dfrac{1}{n} e^{-n f(x)} (\mathcal{Q}^n e^{n f})(x) \end{equation} and search for a limit \begin{equation*} H_n \rightarrow H \end{equation*} in some operator sense that we do not define here. The convergence of the nonlinear generator implies the existence of a large-deviation principle \cite{FK06}, and the limit~$H$ leads to a characterization of the rate function, as follows.
For a Markov process, the expression \begin{equation*} (H f)(x) \end{equation*} depends on the function $f$ only through its first derivative. We can therefore define the \emph{Hamiltonian} \begin{equation} \mathcal{H}(x, \mathrm{d} f(x)) := (H f)(x) \, , \end{equation} and compute its Legendre-Fenchel transform \begin{equation} \mathcal{L}(x, v) = \sup\limits_\xi \left[ \xi \cdot v - \mathcal{H}(x, \xi) \right] \, , \end{equation} which we call the \emph{Lagrangian}. Then, the rate function has the form \begin{equation*} I_{[0, T]}(x) = I_0(x_0) + \int_0^T \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t \, , \end{equation*} where $I_0$ is the rate function for the initial state $X^n_0$. Since $I_0$ plays no role in the following, we suppose $X^n_0$ is chosen deterministically and write concisely \begin{equation}\label{rateFunction} I_{[0, T]}(x) = \int_0^T \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t \, . \end{equation} As we may expect for time-homogeneous Markov processes, information is contained in a function of just two variables (the Lagrangian), which expresses the fact that -- for such processes -- information is local in time. We will take advantage of this property in our numerical simulations of Sec.~\!\ref{numerics}. For a concrete example of the analytical procedure, see Sec.~\!\ref{largeDeviationsCME}. The form of the rate function \eqref{rateFunction} presents the following additional feature: when a law of large numbers holds, the deterministic evolution must minimize the Lagrangian at every instant of time. This establishes a parallel with the discussion of generalized gradient flows that we will elaborate in our generalization of the FDT.
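As a minimal sketch of how such a Lagrangian can be obtained in practice (an assumed reversible reaction $A \rightleftharpoons B$, anticipating Sec.~\!\ref{sec:CME}; our illustration, not the implementation of Sec.~\!\ref{numerics}): if the fraction $x$ of species $A$ jumps by $\mp 1/n$ at rates $n k_1 x$ and $n k_2 (1-x)$, the Fleming generator converges to $\mathcal{H}(x, \xi) = k_1 x (e^{-\xi} - 1) + k_2 (1-x)(e^{\xi} - 1)$, and the Lagrangian follows by a numerical Legendre-Fenchel transform:

```python
import math

K1, K2 = 2.0, 1.0  # assumed rate constants for A -> B and B -> A

def hamiltonian(x, xi):
    """Limit of the Fleming generator for A <-> B: the fraction x of A
    jumps by -1/n at rate n*K1*x and by +1/n at rate n*K2*(1-x)."""
    return K1 * x * (math.exp(-xi) - 1.0) + K2 * (1.0 - x) * (math.exp(xi) - 1.0)

def lagrangian(x, v):
    """L(x, v) = sup_xi [xi*v - H(x, xi)] by brute-force grid search."""
    return max(xi * v - hamiltonian(x, xi)
               for xi in (i * 1e-3 - 5.0 for i in range(10001)))

# The Lagrangian vanishes at the deterministic kinetics
# v = dH/dxi(x, 0) = -K1*x + K2*(1 - x) and is positive elsewhere.
x = 0.4
v_det = -K1 * x + K2 * (1.0 - x)
```

The zero of $\mathcal{L}$ in its second argument recovers the deterministic mass-action kinetics $\dot{x}_t = -k_1 x_t + k_2 (1 - x_t)$, in line with the minimization property just described.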
\section{The generalized fluctuation-dissipation theorem\label{sec:FDT}} In Secs.~\!\ref{sec:GGF}-\ref{sec:LD} we have introduced the mathematical ingredients that we need to express dissipation and fluctuations and we have recognized a similarity in that both ingredients are characterized by a minimization of a function: $\mathcal{F}$ in the case of generalized gradient flows, $\mathcal{L}$ in the case of LDPs. Now we are ready to address the first aim of the paper: the generalization of the FDT. Given a sequence of stochastic processes that approximate some microscopic dynamics, what is the proper dissipative structure of its deterministic limit, namely, the macroscopic dynamics? As a first step, we introduce the classical formulation of the FDT for diffusion processes. Then, we translate it into the language of large deviations and GGSs. Finally, we use this translation to state the generalization for Markov processes. \subsection{FDT for diffusion processes and gradient flows}\label{FDT-GF} According to the coarse-graining procedure depicted in \figurename\ref{fig:FDT}, for a wide class of systems the correct description of fluctuations is given in terms of a diffusion process. This happens when the dynamics of the macroscopic variables~$X^n$ is the sum of the short-time correlated interactions of many microscopic particles, such that the fluctuations are modeled as a Gaussian white noise \cite{OM53}. The governing equation is the stochastic differential equation (SDE) \begin{equation}\label{diffusion} \mathrm{d} X^n_t = A(X^n_t) \, \mathrm{d} t + n^{-1/2} B(X^n_t) \diamond \mathrm{d} W_t \, , \end{equation} where the expressions for the functions $A$ (called \emph{drift}) and $B$ (the \emph{noise intensity matrix}) need to be specified. This equation can be formally derived by projection-operator techniques, which also give the expressions for $A$ and $B$ in terms of the microscopic data \cite{pE04,EV02}. 
The symbol $\diamond$ stands for the \emph{kinetic} or \emph{Klimontovich} interpretation for the noise \cite{HO98}, which -- in It\^{o} form -- results in the SDE \begin{gather*} \mathrm{d} X^n_t = \left[ A(X^n_t) + \dfrac{1}{2 n} \Div D(X^n_t) \right] \mathrm{d} t + \dfrac{B(X^n_t)}{\sqrt{n}} \, \mathrm{d} W_t \, , \\ \text{with } D(x) := B(x) B(x)^T \, . \end{gather*} Now, suppose that this process has a stationary distribution of the Boltzmann type \footnote{The kinetic interpretation for the noise has been chosen so that this distribution is stationary for every value of $n$. To support the following arguments, however, we need only a weaker condition: that the distribution satisfies the LDP $\pi^n \asymp e^{n s}$. This makes the choice of interpretation for the noise irrelevant, since the correction in the drift vanishes in the limit $n \to \infty$.}, \begin{equation}\label{eq} \pi^n_x = e^{n s_x} \, , \end{equation} and, in addition, is in detailed balance with respect to it. This means that the generator of the process \eqref{diffusion}, \begin{equation*} (Q^n f)(x) = A(x) \cdot \mathrm{d} f(x) + \dfrac{1}{2 n} \Div \!\big[ D(x) \mathrm{d} f(x) \big] \, , \end{equation*} is self-adjoint with respect to the measure \eqref{eq}, viz., \begin{equation*} \int_\mathcal{X} f(x) \, (Q^n g)(x) \, e^{n s_x} \, \mathrm{d} x = \int_\mathcal{X} g(x) \, (Q^n f)(x) \, e^{n s_x} \, \mathrm{d} x \end{equation*} for all functions $f$ and $g$ in a proper class. Then, the SDE \eqref{diffusion} must have the form \cite[Sec.~\! 6.3.5]{cG09} \begin{equation}\label{diffDB} \mathrm{d} X^n_t = \dfrac{1}{2} D(X^n_t) \mathrm{d} s_{X^n_t} \, \mathrm{d} t + n^{-1/2} B(X^n_t) \diamond \mathrm{d} W_t \, .
\end{equation} We call this result the \emph{classical FDT}: \emph{given detailed balance, the drift term has the structure of a gradient flow, where the driving function is related to the logarithm of the stationary distribution of the process and the linear operator is (a half of) the diffusion tensor}. Of course, the deterministic limit of \eqref{diffDB}, which describes the macroscopic evolution, is given by the drift: \begin{equation*} \dot{x}_t = \dfrac{1}{2} D(x_t) \mathrm{d} s_{x_t} \, . \end{equation*} Therefore, we \emph{define} the friction matrix to be equal to a half of the diffusion tensor, \begin{equation}\label{FDT} \boxed{M_x := \dfrac{1}{2} D(x)} \, , \end{equation} and the macroscopic evolution equation is the gradient flow \begin{equation}\label{deterministicGF} \dot{x}_t = M_{x_t} \mathrm{d} s_{x_t} \, . \end{equation} \subsection{Reformulation of the FDT via large deviations} As already noticed by Onsager and Machlup \cite{OM53}, the path measure of Eq.~\!\eqref{diffDB} has a special form that encodes all data about the structure of the deterministic limit. In the language and notation of the present paper, the path measure satisfies an LDP \begin{equation*} \mathbb{P} \!\left( \left. X^n_t \right\rvert_{[0, T]} \approx \left. x_t \right\rvert_{[0, T]} \right) \asymp e^{- n I_{[0, T]}(x)} \end{equation*} with \begin{equation*} I_{[0, T]}(x) = \int_0^T \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t \end{equation*} and \begin{equation}\label{LagrangianDiff} \mathcal{L}(x, v) = \dfrac{1}{2} \left( v - \dfrac{D(x)}{2} \mathrm{d} s_x \right) \cdot D(x)^{-1} \left( v - \dfrac{D(x)}{2} \mathrm{d} s_x \right) \, . \end{equation} A corresponding result can be proven rigorously for any SDE of the form \eqref{diffusion} \cite{FW98} and can also be formally obtained by using the Feng-Kurtz method explained in Sec.~\!\ref{Feng-Kurtz}.
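This can be checked directly (a sketch of the formal computation, under the assumptions stated above): for the SDE \eqref{diffDB}, the Fleming generator of Sec.~\!\ref{Feng-Kurtz} converges to the quadratic Hamiltonian
\begin{equation*}
\mathcal{H}(x, \xi) = \xi \cdot \dfrac{D(x)}{2} \mathrm{d} s_x + \dfrac{1}{2} \xi \cdot D(x) \xi \, ,
\end{equation*}
whose Legendre-Fenchel transform, with maximizer $\xi^* = D(x)^{-1} \big( v - \tfrac{1}{2} D(x) \mathrm{d} s_x \big)$, is precisely the Lagrangian \eqref{LagrangianDiff}.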
The Lagrangian \eqref{LagrangianDiff} is (a half of) the function that generates the standard gradient structure $\left( D/2, s \right)$: this is exactly equivalent, and represents nothing but a reformulation, of the classical FDT. Therefore, we define the function $\mathcal{F}$, which drives the macroscopic gradient flow, to be \begin{equation} \mathcal{F} := 2 \mathcal{L} \, . \end{equation} Our generalization will elaborate exactly on this observation. \subsection{Generalized FDT for Markov processes with detailed balance} We have just seen that the large-deviation Lagrangian of a diffusion process with detailed balance is the $\mathcal{F}$-function of a standard gradient flow, the entropy being related to the logarithm of the stationary distribution. This is a first realization of the parallel between generalized gradient flows and large-deviation Lagrangians that we suggested before. The parallel is even more general when we extend the class of processes to Markov processes with detailed balance. Assume that a sequence of Markov processes $X^n$ satisfies the following conditions: \begin{enumerate} \item convergence to a deterministic curve that solves \begin{equation*} \dot{z}_t = \mathcal{A}(z_t) \, ; \end{equation*} \item the LDP \begin{equation*} \mathbb{P} \!\left( \left. X^n_t \right\rvert_{[0, T]} \approx \left. x_t \right\rvert_{[0, T]} \right) \asymp e^{- n I_{[0, T]}(x)} \end{equation*} with rate function \begin{equation}\label{FDT-rateFunction} I_{[0, T]}(x) = \int_0^T \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t \end{equation} and $\mathcal{L}(x, v)$ convex in $v$; \item detailed balance with respect to the stationary distribution \begin{equation} \pi^n_x \asymp e^{n s_x} \, . \end{equation} \end{enumerate} Then, it is proven in \cite{MPR14} that \emph{the large-deviation Lagrangian generates a generalized gradient structure (with symmetric dissipation potential) for the deterministic limit}. We call this statement a \emph{generalized FDT}. 
As a consequence, we define the $\mathcal{F}$-function of the deterministic limit by \begin{equation}\label{genFDT} \boxed{\mathcal{F} := 2 \mathcal{L}} \, . \end{equation} The function $\mathcal{F}$ has, of course, the form \begin{equation*} \mathcal{F}(x, v) = \Psi_x(v) + \Psi^*_x(\mathrm{d} s_x) - \mathrm{d} s_x \cdot v \end{equation*} and \emph{the dissipation potentials are symmetric}. Strictly speaking, detailed balance is sufficient but not necessary to obtain a GGS as defined in Sec.~\!\ref{sec:GGF} \cite{MPR14}: it would suffice that \begin{equation*} 2 \partial_v \mathcal{L}(x, 0) = - \mathrm{d} s_x \, , \end{equation*} whereas detailed balance implies that \begin{equation*} \mathcal{L}(x, v) - \mathcal{L}(x, -v) = - \mathrm{d} s_x \cdot v \, , \end{equation*} which is a stronger statement. From the latter condition, one deduces the symmetry of the dissipation potentials, which -- in turn -- implies the properties 2 and 3 of the definition of a dissipation potential. \subsection{Generalized coarse-graining procedure} A major consequence of the generalized FDT \eqref{genFDT} is an extension of the coarse-graining procedure described in the introduction. In \figurename\ref{fig:GFDT} we illustrate the effect of this generalization on the structure of \figurename\ref{fig:FDT}: whereas the method of \figurename\ref{fig:FDT} allowed us to handle microscopic dynamics that are well-approximated by diffusion processes, the scheme of \figurename\ref{fig:GFDT} extends this to microscopic systems whose fluctuations have the form of (sequences of) more general Markov processes. The typical examples that did not fit the scheme of \figurename\ref{fig:FDT} are systems characterized by rare events, which are mathematically described by Markov jump processes. The procedure outlined here constitutes a novel approach in the field of rare-event simulations.
\begin{figure*}[t] \begin{tikzpicture}[node distance=1.75cm] \tikzstyle{every node}=[font=\normalsize, fill=white] \draw[very thick] (0, 0) -- (5, 0) node[pos=.5, above] {$\dot{y}_t = \operatorname{slow\&fast}(y_t)$} node[at start] (A) {2} node[right] {microscopic level}; \draw[very thick] (0, 5) -- (6, 5) node[pos=.5, above] {$\overbrace{\mathcal{F}(x_t, \dot{x}_t) = 0}^\text{slow}$} node[at start] (B) {1} node[right] {macroscopic level} node[midway, below, fill=gray!10] {FDT: $\mathcal{F} := 2 \mathcal{L}$}; \draw[very thick] (1.9, 3.25) -- (13, 3.25) node[midway, above, align=center] {sequence of Markov processes with $\mathbb{P}\!\left( X^n \approx x \right) \asymp e^{-n \int_{0}^{T} \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t}$ \\ and detailed balance with respect to $\pi^n \asymp e^{n s}$} node[at start] (C) {$1^+$} node[right] {macroscopic + fluctuations}; \draw[->] (A) -- (B) node[pos=.49, right] {\footnotesize{$x = \Pi(y)$}}; \draw[->] (A) -- (C); \draw[->] (C) -- (B) node[midway, above, sloped] {\scriptsize{$n \to \infty$}}; \end{tikzpicture} \caption{Coarse-graining for Markov processes via the generalized FDT. A microscopic level of description (2) is described by the variables~$y$ and its dynamics contains both fast and slow behaviors. Level~$1^+$ is a stochastic approximation of level~$2$: its dynamics is a sequence of Markov processes with an LDP. At the macroscopic level (1), the reduced set of variables~$x$ accounts for the ``slow'' dynamics only, and the dynamics is a generalized gradient flow where the function~$\mathcal{F}$ is defined via the generalized FDT. 
Hence, the driving function~$\mathcal{F}$ may be calculated via the evaluation of the Lagrangian~$\mathcal{L}$, which is done through the estimation of the nonlinear generator of the process.} \label{fig:GFDT} \end{figure*} The generic setup is the following: \begin{itemize} \item We simulate the microscopic dynamics with variables~$y$ and we observe the dynamics of the coarse-grained variables $x = \Pi(y)$. \item In terms of the variables~$x$, the entropy~$s$ is the rate function of the stationary distribution. \item The function $\mathcal{F}$ that drives the macroscopic generalized gradient flow is twice the Lagrangian of the stochastic process, which we choose to calculate via a numerical implementation of the Feng-Kurtz method. \end{itemize} Once we have~$\mathcal{F}$, the dissipation potential is easily obtained from the observation \begin{equation*} \mathcal{F}(x, 0) = \Psi_x(0) + \Psi^*_x(\mathrm{d} s_x) - \mathrm{d} s_x \cdot 0 = \Psi^*_x(\mathrm{d} s_x) \, , \end{equation*} whence \begin{equation} \Psi_x(v) = \mathcal{F}(x, v) - \mathcal{F}(x, 0) + \mathrm{d} s_x \cdot v \, . \end{equation} We show an example of this procedure in the following section, where we compare the results with the exact ones predicted by theory. \section{Example: chemical reactions\label{sec:CME}} In this section we describe an application of the generalized FDT in the context of chemical kinetics. Chemical reactions, together with all systems that exhibit rare-event features, are the archetypal example where Markov jump processes provide the right form of the fluctuations. The use of Green-Kubo schemes associated with diffusion processes (cf.~\figurename\ref{fig:FDT}) would lead to inaccurate or wrong results, as we show in Sec.~\!\ref{Green-Kubo}. A typical microscopic model for chemical reactions is a diffusive dynamics in a potential landscape, which represents the effective interactions of all constituents of a mixture in the configuration space.
If the minima of the landscape are separated by sufficiently high barriers with respect to the amplitude of the noise, the effective dynamics, at large time scales, is well approximated by a Markov jump process between the minima. The simplest realization of this scheme is Kramers' escape problem \cite{haK40}, which we analyze in detail. We introduce the model from the standpoints of coarse-graining and the framework of the present paper and, in the last subsection, we propose a numerical strategy that follows the scheme of \figurename\ref{fig:GFDT}. \subsection{A multi-scale view: the levels of description}\label{A} \subsubsection{Level 2: diffusion in a double-well potential} \begin{figure}[h] \centering \begin{tikzpicture}[scale=1.85] \draw[->] (-1.5,0) -- (1.5,0) node[right] {$q$}; \foreach \x/\xtext in {0/0, -1/-1, 1/1} \draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$}; \draw[->] (0,0) -- (0,2) node[right, pos=.55] {$1$}; \draw[domain=-1.5:1.5,smooth,variable=\x,black,thick] plot ({\x},{\x*\x*\x*\x - 2*\x*\x + 1}) node[above] {$V(q)$}; \end{tikzpicture} \caption{The double-well potential $V$ that we use in the numerical experiment. The regions $A = (-\infty, 0)$ and $B = (0, \infty)$ are representative of the two chemical states.} \label{fig:potential} \end{figure} We consider the overdamped Langevin dynamics of a Brownian particle in the potential of \figurename\ref{fig:potential}, whose two wells represent the two chemical states $A$ and $B$. The motion is governed by the SDE \begin{equation} \mathrm{d} Q^\epsilon_t = - \dfrac{1}{\gamma} V'(Q^\epsilon_t) \, \mathrm{d} t + \sqrt{\dfrac{k_B T}{\gamma}} \, \mathrm{d} W_t \, , \end{equation} where $\gamma$ is a friction coefficient of unit $\textrm{kg}/\textrm{s}$. For simplicity, we put $\gamma = 1$: a different value would simply require a rescaling of time. 
Since we are interested in the low-temperature limit, we set $k_B T =: 2 \epsilon$ and write \begin{equation}\label{Kramers-SDE} \mathrm{d} Q^\epsilon_t = - V'(Q^\epsilon_t) \, \mathrm{d} t + \sqrt{2 \epsilon} \, \mathrm{d} W_t \, . \end{equation} Then, we take $n$ independent particles $Q^{\epsilon, i}$ in the same potential: these are the microscopic variables $y$. \subsubsection{Level $1^+$: the chemical master equation} Denoting by $\mathbbm{1}_J$ the indicator function of the set $J$, we consider the stochastic process \begin{equation}\label{empiricalProcess} X^{n, \epsilon} = \Pi(Q^{\epsilon, 1}, \ldots, Q^{\epsilon, n}) := \dfrac{1}{n} \sum_{i = 1}^{n} \mathbbm{1}_A(Q^{\epsilon, i}) \, , \end{equation} which is the basic object we are interested in. It keeps track of the concentration of the particles that, at time $t$, are in the well $A$; namely, it is a rational number $x$ in the set \begin{equation*} \mathcal{X}^n = \left\{0, \dfrac{1}{n}, \dfrac{2}{n}, \dfrac{3}{n}, \ldots, \dfrac{n-1}{n}, 1 \right\} \subset [0, 1] \eqqcolon \mathcal{X} \, . \end{equation*} The concentration of $B$ is $1 - X^{n, \epsilon}$, of course. As one can infer from \figurename~\ref{fig:potential}, when $\epsilon$ is sufficiently small, the process concentrates onto the minima of the potential, and moving between the two wells is a relatively rare event. In this regime, the system is characterized by two well-separated time scales: the equilibration time $t_2$ of the particles in the wells, and the escape time $t_1$ from the wells \cite{DLLN16}, with $t_1 \gg t_2$. During the equilibration time $t_2$, a particle gets locally equilibrated and forgets where it was before the last jump event. In the limit $\epsilon \to 0$, the jumps become infinitely rare and, to obtain a non-trivial dynamics, one has to rescale time appropriately.
By this means, the process \eqref{empiricalProcess} converges to a Markov jump process on the two chemical states, \begin{equation}\label{measureCME} X^{n, \epsilon} \to X^n \, . \end{equation} The evolution equation satisfied by the law of the process \eqref{measureCME} is the simplest realization of the \emph{chemical master equation} (CME) \cite{dtG92}. We think of the CME as approximating the $n$-particle diffusion process for small enough~$\epsilon$. The transition rates are given by the formulas \begin{equation*} \begin{cases} \mathcal{Q}\!\left( x \to x + \dfrac{1}{n} \right) = k \, n (1 - x) \\ \mathcal{Q}\!\left( x \to x - \dfrac{1}{n} \right) = k \, n x \end{cases}, \end{equation*} in terms of the \emph{reaction constant} $k := t_1^{-1}$, which means that the generator is the following linear operator on continuous bounded functions on $\mathcal{X}^n$: \begin{align}\label{generator} (\mathcal{Q}^n f)(x) &= n k (1 - x) \left[ f\!\left(x + \dfrac{1}{n}\right) - f(x) \right] \nonumber \\ &- n k x \left[ f(x) - f\!\left(x - \dfrac{1}{n}\right) \right] \, . \end{align} \subsubsection{Level 1: the reaction rate equation} From Eq.~\!\eqref{generator}, we may immediately infer the deterministic dynamics. Indeed, as $n \to \infty$, \begin{equation} (\mathcal{Q}^n f)(x) \to (\mathcal{Q} f)(x) = \left[ k (1 - x) - k x \right] f'(x) \, , \end{equation} which is a process with pure drift: it describes the deterministic \emph{reaction rate equation} (RRE) \footnote{also called \emph{law-of-mass-action dynamics} or \emph{Guldberg-Waage dynamics} \cite{GW867,sS87,mG12}} \begin{equation}\label{RRE} \dot{x}_t = k (1 - 2 x_t) \, , \end{equation} with $x \in \mathcal{X} = [0, 1]$. \subsection{Large deviations for the chemical master equation}\label{largeDeviationsCME} The convergence of the CME to the RRE is characterized by the large-deviation principle \begin{equation*} \mathbb{P} \!\left( \!\left. X^n_t \right\rvert_{[0, T]} \approx \left.
x_t \right\rvert_{[0, T]} \right) \asymp e^{- n I_{[0, T]}(x)} \qquad \text{as } n \to \infty \, , \end{equation*} where \begin{equation*} I_{[0, T]}(x) = \int_0^T \mathcal{L}(x_t, \dot{x}_t) \, \mathrm{d} t \, . \end{equation*} In order to calculate the Lagrangian, we follow the Feng-Kurtz method outlined in Sec.~\!\ref{Feng-Kurtz}. As a first step, we introduce the nonlinear generator \begin{align*} (H_n f)(x) &= \dfrac{1}{n} e^{-n f(x)} (\mathcal{Q}^n e^{n f})(x) \nonumber \\ &= k (1 - x) \left( e^{n f(x + 1/n) - n f(x)} - 1 \right) \nonumber \\ &\quad + k x \left( e^{n f(x - 1/n) - n f(x)} - 1 \right) \end{align*} and calculate the limit \begin{multline*} (H_n f)(x) \to (H f)(x) = \\ k (1 - x) \left( e^{\mathrm{d} f(x)} - 1 \right) + k x \left( e^{- \mathrm{d} f(x)} - 1 \right) \, . \end{multline*} Indeed, the limit nonlinear generator $H$ depends on the function $f$ only through its first derivative. The next steps are to substitute $\xi$ for $\mathrm{d} f(x)$, defining the Hamiltonian \begin{equation}\label{HamiltonianCME} \mathcal{H}(x, \xi) := k (1 - x) \left( e^\xi - 1 \right) + k x \left( e^{-\xi} - 1 \right) \, , \end{equation} and to compute the Legendre transform \begin{multline}\label{Lagrangian} \mathcal{L}(x, v) = k + v \sinh^{-1}\!\left(\dfrac{v}{\sqrt{4 k^2 x (1 - x)}}\right) \\ - \sqrt{k^2 + v^2 - \left[ k \left( 1 - 2x \right) \right]^2} - v \sinh^{-1}\!\left( \dfrac{k \left( 1 - 2x \right)}{\sqrt{4 k^2 x (1 - x)}} \right) \, . \end{multline} This Lagrangian is convex in $v$, and minimal with value zero at $v = k (1 - 2 x)$, which is the vector field of the deterministic dynamics \eqref{RRE}. For this simple system, we can explicitly compute the stationary distribution and verify that the process is in detailed balance with respect to it for any finite $n$. The distribution is a binomial one with parameters $n$ and $1/2$: \begin{equation}\label{invariantDistribution} \pi^n_x = \binom{n}{nx} 2^{-n} \, .
\end{equation} Its large-deviation behavior for $n \to \infty$ follows from the G\"artner-Ellis theorem \cite{hT09}: \begin{equation*} \pi^n \asymp e^{n s} \qquad \text{as } n \to \infty \, , \end{equation*} with \begin{equation}\label{S-GGF} s_x = - \left[ x \ln x + \left( 1 - x \right) \ln(1 - x) + \ln 2 \right] \, . \end{equation} \subsection{FDT and GGS for the reaction rate equation}\label{C} It is known that Eq.~\!\eqref{RRE} has at least two generalized gradient structures \cite{aM11,mG10} with the entropy \eqref{S-GGF}, whose derivative is \begin{equation*} \mathrm{d} s_x = \ln\dfrac{1 - x}{x} \, . \end{equation*} The FDT singles out precisely one GGS via the Hamiltonian \eqref{HamiltonianCME}. Indeed, by the properties of the Legendre transform, one can show that \begin{align} \Psi^*_x(\xi) &= 2 \left[ \mathcal{H}\!\left(x, \dfrac{1}{2}(\xi - \mathrm{d} s_x) \right) - \mathcal{H}\!\left( x, -\dfrac{1}{2} \mathrm{d} s_x \right) \right] \label{Psi-H} \\ &= 4 k \sqrt{x (1 - x)} \left( \cosh\dfrac{\xi}{2} - 1 \right) \, . \label{Psi-GGF} \end{align} The evolution equation, \begin{equation*} \dot{x}_t = \partial_\xi \Psi^*_{x_t}\!\big( \mathrm{d} s_{x_t} \big) \, , \end{equation*} indeed reduces to the RRE \eqref{RRE}. Another possible choice for a GGS is given by a quadratic dissipation potential or, equivalently, by the friction matrix \begin{equation}\label{M} M_x = \dfrac{k \left( 1 - 2 x \right)}{\ln(1 - x) - \ln x} \, , \end{equation} as shown, e.g., in \cite{aM11,OG97} and found in \cite{hcO15} by thermodynamic and geometric arguments. The GGS \eqref{Psi-GGF}-\eqref{S-GGF} was proposed in \cite{mG93} independently of any consideration of the large deviations of an underlying stochastic process, which is remarkable. Arguments in favor of the quadratic dissipation potential associated with the friction matrix \eqref{M} are advanced in \cite{hcO15}.
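Both the identity \eqref{Psi-H} and the closed form \eqref{Psi-GGF} are straightforward to check numerically, as is the reduction to the RRE \eqref{RRE}. A short sketch (Python; the value $k = 1$ is an arbitrary choice):

```python
import math

k = 1.0  # arbitrary reaction constant

def H(x, xi):
    """CME Hamiltonian, Eq. (HamiltonianCME)."""
    return k * (1 - x) * (math.exp(xi) - 1) + k * x * (math.exp(-xi) - 1)

def ds(x):
    """Entropy derivative d s_x = ln((1 - x)/x)."""
    return math.log((1 - x) / x)

def Psi_star(x, xi):
    """Dissipation potential via the FDT identity, Eq. (Psi-H)."""
    return 2 * (H(x, 0.5 * (xi - ds(x))) - H(x, -0.5 * ds(x)))

def Psi_star_closed(x, xi):
    """Closed form, Eq. (Psi-GGF)."""
    return 4 * k * math.sqrt(x * (1 - x)) * (math.cosh(xi / 2) - 1)

def rre_drift(x, h=1e-6):
    """Central difference of Psi*_x at xi = ds_x: should equal k (1 - 2x)."""
    return (Psi_star(x, ds(x) + h) - Psi_star(x, ds(x) - h)) / (2 * h)
```

The two expressions for $\Psi^*$ agree to machine precision, and differentiating at $\xi = \mathrm{d} s_x$ returns the drift $k(1 - 2x)$ of the RRE.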
\subsection{Green-Kubo formula and diffusion approximations}\label{Green-Kubo} The scheme of \figurename\ref{fig:GFDT} provides the CME at level~$1^+$ with (i) the right deterministic limit (thus the right macroscopic dynamics), (ii) the right stationary distribution, and (iii) an expression for the dissipation potential that can be computed by numerical simulations on level~$1$. Can these three features be reproduced with the method of \figurename\ref{fig:FDT} or, at least, by approximating the microscopic dynamics with a diffusion process? In this section we formulate an answer. If we use the scheme of \figurename\ref{fig:FDT}, at level $1^+$, we have the diffusion process that solves \begin{equation*} \mathrm{d} X^n_t = M^\text{GK}_{X^n_t} \mathrm{d} s_{X^n_t} \, \mathrm{d} t + \sqrt{\dfrac{2 M^\text{GK}_{X^n_t}}{n}} \diamond \mathrm{d} W_t \, , \end{equation*} with the Green-Kubo formula \begin{equation} 2 M^\text{GK}_x = n \lim\limits_{\tau \to 0} \dfrac{1}{\tau} \mathbb{E}\!\left[ \left( X^n_\tau - x \right)^2 \Big| X^n_0 = x \right] \, . \end{equation} Since we know that the CME is the correct model that approximates the $n$-particle process, we can calculate the theoretical result that we expect from simulations on level $1$: \begin{align} 2 M^\text{GK}_x &= n \left[ (\mathcal{Q}^n f)(x) - 2 x (\mathcal{Q}^n g)(x) \right] \nonumber \\ &= k \left( 1 - x \right) + k x = k \label{M-CLE} \, , \end{align} where $f(x) = x^2$, $g(x) = x$, and $\mathcal{Q}^n$ is the generator \eqref{generator} of the CME. We remark that $M^\text{GK}$ is, in general, a function of $x$, and it would be so here if we had two distinct rate constants for the backward and forward reactions, $k^- \neq k^+$.
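The cancellation behind the $x$-independence in \eqref{M-CLE} can be verified by applying the generator \eqref{generator} to the observables $f(x) = x^2$ and $g(x) = x$; the sketch below does so in exact rational arithmetic, so no round-off is involved (the values of $k$ and $n$ are arbitrary):

```python
from fractions import Fraction as Fr

k = Fr(1)    # reaction constant (arbitrary)
n = 50       # number of particles (arbitrary)

def Q(f, x):
    """CME generator, Eq. (generator), applied to f at x."""
    h = Fr(1, n)
    return n * k * (1 - x) * (f(x + h) - f(x)) - n * k * x * (f(x) - f(x - h))

def two_M_GK(x):
    """Green-Kubo coefficient 2 M^GK_x = n [ (Q f)(x) - 2 x (Q g)(x) ]."""
    return n * (Q(lambda y: y * y, x) - 2 * x * Q(lambda y: y, x))
```

For every $x \in \mathcal{X}^n$ the result is exactly $k$, in agreement with \eqref{M-CLE}.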
With the entropy \eqref{S-GGF}, we get \begin{equation*} \mathrm{d} X^n_t = \dfrac{k}{2} \ln\dfrac{1 - X^n_t}{X^n_t} \, \mathrm{d} t + \sqrt{\dfrac{k}{n}} \, \mathrm{d} W_t \, , \end{equation*} which has (ii) the right stationary distribution (by construction), (iii) a Green-Kubo expression for the friction matrix that can be computed by simulations on the microscopic level, but (i) the wrong drift and, therefore, the wrong deterministic limit (cf. the RRE \eqref{RRE}). Other ways of constructing a diffusion process for chemical reactions have been proposed: they aim to approximate the CME for a large number of particles. One of them is called the \emph{chemical Langevin equation} (CLE) \cite{dtG00}, which can also be thought of as the diffusion approximation \footnote{The diffusion approximation may be found by three heuristic arguments: the first corresponds to expanding the generator to second order in~$1/n$, the second to considering the Kramers-Moyal expansion in the equation for the law, and the third to replacing the Poisson noise, which describes the jumps, by a Brownian one for large~$n$ \cite{ngvK83,EK05,KKP14,jmH15}.} of the CME, and reads \begin{align} \mathrm{d} Y^n_t &= k (1 - 2 Y^n_t) \, \mathrm{d} t + \sqrt{\dfrac{k}{n}} \, \mathrm{d} W_t \label{CLE} \\ &= M^\text{GK}_{Y^n_t} \mathrm{d} s^\text{CLE}_{Y^n_t} \, \mathrm{d} t + \sqrt{\dfrac{2 M^\text{GK}_{Y^n_t}}{n}} \diamond \mathrm{d} W_t \nonumber \, , \end{align} with the entropy \begin{equation} s^\text{CLE}_y = - 2 \left( y - \dfrac{1}{2} \right)^2 \end{equation} and the friction matrix \eqref{M-CLE}. This process has (i) the right deterministic limit, (iii) a Green-Kubo expression for the friction matrix, but (ii) the wrong entropy, viz., the wrong stationary distribution.
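The mismatch in (ii) is easy to quantify: the Gaussian entropy of the CLE agrees with the exact binomial rate function \eqref{S-GGF} only up to second order around the equilibrium $x = 1/2$. A small sketch:

```python
import math

def s_cme(x):
    """Binomial rate function, Eq. (S-GGF)."""
    return -(x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2))

def s_cle(x):
    """Gaussian entropy of the chemical Langevin equation."""
    return -2 * (x - 0.5) ** 2
```

Both vanish at $x = 1/2$ with curvature $-4$, but away from equilibrium they differ at order one, so the two stationary distributions deviate at exponential order in $n$.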
A third choice is the \emph{log-mean equation} \cite{BBGBD15} \begin{align*} \mathrm{d} Z^n_t &= M_{Z^n_t} \mathrm{d} s_{Z^n_t} \, \mathrm{d} t + \sqrt{\dfrac{2 M_{Z^n_t}}{n}} \diamond \mathrm{d} W_t \nonumber \\ &= k (1 - 2 Z^n_t) \, \mathrm{d} t + \sqrt{\dfrac{2}{n} \dfrac{k \left( 1 - 2 Z^n_t \right)}{\ln(1 - Z^n_t) - \ln Z^n_t}} \diamond \mathrm{d} W_t \, , \end{align*} with the entropy \eqref{S-GGF} and the friction matrix \eqref{M}. It has (i) the right deterministic limit, (ii) the right stationary distribution, but (iii) the friction matrix cannot be derived by the Green-Kubo formula. Among the three diffusion processes, only one is consistent with the CME for relatively small deviations from the deterministic limit. Indeed, the large-deviation Lagrangian of the CLE \eqref{CLE}, \begin{equation}\label{DiffusionApproximation} \mathcal{L}_\mathrm{D}(x, v) = \dfrac{1}{2 k} \left[ v - k(1 - 2x) \right]^2 \, , \end{equation} is the quadratic form in $v$ that approximates the Lagrangian \eqref{Lagrangian} around the deterministic vector field \eqref{RRE} (cf.~\figurename\ref{fig:Hamiltonian}). This further justifies why the CLE is \emph{the} diffusion approximation of the CME. In conclusion, there is no way for a diffusion process to satisfy the requirements (i)-(iii) simultaneously. In order to satisfy all three requirements, one should move to more general Markov processes and to non-quadratic dissipation potentials, as indicated in \figurename\ref{fig:GFDT}. \subsection{Numerical experiments}\label{numerics} The goal of the scheme depicted in \figurename\ref{fig:GFDT} is to infer the structure of a more macroscopic level of description from a more microscopic one, which in this case is represented by the Brownian motion of many independent particles in a double-well potential. The latter, for $\epsilon$ small enough, is well approximated by the CME. 
The approach advanced in this paper, thus, suggests the following procedure: we perform simulations on the microscopic level, use the coarse-graining map \eqref{empiricalProcess}, and approximately compute the large-deviation rate functions for the CME, which give us the GGS of the deterministic limit. Since, for this simple problem, we know everything analytically, we can compare the numerical results with the exact ones. In particular, we compare the values of the reaction constant obtained by simulation with the one provided by Kramers' formula \cite{haK40,BdH15} \begin{equation}\label{Kramers} \bar{k} = \dfrac{\sqrt{- V''(0) \, V''(A)}}{2 \pi} e^{\left( V(A) - V(0) \right)/\epsilon} \, . \end{equation} In accordance with the generalized FDT, we should estimate the entropy function by looking at the large deviations of the stationary distribution of the process. For simplicity, however, here we assume the entropy to be known and concentrate on the dynamic large deviations. The latter task is, in general, a very hard one, since the dynamic rate function is defined on a space of curves; in other words, we are dealing with a very high-dimensional problem. However, the form \eqref{rateFunction} of the rate function for time-homogeneous Markov processes in terms of a Lagrangian, a local function of just two variables, greatly reduces the dimensionality of the problem. For this reason, we think that the Feng-Kurtz method, whose main purpose is to find the Lagrangian by studying the convergence of nonlinear generators, represents an excellent framework for our purposes. Hence, we shall develop a numerical implementation of the Feng-Kurtz method introduced in Sec.~\!\ref{Feng-Kurtz}.
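For reference, Kramers' formula \eqref{Kramers} can be evaluated directly for the setup used below, i.e. the quartic potential $V(q) = \left( q^2 - 1 \right)^2$ of \figurename~\ref{fig:potential} and the noise level $\epsilon = 0.15$; a minimal sketch:

```python
import math

eps = 0.15                          # noise intensity used in the experiment

def V(q):
    return (q ** 2 - 1) ** 2        # double-well potential of the figure

def Vpp(q):
    return 12 * q ** 2 - 4          # second derivative V''

# minima (wells) at q = +-1, barrier top at q = 0
kbar = math.sqrt(-Vpp(0.0) * Vpp(1.0)) / (2 * math.pi) \
       * math.exp((V(1.0) - V(0.0)) / eps)
```

This gives $\bar{k} \approx 1.1 \cdot 10^{-3}$, i.e. a mean escape time $t_1 = \bar{k}^{-1}$ of roughly $9 \cdot 10^2$ time units, which sets the scale for the lag $\tau$ and the total simulation time $N\tau$ discussed below.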
The core of the Feng-Kurtz method is to calculate the nonlinear generator \begin{equation}\label{nonlinearGenerator} (H^n f)(x) := \dfrac{1}{n} e^{-n f(x)} \lim\limits_{\tau \to 0} \dfrac{\mathbb{E}\!\left[ e^{n f(X^n_\tau)} \big| X^n_0 = x \right] - e^{n f(x)}}{\tau} \, , \end{equation} whose limit defines the Hamiltonian \begin{equation}\label{HamiltonianNum} (H^n f)(x) \to \mathcal{H}(x, \mathrm{d} f(x)) \quad \text{as } n \to \infty \, , \end{equation} which here has to be understood in a purely numerical sense: the number of particles $n$ should be large enough for an acceptable accuracy on $\mathcal{H}$. It is not the purpose of this paper to go into the details of the control of this convergence, but only to demonstrate the success of this method in the context of the present example. Likewise, for the moment we aim neither at constructing efficient simulations nor at full statistical rigor; we reserve these for future work. For instance, in the same spirit as inference for generators of continuous-time Markov processes \cite{BS05}, a theory of inference for nonlinear generators should be developed. Since the limit nonlinear generator \eqref{HamiltonianNum}, if it exists, depends only on $x$ and $\mathrm{d} f(x)$, it is sufficient to consider linear functions $f$. The numerical discretization of Eq.~\!\eqref{nonlinearGenerator} contains five parameters that, in principle, we would like to take to their limits, but which in our numerical experiment take only finite values: \begin{itemize} \item The noise intensity $\epsilon \to 0$, which controls the separation of time scales. Kramers mentions in his seminal paper \cite{haK40} that $\epsilon = 0.2$ is sufficient: the process becomes approximately Markov and is well described by the CME. We actually work with $\epsilon = 0.15$. In applications, however, this is not a parameter, but a model datum.
\item The time-step size $\Delta t \to 0$ of the numerical scheme used to simulate the SDE \eqref{Kramers-SDE}, which should resolve the microscopic dynamics, guarantee stability of the scheme, and be smaller than the local equilibration time $t_2$ \footnote{An estimation of the equilibration time $t_2$ should be done separately, for instance with the help of the various methods available in the literature \cite{DLLN16,afV98,BLS15}. In our case, the ratio $t_2/\Delta t$ is of the order of $10$.}. We consider the numerical value $\Delta t = 0.01$. \item The time interval $\tau \to 0$. This time constant should be ``macroscopically small'', i.e., much smaller than the typical jump time $t_1 = k^{-1}$, but also larger than the equilibration time $t_2$, in such a way that we retain only the macroscopic features of the process~$X^n$, namely, we neglect the underlying diffusive nature of it. We take $\tau = \bar{k}^{-1}/50$, where $\bar{k}$ is given by formula \eqref{Kramers}. In more general contexts, we may not know the values of the characteristic times in advance and should perform an appropriate estimation of them. \item The number of particles $n \to \infty$. Since the particles are independent and the observables $f$ are linear, the nonlinear generator loses its dependence on $n$. This parameter, then, only controls the discretization of the space $\mathcal{X}^n$, and $n = 6$ is enough for our purposes. \item The sample size $N \to \infty$ over which we calculate the expectation. To obtain good statistics, enough jumps in the time interval $\tau$ should occur. Since the average jump time is $t_1$, we need $N \tau \gg t_1$, and we choose $N = 10^5$. \end{itemize} We have built the chain of inequalities \begin{equation*} \Delta t \ll t_2 \ll \tau \ll t_1 \ll N \tau \, . \end{equation*} For definiteness, we choose the quartic potential \begin{equation*} V(q) = \left( q^2 - 1 \right)^2 \, . 
\end{equation*} \begin{figure*}[t] \begin{minipage}[t]{.482\textwidth} \centering \includegraphics[width=1.05\linewidth]{Hamiltonian.eps} \caption{Numerical estimate of the Hamiltonian compared with the theoretical result. The latter is represented by the smooth surface, and the former by the red dots. The values of the parameters are: $\epsilon = 0.15$, $\Delta t = 10^{-2}$, $\tau = \bar{k}^{-1}/50$, $n = 6$, $N = 10^5$, $\xi_\text{max} = 2$.} \label{fig:Hamiltonian} \end{minipage}\quad \ \ \ \begin{minipage}[t]{.482\textwidth} \centering \includegraphics[width=.82\linewidth]{Fitting.eps} \caption{Fitting of the data points $\widehat{\mathcal{H}}(0.5, f_j'(0.5))$ with the function $k/2 \left( e^\xi - 1 \right) + k/2 \left( e^{-\xi} - 1 \right)$ (Eq.~\!\eqref{HamiltonianCME}) and comparison to the theoretical Hamiltonian and its diffusion approximation \eqref{DiffusionApproximationHamiltonian}, both with $k = \bar{k}$. The confidence intervals are at level 99\%.} \label{fig:fitting} \end{minipage} \end{figure*} The numerical setup is the following. \begin{enumerate} \item We select the observables $f_j(x) = \xi_j x$, with the $\xi_j$ logarithmically spaced \footnote{We choose a small value $\xi^+_1$. The positive values $\xi^+_j$ are in geometric progression starting from $\xi^+_1$, and the negative ones are $\xi^-_j = - \xi^+_j$.} in the interval $[-\xi_\text{max}, \xi_\text{max}]$ with $\xi_\text{max} = 2$. The logarithmic spacing has the aim of resolving the region around $\xi = 0$ sufficiently well. \item For every $x \in \mathcal{X}^n$, we run $N$ simulations, of length $\tau$, of the $n$ independent SDEs \eqref{Kramers-SDE}. We use an Euler-Maruyama scheme with step size $\Delta t$. The starting point for $n x$ particles is $A$, and for the others is $B$. We need not care about equilibration in the wells because $\tau \gg t_2$. \item We index the simulations by $k$ (not to be confused with the reaction constant) and say that the random variable $X_\tau$ has $x^k_\tau$ as its realization.
After the $k$-th simulation started from $x$, we compute the quantities \begin{equation}\label{sample} h_j^k(x) := e^{n f_j(x^k_\tau)} \, . \end{equation} There is one such quantity for each $x$, $j$ and $k$. \item Since the sample is automatically extracted from the (approximate, because of time discretization) law of the process, to estimate the expectation in Eq.~\!\eqref{nonlinearGenerator} we only need to take the simple averages \begin{equation} \dfrac{1}{N} \sum\limits_{k = 1}^N h_j^k(x) \, . \end{equation} Therefore, our estimator for the nonlinear generator is \begin{equation} (\widehat{H} f_j)(x) = \dfrac{1}{n \tau} \left( e^{-n f_j(x)} \dfrac{1}{N} \sum\limits_{k = 1}^N h_j^k(x) - 1 \right) \, . \end{equation} \end{enumerate} Then, from the Feng-Kurtz theory, we know that computing the nonlinear generator on an observable $f_j$ corresponds to estimating the Hamiltonian \begin{equation} \widehat{\mathcal{H}}(x, \mathrm{d} f_j(x)) = (\widehat{H} f_j)(x) \, . \end{equation} This algorithm provides us with an estimate at the points~$(x, \xi_j) \in \mathbb{R}^2$. The results are displayed in \figurename\ref{fig:Hamiltonian}, where the solid surface is the Hamiltonian \eqref{HamiltonianCME} with ${k = \bar{k}}$, and the red dots are its estimated values, which show excellent agreement. If we assume the functional form of the Hamiltonian, we are able to determine the whole function in $\mathbb{R}^2$ by using a finite set of observables. For Markov jump processes on a graph, we can always expect the Hamiltonian to be of the form \cite{MPPR17} \begin{equation} \mathcal{H}(x, \xi) = \sum\limits_{\nu} r_\nu(x) \left( e^{\xi \cdot \nu} - 1 \right) \, , \end{equation} where $\nu$ represents a directed path between two nodes (states), and the sum runs over all paths.
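For this two-state example, the estimator $(\widehat{H} f_j)(x)$ can be prototyped very cheaply: since the particles are independent, the end state of each particle after a lag $\tau$ can be sampled from the exact two-state transition probability instead of integrating the SDE \eqref{Kramers-SDE}. The sketch below (Python; the parameter values are illustrative, not those of the experiment) recovers the Hamiltonian \eqref{HamiltonianCME} within statistical accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, tau, N = 1.0, 6, 0.02, 50_000   # illustrative parameters

# exact probability that a particle starting in well A is in A at time tau
p_stay = 0.5 * (1.0 + np.exp(-2.0 * k * tau))

def H_hat(x, xi):
    """Estimator (H^ f_j)(x) for the linear observable f_j(z) = xi * z."""
    nA = round(n * x)                                # particles starting in A
    in_A = np.concatenate(
        [rng.random((N, nA)) < p_stay,               # started in A
         rng.random((N, n - nA)) < 1.0 - p_stay],    # started in B
        axis=1)
    X_tau = in_A.sum(axis=1) / n                     # coarse-grained endpoint
    return (np.exp(-n * xi * x) * np.mean(np.exp(n * xi * X_tau)) - 1.0) / (n * tau)

def H_exact(x, xi):
    """Reference Hamiltonian, Eq. (HamiltonianCME)."""
    return k * (1 - x) * (np.exp(xi) - 1) + k * x * (np.exp(-xi) - 1)
```

At $\xi = 0$ the estimator vanishes identically, as it must; at moderate $\xi$ it agrees with \eqref{HamiltonianCME} up to the statistical error and a bias of order $k\tau$.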
In our one-dimensional case, this is reduced to the determination of the functions $r_1$ and $r_2$ in \begin{equation}\label{Hamiltonian} \mathcal{H}(x, \xi) = r_1(x) \left( e^\xi - 1 \right) + r_2(x) \left( e^{-\xi} - 1 \right) \, . \end{equation} The coefficients $1$ and $-1$ in the exponents represent the stoichiometric coefficients of the two reaction paths (forward and backward) and are generalizable to any reaction network \cite{MPPR17}. If, in addition, we know the functional expressions for the rates, we may determine the reaction constants by fitting the simulation points with the function \eqref{HamiltonianCME}. We do so with the values $(\widehat{H} f_j)(x)$ at $x = 0.5$, a cross section of \figurename\ref{fig:Hamiltonian}. In \figurename\ref{fig:fitting}, we compare this fitting procedure to the theoretical Hamiltonian with $k = \bar{k}$. In addition, we provide the comparison with the prediction of the diffusion approximation (cf.~Sec.~\!\ref{Green-Kubo}), namely, with the Legendre transform of the Lagrangian \eqref{DiffusionApproximation}, \begin{equation}\label{DiffusionApproximationHamiltonian} \mathcal{H}_\mathrm{D}(x, \xi) = \dfrac{k}{2} \xi^2 + k \left( 1 - 2 x \right) \xi \, . \end{equation} In order to find the dissipation potential, we assume the entropy function \eqref{S-GGF} to be known. According to Eq.~\!\eqref{Psi-H}, we would like to compute \begin{equation} \widehat{\Psi}^*_x(\xi) = 2 \left[ \widehat{\mathcal{H}}\!\left(x, \dfrac{1}{2}(\xi - \mathrm{d} s_x) \right) - \widehat{\mathcal{H}}\!\left( x, -\dfrac{1}{2} \mathrm{d} s_x \right) \right] \, . \end{equation} Since we know the values of the Hamiltonian only at discrete points, first we need to interpolate it. Via the MATLAB function \texttt{scatteredInterpolant} \footnote{The function \texttt{scatteredInterpolant} uses a Delaunay triangulation of the scattered sample points to perform interpolation \cite{iA02}.
The resulting function can then be evaluated at query points.}, we obtain an interpolation for $\widehat{\Psi}^*$ in the region $\left\{ 0 \leq x \leq 1, - 2 \xi_\text{max} + \mathrm{d} s_x \leq \xi \leq 2 \xi_\text{max} + \mathrm{d} s_x \right\}$. Again disregarding statistical considerations, the result is shown in \figurename\ref{fig:DissipationPotential}. \begin{figure}[h] \centering \includegraphics[width=1.05\linewidth]{DissipationPotential.eps} \caption{Numerical estimate of the dissipation potential in the region $\left\{ 0 \leq x \leq 1, -4 + \mathrm{d} s_x \leq \xi \leq 4 + \mathrm{d} s_x \right\}$ compared with the theoretical result. The latter is represented by the smooth surface, and the former by the mesh, which was found by interpolation of the simulated Hamiltonian and evaluation with the MATLAB function \texttt{scatteredInterpolant}. The error is everywhere smaller than $1.4 \cdot 10^{-4}$.} \label{fig:DissipationPotential} \end{figure} In this very simple example of a chemical reaction, coarse-graining means estimating the reaction constant, namely the parameter $k$ in the linear generator \eqref{generator}. Consequently, the limit nonlinear generator or the Hamiltonian \eqref{Hamiltonian} does not really contain information different from that of the linear generator. Hence, in the present context, one could just estimate the transition rates by one of the numerous methods already available in the literature (e.g., \cite{eVE06,FVEW15,HBSBS14}). Moreover, for more general reaction networks, the typical problems of standard approaches remain: high local minima of the potential landscape are rarely explored, boundaries between macroscopic states are not easy to set, and the recrossing problem persists \cite{FVEW15}. The approach proposed here presents the additional complication of being ``stiff'' because of the strong nonlinearities in Eq.~\!\eqref{sample}.
However, we emphasize that our aim is to directly resolve the full structure \eqref{Psi-GGF}, and not just the rates: this is not a feature of already available methods. We also expect that our viewpoint will constitute an advantageous tool for less trivial systems, where the convergence of the nonlinear generator carries substantial new information, such as problems of homogenization \cite{MS13} or systems where the structure of the macroscopic dynamics is not known in advance \cite{BC12}. The last main issue is numerical efficiency. Many known numerical methods in statistical mechanics \cite{hT11} are based on \emph{importance sampling}: the probability distribution of a random variable is changed, often by ``exponential tilting'', in such a way that rare events become less rare and can more easily be observed. Physically, this corresponds to biasing the system by an external force. Following the ideas in \cite{HBSBS14}, we would like to develop biased methods for nonlinear generators. \section{Conclusions\label{sec:conclusions}} We have considered the statistical mechanics of a physical system with many degrees of freedom represented by a parameter $n$. In particular, we have studied the structural properties of the deterministic limit ($n \to \infty$) and the fluctuation properties around that limit. The right mathematical framework for this task is the theory of large deviations. Classically, a fluctuation-dissipation theorem of the second kind is formulated for a diffusion process: in the limit of large but finite $n$, it gives the definition of the friction properties of a system, which characterize its most probable evolution, in terms of its fluctuation properties around this evolution. In this paper, we have extended this definition to more general Markov processes with detailed balance. 
The friction properties are encoded in a nonlinear generalization of a gradient flow, called \emph{generalized gradient flow}, which has the following feature: the driving function, interpreted as the relevant thermodynamic potential, characterizes equilibrium and never decreases along the evolution. For example, for closed systems, the driving function is the thermodynamic entropy. The generalized gradient flow is entirely determined by the driving function and a dissipation potential. The fluctuation properties are characterized by two large-deviation principles: a static one, whose rate function contains the information on the equilibrium states and hence is the relevant thermodynamic potential; and a dynamic (pathwise) one, whose rate function describes the deviations of the stochastic trajectories from the most probable one, namely the macroscopic evolution, in the limit $n \to \infty$. The generalized FDT establishes the definition of friction in terms of the fluctuations: the dissipation potential, which characterizes friction, is uniquely defined by the dynamic rate function, which describes fluctuations. The most important consequence of the generalization is an extended theory of coarse-graining. Whereas the classical theory of fluctuations allowed us to handle only diffusion processes -- the typical setting of macroscopic ``hydrodynamic-like'' equations and Green-Kubo relations -- the class of systems now includes ``rare-event-like'' systems, which are much better described by jump processes than by diffusions. In this context, we have tested the new method on the example of a simple model of a monomolecular chemical reaction, the Kramers escape problem.
Although this elementary illustration has proven successful, it certainly requires refinement, both in its statistical soundness and in the efficiency of the algorithm, especially because we aim to apply the method to more complex systems, such as plasticity and the dynamics of glasses. The connection between large deviations and generalized gradient flows is restricted to purely dissipative systems. Indeed, its extension to dynamics with a non-dissipative component has not been established yet, although a few attempts \cite{DPZ13,KLMP18} have been made. We expect that the general class of FDTs should hold for evolution equations of the GENERIC type, where the non-dissipative component is modeled by a Poisson structure and the dissipative one by a generalized gradient structure. \begin{acknowledgments} We are grateful to Mohsen Talebi, Aleksandar Donev, Patrick Ilg, Robert Riggleman, and Elijah Thimsen, who considerably helped us to improve our understanding and the presentation of the ideas. \end{acknowledgments}
\section{Introduction} Transformer language models like BERT \cite{devlin2019bert}, GPT \cite{radford2018improving, radford2019language} and XLNet \cite{yang2019xlnet} have improved the state of the art in many NLP tasks since their introduction. The versatility of these pre-trained models suggests that they may acquire fairly robust linguistic knowledge and capacity for natural language ``understanding''. However, an emerging body of analysis demonstrates a level of superficiality in these models' handling of language \cite{niven2019probing, kim2020cogs, mccoy2019right, ettinger2020bert,yu2020assessing}. In particular, although \emph{composition}---a model's capacity to combine meaning units into more complex units reflecting phrase meanings---is an indispensable component of language understanding, when testing for composition in pre-trained transformer representations, \citet{yu2020assessing} report that these representations reflect word content of phrases, but don't show signs of more sophisticated humanlike composition beyond word content. In the present paper we perform a direct follow-up of that study, asking whether models will show better evidence of composition after fine-tuning on tasks that are good candidates for requiring composition: 1) the Quora Question Pairs dataset in Paraphrase Adversaries from Word Scrambling (PAWS-QQP) \cite{zhang2019paws}, an adversarial paraphrase dataset forcing models to classify paraphrases with high lexical overlap, and 2) the Stanford Sentiment Treebank \cite{socher2013recursive}, a sentiment dataset with fine-grained phrase labels to promote composition. We base our analysis on the tests proposed by \citet{yu2020assessing}, which rely on alignment with human judgments of phrase pair similarities, and which leverage control of lexical overlap to target compositionality. We fine-tune and evaluate the same models and representation types tested in that paper, for optimal comparison.
We find that across the board, fine-tuning on PAWS-QQP does not improve compositionality---if anything, performance on composition metrics tends to degrade. Composition performance also remains low after training on SST, but we do see some localized improvements for certain models. Analyzing the PAWS-QQP dataset, we find reliable superficial cues to paraphrase labels (distance of word swap), explaining in part why fine-tuning on that task might fail to improve composition---and reinforcing the need for caution in interpreting difficulty of NLP tasks. We also discuss the contribution of variation in size of labeled phrases in SST, with respect to the benefits that result from fine-tuning on that task. All experimental code and data are made available for further testing.\footnote{Datasets and code available at \url{https://github.com/yulang/fine-tuning-and-composition-in-transformers}} \begin{table*}[ht] \centering \begin{tabularx}{\linewidth}{XXc} \noalign{\hrule height 1pt} \textbf{Sentence 1} & \textbf{Sentence 2} & \textbf{Label} \\ \noalign{\hrule height 1pt} There are also specific discussions , public profile debates and project discussions . & There are also public discussions , profile specific discussions , and project discussions . & 0 \\ \cline{1-3} She worked and lived in Stuttgart , Berlin ( Germany ) and in Vienna ( Austria ) . & She worked and lived in Germany ( Stuttgart , Berlin ) and in Vienna ( Austria ) . & 1 \\ \noalign{\hrule height 1pt} \end{tabularx} \caption{Example pairs from PAWS-QQP. Both positive and negative pairs have high bag-of-words overlap.} \label{tab:paws_example} \end{table*} \section{Related work} Extensive work has studied the nature of learned representations in NLP models~\cite{adi2016fine, conneau2018you, ettinger2016probing, durrani2020analyzing}.
Our work builds in particular on analysis of contextualized representations \cite{bacon2019does, tenney2019you,peters2018dissecting,hewitt2019structural,klafka2020spying, toshniwal2020cross}. Other work that has focused on transformers, as we do, has often focused on analyzing the attention mechanism \cite{vig2019analyzing, clark2019does}, learned parameters \cite{roberts2020much, radford2019language, raffel2020exploring} and redundancy \cite{dalvi2020analyzing, voita2019analyzing, michel2019sixteen}. The evaluation that we use here follows the paradigm of classification-based probing~\cite{kim2019probing, wang2018glue, paws2019naacl, pawsx2019emnlp} and correlation with similarity judgments \cite{finkelstein2001placing, gerz2016simverb, hill2015simlex, conneau2018senteval}. The current paper also builds on work subjecting trained NLP models to adversarial inputs, to highlight model weaknesses. One body of work approaches the problem by applying heuristic rules of perturbation to input sequences \cite{wallace2019universal, jia2017adversarial, zhang2019paws}, while another uses neural models to construct adversarial examples~\cite{li2020bert, li2018textbugger} or manipulate inputs in embedding space~\cite{jin2020bert}. Our work also contributes to efforts to understand impacts and outcomes of the fine-tuning process~\cite{miaschi2020linguistic, mosbach2020interplay, wang2020meta, perezmayos2021evolution}. Phrase and sentence composition has drawn frequent attention in analysis of neural models, often focusing on analysis of internal representations and downstream task behavior~\cite{ettinger2018assessing, conneau2019unsupervised, nandakumar2019well, mccoy2019right, yu2020assessing, bhathena2020evaluating, mu2020compositional, andreas2019measuring}. Some work investigates compositionality via constructing linguistic~\cite{keysers2019measuring} and non-linguistic~\cite{livska2018memorize, hupkes2018learning, baan2019realization} synthetic datasets. 
Most related to our work here is the finding of~\citet{yu2020assessing}. They test for composition in two-word phrase representations from transformers, via similarity correlations and paraphrase detection. They find that baseline performance on these tasks is high, but once they control for amount of word overlap, performance drops dramatically, suggesting that observed correspondences rely on word content rather than phrase composition. We build directly on this work, testing whether these patterns will still hold after fine-tuning on tasks intended to encourage composition. \section{Fine-tuning Pre-trained Transformers}\label{sec:fine-tune} In response to the weaknesses observed by~\citet{yu2020assessing}, we select two different datasets with promising characteristics for addressing these weaknesses. We fine-tune on these tasks, then perform layer-wise testing on contextualized representations from the fine-tuned models, comparing against results on the pre-trained models. Here we describe the two fine-tuning datasets. \subsection{PAWS: fine-tuning on high word overlap} The core of the~\citet{yu2020assessing} finding is that model performance on the selected composition tests degrades significantly when cues of lexical overlap are controlled. It stands to reason, then, that a model trained to discern meaning differences under conditions of high lexical overlap may improve on these overlap-controlled composition tests. This drives our selection of the Paraphrase Adversaries from Word Scrambling (PAWS) dataset~\cite{paws2019naacl}, which consists of sentence pairs with high lexical overlap. The task is formulated as binary classification of whether two sentences are paraphrases or not. State-of-the-art models achieve only $<40$\% accuracy before training on the dataset~\cite{zhang2019paws}. Table~\ref{tab:paws_example} shows examples from this dataset. 
Due to the high lexical overlap, we might expect that in order to achieve non-trivial accuracy on this task, models must attend to more sophisticated meaning information than simple word content. \subsection{SST: fine-tuning on hierarchical labels} Another dataset that has been associated with training and evaluation of phrasal composition is the Stanford Sentiment Treebank, which contains syntactic phrases of various lengths, together with fine-grained human-annotated sentiment labels for these phrases. Because this dataset contains annotations of composed phrases of various sizes, we can reasonably expect that training on this dataset may foster an increased sensitivity to compositional phrase meaning. We formulate the fine-tuning task as a 5-class classification task following the setup in~\citet{socher2013recursive}. The models are trained to predict sentiment labels given phrases as input. \section{Representation evaluation} \label{rep_com_eval} For optimal comparison of the effects of fine-tuning on the above tasks, we replicate the tests, representation types, and models reported on by \citeauthor{yu2020assessing}. Here we briefly describe these methods. For more details on the evaluation dataset and task setup, please refer to~\citet{yu2020assessing}. \subsection{Evaluation tasks} \citeauthor{yu2020assessing} propose two analyses for measuring composition: similarity correlations and paraphrase classification. They focus on two-word phrases, using the BiRD bigram relatedness dataset \cite{asaadi2019big} for similarity correlations, and the PPDB 2.0 paraphrase database \cite{ganitkevitch2013ppdb, pavlick2015ppdb} for paraphrase classification. BiRD contains 3,345 bigram pairs, with source phrases paired with numerous target phrases, and human-annotated similarity scores ranging from 0 to 1. 
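Before turning to the details of the evaluation, a layer-wise similarity correlation of the kind used in these analyses can be sketched as follows. The array shapes and the choice of Pearson correlation are assumptions for illustration; the actual evaluation may differ in detail.

```python
import numpy as np
from scipy.stats import pearsonr

def layerwise_correlation(reps_a, reps_b, human_scores):
    """Correlate cosine similarities of phrase representations with human
    similarity judgments, per layer.

    reps_a, reps_b: arrays of shape (n_layers, n_pairs, dim) holding the
    representations of the two phrases in each pair (hypothetical shapes).
    """
    corrs = []
    for a, b in zip(reps_a, reps_b):
        cos = np.sum(a * b, axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        r, _ = pearsonr(cos, human_scores)
        corrs.append(float(r))
    return corrs
```

A high correlation at some layer means that, at that depth, representational similarity tracks human judgments of phrase similarity; the controlled settings described below test whether this tracking survives once word overlap is removed as a cue.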
For similarity correlation, \citeauthor{yu2020assessing} take layer-wise correlations between these human phrase similarity scores and the cosine similarities of model representations for the same phrases. For paraphrase classification, \citeauthor{yu2020assessing} train a multi-layer perceptron classifier to label whether two phrase representations are paraphrases, drawing their positive phrase pairs from PPDB 2.0---which contains paraphrases with scores generated by a regression model---and randomly sampling negative pairs from the rest of the dataset. We replicate all of these procedures. For both task types,~\citeauthor{yu2020assessing} compare between ``uncontrolled'' and ``controlled'' tests, with the latter filtering the data to control word overlap within phrase pairs, such that amount of word overlap between two phrases can no longer be used as a cue for how similar the meanings are. It is on these controlled settings that \citeauthor{yu2020assessing} observe the significant drop in performance, suggesting that model representations lack the compositional knowledge to discern phrase meaning beyond word content. Below we will report results for both settings, with particular focus on controlled settings. \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figs/revised1_uncontrolled_cor.eps} \caption{Similarity correlation on uncontrolled BiRD dataset, with phrase-only input. Columns correspond to models, and rows correspond to representation types (``HT'' = Head-token, ``AP'' = Avg-Phrase and ``AA'' = Avg-All). For each model and representation type, the corresponding subplot shows correlations for pre-trained, PAWS-tuned and SST-tuned settings, respectively. For each subplot, X-axis corresponds to layer index, and Y-axis corresponds to correlation value. 
Layer 0 corresponds to input embeddings passed to the model.} \label{fig:bird-tuned} \end{figure*} \subsection{Representation types} Following~\citeauthor{yu2020assessing}, for each input phrase we test as a potential representation 1) CLS token, 2) average of tokens within the phrase (Avg-Phrase), 3) average of all input tokens (Avg-All), 4) embedding of the second word of the phrase, intended to approximate the semantic head (Head-Word), and 5) SEP token. We test each of these representations at every layer of each model.\footnote{Like~\citeauthor{yu2020assessing}, we also test both phrase-only input (encoder input consists only of two-word phrase plus special CLS/SEP tokens), as well as inputs in which phrases are embedded in sentence contexts.} \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figs/revised1_controlled_cor.eps} \caption{Similarity correlation on controlled BiRD dataset (AB-BA setting), with phrase-only input.} \label{fig:bird-abba-tuned} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figs/revised1_controlled_acc_new2.eps} \caption{Paraphrase classification accuracy on controlled PPDB dataset (50\% word overlap setting) with phrase-only input. Y-axis range is smaller relative to Figure~\ref{fig:ppdb-tuned}, to make changes from pre-training more visible. } \label{fig:ppdb-exact-tuned} \end{figure*} \section{Experimental setup} We fine-tune and analyze the same models that~\citeauthor{yu2020assessing} test in pre-trained form: BERT \cite{devlin2019bert}, RoBERTa \cite{liu2019roberta}, DistilBERT \cite{sanh2019distilbert}, XLNet \cite{yang2019xlnet} and XLM-RoBERTa \cite{conneau2019unsupervised}. In each case, the pre-trained ``base'' version is used as the starting point for fine-tuning. We use the implementation of \citet{Wolf2019HuggingFacesTS}\footnote{\url{https://github.com/huggingface/transformers}} based on PyTorch \cite{paszke2019pytorch}. 
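The representation types listed above can be read off directly from a layer's hidden states. The sketch below assumes a simplified phrase-only token layout ([CLS] w1 w2 [SEP]) in which each word is a single token; real tokenizers may split words into multiple subtokens, which this illustration ignores.

```python
import numpy as np

def phrase_representations(hidden, phrase_span):
    """Extract the five representation types from one layer's hidden states.

    hidden: (seq_len, dim) array for input [CLS] w1 w2 [SEP] (hypothetical
    layout; one token per word assumed).
    phrase_span: (start, end) token indices of the phrase, end exclusive.
    """
    start, end = phrase_span
    return {
        "CLS": hidden[0],
        "Avg-Phrase": hidden[start:end].mean(axis=0),
        "Avg-All": hidden.mean(axis=0),
        "Head-Word": hidden[end - 1],  # second word approximates the semantic head
        "SEP": hidden[-1],
    }
```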
We fine-tune these models on the two datasets described in Section~\ref{sec:fine-tune}. The Quora Question Pairs dataset in Paraphrase Adversaries from Word Scrambling (PAWS-QQP)\footnote{\url{https://github.com/google-research-datasets/paws}} consists of a training set with 11,988 sentence pairs, and a dev/test set with 677 sentence pairs. Tuning on PAWS-QQP is formulated as binary classification. Sentences are passed as input and models are trained to predict whether the input sentences are paraphrases or not. Models are trained on the training set, and validated on the dev/test set for convergence. The Stanford Sentiment Treebank (SST)\footnote{\url{https://nlp.stanford.edu/sentiment/treebank.html}} \cite{socher2013recursive} contains 215,154 phrases. 15\% of the data is reserved for validation. The fine-tuning task is formulated as 5-class classification on sentiment labels, where models are given phrases as input, and asked to predict sentiment. In both tasks, we use the Adam optimizer \cite{kingma2014adam} with default weight decay. We train the models until convergence on the validation set. The evaluation tasks consist of correlation analysis and paraphrase classification. For correlation in the uncontrolled setting, we use the complete BiRD dataset, containing 3,345 phrase pairs.\footnote{\url{http://saifmohammad.com/WebPages/BiRD.html}} For the controlled test, we filter the complete dataset following the criteria in~\citet{yu2020assessing}, resulting in 410 ``AB-BA'' mirror-image pairs with 100\% word overlap (e.g., \emph{law school} / \emph{school law}). 
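The AB-BA filtering criterion can be expressed as a simple predicate; the following is our reconstruction of the criterion for two-word phrases, not the released filtering code.

```python
def is_abba_pair(phrase1, phrase2):
    """True if the phrases are two-word mirror images with 100% word
    overlap, e.g. "law school" / "school law" (the AB-BA setting)."""
    w1, w2 = phrase1.split(), phrase2.split()
    return len(w1) == 2 and len(w2) == 2 and w1 == w2[::-1] and w1[0] != w1[1]

candidates = [("law school", "school law"), ("law school", "school bus")]
abba_pairs = [p for p in candidates if is_abba_pair(*p)]
```

Because both phrases in an AB-BA pair contain exactly the same words, any difference in a model's similarity estimate must come from sensitivity to word order and composition rather than word content.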
For the classification tasks, we use the preprocessed data released by \citet{yu2020assessing}.\footnote{\url{https://github.com/yulang/phrasal-composition-in-transformers}} We collect 12,036 source-target phrase pairs from the preprocessed dataset for our uncontrolled classification setting, and for the controlled classification setting, we collect 11,772 phrase pairs with exactly 50\% word overlap in each pair, following the procedure from the original paper. \section{Results after fine-tuning}\label{analysis} \subsection{Full datasets} Figure \ref{fig:bird-tuned} presents the original results from \citet{yu2020assessing} on pre-trained models, alongside our new results after fine-tuning, on the full BiRD dataset. Since this is prior to the control of word overlap, these correlations can be expected to reflect effects of lexical content encoding, without yet having isolated effects of composition. We find that after fine-tuning on SST, most models and representation types show small improvements in peak correlations across layers, while fine-tuning on PAWS also yields improvements in peak correlations---albeit even smaller---in models other than BERT and XLM-RoBERTa. Overall, within a given representation type, improvements are generally stronger after fine-tuning on SST than on PAWS. Between representation types, Avg-Phrase and Avg-All remain consistently at the highest correlations after fine-tuning. Additionally, we see that the decline in correlation at later layers in pre-trained BERT, RoBERTa and XLM-RoBERTa is mitigated after fine-tuning. Model-wise, we see the most significant improvements in the RoBERTa model, for which the correlations become more consistent across layers for most representation types. As we discuss below, we take this as indication that the fine-tuning promotes more robust retention of word content information across layers, if not more robust phrasal composition. 
For the sake of space, we present the plots of the uncontrolled paraphrase classification setting in Figure~\ref{fig:ppdb-tuned} of the Appendix. The overall improvements are even smaller than those seen in the correlations, but we do see comparable patterns in these paraphrase classification results, in particular with SST showing slightly stronger benefits than PAWS. \subsection{Controlled datasets} Above we see small benefits of fine-tuning for performance on the full, uncontrolled datasets. However, the critical question for our purposes is whether correlations also show improvements in word-overlap controlled settings, which better isolate effects of composition. Figure \ref{fig:bird-abba-tuned} shows correlations for all models on the controlled AB-BA (full word overlap) correlation test. Figure \ref{fig:ppdb-exact-tuned} shows the results for the controlled paraphrase classification setting, where both paraphrase and non-paraphrase pairs have exactly 50\% word overlap. The first comparison to note is between original and controlled settings, which allows us to establish the contributions of overlap information as opposed to composition. Comparing between Figure~\ref{fig:bird-tuned} and Figure~\ref{fig:bird-abba-tuned}, it is clear that fine-tuned models still show substantial reduction in correlation when overlap cues are removed. The same goes for Figure \ref{fig:ppdb-exact-tuned} (by comparison to Figure~\ref{fig:ppdb-tuned} of the Appendix)---we see that on the controlled dataset, accuracies hover just above chance-level performance both before and after fine-tuning, compared to over 90\% accuracy on the uncontrolled dataset. 
This gap in performance between the original and controlled datasets mirrors the findings of \citet{yu2020assessing}, and suggests that even after fine-tuning, the majority of correspondence between model phrase representations and human meaning similarity judgments can be attributed to capturing of word content information rather than phrasal composition. The second key comparison is between pre-trained and fine-tuned models within the overlap-controlled settings. While the prior comparison tells us that similarity correspondence is still dominated by word content effects, this second comparison can tell us whether fine-tuning shows at least some boost in meaning composition relative to pre-training. Comparing performance of pre-trained and fine-tuned models in Figure \ref{fig:bird-abba-tuned}, we see that fine-tuning on PAWS-QQP actually slightly degrades correlations at many layers for a majority of models and representation types---with improvements largely restricted to XLM-RoBERTa and XLNet (perhaps notably, mostly in cases where pre-trained correlations are negative). This is despite the fact that models achieve strong validation performance on PAWS-QQP (as shown in Table \ref{tab:paws_acc}), suggesting that learning this task does little to improve composition. We will explore the reasons for this below. In Figure~\ref{fig:ppdb-exact-tuned}, we see that fine-tuning also does little to improve paraphrase classification accuracies in the controlled setting---though each model shows slight improvement in peak accuracy across layers and representation types (e.g., RoBERTa shows $\sim$3\% increase in peak accuracy with SST tuning, and 2\% with PAWS tuning). Even so, the best accuracies across models continue to be only marginally above chance. This, too, fails to provide evidence of any substantial composition improvement resulting from the fine-tuning process. 
The story changes slightly when we turn to impacts of SST fine-tuning on correlations in Figure \ref{fig:bird-abba-tuned}. While all correlations remain low after SST fine-tuning, we do see that correlations for BERT, XLM-RoBERTa and XLNet show some non-trivial benefits even in the controlled setting. In particular, SST tuning consistently improves correlation among all representation types in BERT (except for minor degradation in later layers for Head-token), boosting the highest correlation from $\sim$0.2 to $\sim$0.39. Between representation types, the greatest change is in the CLS token, with the most dramatic point of improvement being an abrupt correlation peak for CLS at BERT's fourth layer. We discuss this localized benefit further below. A final important observation is that fine-tuning on either dataset produces clear degradation in correlations for all representation types in RoBERTa under the controlled setting, by contrast to the general improvements seen for that and other models in the uncontrolled setting. This suggests that at least for that model, fine-tuning encourages retention or enhancement of lexical information, but may degrade compositional phrase information.\footnote{Following \citet{yu2020assessing}, in addition to phrase-only inputs we also try embedding target phrases in sentence contexts. Consistent with the findings of \citet{yu2020assessing}, we see that presence of context words does boost overall correlation and accuracy, but does not alter the general trends. Moreover, models still show relatively weak performance on controlled tasks even with context available (see Figure~\ref{fig:bird-in-sent-tuned} and Figure~\ref{fig:bird-abba-in-sent-tuned} in the Appendix for details).} \begin{table}[t!]
\centering \begin{tabular}{c|c} \noalign{\hrule height 1pt} \textbf{Model} & \textbf{Accuracy(\%)} \\ \noalign{\hrule height 1pt} BERT & 80.13 \\ \hline RoBERTa & 90.81 \\ \hline DistilBERT & 81.98 \\ \hline XLM-RoBERTa & 91.18 \\ \hline XLNet & 88.24 \\ \hline Linear CLF & 71.34 \\ \hline \end{tabular} \caption{Accuracy of fine-tuned models on PAWS-QQP dev/test set. Linear CLF is a baseline classifier with relative swapping distance as the only input feature.} \label{tab:paws_acc} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figs/bird-various-length-phrase-only-abba.eps} \caption{Layer-wise correlation of BERT fine-tuned on phrases of different lengths in SST.} \label{fig:abba_various_len} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{figs/histogram.eps} \caption{Distribution of positive and negative predictions made by tuned models. Last plot shows the statistics in the PAWS-QQP dev/test set. X-axis corresponds to relative swapping distance; Y-axis shows number of samples in the specific relative swapping distance bin.} \label{fig:paws_histo} \end{figure} \section{Analyzing impact of fine-tuning} The presented results suggest that despite compelling reasons to think that fine-tuning on the selected tasks may improve composition of phrase meaning, these models mostly do not exhibit noteworthy benefits from fine-tuning. In particular, fine-tuning on the PAWS-QQP dataset often degrades performance on the controlled datasets taken to be most indicative of compositionality. As for SST, the benefits are minimal, but in localized cases like BERT's CLS token, we do see signs of improved compositionality. In this section, we conduct further analysis on the impacts of fine-tuning, and discuss why tuned models behave as they do. 
\subsection{Failure of PAWS-QQP} Table \ref{tab:paws_acc} shows accuracy of fine-tuned models on the dev/test set of PAWS-QQP.\footnote{The performance of BERT in the table differs from previous work mainly because the models in \citet{zhang2019paws} are tuned on the concatenation of the QQP and PAWS-QQP datasets rather than on PAWS-QQP only.} It is clear that the models are learning to perform well on this dataset, but our results above indicate that this does not translate to improved composition sensitivity. \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figs/tuned-model-comparison.eps} \caption{Average layer-wise embedding similarity between fine-tuned and pre-trained BERT. The upper half shows the comparison between PAWS-QQP-tuned and pre-trained BERT, and the lower half between SST-tuned and pre-trained BERT. Embeddings are evaluated using the full BiRD dataset as input.} \label{fig:rep_changes} \end{figure*} We explore the possibility that this discrepancy may be caused by trivial cues arising during the construction of the dataset, enabling models to infer paraphrase labels without needing to improve their understanding of the meaning of the sentence pair \cite[cf.][]{poliak2018hypothesis, gururangan2018annotation}. Sentence pairs in PAWS are generated via word swapping and back translation to ensure high bag-of-words overlap~\cite{zhang2019paws}. We hypothesize that models may be able to achieve high performance in this task based on distance of the word swap alone, without requiring any sophisticated representation of sentence meaning. To test this, given a sentence pair $(s_1, s_2)$ with word counts $l_1, l_2$, respectively, we define ``relative swapping distance'' as \begin{equation*} dist_{relative} = \frac{dist_{swap}}{\max(l_1, l_2)} \end{equation*} where $dist_{swap}$ is defined as the index difference of the first swapping word in $s_1$ and $s_2$.
For the example shown in the first row of Table \ref{tab:paws_example}, the first swapping word is ``specific'', with $dist_{swap} = 4$. Note that with this measure we focus on information from one word swap only, while some pairs in PAWS-QQP have multiple swapped words---so in reality, swapping distance information may be even stronger than our results below indicate. In the last plot of Figure \ref{fig:paws_histo}, we show an association between relative swapping distance and paraphrase labels in the PAWS dev/test set: sentence pairs with small swapping distance tend to be positive samples, while large swapping distance associates with negative labels. The other plots in Figure \ref{fig:paws_histo} show distribution of positive and negative predictions generated by each fine-tuned model with respect to relative swapping distance. We see a similar pattern, with models tending to generate negative labels when swapping distance is larger. To verify the viability of this cue, we train a simple linear classifier on PAWS-QQP, with relative swapping distance as the only input feature. The results are reported as ``Linear CLF'' in Table \ref{tab:paws_acc}. Even without access to the content of the sentences, we see that this simple model is able to achieve non-trivial and comparably good classification accuracy on the dev/test set. The strong performance of the linear classifier and the distribution of predictions are consistent with the hypothesis that when we tune on PAWS-QQP, rather than forcing models to learn nuanced meaning in the absence of word overlap cues, we may instead encourage models to focus on lower-level information having little to do with the sentence meaning, further degrading their performance on the composition tasks. 
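The feature and the single-feature baseline can be sketched as follows. The distance computation is one possible reading of the definition above (tokenization and tie-breaking details may differ), and the training data here are synthetic, chosen only to mirror the pattern that small distances associate with the paraphrase label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def relative_swapping_distance(s1, s2):
    """Index difference of the first swapped word between s1 and s2,
    normalized by the longer sentence length. Assumes the sentences differ
    in at least one aligned position and that the swapped word reappears."""
    t1, t2 = s1.split(), s2.split()
    first = next(i for i, (a, b) in enumerate(zip(t1, t2)) if a != b)
    dist_swap = abs(t2.index(t1[first]) - first)
    return dist_swap / max(len(t1), len(t2))

# Synthetic illustration of the single-feature baseline: small distances
# labeled paraphrase (1), large ones non-paraphrase (0).
X = np.array([[0.05], [0.10], [0.12], [0.30], [0.40], [0.50]])
y = np.array([1, 1, 1, 0, 0, 0])
clf = LogisticRegression().fit(X, y)
```

That a classifier with access to nothing but this scalar can separate such data is exactly the concern raised above: a model can score well on PAWS-QQP while ignoring sentence meaning.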
\subsection{Localized impacts of SST} Fine-tuning on sentiment shows a bit of a different pattern---while it mostly shows only minor changes from pre-training, and the correlations and classification accuracies remain at decidedly low levels on the controlled settings, we do see in certain models some distinctive changes in levels of similarity correlation as a result of tuning on SST. Notably, since these improvement patterns are seen in the similarity correlations but not in the classification accuracies, this suggests that these two tasks are picking up on slightly different aspects of phrasal compositionality. To investigate these effects further, we focus our attention on BERT, which shows the most distinctive improvement in correlations. The obvious candidate for the source of the localized SST benefit is the dataset's inclusion of labeled syntactic phrases of various sizes. The benefits seen from SST tuning suggest that this may indeed encourage models to gain some finer-grained sensitivity to compositional impacts of phrase structure (at least those relevant for sentiment). To examine this further, we filter the SST dataset to subsets with phrases of the same length, from 2 to 6 words, and tune pre-trained BERT on each subset. Figure~\ref{fig:abba_various_len} shows the correlations for BERT, fine-tuned on each phrase length, on the overlap-controlled BiRD dataset. We see that tuning on the full dataset (mixed phrase lengths) gives the strongest fourth-layer boost in CLS correlation performance---but among the size subsets, a semblance of the fourth-layer CLS peak is seen across phrase lengths, with length-2 training yielding the strongest peak, and length-6 training the smallest. 
This suggests an amount of size-based specialization---sentiment training on phrases of (or closer to) two words has more positive impact on similarity correlations for our two-word phrases.\footnote{Although subset size can potentially contribute to correlation performance, we find that subset size does not correlate with the performance patterns we observe here. Phrase count of each subset: length 2 - 11,499; length 3 - 11,779; length 4 - 15,050; length 5 - 11,816; length 6 - 9,935.} However, we also see that phrases of other sizes contribute non-trivially to the ultimate correlation improvement observed from training on the full dataset. This is consistent with the notion that training on diverse phrase sizes encourages fine-grained attention to compositionality, while training on phrases of similar size may have slightly more direct benefit. \paragraph{Representation changes} For further comparison of fine-tuning effects between tasks, we analyze changes in BERT representations at each layer before and after the fine-tuning process. Figure~\ref{fig:rep_changes} shows the average layer-wise representation similarity between fine-tuned and pre-trained BERT given identical input. We see substantial differences between tasks in terms of representation changes: while SST fine-tuning produces significant changes across representations and layers, PAWS fine-tuning leaves representations largely unchanged (further supporting the notion that this task can be solved fairly trivially). We also see that after SST tuning, BERT's CLS token shows robust similarity to pre-trained representations until the fifth layer, followed by a rapid drop in similarity. This suggests that the fourth-layer correlation peak may be enabled in part by retention of key information from pre-training, combined with heightened phrase sensitivity from fine-tuning. We leave in-depth exploration of this dynamic for future work. 
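The layer-wise comparison behind Figure~\ref{fig:rep_changes} reduces to averaging token-level cosine similarities per layer. The sketch below uses random stand-in arrays in place of actual BERT hidden states (the shapes and the depth-dependent perturbation are illustrative assumptions only):

```python
import numpy as np

def layerwise_similarity(hs_pre, hs_tuned):
    # Per-layer average cosine similarity between pre-trained and fine-tuned
    # hidden states for the same input; each element is a (tokens, dim) array.
    sims = []
    for a, b in zip(hs_pre, hs_tuned):
        num = (a * b).sum(axis=-1)
        den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
        sims.append(float((num / den).mean()))  # mean over tokens
    return sims

# Toy stand-in: 13 layers (embeddings + 12), 8 tokens, 768 dimensions.
rng = np.random.default_rng(1)
pre = [rng.normal(size=(8, 768)) for _ in range(13)]
# A mock "fine-tuning" that perturbs deeper layers more strongly:
tuned = [h + 0.1 * i * rng.normal(size=h.shape) for i, h in enumerate(pre)]
sims = layerwise_similarity(pre, tuned)
```

Under this perturbation scheme the similarity decays with depth, which is the kind of profile the figure summarizes for the actual models.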
\section{Discussion} The results of our experiments indicate that despite the promise of PAWS-QQP and SST tasks for improving models' phrasal composition, fine-tuning on these tasks falls far short of resolving the composition weaknesses observed by~\citet{yu2020assessing}. The majority of correspondence with human judgments can still be attributed to word overlap effects---disappearing once overlap is controlled---and improvements on the controlled settings are absent, very small, or highly localized to particular models, layers and representations. This outcome aligns with the increasing body of evidence that NLP datasets often do not require of models the level of linguistic sophistication that we might hope for---and in particular, our identification of a strong spurious cue in the PAWS-QQP dataset adds to the growing number of findings emphasizing that NLP datasets often have artifacts that can inflate performance~\cite{poliak2018hypothesis,gururangan2018annotation,kaushik2018much}. We do see a ray of promise in the small, localized benefits for certain models from tuning on SST. These improvements do not extend to all models, and are fairly small in the models that do see benefits---but as we discuss above, it appears that training on fine-grained syntactic phrase distinctions may indeed confer some enhancement of compositional meaning in phrase representations---at least when model conditions are amenable. Since sentiment information constitutes only a very limited aspect of phrase meaning, we anticipate that training on fine-grained phrase labels that reflect richer and more diverse meaning information could be a promising direction for promoting composition more robustly in these models. \section{Conclusions and future directions} We have tested effects of fine-tuning on phrase meaning composition in transformer representations. 
Although we select tasks with promise to address composition weaknesses and reliance on word overlap, we find that representations in the fine-tuned models show little improvement on controlled composition tests, or show only very localized improvements. Follow-up analyses suggest that the PAWS-QQP dataset contains spurious cues that undermine learning of sophisticated meaning properties when training on that task. However, results from SST tuning suggest that training on labeled phrases of various sizes could prove effective for learning composition. Future work should investigate how model properties interact with fine-tuning to produce improvements in particular models and layers---and should move toward phrase-level training with meaning-rich annotations, which we predict will be a promising direction for improving models' phrase meaning composition. \section*{Acknowledgments} We would like to thank three anonymous reviewers for valuable feedback for improving this paper. We also thank members of the University of Chicago CompLing Lab for helpful comments and suggestions on this work. This material is based upon work supported by the National Science Foundation under Award No.~1941160. \bibliographystyle{acl_natbib}
\section{#1}} \numberwithin{equation}{section} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand \nc {\newcommand} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \nc \thref{Theorem \ref} \nc \leref{Lemma \ref} \nc \prref{Proposition \ref} \nc \coref{Corollary \ref} \nc \deref{Definition \ref} \nc \exref{Example \ref} \nc \reref{Remark \ref} \newcommand{\leftexp}[2]{{\vphantom{#2}}^{#1}{#2}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{Y}}{\mathcal{Y}} \renewcommand{\P}{\mathcal{P}} \newcommand{\mathcal{Q}}{\mathcal{Q}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathcal{F}}{\mathcal{F}} \renewcommand{\H}{\mathcal{H}} \newcommand{\mathcal{J}}{\mathcal{J}} \renewcommand{{\mathcal L}}{\mathcal{L}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathbb{N}}{\mathbb{N}} \renewcommand{\O}{\mathcal{O}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathcal{U}}{\mathcal{U}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbf{f}}{\mathbf{f}} 
\newcommand{\mathbf{g}}{\mathbf{g}} \renewcommand{\c}{\mathbf{c}} \renewcommand{\k}{\mathfrak{k}} \newcommand{\mathbf{p}}{\mathbf{p}} \newcommand{\mathbf{q}}{\mathbf{q}} \renewcommand{\t}{\mathbf{t}} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{y}}{\mathbf{y}} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\overline{\mathcal{M}}}{\overline{\mathcal{M}}} \newcommand{\overline{\mathcal{M}}_{g,n}}{\overline{\mathcal{M}}_{g,n}} \newcommand{\Xgnd}[1][]{X_{g,n #1,d}} \def\mathop{\rm Aut}\nolimits{\mathop{\rm Aut}\nolimits} \def\mathop{\rm End}\nolimits{\mathop{\rm End}\nolimits} \def\mathop{\rm Def}\nolimits{\mathop{\rm Def}\nolimits} \def\mathfrak{Fock}{\mathfrak{Fock}} \def\mathop{\rm Hom}\nolimits{\mathop{\rm Hom}\nolimits} \def\mathop{\rm Koszul}\nolimits{\mathop{\rm Koszul}\nolimits} \def\mathop{\rm Map}\nolimits{\mathop{\rm Map}\nolimits} \def\mathop{\rm Td}\nolimits{\mathop{\rm Td}\nolimits} \def\mathop{\rm Re}\nolimits{\mathop{\rm Re}\nolimits} \def\mathop{\rm Res}\nolimits{\mathop{\rm Res}\nolimits} \def\mathop{\rm deg}\nolimits{\mathop{\rm deg}\nolimits} \def\mathop{\rm dim}\nolimits{\mathop{\rm dim}\nolimits} \def\mathop{\rm ad}\nolimits{\mathop{\rm ad}\nolimits} \def\mathop{\rm ch}\nolimits{\mathop{\rm ch}\nolimits} \def\mathop{\rm ct}\nolimits{\mathop{\rm ct}\nolimits} \def\partial{\partial} \def\mathop{\rm ev}\nolimits{\mathop{\rm ev}\nolimits} \def\mathop{\rm gr}\nolimits{\mathop{\rm gr}\nolimits} \def\mathop{\rm diag} \nolimits{\mathop{\rm diag} \nolimits} \def\mathop{\rm id}\nolimits{\mathop{\rm id}\nolimits} \def\cong{\cong} \def\otimes{\otimes} \def\mathop{\rm tr}\nolimits{\mathop{\rm tr}\nolimits} \def\mathop{\rm vdim}\nolimits{\mathop{\rm vdim}\nolimits} \def\mbox{\it \large e}{\mbox{\it \large e}} \defK\"ahler{K\"ahler} \defPoincar\'e{Poincar\'e} \defWeierstra\ss{Weierstra\ss} \def\({\left(} \def\){\right)} \def\[{\left[} \def\]{\right]} \def\<{\left\langle} \def\>{\right\rangle} \def{\bf \Gamma}{{\bf \Gamma}} 
\def{\mathfrak{g}}{{\mathfrak{g}}} \def{\mathfrak{sl}}{{\mathfrak{sl}}} \def{\mathfrak{mp}}{{\mathfrak{mp}}} \def{\mathfrak{o}}{{\mathfrak{o}}} \def{\mathfrak{u}}{{\mathfrak{u}}} \def{\mathfrak{gl}}{{\mathfrak{gl}}} \def{\mathfrak{su}}{{\mathfrak{su}}} \def{\mathfrak{so}}{{\mathfrak{so}}} \def{\mathfrak{sp}}{{\mathfrak{sp}}} \def{\rm SL}{{\rm SL}} \def{\rm Mp}{{\rm Mp}} \def{\rm O}{{\rm O}} \def{\rm U}{{\rm U}} \def{\rm GL}{{\rm GL}} \def{\rm SU}{{\rm SU}} \def{\rm SO}{{\rm SO}} \def{\rm Sp}{{\rm Sp}} \def\lambda{\lambda} \def\epsilon{\epsilon} \def\alpha{\alpha} \def\delta{\delta} \def\beta{\beta} \begin{document} \title{$\mathcal{W}_{N+1}$-constraints for Singularities of Type $A_N$} \author{Bojko Bakalov} \address{Department of Mathematics\\ North Carolina State University\\ Raleigh, NC 27695, USA\\ e-mail: bojko\_bakalov@ncsu.edu} \author{Todor Milanov} \address{Department of Mathematics\\ North Carolina State University\\ Raleigh, NC 27695, USA\\ e-mail: temilano@ncsu.edu} \thanks{The first author is supported in part by the NSF grant DMS-0701011.} \thanks{The second author is supported in part by the NSF grant DMS-0707150.} \date{\today} \begin{abstract} Using Picard--Lefschetz periods for the singularity of type $A_N$, we construct a projective representation of the Lie algebra of differential operators on the circle with central charge $h:=N+1$. We prove that the total descendant potential $\mathcal{D}_{A_N}$ of $A_N$-singularity is a highest weight vector. It is known that $\mathcal{D}_{A_N}$ can be interpreted as a generating function of a certain class of intersection numbers on the moduli space of $h$-spin curves. In this setting our constraints provide a complete set of recursion relations between the intersection numbers. Our methods are based entirely on the symplectic loop space formalism of A. Givental and therefore they can be applied to the mirror models of symplectic manifolds. 
\end{abstract} \maketitle \sectionnew{Introduction} It was conjectured by E. Witten \cite{W1} and proved by M. Kontsevich \cite{Ko} that the stable intersection theory on the moduli space $\overline{\mathcal{M}}_{g,n}$ of Riemann surfaces is governed by a unique solution of the KdV hierarchy. Following A. Givental \cite{G3}, we denote this solution by $\mathcal{D}_{\rm pt}$ and we refer to it as the {\em Witten--Kontsevich $\tau$-function}. More generally, given a compact K\"ahler manifold $X$, let $\overline{\mathcal{M}}_{g,n}(X,d)$ be the moduli space of equivalence classes of degree-$d$ stable holomorphic maps with values in $X$, whose domain is a genus-$g$ Riemann surface equipped with $n$ marked points. Similarly to $\overline{\mathcal{M}}_{g,n}$, there are intersection numbers in $\overline{\mathcal{M}}_{g,n}(X,d)$, called {\em Gromov--Witten invariants} \cite{BF}, \cite{BM}, \cite{Ko2}, \cite{LT}, \cite{S}. One can organize them in a generating function $\mathcal{D}_X$, similar to $\mathcal{D}_{\rm pt}$. It is natural to ask whether $\mathcal{D}_X$ can be uniquely identified with the solution of some integrable hierarchy. In general, it is very hard to approach this question. However, there is a class of manifolds for which the problem looks manageable. To specify them, one has to introduce the notion of {\em Frobenius structure} on a vector space $H$. It consists of a family of Frobenius algebra structures -- one on each tangent space $T_tH,\ t\in H$ -- satisfying certain integrability conditions (see \cite{D}). The Frobenius structure is called {\em semi-simple} if the multiplication in $T_tH$ is semi-simple for generic $t\in H$. The genus-0 Gromov--Witten invariants of $X$ give rise to a Frobenius structure on $H^*(X)$. 
In case it is semi-simple, Givental conjectured (see \cite{G1}) that $\mathcal{D}_{X}$ is given by a closed formula, which depends only on the semi-simple Frobenius structure and the Witten--Kontsevich $\tau$-function $\mathcal{D}_{\rm pt}.$ Givental's formula was recently proved by C. Teleman \cite{Te}. Since the formula for $\mathcal{D}_X$ makes sense for any semi-simple Frobenius structure, it is natural to investigate a more general problem: what is the connection between integrable hierarchies and semi-simple Frobenius structures? See \cite{DLZ} and \cite{DZ}. In this article we pursue a different direction, which in some sense is parallel to the above discussion. To begin with, let us recall that there is a different way to characterize $\mathcal{D}_{\rm pt}$. According to V. Kac and A. Schwarz \cite{KacS}, $\mathcal{D}_{\rm pt}$ is a highest weight vector of the Virasoro algebra, which is a central extension of the Lie algebra of vector fields on the circle. Combinatorially, the meaning of the Virasoro constraints is the following. The intersection numbers are obtained by integrating over $\overline{\mathcal{M}}_{g,n}$ certain monomial expressions in the cohomology classes $\psi_1,\ldots, \psi_n$, where $\psi_i$ is the first Chern class of the line bundle on $\overline{\mathcal{M}}_{g,n}$ formed by the cotangent lines at the $i$-th marked point. The Virasoro constraints give rise to a rule for removing the powers of $\psi_n$ and thus they express each intersection number in terms of simpler ones, depending on fewer $\psi$'s or lower genus. The Gromov--Witten invariants are obtained by integrating over $\overline{\mathcal{M}}_{g,n}(X,d)$ monomial expressions in cohomology classes of the type ${\rm ev}_i^*(\phi)\psi_i^k$, where ${\rm ev}_i$ is the evaluation map at the $i$-th marked point and $\phi\in H^*(X)$. 
One of the fundamental questions in Gromov--Witten theory, which is still open for manifolds $X$ whose quantum cohomology is not semi-simple, is whether $\mathcal{D}_X$ satisfies the Virasoro constraints (see \cite{EHX, EJX, EX} and also \cite{DZ2} and \cite{G3}). They could also be interpreted as rules for removing ${\rm ev}_n^*({\bf 1})\psi_n^k$, where ${\bf 1}\in H^*(X)$ is the unity. More generally, one could ask whether there are rules for removing all cohomology classes ${\rm ev}_n^*(\phi)\psi_n^k,\ \phi\in H^*(X)$, or, in terms of generating functions, whether $\mathcal{D}_X$ is a highest weight vector for an algebra larger than the Virasoro algebra. The above question makes sense for any $\mathcal{D}_X$ arising from a semi-simple Frobenius structure. In \cite{G3}, A. Givental introduced a certain symplectic loop space formalism, which allowed him to prove that his formula satisfies the Virasoro constraints. The problem then is to find a larger algebra such that $\mathcal{D}_X$ is still a highest weight vector. There is one case in which the answer is known (see \cite{DV}). Namely, the space of miniversal deformations of an $A_N$-singularity has a semi-simple Frobenius structure and for it $\mathcal{D}_{A_N}$ is given by Givental's formula. In \cite{G1}, it was proved that $\mathcal{D}_{A_N}$ is a solution to the $h$-KdV hierarchy, where $h:=N+1$. In addition, according to the results of \cite{G3}, $\mathcal{D}_{A_N}$ satisfies the {\em string equation}, which is just one of the Virasoro constraints, corresponding to removing ${\bf 1}$ from the intersection numbers. Finally, it was proved in \cite{AM} that a solution to $h$-KdV satisfying the string equation is unique and is a highest weight vector of the vertex algebra $\mathcal{W}_h$. We are not going to use the theory of vertex algebras here. The last statement can also be reformulated in the following way (see \cite{FKRW}). 
The algebra of differential operators on the circle has a unique central extension which is usually denoted by $W_{1+\infty}$. The above statement means that there is a representation of $W_{1+\infty}$ with central charge $h$, such that $\mathcal{D}_{A_N}$ is a highest weight vector. In the present article we prove that $\mathcal{D}_{A_N}$ is a highest weight vector for some algebra of differential operators defined in terms of Picard--Lefschetz periods and vertex operators. Our methods are based entirely on the symplectic loop space formalism of A. Givental developed in \cite{G3} and pursued further in \cite{G1} and \cite{GM}. We also prove that after an appropriate change of variables our constraints coincide with the $\mathcal{W}_h$-constraints. \subsection{Formulation of the main result.} \label{frobenius} Let $\mathcal{T}\cong\mathbb{C}^N$ be the space of miniversal deformations of $f(x)={x^{N+1}}/{(N+1)},$ i.e., the points $t=(t^1,\ldots,t^N)\in \mathcal{T}$ parametrize the polynomials \begin{equation}\label{miniversal:deformation} f_t(x)=\frac{x^{N+1}}{N+1}+t^1 x^{N-1}+\cdots+t^N. \end{equation} To avoid cumbersome notation we put $h:=N+1.$ Each tangent space $T_t\mathcal{T}$ is naturally identified with the algebra of polynomial functions on the critical set ${\rm Crit}\, f_t$: $$ \partial/\partial t^i\in T_t\mathcal{T}\ \mapsto\ \partial f_t/\partial t^i\, ({\rm mod}\, f_t'(x))\in \mathbb{C}[x]/\langle f_t'(x)\rangle. $$ In particular, the tangent space $T_t\mathcal{T}$ is equipped with an associative, commutative multiplication, which will be denoted by $\bullet_t$. 
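As a concrete illustration (this example is ours, not part of the original construction), take $N=2$: then $f_t(x)=x^3/3+t^1x+t^2$, $f_t'(x)=x^2+t^1$, and $T_t\mathcal{T}\cong \mathbb{C}[x]/\langle x^2+t^1\rangle$ with $\partial/\partial t^1\mapsto x$ and $\partial/\partial t^2\mapsto 1$. In particular $\partial/\partial t^2$ acts as the unity, while \begin{eqnarray*} \frac{\partial}{\partial t^1}\bullet_t \frac{\partial}{\partial t^1}\ \mapsto\ x^2\equiv -t^1\ ({\rm mod}\ x^2+t^1), \end{eqnarray*} i.e., $(\partial/\partial t^1)\bullet_t(\partial/\partial t^1)=-t^1\,\partial/\partial t^2$. The multiplication is semi-simple precisely when $t^1\neq 0$, i.e., when the two critical points $\pm\sqrt{-t^1}$ are distinct.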
In addition to the multiplication we also introduce a {\em flat} structure on $\mathcal{T}$ via the following residue pairing: \begin{equation}\label{res} \( \partial/\partial t^i\, ,\, \partial/\partial t^j\)_t:=\sum_{k=1}^{N} {\rm res}_{x=\xi_k} \frac{\partial_{t^i}f_t\,\partial_{t^j}f_t}{(f_t)'_x}\,\omega,\quad \omega=dx, \end{equation} where $\xi_k,\ 1\leq k\leq N,$ are the critical points of $f_t.$ The flatness here means that we can find a holomorphic coordinate system $(\tau^1,\ldots,\tau^N)$ on $\mathcal{T},$ in which the above pairing is constant. It follows from the definitions that the multiplication $\bullet_t$ is {\em Frobenius} with respect to the residue pairing. It is also known that the following integrability condition holds (see for example \cite{He}): the family of connection operators \begin{equation}\label{connection} \nabla = \nabla^{\rm L.C.} - \frac{1}{z}\sum_{i=1}^N (\partial/\partial t^i\bullet_t)dt^i \end{equation} is flat, where $\nabla^{\rm L.C.}$ is the Levi--Civita connection of the residue pairing. Let $H=\mathbb{C}[x]/\langle x^N\rangle$ and let $v_i,\ 1\leq i\leq N,$ be the basis obtained by projecting the monomials $x^{N-i}$ to $H$. Put \begin{eqnarray*} {\bf 0}=(0,\ldots,0,0)\in \mathcal{T},\quad {\bf 1}=(0,\ldots,0,1)\in \mathcal{T}. \end{eqnarray*} Let $\tau^i=\tau^i(t)$, $1\leq i\leq N$ be flat coordinates on $\mathcal{T}$ such that $\tau^i({\bf 0})=0$ and the restriction of the coordinate vector field $\partial_i:=\partial/\partial\tau^i$ to the tangent space $T_{\bf 0}\mathcal{T}\cong H$ coincides with $v_i.$ Let us point out that $t^N=\tau^N$, so $\partial_N$ is a unity with respect to the Frobenius multiplication $\bullet_t$. Also, the flat structure allows us to identify $\mathcal{T}\cong H$, $t\mapsto \sum_{i=1}^N \tau^i(t)v_i$ and so we may assume that ${\bf 1}=\partial_N$. 
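As a quick check of the definitions (our computation, not in the original text), take $N=2$, so that $f_t(x)=x^3/3+t^1x+t^2$ with $\partial_{t^1}f_t=x$, $\partial_{t^2}f_t=1$ and critical points $\xi_{1,2}=\pm\sqrt{-t^1}$. The residue pairing \eqref{res} evaluates to \begin{eqnarray*} \(\partial/\partial t^1\, ,\,\partial/\partial t^2\)_t &=& {\rm res}_{x=\xi_1}\frac{x\,dx}{x^2+t^1} + {\rm res}_{x=\xi_2}\frac{x\,dx}{x^2+t^1} \ =\ \frac12+\frac12\ =\ 1, \\ \(\partial/\partial t^1\, ,\,\partial/\partial t^1\)_t &=& -\frac{t^1}{2\xi_1}-\frac{t^1}{2\xi_2}\ =\ 0, \qquad \(\partial/\partial t^2\, ,\,\partial/\partial t^2\)_t\ =\ \frac{1}{2\xi_1}+\frac{1}{2\xi_2}\ =\ 0, \end{eqnarray*} since $\xi_2=-\xi_1$. The pairing is constant in $(t^1,t^2)$, so for $A_2$ these coordinates are already flat, and the Gram matrix $\delta_{i+j,3}$ matches the pairing $(v_i,v_j)=\delta_{i+j,N+1}$ that appears later in the text.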
\medskip By definition, the symplectic loop space $\H$ is the space of formal Laurent series in $z^{-1}$ with coefficients in the vector space $H$ (these are series with finitely many positive powers of $z$ and possibly infinitely many negative ones). The space $\H$ is equipped with a symplectic structure: \begin{eqnarray*} \Omega(\phi_1,\phi_2):={\rm res}_{z=0}\(\phi_1(-z),\phi_2(z)\)dz,\quad \phi_1(z),\phi_2(z)\in \H. \end{eqnarray*} The Darboux coordinate system for $\Omega$ is provided by linear functions $q_k^i$, $p_{k,i}$ defined by the following formula: \begin{eqnarray*} \phi(z) = \sum_{k=0}^\infty q_k^i v_i z^k + \sum_{k=0}^\infty p_{k,i}\,v^i(-z)^{-k-1} \end{eqnarray*} where $\{v^i\}_{1\leq i\leq N}$ is a basis of $H$ dual to $\{v_i\}_{1\leq i\leq N}$ with respect to the residue pairing. We also introduce the {\em Fock space}, which by definition is the space of formal power series in the vector variables $q_0,q_1+{\bf 1},q_2,\ldots$, with coefficients in the field $\mathbb{C}_\epsilon=\mathbb{C}((\epsilon))$, where $q_k=\sum_i q_k^i v_i$. By definition, a {\em vertex operator} is an operator acting on the Fock space of the following type: \begin{equation}\label{vop} \exp\Big(\sum_{k=0}^\infty (-1)^{k+1}\(I^{(-1-k)},v_i\)\frac{q_k^i}{\epsilon} \Big) \exp\Big(\sum_{k=0}^\infty \(I^{(k)},v^i\)\epsilon\frac{\partial}{\partial q_k^i} \Big), \end{equation} where $I^{(n)}\in H.$ In the context of the symplectic loop space formalism the vertex operators will be interpreted as follows. Put $\phi(z)=\sum_{n\in \mathbb{Z}} I^{(n)}(-z)^n\in \H$. The symplectic loop space $\H$ is a direct sum of two Lagrangian subspaces: $\H_-:=H[[z^{-1}]]z^{-1}$ and $\H_+:=H[z].$ We denote by $\phi_+$ (resp. $\phi_-$) the projection of $\phi$ onto $\H_+$ (resp. $\H_-$). 
The first and second exponential factors in \eqref{vop} are then quantizations of the linear Hamiltonians $\Omega(\ ,\phi_-)$ and $\Omega(\ ,\phi_+)$ respectively, where the quantization rules are defined by: $\widehat{q}_k^i={q}_k^i/\epsilon$ and $\widehat{p}_{k,i}=\epsilon\partial/\partial{q}_k^i.$ The period vector $I_a^{(0)}(t,\lambda)\in H$, $(t,\lambda)\in \mathcal{T}\times \mathbb{C}$ is defined by the following formulas: \begin{equation}\label{periods} \(I^{(0)}_a(t,\lambda),v_i\):= \int_{a} \partial_if_t\frac{\omega}{df_t},\quad 1\leq i\leq N, \end{equation} where $a\in H_0(f_t^{-1}(\lambda);\frac{1}{2}\mathbb{Z})$ is a 1-point cycle and the integral is interpreted as evaluation at $a$ (the coefficients of the homology may be chosen even in $\mathbb{C}$, but $\frac{1}{2}\mathbb{Z}$ suffices for our purposes). The value of $I_a^{(0)}(t,\lambda)$ is well-defined only if $(t,\lambda)$ is a point outside the {\em discriminant locus} $\{ (t,u)\ |\ u \mbox{ is a critical value of } f_t\},$ and it depends on the choice of a path, avoiding the discriminant, from a fixed reference point in $\mathcal{T}\times \mathbb{C}$ to $(t,\lambda)$. It is convenient to choose $({\bf 0},1)$ for a reference point, although any other point outside the discriminant locus would work too. Furthermore, near $\lambda=\infty$, the period vector expands as a Laurent series involving only fractional powers of $\lambda$. In particular, it makes sense to put $I^{(n)}_a (t,\lambda)=\partial_\lambda^n I^{(0)}_a (t,\lambda),$ where for negative $n$ the operator $\partial_\lambda^n$ is interpreted as formal integration. We are ready to define the vertex operators that will be needed in this article. Let $\Gamma_a(t,\lambda,s)$ be the vertex operator corresponding to the series \begin{eqnarray*} \phi_a(t,\lambda,s)=\phi_a(t,\lambda+s)-\phi_a(t,\lambda),\quad \phi_a(t,\lambda)=\sum_{n\in \mathbb{Z}}I^{(n)}_a(t,\lambda)(-z)^n, \end{eqnarray*} where $s$ is a {\em formal variable}. 
In other words, our definition should be interpreted as a formal Taylor series in $s:$ \begin{eqnarray*} \phi_a(t,\lambda,s)= \sum_{k= 1}^\infty \Big( \sum_{n\in \mathbb{Z}}I^{(n+k)}_a(t,\lambda)(-z)^n\Big)\frac{s^{k}}{k!}. \end{eqnarray*} Using \eqref{vop}, it is easy to see that the action of the vertex operator $\Gamma_a$ on an element $\mathcal{D}(\mathbf{q})$ of the Fock space is given by: \begin{equation}\label{vop:action} \Gamma_a(t,\lambda,s)\mathcal{D}(\mathbf{q}) = e^{\frac{1}{\epsilon}\Omega(\mathbf{q}(z),\phi_-)}\mathcal{D}(\mathbf{q}+\epsilon \phi_+), \end{equation} where $\phi:=\phi_a(t,\lambda,s).$ Finally, we also need the so-called {\em phase form}: \begin{equation}\label{phase:form} \mathcal{W}_{a,b} = \Big(I^{(0)}_a(t,s)-I^{(0)}_a(t,0)\Big)\bullet_t \Big(I^{(0)}_b(t,s)-I^{(0)}_b(t,0)\Big). \end{equation} Here \begin{eqnarray*} I^{(0)}_a(t,s)-I^{(0)}_a(t,0) = \sum_{k= 1}^\infty I^{(k)}_a(t,0)s^k/k! \end{eqnarray*} is interpreted via Taylor's formula as a formal power series in $s$. Each product $I^{(k)}_a(t,0)\bullet_t I^{(l)}_b(t,0)$ is a vector in $H$, which should be identified with $T_t\mathcal{T}$ and then (via the residue pairing) with the cotangent space $T_t^*\mathcal{T}$. So $\mathcal{W}_{a,b}$ is a formal power series in $s$ whose coefficients are 1-forms on $\mathcal{T}$. Notice that the phase form is multi-valued and has poles. We say that a function $\mathcal{D}$ from the Fock space satisfies the $\mathcal{W}_{A_N}$-constraints if the expression \begin{equation}\label{A_N:constraints} \sum_{a\in f_{\bf 0}^{-1}(1)} c_a(0,\lambda,s)\Gamma_a(0,\lambda,s) \, \mathcal{D}\quad \end{equation} is regular in $\lambda.$ Here $c_a(0,\lambda,s) = e^{\frac{1}{2}\int_{-{\bf 1}}^{-\lambda\,{\bf 1}} \mathcal{W}_{a,a}},$ where the integration path is chosen as follows: first we fix a one-point cycle $a_0\in f_{\bf 0}^{-1}(1)$ and then we pick a path from $-{\bf 1}$ to $-\lambda{\bf 1}$. 
For each $a\in f_{\bf 0}^{-1}(1)$, we precompose this path with a closed loop going several times around $0$, such that the parallel transport of $a_0$ is $a$. Expression \eqref{A_N:constraints} is independent of the choice of $a_0$ and the path from $-{\bf 1}$ to $-\lambda{\bf 1}$, because we can interpret \eqref{A_N:constraints} as the sum over all branches of $c_{a_0}\Gamma_{a_0}\mathcal{D}$. The latter also means that \eqref{A_N:constraints} is invariant under analytic continuation along a loop around $\lambda=\infty$. Therefore, the operator acting on $\mathcal{D}$ expands as a power series in $s$ and a formal series in integral powers of $\lambda$: \begin{equation}\label{A_N:jnk} \sum_a c_a(0,\lambda,s)\Gamma_a(0,\lambda,s) = \sum_{k=0}^\infty \sum_{n\in \mathbb{Z}} W_n^k\, \lambda^{-n-k}s^k, \end{equation} where $W_n^k$ are some differential operators. The regularity of \eqref{A_N:constraints} means that only non-negative powers of $\lambda$ are present, i.e., $W_n^k \mathcal{D}=0$ for $n+k>0.$ Our main result is the following theorem. \begin{theorem}\label{t1} The total descendant potential $\mathcal{D}_{A_N}$ satisfies the $\mathcal{W}_{A_N}$-constraints. \end{theorem} \medskip It is easy to obtain the genus-0 limit of the $\mathcal{W}_{A_N}$-constraints. Let $h_a(\mathbf{q},\mathbf{p})$ be the linear Hamiltonian $\Omega(\ ,\frac{\partial\phi_a}{\partial\lambda}(0,\lambda))$. Then for each $k\geq 2$ we have the following expansion: \begin{eqnarray*} \sum_a c_a(0,\lambda,s)\(h_a(\mathbf{q},\mathbf{p})\)^k=\sum_{n\in \mathbb{Z}} h_{k,n}(\mathbf{q},\mathbf{p})\lambda^{-n-k}. \end{eqnarray*} Using the polarization $\H=\H_-\oplus\H_+$ we identify $\H$ with the cotangent bundle $T^*\H_+$. 
On the other hand, the total descendant potential has the form $\mathcal{D}_{A_N}=\exp\Big(\sum_{g\geq 0}\epsilon^{2g-2}\mathcal{F}^{(g)}\Big).$ It is known that the graph of the differential $d\mathcal{F}^{(0)}$ is a Lagrangian cone ${\mathcal L}$ in $\H$, which has some very special properties (see \cite{G4} for more details). By taking only the lowest degree terms in $\epsilon$ in our $\mathcal{W}_{A_N}$-constraints we get: \begin{corollary}\label{c1} The Hamiltonians $h_{k,n}$ vanish for $k\geq 0$, $n>-k$ when restricted to the Lagrangian cone ${\mathcal L}$. \end{corollary} \medskip \subsection{$\mathcal{W}_{1+\infty}$-constraints.} Let $\Gamma(w,\zeta)$ be a vertex operator, acting on the Fock space $\mathbb{C}_\epsilon[[t_1,t_2,\ldots]]$, obtained from the composition of the vertex operator \begin{equation}\label{vop:kp} \exp\Big( -\sum_{n=1}^\infty ( w^n-\zeta^n)t_n\Big) \exp\Big( \sum_{n=1}^\infty \frac{1}{n}(w^{-n}-\zeta^{-n})\partial_{t_n}\Big) \end{equation} and the dilaton shift $t_{h+1}\mapsto t_{h+1}-\frac{1}{h+1}.$ We define the differential operators $J_n^k$ by the following Taylor expansion: \begin{equation}\label{jnk:bosonic} \sum w^{-N/2}\zeta^{-N/2} (w-\zeta)^{-1}\ \Gamma(w,\zeta)= \frac{h}{s}+\sum_{k=0}^\infty \sum_{n\in \mathbb{Z}} J_n^k \lambda^{-n-k-1}\frac{s^{k}}{k!} \end{equation} where $w=(h(\lambda+s))^{1/h}$, $\zeta=(h\lambda)^{1/h}$ and the sum on the left-hand side is over all branches of $\lambda^{1/h}$. On the other hand, using the change of variables: \begin{equation}\label{change:tq} t_{-i+kh}= \frac{q_{k-1}^i}{(-i+h)(-i+2h)\cdots (-i+kh)} \end{equation} and the dilaton shift $t_{h+1}\mapsto t_{h+1}-\frac{1}{h+1}$ we identify the Fock space $\mathbb{C}_{\epsilon}[[q_0,q_1+1,q_2,\ldots]]$ with the subspace of $\mathbb{C}_{\epsilon}[[t_1,t_2,t_3,\ldots]]$ consisting of all series independent of $t_{h}, t_{2h}, t_{3h}$, etc. 
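To make the change of variables \eqref{change:tq} concrete (our specialization, not in the original text), take $h=2$: the only index is $i=1$, and \begin{eqnarray*} t_{2k-1}=\frac{q_{k-1}^1}{1\cdot 3\cdots (2k-1)}\,,\qquad k\geq 1, \end{eqnarray*} with the dilaton shift $t_3\mapsto t_3-\frac13$, while the series are independent of the even times $t_2,t_4,\ldots$. Up to normalization these are the familiar odd KdV times in which the Witten--Kontsevich $\tau$-function is usually written.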
Therefore, given an element $\mathcal{D}$ from the Fock space $\mathbb{C}_{\epsilon}[[q_0,q_1+1,q_2,\ldots]]$ it makes sense to consider the action of $J_n^k$ on $\mathcal{D}.$ \begin{theorem}\label{t2} The following statements hold: \begin{enumerate} \item[a)] The regularity conditions $W_n^k\mathcal{D} = 0$ ($n+k>0, k\geq 0$) are equivalent to $J_n^k\mathcal{D}=0$ ($n+k\geq 0, k\geq 0$). \item[b)] The map $-\lambda^{n+k}\partial_{\lambda}^k\mapsto J_n^k$ is a representation of $W_{1+\infty}$ with central charge $h$. \item[c)] If the regularity condition $J_n^k\mathcal{D}=0$, $n+k\geq 0$, holds for $k=0, 1,\ldots, h-1$, then it holds for all $k\geq 0.$ \end{enumerate} \end{theorem} This theorem will be proved in Section \ref{sec:3}. The proof of a) amounts to changing the variables in \eqref{A_N:jnk} via \eqref{change:tq} and observing that we get \eqref{jnk:bosonic} up to a factor which is invertible and regular in $\lambda$. Part c) follows from \cite{FKRW} and part b). Finally, to prove b), we construct a representation of $W_{1+\infty}$ (see \cite{vanM}) in the fermionic Fock space and we prove, using the Boson--Fermion isomorphism, that the representation is the same as the one stated in the theorem. \subsection{Higher spin curves} By definition, an $h$-spin smooth curve is a Riemann surface $\Sigma$ equipped with $n$ marked points $x_1, x_2,\ldots, x_n$, a line bundle $L$ on $\Sigma$, $n$ integer labels $m_1, m_2,\ldots, m_n,$ $0\leq m_i\leq N-1$, and an isomorphism between $L^{\otimes h}$ and the twisted canonical bundle $K\otimes\,\bigotimes_i \O(-m_i x_i)$. The moduli space of equivalence classes of $h$-spin curves is denoted by $\mathcal{M}_{g,n}^{\bf m}$, where ${\bf m}=(m_1,\ldots, m_n)$. It is non-empty iff the degree compatibility condition is met: $2g-2-\sum_i m_i$ is divisible by $h$. 
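The degree compatibility condition can be checked in one line (a remark we add for the reader): taking degrees in the isomorphism $L^{\otimes h}\cong K\otimes\,\bigotimes_i \O(-m_i x_i)$ gives \begin{eqnarray*} h\,{\rm deg}\, L = {\rm deg}\, K-\sum_i m_i = 2g-2-\sum_i m_i\,, \end{eqnarray*} so a line bundle $L$ of integer degree can exist precisely when $h$ divides $2g-2-\sum_i m_i$.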
In this case the forgetful map $\pi: \mathcal{M}_{g,n}^{\bf m}\rightarrow \mathcal{M}_{g,n}$, which remembers only $\Sigma$ and the marked points, is a degree $h^{2g}$ covering. It is known that $\mathcal{M}_{g,n}^{\bf m}$ has a natural compactification $\overline{\mathcal{M}}_{g,n}^{\bf m}$, obtained by analyzing what happens to the sections of the line bundle $L$ when $\Sigma$ acquires a node (see \cite{W1} and \cite{JKV}). Another important ingredient is Witten's top Chern class $c_{g,n}^{\bf m}\in H^*(\overline{\mathcal{M}}_{g,n}^{\bf m},\mathbb{Q})$. Using the spin structure isomorphism $L^{\otimes h}\cong \omega_\Sigma\otimes\,\bigotimes_i \O(-m_i x_i)$, one can define a map $w:\Omega^{0,0}(L)\rightarrow \Omega^{0,1}(L)$, $s\mapsto \bar\partial s + s^N$. This construction, which so far is carried out only over a single point of $\overline{\mathcal{M}}_{g,n}^{\bf m}$, is natural, so $w$ can be interpreted as a map between two bundles over $\overline{\mathcal{M}}_{g,n}^{\bf m}$. Then $c_{g,n}^{\bf m}$ is {Poincar\'e} dual to the pushforward of the zero locus of $w$ to $\overline{\mathcal{M}}_{g,n}^{\bf m}$. More details can be found in \cite{PV}, \cite{P}, and \cite{W1}. Let $H$ be an $N$-dimensional vector space with basis $v_1,\ldots, v_N$ and a non-degenerate bilinear pairing defined by $(v_i,v_j)=\delta_{i+j,N+1}$. 
Put \begin{eqnarray*} \langle v_{i_1}\psi^{k_1},\ldots, v_{i_n}\psi^{k_n}\rangle_{g,n}:=\int_{\overline{\mathcal{M}}_{g,n}^{\bf m} } \psi_1^{k_1}\cdots \psi_n^{k_n}\, c_{g,n}^{\bf m} \end{eqnarray*} where $\psi_j$ is the first Chern class of the line bundle on $\overline{\mathcal{M}}_{g,n}^{\bf m}$ formed by the cotangent lines $T^*_{x_j}\Sigma$, and ${\bf m}=(N-i_1,\ldots, N-i_n).$ It follows from the work of \cite{JKV}, \cite{P}, \cite{PV}, and \cite{Te} (see also \cite{Sh}) that the total descendant potential of the $A_N$-singularity coincides with the following generating function: \begin{eqnarray*} \mathcal{D}_{A_N}=\exp\Big( \sum \frac{1}{n!}\epsilon^{2g-2} \langle \mathbf{q}(\psi),\ldots, \mathbf{q}(\psi)\rangle_{g,n} \Big), \end{eqnarray*} where the summation is over all $g, n\geq 0$, $$ \mathbf{q}(\psi) = \sum_{k\geq 0}\sum_{i=1}^N q_k^i v_i \psi^k, $$ and we have to shift $q_1^N\mapsto q_1^N+1$, so that we have a formal series in $q_1^N+1$ and $q_k^i,\ (k,i)\neq (1,N).$ It is easy to see the following: the $J^0$-constraints are empty, because $J^0$ is a constant; the constraint $J^i_{-i-k}\mathcal{D}=0$ comes from a rule for removing $v_{N+1-i}\psi^k$ ($1\leq i\leq N, k\geq 0$); and the rest of the constraints, according to \thref{t2}, are corollaries of the preceding ones. In particular, the $J^1$-constraints coincide with the Virasoro constraints. \subsection{Final remark} The Virasoro constraints for a point, which in our case correspond to $N=1$, can be proved directly, without using \cite{KacS} and \cite{Ko}. The argument, given by M. Mirzakhani \cite{Mir}, is based on an interpretation of the intersection numbers in terms of symplectic volumes. Then by using the Duistermaat--Heckman formula Mirzakhani obtains some recursion relations, which turn out to be the same as the ones provided by the Virasoro algebra. It would be interesting to see whether our constraints can also be proved geometrically.
\sectionnew{From $\mathcal{W}_{A_N}$ to $\mathcal{W}_{h}$.} \label{sec:3} The goal in this section is to give a proof of \thref{t2}. \subsection{Reduction modulo $h$.} The change of variables \eqref{change:tq} looks mysterious but it has a very simple purpose: up to terms depending only on $t_h,t_{2h},t_{3h},\ldots $ it just transforms the vertex operator $\Gamma_a(0,\lambda,s)$ into $\Gamma(w,\zeta)$ where $w=(h(\lambda+s))^{1/h}$ and $\zeta=(h\lambda)^{1/h}$ -- see \eqref{vop:kp}. Indeed, by definition \begin{eqnarray*} (I^{(0)}_a(0,\lambda),v_i)=\int_a {x^{N-i}}/{x^N} = \int_a x^{-i} = (h\lambda)^{-i/h}. \end{eqnarray*} Therefore, for $k\geq 0$ we have: \begin{equation}\label{period:k} (I^{(k)}(0,\lambda),v_i)=(-1)^ki(i+h)\cdots (i+ (k-1)h)(h\lambda)^{-k-i/h}, \end{equation} and \begin{equation}\label{period:k1} (I^{(-k-1)}(0,\lambda),v_i)= \frac{(h\lambda)^{k+1-i/h}}{(-i+h)\cdots (-i+(k+1)h)}. \end{equation} Since $(v_i,v_j)=\delta_{i+j,N+1}$, as one can easily verify from \eqref{res}, using our quantization conventions we get: \begin{eqnarray*} \((I^{(n)}(0,\lambda),v_i)v^i(-z)^{n}\)\sphat = \begin{cases} -t_{-i+(k+1)h}(h\lambda)^{k+1-i/h} & \mbox{ if } n=-k-1 <0, \\ \frac{1}{i+kh}\,\partial_{t_{i+kh}}(h\lambda)^{-k-i/h} & \mbox{ if } n=k \geq 0. \end{cases} \end{eqnarray*} It follows that the substitution $t_h=t_{2h}=\cdots = 0$ transforms the vertex operator $\Gamma(w,\zeta)$ into $\Gamma_a(0,\lambda,s)$. We call this operator the mod-$h$ reduction of $\Gamma(w,\zeta)$ and denote it by $\leftexp{\rm red}{\Gamma}(w,\zeta).$ \subsection{The phase factors.} Our next goal is to compute the coefficient $c_a(0,\lambda,s)=e^{\frac{1}{2}\int_{-{\bf 1}}^{-\lambda{\bf 1}} \mathcal{W}_{a,a}}.$ The restriction of the phase form $\mathcal{W}_{a,a}$ to the complex plane in $\mathcal{T}$ spanned by ${\bf 1} $ is: \begin{eqnarray*} \(I_a^{(0)}(0,-t_N,s),I_a^{(0)}(0,-t_N,s)\)dt_N .
\end{eqnarray*} Using the above formula for the period $I_a^{(0)}$ we get: \begin{eqnarray*} \frac{1}{h}\sum_{i=1}^N \Big( (-t_N+s)^{-\frac{i}{h}}-(-t_N)^{-\frac{i}{h}} \Big) \Big( (-t_N+s)^{-\frac{h-i}{h}}-(-t_N)^{-\frac{h-i}{h}} \Big). \end{eqnarray*} One checks directly that an anti-derivative of this function is \begin{eqnarray*} 2\log\ \frac{ \((-t_N+s)h\)^{-\frac{N}{2h}} - \( -t_N h\)^{-\frac{N}{2h}} } { \((-t_N+s)h\)^{\frac{1}{h}} - \( -t_N h\)^{\frac{1}{h}} }. \end{eqnarray*} If we substitute in this formula $t_N=-\lambda$ we get $2\log \(\frac{c(s/\lambda)}{hs}\)$ where \begin{equation}\label{coeff} c(s):= \(1+s\)^{-\frac{N}{2h}} \frac{s}{\(1+s \)^{\frac{1}{h}} - 1} = h+\frac{h^2-1}{24h}s^2 -\frac{h^2-1}{24h}s^3 + O(s^4). \end{equation} Therefore, the coefficient $c_a(0,\lambda,s)=e^{\frac{1}{2}\int_{-{\bf 1}}^{-\lambda{\bf 1}} \mathcal{W}_{a,a}}$ equals $c(s/\lambda)/c(s).$ \medskip {\em Proof of \thref{t2}, a).} Recalling the definition \eqref{A_N:jnk} of $W_n^k$ we get: \begin{eqnarray*} c(s)\sum_{k=0}^\infty \sum_{n\in \mathbb{Z}}W_n^k\lambda^{-n-k}{s^k}= \sum c(s/\lambda) \leftexp{\rm red}{\Gamma}(w,\zeta) \end{eqnarray*} where the last sum is over all branches of $\lambda^{1/h}$. On the other hand, we have $w^{-N/2}\zeta^{-N/2}(w-\zeta)^{-1} = c(s/\lambda)/(hs)$. Therefore, equation \eqref{jnk:bosonic} assumes the form: \begin{eqnarray*} \sum c(s/\lambda)\Gamma(w,\zeta) = h^2\ + hs\, \sum_{k=0}^\infty\sum_{n\in \mathbb{Z}} J_n^k\lambda^{-n-k-1}\frac{s^k}{k!}. \end{eqnarray*} Let $\mathcal{D}$ be an element of the Fock space $\mathbb{C}_\epsilon[[q_0,q_1+{\bf 1}, q_2,\ldots]].$ We need to prove that $\sum c(s/\lambda)\Gamma(w,\zeta)\mathcal{D}$ is regular in $\lambda$ if and only if $\sum c(s/\lambda)\leftexp{\rm red}{\Gamma}(w,\zeta)\mathcal{D}$ is regular in $\lambda$.
But this is obvious because $ \Gamma(w,\zeta)$ and $\leftexp{\rm red}{\Gamma}(w,\zeta)$ differ by an invertible factor regular in $\lambda$, namely $\exp\sum_{k=1}^\infty t_{kh}\Big((\lambda+s)^k-\lambda^k\Big)h^k.$ \qed \subsection{The algebra of differential operators on the circle.} The {\em Fermionic Fock space} $\Lambda^\bullet\(\mathbb{C}[\zeta,\zeta^{-1}]\)$ is a $\mathbb{Z}$-graded vector space, whose degree-$m$ part $\Lambda^m\(\mathbb{C}[\zeta,\zeta^{-1}]\)$ is spanned by infinite-wedge monomials \begin{eqnarray*} \zeta^{i_0} \wedge \zeta^{i_1} \wedge \zeta^{i_2} \wedge \ldots \end{eqnarray*} such that $i_0>i_1>i_2>\ldots$ and $i_s=m-s-1$ for $s\gg 0$ (see \cite{KRa}). Denote by $\psi_{-i+\frac{1}{2}}$ the operator of wedging by $\zeta^{i}$ and by $\psi_{i-\frac{1}{2}}^*$ the operator of contraction by $\zeta^i$. The Lie algebra $gl_\infty$ of all $\mathbb{Z}\times\mathbb{Z}$ matrices having only finitely many non-zero entries can be represented in the Fock space via: \begin{eqnarray*} r(E_{ij})=\psi_{-i+\frac{1}{2}}\psi^*_{j-\frac{1}{2}} \end{eqnarray*} where $E_{ij}\in gl_\infty$ is the matrix defined by $E_{ij}e_k=\delta_{jk}e_i$. This representation can be extended to a {\em projective} representation of $\widetilde{{gl}}_\infty$ -- the Lie algebra of infinite matrices with finitely many non-zero diagonals. Namely, the formulas \begin{eqnarray*} \widehat{r}(E_{ij})=\ \ :\psi_{-i+1/2}\psi_{j-1/2}^*:\ \ = \begin{cases} \psi_{-i+1/2}\psi_{j-1/2}^* & \mbox{ for } j>0 \\ -\psi_{j-1/2}^*\psi_{-i+1/2} & \mbox{ for } j\leq 0. \end{cases} \end{eqnarray*} define a representation of the central extension $\widehat{{gl}}_\infty= \widetilde{{gl}}_\infty + \mathbb{C}\,c$, with central charge $c=1$ (see \cite{KRa}). Let $-\lambda^{n+k}\partial_\lambda^k$ ($k\geq 0, n\in \mathbb{Z}$) be a basis of the algebra $w_\infty$ of differential operators on the circle, and let \begin{eqnarray*} e_i = \zeta^{-i-\frac{N}{2}},\quad \zeta =\(h\lambda\)^{1/h}.
\end{eqnarray*} Then $-\lambda^{n+k}\partial_\lambda^k$ is identified with the infinite matrix: \begin{equation}\label{jnk} -h^{-n-k}\sum_{i\in \mathbb{Z}}\ \prod_{l=0}^{k-1}\Big(-i-\frac{N}{2}-lh\Big)\ E_{i-nh,i}\ , \end{equation} and thus we get an embedding of Lie algebras $\phi_{-N/2,h}: w_\infty\hookrightarrow \widetilde{gl}_\infty$ (see \cite{KR, BHY}). On the other hand, $w_\infty$ has a unique central extension $W_{1+\infty}=w_\infty+\mathbb{C}\, C$, which can be described as follows. Fix a basis $e_i=\zeta^{-i}$; then each differential operator $-\zeta^{n+k}\partial_\zeta^k$ is represented by an infinite matrix, so we have an embedding $\phi_{0,1}:w_\infty \hookrightarrow \widetilde{gl}_\infty.$ The central extension $\widehat{gl}_\infty$ of $\widetilde{gl}_\infty$ induces a central extension of $w_\infty$, which is isomorphic to $W_{1+\infty}$. In other words, the map \begin{eqnarray*} \widehat{\phi}_{0,1}:W_{1+\infty}\rightarrow \widehat{gl}_\infty,\quad \zeta^{n+k}\partial_\zeta^k\mapsto \phi_{0,1}(\zeta^{n+k}\partial_\zeta^k ),\quad C\mapsto c \end{eqnarray*} is a Lie algebra embedding. We are going to make use of the following explicit formula for the commutator in $W_{1+\infty}$ (see \cite{KR}): \begin{equation}\label{comm:W} \left[ \zeta^ke^{xD_\zeta},\zeta^me^{yD_\zeta}\right]_{W_{1+\infty}} = \(e^{xm}-e^{yk}\)\zeta^{k+m}e^{(x+y)D_\zeta} + \delta_{k,-m} \frac{e^{xm}-e^{yk}}{1-e^{x+y}}\, C, \end{equation} where $D_\zeta=\zeta\partial_\zeta.$ The next lemma is essentially the same as formula (19) in \cite{BHY}. There is, however, a slight difference between the setup there and the one here, so we will give a separate proof.
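As a quick illustration of \eqref{comm:W}, extract the constant term in $x$ and $y$ on both sides: expanding $(e^{xm}-e^{yk})/(1-e^{x+y})$ as a formal power series at $k=-m$, the term of degree zero equals $-m$, so
\begin{eqnarray*}
\left[\zeta^k,\zeta^m\right]_{W_{1+\infty}} = \delta_{k,-m}\, k\, C.
\end{eqnarray*}
In other words, the lifts of the multiplication operators to $W_{1+\infty}$ satisfy the Heisenberg commutation relations; their failure to commute is precisely the effect of the central extension.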
\begin{lemma}\label{extend:phi} The embedding $\phi_{-N/2, h}: w_\infty\hookrightarrow \widetilde{gl}_\infty$ can be extended to a Lie algebra embedding $\widehat{\phi}_{-N/2, h}: W_{1+\infty}\hookrightarrow \widehat{gl}_\infty$ in the following way: \begin{eqnarray*} \lambda^ne^{xD_\lambda} & \mapsto & {\phi}_{-\frac{N}{2},h}\(\lambda^ne^{xD_\lambda}\) + \delta_{n,0}\ \Big(\frac{e^{-\frac{xN}{2h}}}{1-e^{x/h}} - \frac{h}{1-e^x}\Big)\, c \\ C & \mapsto & hc \end{eqnarray*} where $D_{\lambda}=\lambda\partial_{\lambda}.$ \end{lemma} \proof Notice that $\phi_{-N/2, h}=\phi_{0,1}\circ \pi_{-N/2,h}$, where \begin{eqnarray*} \pi_{-N/2,h}:w_\infty \hookrightarrow w_\infty,\quad \lambda^kD_\lambda^m \mapsto \frac{\zeta^{kh}}{h^k} \Big(\frac{1}{h}(D_\zeta-N/2)\Big)^m. \end{eqnarray*} Therefore, it is enough to construct an extension $\widehat{\pi}_{-N/2,h}:W_{1+\infty}\rightarrow W_{1+\infty}$ of $\pi_{-N/2, h}$. We are going to look for $\alpha\in w_\infty^*$ and $K\in \mathbb{C}$ such that: \begin{eqnarray*} \widehat{\pi}_{-N/2,h}(L)={\pi}_{-N/2,h}(L)+ \langle \alpha,L\rangle \,C,\quad\widehat{\pi}_{-N/2,h}(C)=K\,C,\quad L\in w_\infty. \end{eqnarray*} Then \begin{eqnarray*} {\pi}_{-N/2,h}(\lambda^ke^{xD_\lambda})= \frac{e^{-\frac{Nx}{2h}}}{h^k}\zeta^{kh}e^{\frac{x}{h}D_\zeta} \end{eqnarray*} and using \eqref{comm:W} it is easy to see that $\widehat{\pi}_{-N/2,h}$ is a Lie algebra homomorphism if and only if \begin{eqnarray*} \langle \alpha, \lambda^{k+m}e^{(x+y)D_\lambda}\rangle = \delta_{k,-m}\Big(\frac{e^{-(x+y)N/(2h)}}{1-e^{(x+y)/h}} - \frac{K}{1-e^{x+y}}\Big). \end{eqnarray*} Replacing $k+m$ by $n$ and $x+y$ by $x$, we get: \begin{equation}\label{central:term} \langle \alpha, \lambda^{n}e^{xD_\lambda}\rangle = \delta_{n,0}\Big(\frac{e^{-\frac{xN}{2h}}}{1-e^{x/h}} - \frac{K}{1-e^{x}}\Big), \end{equation} which equals \begin{eqnarray*} \frac{\delta_{n,0}}{1-e^x}\Big( e^{-\frac{xN}{2h}}+ e^{-\frac{x(N-2)}{2h}}+\cdots+e^{-\frac{x(N-2(h-1))}{2h}} - K\Big).
\end{eqnarray*} For the RHS in \eqref{central:term} to be a well-defined formal power series in $x$, it is necessary and sufficient that $K=h.$ \qed \subsection{Boson--Fermion isomorphism} Using the morphism constructed in \leref{extend:phi} and the standard representation $\widehat{r}$, we get a representation of $W_{1+\infty}$ on the Fermionic Fock space with central charge $h$. We would now like to use the Boson--Fermion isomorphism and obtain a representation in the Bosonic Fock space. Put \begin{eqnarray*} \psi(\zeta)=\sum_{i\in \mathbb{Z}}\psi_{i+\frac{1}{2}}\, \zeta^{-i-1}\quad \mbox{ and }\quad \psi^*(\zeta)=\sum_{j\in \mathbb{Z}}\psi^*_{j+\frac{1}{2}}\, \zeta^{-j-1}. \end{eqnarray*} The Boson--Fermion isomorphism identifies $\Lambda^m \(\mathbb{C}[\zeta,\zeta^{-1}]\)$ and $\mathbb{C}[[t_1,t_2,\ldots]]q^m$ in such a way that \begin{eqnarray*} \psi(\zeta)\ \mapsto \ \Gamma_+(\zeta),\quad \psi^*(\zeta)\ \mapsto \ \Gamma_-(\zeta), \end{eqnarray*} where the vertex operators are defined by: \begin{eqnarray*} \Gamma_\pm(\zeta)=q^{\pm 1}\zeta^{\pm m}\exp\Big(\,\pm\, \sum_{n=1}^\infty t_n \zeta^n\ \Big)\exp\Big(\pm \sum_{n=1}^\infty \frac{\partial}{\partial t_n}\, \frac{\zeta^{-n}}{-n}\Big), \end{eqnarray*} and the following anti-commutation relations are preserved: \begin{equation}\label{comm:1} [\psi(\zeta),\psi^*(w)]_+ = \delta(\zeta-w)=\sum_{n\in \mathbb{Z}} \zeta^nw^{-n-1}, \end{equation} \begin{equation}\label{comm:2} [\psi(\zeta),\psi(w)]_+ = [\psi^*(\zeta),\psi^*(w)]_+=0.
\end{equation} It is easy to verify that we have the following Operator Product Expansion: \begin{eqnarray*} -\psi(\zeta)\psi^*(w) = i_{\zeta,w}\frac{1}{w-\zeta} \ \ + \ \ :\psi^*(w)\psi(\zeta): \end{eqnarray*} where $i_{\zeta,w}$ means that we have to expand as a geometric series in the region $|\zeta|>|w|.$ \begin{lemma}\label{t3} The following equality holds: \begin{eqnarray*} \sum_{n\in \mathbb{Z}}\sum_{k=0}^\infty \widehat{r}\circ \widehat{\phi}_{-N/2,h}(-\lambda^{n+k}\partial_\lambda^k) \lambda^{-n-k-1}{s^k}/{k!}\ =\ -\frac{h}{s}- \sum w^{-\frac{N}{2}}\zeta^{-\frac{N}{2}}\psi(\zeta)\psi^*(w) \end{eqnarray*} where $w=(h(\lambda+s))^{1/h},$ $\zeta=(h\lambda)^{1/h},$ each summand on the RHS is interpreted as a formal Taylor series in $s$, and the sum is over all $h$ branches of $\lambda^{1/h}.$ \end{lemma} \proof First, we prove that: \begin{equation}\label{jnk:2} \sum_{n\in \mathbb{Z}}\sum_{k=0}^\infty \widehat{r}\circ \phi_{-N/2,h}(-\lambda^{n+k}\partial_\lambda^k) \lambda^{-n-k-1}{s^k}/{k!}\ =\ \sum :w^{-\frac{N}{2}}\psi^*(w)\zeta^{-\frac{N}{2}}\psi(\zeta): \end{equation} Using Taylor's formula we get: \begin{eqnarray*} (\lambda+{s})^{-\frac{N}{2h}}\psi^*((\lambda+s)^{1/h}) = \sum_{k\geq 0}\ \sum_{i\in \mathbb{Z}}\ \partial_\lambda^k\(\lambda^{-\frac{N}{2h}-\frac{i}{h}}\)\psi^*_{i-\frac{1}{2}}\, \frac{s^k}{k!}. \end{eqnarray*} Differentiating with respect to $\lambda$, then multiplying by the series \begin{eqnarray*} \lambda^{-\frac{N}{2h}}\psi(\lambda^{1/h}) = \sum_{j\in \mathbb{Z}}\ \lambda^{-\frac{N}{2h}-\frac{j}{h}} \psi_{j-\frac{1}{2}}, \end{eqnarray*} and rescaling $\lambda$ and $s$ by $h$ we get that the RHS in \eqref{jnk:2} equals the sum (over all $k\geq 0$, $i,j\in \mathbb{Z}$) of the following terms: \begin{equation}\label{t3:1} \prod_{l=0}^{k-1} \Big(-i-\frac{N}{2}-lh\Big)\ (h\lambda)^{-\frac{i+j-1}{h} - k-1}\, :\psi^*_{i-\frac{1}{2}}\psi_{j-\frac{1}{2}}: \frac{s^k}{k!}.
\end{equation} Note that averaging a formal series in $\lambda^{\pm1/h}$ over all branches of $\lambda^{1/h}$ kills all fractional powers and leaves the integral ones unchanged. Therefore if we sum \eqref{t3:1} over all branches of $\lambda^{1/h}$, then we get a non-zero answer only for $k\geq 0$, $i\in \mathbb{Z}$ and $j=nh-i+1$. Notice that under the above conditions \eqref{t3:1} is independent of the branch and that \begin{eqnarray*} :\psi^*_{i-\frac{1}{2}}\psi_{j-\frac{1}{2}}: \ =\ :\psi^*_{i-\frac{1}{2}}\psi_{-i+nh+\frac{1}{2}}:\ =\ -:\psi_{-i+nh+\frac{1}{2}}\psi^*_{i-\frac{1}{2}}: \end{eqnarray*} By definition, the above operator is $-\widehat{r}(E_{i-nh,i})$. Therefore, the coefficient in front of $\lambda^{-n-k-1}\frac{s^k}{k!}$ in \eqref{t3:1} coincides with \eqref{jnk}. Notice that the additional factor of $h$ comes from the summation over all branches of $\lambda^{\pm1/h}$. The next step will be to use \leref{extend:phi}. In order to do this, we notice that $\sum_{k\geq 0} \lambda^{k}\partial_\lambda^k s^k/k! = e^{xD_\lambda}$ where $1+s=e^x$. Indeed, $\lambda^{k+1}\partial_\lambda^{k+1}=\lambda^k D\partial_\lambda^k = (D-k)\lambda^k\partial_\lambda^k$, so if we denote the LHS by $F(x,D)$ where $s=e^x-1$, then it is easy to check that $\partial_x F = D F$ and since $F(0,D)=1$ the identity follows. So in \eqref{jnk:2} if we replace $\phi_{-N/2,h}$ on the LHS by $\widehat{\phi}_{-N/2,h}$ then according to \leref{extend:phi} we have to add to the RHS the following expression: \begin{eqnarray*} -\frac{1}{\lambda} \Big(\frac{e^{-\frac{xN}{2h}}}{1-e^{x/h}} - \frac{h}{1-e^x}\Big) \end{eqnarray*} where $1+s/\lambda = e^x$. Recall that $w=(h(\lambda+s))^{1/h}$ and $\zeta = (h\lambda)^{1/h}$, so the above expression is equal to: \begin{eqnarray*} -\frac{h}{s} + h \frac{w^{-N/2}\zeta^{-N/2}}{w-\zeta} = -\frac{h}{s}+\sum \frac{w^{-N/2}\zeta^{-N/2}}{w-\zeta} \end{eqnarray*} where the sum is over all branches of $\lambda^{1/h}$.
It remains only to notice that \begin{eqnarray*} :\psi^*(w)\psi(\zeta): \ +\ \iota_{\zeta,w}\, \frac{1}{w-\zeta} \ =\ -\psi(\zeta) \psi^*(w) \end{eqnarray*} where $\iota_{\zeta,w}$ means that we have to expand in the region $|\zeta|>|w|.$ \qed {\em Proof of \thref{t2}, b).} According to the Boson--Fermion isomorphism, the operator $-\psi(\zeta)\psi^*(w)$ is transformed into \begin{eqnarray*} -\Gamma_+(\zeta)\Gamma_-(w) = -\zeta^{-1}e^{\sum_{n=1}^\infty t_n \zeta^n} e^{-\sum_{n=1}^\infty \zeta^{-n}/n\partial_{t_n}}e^{-\sum_{n=1}^\infty t_n w^n} e^{\sum_{n=1}^\infty w^{-n}/n\partial_{t_n}}. \end{eqnarray*} Given two operators $A$ and $B$ such that $[A,B]=AB-BA$ commutes with both $A$ and $B$, we have $e^Ae^B =e^{[A,B]}e^Be^A$. Applying this with $A$ the translation term of $\Gamma_+$ and $B$ the multiplication term of $\Gamma_-$, we get: \begin{eqnarray*} e^{[A,B]}= \exp\ \sum_{n=1}^\infty (w/\zeta)^n/n = \exp\Big( \log \frac{1}{1-w/\zeta}\Big) = \frac{1}{1-w/\zeta} \end{eqnarray*} and so \begin{eqnarray*} -\frac{h}{s}-{w^{-N/2}\zeta^{-N/2}}\psi(\zeta)\psi^*(w) =-\frac{h}{s}+ \frac{w^{-N/2}\zeta^{-N/2} }{w-\zeta} \Gamma(w,\zeta) \end{eqnarray*} where $\Gamma(w,\zeta)$ is the vertex operator defined in \eqref{vop:kp}. Comparing the above formula with \leref{t3} and formula \eqref{jnk:bosonic} we get that $$ \widehat{r}\circ \widehat{\phi}_{-N/2,h} (-\lambda^{n+k}\partial_\lambda^k) = J_n^k. $$ Notice that even if we perform the dilaton shift in $J_n^k$, the commutation relations do not change, so we still have a representation of $W_{1+\infty}$ with central charge $h$. \qed \subsection{Example.} We compute explicitly $\leftexp{\rm red}{J}^1(\lambda)$, where the left superscript means that we set $t_h=t_{2h}=\cdots = 0$. We do not incorporate the dilaton shift in our computation for typographical reasons. The reader interested in the applications of $J^1$ to higher spin curves should apply the dilaton shift $t_{h+1}\mapsto t_{h+1}-\frac{1}{h+1}$ to our final answer.
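For instance, when $h=2$ this prescription reads $t_3\mapsto t_3-\frac{1}{3}$. Under the identification $q_k+\delta_{k,1}=(2k+1)!!\,t_{2k+1}$ used in the next subsection, we have $q_1+1=3t_3$, so the shift in $t_3$ is exactly the shift by $1$ in the variable $q_1$ built into the Fock space $\mathbb{C}_\epsilon[[q_0,q_1+1,q_2,\ldots]].$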
Put $E=\mathbb{Z}\setminus h\mathbb{Z}$ and denote by $J_m$ the multiplication operator $-mt_{-m}$ for $m<0$ and the differential operator $\partial/\partial t_m$ for $m>0$. Then we have: $\phi_a(\lambda) = -\sum_{m\in E} J_m \, (h\lambda)^{m/h}/m,$ where $a$ corresponds to a choice of $h$-th root of $\lambda$. Using Taylor's formula, we get: \begin{eqnarray*} \sum_a \leftexp{\rm red}{\Gamma}(w,\zeta) = h + \sum_a :(\partial_\lambda\phi_a)^2:\frac{s^2}{2!} + O(s^3). \end{eqnarray*} Since $w^{-N/2}\zeta^{-N/2}(w-\zeta)^{-1} = c(s/\lambda)/hs$, using the expansion in \eqref{coeff}, we get \begin{eqnarray*} w^{-N/2}\zeta^{-N/2}(w-\zeta)^{-1} = \frac{1}{hs}\Big(h+\frac{h^2-1}{24h}\lambda^{-2} s^2 + O(s^3)\Big). \end{eqnarray*} Substituting in \eqref{jnk:bosonic} we get: \begin{eqnarray*} \leftexp{\rm red}{J}^1(\lambda)=\frac{1}{2}\sum_a :(\partial_\lambda\phi_a)^2: + \frac{h^2-1}{24h}\, \lambda^{-2} \end{eqnarray*} or in components: \begin{equation}\label{Virasoro} \leftexp{\rm red}{J}^1_n = \frac{h^{-n-1}}{2}\sum_{m\in E} :J_mJ_{nh-m}: + \delta_{n,0}\frac{h^2-1}{24h}. \end{equation} \subsection{$\mathcal{W}_{A_1}$ and Virasoro constraints} Assume now that $N=1$ and so $h=2$. By definition, the Witten--Kontsevich tau-function is the following generating series: \begin{equation}\label{D:pt} \mathcal{D}_{\rm pt}=\exp\Big( \sum_{g,n}\frac{1}{n!}\epsilon^{2g-2}\int_{\overline{\mathcal{M}}_{g,n}}\prod_{i=1}^n (\mathbf{q}(\psi_i)+\psi_i)\Big), \end{equation} where $\mathbf{q}(\psi)=\sum_k q_k \psi^k,$ $(q_0,q_1,\ldots)$ are formal variables, $\psi_i$ ($1\leq i\leq n$) are the first Chern classes of the cotangent-line bundles on $\overline{\mathcal{M}}_{g,n},$ and we have to expand in the powers of $q_0,q_1+1,q_2,\ldots$. The substitution \eqref{change:tq}, together with the dilaton shift, gives us $q_k+\delta_{k,1} = (2k+1)!!t_{2k+1}$.
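In this case ($h=2$, so $E$ is the set of odd integers) formula \eqref{Virasoro} can be written out explicitly. For example, for $n=-1$ we get
\begin{eqnarray*}
\leftexp{\rm red}{J}^1_{-1} = \frac{1}{2}\sum_{m\in E} :J_mJ_{-2-m}:\ =\ \frac{t_1^2}{2} + \sum_{m\geq 1,\ m\ {\rm odd}} (m+2)\,t_{m+2}\,\frac{\partial}{\partial t_m}\,,
\end{eqnarray*}
which, after the dilaton shift $t_3\mapsto t_3-\frac{1}{3}$, acquires the term $-\partial/\partial t_1$ and becomes (up to normalization) the operator appearing in the string equation for $\mathcal{D}_{\rm pt}$.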
According to the main result in \cite{KacS}, $\mathcal{D}_{\rm pt}$ is a highest weight vector for the Virasoro algebra, where the Virasoro operators are $L_n=h^n( \leftexp{\rm red}{J}^1_n).$ On the other hand, it is very easy to see that if $\mathcal{D}$ is a series independent of $t_h,t_{2h},\ldots$ then $\leftexp{\rm red}{J}^1_n\mathcal{D} = J^1_n\mathcal{D}$, so \thref{t2}, c) and a) imply \begin{corollary}\label{W:A1} $\mathcal{D}_{\rm pt}$ satisfies the $\mathcal{W}_{A_1}$-constraints. \end{corollary} \sectionnew{The total descendant potential $\mathcal{D}_{A_N}$}\label{sec:2} The goal here is to define $\mathcal{D}_{A_N}$ following A. Givental \cite{G1}. The starting point is a system of differential equations, corresponding to the flat connection \eqref{connection} (see also \eqref{nabla:z} below). The connection has two singularities and we construct two fundamental solutions -- one near each singularity. It turns out that such solutions are symplectic transformations in $\H$ and we can quantize them. This way we obtain certain differential operators, which are applied to a product of $N$ copies of the Witten--Kontsevich tau-function $\mathcal{D}_{\rm pt}.$ \subsection{The system of Frobenius differential equations}\label{SR} Let $E$ be the vector field on $\mathcal{T}$, whose value at a point $t\in \mathcal{T}$ is $[f_t]\in \mathbb{C}[x]/\langle \partial_x f_t\rangle \cong T_t\mathcal{T}$, i.e., \begin{eqnarray*} E=\sum_{i=1}^N \, \frac{1}{h}\,(i+1)t^i\frac{\partial}{\partial t^i}. \end{eqnarray*} Notice that if we assign to $x$ and $t^i$ ($1\leq i\leq N$) degrees $1/h$ and $(i+1)/h$, then $f_t(x)$ is homogeneous of degree 1. Moreover, the residue pairing and the structure constants of the multiplication $\bullet_t$ are also homogeneous. The various homogeneity properties can be expressed in terms of the connection operator \eqref{connection}. 
Namely, $\nabla$ can be extended to a flat connection on $\mathcal{T}\times \mathbb{C}^*$ in the following way: \begin{equation}\label{nabla:z} \nabla_{\partial/\partial z} = \frac{\partial}{\partial z} -z^{-1}\,\mu+z^{-2} \,E\bullet\ , \end{equation} where the linear operator $\mu: T_t\mathcal{T}\rightarrow T_t\mathcal{T}$ is defined by \begin{eqnarray*} \mu(\partial/\partial t^i) = (i/h - 1/2)\partial/\partial t^i\quad (1\leq i\leq N) \end{eqnarray*} and $E\bullet$ is the operator of multiplication by the Euler vector field $E$. The connection $\nabla$ acts on the sections of the pullback bundle $\pi^*T\mathcal{T},$ where $\pi: \mathcal{T}\times \mathbb{C}^*\rightarrow \mathcal{T}$ is the projection map. The connection $\nabla$ is gauge equivalent to $d-(\mu/z) dz$ in a neighborhood of $z=\infty.$ There exists a unique gauge transformation $S_t$, which has the form $$ S_t(z)=1+S_1(t)z^{-1}+S_2(t)z^{-2}+\cdots, $$ where $S_k(t)$ are linear transformations of $H\cong T_t\mathcal{T}$. Equivalently, $S_t$ is the unique solution to the following system of differential equations: \begin{equation}\label{de_S} z\partial_i S_t = v_i\bullet\, S_t\,, \quad (z\partial_z + E)S_t = [\mu,S_t]. \end{equation} It is easy to see that $S_t$ also satisfies the initial condition $S_{\bf 0}=1.$ \medskip Near $z=0$ the system of ordinary differential equations $\nabla J=0$ (see \eqref{de_R} below) has a formal solution of the type $\Psi_tR_te^{U_t/z}$, where the notations are as follows. Let $u^i(t)$ ($1\leq i\leq N$) be the critical values of $f_t$. It is known that for a generic $t$ they form a local coordinate system on $\mathcal{T}$ in which the Frobenius multiplication and the residue pairing are diagonal.
Namely, \begin{eqnarray*} \partial/\partial u^i \, \bullet_t\, \partial/\partial u^j = \delta_{ij}\partial/\partial u^j,\quad \(\partial/\partial u^i,\partial/\partial u^j \) = \frac{\delta_{ij}}{\Delta_i}, \end{eqnarray*} where $\Delta_i$ is the Hessian of $f_t$ at the critical point corresponding to the critical value $u^i.$ We denote by $\Psi_t$ the following linear isomorphism: \begin{eqnarray*} \Psi_t:\mathbb{C}^N\rightarrow T_t\mathcal{T} ,\quad e_i\mapsto \sqrt{\Delta_i}\partial/\partial u^i. \end{eqnarray*} Note that $\Psi_t$ identifies the residue pairing on $T_t\mathcal{T}$ with the standard Euclidean pairing $(e_i,e_j)=\delta_{ij}$. We let $U_t$ be a diagonal matrix with entries $u^1(t),\ldots, u^N(t)$. The series $$ R_t(z)=1+R_1(t)z+R_2(t)z^2+\ldots, $$ where $R_k$ are linear operators in $\mathbb{C}^N$, is uniquely determined by the following differential equations: \begin{equation}\label{de_R} z\partial_i \(\Psi R e^{U/z}\) = v_i\bullet_t\, \(\Psi R e^{U/z}\),\quad (z\partial_z+E)\(\Psi R e^{U/z}\) =\mu\,\(\Psi R e^{U/z}\). \end{equation} \subsection{Quantization of symplectic transformations} By definition, the {\em twisted loop group} $L^{(2)}GL(H)$ is the group of all symplectic transformations of $\H$ of the type: $M(z)=\sum_k M_k z^k$, where $M_k$ are linear operators in $H$, $z^k$ acts on $\H$ by multiplication, and the sum is over finitely many $k$. It follows from the definition of the symplectic structure $\Omega$ that $M(z)$ is symplectic if and only if $M^T(-z)M(z)=1$, where the transposition is with respect to the residue pairing on $H.$ It is known that both series $S_t$ and $R_t$ (see Subsection \ref{SR}) are symplectic transformations of $\H$. Notice that $S_t$ and $R_t$ have the form $e^{A(z)},$ where $A(z)$ is an infinitesimal symplectic transformation. 
On the other hand, a linear transformation $A(z)$ is infinitesimal symplectic if and only if the map $\mathbf{f}\in \H \mapsto A\mathbf{f}\in \H$ is a Hamiltonian vector field with Hamiltonian given by the quadratic function $h_A(\mathbf{f}) = \frac{1}{2}\Omega(A\mathbf{f},\mathbf{f})$. By definition, the quantization of $e^A$ is given by the differential operator $e^{\widehat{h}_A},$ where the quadratic Hamiltonians are quantized according to the following rules: \begin{eqnarray*} (p_{k,i}p_{l,j})\sphat = \epsilon^2\frac{\partial^2}{\partial q_k^i\partial q_l^j},\quad (p_{k,i}q_l^j)\sphat = (q_l^jp_{k,i})\sphat = q_l^j\frac{\partial}{\partial q_k^i},\quad (q_k^iq_l^j)\sphat =q_k^iq_l^j/\epsilon^2. \end{eqnarray*} By linearity we obtain a {\em projective} representation of the Poisson Lie algebra of quadratic Hamiltonians of $\H$ on the Fock space. Namely, \begin{eqnarray*} \{F,G\}\sphat = [\widehat{F},\widehat{G}]+C(F,G), \end{eqnarray*} where the cocycle $C$ is $0$ on all pairs of quadratic Darboux monomials except for \begin{eqnarray*} C(p_\alpha p_\beta,q_\alpha q_\beta)= \begin{cases} 1, & \mbox{ if } \alpha\neq\beta\\ 2, & \mbox{ otherwise } \end{cases},\quad \alpha=(k,i),\quad \beta=(l,j). \end{eqnarray*} The action of $\widehat{S}_t^{-1}$ on an element $F(\mathbf{q})$ of the Fock space $\mathbb{C}_\epsilon[[q_0,q_1+{\bf 1},q_2,\ldots]]$ is given by the following formula (see \cite{G3}): \begin{equation}\label{S:fock} \widehat{S}_t^{-1}\ F(\mathbf{q})=e^{\frac{1}{2\epsilon^2}W_t(\mathbf{q},\mathbf{q})}F([S_t\mathbf{q}]_+), \end{equation} where the quadratic form is defined by: \begin{equation}\label{W} W_t(\mathbf{q},\mathbf{q})=\sum_{k,l} (v_j,W_{kl}v_i)q_l^iq_k^j,\quad \sum_{k,l} W_{kl}w^{-k}z^{-l}=\frac{S^T_t(w)S_t(z)-1}{z^{-1}+w^{-1}}, \end{equation} and $[\ ]_+$ means truncation of all negative powers of $z$.
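To see where the cocycle $C$ comes from, consider a single Darboux pair and take $F=p_\alpha^2$, $G=q_\alpha^2$ (we suppress the index $\alpha=(k,i)$ on the quantized operators). The quantization rules give $\widehat{F}=\epsilon^2\partial^2/\partial q^2$ and $\widehat{G}=q^2/\epsilon^2$, and a direct computation yields
\begin{eqnarray*}
\Big[\,\epsilon^2\frac{\partial^2}{\partial q^2}\,,\,\frac{q^2}{\epsilon^2}\,\Big] = 4\,q\frac{\partial}{\partial q}+2,
\end{eqnarray*}
so the commutator of the quantizations differs from the quantization of the quadratic Hamiltonian $4p_\alpha q_\alpha$ by the additive constant $2=C(p_\alpha p_\alpha,q_\alpha q_\alpha)$.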
Similarly (see \cite{G3}), \begin{equation}\label{R:fock} \widehat{R}_t^{-1}\ F(\mathbf{q}) = \big(e^{\frac{\epsilon^2}{2}V_t} F\big)(R_t\mathbf{q}), \end{equation} where $V_t$ is a second order differential operator defined by: \begin{equation}\label{V} V_t=\sum_{k,l}\sum_{i,j} (v^i, V_{kl} v^j)\frac{\partial^2}{\partial q_k^i\partial q_l^j},\quad \sum_{k,l}V_{kl}w^kz^l = \frac{1-R_t(w)R^T_t(z)}{z+w}. \end{equation} Notice that on the RHS in formula \eqref{R:fock} we first apply the differential operator $e^{\frac{\epsilon^2}{2}V_t}$ to $F(\mathbf{q})$ and then we make the substitution $\mathbf{q}\mapsto R_t\mathbf{q}.$ \subsection{The total descendant potential} Let $t\in \mathcal{T}$ be a semisimple point, i.e., such that the critical values $u^i(t)$ ($1\leq i\leq N$) form a coordinate system. We denote by $\tau=(\tau^1,\ldots,\tau^N)$ the flat coordinates of $t$. The {\em total descendant potential} of $A_N$-singularity is defined in a formal neighborhood of the point $\tau-{\bf 1}z\in \H_+$ by the following formula: \begin{equation}\label{DAN} \mathcal{D}_{A_N}(\mathbf{q})= e^{F^{(1)}(t)}\, \widehat{S}^{-1}_t\,\widehat{\Psi}_t\,\widehat{R}_t\,e^{\widehat{U_t/z}}\,\prod_{i=1}^N \mathcal{D}_{\rm pt}(\epsilon\,\sqrt{\Delta_i}; Q^i\sqrt{\Delta_i}), \end{equation} where: \begin{enumerate} \item[--] $Q^i=(Q_0^i,Q_1^i,\ldots )$, $1\leq i\leq N$, are $N$ sequences of variables. \item[--] The formal series $\mathcal{D}_{\rm pt}(\epsilon\,\sqrt{\Delta_i}\,; Q^i\sqrt{\Delta_i})$ is obtained from the total descendant potential of a point \eqref{D:pt} via the dilaton shift: $\t(z)=Q^i(z)+z$ and rescaling of $\epsilon$ and $Q^i$ by $\sqrt{\Delta_i}$. \item[--] $(\widehat{\Psi}_tF)(\mathbf{q})=F(\Psi^{-1}\mathbf{q}),$ i.e., this is simply the change of variables: \begin{eqnarray*} Q_k^j \sqrt{\Delta_j}= \sum_{i=1}^N\,\frac{\partial u^j}{\partial \tau^i}\, q_k^i\quad (k\geq 0, 1\leq j\leq N).
\end{eqnarray*} \item[--] The genus-1 potential $F^{(1)}(t)$ is defined in such a way that the RHS of \eqref{DAN} is independent of $t$. The precise value is irrelevant for our purposes. \end{enumerate} The product in \eqref{DAN} is a formal power series with coefficients in $\mathbb{C}_\epsilon$ in the following variables: \begin{equation}\label{Q} Q_0^i\sqrt{\Delta_i}, Q_1^i\sqrt{\Delta_i}+1, Q_2^i\sqrt{\Delta_i}, \ldots (1\leq i\leq N). \end{equation} The operator $e^{\widehat{U_t/z}}$ is redundant, because its exponent $\widehat{U_t/z}$ is known to annihilate the product of the Witten--Kontsevich $\tau$-functions. The action of the operator $\widehat{R}_t^{-1}$ is given by formula \eqref{R:fock}, where instead of $\mathbf{q}=\sum_{k,i}q_k^i v_i z^k$ one has to use ${\bf Q}=\sum_{k,i} Q_k^i e_i z^k$. In particular, $\widehat{R}_t^{-1}$ preserves the space of formal power series in the variables \eqref{Q}. Furthermore, we have ${\bf 1} = \sum_i ({\bf 1},v^i)v_i$ and \begin{eqnarray*} Q_1^j\sqrt{\Delta_j}+1 = \sum_{i=1}^N \frac{\partial u^j}{\partial \tau^i}\, (q_1^i+({\bf 1},v^i) ) - \sum_{i=1}^N\frac{\partial u^j}{\partial \tau^i}({\bf 1},v^i) + 1. \end{eqnarray*} The second sum is equal to $1$, because the unity in $T_t\mathcal{T}$ is $\sum_i \partial/\partial u^i$. Therefore, the change of variables $\widehat{\Psi}_t$ is an identification between the Fock space $\mathbb{C}_\epsilon[[q_0,q_1+{\bf 1}, q_2,\ldots]]$ and the space of formal series in the variables \eqref{Q}. Finally, by using formula \eqref{S:fock}, it is easy to see that $\widehat{S}_t^{-1}$ is a map from the Fock space $\mathbb{C}_\epsilon[[q_0,q_1+{\bf 1}, q_2,\ldots]]$ to the space of formal series in $q_0-\tau,q_1+{\bf 1}, q_2,\ldots$. Here one needs to use that $S_1{\bf 1}=\tau$. This follows from the differential equations \eqref{de_S}, which imply $\partial_i S_1{\bf 1} = v_i$, and the initial condition $S_{\bf 0}=1$, which implies $S_1({\bf 0})=0$. More details can be found in \cite{G1}.
\sectionnew{Symplectic Action on Vertex Operators} Let $S=1+S_1 z^{-1}+S_2 z^{-2}+\cdots$ and $R=1+R_1z+R_2z^2+\cdots$ be two symplectic transformations of $\H$. The adjoint action of their quantizations $\widehat{S}$ and $\widehat{R}$ on a vertex operator of the type $e^{\widehat{\phi}(\lambda,s)}$, where $\phi(\lambda,s)=\phi(\lambda+s)-\phi(\lambda)$, is given by the following formulas (see \cite{G2}): \begin{equation}\label{S} \widehat{S} \,e^{\widehat{\phi}(\lambda,s)}\, \widehat{S}^{-1} = e^{W(\phi(\lambda,s)_+,\phi(\lambda,s)_+)/2} e^{(S\phi(\lambda,s))\sphat}, \end{equation} where $W$ is the quadratic form defined by \eqref{W}, and \begin{equation}\label{R} \widehat{R}^{-1} \,e^{\widehat{\phi}(\lambda,s)}\, \widehat{R} = e^{V\phi(\lambda,s)_-^2/2} e^{(R^{-1}\phi(\lambda,s))\sphat}, \end{equation} where $\phi(\lambda,s)_-$ is identified with the linear function $\Omega(\phi(\lambda,s)_-,\ )$ and $V$ is the second order differential operator defined by \eqref{V}. \subsection{Remark} In our setting the exponents of the vertex operators have the form: \begin{eqnarray*} \phi(\lambda,s)=\sum_{n\in\mathbb{Z}} \(I^{(n)}(\lambda+s)-I^{(n)}(\lambda)\)(-z)^n,\quad I^{(n+1)}(\lambda)=\partial_\lambda I^{(n)}(\lambda). \end{eqnarray*} In formula \eqref{S}, the expression $S\phi(\lambda,s)$ could be interpreted in the formal $\lambda^{-1}$-adic sense, provided that each period $I^{(n)}(\lambda)$ expands as a Laurent series near $\lambda=\infty$. Similarly, formula \eqref{R} admits a formal $(\lambda-u)$-adic interpretation, provided that $I^{(n)}$ expands as a Laurent series near $\lambda=u$ for some $u\in \mathbb{C}.$ \subsection{Symplectic translation} We recall the fields $\phi_a(t,\lambda,s)\in \H$, which were introduced in the introduction.
They satisfy the following differential equations: \begin{equation}\label{picard:fuchs} z\partial_i\, \phi_a(t,\lambda,s) = v_i\bullet\, \phi_a(t,\lambda,s),\ 1\leq i\leq N, \quad z\partial_\lambda \, \phi_a(t,\lambda,s) = \phi_a(t,\lambda,s). \end{equation} The last equation is trivial. The rest follow from the fact that the form $\omega=dx$ is {\em primitive} in the sense of K. Saito \cite{SaK} (see also \cite{He}, Chapter 11). An elementary proof, using only the Cauchy residue theorem, can be found in \cite{MT}. The argument in \cite{MT} is for the space of miniversal deformations of a Laurent polynomial in one variable, but after a minor modification it works for the $A_N$ singularity as well. \begin{lemma}\label{S:phi} The following formula holds: $S_t\phi_a(0,\lambda,s)=\phi_a(t,\lambda,s).$ \end{lemma} \proof Both $S_t\phi(0,\lambda,s)$ and $\phi(t,\lambda,s)$ satisfy the same ordinary differential equations in $t$, and since $S_{\bf 0}=1$, they satisfy the same initial condition at $t={\bf 0}$. \qed Let $t\in \mathcal{T}$ be a semisimple point. Let $u^i=u^i(t)$ be one of the critical values. We fix a pair of one-point cycles $a,b\in H_0(f_t^{-1}(\lambda);(1/2)\mathbb{Z})$ such that $\beta:=(a-b)/2$ is a {\em vanishing cycle}, i.e., $\beta$ vanishes when transported from $(t,\lambda)$ to $(t,u^i)$. Notice that in the case of the $A_1$ singularity $u^i(t)=t$ and \begin{eqnarray*} \phi_\beta (t,\lambda,s)=\sum_n (-z\partial_\lambda)^n \Big(\frac{1}{\sqrt{2(\lambda+s-t)}} -\frac{1}{\sqrt{2(\lambda-t)}}\Big). \end{eqnarray*} We will denote the above sum by $\phi_{A_1}(t,\lambda,s).$ \begin{lemma}\label{R:phi} For $\lambda$ near $u^i$ the following formula holds: \begin{eqnarray*} \phi_\beta(t,\lambda,s)= \Psi_t R_t \phi_{A_1}(u^i,\lambda,s)e_i=\Psi_t R_t e^{U_t/z}\, \phi_{A_1}(0,\lambda,s). \end{eqnarray*} \end{lemma} \proof According to A.
Givental (see Theorem 3 in \cite{G2}) we have: \begin{eqnarray*} \sum_{n\in\mathbb{Z}} I^{(n)}_\beta (t,\lambda)(-z)^n =\Psi_t R_t \sum_{n\in\mathbb{Z}} (-z\partial_\lambda)^n \frac{1}{\sqrt{2(\lambda-u^i)}}\, e_i. \end{eqnarray*} Replacing $\lambda$ with $\lambda+s$ and then subtracting the above formula we obtain the formula stated in the lemma. \qed \subsection{Phase factors and the symplectic structure $\Omega$} The phase factors in \eqref{S} and \eqref{R} can be expressed in terms of the symplectic structure. \begin{lemma}\label{v:omega} Let us identify the vectors $\mathbf{f},\overline{\mathbf{f}}\in \H_-$ with linear functions via $\Omega(\mathbf{f},\ )$ and $\Omega(\overline{\mathbf{f}},\ )$. Then we have \begin{eqnarray*} V\mathbf{f}\overline{\mathbf{f}} = \Omega\([R^{-1}\mathbf{f}]_+,[R^{-1}\overline{\mathbf{f}}]_-\). \end{eqnarray*} \end{lemma} \proof Using the definition of $V_{kl}$ and induction on $l$, it is easy to prove that \begin{eqnarray*} V_{kl}=(-1)^{l+1}R_{k+l+1}+(-1)^{l}R_{k+l}R_1^T+\ldots+(-1)^{l+1-l}R_{k+1}R_l^T. \end{eqnarray*} Putting $$ \mathbf{f}=\sum_{k\geq 0} f_k(-z)^{-k-1}\quad\mbox{and}\quad \overline{\mathbf{f}}=\sum_{l\geq 0} \overline{f}_l(-z)^{-l-1} $$ we obtain \begin{eqnarray*} V\mathbf{f}\overline{\mathbf{f}} = \sum_{k,l} (f_k, V_{kl}\overline{f}_l)= \sum_{k,l}\sum_{i=0}^l (-1)^{l+1-i}(R_{k+l+1-i}^Tf_k,R_i^T\overline{f}_l). \end{eqnarray*} The last expression should be compared with \begin{eqnarray*} {\rm res}_{z=0}\([R^T(z)\mathbf{f}(-z)]_+,[R^T(-z)\overline{\mathbf{f}}(z)]_-\)dz. \end{eqnarray*} The lemma follows, because $R^T(-z)=R^{-1}(z).$ \qed The next lemma can be proved by a similar argument. \begin{lemma}\label{w:omega} For $\mathbf{q},\overline{\mathbf{q}}\in \H_+$, the following formula holds: \begin{eqnarray*} W(\mathbf{q},\overline{\mathbf{q}})=\Omega([S\mathbf{q}]_+,[S\overline{\mathbf{q}}]_-).
\end{eqnarray*} \end{lemma} \subsection{The Phase form} The symplectic pairing $\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda',s)_-)$ does not make sense, because the coefficient in front of $s^k$ for a fixed integer $k$ is an infinite sum. However, if we pick $u\in \mathbb{C}\cup\{\infty\}$ and expand the periods $I^{(n)}_\alpha(t,\lambda)$, $I^{(n)}_\beta(t,\lambda')$ ($n\in \mathbb{Z}$) as Laurent series near $\lambda=u$ and $\lambda'=u$ respectively, then the symplectic pairing determines a well defined element in the following space of formal Laurent series in two variables: \begin{eqnarray*} \mathbb{C}((\lambda-u,\lambda'-u)):=\mathbb{C}((\lambda-u))((\lambda'-u))\cap \mathbb{C}((\lambda'-u))((\lambda-u)), \end{eqnarray*} where $\lambda-u$ and $\lambda'-u$ should be replaced by $\lambda^{-1}$ and $(\lambda')^{-1}$ if $u=\infty.$ \begin{lemma}\label{primitive} Let $\alpha$ and $\beta$ be arbitrary cycles. Then \begin{eqnarray*} d\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda',s)_-) = I^{(0)}_\alpha(t,\lambda,s)\bullet I^{(0)}_\beta(t,\lambda',s) , \end{eqnarray*} where $d$ is the de Rham differential on $\mathcal{T}.$ \end{lemma} \proof By definition, $\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda',s))$ equals \begin{eqnarray*} \sum_{k=0}^\infty (-1)^{k+1}\(I^{(k)}_\alpha(t,\lambda,s),I^{(-k-1)}_\beta(t,\lambda',s)\). \end{eqnarray*} On the other hand, using the differential equations \eqref{picard:fuchs} we get: $$ \partial_i I^{(n)}_\alpha(t,\lambda,s) = -v_i\bullet_t I^{(n+1)}_\alpha(t,\lambda,s). $$ So if we differentiate with respect to $\tau^i$ we get \begin{eqnarray*} \sum_{k=0}^\infty (-1)^{k}\Big(\(v_i\bullet I^{(k+1)}_\alpha(t,\lambda,s),I^{(-k-1)}_\beta(t,\lambda',s)\) +\(I^{(k)}_\alpha(t,\lambda,s),v_i\bullet I^{(-k)}_\beta(t,\lambda',s)\)\Big) . \end{eqnarray*} The only term in the above sum that survives is $ \(I^{(0)}_\alpha(t,\lambda,s),v_i\bullet I^{(0)}_\beta(t,\lambda',s)\). 
$ \qed \medskip Let $t$ be a semisimple point and $u^i(t)$ be one of the critical values of $f_t$. Given $\lambda$ sufficiently close to $u^i$ we pick a pair $a$ and $b$ of one-point cycles from $H_{0}(f_t^{-1}(\lambda),(1/2)\mathbb{Z})$ such that $\beta=(a-b)/2$ is a vanishing cycle. \begin{lemma}\label{V:bb} The following formula holds: \begin{eqnarray*} d V_t(\phi_\beta(t,\lambda,s))_-^2 = -\mathcal{W}_{\beta,\beta}(t-\lambda\,{\bf 1})+ \Big(\frac{1}{\sqrt{2(\lambda+s-u^i)}}-\frac{1}{\sqrt{2(\lambda-u^i)}}\Big)^2du^i, \end{eqnarray*} where $d$ is the de Rham differential on $\mathcal{T}$. \end{lemma} \proof Put $\mathbf{f}=\Psi^{-1}\phi_\beta(t,\lambda,s)$ and $\mathbf{f}'=\Psi^{-1}\phi_\beta(t,\lambda',s)$ for brevity. According to \leref{v:omega}, $ V_t\mathbf{f}_-\mathbf{f}'_-$ is equal to: \begin{eqnarray*} \Omega([R_t^{-1}\mathbf{f}_-]_+,[R_t^{-1}\mathbf{f}'_-]_-) = \Omega([R_t^{-1}\mathbf{f}_-]_+,[R_t^{-1}\mathbf{f}']_-) =\Omega([R_t^{-1}\mathbf{f}]_+-R_t^{-1}\mathbf{f}_+,R_t^{-1}\mathbf{f}' ). \end{eqnarray*} Therefore, we get \begin{equation}\label{v:omega1} \Omega([R^{-1}\mathbf{f}]_+,[R^{-1}\mathbf{f}']_-)-\Omega(\mathbf{f}_+,\mathbf{f}'_-). \end{equation} On the other hand (see \leref{R:phi}) \begin{eqnarray*} R_t^{-1}\mathbf{f} = \sum_{n} I_{A_1}^{(n)}(u^i,\lambda,s)(-z)^n. \end{eqnarray*} For the $A_1$ singularity we have \begin{eqnarray*} I^{(0)}_{A_1}(u^i,\lambda,s)= \Big( \frac{1}{\sqrt{2(\lambda+s-u^i)}}-\frac{1}{\sqrt{2(\lambda-u^i)}}\Big) \end{eqnarray*} and thus the lemma follows by applying \leref{primitive} and setting $\lambda'=\lambda$. \qed A similar argument yields that (see also Section 7 in \cite{G2}) \begin{eqnarray*} dW_t(\phi_a(0,\lambda,s)_+,\phi_a(0,\lambda,s)_+)= \mathcal{W}_{a,a}(t-\lambda{\bf 1}). \end{eqnarray*} On the other hand $W_{\bf 0}=0$ because $S_{\bf 0}=1$.
Therefore, we get \begin{equation}\label{w:aa} W_t(\phi_a(0,\lambda,s)_+,\phi_a(0,\lambda,s)_+)=\int_{-\lambda{\bf 1}}^{\tau-\lambda{\bf 1}}\mathcal{W}_{a,a}. \end{equation} Finally, in a neighborhood of $\lambda=u^i$ we have the following vertex operator factorization (see Proposition 4 in \cite{G2}): \begin{eqnarray*} \Gamma_a(t,\lambda,s) = e^{K_a}\Gamma_\alpha(t,\lambda,s)\Gamma_\beta(t,\lambda,s), \end{eqnarray*} where $\alpha=(a+b)/2$ and \begin{eqnarray*} K_a=-\Omega\Big( (\phi_\alpha(t,\lambda,s))_+,(\phi_\beta(t,\lambda,s))_-\Big). \end{eqnarray*} Notice that $\phi_\alpha$ is analytic near $\lambda=u^i$. This is because each period $I^{(0)}_\alpha(t,\lambda)$ expands in powers of $(\lambda-u^i)^{1/2}$ and has a pole of order at most $1/2$. On the other hand, the cycle $\alpha$ is invariant under the local monodromy and so the period $I^{(0)}_\alpha(t,\lambda)$ must be single-valued near $\lambda=u^i$, i.e., the corresponding expansion has only integral powers of $\lambda-u^i$. Now it is easy to see that the symplectic pairing of $\phi_{\alpha +}$ and $\phi_{\beta -}$ is well defined in the $(\lambda-u^i)$-adic sense. It follows from \leref{primitive} with $\lambda'=\lambda$ that $dK_a = -\mathcal{W}_{\alpha,\beta}(t-\lambda{\bf 1}).$ \subsection{Periods of the phase form} A crucial step in our proof of \thref{t1} is that certain periods of the phase form vanish.
Using the map \begin{equation}\label{map} \mathcal{T}\times \mathbb{C}\rightarrow \mathcal{T},\quad (t,\lambda)\mapsto t-\lambda{\bf 1} \end{equation} we interpret the integral $\int_\gamma \mathcal{W}_{\alpha,\beta}$, where $\gamma$ is a path in $\mathcal{T}\times\mathbb{C}$ and $\alpha$, $\beta$ are cycles, as $\int_{\gamma} \widetilde{\mathcal{W}}_{\alpha,\beta}$, where $\widetilde{\mathcal{W}}_{\alpha,\beta}$ is the pullback via \eqref{map} of $\mathcal{W}_{\alpha,\beta}.$ \begin{lemma}\label{period:beta} Let $\gamma$ be a small loop around a generic point on the discriminant and let $\beta$ be a cycle vanishing at that point. Then $\int_\gamma \mathcal{W}_{\beta,\beta} = 0.$ \end{lemma} \proof We may assume that $\gamma$ is in the $\lambda$-plane $\{t\}\times \mathbb{C}.$ By definition, the restriction of the pullback via \eqref{map} of $\mathcal{W}_{\beta,\beta}$ to $\gamma$ has the form: \begin{eqnarray*} \mathcal{W}_{\beta,\beta} = -\sum_{n=2}^\infty\Big(\ \sum_{\substack{k\geq 1,l\geq 1\\k+l=n}} \ \frac{1}{k!l!}\,\(I^{(k)}_\beta(t,\lambda), I^{(l)}_\beta(t,\lambda)\) \Big)s^n\ d\lambda. \end{eqnarray*} Using that $I^{(k)}_\beta(t,\lambda)=\partial_{\lambda}^kI^{(0)}_\beta(t,\lambda)$ and integration by parts we get \begin{eqnarray*} \int_\gamma \mathcal{W}_{\beta,\beta} = -\sum_{n=2}^\infty\int_\gamma\Big(\ \sum_{\substack{k\geq 1,l\geq 1\\k+l=n}} \ \frac{(-1)^k}{k!l!}\,\(I^{(0)}_\beta(t,\lambda), I^{(k+l)}_\beta(t,\lambda)\) \Big)s^n\ d\lambda. \end{eqnarray*} On the other hand, \begin{eqnarray*} 0=(1-1)^n=\sum_{k=0}^n {n \choose k}(-1)^k \end{eqnarray*} and thus \begin{eqnarray*} \sum_{k+l=n}\frac{(-1)^k}{k!l!}=-\frac{1}{n!}(1+(-1)^n).
\end{eqnarray*} We get that $n$ must be even, say $n=2m$, and that the period of the phase form turns into \begin{eqnarray*} \sum_{m=1}^\infty\int_\gamma \(I^{(0)}_\beta(t,\lambda), I^{(2m)}_\beta(t,\lambda)\) \frac{2s^{2m}}{(2m)!} \ d\lambda = \int_\gamma \(I^{(0)}_\beta(t,\lambda), I^{(0)}_\beta(t,\lambda,s)+I^{(0)}_\beta(t,\lambda,-s)\)d\lambda . \end{eqnarray*} On the other hand, according to \leref{R:phi} we have \begin{eqnarray*} I^{(0)}_\beta(t,\lambda)=\Psi_t\sum_k R_k \partial_\lambda^{-k} I^{(0)}_{A_1}(u^i,\lambda),\quad I^{(0)}_\beta(t,\lambda,\pm s)=\Psi_t\sum_l R_l \partial_\lambda^{-l} I^{(0)}_{A_1}(u^i,\lambda,\pm s). \end{eqnarray*} However, $\Psi_t$ is an isometry and $R^T_t(-\partial_\lambda)R_t(\partial_\lambda)=1$, because $R_t$ is a symplectic transformation. Using these two facts and integration by parts we get that the period equals \begin{eqnarray*} \int_\gamma \(I^{(0)}_{A_1}(u^i,\lambda),I^{(0)}_{A_1}(u^i,\lambda, s)\)d\lambda + \int_\gamma \(I^{(0)}_{A_1}(u^i,\lambda),I^{(0)}_{A_1}(u^i,\lambda, -s)\)d\lambda. \end{eqnarray*} Expanding in powers of $\pm s$ via Taylor's formula we get \begin{eqnarray*} I^{(0)}_{A_1}(u^i,\lambda,\pm s) = \sum_{k\geq 1} \frac{(\mp s)^k}{k!} (2k-1)!! (2(\lambda-u^i))^{-1/2-k}\, e_i. \end{eqnarray*} Therefore, \begin{eqnarray*} \int_\gamma \, \(I^{(0)}_{A_1}(u^i,\lambda),I^{(0)}_{A_1}(u^i,\lambda, \pm s)\) \, d\lambda = \sum_{k\geq 1} \frac{(\mp s)^k}{k!} (2k-1)!! \int_\gamma (2(\lambda-u^i))^{-1-k} d\lambda = 0. \end{eqnarray*} The last equality holds because for $k\geq 1$ the integrand $(2(\lambda-u^i))^{-1-k}$ has zero residue at $\lambda=u^i$. \qed Now we are ready to prove the following global vanishing property. \begin{lemma}\label{period:alpha} Let $\gamma$ be any loop in $\mathcal{T}\times \mathbb{C}$ avoiding the discriminant and let $\alpha$ be a cycle invariant under the parallel transport along $\gamma$. Then $\int_\gamma\mathcal{W}_{\alpha,\alpha}=0.$ \end{lemma} \proof The proof here follows the argument in \cite{G2}, Proposition 1.
The parallel transport along a closed loop determines a monodromy transformation in $H_0(f_{\bf 0}^{-1}(1);\mathbb{C})$. It is not hard to see that all monodromy transformations form a group isomorphic to the quotient of the braid group on $N+1$ strands by the relations: $r_1^2=1,\ldots, r_N^2=1$, where $r_i$ is the braid whose strands are straight except for the ones from $i$ to $i+1$ and from $i+1$ to $i$. In fact, all vanishing cycles form a root system (in $H_0(f_{\bf 0}^{-1}(1);\mathbb{R})\cong \mathbb{R}^{N+1}$) of type $A_N$ and the monodromy group is isomorphic to $S_{N+1}$ -- the Weyl group of the root system of type $A_N$. Now we use the fact that if a vector $\alpha\in \mathbb{R}^{N+1}$ is invariant under a permutation $\sigma$ then $\sigma$ can be decomposed into a product of transpositions that leave $\alpha$ invariant. This means that our path $\gamma$ can be decomposed into $\gamma_1'\ldots\gamma_k'\gamma_1^2\ldots\gamma_l^2$ where $\gamma_i'$ and $\gamma_j$ are simple loops around the discriminant and the cycle $\alpha$ is invariant along $\gamma_i'.$ We have $\int_{\gamma_i'}\mathcal{W}_{\alpha,\alpha}=0$, because $\alpha$ is invariant along $\gamma_i'$, which in particular implies that the periods $I_\alpha^{(n)}(t,\lambda)$ -- hence the phase form $\mathcal{W}_{\alpha,\alpha}$ -- are holomorphic. The loop $\gamma_j$ goes around a generic point $(t,u^i(t))$ on the discriminant. Let $\beta$ be a vanishing cycle. Put $\alpha=\alpha'+\langle\alpha,\beta\rangle \beta/2,$ where $\alpha'$ is invariant along $\gamma_j$ and $\langle\ ,\ \rangle$ is the intersection pairing. Then we have: \begin{enumerate} \item $\int_{\gamma_j^2}\mathcal{W}_{\alpha',\alpha'} = 0$, because $\mathcal{W}_{\alpha',\alpha'}$ is holomorphic near $(t,u^i(t))$, \item $\int_{\gamma_j^2}\mathcal{W}_{\alpha',\beta} = 0$, because the parallel transport along $\gamma_j$ transforms $\beta$ into $-\beta$. 
So the period may be written as: \begin{eqnarray*} \int_{\gamma_j^2}\mathcal{W}_{\alpha',\beta} = \int_{\gamma_j}\mathcal{W}_{\alpha',\beta} + \int_{\gamma_j}\mathcal{W}_{\alpha',-\beta} =0. \end{eqnarray*} \item $\int_{\gamma_j^2}\mathcal{W}_{\beta,\beta} = 0$, according to \leref{period:beta}. \end{enumerate} It follows that $\int_{\gamma_j^2}\mathcal{W}_{\alpha,\alpha} = 0.$ \qed \sectionnew{Proof of \thref{t1}} We split the proof of \thref{t1} into 3 steps. \subsection{From descendants to ancestors.}\label{step1} The following formal series is called the {\em total ancestor potential} of the $A_N$-singularity: \begin{eqnarray*} \mathcal{A}_t:=\widehat{\Psi}_t\widehat{R}_te^{\widehat{U_t/z}}\,\prod_{i=1}^N\mathcal{D}_{\rm pt}(\epsilon\sqrt{\Delta_i};Q^i\sqrt{\Delta_i}). \end{eqnarray*} We have $\mathcal{D}_{A_N}=e^{F^{(1)}(t)}\widehat{S}_t^{-1}\mathcal{A}_t$. Using the conjugation formula \eqref{S}, \leref{S:phi}, and formula \eqref{w:aa} we get that the proof of \thref{t1} amounts to proving that the series \begin{equation}\label{anc:constraints} \sum_{a} c_a(t,\lambda,s)\Gamma_a(t,\lambda,s)\mathcal{A}_t \end{equation} is regular in $\lambda,$ where $c_a(t,\lambda,s)=e^{\int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{a,a}}.$ The integration path in $c_a$ is the composition of the path from $-{\bf 1}$ to $-\lambda{\bf 1}$ used in the definition of $c_a(0,\lambda,s)$ and an arbitrary path from $-\lambda{\bf 1}$ to $t-\lambda{\bf 1}.$ The regularity of \eqref{anc:constraints} is interpreted the same way as that of \eqref{A_N:constraints}. \subsection{ From regularity at $\lambda=\infty$ to regularity at the critical values of $f_t$.} \label{step2} The total ancestor potential $\mathcal{A}_t$ has the following crucial property. 
If we write \begin{eqnarray*} \mathcal{A}_t =\exp \sum_{g=0}^\infty \overline{\mathcal{F}}^{(g)}(\mathbf{q})\epsilon^{2g-2}\quad \in \quad \mathbb{C}[[\mathbf{q},\epsilon^{\pm 1}]], \end{eqnarray*} then each $\overline{\mathcal{F}}^{(g)}$ satisfies the following $(3g-3+r)$-jet constraints: \begin{eqnarray*} \left. \frac{\partial^{r}\overline{\mathcal{F}}^{(g)}}{\partial q_{k_1}^{i_1}\ldots \partial q_{k_r}^{i_r}}\right|_{\mathbf{q}(z)=-z} = 0 \quad \mbox{ if } k_1+\ldots + k_r\geq 3g-3+r. \end{eqnarray*} Notice that \begin{eqnarray*} \Gamma_a(t,\lambda,s)\mathcal{A}_t \quad\in\quad \mathbb{C}[[\mathbf{q},\epsilon^{\pm 1},\lambda^{\pm 1},s]]. \end{eqnarray*} Given a multi-index of the type $I=\{(k_1,i_1),\ldots,(k_r,i_r)\}$, we put $\mathbf{q}^I=q_{k_1}^{i_1}\ldots q_{k_r}^{i_r}$ and we say that $I$ has length $l(I):=r.$ \begin{lemma}\label{tameness} The coefficient in front of each $\mathbf{q}^I\,\epsilon^G\, s^M$ in $\Gamma_a(t,\lambda,s)\mathcal{A}_t$ depends polynomially on finitely many periods $I^{(n)}_a(t,\lambda)$. \end{lemma} \proof For brevity, put $\phi=\phi_a(t,\lambda,s)$. Using that the action of the vertex operator on the Fock space is given by \eqref{vop:action} we get \begin{equation}\label{vop:anc} \Gamma_a(t,\lambda,s)\mathcal{A}_t=\exp\Big( \frac{1}{\epsilon}\Omega(\mathbf{q}(z),\phi_-) + \sum_{g=0}^\infty \overline{\mathcal{F}}^{(g)}(\mathbf{q}+\epsilon \phi_+)\epsilon^{2g-2}\Big). \end{equation} Expanding the genus-$g$ term in the above sum in powers of $\phi_+$ we get: \begin{eqnarray*} \sum \frac{1}{|{\rm Aut}|}\epsilon^{2g-2+r} \frac{ \partial^r\overline{\mathcal{F}}^{(g)} } {\partial q_{k_1}^{i_1}\ldots \partial q_{k_r}^{i_r} }(\mathbf{q})(I_a^{(k_1)},d\tau^{i_1})\ldots (I_a^{(k_r)},d\tau^{i_r}), \end{eqnarray*} where the sum is over all multi-indices $\{(k_1,i_1),\ldots, (k_r,i_r)\}$, ordered lexicographically, and $|{\rm Aut}|$ is the corresponding number of index-automorphisms.
Since we are interested in the coefficient in front of $\epsilon^{G}$ for a fixed $G$, we get that $g\leq G$ and $r\leq G$, i.e., there are only finitely many combinations of genus-$g$ terms and partial derivatives of order $r$ which contribute to our coefficient. On the other hand, among all monomials in $\overline{\mathcal{F}}^{(g)}$ only the ones of length less than or equal to $r+l(I)$ could contribute, and thus due to the $(3g-3+r)$-jet property we have $k_i\leq 3g-3+r+l(I)$. We get that we have finitely many choices for $k_1,\ldots,k_r$. Finally, since \begin{eqnarray*} I^{(k_i)}_a(t,\lambda,s)=\sum_{n\geq 1} I^{(k_i+n)}_a(t,\lambda)\frac{s^n}{n!} \end{eqnarray*} and we are interested in the coefficient in front of a fixed $s^M$, we get that the coefficient in front of $\mathbf{q}^I\epsilon^Gs^M$ in the exponent of \eqref{vop:anc} depends polynomially on the periods $I^{(n)}_a(t,\lambda)$. Notice that we also have the following relations: $M+G\geq 0$ and $M>0$. The remaining term $\epsilon^{-1}\Omega(\mathbf{q},\phi_-)$ also has this form. Therefore, the exponent in \eqref{vop:anc} has the following property: \begin{enumerate} \item[(*)]it is a series of the type $\sum c_{I,G,M}\mathbf{q}^I\epsilon^G s^M,$ where the sum is over all multi-indices $I$, integers $M\geq 1$, and integers $G\geq -M$, whose coefficients $c_{I,G,M}$ depend polynomially on finitely many periods $I^{(n)}(t,\lambda)$. \end{enumerate} It is straightforward to check that property (*) is preserved under exponentiation: a product of $n$ such terms contributes $M=M_1+\cdots+M_n\geq n$ and $G=G_1+\cdots+G_n\geq -M$, so only finitely many products contribute to the coefficient of a fixed monomial $\mathbf{q}^I\epsilon^G s^M$. The lemma follows. \qed Using \leref{tameness}, we get that the regularity condition is equivalent to the polynomiality of certain meromorphic functions, which are given by some polynomials of the periods $I^{(n)}(t,\lambda)$. A priori, such functions are defined and single-valued in the whole complex plane except possibly for $\lambda=u^i$ ($1\leq i\leq N$) -- the critical values of $f_t$.
So we have to prove that the expression \eqref{anc:constraints} has no pole at each of the critical values $\lambda=u^i$. \subsection{Regularity at the critical values.}\label{step3} Let $u^i$ be any of the critical values of $f_t$. All one-point cycles in the sum \eqref{anc:constraints}, except for two, which will be denoted by $a$ and $b$, are invariant under the local monodromy transformation around the discriminant near the point $(t,u^i)$. This means that all terms in the sum are regular at $\lambda=u^i$, except for the two exceptional ones. Therefore, we have to prove that the expression \begin{equation}\label{ui:constraints} \Big(c_a(t,\lambda,s)\Gamma_a(t,\lambda,s)+c_b(t,\lambda,s)\Gamma_b(t,\lambda,s)\Big)\, \mathcal{A}_t \end{equation} expanded as a formal series in $(\lambda-u^i)^{\pm 1}$ has no poles. Put $\alpha=(a+b)/2$ and $\beta=(a-b)/2$. Then we have the following vertex operator factorization (using $a=\alpha+\beta$ and $b=\alpha-\beta$): \begin{equation}\label{vop:factorization} \Gamma_a = e^{K}\Gamma_{\alpha}\Gamma_\beta,\quad \Gamma_b = e^{-K}\Gamma_\alpha\Gamma_{-\beta},\quad K=-\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-). \end{equation} The ancestor potential $\mathcal{A}_t$ has the form $\widehat{\Psi}_t\widehat{R}_t \prod_i \mathcal{A}_i$. In \eqref{ui:constraints}, we factorize the vertex operators according to \eqref{vop:factorization}. We can drop the vertex operator $\Gamma_\alpha$ because it is analytic near $\lambda=u^i$.
Then after conjugating by $\widehat{\Psi}_t\widehat{R}_t $ and using formula \eqref{R} and \leref{R:phi} we see that the regularity of \eqref{ui:constraints} is equivalent to the regularity of the following expression: \begin{eqnarray*} \Big(\overline{c}_a(t,\lambda,s)\Gamma_{+}(u^i,\lambda,s) + \overline{c}_b(t,\lambda,s)\Gamma_{-}(u^i,\lambda,s)\Big) \mathcal{A}_i, \end{eqnarray*} where $\Gamma_{\pm}(u^i,\lambda,s)=e^{\pm\widehat{\phi}_{A_1}(u^i,\lambda,s)_-}e^{\pm\widehat{\phi}_{A_1}(u^i,\lambda,s)_+}$ and the coefficients are given by the following formulas: \begin{eqnarray*} \log \overline{c}_a= \frac{1}{2}\int_{-{\bf 1}}^{t-\lambda\,{\bf 1}} \mathcal{W}_{a,a} - \Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-) + \frac{1}{2}V\phi_\beta(t,\lambda,s)_-^2 \end{eqnarray*} and \begin{eqnarray*} \log \overline{c}_b= \frac{1}{2}\int_{-{\bf 1}}^{t-\lambda\,{\bf 1}} \mathcal{W}_{b,b} + \Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-) + \frac{1}{2}V\phi_\beta(t,\lambda,s)_-^2. \end{eqnarray*} Let us rewrite the first formula in a slightly different way. Since $a=\alpha+\beta$, we have $\mathcal{W}_{a,a}=\mathcal{W}_{\alpha,\alpha}+2\mathcal{W}_{\alpha,\beta}+\mathcal{W}_{\beta,\beta}.$ We also add and subtract the integral: \begin{eqnarray*} \frac{1}{2}\int_{-1}^{u^i-\lambda} I^{(0)}_{A_1}(\xi,0,s)\bullet I^{(0)}_{A_1}(\xi,0,s) = \frac{1}{2}\int_{-1}^{u^i-\lambda}\Big(\frac{1}{\sqrt{2(s-\xi)}}-\frac{1}{\sqrt{2(-\xi)}} \Big)^2 d\xi. \end{eqnarray*} It should be clear that $\log \overline{c}_a$ is a sum of the following three functions.
The first one: \begin{equation}\label{ca:1} \frac{1}{2}\int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{\alpha,\alpha} + \frac{1}{2}\int_{-1}^{u^i-\lambda} I^{(0)}_{A_1}(\xi,0,s)\bullet I^{(0)}_{A_1}(\xi,0,s), \end{equation} the second one: \begin{equation}\label{ca:2} \int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{\alpha,\beta} - \Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-), \end{equation} and the third one: \begin{equation}\label{ca:3} \frac{1}{2}V\phi_\beta(t,\lambda,s)_-^2 +\frac{1}{2}\int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{\beta,\beta}-\frac{1}{2}\int_{-1}^{u^i-\lambda}\Big(\frac{1}{\sqrt{2(s-\xi)}}-\frac{1}{\sqrt{2(-\xi)}}\Big)^2d\xi. \end{equation} Notice that both \eqref{ca:2} and \eqref{ca:3} are single-valued near $\lambda=u^i$. Indeed, if $\gamma$ is a small loop -- based at $(t,\lambda)$ -- going around $(t,u^i)$, then the analytic continuation around $\gamma$ transforms $\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-)$ into $\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_{-\beta}(t,\lambda,s)_-).$ However, the differential of $\Omega(\phi_\alpha(t,\xi,s)_+,\phi_\beta(t,\xi,s)_-)$ is $\mathcal{W}_{\alpha,\beta}(t-\xi\,{\bf 1})$ (see \leref{primitive}), so using Stokes' theorem we get: \begin{equation}\label{anal_gain} \Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-) - \Omega(\phi_\alpha(t,\lambda,s)_+,\phi_{-\beta}(t,\lambda,s)_-) = \int_{\gamma}\mathcal{W}_{\alpha,\beta}. \end{equation} The analytic continuation along $\gamma$ changes the value of the integral in \eqref{ca:2} also by $ \int_{\gamma}\mathcal{W}_{\alpha,\beta}$, which implies that the function \eqref{ca:2} is single-valued near $\lambda=u^i$. Moreover, using \leref{primitive} and the Leibniz rule, we get that \eqref{ca:2} is independent of $t$ and $\lambda$, so \eqref{ca:2} is just a constant. A similar argument proves that \eqref{ca:3} is also a constant. It remains to analyze \eqref{ca:1}.
The first integral is an analytic function in $\lambda$, because the cycle $\alpha$ is invariant under the local monodromy. The exponential of the second integral is exactly the coefficient that our general theory would prescribe for the $\mathcal{W}_{A_1}$-constraints of the ancestor potential of the $A_1$ singularity (see Subsection \ref{step1}). For $\log \overline{c}_b$, the only difference is that in \eqref{ca:2} we have to replace $\beta$ with $-\beta$. Therefore, the regularity of \eqref{ui:constraints} would follow from the $\mathcal{W}_{A_1}$-constraints for the ancestor potential of the $A_1$ singularity, if we manage to prove that $\overline{c}_a= \overline{c}_b.$ The difference $\log \overline{c}_a - \log \overline{c}_b$ is equal to: \begin{equation}\label{log:c} \frac{1}{2}\int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{a,a} -\frac{1}{2}\int_{-{\bf 1}}^{t-\lambda{\bf 1}} \mathcal{W}_{b,b} -2\Omega(\phi_\alpha(t,\lambda,s)_+,\phi_\beta(t,\lambda,s)_-). \end{equation} By definition $\beta=(a-b)/2$, so we have $-2\phi_\beta(t,\lambda,s)_-=\phi_b(t,\lambda,s)_- - \phi_a(t,\lambda,s)_- .$ Therefore, using \leref{primitive} and Stokes' theorem we get that the last term in \eqref{log:c} -- with the coefficient $(-2)$ included -- equals $\int_{\gamma_i} \mathcal{W}_{\alpha,a},$ where $\gamma_i$ is a simple loop around the discriminant going around $(t,u^i)$. Using that $a=\alpha+\beta$ we get \begin{eqnarray*} \int_{\gamma_i} \mathcal{W}_{\alpha,a} = \int_{\gamma_i} \mathcal{W}_{\alpha,\beta} = \frac{1}{2}\int_{\gamma_i}\mathcal{W}_{a,a}, \end{eqnarray*} where for the last equality we used that $\int_{\gamma_i} \mathcal{W}_{\alpha,\alpha}=\int_{\gamma_i} \mathcal{W}_{\beta,\beta} = 0$. The first integral vanishes, because $\mathcal{W}_{\alpha,\alpha}$ is analytic near $\lambda=u^i$, while the second one is zero thanks to \leref{period:beta}.
Recall that the integration paths of the integrals in \eqref{log:c} have the form $C$ and $C\circ \gamma_0$, where $C$ is a path from $-{\bf 1}$ to $t-\lambda{\bf 1}$ and $\gamma_0$ is a loop, based at $-{\bf 1}$, such that the parallel transport of $a$ along $\gamma_0$ is $b$. Therefore, \eqref{log:c} can be interpreted as $\frac{1}{2}\oint_\gamma \mathcal{W}_{a,a}$, where $\gamma$ is the composition of the paths $\gamma_0^{-1}\circ C^{-1}\circ\gamma_i\circ C.$ On the other hand, the cycle $a$ is invariant along $\gamma$. Recalling \leref{period:alpha} we get $\int_\gamma\mathcal{W}_{a,a}=0$. It remains only to prove the $\mathcal{W}_{A_1}$-constraints for the ancestor potential of the $A_1$ singularity. However, they follow from Subsection \ref{step1} and the $\mathcal{W}_{A_1}$-constraints for $\mathcal{D}_{\rm pt}$ -- see \coref{W:A1}. \thref{t1} is proved. \qed
\section{Introduction} The need to study the diagonal of a Green function is directly connected with the generalized zeta-function (GZF) theory of elliptic differential operators \cite{S}, which is successfully applied to the regularization of operator determinants \cite{H}. Such elliptic problems appear, for example, as the Laplace transform of heat kernel equations, generally with variable coefficients, which are conventionally called potentials. An important application of the theory is the evaluation of semiclassical quantum corrections to nontrivial classical solutions of important nonlinear equations of field theory \cite{RS,Kon}. The corrections are intimately linked to the fundamental solutions of the related linear problems for the Laplace transform of the heat operator, whose diagonal enters the zeta-function definition. Such a regularization, for example, is realized in explicit form for kink solutions of the integrable Sine-Gordon equation \cite{kwant,ZL} as well as of the non-integrable Landau-Ginzburg ($\phi^4$) model \cite{Kon,ZL}. The kink solution (as well as the multikink one) corresponds in this context to the case of a point spectrum of the elliptic operators that appear after separation of variables. This paper is devoted to the investigation of a wide class of potentials whose spectrum is continuous with possible gaps -- more precisely, the so-called finite-gap potentials; see, e.g., the book \cite{BEB}. Such potentials, and especially the three-gap one, correspond to the basic three-wave interaction, which is important in the description of many quasiperiodic processes; exemplary applications can be found in \cite{BB}. In Sec. 2, starting from the Laplace transform of the heat equation with respect to time, we derive a nonlinear equation for the Green function diagonal, along ideas similar to those mentioned in \cite{C.H, BEB} in the context of other equations. We construct its solutions in cases where the potential is directly linked to polynomial functions in appropriate variables (Sec. 3).
The last section is devoted to examples, and the appendix contains a Mathematica program related to a class of illustrations. \section{The equation} We are interested in a class of problems connected with the Green function of a parabolic partial differential operator (the kernel of the heat equation) \begin{equation} \left(\frac{\partial}{\partial y}+\frac{\partial^2}{\partial x^2}-U(x)\right)g(x,x_0,y)=\delta(x-x_0)\delta(y), \end{equation} where $g(x,x_0,y) \in S$ is the fundamental solution over the Schwartz space $S$, and $\delta(x-x_0)$, $\delta(y)$ are Dirac delta-functions. After the Laplace transform in $y$: \begin{equation}\label{a} \left(p+\frac{\partial^2}{\partial x^2}-U(x)\right)\hat{g}(x,x_0,p)=\delta(x-x_0). \end{equation} The construction of the GZF in fact relies upon the Green function diagonal. In \cite{ZL} a statement about the diagonal $\hat{g}(p,x,x)=G(p,x)$ is used. Namely, $G(p,x)$ solves the equation \begin{equation}\label{Hermit} 2GG''_{xx} - (G'_x)^2 - 4(U(x)-p)G^2+1=0 \end{equation} on condition that $U(x)$ is bounded. The equation resembles one derived by Hermite \cite{C.H}. \textbf{Proof:} Let us consider the homogeneous equation \begin{equation}\label{aa} \left(p + \frac{\partial^2}{\partial x^2} -U(x)\right)f\left(p,x,x_{0}\right)=0. \end{equation} The fundamental solution of (\ref{a}) is built from its solutions by the standard procedure \cite{MW}. Equation (\ref{aa}) has two linearly independent solutions, for example $\phi$ and $\psi$, converging respectively at $-\infty$ and $+\infty$. One can represent $\hat{g}_{D}$ through $\phi$ and $\psi$ respectively for $x<x_0$ and $x>x_0$, with a sewing condition determined by equation (\ref{a}): \begin{equation}\label{b} \hat{g}_{D}(p,x,x_0)=\left\{\begin{array}{c} A(x_0)\phi(p,x),\ x\leq x_0 \\ B(x_0)\psi(p,x),\ x\geq x_0 \end{array}\right. 
\end{equation} From the continuity condition for $\hat{g}_{D}$ one gets $$A(x_0)\phi(p,x_0)=B(x_0)\psi(p,x_0),$$ which leads to: $$A(x_0)=C(x_0)\psi(p,x_0),$$ $$B(x_0)=C(x_0)\phi(p,x_0).$$ Due to the symmetry of the Green function with respect to exchanging $x$ and $x_0$, $C(x_0)$ is constant (later referred to as $C$). To obtain a condition for the derivatives of $\phi$ and $\psi$ one integrates (\ref{a}) over $x$ in an $\varepsilon$-neighbourhood of $x_0$: \begin{equation}\label{c} \int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p + \frac{\partial^2}{\partial x^2} -U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1, \end{equation} $$\left.\frac{\partial \hat{g}_{D}}{\partial x}\left(p,x,x_{0}\right)\right|_{x=x_0-\varepsilon}^{x_0+\varepsilon}+ \int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p-U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1,$$ $$\frac{\partial \phi}{\partial x}(p,x_0+\varepsilon)\ C\psi(p,x_0)- \frac{\partial \psi}{\partial x}(p,x_0-\varepsilon)\ C\phi(p,x_0)+ \int_{x_0-\varepsilon}^{x_0+\varepsilon} \left(p-U(x)\right)\hat{g}_{D}\left(p,x,x_{0}\right)dx=1.$$ In the $\varepsilon \rightarrow 0$ limit the above equation reduces to \begin{equation}\label{d1} \frac{\partial \phi}{\partial x}(p,x_0)\ C\psi(p,x_0)-\frac{\partial \psi}{\partial x}(p,x_0)\ C\phi(p,x_0)=1. \end{equation} Since equation (\ref{aa}) is linear, its solutions can be rescaled so that $C=1$. Then (\ref{d1}) reduces to \begin{equation}\label{e} \frac{\partial\phi}{\partial x}(p,x_0)\ \psi(p,x_0)=\frac{\partial\psi}{\partial x}(p,x_0)\ \phi(p,x_0)+1. \end{equation} The actual proof is carried out by inserting (\ref{b}) into (\ref{Hermit}).
For brevity, function arguments will be omitted and $'$ will denote the derivative with respect to $x$: $$2\psi \phi \left(\psi '' \phi + 2\psi' \phi' + \psi \phi '' \right)-\left(\psi '\phi+\psi \phi '\right)^2-4(U(x)-p)\psi^2 \phi^2 +1=0$$ $$2\psi^2 \phi \left(\phi''-(U(x)-p)\phi\right)+2\psi \phi^2 \left(\psi''-(U(x)-p)\psi\right)+4\psi' \phi'\psi \phi-\left(\psi '\phi+\psi \phi '\right)^2 +1=0.$$ Because of (\ref{aa}) the first two terms vanish. One also uses property (\ref{e}): $$4\psi' \phi'\psi \phi-\left(2\psi '\phi+1\right)^2+1=0,$$ $$4\psi' \phi'\psi \phi-4\psi'^2\phi^2-4\psi'\phi-1+1=0,$$ \begin{equation}\label{g} \psi' \phi'\psi \phi-\psi'^2\phi^2-\psi' \phi=0, \end{equation} \begin{equation} \psi'^2\phi^2+\psi'\phi-\psi'^2\phi^2-\psi' \phi=0. \end{equation} This concludes the proof. It is important to note that the derivation is general and does not rely on the nature of $U(x)$, as long as it is bounded. Its usefulness does, however, depend on a few properties of the potential. \section{Solution of the main equation} \subsection{Substitutions} We consider a class of solutions of equation (\ref{Hermit}), written in the form \begin{equation}\label{rep} G(p,x)=\frac{P(p,x)}{2\sqrt{Q(p)}}. \end{equation} This is most useful if there exists a change of variables $x\rightarrow z$, $U(x)\rightarrow u(z)$, in which $P$ and $Q$ are polynomials. The basic conditions for this to be possible are: \begin{enumerate} \item $u$ is a polynomial in $z$, \item $(z'_x)^2$ is a polynomial in $z$ (note that $z''_{xx}=\frac{1}{2}\frac{\partial}{\partial z}(z'_x)^2$). \end{enumerate} This does not ensure simplicity of the solutions, as will be shown further in the text. At this point it is important to notice that the second condition restricts $z(x)$, apart from a class of elementary functions, to elliptic and hyperelliptic functions; see, e.g., \cite{BEB}. 
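Before specializing the potential, equation (\ref{Hermit}) can be checked numerically for a concrete bounded potential: one integrates the homogeneous equation (\ref{aa}) from both ends to build $\phi$ and $\psi$, rescales so that the sewing condition (\ref{d1}) holds with $C=1$, and evaluates the left-hand side of (\ref{Hermit}) on the diagonal $G=\phi\psi$. The following Python sketch does this for the illustrative choice $U(x)=1/(1+x^2)$ and $p=-4$; the potential, the interval and the step size are arbitrary choices made only for this example and are not taken from the text.

```python
import math

U = lambda x: 1.0 / (1.0 + x * x)   # an arbitrary bounded test potential
p = -4.0                            # a value of p off the spectrum
L = 15.0                            # half-width of the integration interval
n = 30000                           # number of RK4 steps
h = 2 * L / n                       # grid spacing

def rhs(x, f, df):
    """First-order system for the homogeneous equation: f'' = (U(x) - p) f."""
    return df, (U(x) - p) * f

def integrate(x0, f0, df0, step):
    """Fourth-order Runge-Kutta; returns f and f' at every grid point."""
    fs, dfs = [f0], [df0]
    x, f, df = x0, f0, df0
    for _ in range(n):
        k1 = rhs(x, f, df)
        k2 = rhs(x + step / 2, f + step / 2 * k1[0], df + step / 2 * k1[1])
        k3 = rhs(x + step / 2, f + step / 2 * k2[0], df + step / 2 * k2[1])
        k4 = rhs(x + step, f + step * k3[0], df + step * k3[1])
        f += step / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        df += step / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += step
        fs.append(f)
        dfs.append(df)
    return fs, dfs

kappa = math.sqrt(-p)                        # asymptotic decay rate (U ~ 0 far out)
phi, dphi = integrate(-L, 1.0, kappa, h)     # solution converging at -infinity
psi, dpsi = integrate(L, 1.0, -kappa, -h)    # solution converging at +infinity
psi, dpsi = psi[::-1], dpsi[::-1]            # put psi on the same grid as phi

mid = n // 2
W = dphi[mid] * psi[mid] - dpsi[mid] * phi[mid]  # Wronskian, cf. the sewing condition
psi = [v / W for v in psi]                       # rescale so that C = 1
G = [phi[i] * psi[i] for i in range(n + 1)]      # the diagonal G = phi * psi

# Residual of 2 G G'' - (G')^2 - 4 (U - p) G^2 + 1, derivatives by central differences.
residuals = []
for i in range(mid - 5000, mid + 5000, 250):
    x = -L + i * h
    G1 = (G[i + 1] - G[i - 1]) / (2 * h)
    G2 = (G[i + 1] - 2 * G[i] + G[i - 1]) / h ** 2
    residuals.append(2 * G[i] * G2 - G1 ** 2 - 4 * (U(x) - p) * G[i] ** 2 + 1)

assert max(abs(r) for r in residuals) < 1e-4
```

Note that the check works for any pair of independent solutions of the homogeneous equation once the Wronskian is normalised, which is precisely what the proof above uses.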
Let us assume that $(z'_x)^2$ and $u$ are polynomials in the variable $z$ of degrees $L+1$ and $K$ respectively; hence $z''_{xx}$ is a polynomial of degree $L$. We also assume that the coefficient of the highest-power term of $(z'_x)^2$ is equal to $1$ (which is always attainable). After the change of variables, the equation takes the form: \begin{equation} 2P(P''(z'_x)^2+P'z''_{xx})-\left(P'z'_x\right)^2-4(u(z)-p)P^2+4Q=0 \end{equation} We seek a solution of the form: \begin{equation}\label{sol} \begin{array}{c} P(p,z)=\sum_{n=0}^{N}p^n \sum_{l=0}^{M_n}P_{n,l} z^l\\ Q(p)=\sum_{n=0}^{2N+1} q_n p^n \end{array} \end{equation} \subsection{Classification} We will now proceed to analyse the solution by separating the equation with respect to powers of $p$ and $z$. The equation for $p^0, z^{2M_0+\max(K,L-1)}$ takes the following form: if $K>L-1$, \begin{equation} 4u_{K}P_{0,M_0}=0; \end{equation} if $K\leq L-1\quad \land\quad M_0\geq 1$, \begin{equation} 2P^2_{0,M_0}(M_0 (M_0-1)+\frac{L+1}{2}M_0)-P^2_{0,M_0}M^2_0-4u_{K}P^2_{0,M_0}\delta_{K,L-1}=0, \end{equation} \begin{equation}\label{M0} M_0^2+(L-1)M_0-4u_{K}\delta_{K,L-1}=0. \end{equation} Note that $M_0=0$ leads to $K=0$ (this case will be examined later in the text). Another conclusion is that $K\leq L-1$ is necessary for the sought type of solutions. Furthermore, this leads to the following, more precise conditions: \begin{equation} \begin{array}{c} L\geq 1 \qquad \mbox{(due to $K\geq 0$)} \\ K=L-1 \qquad \mbox{(for $M_0$ to have a positive value)} \end{array} \end{equation} The equation for $p^{2N+1}$, \begin{equation} 4\left(\sum_{l=0}^{M_N}P_{N,l} z^l\right)^2+4q_{2N+1}=0, \end{equation} leads to the following conclusions: \begin{equation} M_N=0\quad \land\quad P^2_{N,0}=-q_{2N+1}. \end{equation} Let us now look at the subsequent equations for descending powers of $p$. 
For $p^{2N}$ we have \begin{equation} -4u(z)P^2_{N,0}+8P_{N,0}\sum_{l=0}^{M_{N-1}}P_{N-1, l}z^l+4q_{2N}=0, \end{equation} which leads to \begin{equation}\label{2N} \sum_{l=0}^{M_{N-1}}P_{N-1, l}z^l=\frac{1}{2}\left(P_{N,0}u(z)-\frac{q_{2N}}{P_{N,0}}\right), \end{equation} \begin{equation} M_{N-1}=K. \end{equation} For $p^{2N-1}$ we get (for the highest power of $z$): under the condition $M_{N-2}>M_{N-1}+K$ ($z\neq x^2$), \begin{equation} 4P_{N,0}P_{N-2,M_{N-2}}=0; \end{equation} under the condition $M_{N-2}\leq M_{N-1}+K$, \begin{equation} \begin{array}{c} 2P_{N,0}P_{N-1,M_{N-1}}(M_{N-1}(M_{N-1}-1)+\frac{K+2}{2}M_{N-1}) \\ -2P_{N,0}P_{N-1,M_{N-1}}-8u_{K}P_{N,0}P_{N-1,M_{N-1}} \\ +8P_{N,0}P_{N-2,M_{N-2}}\delta_{M_{N-2},L-1}+4q_{2N-1}=0. \end{array} \end{equation} It is obvious that $M_{N-2}\leq M_{N-1}+K$ is a necessary condition for (\ref{sol}). Now we can consider a general rule for all remaining equations. Claim: $\forall_{0\leq k<N-1}\ M_{k}\leq (N-k)K$. Proof by induction: if $M_{k}> (N-k)K$, then the equation for $p^{N+k+1}$ and the highest power of $z$ takes the form \begin{equation} 4P_{N,0}P_{k,M_k}=0. \end{equation} If $M_{k}\leq (N-k)K$ and $\forall_{k<l<N}\ M_l=(N-l)K$ (the possibility giving the highest possible value of $M_k$), then the equation for $p^{N+k+1}$ and the highest power of $z$ takes the form \begin{equation} \begin{array}{c} 2P_{N,0}P_{k+1,M_{k+1}}(M_{k+1}(M_{k+1}-1)+\frac{K+2}{2}M_{k+1})\\ +2\sum_{n=k+2}^{N-1}P_{n,M_n}P_{N-n+k+1,M_{N-n+k+1}}(M_{N-n+k+1}(M_{N-n+k+1}-1) \\ +\frac{K+2}{2}M_{N-n+k+1}) -2\sum_{n=k+2}^{N-1}P_{n,M_n}P_{N-n+k+1,M_{N-n+k+1}}M_{N-n+k+1}M_{n} \\ -4u_{K}\sum_{n=k+1}^{N}P_{n,M_n}P_{N-n+k+1,M_{N-n+k+1}} \\ +4\sum_{n=k+1}^{N-1}P_{n,M_n}P_{N-n+k,M_{N-n+k}} \\ +4\delta_{M_{k},M_{k+1}+L-1}P_{N,0}P_{k,M_k}=0. \end{array} \end{equation} Thus the solution exists only if the claim holds. 
This leads directly to a minimal condition on $N$: \begin{equation}\label{Nmin} N\geq\frac{M_0}{K} \qquad \forall M_0\geq K \end{equation} In summary: the existence of solutions of the form (\ref{sol}) depends on the degree of the potential ($K$), the degree of $(z'_x)^2$ ($L\leq 1$ can give abnormal results) and the amplitude of the highest-power term of the potential; moreover, there exists a definite formula for the minimal value of $N$. \subsection{Solving algorithm}\label{Al} \begin{enumerate} \item If there exists a solution of the form (\ref{sol}) for a given $N$ (which fulfils requirement (\ref{Nmin})), it can be obtained in a straightforward manner. Since the actual value of $N$ is unknown, one starts with the minimal possible value $\frac{M_0}{K}$. After separating the equation with respect to powers of $p$, one analyzes the resulting equations starting from the highest power of $p$. All those equations can be written in a manner similar to (\ref{2N}) (here for $p^{N+n+1}$, with $n$ running from $N-1$ down to $0$): \begin{equation} P_{N,0}\sum_{l=0}^{(N-n)K}P_{n,l}z^l=F(P_{N,0},P_{N-1,K},P_{N-1,K-1},\dots,P_{n+1,0},z)-\frac{q_{N+n+1}}{2} \end{equation} where $F$ contains all elements of the equation not written explicitly. It is easy to see that we can obtain all coefficients except $P_{n,0}$ as solutions of linear equations, since all elements on the RHS except $q_{N+n+1}$ are known. As for the equation for $z^0$, it is more convenient to express $q_{N+n+1}$ in terms of $P_{n,0}$. Solving all equations down to $p^{N+1}$ gives us all $P_{n,l}$, as well as some of the $q_n$, in terms of $\{P_{n,0}\}_{n\in\{0,\dots,N\}}$. It is important to note that all calculations done up to that point remain valid even if the value of $N$ has to be increased. \item In the next step, we use the equation for $p^N$ to calculate the possible values of $P_{n,0}$. 
Again, if we start from the highest powers of $z$, we can obtain those coefficients as solutions of linear equations, since any element containing $P_{n,0}$ is proportional to at most $z^{K(N-n+1)}$ and any element containing $P^2_{n,0}$ is proportional to at most $z^{K(N-2n+1)}$ (a negative exponent means that such coefficients are not present in the equation for $p^N$). \item Subsequently we check the solution. If it does not hold, we increase the value of $N$, add the relevant components to the $P$ and $Q$ polynomials, calculate the values of all new coefficients and go back to step 2. \end{enumerate} Since the $q_n$ are not necessary for calculating any $P_{n,l}$ or for checking the solution, we can slightly simplify the algorithm by calculating the $q_n$ after all other coefficients. \noindent The described algorithm was implemented in Mathematica 7 and used to obtain the solutions presented in Section~\ref{examples}. \subsection{On uniqueness of solutions} Using the above algorithm, we obtain all coefficients as solutions of linear equations with respect to the sought coefficients. This leads to a simple conclusion: for a given $N$, solutions of the form (\ref{sol}) are unique, except for constant $u$, in which case the solution has no dependence on $z$ and, because of this, there is no relation between any of the $P_{n,0}$ (see Section~\ref{const}). As yet there is no method of finding all allowed $N$ for a given potential; therefore the uniqueness of solutions remains uncertain. \section{Exemplary solutions}\label{examples} \subsection{Constant potential}\label{const} Let us consider a constant potential \begin{equation}\label{con} U(x)=u \end{equation} It is obvious that no change of variables is necessary; thus we can use $z=x$ ($L=-1$). The equation for $p^0 z^{2M_0}$ immediately gives \begin{equation} -4uP^2_{0,M_0} +4q_0\delta_{M_0,0}=0 \end{equation} Since $u$ can take an arbitrary value if we shift the variable $p$, this equation holds only for $M_0=0$. 
This means that the simplest solution is \begin{equation} G(p,x)=\frac{1}{2\sqrt{u-p}} \end{equation} It is not the only one, as will be shown. Let us consider a solution for the potential (\ref{con}) and an arbitrary $N$. The equation for $p^{2N+1}$ gives, as usual, \begin{equation} M_N=0 \end{equation} \begin{equation} q_{2N+1}=-P^2_{N,0} \end{equation} The equation for $p^{2N}$ gives \begin{equation} M_{N-1}=0 \end{equation} \begin{equation} P_{N-1,0}=\frac{1}{2}\left(uP_{N,0}-\frac{q_{2N}}{P_{N,0}}\right) \end{equation} Subsequent equations look alike, with $M_i=0$ for all $i$. It is easy to see that one obtains a total of $3N+3$ parameters with only $2N+2$ equations, so one can obtain a solution for any value of $N$. In a sense this is a consequence of condition (\ref{Nmin}), since $\frac{0}{0}$ is an indeterminate symbol. \subsection{Triple-gap cnoidal potential} Let us take a solution one order higher than that for the $\phi^4$ cnoidal solution: \begin{equation} U(x)=-12m^2k^2\,\mathrm{cn}^2(mx;k) \end{equation} \begin{equation} z=\mathrm{cn}^2(mx;k) \end{equation} \begin{equation} \left(z'_x\right)^2=4m^2z(1-z)(1-k^2+k^2z) \end{equation} \begin{equation} z''_{xx}=2m^2(-3k^2z^2+(4k^2-2)z+1-k^2) \end{equation} \begin{equation} M_0=3 \end{equation} \begin{equation} N=3 \end{equation} The algorithm explained in Section~\ref{Al} gives (assuming $P_{3,0}=1$ for simplicity): \small \begin{eqnarray} P_{2} & = & -2m^2(7 + k^2 (-14 + 3 z)) \\ P_{1} & = & m^4(49 + k^2 (-256 + 78 z) + k^4 (256 + 3 z (-52 + 15 z))) \\ P_{0} & = & -3m^6(12 + 8 k^2 (-19 + 9 z) + 3 k^4 (128 + z (-121 + 45 z)) +\\ & & k^6 (-256 + 3 z (121 + 5 z (-18 + 5 z)))) \\ Q(p) & = & -((-4 + 8 k^2) m^2 + p) ((9 - 96 k^2 + 96 k^4) m^4 + 10 (-1 + 2 k^2) m^2 p + p^2)\\ & & ((9 - 42 k^2 + 33 k^4) m^4 + 2 (-5 + 7 k^2) m^2 p + p^2)\\ & & (3 k^2 (-8 + 11 k^2) m^4 + 2 (-2 + 7 k^2) m^2 p + p^2) \end{eqnarray} \normalsize \section{Conclusion} The described equation allows calculation of the heat equation's Green function diagonal in a 
straightforward manner. The developed algorithm should be especially useful for finding solutions for finite-gap potentials, which naturally emerge in periodic and quasi-periodic structures.
\section{Introduction.} \label{sec-intro} The central theme of this essay is the study of a special kind of element of the general linear group ${\rm GL}\,(d,q)$ of nonsingular $d\times d$ matrices over a finite field ${\rm GF}\,(q)$ of order $q$. We define these elements, which we call {\it primitive prime divisor elements} or ${\rm ppd}\,$-{\it elements}, and give good estimates of the frequencies with which they occur in ${\rm GL}\,(d,q)$ and the various classical matrix groups. Further we describe a classification of the subgroups of ${\rm GL}\,(d,q)$ which contain ${\rm ppd}\,$-elements, and explore their role in the design and analysis of a randomised algorithm for recognising the classical matrix groups computationally. Perhaps the best way to introduce these ideas, and to explain the reasons for investigating this particular set of research questions, may be to give a preliminary discussion of a generic recognition algorithm for matrix groups. We wish to determine whether a given subgroup $G$ of ${\rm GL}\,(d,q)$ contains a certain subgroup $\Omega$. We design the algorithm to study properties of randomly selected elements from $G$ in such a way that, if $G$ contains $\Omega$ then with high probability we will gain sufficient information from these elements to conclude with certainty that $G$ does contain $\Omega$. A skeleton outline of the algorithm could be written as follows. \begin{algorithm} \label{basicalg}% To recognise whether a given subgroup of ${\rm GL}\,(d,q)$ contains a certain subgroup $\Omega$. \begin{description} \item[Input:] $G=\langle X\rangle\le{\rm GL}\,(d,q)$\quad and possibly some extra information about $G$. \item[Output:] Either \begin{description} \item[(a)] $G$ contains the subgroup $\Omega$, or \item[(b)] $G$ does not contain $\Omega$. \end{description} \end{description} \end{algorithm} If Algorithm~\ref{basicalg} returns option (a) then $G$ definitely contains $\Omega$. 
However if option (b) is returned there is a possibility that this response is incorrect. In other words Algorithm~\ref{basicalg} is a {\it Monte Carlo algorithm}. It proceeds by making a sequence of random selections of elements from the group $G$, seeking a certain kind of subset $E$ of $G$, which if found will greatly assist in deciding whether or not $G$ contains $\Omega$. The essential requirements for $E$ are two-fold: \begin{description} \item[1.] If $G$ contains a subset $E$ with the required properties, then either $G$ contains $\Omega$, or $G$ belongs to a short list of other possible subgroups of ${\rm GL}\,(d,q)$ (and the algorithm must then distinguish subgroups in this list from subgroups containing $\Omega$). \item[2.] If $G$ contains $\Omega$, then the event of {\it not} finding a suitable subset $E$ in $G$ after a reasonable number $N(\varepsilon)$ of independent random selections of elements from $G$ has probability less than some small pre-assigned number $\varepsilon$. \end{description} In order to make the first requirement explicit, we need a classification of the subgroups of ${\rm GL}\,(d,q)$ which contain a suitable subset $E$. Similarly in order to make the second requirement explicit, we need good estimates for the proportions of ``$E$-type elements'' in groups containing $\Omega$. Moreover, if these two requirements are to lead to an efficient algorithm for recognising whether $G$ contains $\Omega$, the proportions of $E$-type elements in groups containing $\Omega$ must be fairly large to guarantee that we have a good chance of finding a suitable subset $E$ after a reasonable number of random selections; and in practice we need good heuristics for producing approximately random elements from a group. Also, among other things, we need efficient procedures to identify $E$-type elements, and to distinguish between the subgroups on the short list and the subgroups which contain $\Omega$. 
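The second requirement translates into a simple sample-size bound: if the proportion of $E$-type elements in a group containing $\Omega$ is at least $\pi$, then the probability of missing all of them in $N$ independent uniform selections is at most $(1-\pi)^N$, so $N(\varepsilon)=\lceil\log\varepsilon/\log(1-\pi)\rceil$ selections suffice. A small Python illustration; the numerical values of $\pi$ and $\varepsilon$ below are our own, chosen only for the example.

```python
import math

def selections_needed(pi, eps):
    """Smallest N with (1 - pi)^N <= eps, for 0 < pi < 1 and 0 < eps < 1."""
    return math.ceil(math.log(eps) / math.log(1.0 - pi))

# If at least 10% of the elements are of E-type, 44 random selections
# reduce the probability of missing them all below 1%.
N = selections_needed(0.1, 0.01)
print(N)  # 44
assert (1 - 0.1) ** N <= 0.01 < (1 - 0.1) ** (N - 1)
```

The point of the estimates discussed later is precisely to guarantee a large enough $\pi$, and hence a small $N(\varepsilon)$.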
The aim of this paper is to present and discuss results of these types, and the corresponding recognition algorithms, in the cases where $\Omega$ is one of the classical matrix groups. In these cases the subset $E$ consists of certain ${\rm ppd}\,$-elements. I am grateful to Igor Shparlinski for some very helpful discussions and advice on the analysis in Section~\ref{sec-complexity}. Eamonn O'Brien made a careful reading of an early draft of the paper and the current version has been much improved as a result of his detailed comments. Also I thank John Cannon for making available to me the results mentioned in Section~\ref{sec-performance} of some tests of the {\sc Magma} implementation of the classical recognition algorithm. \section{Classical groups.}\label{sec-classical} We consider certain subgroups of ${\rm GL}\,(d,q)$ where $d$ is a positive integer and $q=p^a$, a power of a prime $p$, and we let $V$ denote the underlying vector space of $d$-dimensional row vectors over ${\rm GF}\,(q)$ on which ${\rm GL}\,(d,q)$ acts naturally. The classical groups preserve certain bilinear, sesquilinear or quadratic forms on $V$. To describe them we adapt some notation from the book of Kleidman and Liebeck~\cite{kl}. A subgroup $G$ of ${\rm GL}\,(d,q)$ is said to {\it preserve a form $\kappa$ modulo scalars} if there exists a homo\-morphism $\mu:G\rightarrow {\rm GF}\,(q)^\#$ such that, in the case of a bilinear or sesquilinear form, $\kappa(ug,vg)=\mu(g)\cdot\kappa(u,v)$, or, in the case of a quadratic form, $\kappa(vg)=\mu(g)\cdot\kappa(v)$, for all $u,v\in V$ and $g\in G$. A matrix $g$ in such a group is said to {\it preserve $\kappa$ modulo scalars}, and if $\mu(g)=1$ then $g$ is said to {\it preserve} $\kappa$. We denote by $\Delta$ or $\Delta(V,\kappa)$ the group of all matrices in ${\rm GL}\,(d,q)$ which preserve $\kappa$ modulo scalars, and by $S$ the subgroup of $\Delta$ consisting of those matrices which preserve $\kappa$ and which have determinant 1. 
The subgroup $\Omega$ which we shall seek to recognise is equal to $S$ unless $\kappa$ is a non-degenerate quadratic form, and in this latter case $\Omega$ has index 2 in $S$ and is the unique such subgroup of $S$. There are four families of subgroups which we shall consider, and by a {\it classical group} in ${\rm GL}\,(d,q)$ we shall mean a subgroup $G$ which satisfies $\Omega\le G\le \Delta$, for $\Omega,\ \Delta$ in one of these families. The four families are as follows. \begin{description} \item[(i)] {\it Linear groups:} $\kappa=0,\ \Delta ={\rm GL}\,(d,q)$ and $\Omega = {\rm SL}\,(d,q)$; \item[(ii)] {\it Symplectic groups:} $d$ is even, $\kappa$ is a non-degenerate alternating bilinear form on $V$, $\Delta ={\rm GSp}\,(d,q)$ and $\Omega ={\rm Sp}\,(d,q)$; \item[(iii)] {\it Orthogonal groups:} $\kappa$ is a non-degenerate quadratic form on $V$, $\Delta = {\rm GO}^\varepsilon(d,q)$, and $\Omega =\Omega^\varepsilon(d,q)$, where $\varepsilon=\pm$ if $d$ is even, and $\varepsilon=\circ$ if $d$ is odd. If $d$ is odd then also $q$ is odd since $\kappa$ is non-degenerate; \item[(iv)] {\it Unitary groups:} $q$ is a square, $\kappa$ is a non-degenerate unitary form on $V$, that is a non-degenerate sesquilinear form with respect to the automorphism of ${\rm GF}\,(q)$ of order 2, $\Delta = {\rm GU}\,(d,q)$ and $\Omega = {\rm SU}\,(d,q)$. \end{description} The books~\cite{kl, det} are good references for information about the finite classical groups. \section{Primitive prime divisors and ${\rm ppd}\,$-elements.}\label{sec-ppds} Let $b, e$ be positive integers with $b>1$. A prime $r$ dividing $b^e -1$ is said to be a {\it primitive prime divisor} of $b^e - 1$ if $r$ does not divide $b^i -1$ for any $i$ such that $1 \leq i < e$. It was proved by Zsigmondy~\cite{zsig} in 1892 that $b^e - 1$ has a primitive prime divisor unless either the pair $(b, e)$ is $(2, 6)$, or $e = 2$ and $b+1$ is a power of 2. 
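The definition and Zsigmondy's theorem are easy to experiment with computationally. The following Python sketch computes primitive prime divisors by trial division and confirms, for small $b$ and $e$, that the only failures are the two exceptional cases; all function names are ours.

```python
def prime_divisors(n):
    """Set of prime divisors of n, by trial division (fine for small n)."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def primitive_prime_divisors(b, e):
    """Primes r dividing b^e - 1 but dividing no b^i - 1 with 1 <= i < e."""
    return {r for r in prime_divisors(b ** e - 1)
            if all((b ** i - 1) % r != 0 for i in range(1, e))}

def zsigmondy_exception(b, e):
    """The two exceptional cases listed in Zsigmondy's theorem."""
    if (b, e) == (2, 6):
        return True
    return e == 2 and (b + 1) & b == 0   # b + 1 is a power of 2

# For b > 1 and e > 1 in a small range, a primitive prime divisor exists
# exactly outside the exceptional cases.
for b in range(2, 12):
    for e in range(2, 12):
        assert bool(primitive_prime_divisors(b, e)) != zsigmondy_exception(b, e)

print(sorted(primitive_prime_divisors(2, 4)))   # [5]
print(sorted(primitive_prime_divisors(2, 6)))   # []
```

For instance $2^4-1=15$ has prime divisors $3$ and $5$, but $3$ already divides $2^2-1$, so only $5$ is primitive; and $2^6-1=63$ has none, as the theorem predicts.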
Observe that \[ |{\rm GL}\,(d,q)|=q^{d\choose 2} \prod_{1\le i\le d} (q^i-1). \] This means that primitive prime divisors of $q^e-1$ for various values of $e\le d$ divide $|{\rm GL}\,(d,q)|$, and indeed divide $|\Omega|$ for various of the classical groups $\Omega$ in ${\rm GL}\,(d,q)$. We define {\it primitive prime divisor elements}, sometimes called ${\rm ppd}\,$-{\it elements}, in ${\rm GL}\,(d,q)$ to be those elements with order a multiple of some such primitive prime divisor. Thus we define an element $g\in{\rm GL}\,(d,q)$ to be a ${\rm ppd}\,(d,q;e)$-{\it element} if its order $o(g)$ is divisible by some primitive prime divisor of $q^e-1$. Our interest is mainly in ${\rm ppd}\,(d,q;e)$-elements with $e>d/2$ and we shall describe in Section~\ref{sec-ppdsgps} a classification by Guralnick, Penttila, Saxl and the author in~\cite{ppds} of all subgroups of ${\rm GL}\,(d,q)$ containing such an element. We shall henceforth reserve the term ${\rm ppd}\,$-elements to refer to elements of ${\rm GL}\,(d,q)$ which are ${\rm ppd}\,(d,q;e)$-elements for some $e>d/2$. Note that, if $g\in{\rm GL}\,(d,q)$ is a ${\rm ppd}\,(d,q;e)$-element with $e>d/2$, then there is a unique $g$-invariant $e$-dimensional subspace of the underlying vector space $V$ on which $g$ acts irreducibly, and also the characteristic polynomial for $g$ has an irreducible factor over ${\rm GF}\,(q)$ of degree $e$. While neither of these two conditions is sufficient to guarantee that an element is a ${\rm ppd}\,(d,q;e)$-element, it turns out that most elements satisfying either of them are in fact ${\rm ppd}\,(d,q;e)$-elements. In addition, a large proportion of elements in any of the classical groups are ${\rm ppd}\,$-elements, and this fact has proved to be very important for the development of recognition algorithms for classical groups. In 1974 Hering~\cite{her1} investigated subgroups of ${\rm GL}\,(d,q)$ containing ${\rm ppd}\,(d,q;d)$-elements. Such subgroups act irreducibly on $V$. 
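As a quick aside, the order formula displayed above can be verified directly against the count of ordered bases of the underlying vector space, and one sees at once that every $q^e-1$ with $e\le d$, and hence every primitive prime divisor of such a $q^e-1$, divides $|{\rm GL}\,(d,q)|$. A Python sketch (the function names are ours):

```python
def gl_order(d, q):
    """|GL(d,q)| = q^(d(d-1)/2) * prod_{i=1}^{d} (q^i - 1)."""
    n = q ** (d * (d - 1) // 2)
    for i in range(1, d + 1):
        n *= q ** i - 1
    return n

def ordered_bases(d, q):
    """Count ordered bases of GF(q)^d directly: prod_{i=0}^{d-1} (q^d - q^i)."""
    n = 1
    for i in range(d):
        n *= q ** d - q ** i
    return n

assert gl_order(2, 2) == ordered_bases(2, 2) == 6
assert gl_order(3, 2) == ordered_bases(3, 2) == 168
assert gl_order(2, 3) == ordered_bases(2, 3) == 48

# Each factor q^e - 1 with e <= d divides |GL(d,q)|, hence so does every
# primitive prime divisor of q^e - 1.
assert all(gl_order(6, 2) % (2 ** e - 1) == 0 for e in range(1, 7))
```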
Hering was interested in applications of these results to geometry, in particular for constructing finite translation planes. He was also interested in the link between such groups and finite affine 2-transitive permutation groups. If $G$ is a finite affine 2-transitive permutation group acting on a set $X$, then $X$ may be taken as the set of vectors of a finite vector space, say $V=V(d,q)$ of dimension $d$ over ${\rm GF}\,(q)$, and $G= N G_o$ where $N$ is the group of translations of $V$ and $G_o$ is a subgroup of ${\rm GL}\,(d,q)$ acting transitively on $V^\#$, that is $G_o$ is a {\it transitive linear group}. Conversely if $G_o$ is a transitive linear group on $V$, and $N$ is the group of translations of $V$, then $ N G_o$ is a 2-transitive permutation group of affine type acting on $V$. Thus the problems of classifying finite affine 2-transitive groups, and classifying finite transitive linear groups are equivalent. Moreover if $G_o$ is transitive on $V^\#$ then $q^d-1$ divides $|G_o|$ so that $G_o$ contains a ${\rm ppd}\,(d,q;d)$-element. Hering's work led to a classification of finite affine 2-transitive permutation groups, see~\cite{her2} and also~\cite[Appendix]{lieb}. In common with most of the classifications we shall mention related to ${\rm ppd}\,$-elements, this classification depends on the classification of the finite simple groups. Merkt~\cite{merkt} extended Hering's work obtaining a better description of certain of the subgroups of ${\rm GL}\,(d,q)$ containing a ${\rm ppd}\,(d,q;d)$-element. Dempwolff~\cite{demp} in 1987 began an investigation of subgroups of ${\rm GL}\,(d,q)$ containing a ${\rm ppd}\,(d,q;e)$-element for some $e\ge d/2$. His analysis is independent of the work of Aschbacher which we shall describe in the next section, and he made significant progress on describing what we shall call (and shall define in the next section) the ``geometric subgroups'' containing such ${\rm ppd}\,$-elements. 
He also did some work on the nearly simple examples. The classification in~\cite{ppds} of all subgroups of ${\rm GL}\,(d,q)$ containing a ${\rm ppd}\,(d,q;e)$-element for some $e>d/2$ uses the work of Aschbacher to guide both the analysis and the presentation of the examples. Similar results may be obtained if the condition $e>d/2$ is relaxed, but their proofs become more technical. \section{Aschbacher's classification of finite linear groups.} \label{sec-asch}% Aschbacher's description~\cite{asch} of subgroups of ${\rm GL}\,(d,q)$, where $q=p^a$ with $p$ prime, has been very influential both on the way problems concerning linear groups are analysed and on the way results about such groups are presented. Aschbacher defined eight families of subgroups $\mathcal{C}_1,\ldots,\mathcal{C}_8$ of ${\rm GL}\,(d,q)$ as follows. These families are usually defined in terms of some geometrical property associated with the action on the underlying vector space $V$, and in all cases maximal subgroups of ${\rm GL}\,(d,q)$ in the family can be identified. Subgroups of ${\rm GL}\,(d,q)$ in these families are therefore called {\it geometric subgroups}. We indicate in parentheses the rough structure of a typical maximal subgroup in the family. Note that $Z$ denotes the subgroup of scalar matrices in ${\rm GL}\,(d,q)$. Also, as in~\cite{kl}, we denote by $b$ a cyclic group of order $b$, and for a prime $r$ we denote by $r^{1+2c}$ an extraspecial group of that order. \begin{description} \item[$\mathcal{C}_1$] These subgroups act reducibly on $V$, and maximal subgroups in the family are the stabilisers of proper subspaces\ (maximal parabolic subgroups). \item[$\mathcal{C}_2$] These subgroups act irreducibly but imprimitively on $V$, and maximal subgroups in the family are the stabilisers of direct sum decompositions $V=\oplus_{i=1}^t V_i$ with $\dim V_i = d/t$\ (wreath products ${\rm GL}\,(d/t,q)\wr S_t$). 
\item[$\mathcal{C}_3$] These subgroups preserve on $V$ the structure of a vector space over an extension field of ${\rm GF}\,(q)$, and maximal subgroups in the family are the stabilisers of extension fields of ${\rm GF}\,(q)$ of degree $b$, where $b$ is a prime dividing $d$\ (the groups ${\rm GL}\,(d/b,q^b).b$). \item[$\mathcal{C}_4$] These subgroups preserve on $V$ the structure of a tensor product of subspaces, and maximal subgroups in the family are the stabilisers of decompositions $V= V_1\otimes V_2$\ (central products ${\rm GL}\,(b,q)\circ {\rm GL}\,(c,q)$ where $d=bc$). \item[$\mathcal{C}_5$] These subgroups preserve on $V$ the structure of a vector space over a proper subfield of ${\rm GF}\,(q)$; such a subgroup is said to {\it be realisable over a proper subfield}. The maximal subgroups in the family are the stabilisers modulo scalars of subfields of ${\rm GF}\,(q)$ of prime index $b$ dividing $a$\ (central products ${\rm GL}\,(d,q^{1/b})\circ Z$). \item[$\mathcal{C}_6$] These subgroups have as a normal subgroup an $r$-group $R$ of symplectic type ($r$ prime) which acts absolutely irreducibly on $V$, and maximal subgroups in the family are the normalisers of these subgroups, $(Z_{q-1}\circ R).{\rm Sp}\,(2c,r)$, where $d=r^c$ and $R$ is an extraspecial group $r^{1+2c}$, or if $r=2$ then $R$ may alternatively be a central product $4\circ 2^{1+2c}$. \item[$\mathcal{C}_7$] These subgroups preserve on $V$ a tensor decomposition $V=\otimes_{i=1}^t V_i$ with $\dim V_i=c$, and maximal subgroups in the family are the stabilisers of such decompositions\ ($({\rm GL}\,(c,q)\circ\ldots \circ{\rm GL}\,(c,q)).S_t$, where $d=c^t$). \item[$\mathcal{C}_8$] These subgroups preserve modulo scalars a non-degenerate alternating, or sesquilinear, or quadratic form on $V$, and maximal subgroups in the family are the classical groups. 
\end{description} The main result of Aschbacher's paper~\cite{asch} (or see~\cite[Theorem~1.2.1]{kl}) states that, for a subgroup $G$ of ${\rm GL}\,(d,q)$ which does not contain ${\rm SL}\,(d,q)$, either $G$ is a geometric subgroup, or the socle $S$ of $G/(G\cap Z)$ is a nonabelian simple group, and the preimage of $S$ in $G$ is absolutely irreducible on $V$, is not realisable over a proper subfield, and is not a classical subgroup (as defined in Section~\ref{sec-classical}). The family of such subgroups is denoted $\mathcal{S}$, and subgroups in this family will often be referred to as {\it nearly simple} subgroups. Aschbacher~\cite{asch} also defined families of subgroups of each of the classical subgroups $\Delta$ in ${\rm GL}\,(d,q)$, analogous to $\mathcal{C}_1,\ldots,\mathcal{C}_8,\mathcal{S}$, and proved that each subgroup of a classical group $\Delta$ which does not contain $\Omega$ belongs to one of these families. \section{Linear groups containing ${\rm ppd}\,$-elements.} \label{sec-ppdsgps}% The analysis in~\cite{ppds} to determine the subgroups of ${\rm GL}\,(d,q)$ which contain a ${\rm ppd}\,(d,q;e)$-element for some $e>d/2$, was patterned on a similar analysis carried out in~\cite{recog} to classify subgroups of ${\rm GL}\,(d,q)$ which contain both a ${\rm ppd}\,(d,q;d)$-element and a ${\rm ppd}\,(d,q;d-1)$-element. Moreover the results in~\cite{ppds} seek to give information about the smallest subfield over which such a subgroup $G$ is realisable modulo scalars. We say that $G$ is {\it realisable modulo scalars} over a subfield ${\rm GF}\,(q_0)$ of ${\rm GF}\,(q)$ if $G$ is conjugate to a subgroup of ${\rm GL}\,(d,q_0)\circ Z$. Suppose that $G\le{\rm GL}\,(d,q)$ and that $G$ contains a ${\rm ppd}\,(d,q;e)$-element for some $e>d/2$, and let $r$ be a primitive prime divisor of $q^e-1$ which divides $|G|$. 
Suppose moreover that ${\rm GF}\,(q_0)$ is the smallest subfield of ${\rm GF}\,(q)$ such that $G$ is realisable modulo scalars over ${\rm GF}\,(q_0)$. There is a recursive aspect to the description in~\cite{ppds} of such subgroups $G$ which are geometric subgroups. For example, the reducible subgroups $G$ leave invariant some subspace or quotient space $U$ of $V$ of dimension $m\ge e$, and the subgroup $G^U$ of ${\rm GL}\,(m,q)$ induced by $G$ in its action on $U$ contains a ${\rm ppd}\,(m,q;e)$-element. In~\cite{ppds} no further description is given of these examples, though extra information may be obtained about the group $G^U$ by applying the results recursively. Although the classification of the geometric examples is not difficult, care needs to be taken in order not to miss some of them. For example, while at first sight it might appear that a maximal imprimitive subgroup ${\rm GL}\,(d/t,q)\wr S_t$ (where $t>1$) cannot contain a ${\rm ppd}\,(d,q;e)$-element since $r$ does not divide $|{\rm GL}\,(d/t,q)|$, it is possible sometimes for $r$ to divide $|S_t|=t!$, so that we do have some examples in the family $\mathcal{C}_2$. To understand how this can happen, observe that the defining condition for $r$ to be a primitive prime divisor of $q^e-1$, namely that $e$ is the least positive integer $i$ such that $r$ divides $q^i-1$, is equivalent to the condition that $q$ has order $e$ modulo the prime $r$. Thus $r=ke+1\ge e+1$ for some $k\ge 1$. Sometimes we can have $r=e+1$ (which satisfies $d/2<r\le d$) and hence in these cases an imprimitive subgroup ${\rm GL}\,(1,q)\wr S_d$ will contain ${\rm ppd}\,(d,q;e)$-elements. Both of the above observations come into play in describing the examples in the family $\mathcal{C}_3$. 
Here either the prime $r=e+1=d$ and the group $G$ is conjugate to a subgroup of ${\rm GL}\,(1,q^d).d$, or $e$ is a multiple of a prime $b$ where $b$ is a proper divisor of $d$ and, replacing $G$ by a conjugate if necessary, $G\le{\rm GL}\,(d/b,q^b).b$ such that $G\cap{\rm GL}\,(d/b,q^b)$ contains a ${\rm ppd}\,(d/b,q^b;e/b)$-element. After determination of the geometric examples there remains the problem of finding the nearly simple examples. So suppose that $G$ is nearly simple and $S\le G/(Z\cap G) \le{\rm Aut}\, S$ for some nonabelian simple group $S$. What we need is a list of all possible groups $G$ together with the values of $d, e$ and $q_0$. Although there is no classification of all the nearly simple subgroups of ${\rm GL}\,(d,q)$ in general, it is possible to classify those which contain a ${\rm ppd}\,(d,q;e)$-element. The reason we can do this is that, for each simple group $S$, the presence of a ${\rm ppd}\,(d,q;e)$-element in $G$ leads to both upper and lower bounds for $d$ in terms of the parameters of $S$ strong enough to lead to a complete classification. On the one hand $d$ is at least the minimum degree of a faithful projective representation of $S$ over a field of characteristic $p$, and lower bounds are available for this in terms of the parameters of $S$. On the other hand we have seen that $r=ke+1\ge e+1\ge (d+3)/2$, and in all cases we may deduce that $r$ divides $|S|$. Moreover we have an upper bound on the size of prime divisors of $S$ in terms of the parameters of $S$. For some simple groups $S$ the upper and lower bounds for $d$ obtained in this way conflict and we have a proof that there are no examples involving $S$. In many cases however this line of argument simply narrows down the range of possible values for $d$, $e$ and $r$. 
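The arithmetic fact underpinning these bounds, namely that a primitive prime divisor $r$ of $q^e-1$ is exactly a prime modulo which $q$ has multiplicative order $e$, so that $r=ke+1$, is easy to check on examples; the particular primes in the Python sketch below are our own choices.

```python
def mult_order(q, r):
    """Least i >= 1 with q^i ≡ 1 (mod r); assumes gcd(q, r) = 1."""
    i, t = 1, q % r
    while t != 1:
        t = t * q % r
        i += 1
    return i

# r = 31 is a primitive prime divisor of 2^5 - 1 = 31: the order of 2
# modulo 31 is 5, and r = 6*5 + 1.
assert mult_order(2, 31) == 5 and 31 % 5 == 1
# r = 13 is a primitive prime divisor of 3^3 - 1 = 26: order 3, r = 4*3 + 1.
assert mult_order(3, 13) == 3 and 13 % 3 == 1
# The borderline case r = e + 1: here r = 11, q = 2, e = 10, which is the
# situation allowing an imprimitive group GL(1,q) wr S_11 to contain
# ppd-elements, since 11 divides 11!.
assert mult_order(2, 11) == 10 and 11 == 10 + 1
```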
Often there are examples involving $S$, but, in order to complete the classification, we need to have more information about small dimensional representations of $S$ in characteristic $p$ than simply the lower bound for the dimension of such representations. For example if $S=A_n$ with $n\ge 9$ then $d\ge n-2$ if $p$ divides $n$ and $d\ge n-1$ otherwise, by~\cite{wag1, wag2, wag3}. Moreover $r\le n$, so $(d+3)/2\le n$, and we obtain $n-2\le d\le 2n-3$ and $r=e+1$. The upper bound for $d$ cannot be improved since we may have $r=n=e+1$ infinitely often. Thus we need more information about small dimensional representations of $A_n$ in characteristic $p$. For $n\ge 15$ this is available from a combination of results of James~\cite{james} and Wagner~\cite{wag3}. We see that the representations of $A_n$ and $S_n$ of dimension $n-1$ or $n-2$ are those coming from the deleted permutation module in the natural representation. These give an infinite family of examples with $q_0=p$. All other faithful projective representations of $A_n$ have dimension greater than the upper bound on $d$. For the remaining cases, where $n<15$, special arguments are required, making full use of information in~\cite{atlas,modat}. The result of this analysis is an explicit list of examples for alternating groups $S$. The list of examples of linear groups containing ${\rm ppd}\,$-elements can be found in~\cite[Section~2]{ppds} and is not reproduced here. Note that completing the classification of the nearly simple examples for classical groups $S$ over fields of characteristic different from $p$ involved proving new results about small dimensional representations of such groups over fields of characteristic $p$. \section{Various applications of the ``${\rm ppd}\,$ classification''.} \label{sec-applications}% The classification of subgroups of ${\rm GL}\,(d,q)$ containing ${\rm ppd}\,$-elements has already been used in a variety of applications concerning finite classical groups. 
In particular the papers~\cite{gk, gs} make use of it to answer questions concerning the generation of finite classical groups, while in~\cite{lieb2} it is used to show that the finite classical groups are characterised by their orbit lengths on vectors in their natural modules. Information about the invariant generation of classical simple groups (see~\cite{recog2, shalev}) can be deduced from the classification (in~\cite{recog2}, or see Section~\ref{sec-basic-ideas}) of subgroups of classical groups containing two different ${\rm ppd}\,$-elements. (Elements $x_1,\ldots,x_s$ of a group $G$ are said to generate $G$ invariably if $\langle x_1^{g_1},\ldots, x_s^{g_s}\rangle$ is equal to $G$ for all $g_1,\ldots,g_s\in G$.) Similarly in~\cite{br} the ${\rm ppd}\,$ classification, or more accurately the more specialised classification based on it (and described in Section~\ref{sec-basic-ideas}), can be used to deal with the finite classical groups in an analysis of finite groups with the permutizer property. A group $G$ is said to have the {\it permutizer property} if, for every proper subgroup $H$ of $G$, there is an element $g\in G\setminus H$ such that $H$ permutes with $\langle g\rangle$, that is $\langle g\rangle H= H\langle g\rangle$. The main result of~\cite{br} is that all finite groups with the permutizer property are soluble. The proof consists of an examination of a minimal counterexample to this assertion, and the ${\rm ppd}\,$ classification can be used to show that the minimal counterexample cannot be an almost simple classical group. \section{Two different ${\rm ppd}\,$-elements in linear groups} \label{sec-basic-ideas} The principal application up to now of the classification of linear groups containing ${\rm ppd}\,$-elements has been the development by Niemeyer and the author in~\cite{recog2} of a recognition algorithm for finite classical groups in their natural representation. 
The basic idea of this algorithm is as described in Section~\ref{sec-intro}. Given a subgroup $G$ of a classical group $\Delta$ in ${\rm GL}\,(d,q)$ (as described in Section~\ref{sec-classical}), we wish to determine if $G$ contains the corresponding classical group $\Omega$. We do this by examining randomly selected elements from $G$. The elements of $G$ which we seek by random selection are ${\rm ppd}\,(d,q;e)$-elements for various values of $e>d/2$, and an appropriate set of such elements will form the subset $E$ mentioned in Section~\ref{sec-intro}. It turns out that the proportion of ${\rm ppd}\,(d,q;e)$-elements in any of the classical groups is very high (as shown in Section~\ref{sec-probs}), so we are very likely to find such an element after a few independent random selections from any subgroup of $\Delta$ which contains $\Omega$. Suppose then that we have indeed found a ${\rm ppd}\,(d,q;e)$-element in our group $G$, for some $e>d/2$. The ${\rm ppd}\,$-classification just described then provides a restricted list of possibilities for the group $G$. The task is to distinguish subgroups containing $\Omega$ from the other possibilities, and this task is a nontrivial one. For the purposes of presenting the basic strategy, we assume that $G$ is irreducible on $V$ and that we have complete information about any $G$-invariant bilinear, sesquilinear or quadratic forms on $V$. There are standard tests in practice which may be used to determine whether $G$ is irreducible on $V$ and to find all $G$-invariant forms (see~\cite{hr,parker}). Note that in an implementation of the algorithm in~\cite{recog2} a different protocol may be followed for deciding the stage at which to obtain this precise information about $G$. Nevertheless, we may and shall assume that $G$ does not lie in the Aschbacher classes $\mathcal{C}_1$ or $\mathcal{C}_8$. 
Then, having found a ${\rm ppd}\,(d,q;e)$-element in $G$ for some $e>d/2$, the ${\rm ppd}\,$-classification would still allow the possibility that $G$ lies in one of $\mathcal{C}_2, \mathcal{C}_3, \mathcal{C}_5, \mathcal{C}_6$, or that $G$ is nearly simple, as well as the desired conclusion that $G$ contains $\Omega$. In the nearly simple case, the classification in~\cite{ppds} shows that there are approximately 30 infinite families and 60 individual examples of nearly simple groups in explicitly known representations. Guided by the original ${\rm SL}\,$-recognition algorithm developed in~\cite{recog}, we decided to seek, in the first instance, {\it two different ${\rm ppd}\,$-elements} in $G$ by which we mean a ${\rm ppd}\,(d,q;e)$-element and a ${\rm ppd}\,(d,q;e')$-element, where $d/2<e<e'\le d$. We also decided to strengthen the ${\rm ppd}\,$-property required of these elements in two different ways, by requiring at least one of the ${\rm ppd}\,$-elements to be large and at least one of them to be basic. Let $q=p^a$, and let $r$ be a primitive prime divisor of $q^e-1$. Recall that $r=ke+1$ for some integer $k$. We say that $r$ is a {\it basic primitive prime divisor} if $r$ is a primitive prime divisor of $p^{(ae)}-1$, and that $r$ is a {\it large primitive prime divisor} if either $r\ge 2e+1$, or $r=e+1$ and $(e+1)^2$ divides $q^e-1$. Correspondingly we say that a ${\rm ppd}\,(d,q;e)$-element $g$ is {\it basic} if $o(g)$ is divisible by a basic primitive prime divisor of $q^e-1$, and that $g$ is {\it large} if $o(g)$ is divisible by a large primitive prime divisor $r$ of $q^e-1$ and either $r\ge 2e+1$ or $r=e+1$ and $(e+1)^2$ divides $o(g)$. Note that, for $e\ge 2$, if $q^e-1$ has a primitive prime divisor, then $q^e-1$ has a basic primitive prime divisor unless $(q,e)=(4,3)$ or $(8,2)$. 
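These definitions can be tested directly. In the Python sketch below (illustrative only, and not part of the implementation in~\cite{recog2}), the function {\tt classify\_ppd} applies the defining conditions for basic and large primitive prime divisors, and recovers the exceptional pair $(q,e)=(4,3)$ mentioned above:

```python
def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, c = [], 2
    while c * c <= n:
        if n % c == 0:
            ps.append(c)
            while n % c == 0:
                n //= c
        c += 1
    if n > 1:
        ps.append(n)
    return ps

def is_ppd(r, q, e):
    """True iff q has multiplicative order exactly e modulo the prime r."""
    if pow(q, e, r) != 1:
        return False
    return all(pow(q, e // c, r) != 1 for c in prime_divisors(e))

def classify_ppd(r, q, e, p, a):
    """For a primitive prime divisor r of q^e - 1, with q = p^a, report
    whether r is basic and whether it is large."""
    basic = is_ppd(r, p, a * e)          # r is also a ppd of p^(ae) - 1
    large = r >= 2 * e + 1 or (r == e + 1 and (q**e - 1) % (e + 1)**2 == 0)
    return basic, large

# q = 4 = 2^2, e = 3: r = 7 is a ppd of 4^3 - 1 = 63, and is large
# (7 = 2*3 + 1) but not basic, since 7 already divides 2^3 - 1
print(is_ppd(7, 4, 3), classify_ppd(7, 4, 3, 2, 2))   # True (False, True)
```

Similarly ${\tt classify\_ppd}(5,2,4,2,1)$ returns {\tt (True, False)}: the prime $5=e+1$ is basic but not large, since $25$ does not divide $2^4-1=15$.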
Similarly an explicit list can be given for pairs $(q,e)$ for which $q^e-1$ has a primitive prime divisor but does not have a large primitive prime divisor (see~\cite{feit,her1} or~\cite[Theorem~2.2]{recog2}). Thus in most cases $q^e-1$ has both a large primitive prime divisor and a basic primitive prime divisor; and many ${\rm ppd}\,$-elements will be both large and basic. We shall see in Section~\ref{sec-probs} that requiring the additional condition of being large or basic does not alter significantly the very good upper and lower bounds we can give for the proportion of ${\rm ppd}\,$-elements in subgroups of $\Delta$ containing $\Omega$. Suppose that we now have $G\subseteq\Delta$ for some classical group $\Delta$ in ${\rm GL}\,(d,q)$, with $G$ irreducible on the underlying vector space $V$, and suppose also that we have complete information about $G$-invariant forms so that we can guarantee that $G$ is not contained in the class $\mathcal{C}_8$ of subgroups of $\Delta$. Further we suppose that $G$ contains two different ${\rm ppd}\,$-elements, say a ${\rm ppd}\,(d,q;e)$-element $g$ and a ${\rm ppd}\,(d,q;e')$-element $h$, where $d/2<e<e'\le d$. In~\cite[Theorem~4.7]{recog2}, Niemeyer and the author refined the classification in~\cite{ppds} to find all possibilities for the group $G$. These possibilities comprise groups containing $\Omega$, members of the Aschbacher families $\mathcal{C}_2, \mathcal{C}_3$ and $\mathcal{C}_5$, and some nearly simple examples. The presence of two different ${\rm ppd}\,$-elements certainly restricts the possibilities within these families, but it is still difficult to distinguish some of them from groups containing $\Omega$. 
If we require that at least one of $g, h$ is large and at least one is basic then, as was shown in~\cite[Theorem~4.8]{recog2}, the possibilities for irreducible subgroups $G$ which do not contain $\Omega$ are certain subgroups in $\mathcal{C}_3$ and nearly simple groups in a very short list comprising explicit representations of one infinite family and five individual nearly simple groups. After our discussion of the proportions of ${\rm ppd}\,$-elements in classical groups in Section~\ref{sec-probs}, we shall return to the recognition algorithm. We shall see that the algorithm can be completed by simply seeking a few more ${\rm ppd}\,$-elements of a special kind which, if found, will rule out all but one possibility for $G$, enabling us to conclude that $G$ contains $\Omega$. \section{Proportion of ${\rm ppd}\,$-elements in classical groups.} \label{sec-probs} The questions we wish to answer in this section are the following. If $\Omega\le G\le\Delta\le{\rm GL}\,(d,q)$, and $G$ contains two different ${\rm ppd}\,$-elements, at least one of which is large and at least one of which is basic, then what is the probability of finding two such elements after a given number $N$ of independent random selections of elements from $G$? In particular, for a given positive real number $\varepsilon$, is it true that the probability of failing to find such elements after $N$ selections is less than $\varepsilon$ provided $N$ is sufficiently large? And if so, just how large must $N$ be? These questions can be answered using simple probability theory provided that we can determine, for a given $e$ (where $d/2<e\le d$), the proportion ${\rm ppd}\,(G,e)$ of elements of $G$ which are ${\rm ppd}\,(d,q;e)$-elements. This proportion may depend on the nature of the classical group $\Delta$: that is, on whether $\Delta$ is a linear, symplectic, orthogonal or unitary group.
In particular ${\rm ppd}\,(G,e)=0$ if $\Delta$ is a symplectic or orthogonal group and $e$ is odd, or if $\Delta$ is a unitary group and $e$ is even, or if $\Delta$ is of type ${\rm O}^+$ and $e=d$. This can be seen easily by examination of the orders of these groups. In all other cases, provided that $d$ and $q$ are not too small, any subgroup of $\Delta$ which contains $\Omega$ will contain ${\rm ppd}\,(d,q;e)$-elements. So suppose now that $\Omega\le G\le\Delta$, that $d/2<e\le d$, and that $G$ contains a ${\rm ppd}\,(d,q;e)$-element $g$. It is not difficult (see~\cite[Lemma~5.1]{recog2}) to show that $V$ has a unique $e$-dimensional $g$-invariant subspace $W$ and that $g$ acts irreducibly on $W$. Moreover, if $\Delta$ is a symplectic, orthogonal, or unitary group, then $W$ must be nonsingular with respect to the bilinear, quadratic, or sesquilinear form defining $\Delta$. Next (see~\cite[Lemma~5.2]{recog2}) we observe that the group $G$ acts transitively on the set of all nonsingular $e$-dimensional subspaces of $V$ (or all $e$-dimensional subspaces if $\Delta={\rm GL}\,(d,q)$). Thus the proportion of ${\rm ppd}\,(d,q;e)$-elements in $G$ is the same as the proportion of such elements which fix a particular nonsingular $e$-dimensional subspace $W$. Therefore we need to determine the proportion of ${\rm ppd}\,(d,q;e)$-elements in the setwise stabiliser $G_W$ of $W$ in $G$. Now consider the natural map $\varphi: g\mapsto g|_W$ which sends $g\in G_W$ to the linear transformation of $W$ induced by $g$. Then $\Omega(W)\le \varphi(G)\le\Delta(W)\le{\rm GL}\,(W)$, and $\Delta(W)$ has the same type (linear, symplectic, orthogonal, or unitary) as $\Delta$. If $g\in G_W$ and $g$ is a ${\rm ppd}\,(d,q;e)$-element, then every element of the coset $g{\rm Ker}\,\varphi$ is also a ${\rm ppd}\,(d,q;e)$-element, since all elements in the coset induce the same linear transformation $g|_W$ of $W$. 
Moreover in this case $g|_W$ is a ${\rm ppd}\,(e,q;e)$-element in $\varphi(G)$ and all such elements arise as images under $\varphi$ of ${\rm ppd}\,(d,q;e)$-elements in $G_W$. It follows that ${\rm ppd}\,(G,e)$ is equal to the proportion ${\rm ppd}\,(\varphi(G),e)$ of ${\rm ppd}\,(e,q;e)$-elements in $\varphi(G)$. Thus it is sufficient for us to determine ${\rm ppd}\,(G,d)$ for each of the possibilities for $\Delta$ which contain ${\rm ppd}\,(d,q;d)$-elements. This was done already by Neumann and the author in~\cite[Lemmas~2.3 and 2.4]{recog} in the case where $\Delta={\rm GL}\,(d,q)$. The techniques used there work also in the other cases although some care is needed. The basic ideas are as follows. Let $g$ be a ${\rm ppd}\,(d,q;d)$-element in $G$, and let $C:=C_G(g)$. Then $C$ is a cyclic group, called a Singer cycle for $G$, and has order $n$ say, where $n$ divides $q^d-1$ and $n$ is divisible by some primitive prime divisor of $q^d-1$. The group $C$ is self-centralising in $G$. Further each ${\rm ppd}\,(d,q;d)$-element in $G$ lies in a unique $G$-conjugate of $C$. The number of $G$-conjugates of $C$ is $|G:N_G(C)|$, and so the number of ${\rm ppd}\,(d,q;d)$-elements in $G$ is equal to $|G:N_G(C)|$ times the number of such elements in $C$. It follows that $$ {\rm ppd}\,(G,d)=|G:N_G(C)|\cdot {\rm ppd}\,(C,d)\cdot {|C|\over |G|} = {{\rm ppd}\,(C,d)\over u}, $$ where ${\rm ppd}\,(C,d)$ is the proportion of ${\rm ppd}\,(d,q;d)$-elements in $C$, and $u:=|N_G(C):C|$. In the linear, symplectic and unitary cases $u=d$, while in the orthogonal case $u$ is either $d$ or $d/2$ depending on which intermediate subgroup $G$ is ($\Omega\le G\le\Delta$). In the orthogonal case we certainly have $u=d$ if $G$ contains ${\rm O}\, (V)$. Thus we need to estimate ${\rm ppd}\,(C,d)$. Let $\Phi$ denote the product of all the primitive prime divisors of $q^d-1$ (including multiplicities), so that $(q^d-1)/\Phi$ is not divisible by any primitive prime divisor of $q^d-1$. 
In all cases $\Phi$ divides $n=|C|$. Moreover an element $x\in C$ is a ${\rm ppd}\,(d,q;d)$-element if and only if $x^{n/\Phi}\ne 1$, that is if and only if $x$ does not lie in the unique subgroup of $C$ of order $n/\Phi$. Hence $$ {\rm ppd}\,(C,d)={n-n/\Phi\over n} = 1-{1\over\Phi}, $$ and therefore $$ {\rm ppd}\,(G,d)={1\over u}\ (1-{1\over\Phi}) < {1\over u}. $$ Since each primitive prime divisor of $q^d-1$ is of the form $kd+1\ge d+1$, the quantity $\Phi$ is at least $d+1$, and hence $$ {\rm ppd}\,(G,d)\ge {1\over u}\ (1-{1\over d+1}) $$ so we have $$ {1\over u}\ ({d\over d+1})\ \le{\rm ppd}\,(G,d)< {1\over u}. $$ Putting all of this together we see that in almost all cases ${\rm ppd}\,(G,d)$ lies between $1/(d+1)$ and $1/d$, with the exception being some orthogonal cases where ${\rm ppd}\,(G,d)$ lies between $2/(d+1)$ and $2/d$. To pull back this result to the general case where $d/2<e\le d$, we need to have some particular information about the group $\varphi(G)$ in the orthogonal case in order to know which of the bounds apply. It turns out (see~\cite[Theorem~5.7]{recog2}) that for all cases, and all $e$ for which $d/2<e\le d$ and $\Delta$ contains ${\rm ppd}\,(d,q;e)$-elements, we have $$ {1\over e+1}\le{\rm ppd}\,(G,e)<{1\over e} $$ except if $\Delta$ is an orthogonal group of minus type, $e=d$ is even, and $G\cap {\rm O}^-(d,q)$ is either $\Omega^-(d,q)$ (for any $q$) or ${\rm SO}^-(d,q)$ (for $q$ odd), in which case $2/(d+1)\le{\rm ppd}\,(G,d)< 2/d$. Further (see~\cite[Theorem~5.8]{recog2}), the proportion of large ${\rm ppd}\,(d,q;e)$-elements in $G$ and the proportion of basic ${\rm ppd}\,(d,q;e)$-elements in $G$, whenever such elements exist, also lie between the lower and upper bounds we have above for ${\rm ppd}\,(G,e)$. In the classical recognition algorithm in~\cite{recog2} we are not especially interested at first in particular values of $e$. We simply wish to find ${\rm ppd}\,$-elements for some $e$ between $d/2$ and $d$. 
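These bounds can be verified directly in a small case. The following Python sketch (illustrative only, and not part of the implementation in~\cite{recog2}) enumerates ${\rm GL}\,(2,5)$, for which $q^d-1=24$ has the single primitive prime divisor $3$, so $\Phi=3$, $u=d=2$, and the formula predicts ${\rm ppd}\,(G,d)=\frac{1}{2}(1-\frac{1}{3})=\frac{1}{3}$:

```python
from itertools import product

p = 5                       # GL(2,5): d = 2, q = 5
I = ((1, 0), (0, 1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def matpow(A, n):
    """A^n by repeated squaring."""
    R = I
    while n:
        if n & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        n >>= 1
    return R

# all invertible 2x2 matrices over GF(5)
G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p != 0]
order = len(G)              # |GL(2,5)| = (25-1)(25-5) = 480

# 3 is the unique primitive prime divisor of 5^2 - 1 = 24, and 3 divides
# 480 exactly once, so g is a ppd(2,5;2)-element iff g^(480/3) != identity
count = sum(1 for g in G if matpow(g, order // 3) != I)
print(count, order)         # 160 480, i.e. the predicted proportion 1/3
```

In the algorithm itself one counts ${\rm ppd}\,$-elements for every $e$ with $d/2<e\le d$, not just $e=d$.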
The proportion of such elements in $G$ is $$ {\rm ppd}\,(G):=\sum_{d/2<e\le d} {\rm ppd}\,(G,e). $$ In the linear case, where $\Delta ={\rm GL}\,(d,q)$, this is approximately equal to $\sum_{d/2<e\le d} e^{-1}$ which, in turn, is approximately $$ \int_{d/2}^d{dx\over x} = \log 2 = 0.693\ldots $$ while in the other cases ${\rm ppd}\,(G)$ is approximately equal to the sum of $e^{-1}$ either over all even $e$, or all odd $e$ between $d/2$ and $d$; this is approximately equal to $(\log 2)/2$. These computations can be done carefully resulting in very good upper and lower bounds for ${\rm ppd}\,(G)$ which differ by a small multiple of $d^{-1}$, see~\cite[Theorem~6.1]{recog2}. Moreover, except for small values of $d$, these upper and lower bounds for ${\rm ppd}\,(G)$ are also upper and lower bounds for the proportions of large ${\rm ppd}\,$-elements and of basic ${\rm ppd}\,$-elements in $G$. We can model the process of random selection of $N$ elements from $G$, seeking ${\rm ppd}\,$-elements, as a sequence of $N$ binomial trials with probability of success on each trial (that is, each selection) being ${\rm ppd}\,(G)$. Using this model we can compute the probability of finding (at least) ``two different ${\rm ppd}\,$-elements'' after $N$ independent random selections. The extent to which this computed probability measures the true probability in a practical implementation depends on whether the assumptions for the binomial model hold for the implementation. In particular the binomial model will give a good fit if the selection procedure is approximately {\it uniform}, that is the probability of selecting each element of $G$ on each selection is approximately $|G|^{-1}$, and if the selections are approximately independent. 
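Both approximations are easy to check numerically. The sketch below (illustrative; the constants are not those of~\cite[Theorem~6.1]{recog2}) compares $\sum_{d/2<e\le d}e^{-1}$ with $\log 2$, and then, treating each selection as an independent trial with success probability roughly $\log 2$ and ignoring the requirements that the two elements be different, large and basic, shows how quickly the failure probability decays with $N$ under the binomial model:

```python
from math import log

# sum of 1/e over d/2 < e <= d, the heuristic value of ppd(GL(d,q))
for d in (20, 100, 1000):
    s = sum(1 / e for e in range(d // 2 + 1, d + 1))
    assert abs(s - log(2)) < 1 / d   # converges to log 2 as d grows

def fail_prob(N, P):
    """Probability of fewer than two successes in N binomial trials."""
    return (1 - P) ** N + N * P * (1 - P) ** (N - 1)

# each selection modelled as a success with probability roughly log 2
P = log(2)
for N in (2, 3, 5, 10):
    print(N, fail_prob(N, P))
```

Already for $N=5$ this simplified failure probability is below $0.1$, consistent with the value $N(0.1)=5$ quoted below for the linear case.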
For any small positive real number $\varepsilon$, under the binomial model the probability of failing to find ``two different ${\rm ppd}\,$-elements'' after $N$ independent uniform random selections is less than $\varepsilon$ provided that $N$ is greater than a small (specified) multiple of $\log \varepsilon^{-1}$, see~\cite[Theorem~6.4 and Lemma~6.5]{recog2}. The same approach (under the same assumptions about uniformity and independence of the random selections) gives good estimates for the number $N=N(\varepsilon)$ of selections needed in order that the probability of failing to find ``two different ${\rm ppd}\,$-elements'', at least one of which is large and at least one of which is basic, after $N$ random selections is less than $\varepsilon$. Namely $N(\varepsilon)$ is a small (specified) multiple of $\log \varepsilon^{-1}$. For example, if $\Delta ={\rm GL}\,(d,q)$ with $40\le d\le 1000$ and $\varepsilon= 0.1$, then $N(\varepsilon)=5$. \section{Classical recognition algorithm: an outline} \label{sec-outline} Suppose that $G\subseteq\Delta$ for some classical group $\Delta$ in ${\rm GL}\,(d,q)$, with $G$ irreducible on the underlying vector space $V$, and that we have complete information about $G$-invariant forms (so that $G$ is not contained in the class $\mathcal{C}_8$ of subgroups of $\Delta$). We wish to determine whether or not $G$ contains the corresponding classical group $\Omega$. Our algorithm is a Monte Carlo algorithm which may occasionally fail to detect that $G$ contains $\Omega$. The probability of this happening is less than a predetermined small positive real number $\varepsilon$. First we make a number $N$ of independent uniform random selections of elements from $G$, where $N\ge N(\varepsilon/3)$ as in Section~\ref{sec-probs}. If we fail to find two different ${\rm ppd}\,$-elements in $G$, with at least one of them large and at least one basic, then we report that $G$ does not contain $\Omega$. 
There is a possibility that this response is incorrect, but if in this case $G$ does contain $\Omega$ then from Section~\ref{sec-probs}, the probability of failing to find suitable elements is less than $\varepsilon/3$. Thus the probability of reporting at this stage that $G$ does not contain $\Omega$, given that $G$ does contain $\Omega$, is less than $\varepsilon/3$. Suppose now that $G$ contains two different ${\rm ppd}\,$-elements, say a ${\rm ppd}\,(d,q;e)$-element $g$ and a ${\rm ppd}\,(d,q;e')$-element $h$, where $d/2<e<e'\le d$, and that at least one of $g, h$ is large and at least one is basic. As discussed in Section~\ref{sec-basic-ideas}, the possibilities for $G$ are that (i)~$G\supseteq\Omega$, or that (ii)~$G$ is conjugate to a subgroup of ${\rm GL}\,(d/b,q^b).b$ for some prime $b$ dividing $d$, or that (iii)~$G$ is one of a very restricted set of nearly simple groups. In order to distinguish case (i) from cases (ii) and (iii) it turns out that essentially we need to find a few extra ${\rm ppd}\,$-elements. The ``extension field groups'' in case (ii) are the most difficult to handle. The basic idea here can be illustrated by considering the linear case where $\Delta={\rm GL}\,(d,q)$. For a prime $b$ dividing $d$, the only values of $e$ such that ${\rm GL}\,(d/b,q^b).b$ contains a ${\rm ppd}\,(d,q;e)$-element are multiples of $b$ (apart from the exceptional case where $b=d$ and $d$ is a primitive prime divisor of $q^{d-1}-1$). Thus finding in $G$ a ${\rm ppd}\,(d,q;e)$-element for some $e$ which is not a multiple of $b$ will prove that $G$ is not conjugate to a subgroup of ${\rm GL}\,(d/b,q^b).b$. If $G\supseteq\Omega$, then the proportion of such elements in $G$ is ${\rm ppd}\,(G) - \sum_{d/2<ib\le d}{\rm ppd}\,(G,ib)$ which is approximately equal to ${\rm ppd}\,(G) - (\sum_{d/(2b)<i\le d/b} (ib)^{-1})$. This in turn is approximately equal to $\log 2 - b^{-1}\log 2 = (\log 2) (b-1)/b$. 
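The approximation at the end of the preceding paragraph is easy to check numerically. The following sketch (illustrative only) compares, for $d=120$ and each prime $b$ dividing $d$, the sum of $1/e$ over those $e\in(d/2,d]$ not divisible by $b$ with $(\log 2)(b-1)/b$:

```python
from math import log

d = 120
for b in (2, 3, 5):          # the primes dividing d = 120
    # proportion heuristic for ppd(d,q;e)-elements with e not a multiple of b
    not_mult = sum(1 / e for e in range(d // 2 + 1, d + 1) if e % b != 0)
    target = log(2) * (b - 1) / b
    assert abs(not_mult - target) < 0.01
    print(b, not_mult, target)
```

The agreement improves as $d$ grows, so for $G\supseteq\Omega$ a substantial proportion of elements rule out each candidate prime $b$.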
By~\cite[Theorem~8.30]{nzm}, the number $\mu(d)$ of distinct primes dividing $d$ is $O(\log d/\log\!\log d)$. Arguing as in Section~\ref{sec-probs}, there is an integer $N_b(\varepsilon)$ such that, if $G\supseteq\Omega$, then the probability of failing to find a ${\rm ppd}\,(d,q;e)$-element in $G$ with $e$ coprime to $b$ after $N_b(\varepsilon)$ independent random selections is less than $\varepsilon/3\mu(d)$. If $G\supseteq\Omega$, then we may need to find up to $\mu(d)$ extra ${\rm ppd}\,$-elements to eliminate case (ii) as a possibility, and the probability of failing to eliminate it after $N$ random selections from $G$, where $N$ is the maximum of the $N_b(\varepsilon)$, is less than $\varepsilon/3$. If we fail to find the required set of elements after these $N$ further random selections then we report that $G$ does not contain $\Omega$. Thus the probability of reporting at this second stage that $G$ does not contain $\Omega$, given that $G$ does contain $\Omega$, is less than $\varepsilon/3$. The number $N$ of selections we need to make for this second stage is $O(\log \varepsilon^{-1} + \log\log d)$. Eliminating possibility (ii) for the other classical groups is done using these basic ideas, but the details are considerably more complicated for the symplectic and orthogonal groups when $b=2$. For each of the nearly simple groups which contain two different ${\rm ppd}\,$-elements $g, h$ as above, there are in fact only two values of $e$ for which the group contains ${\rm ppd}\,(d,q;e)$-elements, namely the values corresponding to the elements $g$ and $h$. To distinguish groups $G$ containing $\Omega$ from this nearly simple group we simply need to find in $G$ a ${\rm ppd}\,(d,q;e)$-element for a third value of $e$. For each pair $(d,q)$ there is only a small number of possible nearly simple groups (usually at most 1, and in all cases at most 3). 
As before there is some $N_{{\rm sim}\,}(\varepsilon)$ such that, if $G\supseteq\Omega$, then the probability of failing to find suitable elements to eliminate these nearly simple groups after $N_{{\rm sim}\,}(\varepsilon)$ random selections from $G$ is less than $\varepsilon/3$. If we fail to find the required elements after $N_{{\rm sim}\,}(\varepsilon)$ further random selections then we report that $G$ does not contain $\Omega$. Thus the probability of reporting at this third and final stage that $G$ does not contain $\Omega$, given that $G$ does contain $\Omega$, is less than $\varepsilon/3$. Once we have found all the required elements to remove possibilities (ii) and (iii) we may report with certainty that $G$ does contain $\Omega$. The probability that the algorithm reports that $G$ does not contain $\Omega$, given that $G$ does contain $\Omega$, is less than $\varepsilon$. The requirements to bound the probability of error at the three stages of the algorithm are such that the complete algorithm requires us to make $O(\log\varepsilon^{-1} + \log\!\log d)$ random selections from $G$. \section{Computing with polynomials} \label{sec-polynomials} In this section we describe how we process an element $g$ of a classical group $\Delta\le{\rm GL}\,(d,q)$ to decide if it is a ${\rm ppd}\,$-element. This is a central part of the algorithm. The first step is to compute the characteristic polynomial $c_g(t)$ of $g$, and to determine whether or not $c_g(t)$ has an irreducible factor of degree greater than $d/2$. If no such factor exists then $g$ is not a ${\rm ppd}\,$-element. So suppose that $c_g(t)$ has an irreducible factor $f(t)$ of degree $e>d/2$. Thus we know that there is a unique $g$-invariant $e$-dimensional subspace $W$ of $V$ and that the linear transformation $g|_W$ induced by $g$ on $W$ has order dividing $q^e-1$; $g$ will be a ${\rm ppd}\,(d,q;e)$-element if and only if the order of $g|_W$ is divisible by some primitive prime divisor of $q^e-1$. 
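The order computation for $g|_W$ never needs the matrix $g|_W$ itself: as explained in the next paragraph, it reduces to raising $t$ to a suitable power in the quotient ring ${\rm GF}\,(q)[t]/\langle f(t)\rangle$ and comparing with $1$. A minimal Python sketch of this ring arithmetic, assuming for simplicity that $q=p$ is prime and representing polynomials as coefficient lists with constant term first:

```python
def poly_mul_mod(A, B, f, p):
    """Multiply A*B in GF(p)[t] and reduce modulo the monic polynomial f."""
    e = len(f) - 1
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] = (C[i + j] + a * b) % p
    for k in range(len(C) - 1, e - 1, -1):   # replace t^k by t^(k-e) * t^e
        c = C[k]
        if c:
            for j in range(e + 1):
                C[k - e + j] = (C[k - e + j] - c * f[j]) % p
    return C[:e]

def t_power_mod(m, f, p):
    """Compute t^m in GF(p)[t]/<f(t)> by repeated squaring (deg f >= 2)."""
    e = len(f) - 1
    R = [1] + [0] * (e - 1)                  # the polynomial 1
    T = [0, 1] + [0] * (e - 2)               # the polynomial t
    while m:
        if m & 1:
            R = poly_mul_mod(R, T, f, p)
        T = poly_mul_mod(T, T, f, p)
        m >>= 1
    return R

# f(t) = t^3 + t + 1 is irreducible over GF(2); t generates GF(8)* of
# order 7, and 7 is the primitive prime divisor of 2^3 - 1
one = [1, 0, 0]
print(t_power_mod(1, [1, 1, 0, 1], 2) != one)   # True: t^((2^3-1)/7) = t != 1
print(t_power_mod(7, [1, 1, 0, 1], 2) == one)   # True: t^7 = 1
```

In this example $\Phi(3,2)=7$, so the exponent $(q^e-1)/\Phi$ equals $1$ and any root of $f$ is a witness to the ${\rm ppd}\,$-property.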
By an argument introduced in Section~\ref{sec-probs}, the order of $g|_W$ is divisible by such a prime if and only if $(g|_W)^{(q^e-1)/\Phi}\ne 1$, where $\Phi = \Phi(e,q)$ denotes the product of all the primitive prime divisors of $q^e-1$ (including multiplicities). Determining whether or not this is the case can be achieved by computing within the polynomial ring ${\rm GF}\,(q)[t]$ modulo the ideal $\langle f(t)\rangle$, namely $(g|_W)^{(q^e-1)/\Phi}$ will be a non-identity matrix if and only if $t^{(q^e-1)/\Phi}\ne 1$ in this ring. We can test whether or not $g$ is a large or basic ${\rm ppd}\,(d,q;e)$-element by the same method with $\Phi(e,q)$ replaced by $\Phi_l(e,q)$ or $\Phi_b(e,q)$ respectively. Here $\Phi_l(e,q)$ and $\Phi_b(e,q)$ are the products of all the large and basic primitive prime divisors of $q^e-1$ (including multiplicities) respectively. The idea for checking the ${\rm ppd}\,$-property by determining whether a single power of $g$ is the identity comes from the special linear recognition algorithm in~\cite{recog}, while the idea of deciding this by a computation in the polynomial ring is due to Celler and Leedham-Green~\cite{cl}. \section{Complexity of the classical recognition algorithm} \label{sec-complexity} In~\cite[Section~4]{dimacs} an analysis of the running cost for the classical recognition algorithm was given based on ``classical'' algorithms for computing in finite fields. For example the cost of multiplying two $d\times d$ matrices was taken to be $O(d^3)$ field operations (that is, additions, multiplications, or computation of inverses). We take this opportunity to re-analyse the algorithm in terms of more modern methods for finite field computations. These methods can lead to improvements in performance over the classical methods.
However, efficient implementation of the modern methods is a highly nontrivial task requiring substantial effort; see for example the paper of Shoup~\cite{sh}, which addresses the problem of efficient factorisation of polynomials over finite fields. I am grateful to Igor Shparlinski for some interesting and helpful discussions concerning such algorithms. The {\it exponent of matrix multiplication} is defined as the infimum of all real numbers $x$ for which there exists a matrix multiplication algorithm which requires no more than $O(d^x)$ field operations to multiply together two $d\times d$ matrices over a field of order $q$. It is denoted by $\omega$. Thus, for all positive real numbers $\varepsilon$, there exists such an algorithm which requires $O(d^{\omega +\varepsilon})$ field operations, that is, matrix multiplication can be performed with $O(d^{\omega+o(1)})$ field operations. In~\cite[Sections~15.3,~15.8]{bcs} an algorithm is given and analysed for which $O(d^x)$ field operations are used with $x < 2.39$ (and hence $\omega<2.39$), and it was shown there also that $\omega$ can depend (if at all) only on the prime $p$ dividing $q$ rather than on the field size $q$. Moreover the cost of performing a field operation depends on the data structure used to represent the field and is $O((\log q)^{1+o(1)})$ for each field operation, that is, the cost is $O((\log q)^{1+\varepsilon})$ for each $\varepsilon >0$. Now let $\mu$ be the cost of producing a single random element from the given subgroup $G=\langle X\rangle$ of ${\rm GL}\,(d,q)$. As discussed in~\cite[p.~190]{dim1}, theoretical methods for producing approximately random elements from a matrix group are not good enough to be translated into practical procedures for use with algorithms such as the classical recognition algorithm.
For example, Babai~\cite[Theorem~1.1 and Proposition~7.2]{babai} produces, from a given generating set $X$ for a subgroup $G\le{\rm GL}\,(d,q)$, a set of $O(d^2\log q)$ elements of $G$ at a cost of $O(d^{10} (\log q)^5)$ matrix multiplications, from which nearly uniformly distributed random elements of $G$ can be produced at a cost of $O(d^{2}\log q)$ matrix multiplications per random element. The practical implementation of the classical recognition algorithm uses an algorithm developed in~\cite{clmno} for producing approximately random elements in classical groups which, when tested on a range of linear and classical groups, was found to produce, for each relevant value of $e$, ${\rm ppd}\,(d,q;e)$-elements in proportions acceptably close to the true proportions in the group. This procedure has an initial phase which costs $O(d^{\omega+o(1)})$ field operations, and then the cost of producing each random element is $O(d^{\omega+o(1)})$ field operations (see also~\cite[Section~4.1]{dimacs}). Further analysis of the algorithm in~\cite{clmno} may be found in~\cite{cg, ds1, ds2}. Testing each random element $g\in G$ involves first finding its characteristic polynomial $c_g(t)$. The cost of doing this deterministically is $O(d^{\omega+o(1)})$ field operations (see~\cite{k-g} or~\cite[Section~16.6]{bcs}). Next we test whether $c_g(t)$ has an irreducible factor of degree greater than $d/2$. This can be done deterministically at a cost of $O(d^{\omega+o(1)} + d^{1+o(1)}\log q)$ field operations, see~\cite{ks}. (Although the full algorithm in~\cite{ks} for obtaining a complete factorisation of $c_g(t)$ is non-deterministic, we only need the first two parts of the algorithm, the so-called square-free factorisation and distinct-degree factorisation procedures, and these are deterministic.) Suppose now that $c_g(t)$ has an irreducible factor $f(t)$ of degree $e>d/2$.
We then need to compute $\Phi(e,q)$, the product of all the primitive prime divisors of $q^e-1$ (counting multiplicities). A procedure for doing this is given in~\cite[Section~6]{recog}. It begins with setting $\Phi = q^e-1$ and proceeds by repeatedly dividing $\Phi$ by certain integers. The procedure runs over all the distinct prime divisors $c$ of $e$, and by~\cite[Theorem~8.30]{nzm} there are $O(\log e/\log\log e)= O(\log d/\log\log d)$ such prime divisors. For each $c$, the algorithm twice computes the greatest common divisor of two positive integers where the larger of the two integers may be as much as $q^e$, and then makes up to $d\log q$ greatest common divisor computations for which the larger of the two integers is $O(d)$. By~\cite[Theorem~8.20 and its Corollary]{ahu} (or see \cite[Note 3.8]{bcs}), the cost of computing the greatest common divisor of two positive integers less than $2^n$ is $O(n(\log n)^{O(1)})$ bit operations. It follows that the cost of computing $\Phi(e,q)$ is $O(d(\log d)^{O(1)} (\log q)^2)$ bit operations. Having found $\Phi(e,q)$, we need to determine whether $t^{(q^e-1)/\Phi(e,q)}$ is equal to $1$ in the polynomial ring ${\rm GF}\,(q)[t]$ modulo the ideal $\langle f(t)\rangle$. This involves $O(d\log q)$ multiplications modulo $f(t)$ of two polynomials of degree less than $d$ over ${\rm GF}\,(q)$. Each of these polynomial multiplications costs $O(d\log d\log\log d)$ field multiplications (see~\cite[Theorem~2.13 and Example~2.6]{bcs}). Thus this test costs $O(d^2\log d\log\log d\log q)$ field operations. Therefore the cost of testing whether a random element $g$ is a ${\rm ppd}\,$-element is $O(d^{\omega+o(1)} + d^2\log d\log\log d\ \log q)$ field operations plus $O(d (\log d)^{O(1)} (\log q)^2)$ bit operations, and hence is $$O( d^{\omega+o(1)} (\log q)^{1+o(1)}+ d^2\log d\log\log d\, (\log q)^{2+o(1)} ) $$ bit operations. This is at most $O(d^{\omega+o(1)} (\log q)^{2+o(1)})$ bit operations. 
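In Python, the computation of $\Phi(e,q)$ just described may be sketched as follows. Trial division for the prime divisors of $e$ is an illustrative simplification; the essential point is only the repeated stripping of factors shared with each $q^{e/c}-1$.

```python
from math import gcd

def prime_divisors(n):
    """Distinct prime divisors of n, by trial division (illustration only)."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def Phi(e, q):
    """Product of the primitive prime divisors of q^e - 1, with multiplicities.

    A prime r dividing q^e - 1 fails to be primitive exactly when it divides
    q^{e/c} - 1 for some prime divisor c of e, so we repeatedly strip every
    common factor with each q^{e/c} - 1."""
    phi = q**e - 1
    for c in prime_divisors(e):
        m = q**(e // c) - 1
        g = gcd(phi, m)
        while g > 1:
            phi //= g
            g = gcd(phi, m)
    return phi
```

For instance, $\Phi(4,2) = 5$: here $2^4-1 = 15 = 3\cdot 5$, and $3$ is excluded because it divides $2^2-1$.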
The cost of checking whether $g$ is a large ${\rm ppd}\,$-element is the same as this. To check if $g$ is a basic ${\rm ppd}\,(d,q;e)$-element involves computing $\Phi_b(e,q)=\Phi(ae,p)$ (where $q=p^a$) instead of $\Phi(e,q)$. Arguing as above, the cost of computing $\Phi_b(e,q)$ is $O(ad(\log (ad))^{O(1)} (\log p)^2) = O(d(\log d)^{O(1)} (\log q)^2)$ bit operations, and hence the cost of testing whether $g$ is a basic ${\rm ppd}\,$-element is also at most $O(d^{\omega+o(1)} (\log q)^{2+o(1)})$ bit operations. Since we need to test $O(\log \varepsilon^{-1} + \log\log d)$ elements of $G$, the total cost of the algorithm is as follows. \begin{theorem} Suppose that $G\subseteq\Delta$ for some classical group $\Delta$ in ${\rm GL}\,(d,q)$, with $G$ irreducible on the underlying vector space $V$, and that we have complete information about $G$-invariant forms (so that $G$ is not contained in the class $\mathcal{C}_8$ of subgroups of $\Delta$). Assume that $d$ is large enough that $\Omega$ contains two different ${\rm ppd}\,$-elements with at least one of them large and at least one basic. Further let $\varepsilon$ be a positive real number with $0<\varepsilon <1$. Assume that we can make uniform independent random selections of elements from $G$ and that the cost of producing each random element is $\mu$ bit operations. Then the classical recognition algorithm in~\cite{recog2} uses $O(\log \varepsilon^{-1} + \log\log d)$ random elements from $G$ to test whether $G$ contains $\Omega$, and in the case where $G$ contains $\Omega$, the probability of failing to report that $G$ contains $\Omega$ is less than $\varepsilon$. The cost of this algorithm is $$ O\big( (\log \varepsilon^{-1} + \log\log d) (\mu + d^{\omega+o(1)} (\log q)^{2+o(1)}) \big) $$ bit operations, where $\omega$ is the exponent of matrix multiplication. 
\end{theorem} \section{Classical recognition algorithm: final comments} \label{sec-performance} The classical recognition algorithm in~\cite{recog2} has been implemented and is available as part of the {\it matrix} share package with the {\sf GAP} system~\cite{gap}, and is also implemented in {\sc Magma}~\cite{magma}. In the {\sc Magma} implementation rather large groups have been handled by the algorithm without problems: John Cannon has informed us that, on a SUN Ultra 2 workstation with a 200 MHz processor, recognising ${\rm SL}\,(5000, 2)$, for example, took 3214 CPU seconds averaged over five runs, while recognising ${\rm SL}\,(10000, 2)$ was possible in 14334 CPU seconds, again averaged over five runs of the algorithm. The algorithm as described in this paper relies on the presence in the classical group $\Omega$ of two different ${\rm ppd}\,$-elements, where at least one is large and at least one is basic. However, for some small values of the dimension $d$, depending on the type of the classical group and the field order $q$, $\Omega$ may not contain such elements. In these cases a modification of the algorithm has been produced in~\cite{recog3} which makes use of elements which are similar to ${\rm ppd}\,$-elements. The results in \cite{recog3} demonstrate that, with some effort, it is possible to extend the probability computations in Section~\ref{sec-probs}. An alternative algorithm to recognise classical groups in their natural representations has been developed by Celler and Leedham-Green in~\cite{cl2}. This algorithm also uses the Aschbacher classification~\cite{asch} of subgroups of ${\rm GL}\,(d,q)$ as its organisational principle. Like the algorithm in~\cite{recog2} it makes use of a search by random selection for certain elements. Although no analysis of the complexity of the algorithm is given in~\cite{cl2}, the analysis we give in Section~\ref{sec-complexity} gives a reasonable measure of the complexity of this algorithm also. 
Finally, as with the algorithm in~\cite{recog2}, the algorithm in~\cite{cl2} does not work for certain families of small dimensional classical groups (notably the groups of type ${\rm O}^+(8,q)$), and the methods of~\cite{recog3} are required to deal with these groups.
\section{Introduction} Let \(Q^m \subset {\mathbb R}^m\) be the open unit cube and \(N^n\) be a compact smooth manifold of dimension \(n\) imbedded in \({\mathbb R}^\nu\) for some \(\nu \ge 1\). Given \(k \in {\mathbb N}_*\) and \(1 \le p < +\infty\), we define the class of Sobolev maps from \(Q^m\) with values into \(N^n\) as \[ W^{k, p}(Q^m; N^n) = \big\{ u \in W^{k, p}(Q^m; {\mathbb R}^\nu) : u \in N^n \text{ a.e.} \big\}. \] We equip this set with the usual metric from \(W^{k, p}\), namely for every \(u, v \in W^{k, p}(Q^m; N^n)\), \[ d(u, v) = \norm{u - v}_{L^p(Q^m)} + \sum_{i = 1}^k \norm{D^i u - D^i v}_{L^p(Q^m)}. \] The goal of this paper is to investigate whether smooth maps are dense in \(W^{k, p}(Q^m; N^n)\) with respect to the strong topology induced by this metric. Smooth functions are strongly dense in \(W^{k, p}(Q^m; {\mathbb R})\) and, more generally, smooth maps are strongly dense in \(W^{k, p}(Q^m; {\mathbb R}^\nu)\). In particular, any element of \(W^{k, p}(Q^m; N^n)\) can be approximated by maps in \(C^\infty(\overline Q^m; {\mathbb R}^\nu)\). The question of whether maps in \(W^{k, p}(Q^m; N^n)\) can be strongly approximated by maps in \(C^\infty(\overline Q^m; N^n)\) is more delicate and the answer to this question depends on whether \(kp \ge m\) or \(kp < m\). \bigskip We begin with the easier case \(kp \ge m\), which goes back to Schoen and Uhlenbeck~\cite{SchoenUhlenbeck}*{Section~4, Proposition}: \begin{theorem} \label{theoremDensityVMO} If \(kp \ge m\), then \(C^\infty(\overline{Q}^m; N^n)\) is strongly dense in \(W^{k, p}(Q^m; N^n)\). \end{theorem} Here is the sketch of the argument: given \(u \in W^{k, p}(Q^m; N^n)\), consider the convolution \(\varphi_\varepsilon \ast u\) with a smooth kernel \(\varphi_\varepsilon\). If the range of \(\varphi_\varepsilon \ast u\) lies in a small tubular neighborhood of \(N^n\), then we may project \(\varphi_\varepsilon \ast u\) pointwise into \(N^n\). 
We can always do this for \(\varepsilon\) sufficiently small as long as \(kp \ge m\). Indeed, for \(kp > m\), the space \(W^{k, p}(Q^m; N^n)\) is continuously imbedded in \(C^0(\overline Q^m; N^n)\), hence \(\varphi_\varepsilon \ast u\) converges uniformly to \(u\); in particular \(\dist{(\varphi_\varepsilon \ast u, N^n)}\) converges uniformly to \(0\). For \(kp = m\), \(W^{k, p}\) is imbedded into the space of functions of vanishing mean oscillation \(\mathrm{VMO}\) and this again implies that \(\dist{(\varphi_\varepsilon \ast u, N^n)}\) converges uniformly to \(0\). \bigskip The case \(kp < m\) is more subtle. In fact, the conclusion of the previous theorem is no longer true for every compact manifold \(N^n\). This is a consequence of the following result of Bethuel and Zheng~\cite{BethuelZheng}*{Theorem 2} and Escobedo~\cite{Escobedo}*{Theorem~3}, using another idea of Schoen and Uhlenbeck~\cite{SchoenUhlenbeck}*{Section~4, Example}: \begin{theorem} \label{theoremDensityManifoldNecessary} If \(kp < m\) and if \(C^\infty(\overline{Q}^m; N^n)\) is strongly dense in \(W^{k, p}(Q^m; N^n)\), then \(\pi_{\floor{kp}}(N^n) = \{0\}\). \end{theorem} Throughout the paper, \(\floor{kp}\) denotes the integral part of \(kp\). We recall that given \(\ell \in {\mathbb N}\), the condition \(\pi_{\ell}(N^n) = \{0\}\) means that the \(\ell\)th homotopy group of \(N^n\) is trivial or equivalently that every continuous map \(f : \S^\ell \to N^n\) on the \(\ell\) dimensional sphere has a continuous extension \(F : \overline B^{\ell+1} \to N^n\) to the \(\ell+1\) dimensional unit ball. \medskip The reader might be intrigued by the role of the integer \(\floor{kp}\) in the previous theorem. An answer can be given by sketching a proof of Theorem~\ref{theoremDensityManifoldNecessary}. Consider \(\ell = \floor{kp}\) and take any \(f \in C^\infty(\S^\ell; N^n)\). 
The map \(u : Q^m \to N^n\) defined for \(x = (x', x'') \in Q^{\ell + 1} \times Q^{m - \ell - 1} \) by \begin{equation}\label{exempleClasseR} u(x) = f(\tfrac{x'}{\abs{x'}}) \end{equation} belongs to \(W^{k, p}(Q^m; N^n)\) since \(kp < \ell + 1\). If there exists a sequence \((u_j)_{j \in {\mathbb N}}\) in \(C^\infty(\overline{Q}^m; N^n)\) converging strongly to \(u\) in \(W^{k, p}\), then roughly \(u_j \to u\) as \(j \to \infty\) uniformly on sets of dimension \(\ell\) since \(kp \ge \ell\). This implies that there exists a sequence of smooth maps on \(\overline{B}^{\ell + 1}\) with values into \( N^n\) converging uniformly to \(f\) on \(\partial\overline{B}^{\ell + 1} = \S^\ell\) and then one deduces that \(f\) has a continuous extension to \(\overline{B}^{\ell + 1}\) still with values into \(N^n\). The previous argument gives a recipe to construct maps in \(W^{k, p} (Q^m; N^n)\) which cannot be approximated by smooth maps. For instance, the map \(u : Q^m \to \S^{m-1}\) defined for \(x \in Q^m\) by \[ u(x) = \frac{x}{|x|} \] belongs to \(W^{k, p}(Q^m; \S^{m-1})\) for \(kp < m\) but \(u\) cannot be strongly approximated by maps in \(C^\infty(\overline{Q}^m; \S^{m-1})\) for \(kp \ge m-1\) since the identity map on \(\S^{m-1}\) does not have a continuous extension to \(\overline{B}^m\) with values into \(\S^{m-1}\). \medskip The converse of Theorem~\ref{theoremDensityManifoldNecessary} in the case \(k = 1\) has been given in a remarkable work of Bethuel~\cite{Bethuel} (see also Hang and Lin~\cites{Hang-Lin_2001,Hang-Lin} for an improvement of Bethuel's argument and Haj\l asz~\cite{Hajlasz} for a simpler case): \begin{theorem} \label{theoremDensityManifoldBethuel} If \(p < m\) and if \(\pi_{\floor{p}}(N^n) = \{0\}\), then \(C^\infty(\overline{Q}^m; N^n)\) is strongly dense in \(W^{1, p}(Q^m; N^n)\). 
\end{theorem} One important difficulty that we face going from \(W^{1, p}\) maps to \(W^{2, p}\) maps is that given two maps in \(W^{2, p}\) which coincide on the common boundary of their domains, their juxtaposition need not belong to \(W^{2, p}\) unless their normal derivatives coincide. The aim of this paper is to prove the counterpart of Theorem~\ref{theoremDensityManifoldBethuel} for higher-order Sobolev spaces: \begin{theorem} \label{theoremDensityManifoldMain} If \(kp < m\) and if \(\pi_{\floor{kp}}(N^n) = \{0\}\), then \(C^\infty(\overline{Q}^m; N^n)\) is strongly dense in \(W^{k, p}(Q^m; N^n)\). \end{theorem} Some results concerning strong density of smooth maps in higher-order Sobolev spaces have been known for any \(k\) when the target manifold \(N^n\) is the circle \(\S^1\) by Brezis and Mironescu~\citelist{\cite{Mironescu}*{Theorem~5}\cite{Brezis-Mironescu}*{Theorem~4}} and for \(kp < n\) when \(N^n\) is the \(n\) dimensional sphere \(\S^n\) by Escobedo~\cite{Escobedo}*{Theorem~2}; Hardt and Rivière~\cite{Hardt-Riviere} have recently announced a strong density result for maps in \(W^{2, 2}(B^5; S^3)\). \medskip For \(kp < m\) and \(\pi_{\floor{kp}}(N^n) \ne \{0\}\), smooth maps cannot be strongly dense in \(W^{k, p}(Q^m; N^n)\) due to a topological obstruction coming from the manifold \(N^n\). This is not the end of the story since one might try to approximate maps in \(W^{k, p}(Q^m; N^n)\) by maps which are smooth except for a small set. In order to understand how big this small set should be, let us come back to the remark following Theorem~\ref{theoremDensityManifoldNecessary} above. We have seen that for \(\ell = \floor{kp}\) and \(f \in C^\infty(\S^\ell; N^n)\), the map \(u : Q^m \to N^n\) defined by \eqref{exempleClasseR} need not be approximated in \(W^{k, p}(Q^m; N^n)\) by smooth maps if \(f\) does not have a continuous extension to \(\overline{B}^{\ell+1}\). 
In this case, \(u\) is smooth except on the \(m - \ell -1\) dimensional plane \(T= \{0'\} \times {\mathbb R}^{m - \ell - 1}\). This suggests that topological singularities of maps in \(W^{k, p}(Q^m; N^n)\) are carried on sets of dimension \(m - \floor{kp} -1\). We shall consider a class which contains such maps \(u\): \begin{definition} Given \(i \in \{0, \dotsc, m-1\}\), we denote by \(R_i(Q^m; N^n)\) the set of maps \(u : \overline{Q}^m \to N^n\) such that \begin{enumerate}[$(i)$] \item there exists a finite union \(T\) of \(i\) dimensional planes such that \(u\) is smooth on \(\overline{Q}^m \setminus T\), \item for every \(j \in {\mathbb N}_*\) and \(x \in \overline{Q}^m \setminus T\), \[ \abs{D^j u(x)} \le \dfrac{C}{\dist{(x, T)}^j} \] for some constant \(C \ge 0\) depending on \(u\) and \(j\). \end{enumerate} \end{definition} Note that for \(kp < m\), \[ R_{m-\floor{kp}-1}(Q^m; N^n) \subset W^{k, p}(Q^m; N^n). \] An important step in the proof of Theorem~\ref{theoremDensityManifoldMain} consists in showing that the class \(R_{m-\floor{kp}-1} (Q^m; N^n)\) is dense in \(W^{k, p}(Q^m; N^n)\) regardless of the topology of the manifold \(N^n\). \begin{theorem} \label{theoremDensityManifoldNontrivial} If \(kp < m\), then \(R_{m-\floor{kp}-1}(Q^m; N^n)\) is strongly dense in \(W^{k, p}(Q^m; N^n)\). \end{theorem} This theorem extends a result of Bethuel~\cite{Bethuel}*{Theorem 2} concerning the case \(k=1\). \medskip We explain the strategy of our proof of Theorem~\ref{theoremDensityManifoldNontrivial} under the additional assumption \(m-1 < kp < m\) for any \(k \in {\mathbb N}_*\). Given a decomposition of \(Q^m\) into cubes of size \(\eta > 0\), we distinguish between \emph{good cubes} and \emph{bad cubes}. This notion has been introduced by Bethuel~\cite{Bethuel}. 
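Let us record why, for \(kp < m\), the class \(R_{m-\floor{kp}-1}(Q^m; N^n)\) defined above is contained in \(W^{k, p}(Q^m; N^n)\). The following formal computation near a single \(i\) dimensional plane \(T\) with \(i = m - \floor{kp} - 1\) shows the point: writing \(r = \dist{(x, T)}\), the codimension of \(T\) is \(m - i = \floor{kp} + 1\), so that for every \(j \in \{1, \dotsc, k\}\),
\[
\int_{Q^m} \abs{D^j u}^p \le C^p \int_{Q^m} \frac{\,\mathrm{d} x}{\dist{(x, T)}^{jp}} \lesssim \int_0^1 \frac{r^{m - i - 1}}{r^{jp}} \,\mathrm{d} r < +\infty,
\]
since \(m - i - 1 - jp = \floor{kp} - jp > -1\) for every \(j \le k\).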
Given a map \(u \in W^{k, p}(Q^m; N^n)\) and a cube \(\sigma^m_\eta\) in \(Q^m\) of radius \(\eta > 0\), we say that \(\sigma^m_\eta\) is a \emph{good cube} if \[ \frac{1}{\eta^{m - k p}} \int_{\sigma^m_\eta} \abs{D u}^{k p} \lesssim 1, \] which means that \(u\) does not oscillate too much in \(\sigma^m_\eta\); otherwise \(\sigma^m_\eta\) is a \emph{bad cube}. The main steps in the proof are the following: \begin{compactdesc} \item[Opening] We construct a map \(u_\eta^\mathrm{op}\) which is continuous on a neighborhood of the \(m-1\) dimensional faces of the bad cubes, and equal to \(u\) elsewhere. This map, which takes its values into \(N^n\), is close to \(u\) with respect to the \(W^{k, p}\) distance since there are not too many bad cubes. This step requires that \(k p > m - 1\) in order that \(W^{k, p}\) maps be continuous on faces of dimension \(m-1\). The opening technique has been introduced by Brezis and Li~\cite{Brezis-Li} in order to study the homotopy classes of \(W^{1, p} (Q^m; N^n)\). \item[Adaptive smoothing] By convolution with a smooth kernel, we then construct a smooth map \(u_\eta^\mathrm{sm} \in W^{k, p}(Q^m; {\mathbb R}^\nu)\). The scale of convolution is chosen to be of the order of \(\eta\) on the good cubes, and close to zero in a neighborhood of the faces of the bad cubes. On the union of these sets, we are thus ensuring that \(u_\eta^\mathrm{sm}\) takes its values in a small neighborhood of \(N^n\). \item[Thickening] We propagate diffeomorphically the values of \(u_\eta^\mathrm{sm}\) near the faces of the bad cubes to the interior of these cubes. The resulting map \(u_\eta^\mathrm{th}\) coincides with \(u_\eta^\mathrm{sm}\) on the good cubes and near the faces of the bad cubes, is close to \(u\) with respect to the \(W^{k, p}\) distance and takes its values in a neighborhood of \(N^n\). This construction creates at most one singularity at the center of each bad cube. 
\end{compactdesc} The map obtained by projecting \(u_\eta^\mathrm{th}\) from a neighborhood of \(N^n\) into \(N^n\) itself belongs to the class \(R_{0}(Q^m; N^n)\) and converges strongly to \(u\) with respect to the \(W^{k, p}\) distance as \(\eta \to 0\). \medskip The sketch of the proof that we have given in a previous work \cite{Bousquet-Ponce-VanSchaftingen} for \(k=2\) and \(m-1 < 2p < m\) is based on the strategy above but it is organized differently, following \cite{Ponce-VanSchaftingen}. Lemma~B in \cite{Bousquet-Ponce-VanSchaftingen} corresponds to opening and thickening on bad balls whereas Lemma~G is a combination of opening and adaptive smoothing on good balls. Gastel and Nerf \cite{Gastel-Nerf} have developed an alternative to opening. In order to prove the counterpart of Lemma~G in \cite{Bousquet-Ponce-VanSchaftingen}, they have combined smoothing with gluing methods between \(W^{k, p}\) maps by interpolation. \medskip The proof of Theorem~\ref{theoremDensityManifoldMain} in the case \(m-1 \le kp < m\) relies on the fact that smooth maps are strongly dense in \(R_0(Q^m; N^n)\) with respect to the \(W^{k, p}\) distance when \(\pi_{m-1}(N^n) = \{0\}\) and \(kp < m\). The approximation of a map \(u \in R_0(Q^m; N^n)\) in this case goes as follows: \begin{compactdesc} \item[Continuous extension property] By the assumption on the homotopy group of \(N^n\), there exists a smooth map \(u^\text{ex}_\mu\) with values into \(N^n\) which coincides with \(u\) outside a neighborhood of radius \(\mu\eta\) of the singular set of \(u\). As a drawback, \(u^\text{ex}_\mu\) may be far from \(u\) with respect to the \(W^{k, p}\) distance. The role of this continuous extension property in the case of \(W^{1, p}\) approximation of maps \(u\) with higher dimensional singularities has been clarified by Hang-Lin~\cite{Hang-Lin}. 
\item[Shrinking] We propagate diffeomorphically the values of \(u^\text{ex}_\mu\) in the neighborhood of radius \(\mu\eta\) of each singularity of \(u\) into a smaller neighborhood of radius \(\tau\mu\eta\). Since \(kp < m\), we obtain a map \(u^\text{sh}_{\tau, \mu}\) which is still smooth but now close to \(u\) with respect to the \(W^{k, p}\) distance. This construction is reminiscent of thickening but does not create singularities. \end{compactdesc} The smooth map \(u^\text{sh}_{\tau, \mu}\) converges strongly to \(u\) with respect to the \(W^{k, p}\) distance as \(\tau \to 0\) and \(\mu \to 0\). \section{Tools for the proof of Theorem~\ref{theoremDensityManifoldNontrivial}} For \(a \in {\mathbb R}^m\) and \(r>0\), we denote by \(Q^{m}_r(a)\) the cube of radius \(r\) with center \(a\); by radius of the cube we mean half of the length of one of its sides. When \(a = 0\), we abbreviate \(Q_r^m = Q^{m}_r(0)\). \begin{definition} A family of closed cubes \(\mathcal{S}^m\) is a cubication of \(A \subset {\mathbb R}^m\) if all cubes have the same radius, if \(\bigcup\limits_{\sigma^m \in \mathcal{S}^m} \sigma^m = A\) and if for every \(\sigma^m_1, \sigma^m_2 \in \mathcal{S}^m\) which are not disjoint, \(\sigma^m_1 \cap \sigma^m_2\) is a common face of dimension \(i \in \{0, \dotsc, m\}\). \end{definition} The radius of a cubication is the radius of any of its cubes. \begin{definition} Given a cubication \(\mathcal{S}^m\) of \(A \subset {\mathbb R}^m\) and \(\ell \in \{0, \dotsc, m\}\), the skeleton of dimension \(\ell\) is the set \(\mathcal{S}^\ell\) of all \(\ell\) dimensional faces of all cubes in \(\mathcal{S}^m\). A subskeleton of dimension \(\ell\) of \(\mathcal{S}^m\) is a subset of \(\mathcal{S}^\ell\). \end{definition} Given a skeleton \(\mathcal{S}^\ell\), we denote by \(S^\ell\) the union of all elements of \(\mathcal{S}^\ell\), \[ S^\ell = \bigcup_{\sigma^\ell \in \mathcal{S}^\ell} \sigma^\ell. 
\] \subsection{Opening} For a given map \(u \in W^{k, p} (U^m; {\mathbb R}^\nu)\) on some subskeleton \(\mathcal{U}^m\) and for any \(\ell \in \{0, \dots, m-1\}\), we are going to construct a map \(u \circ \Phi \in W^{k, p} (U^m; {\mathbb R}^\nu)\) which is constant along the normals to \(U^\ell\) in a neighborhood of \(U^\ell\). In this region, the map \(u \circ \Phi\) will thus be essentially a \(W^{k, p}\) map of \(\ell\) variables. Hence, if \(k p > \ell\), then \(u \circ \Phi\) will be continuous there, whereas in the critical case \(\ell = kp\), the map \(u \circ \Phi\) need not be continuous but will still have vanishing mean oscillation. In this construction the map \(\Phi\) depends on \(u\) and is never injective. This idea of opening a map has been inspired by a similar construction of Brezis and Li~\cite{Brezis-Li}. Given a map \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\), we denote by \(\Supp{\Phi}\) the \emph{geometric support of \(\Phi\)}, namely the closure of the set \(\{x \in {\mathbb R}^m : \Phi(x) \ne x\}\). This should not be confused with the analytic support \(\supp{\varphi}\) of a function \(\varphi:{\mathbb R}^m\to {\mathbb R}\) which is the closure of the set \(\{x \in {\mathbb R}^m : \varphi(x) \ne 0\}\). \begin{proposition}\label{openingpropGeneral} Let \(\ell \in \{0, \dotsc, m-1\}\), \(\eta > 0\), \(0 < \rho < \frac{1}{2}\), and \(\mathcal{U}^\ell\) be a subskeleton of \({\mathbb R}^m\) of radius \(\eta\). 
Then, for every \(u\in W^{k, p}(U^\ell + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\), there exists a smooth map \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) such that \begin{enumerate}[$(i)$] \item for every \(i \in \{0, \dotsc, \ell\}\) and for every \(\sigma^i \in \mathcal{U}^i\), \(\Phi\) is constant on the \(m-i\) dimensional cubes of radius \(\rho\eta\) which are orthogonal to \(\sigma^i\), \label{itemgenopeningprop1} \item \(\Supp{\Phi} \subset U^\ell + Q^m_{2\rho\eta}\) and \(\Phi(U^\ell + Q^m_{2\rho\eta}) \subset U^\ell + Q^m_{2\rho\eta}\), \label{itemgenopeningprop4} \item \(u \circ \Phi \in W^{k, p}(U^\ell + Q^m_{2\rho\eta}; {\mathbb R}^{\nu})\), \label{itemgenopeningprop5} and for every \(j \in \{1, \dotsc, k\}\), \begin{equation*} \eta^{j} \norm{D^j(u \circ \Phi)}_{L^p(U^\ell + Q^m_{2\rho\eta})} \leq C \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^\ell + Q^m_{2\rho\eta})}, \end{equation*} for some constant \(C > 0\) depending on \(m\), \(k\), \(p\) and \(\rho\), \item for every \(\sigma^\ell \in \mathcal{U}^\ell\) and for every \(j \in \{1, \dotsc, k\}\), \begin{equation*} \eta^{j} \norm{D^j(u \circ \Phi)}_{L^p(\sigma^\ell + Q^m_{2\rho\eta})} \leq C' \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(\sigma^\ell + Q^m_{2\rho\eta})}, \end{equation*} for some constant \(C' > 0\) depending on \(m\), \(k\), \(p\) and \(\rho\). \label{itemgenopeningprop6} \end{enumerate} \end{proposition} In the case of \(W^{2, p}\) maps, the quantity \(\norm{D(u \circ \Phi)}_{L^p}\) can be estimated in terms of \(\norm{Du}_{L^p}\); hence there is no explicit dependence on \(\eta\). However, concerning the second-order term, the estimate in \((\ref{itemgenopeningprop5})\) reads as \begin{equation*} \norm{D^2 (u \circ \Phi)}_{L^p({U^\ell+Q^m_{2\rho\eta}})} \leq C \norm{D^2 u}_{L^p({U^\ell+Q^m_{2\rho\eta}})} + \frac{C}{\eta} \norm{Du}_{L^p({U^\ell+Q^m_{2\rho\eta}})}. 
\end{equation*} The factor \(\frac{1}{\eta}\), which comes naturally from a scaling argument, is one of the differences with respect to the opening of \(W^{1, p}\) maps. In the proof of Theorem~\ref{theoremDensityManifoldMain}, we shall use the Gagliardo-Nirenberg interpolation inequality to deal with this extra term. Since the map \(u\) in the statement is defined almost everywhere, the map \(u \circ \Phi\) need not be well-defined by standard composition of maps. By \(u \circ \Phi\), we mean a map \(v\) in \(W^{k, p}\) such that there exists a sequence of smooth maps \((u_n)_{n \in {\mathbb N}}\) converging to \(u\) in \(W^{k, p}\) such that \((u_n \circ \Phi)_{n \in {\mathbb N}}\) converges to \(v\) in \(W^{k, p}\). By pointwise convergence, this map \(u \circ \Phi\) inherits several properties of \(\Phi\) and of \(u\). For instance, if \(\Phi\) is constant in a neighborhood of some point \(a\), then so is \(u \circ \Phi\). One can show that, under some assumptions on \(\Phi\) which are satisfied in all the cases that we consider, \(u \circ \Phi\) does not depend on the sequence \((u_n)_{n \in {\mathbb N}}\), but we shall not make use of this fact. The only property we shall need from \(u \circ \Phi\) is that its essential range is contained in the essential range of \(u\); this is actually the case in view of Lemma~\ref{lemmaOpeningLp}~\((\ref{OpeningLp2})\) below. In particular, \emph{if \(u\) is a map with values into the manifold \(N^n\), then \(u \circ \Phi\) is also a map with values into \(N^n\).} The following proposition is the main tool in the proof of Proposition~\ref{openingpropGeneral}. \begin{proposition}\label{openinglemmaGeneral} Let \(\ell \in \{0, \dotsc, m-1\}\), \(\eta > 0\), \(0 < \underline{\rho} < \overline{\rho}\) and \(A \subset {\mathbb R}^\ell\) be an open set. 
For every \(u\in W^{k, p}(A \times Q_{\overline{\rho}\eta}^{m - \ell}; {\mathbb R}^\nu)\), there exists a smooth map \(\zeta : {\mathbb R}^{m - \ell} \to {\mathbb R}^{m - \ell}\) such that \begin{enumerate}[$(i)$] \item \(\zeta\) is constant in \(Q_{\underline{\rho}\eta}^{m - \ell}\), \label{itemopeninglemmaGeneral1} \item \(\Supp{\zeta} \subset Q_{\overline{\rho}\eta}^{m - \ell}\) and \(\zeta(Q_{\overline{\rho}\eta}^{m - \ell}) \subset Q_{\overline{\rho}\eta}^{m - \ell}\), \label{itemopeninglemmaGeneral2} \label{itemgenopeninglemma1} \item if \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) is defined for every \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) by \[ \Phi(x) = (x', \zeta(x'')) \] then \(u \circ \Phi \in W^{k, p}(A \times Q_{\overline{\rho}\eta}^{m - \ell}; {\mathbb R}^{\nu})\), and for every \(j \in \{1, \dotsc, k\}\), \label{itemopeninglemmaGeneral3} \begin{equation*} \eta^{j} \norm{D^j(u \circ \Phi)}_{L^p({A \times Q^{m-\ell}_{\overline{\rho}\eta}})} \leq C \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p({A \times Q^{m-\ell}_{\overline{\rho}\eta}})}, \end{equation*} for some constant \(C > 0\) depending on \(m\), \(k\), \(p\), \(\underline{\rho}\) and \(\overline{\rho}\). \end{enumerate} \end{proposition} We will temporarily accept this proposition and we prove the main result of the section: \begin{proof}[Proof of Proposition~\ref{openingpropGeneral}] We first take a finite sequence \((\rho_i)_{0 \le i \le \ell}\) such that \[ \rho = \rho_\ell < \ldots < \rho_i < \ldots < \rho_0 < 2\rho. 
\] We construct by induction on \(i \in \{0, \dots, \ell\}\) a map \(\Phi^i: {\mathbb R}^m\to {\mathbb R}^m\) such that \begin{enumerate}[(a)] \item for every \(r \in \{0, \dots, i\}\) and every \(\sigma^r \in \mathcal{U}^r\), \(\Phi^{i}\) is constant on the \(m-r\) dimensional cubes of radius \(\rho_i\eta\) which are orthogonal to \(\sigma^r\), \label{openingrecursive1} \item \(\Supp{\Phi^i}\subset U^i+Q^{m}_{2\rho\eta}\) and \(\Phi^i(U^i+Q^{m}_{2\rho\eta})\subset U^i+Q^{m}_{2\rho\eta}\), \label{openingrecursive2} \item \(u\circ \Phi^i \in W^{k,p}(U^\ell+Q^{m}_{2\rho\eta}; {\mathbb R}^{\nu})\), \label{openingrecursive3} \item for every \(\sigma^i \in \mathcal{U}^i\) and for every \(j \in \{1, \dotsc, k\}\), \label{openingrecursive4} \begin{equation*} \eta^{j} \norm{D^j(u \circ \Phi^i)}_{L^p(\sigma^i+Q^{m}_{2\rho\eta})} \leq C \sum_{\alpha =1}^j \eta^{\alpha} \norm{D^\alpha u}_{L^p(\sigma^i+Q^{m}_{2\rho\eta})}, \end{equation*} for some constant \(C > 0\) depending on \(m\), \(k\), \(p\) and \(\rho\). \end{enumerate} The map \(\Phi^\ell\) will satisfy the conclusion of the proposition. \medskip If \(i = 0\), then \(\mathcal{U}^0\) consists of all vertices of cubes in \(\mathcal{U}^m\). To construct \(\Phi^0\), we apply Proposition~\ref{openinglemmaGeneral} to the map \(u\) around each \(\sigma^0 \in \mathcal{U}^0\) with parameters \(\rho_0 < 2\rho\) and \(\ell = 0\): in this case, the set \(A \times Q_{\overline{\rho}\eta}^{m - \ell}\) in Proposition~\ref{openinglemmaGeneral} is simply \(Q^m_{2\rho\eta}\). This gives a map \(\Phi^0\) such that for every \(\sigma^0 \in \mathcal{U}^0\), \(\Phi^0\) is constant on \(\sigma^0 + Q^m_{\rho_0 \eta}\) and \(\Phi^0=\mathrm{Id}\) outside \(U^0+Q^m_{2\rho \eta}\). 
Moreover, \(u\circ\Phi^0\in W^{k,p}(U^\ell+Q^m_{2\rho\eta}; {\mathbb R}^{\nu})\) and for every \(\sigma^0 \in \mathcal{U}^0\) and for every \(j \in \{1, \dotsc, k\}\), \begin{equation*} \eta^{j} \norm{D^j(u \circ \Phi^0)}_{L^p(\sigma^0 + Q^{m}_{2\rho\eta})} \leq C \sum_{\alpha =1}^j \eta^{\alpha} \norm{D^\alpha u}_{L^p(\sigma^0 + Q^{m}_{2\rho\eta})}. \end{equation*} Assume that the maps \(\Phi^0,\dotsc, \Phi^{i-1}\) have been constructed. To define \(\Phi^i\), we apply Proposition~\ref{openinglemmaGeneral} to the map \(u\circ \Phi^{i-1}\) around each \(\sigma^i \in \mathcal{U}^i\) with parameters \(\rho_{i} < \rho_{i-1}\). This gives a smooth map \(\Phi_{\sigma^i} : {\mathbb R}^m \to {\mathbb R}^m\) such that \(\Phi_{\sigma^i}\) is constant on the \(m-i\) dimensional cubes of radius \(\rho_i\eta\) which are orthogonal to \(\sigma^i\). Let \(\Phi^i : {\mathbb R}^m \to {\mathbb R}^m\) be defined for \(x \in {\mathbb R}^m\) by \[ \Phi^i(x)= \begin{cases} \Phi^{i-1}(\Phi_{\sigma^i}(x)) &\text{if } x\in \sigma^i + Q^{m}_{\rho_{i-1}\eta},\\ \Phi^{i-1}(x)&\text{otherwise.} \end{cases} \] We first explain why \(\Phi^i\) is well-defined. For this purpose, let \[ x \in (\sigma^i_1 + Q^{m}_{{\rho}_{i-1}\eta})\cap (\sigma^i_2 + Q^{m}_{{\rho}_{i-1}\eta}) \] for some \(\sigma^i_1 \in \mathcal{U}^i\) and \(\sigma^i_2 \in \mathcal{U}^i\). If \(\sigma^i_1\not=\sigma^i_2\), then \(\sigma^i_1\cap \sigma^i_2=\tau^r\) for some \(\tau^r \in \mathcal{U}^{r}\) with \(r \in \{0, \dots, i-1\}\) and \[ (\sigma^i_1 + Q^{m}_{\rho_{i-1}\eta})\cap (\sigma^i_2 + Q^{m}_{\rho_{i-1}\eta})\subset \tau^r + Q^{m}_{\rho_{i-1}\eta}. \] By the formula of \(\Phi_{\sigma^i_j}\) given in Proposition~\ref{openinglemmaGeneral}, \(x\), \(\Phi_{\sigma^i_1}(x)\) and \(\Phi_{\sigma^i_2}(x)\) belong to the same \(m-r\) dimensional cube of radius \(\rho_{i-1}\eta\) which is orthogonal to \(\tau^r\). 
Since by induction hypothesis \(\Phi^{i-1}\) is constant on the \(m-r\) dimensional cubes of radius \(\rho_{i-1}\eta\) which are orthogonal to \(\tau^r\), \[ \Phi^{i-1}(\Phi_{\sigma^i_1}(x)) = \Phi^{i-1}(\Phi_{\sigma^i_2}(x)). \] This implies that \(\Phi^i\) is well-defined. Moreover, \(\Phi^i\) is smooth and satisfies properties \eqref{openingrecursive1}--\eqref{openingrecursive3}. We prove the estimates given by \eqref{openingrecursive4}. If \(e_1, \dots, e_m\) is an orthonormal basis of \({\mathbb R}^m\) compatible with the cubication \(\mathcal{U}^m\), then by abuse of notation we denote by \(\sigma^i \times Q_{\alpha\eta}^{m - i}\) the parallelepiped given by \[ \Big\{ x + \sum_{s = 1}^{m - i} t_s e_{r_s} : x \in \sigma^i \text{ and } \abs{t_s} \le \alpha\eta \Big\}, \] where \(e_{r_1}, \dotsc, e_{r_{m - i}}\) are orthogonal to \(\sigma^i\). Note that for every \(\sigma^i \in \mathcal{U}^i\), \[ \sigma^i + Q^m_{2\rho\eta} = (\sigma^i \times Q^{m-i}_{2\rho\eta}) \cup (\partial\sigma^i + Q^m_{2\rho\eta}), \] where \(\partial\sigma^i\) denotes the \(i-1\) dimensional skeleton of \(\sigma^i\). By property~\((\ref{itemopeninglemmaGeneral3})\) of Proposition~\ref{openinglemmaGeneral}, \[ \int\limits_{\sigma^i \times Q^{m-i}_{\rho_{i-1}\eta}} \eta^{jp} \abs{D^j (u\circ \Phi^{i})}^p \le C_1 \sum_{\alpha = 1}^{j} \int\limits_{\sigma^i \times Q^{m - i}_{\rho_{i-1}\eta}} \eta^{\alpha p}\abs{D^\alpha (u\circ \Phi^{i-1})}^p. \] By property~\((\ref{itemopeninglemmaGeneral2})\) of Proposition~\ref{openinglemmaGeneral}, \(\Phi^{i} = \Phi^{i-1}\) on \((\sigma^i + Q^{m}_{2\rho\eta}) \setminus (\sigma^i \times Q^{m-i}_{\rho_{i-1}\eta})\). Thus, by additivity of the integral, we get \[ \int\limits_{\sigma^i + Q^{m}_{2\rho\eta}} \eta^{jp} \abs{D^j (u\circ \Phi^{i})}^p \le C_3 \sum_{\alpha = 1}^{j} \int\limits_{\sigma^i + Q^{m}_{2\rho\eta}} \eta^{\alpha p}\abs{D^\alpha (u\circ \Phi^{i-1})}^p. 
\] Since by induction hypothesis \(\Phi^{i-1}\) coincides with the identity map outside \(U^{i-1}+Q^m_{2\rho\eta}\), for every \(\alpha \in \{1, \dots, j\}\) we have \begin{multline*} \int\limits_{\sigma^i + Q^m_{2\rho\eta}} \eta^{\alpha p}\abs{D^\alpha (u\circ \Phi^{i-1})}^p \\ = \int\limits_{\partial\sigma^i + Q^m_{2\rho\eta}} \eta^{\alpha p}\abs{D^\alpha (u\circ \Phi^{i-1})}^p +\int\limits_{(\sigma^i + Q^m_{2\rho\eta}) \setminus (\partial\sigma^i + Q^m_{2\rho\eta})} \eta^{\alpha p}\abs{D^\alpha u}^p. \end{multline*} By induction hypothesis, for every \(i-1\) dimensional face \(\tau^{i-1}\) of \(\partial\sigma^i\), \[ \int\limits_{\tau^{i-1} + Q^{m}_{2\rho\eta}} \eta^{\alpha p} \abs{D^\alpha (u\circ \Phi^{i-1})}^p \le C_4 \sum_{\beta = 1}^{\alpha} \, \int\limits_{\tau^{i-1} + Q^{m}_{2\rho\eta}} \eta^{\beta p}\abs{D^\beta u}^p. \] Since the number of overlaps of the sets \(\tau^{i-1} + Q^{m}_{2\rho\eta}\) is bounded from above by a constant only depending on \(m\), we have by additivity of the integral, \[ \int\limits_{\partial\sigma^i + Q^m_{2\rho\eta}} \eta^{\alpha p}\abs{D^\alpha (u\circ \Phi^{i-1})}^p \le C_5 \sum_{\beta = 1}^{\alpha} \, \int\limits_{\partial\sigma^i + Q^{m}_{2\rho\eta}} \eta^{\beta p}\abs{D^\beta u}^p. \] Therefore, \[ \int\limits_{\sigma^i + Q^m_{2\rho\eta}} \eta^{j p}\abs{D^j (u \circ \Phi^i)}^p \le C_6 \sum_{\alpha = 1}^{j} \int\limits_{\sigma^i + Q^{m}_{2\rho\eta}} \eta^{\alpha p}\abs{D^\alpha u}^p. \] The map \(\Phi^\ell\) satisfies properties \((\ref{itemgenopeningprop1})\)--\((\ref{itemgenopeningprop6})\). The estimate of property~\((\ref{itemgenopeningprop5})\) is a consequence of \((\ref{itemgenopeningprop6})\) and the additivity of the integral. \end{proof} We now turn to the proof of Proposition~\ref{openinglemmaGeneral}; we first make precise the meaning of \(u \circ \Phi\) in the statement.
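To illustrate the kind of averaged \(L^1\) bound that will appear in the next lemma, we may keep in mind the model case of translations; this example is only an illustration and is not used in the sequel. If \(\Psi(x, z) = x + z\) with \(U + V \subset W\), then for every measurable function \(g : W \to {\mathbb R}\), the change of variables \(y = x + z\) gives \[ \int\limits_{V} \biggl(\int\limits_{U} \abs{g(x + z)} \,\mathrm{d} x \biggr) \,\mathrm{d} z = \int\limits_{V} \biggl(\int\limits_{U + z} \abs{g(y)} \,\mathrm{d} y \biggr) \,\mathrm{d} z \le \abs{V} \int\limits_{W} \abs{g}, \] so that the averaged \(L^1\) bound holds with \(C = \abs{V}\).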
Given a function \(\Psi : U \times V \to W\) and \(z \in V\), we denote by \(\Psi_z : U \to W\) the map defined for every \(x \in U\) by \[ \Psi_z(x) = \Psi(x, z). \] For every measurable function \(g : W \to {\mathbb R}\), the composition \(g \circ \Psi_z\) is well-defined and gives a measurable function defined on \(U\) for every \(z\). \begin{lemma} \label{lemmaOpeningLp} Let \(U, W \subset {\mathbb R}^m\) and \(V \subset {\mathbb R}^l\) be measurable sets and let \(\Psi : U \times V \to W\) be a continuous map such that for every measurable function \(g : W \to {\mathbb R}\), \[ \int\limits_V \norm{g \circ \Psi_z}_{L^1(U)} \,\mathrm{d} z\le C \norm{g}_{L^1(W)}. \] If \(u \in L^p(W; {\mathbb R}^\nu)\) and if \((u_n)_{n \in {\mathbb N}}\) is a sequence of measurable functions converging to \(u\) in \(L^p(W; {\mathbb R}^\nu)\), then there exists a subsequence \((u_{n_i})_{i \in {\mathbb N}}\) such that for almost every \(z \in V\), \begin{enumerate}[$(i)$] \item the sequence \((u_{n_i} \circ \Psi_z)_{i \in {\mathbb N}}\) converges in \(L^p(U; {\mathbb R}^\nu)\) to a function which we denote by \(u \circ \Psi_{z}\), \label{OpeningLp1} \item the essential range of \(u \circ \Psi_{z}\) is contained in the essential range of \(u\). \label{OpeningLp2} \end{enumerate} \end{lemma} \begin{proof} Let \((u_n)_{n \in {\mathbb N}}\) be a sequence of measurable functions in \(W\) converging to \(u\) in \(L^p(W; {\mathbb R}^\nu)\). Given a sequence \((\varepsilon_n)_{n \in {\mathbb N}}\) of positive numbers, let \((u_{n_i})_{i \in {\mathbb N}}\) be a subsequence such that for every \(i \in {\mathbb N}\), \[ \norm{u_{n_{i + 1}} - u_{n_i}}_{L^p(W)} \le \varepsilon_i. \] By the assumption on \(\Psi\), \[ \int\limits_V \norm{u_{n_{i + 1}} \circ \Psi_{z} - u_{n_i} \circ \Psi_{z}}_{L^p(U)}^p \,\mathrm{d} z \le C \norm{u_{n_{i + 1}} - u_{n_i}}_{L^p(W)}^p \le C \varepsilon_i^p.
\] Given a sequence \((\alpha_n)_{n \in {\mathbb N}}\) of positive numbers, let \[ Y_i = \Big\{ z \in V : \norm{u_{n_{i + 1}} \circ \Psi_{z} - u_{n_i} \circ \Psi_{z}}_{L^p(U)} > \alpha_i \Big\}. \] If the series \(\sum\limits_{i = 0}^\infty \alpha_i\) converges, then for every \(t \in {\mathbb N}\) and for every \(z \not\in \bigcup\limits_{i = t}^\infty Y_i\), the sequence \((u_{n_i} \circ \Psi_{z})_{i \in {\mathbb N}}\) is a Cauchy sequence in \(L^p(U; {\mathbb R}^\nu)\). By the Chebyshev inequality, \[ \alpha_i^p |Y_i| \le \int\limits_{Y_i} \norm{u_{n_{i + 1}} \circ \Psi_{z} - u_{n_i} \circ \Psi_{z}}_{L^p(U)}^p \,\mathrm{d} z \le C \varepsilon_i^p. \] Hence, for every \(t \in {\mathbb N}\), \[ \Big| \textstyle \bigcup\limits_{i = t}^\infty Y_i\Big| \le C \displaystyle\sum\limits_{i = t}^\infty \Big(\frac{\varepsilon_i}{\alpha_i}\Big)^p. \] Choosing the sequences \((\varepsilon_n)_{n \in {\mathbb N}}\) and \((\alpha_n)_{n \in {\mathbb N}}\) such that both series \(\sum\limits_{i = 0}^\infty \alpha_i\) and \(\sum\limits_{i = 0}^\infty (\varepsilon_i/\alpha_i)^p\) converge (for instance \(\alpha_i = 2^{-i}\) and \(\varepsilon_i = 4^{-i}\)), the set \(E = \bigcap\limits_{t = 0}^\infty \bigcup\limits_{i = t}^\infty Y_i\) is negligible and for every \(z \in V \setminus E\), \((u_{n_i} \circ \Psi_{z})_{i \in {\mathbb N}}\) is a Cauchy sequence in \(L^p(U; {\mathbb R}^\nu)\). This proves assertion \((\ref{OpeningLp1})\). \medskip It suffices to prove assertion \((\ref{OpeningLp2})\) when \(W\) has finite Lebesgue measure. For every \(z \in V \setminus E\), we denote by \(u \circ \Psi_z\) the limit in \(L^p(U; {\mathbb R}^\nu)\) of the sequence \((u_{n_i} \circ \Psi_{z})_{i \in {\mathbb N}}\). Let \(\theta : {\mathbb R}^\nu\to {\mathbb R}\) be a continuous function such that \( \theta^{-1}(0)\) is equal to the essential range of \(u\) and \(0 \le \theta \le 1\) in \({\mathbb R}^\nu\).
For every \(i \in {\mathbb N}\), \[ \int\limits_V \norm{ \theta \circ (u_{n_i} \circ \Psi_z)}_{L^1(U)} \,\mathrm{d} z \leq C \norm{ \theta \circ u_{n_i} }_{L^1(W)}. \] By Fatou's lemma, \[ \int\limits_V \norm{\theta \circ (u \circ \Psi_z)}_{L^1(U)} \,\mathrm{d} z \le \liminf_{i \to \infty} \int\limits_V \norm{ \theta \circ (u_{n_i} \circ \Psi_z)}_{L^1(U)} \,\mathrm{d} z. \] Since \(W\) has finite Lebesgue measure and \(\theta\) is bounded, as \(i\) tends to infinity we get \[ \int\limits_V \norm{\theta \circ (u \circ \Psi_z)}_{L^1(U)} \,\mathrm{d} z \le C \norm{ \theta \circ u}_{L^1(W)} = 0. \] Therefore, for almost every \(z \in V\), \(\norm{\theta \circ (u \circ \Psi_z)}_{L^1(U)} = 0\), whence the essential range of \(u \circ \Psi_z\) is contained in the essential range of \(u\). \end{proof} From the previous lemma, we can prove the following property for maps in \(W^{k, p}\): \begin{lemma} \label{lemmaOpeningSobolev} Let \(U, W \subset {\mathbb R}^m\) and \(V \subset {\mathbb R}^l\) be open sets and let \(\Psi : U \times V \to W\) be a smooth map such that for every measurable function \(g : W \to {\mathbb R}\), \[ \int\limits_V \norm{g \circ \Psi_z}_{L^1(U)} \,\mathrm{d} z\le C \norm{g}_{L^1(W)}. \] If \(u \in W^{k, p}(W; {\mathbb R}^\nu)\) and if \((u_n)_{n \in {\mathbb N}}\) is a sequence of smooth functions converging to \(u\) in \(W^{k, p}(W; {\mathbb R}^\nu)\), then there exists a subsequence \((u_{n_i})_{i \in {\mathbb N}}\) such that for almost every \(z \in V\) the sequence \((u_{n_i} \circ \Psi_z)_{i \in {\mathbb N}}\) converges to \(u \circ \Psi_{z}\) in \(W^{k, p}(U; {\mathbb R}^\nu)\), and for every \(j \in \{1, \dots, k\}\), \[ \int\limits_V \norm{D^j (u \circ \Psi_z)}_{L^p(U)} \,\mathrm{d} z \leq C' \abs{V}^{1 - \frac{1}{p}} \sum_{i=1}^j \norm{D^i u}_{L^p(W)}, \] for some constant \(C' > 0\) depending on \(m\), \(p\), \(k\), \(C\) and \(\max\limits_{1 \le j \le k}\sup\limits_{z \in V} {\norm{D^j \Psi_z}_{L^\infty(U)}}\). 
\end{lemma} \begin{proof} Let \((u_n)_{n \in {\mathbb N}}\) be a sequence of smooth functions in \(W^{k, p}(W; {\mathbb R}^\nu)\) converging to \(u\) in \(W^{k, p}(W; {\mathbb R}^\nu)\). By the previous lemma, there exists a subsequence \((u_{n_i})_{i \in {\mathbb N}}\) such that for almost every \(z \in V\), \((u_{n_i} \circ \Psi_z)_{i \in {\mathbb N}}\) converges to \(u \circ \Psi_z\) in \(L^p\) and for every \(j \in \{1, \dots, k\}\), \(((D^j u_{n_i}) \circ \Psi_z)_{i \in {\mathbb N}}\) converges to \((D^j u) \circ \Psi_z\) in \(L^p\). For every \(v\in C^{\infty}(W; {\mathbb R}^\nu)\), for every \(z \in V\) and for each \(j \in \{1, \dots, k\}\), \[ \begin{split} \abs{D^j (v\circ \Psi_{z}) (x)} & \le C_1 \sum_{i = 1}^j \sum_{\substack{1 \le t_1 \le \ldots \le t_i\\ t_1 + \dots + t_i = j}}{\abs{D^i v(\Psi_z(x))} \abs{D^{t_1}\Psi_z(x)} \dotsm \abs{D^{t_i} \Psi_z(x)} } \\ & \le C_2 \sum_{i = 1}^j \abs{D^i v (\Psi_z(x))}, \end{split} \] whence \[ \norm{D^j (v\circ \Psi_{z})}_{L^p(U)}^p \leq C_3 \sum_{i=1}^j \norm{ \abs{D^i v}^p \circ \Psi_z}_{L^1(U)}. \] This implies that for almost every \(z \in V\), \((u_{n_i} \circ \Psi_z)_{i \in {\mathbb N}}\) is a Cauchy sequence in \(W^{k, p}(U; {\mathbb R}^\nu)\), thus \((u_{n_i} \circ \Psi_z)_{i \in {\mathbb N}}\) converges to \(u \circ \Psi_z\) in \(W^{k, p}(U; {\mathbb R}^\nu)\). Moreover, integrating with respect to \(z\) the above estimate and using the assumption on \(\Psi\) we get \[ \begin{split} \int\limits_V \norm{D^j (v\circ \Psi_{z})}_{L^p(U)}^p \,\mathrm{d} z & \leq C_3 \sum_{i=1}^j \int\limits_V \norm{\abs{D^i v}^p \circ \Psi_z}_{L^1(U)} \,\mathrm{d} z\\ & \le C_4 \sum_{i=1}^j \norm{\abs{D^i v}^p}_{L^1(W)} = C_4 \sum_{i=1}^j \norm{D^i v}_{L^p(W)}^p. 
\end{split} \] Thus, by Hölder's inequality, \[ \begin{split} \int\limits_V \norm{D^j (v\circ \Psi_{z})}_{L^p(U)} \,\mathrm{d} z & \le \abs{V}^{1 - \frac{1}{p}} \biggl( C_4 \sum_{i=1}^j \norm{D^i v}_{L^p(W)}^p \biggr)^{\frac{1}{p}}\\ & \le C_5 \abs{V}^{1 - \frac{1}{p}} \sum_{i=1}^j \norm{D^i v}_{L^p(W)}. \end{split} \] We obtain the desired estimate by taking \(v=u_{n_i}\) and letting \(i\) tend to infinity. \end{proof} We now show that the functional estimate in Lemmas~\ref{lemmaOpeningLp} and~\ref{lemmaOpeningSobolev} is satisfied for maps \(\Psi\) of the form \[ \Psi(x, z) = \zeta(x + z) - z. \] \begin{lemma} \label{lemmaOpeningEstimate} Let \(U, V, W \subset {\mathbb R}^l\) be measurable sets and let \(\zeta : U + V \to {\mathbb R}^l\) be a continuous map such that for every \(x \in U\) and for every \(z \in V\), \(\zeta (x + z) - z \in W\). Then, for every measurable function \(g : W \to {\mathbb R}\), \begin{equation*} \int\limits_{V} \biggl(\int\limits_{U} \abs{g(\zeta(x + z) - z)} \,\mathrm{d} x \biggr) \,\mathrm{d} z \le |U + V| \int\limits_{W} \abs{g(x)} \,\mathrm{d} x. \end{equation*} \end{lemma} \begin{proof} Let \(\xi : U \times V \to {\mathbb R}^l\) be the function defined by \[ \xi(x, z) = \zeta(x + z) - z. \] By Fubini's theorem, \[ \int\limits_{V} \biggl(\int\limits_{U} |(g \circ \xi)(x, z)| \,\mathrm{d} x \biggr) \,\mathrm{d} z = \int\limits_{U} \biggl( \int\limits_{V} |g(\zeta(x + z) - z)| \,\mathrm{d} z \biggr) \,\mathrm{d} x. \] Applying the change of variables \(\tilde z = x + z\) in the variable \(z\) and Fubini's theorem, \begin{equation*} \begin{split} \int\limits_{V} \biggl(\int\limits_{U} |(g \circ \xi)(x, z)| \,\mathrm{d} x \biggr) \,\mathrm{d} z & = \int\limits_{U} \biggl( \int\limits_{x + V} |g(\zeta(\tilde z) + x - \tilde z)| \,\mathrm{d} \tilde z \biggr) \,\mathrm{d} x\\ &= \int\limits_{U + V} \biggl( \int\limits_{(\tilde z - V) \cap U} |g(\zeta(\tilde z) + x - \tilde z)| \,\mathrm{d} x \biggr) \,\mathrm{d} \tilde z.
\end{split} \end{equation*} We now apply the change of variables \(\tilde x = \zeta(\tilde z) + x - \tilde z\) in the variable \(x\), and use the assumption on \(W\) to conclude \begin{equation*} \begin{split} \int\limits_{V} \biggl(\int\limits_{U} |(g \circ \xi)(x, z)| \,\mathrm{d} x \biggr) \,\mathrm{d} z &= \int\limits_{U + V} \biggl( \int\limits_{\zeta(\tilde z) -(V \cap (\Tilde z-U))} |g(\tilde x)| \,\mathrm{d} \tilde x \biggr) \,\mathrm{d} \tilde z \\ &\le \int\limits_{U + V} \biggl( \int\limits_{W} |g(\tilde x)| \,\mathrm{d} \tilde x \biggr) \,\mathrm{d} \tilde z\\ &= |U + V| \int\limits_{W} |g(\tilde x)| \,\mathrm{d} \tilde x. \end{split} \end{equation*} This gives the desired estimate. \end{proof} \begin{proof}[Proof of Proposition~\ref{openinglemmaGeneral}] By scaling, it suffices to establish the result when \(\eta = 1\). We fix \(\Hat{\rho}\) such that \(2\Hat{\rho} < \overline{\rho} - \underline{\rho}\). Let \(\tilde\zeta : {\mathbb R}^{m-\ell} \to {\mathbb R}^{m-\ell}\) be the smooth map defined by \[ \tilde\zeta(y) = (1 - \varphi(y))y, \] where \(\varphi : {\mathbb R}^{m-\ell} \to [0, 1]\) is a smooth function such that \begin{enumerate}[\(-\)] \item for \(y \in Q_{\underline{\rho}+\Hat{\rho}}^{m-\ell}\), \(\varphi(y) = 1\), \item for \(y \in {\mathbb R}^{m - \ell} \setminus Q_{\overline{\rho}-\Hat{\rho}}^{m-\ell}\), \(\varphi(y) = 0\). \end{enumerate} For any \(z \in Q^{m-\ell}_{\hat\rho}\), the function \(\zeta : {\mathbb R}^{m-\ell} \to {\mathbb R}^{m-\ell}\) defined for \(x'' \in {\mathbb R}^{m-\ell}\) by \[ \zeta(x'') = \Tilde\zeta(x'' + z) - z \] satisfies properties \((\ref{itemopeninglemmaGeneral1})\)--\((\ref{itemopeninglemmaGeneral2})\). We claim that for some \(z \in Q^{m-\ell}_{\hat\rho}\), the function \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) by \[ \Phi(x) = (x', \zeta(x'')) \] satisfies property~\((\ref{itemopeninglemmaGeneral3})\). 
For this purpose, let \(\Psi : {\mathbb R}^m \times Q_{\Hat{\rho}}^{m-\ell} \to {\mathbb R}^m\) be the function defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m-\ell}\) and \(z \in Q_{\Hat{\rho}}^{m-\ell}\) by \[ \Psi(x, z) = (x', \Tilde \zeta(x''+z)-z) . \] For every measurable function \(f : A \times Q_{\overline{\rho}}^{m-\ell} \to {\mathbb R}\), we have by Fubini's theorem, \begin{multline*} \int\limits_{Q_{\Hat{\rho}}^{m-\ell}} \norm{ f \circ \Psi_z}_{L^1(A \times Q_{\overline{\rho}}^{m-\ell})} \,\mathrm{d} z\\ = \int\limits_A \biggl[\int\limits_{Q_{\Hat{\rho}}^{m-\ell}} \biggl(\int\limits_{Q_{\overline{\rho}}^{m-\ell}} \bigabs{f(x', \Tilde\zeta(x''+z)-z) } \,\mathrm{d} x'' \biggr) \,\mathrm{d} z \biggr] \,\mathrm{d} x'. \end{multline*} Given \(x' \in A\), we apply Lemma~\ref{lemmaOpeningEstimate} with \(U = Q_{\overline{\rho}}^{m-\ell}\), \(V = Q_{\Hat{\rho}}^{m-\ell}\), \(W = Q_{\overline{\rho}}^{m-\ell}\), and \( \Tilde\zeta. \) We deduce that \[ \int\limits_{Q_{\Hat{\rho}}^{m-\ell}} \biggl(\int\limits_{Q_{\overline{\rho}}^{m-\ell}} \bigabs{f(x', \Tilde\zeta(x''+z)-z) } \,\mathrm{d} x'' \biggr) \,\mathrm{d} z \le C_1 \int\limits_{Q_{\overline{\rho}}^{m-\ell}} |f(x', x'')| \,\mathrm{d} x''. \] Thus, \[ \int\limits_{Q_{\Hat{\rho}}^{m-\ell}} \norm{ f \circ \Psi_z}_{L^1(A \times Q_{\overline{\rho}}^{m-\ell})} \,\mathrm{d} z \le C_1 \norm{f}_{L^1( A \times Q_{\overline{\rho}}^{m-\ell})}. \] \medskip By Lemma~\ref{lemmaOpeningSobolev}, for almost every \(z \in Q_{\Hat{\rho}}^{m-\ell}\), \(u \circ \Psi_{z} \in W^{k, p}(A \times Q_{\overline{\rho}}^{m-\ell}; {\mathbb R}^\nu) \) and for every \(j \in \{1, \dots, k\}\), \[ \int\limits_{Q_{\Hat{\rho}}^{m-\ell}} \norm{D^j (u \circ \Psi_{z})}_{L^p({A \times Q_{\overline{\rho}}^{m-\ell}})} \,\mathrm{d} z \leq C_2 \sum_{i=1}^j \norm{D^i u}_{L^p({A \times Q_{\overline{\rho}}^{m-\ell}})}. 
\] We may thus find some \(z \in Q_{\Hat{\rho}}^{m-\ell}\) such that \(u \circ \Psi_{z} \in W^{k, p}(A \times Q_{\overline{\rho}}^{m-\ell}; {\mathbb R}^\nu)\) and for every \(j \in \{1, \dots, k\}\), \[ \norm{D^j (u \circ \Psi_{z})}_{L^p({A \times Q_{\overline{\rho}}^{m-\ell}})} \leq C_3 \sum_{i=1}^j \norm{D^i u}_{L^p({A \times Q_{\overline{\rho}}^{m-\ell}})}. \] The function \(\zeta\) defined in terms of this point \(z\) satisfies the required properties. \end{proof} \begin{addendum}[{Proposition~\ref{openingpropGeneral}}] \label{addendumW1kp} Let \(\mathcal{K}^m\) be a cubication containing \(\mathcal{U}^m\) and let \(q \ge 1\). If \(u \in W^{1, q}(K^m + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\), then the map \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) can be chosen with the additional property that \(u \circ \Phi \in W^{1, q}(K^m + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\) and for every \(\sigma^m \in \mathcal{K}^m\), \begin{equation*} \norm{D(u \circ \Phi)}_{L^{q}(\sigma^m + Q^m_{2\rho\eta})} \leq C'' \norm{Du}_{L^{q}(\sigma^m + Q^m_{2\rho\eta})}, \end{equation*} for some constant \(C'' > 0\) depending on \(m\), \(q\) and \(\rho\). \end{addendum} \begin{proof} Since \(u \in W^{1, q}(U^\ell + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\), we may apply Proposition~\ref{openingpropGeneral} with \(k = 1\) and \(p = q\) in order to obtain a map \(\Phi: {\mathbb R}^m \to {\mathbb R}^m\) such that \(u \circ \Phi \in W^{1, q}(U^\ell + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\) and for every \(\sigma^\ell \in \mathcal{U}^\ell\), \begin{equation*} \norm{D(u \circ \Phi)}_{L^{q}(\sigma^\ell + Q^m_{2\rho\eta})} \leq C \norm{Du}_{L^{q}(\sigma^\ell + Q^m_{2\rho\eta})}. \end{equation*} Since in the proof of Proposition~\ref{openinglemmaGeneral} the point \(z\) can be chosen in a set of positive measure, we may choose it so as to keep, in addition, the properties already established for \(W^{k, p}\).
For every \(\sigma^m \in \mathcal{K}^m\), if \(\sigma^{m, \ell}\) denotes the skeleton of dimension \(\ell\) of \(\sigma^m\), then by additivity of the integral, \begin{equation*} \norm{D(u \circ \Phi)}_{L^{q}((\sigma^{m, \ell} \cap U^\ell) + Q^m_{2\rho\eta})} \leq C \norm{Du}_{L^{q}((\sigma^{m, \ell} \cap U^\ell) + Q^m_{2\rho\eta})}. \end{equation*} Since \(\Phi\) coincides with the identity map in \((\sigma^m + Q^m_{2\rho\eta}) \setminus ((\sigma^{m, \ell} \cap U^\ell) + Q^m_{2\rho\eta})\), \[ \norm{D(u \circ \Phi)}_{L^{q}(\sigma^m + Q^m_{2\rho\eta})} \leq C \norm{Du}_{L^{q}(\sigma^m + Q^m_{2\rho\eta})}. \] This concludes the proof. \end{proof} \begin{addendum}[{Proposition~\ref{openingpropGeneral}}] \label{addendumVMO} Let \(\mathcal{K}^m\) be a cubication containing \(\mathcal{U}^m\). If \(u\in W^{1, kp}(K^m + Q^m_{2\rho\eta}; {\mathbb R}^\nu)\), then the map \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) given by Proposition~\ref{openingpropGeneral} and Addendum~\ref{addendumW1kp} above with \(q = kp\) satisfies \[ \lim_{r \to 0} \sup_{Q_r^m(a) \subset U^\ell + Q^m_{\rho\eta}} \frac{r^{\frac{\ell}{kp} - 1} }{\abs{Q_r^m}^2} \int\limits_{Q_r^m (a)}\int\limits_{Q_r^m (a)} \abs{u \circ \Phi(x) - u \circ \Phi (y)} \,\mathrm{d} x \,\mathrm{d} y = 0 \] and for every \(\sigma^m \in \mathcal{U}^m\), for every \(a \in \sigma^m\) and for every \(r > 0\) such that \(Q_r^m (a) \subset U^\ell + Q^m_{\rho\eta}\), \[ \frac{1}{\abs{Q_r^m}^2} \int\limits_{Q_r^m (a)}\int\limits_{Q_r^m (a)} \abs{u \circ \Phi(x) - u \circ \Phi (y)} \,\mathrm{d} x \,\mathrm{d} y \le \frac{C''' r^{1-\frac{\ell}{kp}}}{\eta^{\frac{m-\ell}{kp}}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}, \] for some constant \(C''' > 0\) depending on \(m\), \(kp\) and \(\rho\).
\end{addendum} If \(kp \ge \ell\), then the limit above implies that \(u \circ \Phi\) belongs to the space of functions of vanishing mean oscillation \(\textrm{VMO}(U^\ell + Q^m_{\rho\eta}; {\mathbb R}^\nu)\) and the second inequality yields a bound on the BMO seminorm on the domain \(U^\ell + Q^m_{\rho\eta}\) as defined by Jones \cite{Jones1980}. If \(kp > \ell > 0\), then the estimate implies that \(u \circ \Phi \in C^{0, 1 - \frac{\ell}{kp}}(U^\ell + Q^m_{\rho\eta}; {\mathbb R}^\nu)\) with an upper bound on the \(C^{0, 1 - \frac{\ell}{kp}}\) seminorm of \(u \circ \Phi\) (see \cite{Campanato1963}). The estimates of this addendum are not really useful when \(kp < \ell\) since in this case \(\lim\limits_{r \to 0} r^{\frac{\ell}{kp} - 1} = 0\). \begin{proof}[Proof of Addendum~\ref{addendumVMO}] Fix \(Q^m_r (a) \subset U^\ell + Q^{m}_{\rho \eta}\). Then \(a\in U^\ell+Q^{m}_{\rho\eta-r}\). Hence there exists an \(\ell\) dimensional face \(\tau^\ell \in \mathcal{U}^\ell\) such that \(Q^m_r (a) \subset \tau^\ell + Q^{m}_{\rho \eta}\). Without loss of generality, we may assume that \(\tau^\ell=Q^\ell_{\eta}\times\{0^{m-\ell}\}\subset {\mathbb R}^\ell\times {\mathbb R}^{m-\ell}\). By property~\((\ref{itemgenopeningprop1})\) of Proposition~\ref{openingpropGeneral}, the map \(\Phi\) is constant on the \(m-\ell\) dimensional cubes of radius \(\rho\eta\) which are orthogonal to \(Q^\ell_{(1+\rho)\eta}\times \{0^{m-\ell}\}\). Writing \(Q^m_r(a) = Q^\ell_r(a') \times Q^{m-\ell}_r(a'')\), we see that \(u\circ \Phi\) depends only on the first \(\ell\) variables in \(Q^m_r(a)\). Let \(v : Q_{(1 + \rho)\eta}^\ell \to {\mathbb R}^\nu\) be the function defined by \[ v(x') = (u \circ \Phi)(x', a''). \] By Addendum~\ref{addendumW1kp} above with \(q = kp\), \(u \circ \Phi \in W^{1, kp}(Q_{(1 + \rho)\eta}^\ell \times Q_{\rho\eta}^{m-\ell}; {\mathbb R}^\nu)\), whence \[ v \in W^{1, kp}(Q_{(1 + \rho)\eta}^\ell; {\mathbb R}^\nu).
\] Note that \[ \frac{1}{\abs{Q_r^m}^2}\int\limits_{Q^m_{r}(a)}\int\limits_{Q^m_{r}(a)} \abs{u \circ \Phi(x) - u \circ \Phi(y)} \,\mathrm{d} x \,\mathrm{d} y = \frac{1}{\abs{Q_r^\ell}^2}\int\limits_{Q^\ell_{r}(a')}\int\limits_{Q^\ell_{r}(a')} \abs{v(x') - v(y')} \,\mathrm{d} x' \,\mathrm{d} y'. \] By the Poincar\'e-Wirtinger inequality, \[ \frac{1}{\abs{Q_r^\ell}^2}\int\limits_{Q^\ell_{r}(a')}\int\limits_{Q^\ell_{r}(a')} \abs{v(x') - v(y')} \,\mathrm{d} x' \,\mathrm{d} y' \le C_1 r^{1 - \frac{\ell}{kp}} \norm{Dv}_{L^{kp}(Q^\ell_{r}(a'))}. \] Thus, \[ \frac{1}{\abs{Q_r^m}^2}\int\limits_{Q^m_{r}(a)}\int\limits_{Q^m_{r}(a)} \abs{u \circ \Phi(x) - u \circ \Phi(y)} \,\mathrm{d} x \,\mathrm{d} y \le C_1 r^{1 - \frac{\ell}{kp}} \norm{Dv}_{L^{kp}(Q^\ell_{r}(a'))} \] and this implies the first part of the conclusion. In order to get the estimate of the oscillation of \(u \circ \Phi\) in terms of \(\norm{D(u\circ \Phi)}_{L^{kp}}\), note that \[ \norm{D(u\circ \Phi)}_{L^{kp}(Q^\ell_r(a') \times Q^{m-\ell}_{\rho\eta}(a''))} = (2\rho\eta)^\frac{m-\ell}{kp} \norm{Dv}_{L^{kp}(Q^\ell_{r}(a'))}. \] This implies, for any \(\sigma^m\in\mathcal{U}^m\) such that \(\tau^\ell\subset \sigma^m\), \[ \begin{split} \norm{Dv}_{L^{kp}(Q^\ell_{r}(a'))} & = \frac{1}{(2\rho\eta)^\frac{m-\ell}{kp}} \norm{D( u\circ \Phi)}_{L^{kp}(Q^\ell_r(a') \times Q^{m-\ell}_{\rho\eta}(a''))}\\ & \leq \frac{1}{(2\rho\eta)^{\frac{m-\ell}{kp}}} \norm{D(u\circ \Phi)}_{L^{kp}(\tau^\ell+Q^m_{\rho\eta})}\\ & \leq \frac{1}{(2 \rho\eta)^{\frac{m - \ell}{kp}}} \norm{D(u\circ \Phi)}_{L^{kp}(\sigma^m+Q^m_{\rho\eta})}. \end{split} \] Thus, \[ \frac{1}{\abs{Q_r^m}^2}\int\limits_{Q^m_{r}(a)}\int\limits_{Q^m_{r}(a)} \abs{u \circ \Phi(x) - u \circ \Phi(y)} \,\mathrm{d} x \,\mathrm{d} y \le \frac{C_2 r^{1 - \frac{\ell}{k p}}} {(\rho\eta)^{\frac{m - \ell}{kp}}} \norm{D(u\circ \Phi)}_{L^{kp}(\sigma^m + Q_{\rho\eta}^m)}.
\] By Addendum~\ref{addendumW1kp} above, \[ \norm{D(u\circ \Phi)}_{L^{kp}(\sigma^m + Q_{\rho\eta}^m)} \le C_3 \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}. \] This proves the estimate that we claimed. \end{proof} \subsection{Adaptive smoothing} Given \(u \in W^{k, p}(\Omega; {\mathbb R}^\nu)\), we would like to consider a convolution of \(u\) whose parameter may depend on the point at which the convolution is computed. The main reason is that we want to choose the convolution parameter according to the mean oscillation of \(u\): we take a large parameter where \(u\) does not oscillate too much and a small parameter elsewhere. For this purpose, consider a function \(u \in L^1(\Omega; {\mathbb R}^\nu)\). Let \(\varphi\) be a \emph{mollifier}, in other words, \[ \varphi \in C_c^\infty(B_1^m), \quad \varphi \ge 0\ \text{in \(B_1^m\)} \quad \text{and} \quad \int\limits_{B_1^m} \varphi = 1. \] For every \(s \ge 0\) and for every \(x \in \Omega\) such that \(\dist{(x, \partial\Omega)} \ge s\), we may consider the convolution \[ (\varphi_s \ast u)(x) = \int\limits_{B_1^m} \varphi(z) u(x + s z) \,\mathrm{d} z. \] We may keep in mind that with this definition, \[ (\varphi_0 \ast u)(x) = \int\limits_{B_1^m} \varphi(z) \,\mathrm{d} z \, u(x) = u(x). \] This way of writing the convolution has the advantage that we may treat the cases \(s = 0\) and \(s > 0\) using the same formula. We now introduce a non-constant parameter in the convolution given by a nonnegative function \(\psi \in C^\infty(\Omega)\). The convolution \[ \varphi_\psi \ast u : \big\{x\in \Omega : \dist{(x, \partial\Omega)} \ge \psi(x) \big\} \to {\mathbb R}^\nu \] is well-defined, and if \(\psi(a) > 0\) and \(\abs{D\psi (a)} < 1\) at some point \(a \in \Omega\), then, by a change of variable in the integral, the map \(\varphi_\psi \ast u\) is smooth in a neighborhood of \(a\).
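For smooth \(u\), the \(W^{k, p}\) estimates below rest on differentiating under the integral sign; as a sketch of the first-order case (the higher-order version appears in the proof of Proposition~\ref{lemmaConvolutionEstimates}), \[ D(\varphi_\psi \ast u)(x) = \int\limits_{B_1^m} \varphi(z)\, Du(x + \psi(x) z) \circ (\mathrm{Id} + D\psi(x) \otimes z) \,\mathrm{d} z, \] whence, since \(\abs{z} \le 1\), \[ \abs{D(\varphi_\psi \ast u)(x)} \le (1 + \abs{D\psi(x)}) \int\limits_{B_1^m} \varphi(z) \abs{Du(x + \psi(x) z)} \,\mathrm{d} z. \]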
\begin{proposition} \label{lemmaConvolutionEstimatesLp} Let \(\varphi \in C_c^\infty(B_1^m)\) be a mollifier and let \(\psi \in C^\infty(\Omega)\) be a nonnegative function such that \(\norm{D\psi}_{L^\infty(\Omega)} < 1\). Then, for every \(u\in L^{p}(\Omega; {\mathbb R}^\nu)\) and for every open set \(\omega \subset \big\{x\in \Omega : \dist{(x, \partial\Omega)} \ge \psi(x) \big\}\), \(\varphi_{\psi} \ast u \in L^{p}(\omega; {\mathbb R}^\nu)\), \begin{equation*} \label{ineqConvolLp} \norm{\varphi_\psi \ast u}_{L^p(\omega)} \le \frac{1}{(1 - \norm{D\psi}_{L^\infty(\omega)})^\frac{1}{p}} \norm{u}_{L^p(\Omega)}, \end{equation*} and \[ \norm{\varphi_{\psi} \ast u - u}_{L^p(\omega)} \leq \sup_{v \in B_1^m}{\norm{\tau_{\psi v}u - u}_{L^p(\omega)}}, \] where \(\tau_{\psi v} u (x) = u(x + \psi(x)v)\). \end{proposition} For \(p > 1\), it is possible to obtain an estimate for \(\norm{\varphi_\psi \ast u}_{L^p(\omega)}\) without any dependence on \(\psi\) by the theory of the Hardy-Littlewood maximal function (see for instance \cite{Stein}); this approach fails for \(p=1\). In the context of the proposition above, one can prove in a standard way the following statement: given \(u\in L^{p}(\Omega;{\mathbb R}^\nu)\), \(0 \le \beta < 1\) and \(\varepsilon >0\), there exists \(\delta>0\) such that for any nonnegative function \(\psi \in C^\infty(\Omega)\) satisfying \(\norm{\psi}_{L^\infty(\Omega)} \le \delta\) and \(\norm{D\psi}_{L^\infty(\Omega)} \le \beta\), and for every open set \(\omega \subset \big\{x\in \Omega : \dist{(x, \partial\Omega)} \ge \psi(x) \big\}\), \[ \sup_{v \in B_1^m}{\norm{\tau_{\psi v}u - u}_{L^p(\omega)}} \le \varepsilon. \] These estimates extend to maps in \(W^{k, p}(\Omega; {\mathbb R}^\nu)\): \begin{proposition} \label{lemmaConvolutionEstimates} Let \(\varphi \in C_c^\infty(B_1^m)\) be a mollifier and let \(\psi \in C^\infty(\Omega)\) be a nonnegative function such that \(\norm{D\psi}_{L^\infty(\Omega)} < 1\).
For every \(k \in {\mathbb N}_*\), for every \(u\in W^{k, p}(\Omega; {\mathbb R}^\nu)\) and for every open set \(\omega \subset \big\{x\in \Omega : \dist{(x, \partial\Omega)} \ge \psi(x) \big\}\), \(\varphi_{\psi} \ast u \in W^{k, p}(\omega; {\mathbb R}^\nu)\) and for every \(j \in \{1, \dots, k\}\), \[ \eta^{j} \norm{D^j(\varphi_{\psi} \ast u)}_{L^p(\omega)} \leq \frac{C}{(1 - \norm{D\psi}_{L^\infty(\omega)})^\frac{1}{p}} \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(\Omega)}, \] and \begin{multline*} \eta^{j} \norm{D^j(\varphi_{\psi} \ast u) - D^j u}_{L^p(\omega)}\\ \leq \sup_{v \in B_1^m}{\eta^{j} \norm{\tau_{\psi v}(D^j u) - D^j u}_{L^p(\omega)}} + \frac{C'}{(1 - \norm{D\psi}_{L^\infty(\omega)})^\frac{1}{p}} \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(A)}, \end{multline*} for some constants \(C > 0\) and \(C' > 0\) depending on \(m\), \(k\) and \(p\), where \[ A = \bigcup_{x \in \omega \cap\supp{D\psi}} B_{\psi(x)}^m(x) \] and \(\eta > 0\) is such that for every \(j \in \{2, \dotsc, k\}\), \[ \eta^{j} \norm{D^j \psi}_{L^\infty(\omega)} \le \eta. \] \end{proposition} \begin{proof} We only prove the second estimate. We assume for simplicity that \(u \in C^\infty(\Omega; {\mathbb R}^\nu)\). For every \(x \in \omega\), \[ (\varphi_\psi \ast u)(x) - u(x) = \int\limits_{B_1^m} \varphi(z) \big[ u(x + \psi(x) z) - u(x)\big] \,\mathrm{d} z. \] For every \(j \in \{1, \dots, k\}\), we have by the chain rule for higher order derivatives, \begin{multline*} \abs{D^j (\varphi_\psi \ast u)(x) - D^j u(x)}\\ \le \int\limits_{B_1^m} \varphi(z) \bigabs{D^ju(x + \psi(x) z)\circ (\mathrm{Id} +D\psi(x)\otimes z)^j - D^ju(x)} \,\mathrm{d} z \\ + C_1\sum_{i=1}^{j-1} \sum_{ \substack{\alpha_1 + 2 \alpha_2 + \dotsb + j \alpha_j = j\\ \alpha_1 + \alpha_2 + \dotsb + \alpha_j = i } } (1 + \abs{D\psi(x)})^{\alpha_1} \abs{D^2\psi(x)}^{\alpha_2} \dotsm \abs{D^j\psi(x)}^{\alpha_j} \int\limits_{B_1^m} \varphi(z) \abs{D^i u(x + \psi(x)z)} \,\mathrm{d} z.
\end{multline*} Since \(\norm{D\psi}_{L^\infty(\Omega)} \le 1\), for every \(z \in B_1^m\), \[ \bigabs{(\mathrm{Id} +D\psi(x)\otimes z)^j - \mathrm{Id}} \le C_2 \abs{D\psi(x)}, \] and we have \begin{multline*} \abs{D^j (\varphi_\psi \ast u)(x) - D^j u(x)}\\ \le \int\limits_{B_1^m} \varphi(z) \bigabs{D^ju(x + \psi(x) z) - D^ju(x)} \,\mathrm{d} z + C_2 \abs{D\psi(x)}\int\limits_{B_1^m} \varphi(z) \bigabs{D^ju(x + \psi(x) z)} \,\mathrm{d} z\\ + C_1 \sum_{i=1}^{j-1} \sum_{ \substack{\alpha_1 + 2 \alpha_2 + \dotsb + j \alpha_j = j\\ \alpha_1 + \alpha_2 + \dotsb + \alpha_j = i } } (1 + \abs{D\psi(x)})^{\alpha_1} \abs{D^2\psi(x)}^{\alpha_2} \dotsm \abs{D^j\psi(x)}^{\alpha_j} \int\limits_{B_1^m} \varphi(z) \abs{D^i u(x + \psi(x)z)} \,\mathrm{d} z. \end{multline*} Note that the second and third terms on the right-hand side are supported in \(\supp{D\psi}\): the second contains the factor \(\abs{D\psi(x)}\), and in the third \(\alpha_s \ne 0\) for some \(s > 1\). Moreover, by the choice of \(\eta\), \begin{multline*} (1 + \abs{D\psi(x)})^{\alpha_1} \abs{D^2\psi(x)}^{\alpha_2} \dotsm \abs{D^j\psi(x)}^{\alpha_j}\\ \begin{aligned} & \le (1 + 1) ^{\alpha_1} \Big(\frac{\eta}{\eta^2}\Big)^{\alpha_2} \dotsm \Big(\frac{\eta}{\eta^j}\Big)^{\alpha_j}\\ & = 2^{\alpha_1} \frac{\eta^{\alpha_1 + \alpha_2 + \dotsb + \alpha_j}}{\eta^{\alpha_1 + 2 \alpha_2 + \dotsb + j \alpha_j}} = 2^{\alpha_1} \frac{\eta^i}{\eta^j} \le 2^{j} \frac{\eta^i}{\eta^j}. \end{aligned} \end{multline*} Therefore, \begin{multline*} \abs{D^j (\varphi_\psi \ast u)(x) - D^j u(x)}\\ \le \int\limits_{B_1^m} \varphi(z) \bigabs{D^ju(x + \psi(x) z) - D^ju(x)} \,\mathrm{d} z \\ + C_3 \sum_{i=1}^j \frac{\eta^i}{\eta^j} \chi_{\supp{D\psi}}(x) \int\limits_{B_1^m} \varphi(z) \abs{D^i u(x + \psi(x)z)} \,\mathrm{d} z.
\end{multline*} By the Minkowski inequality, \begin{multline*} \bigg( \int\limits_{\omega} \bigg( \int\limits_{B_1^m} \varphi(z) \abs{D^ju(x + \psi(x) z) - D^j u(x)} \,\mathrm{d} z \bigg)^p \,\mathrm{d} x \bigg)^{\frac{1}{p}} \\ \begin{aligned} & \le \int\limits_{B_1^m} \bigg( \int\limits_{\omega} \abs{D^ju(x + \psi(x) z) - D^ju(x)}^p \,\mathrm{d} x \bigg)^{\frac{1}{p}} \varphi(z) \,\mathrm{d} z\\ & \le \sup_{v \in B_1^m}{\norm{\tau_{\psi v}(D^j u) - D^j u}_{L^p(\omega)}} \int\limits_{B_1^m} \varphi(z) \,\mathrm{d} z\\ & = \sup_{v \in B_1^m}{\norm{\tau_{\psi v}(D^j u) - D^j u}_{L^p(\omega)}} , \end{aligned} \end{multline*} and for every \(i \in \{1, \dots, j\}\), we also have \begin{multline*} \bigg( \int\limits_{\omega \cap \supp{D\psi}} \bigg( \int\limits_{B_1^m} \varphi(z) \abs{D^i u(x + \psi(x) z)} \,\mathrm{d} z \bigg)^p \,\mathrm{d} x \bigg)^{\frac{1}{p}} \\ \le \int\limits_{B_1^m} \varphi(z) \bigg( \int\limits_{\omega \cap \supp{D\psi}} \abs{D^i u(x + \psi(x) z)}^p \,\mathrm{d} x \bigg)^{\frac{1}{p}} \,\mathrm{d} z. \end{multline*} Using the change of variable \(y = x + \psi(x) z\) with respect to the variable \(x\), we deduce by definition of \(A\) that \begin{multline*} \bigg( \int\limits_{\omega \cap \supp{D\psi}} \bigg( \int\limits_{B_1^m} \varphi(z) \abs{D^i u(x + \psi(x) z)} \,\mathrm{d} z \bigg)^p \,\mathrm{d} x \bigg)^{\frac{1}{p}} \\ \begin{aligned} & \le \int\limits_{B_1^m} \varphi(z) \bigg( \frac{1}{1 - \norm{D\psi}_{L^\infty(\omega)}} \int\limits_{A} \abs{D^i u(y)}^p \,\mathrm{d} y \bigg)^{\frac{1}{p}} \,\mathrm{d} z\\ & = \frac{1}{(1 - \norm{D\psi}_{L^\infty(\omega)})^\frac{1}{p}} \norm{D^i u}_{L^p(A)}. \end{aligned} \end{multline*} This gives the desired estimate for \(u \in C^\infty(\Omega; {\mathbb R}^\nu)\). The case of functions in \(W^{k, p}(\Omega; {\mathbb R}^\nu)\) follows by density. 
\end{proof} \subsection{Thickening} Given a map \(u \in W^{k, p}(U^m; {\mathbb R}^\nu)\) which behaves nicely near the skeleton \(U^\ell\), we would like to construct a map \(u \circ \Phi\) that does not depend on the values of \(u\) away from the skeleton \(U^\ell\). The price to pay is that the map \(u \circ \Phi\) will be singular on the dual skeleton \(T^{\ell^*}\); these singularities will however be mild enough for \(u \circ \Phi\) to belong to \(R_{\ell^*} (U^m; {\mathbb R}^\nu)\) and to satisfy \(W^{k, p}\) estimates when \(k p < \ell +1\). The thickening construction is related to the homogenization of functions on cubes used in the study of density problems for \(k = 1\); see \cites{Bethuel, BethuelZheng, Hang-Lin}. The precise meaning of dual skeleton we use is the following: \begin{definition} Given \(\ell \in \{0, \dotsc, m-1\}\) and the \(\ell\) dimensional skeleton \(\mathcal{S}^\ell\) of a cubication \(\mathcal{S}^m\), the dual skeleton \(\mathcal{T}^{\ell^*}\) of \(\mathcal{S}^\ell\) is the skeleton of dimension \(\ell^* = m - \ell - 1\) composed of all cubes of the form \(\sigma^{\ell^*} + x - a\), where \(\sigma^{\ell^*} \in \mathcal{S}^{\ell^*}\) and where \(a\) is the center and \(x\) a vertex of a cube of \(\mathcal{S}^m\). \end{definition} The integer \(\ell^*\) is the largest dimension for which \(S^\ell \cap T^{\ell^*} = \emptyset\). \medskip The proposition below provides the main properties of the map \(\Phi\): \begin{proposition}\label{propositionthickeningfromaskeleton} Let \(\ell \in \{0, \dotsc, m-1\}\), \(\eta > 0\), \(0 < \rho < 1\), \(\mathcal{S}^m\) be a cubication of \({\mathbb R}^m\) of radius \(\eta\), \(\mathcal{U}^m\) be a subskeleton of \(\mathcal{S}^m\) and \(\mathcal{T}^{\ell^*}\) be the dual skeleton of \(\mathcal{U}^\ell\).
There exists a smooth map \(\Phi : {\mathbb R}^m \setminus T^{\ell^*} \to {\mathbb R}^m\) such that \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \item for every \(\sigma^m \in \mathcal{S}^m\), \(\Phi(\sigma^m \setminus T^{\ell^*}) \subset \sigma^m \setminus T^{\ell^*}\), \label{itempropositionthickeningfromaskeleton2} \item \(\Supp{\Phi} \subset U^m + Q^{m}_{\rho\eta}\) and \(\Phi(U^m\setminus T^{\ell^*}) \subset U^\ell + Q^{m}_{\rho\eta}\), \label{itempropositionthickeningfromaskeleton3} \item for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m \setminus T^{\ell^*}\), \[ \abs{D^j \Phi(x)} \le \frac{C\eta}{\bigl(\dist(x, T^{\ell^*})\bigr)^j}, \] for some constant \(C > 0\) depending on \(j\), \(m\) and \(\rho\), \label{itempropositionthickeningfromaskeleton5} \item for every \(0 < \beta < \ell + 1\), for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m \setminus T^{\ell^*}\), \[ \eta^{j-1} \abs{D^j \Phi(x)} \le C' \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\) and \(\rho\). \label{itempropositionthickeningfromaskeleton4} \end{enumerate} \end{proposition} This proposition gives \(W^{k, p}\) bounds on \(u \circ \Phi\) for every \(W^{k, p}\) function \(u\). The proposition and the corollary below will be applied in the proof of Theorem~\ref{theoremDensityManifoldNontrivial} with \( \ell = \lfloor kp \rfloor \). \begin{corollary} \label{corollaryEstimateThickening} Let \(\Phi : {\mathbb R}^m \setminus T^{\ell^*} \to {\mathbb R}^m\) be the map given by Proposition~\ref{propositionthickeningfromaskeleton}. 
If \(\ell+1 > kp\), then for every \(u \in W^{k, p}(U^m + Q^{m}_{\rho\eta}; {\mathbb R}^\nu)\), \(u \circ \Phi \in W^{k, p}(U^m + Q^{m}_{\rho\eta}; {\mathbb R}^\nu)\) and for every \(j \in \{1, \dotsc, k\}\), \[ \eta^{j} \norm{D^j(u \circ \Phi)}_{L^p(U^m + Q^{m}_{\rho\eta})} \leq C'' \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^m + Q^{m}_{\rho\eta})}, \] for some constant \(C'' > 0\) depending on \(m\), \(k\), \(p\) and \(\rho\). \end{corollary} \begin{proof} We first establish the estimate for a map \(u\) in \(C^{\infty}(U^m + Q^{m}_{\rho\eta} ; {\mathbb R}^{\nu})\). By the chain rule for higher-order derivatives, for every \(j \in \{1, \dots, k\}\) and for every \(x \in (U^m + Q^{m}_{\rho\eta}) \setminus T^{\ell^*}\), \[ \abs{D^j (u \circ \Phi) (x)}^p \le C_1 \sum_{i=1}^j \sum_{\substack{1 \le t_1 \le \dotsc \le t_i\\ t_1 + \dotsb + t_i = j}} \abs{D^i u(\Phi(x))}^p \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i} \Phi(x)}^p. \] Let \(0 < \beta < \ell + 1\). If \(1 \le t_1 \le \dotsc \le t_i\) and \(t_1 + \dotsb + t_i = j \), then by property~\((\ref{itempropositionthickeningfromaskeleton4})\) of Proposition~\ref{propositionthickeningfromaskeleton}, \[ \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i}\Phi(x)}^p \le C_2 \frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{t_1 p}{\beta}}{\eta^{(t_1 - 1)p}} \dotsm \frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{t_i p}{\beta}}{{\eta^{(t_i - 1)p}}} = C_2 \frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{jp}{\beta}}{\eta^{(j-i)p}}. \] Since \(jp \le kp < \ell + 1\), we may take \(\beta = jp\). Thus, \[ \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i}\Phi(x)}^p \le C_2 \frac{\jac{\Phi}(x)}{\eta^{(j-i)p}} \] and this implies \[ \eta^{jp}\abs{D^j (u \circ \Phi) (x)}^p \le C_3 \sum_{i=1}^j \eta^{ip}\abs{D^i u(\Phi(x))}^p \jac{\Phi}(x). \] Since \(\Phi\) is injective and \(\Supp{\Phi} \subset U^m + Q^m_{\rho\eta}\), we have \(\Phi\big( (U^m + Q^{m}_{\rho\eta}) \setminus T^{\ell^*} \big) \subset U^m + Q^m_{\rho\eta}\).
Thus, by the change of variable formula, \[ \begin{aligned} \int\limits_{(U^m + Q^{m}_{\rho\eta}) \setminus T^{\ell^*}} \eta^{jp}\abs{D^j (u \circ \Phi)}^p &\le C_3 \sum_{i=1}^j \int\limits_{(U^m + Q^{m}_{\rho\eta}) \setminus T^{\ell^*}} \eta^{ip}\abs{(D^i u) \circ \Phi}^p \jac{\Phi}\\ &\le C_3 \sum_{i=1}^j \int\limits_{U^m + Q^{m}_{\rho\eta}} \eta^{ip}\abs{D^i u}^p \end{aligned} \] and \(u \circ \Phi \in W^{k, p}((U^m + Q^{m}_{\rho\eta}) \setminus T^{\ell^*}; {\mathbb R}^\nu)\). Since \(\ell > 0\), the dimension \(\ell^* = m - \ell - 1\) of the skeleton \(T^{\ell^*}\) is strictly less than \(m - 1\); in other words, \(T^{\ell^*}\) has codimension at least \(2\). Thus, \(u \circ \Phi \in W^{k, p}(U^m + Q^{m}_{\rho\eta}; {\mathbb R}^\nu)\). By density of smooth maps in \(W^{k, p}(U^m + Q^{m}_{\rho\eta}; {\mathbb R}^\nu)\), we deduce that for every \(u \in W^{k, p}(U^m + Q^{m}_{\rho\eta}; {\mathbb R}^\nu)\), the function \(u \circ \Phi\) also belongs to this space and satisfies the estimate above. \end{proof} We describe the construction of the map \(\Phi\) given by Proposition~\ref{propositionthickeningfromaskeleton} in the case of a single \(\ell\)-dimensional cube: \begin{proposition} \label{lemmaThickeningFaceFromPrimalSkeleton} Let \(\ell \in \{1, \dotsc, m\}\), \(\eta > 0\), \(0 < \underline{\rho}< \rho <\overline{\rho} < 1\) and \(T = \{0^\ell\} \times Q^{m-\ell}_{\rho\eta}\).
There exists a smooth function \(\lambda : {\mathbb R}^m \setminus T \to [1,\infty)\) such that if \(\Phi : {\mathbb R}^m \setminus T \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in ({\mathbb R}^\ell \times {\mathbb R}^{m - \ell}) \setminus T \) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \label{itemthickeninglemma0.5} \item \label{itemthickeninglemma2} \(\Supp{\Phi} \subset Q^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho\eta}\), \item \( \Phi\big((Q^{\ell}_{(1-\rho)\eta }\times Q^{m-\ell}_{\underline{\rho}\eta}) \setminus T\big) \subset (Q^{\ell}_{(1-\rho)\eta } \setminus Q^{\ell}_{(1-\overline{\rho})\eta})\times Q^{m-\ell}_{\underline{\rho}\eta}, \) \label{itemthickeninglemma1} \item for every \(j \in {\mathbb N}_*\) and for every \( x = (x', x'') \in (Q^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho\eta}) \setminus T\), \[ \abs{D^j \Phi(x)} \le \frac{C\eta}{\abs{x'}^j}, \] for some constant \(C > 0\) depending on \(j\), \(m\), \(\underline{\rho}\), \(\rho\) and \(\overline{\rho}\), \label{itemthickeninglemma3} \item \label{itemthickeninglemma4} for every \(0 < \beta < \ell\), for every \(j \in {\mathbb N}_*\) and for every \( x \in (Q^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho\eta}) \setminus T\), \[ \eta^{j-1} \abs{D^j \Phi(x)} \le C' \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\), \(\underline{\rho}\), \(\rho\) and \(\overline{\rho}\). \end{enumerate} \end{proposition} We temporarily admit Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton} and we prove Proposition~\ref{propositionthickeningfromaskeleton}. \begin{proof}[Proof of Proposition~\ref{propositionthickeningfromaskeleton}] We first introduce finite sequences \((\rho_i)_{\ell\leq i\leq m}\) and \((\tau_i)_{\ell\leq i\leq m}\) such that \[ 0 < \rho_m < \tau_{m-1} < \rho_{m-1} < \dotsc < \rho_{\ell+1} < \tau_{\ell} < \rho_\ell = \rho. 
\] For \(i = m\), we take \(\Phi_m=\mathrm{Id}\). Using downward induction, we shall define for every \(i \in \{\ell, \dotsc, m-1\}\) smooth maps \(\Phi_{i} : {\mathbb R}^m \setminus T^{i^*} \to {\mathbb R}^m\) such that \begin{enumerate}[(a)] \item \label{item1240} \(\Phi_i\) is injective, \item \label{item1241} for every \(\sigma^m \in \mathcal{S}^m\) and for every \(r \in \{i^*, \dotsc, m-1\}\), \(\Phi_i(\sigma^m \setminus T^r) \subset \sigma^m \setminus T^r\), \item \label{item1242} \(\Supp{\Phi_i} \subset U^m + Q^{m}_{\rho_i\eta}\), \item \label{item1243} \(\Phi_i(U^m \setminus T^{i^*}) \subset U^i + Q^{m}_{\rho_i\eta}\), \item \label{item1244} for every \(x\in {\mathbb R}^m \setminus T^{i^*}\) and for every \(r \in \{i^*, \dots, m-2\}\), \[ \dist(\Phi_i(x), T^{r}) \dist(x, T^{r+1}) = \dist(\Phi_i(x), T^{r+1})\dist(x,T^{r}), \] \item \label{item1245} for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m \setminus T^{i^*}\), \[ \abs{D^j \Phi_i(x)} \le \frac{C\eta}{\bigl(\dist(x, T^{i^*})\bigr)^j}, \] for some constant \(C > 0\) depending on \(j\), \(m\) and \(\rho\), \item \label{item1246} for every \(0 < \beta < i+1\), for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m \setminus T^{i^*}\), \[ \eta^{j-1} \abs{D^j \Phi_i(x)} \le C' \bigl(\jac{\Phi_i}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\) and \(\rho\). \end{enumerate} The map \(\Phi_\ell\) will satisfy the conclusion of the proposition. \medskip Let \( i \in \{\ell+1, \dotsc, m\}\) and let \(\Theta_{i}\) be the map obtained from Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton} with parameters \(\underline\rho = \rho_{i}\), \(\rho = \tau_{i-1}\), \(\overline\rho = \rho_{i-1}\) and \(\ell = i\). Given \(\sigma^i \in \mathcal{U}^{i}\), we may identify \(\sigma^i\) with \(Q^{i}_{\eta} \times \{0^{m-i}\}\) and \(T^{(i-1)^*} \cap (\sigma^i + Q_{\tau_{i-1}\eta}^m)\) with \(\{0^{i}\} \times Q_{\tau_{i-1}\eta}^{m-i}\). 
The map \(\Theta_i\) induces by isometry a map which we shall denote by \(\Theta_{\sigma^i}\). Let \(\Psi_i : {\mathbb R}^m \setminus T^{(i-1)^*} \to {\mathbb R}^m\) be defined for every \(x \in {\mathbb R}^m \setminus T^{(i-1)^*}\) by \[ \Psi_i(x):=\begin{cases} \Theta_{\sigma^i}(x) & \text{if } x \in \sigma^i + Q^{m}_{\tau_{i-1}\eta} \text{ for some } \sigma^i \in \mathcal{U}^i,\\ x & \text{otherwise}. \end{cases} \] We first explain why \(\Psi_i\) is well-defined. Since \(\Theta_{\sigma^i}\) coincides with the identity map on \(\partial\sigma^i + Q^m_{\tau_{i-1}\eta}\), for every \(\sigma^i_1, \sigma^i_2 \in \mathcal{U}^{i}\), if \(x \in (\sigma_1^i + Q^{m}_{\tau_{i-1}\eta}) \cap (\sigma_2^i + Q^{m}_{\tau_{i-1}\eta})\) and \(\sigma_1^i \ne \sigma_2^i\), then \[ \Theta_{\sigma_1^i}(x) = x = \Theta_{\sigma_2^i}(x). \] One also verifies directly that \(\Psi_i\) is smooth on \({\mathbb R}^m \setminus T^{(i-1)^*}\). Assuming that \(\Phi_i\) has been defined satisfying properties \eqref{item1240}--\eqref{item1246}, we let \[ \Phi_{i-1}=\Psi_i \circ \Phi_i. \] The map \(\Phi_{i-1}\) is well-defined on \({\mathbb R}^m\setminus T^{(i-1)^*}\) since \(\Phi_i({\mathbb R}^m \setminus T^{(i-1)^*}) \subset {\mathbb R}^m \setminus T^{(i-1)^*}\). We now check that \(\Phi_{i-1}\) satisfies all required properties. \begin{proof}[Proof of Property \eqref{item1240}] The map \(\Phi_{i-1}\) is injective since \(\Psi_i\) and \(\Phi_i\) are injective. \end{proof} \begin{proof}[Proof of Property \eqref{item1241}] For every \(r\in \{(i - 1)^*, \dotsc, m - 1\}\) and for every \(\sigma^m \in \mathcal{S}^m\), we have by induction hypothesis \(\Phi_{i}(\sigma^m\setminus T^r)\subset \sigma^m \setminus T^r\).
Moreover, for any \(\sigma^m \in \mathcal{S}^m\) and any \(\tilde\sigma^i\in \mathcal{U}^{i}\), the formula of \(\Theta_i\) implies that \( \Theta_{\tilde\sigma^i}(\sigma^m \setminus T^r) \subset \sigma^m\setminus T^r.\) \end{proof} \begin{proof}[Proof of Property \eqref{item1242}] By induction hypothesis \(\Phi_i\) coincides with the identity map outside \(U^{m}+Q^{m}_{\rho_{i}\eta}\). By construction, \(\Psi_i\) coincides with the identity map outside \(U^{m}+Q^{m}_{\tau_{i-1}\eta}\) (see Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}, property \((\ref{itemthickeninglemma2})\)). Since \(\rho_i < \tau_{i-1} < \rho_{i-1}\), we deduce that \(\Supp{\Phi_{i-1}} \subset U^m + Q^{m}_{\rho_{i-1}\eta}\). \end{proof} \begin{proof}[Proof of Property \eqref{item1243}] By induction hypothesis (property \eqref{item1243}) \[ \Phi_{i}(U^m \setminus T^{i^*}) \subset U^i + Q^m_{\rho_i\eta} \] and (property \eqref{item1241}) \[ \Phi_i({\mathbb R}^m \setminus T^{(i-1)^*}) \subset {\mathbb R}^m \setminus T^{(i-1)^*}. \] Since \(T^{(i-1)^\ast}\supset T^{i^\ast}\), we have \[ \Phi_{i}(U^m \setminus T^{(i-1)^*}) \subset (U^i + Q^m_{\rho_i\eta})\setminus T^{(i-1)^\ast}. \] By construction of \(\Theta_i\) (see Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}, property \((\ref{itemthickeninglemma1})\)), for every \(\sigma^i \in \mathcal{U}^i\), \[ \Theta_{\sigma^i}\big((\sigma^i + Q^m_{\rho_i\eta}) \setminus T^{(i-1)^*}\big) \subset \partial \sigma^i + Q^m_{\rho_{i-1}\eta}. \] Taking the union over all faces \(\sigma^i \in \mathcal{U}^i\), we get \[ \Psi_i\big((U^i + Q^m_{\rho_i\eta})\setminus T^{(i-1)^\ast}\big)\subset U^{i-1} + Q^m_{\rho_{i-1}\eta}. \] Combining the information for \(\Phi_i\) and \(\Psi_i\), we obtain \[ \Phi_{i-1}(U^m \setminus T^{(i-1)^\ast}) \subset U^{i-1} + Q^m_{\rho_{i-1}\eta}. \qedhere \] \end{proof} \begin{proof}[Proof of Property \eqref{item1244}] Let \(r \in \{(i-1)^*, \dots, m-2\}\) and \(x\in {\mathbb R}^m \setminus T^{(i-1)^*}\). 
If \(\Phi_{i-1}(x) = \Phi_{i}(x)\), then the conclusion follows by induction. If \(\Phi_{i-1}(x) \ne \Phi_{i}(x)\), then there exists \(\sigma^i \in \mathcal{U}^i\) such that \(\Phi_{i}(x) \in \sigma^i + Q^{m}_{\tau_{i-1}\eta}\) and \(\Phi_{i-1}(x) =\Theta_{\sigma^i} (\Phi_{i}(x))\). Since \(\Phi_i(x) \in \Supp{\Psi_i}\), \[ \Phi_i(x) \in (\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \setminus (\partial\sigma^i + Q^{m}_{\tau_{i-1}\eta}). \] Up to an isometry, we may assume that \(\sigma^i = Q_\eta^{i} \times \{0^{m - i}\}\). For every \(0 < \lambda < 1\) and for every \(y=(y',y'')\in Q^{i}_{(1-\lambda)\eta}\times Q^{m-i}_{\lambda\eta}\), \[ \dist(y, T^r)=\dist\big((y',0), T^r\cap (Q^{i}_{(1-\lambda)\eta}\times \{0^{m - i}\})\big). \] In view of the formula of \(\Theta_{i}\), we deduce that for every \(y\in (\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \setminus (\partial\sigma^i + Q^{m}_{\tau_{i-1}\eta})\), \[ \dist\big(\Theta_{\sigma^i}(y), T^r\big)\dist\big(y, T^{r+1}\big)=\dist\big(\Theta_{\sigma^i}(y), T^{r+1}\big)\dist\big(y, T^{r}\big); \] this identity is reminiscent of Thales' intercept theorem from Euclidean geometry. By induction hypothesis, we then get \[ \begin{split} \dist(\Phi_{i-1}(x), T^{r})\dist(x,T^{r+1}) & = \dist(\Theta_{\sigma^i}(\Phi_{i}(x)), T^{r})\dist(x,T^{r+1})\\ & = \dist(\Theta_{\sigma^i}(\Phi_{i}(x)), T^{r+1})\dist(x,T^{r})\\ & = \dist(\Phi_{i-1}(x), T^{r+1})\dist(x,T^{r}). \end{split} \] This gives the conclusion. \end{proof} \begin{proof}[Proof of Property \eqref{item1245}] Let \(x \in {\mathbb R}^m \setminus T^{(i-1)^*}\). If \(\Psi_i\) coincides with the identity map in a neighborhood of \(\Phi_i(x)\), then \(D^j \Phi_{i-1}(x)=D^j \Phi_{i}(x)\) and the conclusion follows from the induction hypothesis and the fact that \(T^{(i-1)^*}\supset T^{i^*}\).
If \(\Psi_i\) does not coincide with the identity map in a neighborhood of \(\Phi_i(x)\), then there exists \(\sigma^i \in \mathcal{U}^{i}\) such that \[ \Phi_{i}(x)\in (\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \setminus (\partial\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \] and \(\Phi_{i-1}(x)=\Theta_{\sigma^i}(\Phi_{i}(x))\). By the chain rule for higher order derivatives, \[ \abs{D^j\Phi_{i-1}(x)} \le C_1 \sum_{r=1}^j \sum_{\substack{1 \le t_1 \le \dotsc \le t_r\\ t_1 + \dotsb + t_r = j}}\abs{D^r \Theta_{\sigma^i}(\Phi_i(x))}\, \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r} \Phi_i(x)}. \] By construction of \(\Theta_i\) (see Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}, property \((\ref{itemthickeninglemma3})\)), we have for any \(y = (y',y'')\in (Q_{(1 - \tau_{i-1})\eta}^i \times Q^{m-i}_{\tau_{i-1}\eta}) \setminus (\{0^{i}\}\times Q^{m-i}_{\tau_{i-1}\eta})\), \[ \abs{D^{r}\Theta_{i}(y)}\le \frac{C_2 \eta}{\abs{y'}^r}. \] This implies \[ \abs{D^r\Theta_{\sigma^i}(\Phi_{i}(x))} \le \frac{C_2 \eta}{\big(\dist (\Phi_{i}(x), T^{(i-1)^{\ast}} )\big)^r}. \] By the induction hypothesis, for every \(1 \le t_1 \le \ldots \le t_r\) such that \(t_1 + \dots + t_r = j\), \begin{multline*} \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r}\Phi_i(x)}\\ \le C_3 \frac{\eta}{\big(\dist (x, T^{i^\ast})\big)^{t_1}} \dotsm \frac{\eta}{\big(\dist (x, T^{i^\ast})\big)^{t_r}} = C_3 \frac{\eta^r}{\big(\dist (x, T^{i^\ast})\big)^{j}}. \end{multline*} Thus, \[ \abs{D^j \Phi_{i-1}(x)}\le C_4 \sum_{r=1}^{j} \frac{\eta^{r + 1}}{\big(\dist (\Phi_{i}(x), T^{(i-1)^{\ast}} )\big)^r \big(\dist (x, T^{i^\ast})\big)^j}. \] We recall that by property \eqref{item1244} applied with \(r = i^*\), observing that \((i-1)^* = i^* + 1\), \[ \dist(\Phi_{i}(x), T^{(i-1)^{*}}) \dist(x, T^{i^*}) = \dist (x, T^{(i-1)^*}) \dist (\Phi_{i}(x), T^{i^*}). \] Since \(\Phi_{i}(x)\in (\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \setminus (\partial\sigma^i + Q^{m}_{\tau_{i-1}\eta})\), \[ \dist (\Phi_{i}(x), T^{i^*}) \ge (1-\tau_{i-1})\eta \ge (1-\rho)\eta.
\] Thus, \begin{multline*} \big(\dist (\Phi_{i}(x), T^{(i-1)^{\ast}} )\big)^r \big(\dist (x, T^{i^\ast})\big)^j \\ \begin{aligned} & = \big(\dist (x, T^{(i-1)^*}) \dist (\Phi_{i}(x), T^{i^*})\big)^r \big(\dist (x, T^{i^\ast})\big)^{j - r}\\ & \ge \big(\dist{(x, T^{(i-1)^*})}\big)^r \big((1-\rho)\eta\big)^r \big(\dist (x, T^{i^\ast})\big)^{j - r}. \end{aligned} \end{multline*} Since \(T^{i^*} \subset T^{(i-1)^*}\), we conclude that \[ \abs{D^j \Phi_{i-1}(x)}\le C_5 \frac{\eta}{\bigl(\dist (x,T^{(i-1)^{\ast}})\bigr)^j}. \qedhere \] \end{proof} \begin{proof}[Proof of Property \eqref{item1246}] Let \(j \in {\mathbb N}_*\) and let \(x \in {\mathbb R}^m \setminus T^{(i-1)^*}\). If \(\Psi_i\) coincides with the identity map in a neighborhood of \(\Phi_i(x)\), then \(D^j \Phi_{i-1}(x)=D^j \Phi_{i}(x)\) and \(\jac{\Phi_{i-1}}(x) = \jac{\Phi_{i}}(x)\). The conclusion then follows from the induction hypothesis. Assume now that \(\Psi_i\) does not coincide with the identity map in a neighborhood of \(\Phi_i(x)\). Let \(0 < \beta < i\) and \(r \in \{1, \dots, j\}\). By induction hypothesis, if \(1 \le t_1 \le \ldots \le t_r\) and \(t_1 + \dotsb + t_r = j \), then \[ \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r}\Phi_i(x)} \le C_1 \frac{(\jac{\Phi_i}(x))^\frac{t_1}{\beta}}{\eta^{t_1 - 1}} \dotsm \frac{(\jac{\Phi_i}(x))^\frac{t_r}{\beta}}{{\eta^{t_r - 1}}} = C_1 \frac{(\jac{\Phi_i}(x))^\frac{j}{\beta}}{\eta^{j-r}}. \] Let \(\sigma^i \in \mathcal{U}^{i}\) be such that \[ \Phi_{i}(x)\in (\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \setminus (\partial\sigma^i + Q^{m}_{\tau_{i-1}\eta}) \] and \(\Phi_{i-1}(x)=\Theta_{\sigma^i}(\Phi_{i}(x))\).
By construction of \(\Theta_i\) (see Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}, property \((\ref{itemthickeninglemma4})\) applied with the parameter \(\beta r / j\) in place of \(\beta\), which is admissible since \(\beta r / j \le \beta < i\)), we have for any \(y\in (Q_{(1 - \tau_{i-1})\eta}^i \times Q^{m-i}_{\tau_{i-1}\eta}) \setminus (\{0^{i}\}\times Q^{m-i}_{\tau_{i-1}\eta})\), \[ \eta^{r-1}\abs{D^{r}\Theta_{i}(y)}\le C_2\bigl(\jac{\Theta_i}(y)\bigr)^{\frac{r}{\beta r / j}} = C_2 \bigl(\jac{\Theta_i}(y)\bigr)^\frac{j}{\beta}. \] Thus, \begin{multline*} \abs{D^{r}\Theta_{\sigma^i}(\Phi_i(x))}\, \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r}\Phi_i(x)}\\ \le C_3 \frac{(\jac{\Theta_{\sigma^i}}(\Phi_i(x)))^{\frac{j}{\beta}}}{\eta^{r-1}} \frac{(\jac{\Phi_i}(x))^{\frac{j}{\beta}}}{\eta^{j-r}} = \frac{C_3}{\eta^{j-1}}\big(\jac{\Phi_{i-1}}(x) \big)^{\frac{j}{\beta}}. \end{multline*} Therefore, by the chain rule for higher order derivatives, \[ \abs{D^j\Phi_{i-1}(x)} \le \frac{C_4}{\eta^{j-1}} \big(\jac{\Phi_{i-1}}(x) \big)^{\frac{j}{\beta}}. \] This gives the conclusion. \end{proof} By downward induction, we conclude that properties \eqref{item1240}--\eqref{item1246} hold for every \(i \in \{\ell, \dots, m\}\). In particular, \(\Phi_\ell\) satisfies properties \((i)\)--\((v)\) of Proposition~\ref{propositionthickeningfromaskeleton}. \end{proof} We establish a couple of lemmas in order to prove Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}: \begin{lemma} \label{lemmaThickeningAroundPrimalSqueleton} Let \(\ell\in \{1,\ldots, m\}\), let \(\eta > 0\), let \(0 < \underline{\rho} < \rho < \overline{\rho} < 1\) and \(0 < \kappa < 1 - \overline{\rho}\).
There exists a smooth function \(\lambda : {\mathbb R}^m \to [1,\infty)\) such that if \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell} \) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is a diffeomorphism, \item \(\Supp{\Phi} \subset Q^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho \eta}\), \item \( \Phi\bigl( (Q^\ell_\eta \setminus Q^\ell_{\kappa \eta}) \times Q^{m-\ell}_{\underline{\rho} \eta }\bigr) \subset (Q^\ell_\eta \setminus Q^\ell_{(1-\overline{\rho})\eta}) \times Q^{m-\ell}_{\underline{\rho} \eta }, \) \item \label{item1454} for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m\), \[ \eta^{j-1}\abs{D^j \Phi(x)} \le C, \] for some constant \(C > 0\) depending on \(j\), \(m\), \(\underline{\rho}\), \(\rho\), \(\overline{\rho}\) and \(\kappa\), \item \label{item1455} for every \(x \in {\mathbb R}^m\), \[ C' \le \jac{\Phi}(x) \le C'', \] for some constants \(C', C'' > 0\) depending on \(m\), \(\underline{\rho}\), \(\rho\), \(\overline{\rho}\) and \(\kappa\). \end{enumerate} \end{lemma} \begin{proof} By scaling, we may assume that \(\eta=1\). Let \(\psi : {\mathbb R} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item \(\psi\) is nonincreasing on \({\mathbb R}_+\) and nondecreasing on \({\mathbb R}_-\), \item for \(\abs{t} \le 1 - \overline{\rho}\), \(\psi(t)=1\), \item for \(\abs{t} \ge 1-\rho\), \(\psi(t)=0\). \end{enumerate} Let \(\theta : {\mathbb R} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(\abs{t}\le \underline{\rho}\), \(\theta(t)=1\), \item for \(\abs{t} \ge \rho\), \(\theta(t)=0\). \end{enumerate} Let \(\varphi : {\mathbb R}^m \to {\mathbb R}\) be the function defined for \(x=(x_1, \dotsc, x_m) \in {\mathbb R}^m\) by \[ \textstyle \varphi(x) = \prod\limits_{i=1}^\ell \psi(x_i) \prod\limits_{i=\ell+1}^m \theta (x_i).
\] Thus, \begin{enumerate}[\(-\)] \item for every \(x \in {\mathbb R}^m \setminus (Q^\ell_{1-\rho} \times Q^{m - \ell}_{\rho})\), \(\varphi(x) = 0\), \item for every \(x \in Q^\ell_{1-\overline\rho} \times Q^{m - \ell}_{\underline\rho}\), \(\varphi(x) = 1\). \end{enumerate} We shall define the map \(\Phi\) in terms of its inverse \(\Psi\): let \(\Psi : {\mathbb R}^m \to {\mathbb R}^m\) be the function defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) by \[ \Psi(x)= \big((1 - \alpha \varphi(x))x', x''\big), \] where \(\alpha \in (0, 1)\) is a parameter to be chosen below. In particular, \begin{enumerate}[\(-\)] \item for every \(x \in {\mathbb R}^m \setminus (Q^\ell_{1-\rho} \times Q^{m - \ell}_{\rho})\), \(\Psi(x) = x\), \item for every \(x = (x', x'') \in Q^\ell_{1-\overline\rho} \times Q^{m - \ell}_{\underline\rho}\), \(\Psi(x) = ((1-\alpha) x', x'')\). \end{enumerate} In view of this second property, choosing \(\alpha=1-\frac{\kappa}{1-\overline{\rho}} \in (0, 1)\), we deduce that \(\Psi\) is a bijection between \(Q^\ell_{1-\overline\rho} \times Q^{m-\ell}_{\underline\rho}\) and \(Q^\ell_{\kappa} \times Q^{m-\ell}_{\underline\rho}\). We now prove that \(\Psi\) is injective. If \(x, y \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) satisfy \(\Psi(x) = \Psi(y)\), then \(y'' = x''\) and \(y' = t x'\) for some \(t > 0\). Since \(\alpha \in (0, 1)\), the function \[ g : s \in [0, \infty) \longmapsto s(1-\alpha \varphi(sx', x'')) \] is the product of an increasing function with a nondecreasing positive function. Thus, \(g\) is increasing, whence \(\Psi\) is injective. Since \(g(0)= 0\) and \(\lim\limits_{s \to +\infty}{g(s)} = +\infty\), by the intermediate value theorem, \(g([0, \infty)) = [0, \infty)\). Thus, \(\Psi\) is surjective. Therefore, the map \(\Psi\) is a bijection. We claim that for every \(x \in {\mathbb R}^m\), \(D\Psi(x)\) is invertible.
Indeed, for every \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) and for every \(v = (v', v'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\), \[ D\Psi(x)[v] = \bigl((1 - \alpha \varphi(x))v' - \alpha D\varphi(x)[v] x', v'' \bigr). \] The Jacobian of \(\Psi\) can be computed as the determinant of a rank-one perturbation of a diagonal linear map to be \[ \jac{\Psi}(x) = (1 - \alpha \varphi(x))^{\ell-1} \bigl(1 - \alpha \varphi(x) - \alpha D\varphi(x)[(x', 0)] \bigr). \] Since \(\psi\) is nonincreasing on \({\mathbb R}_+\) and nondecreasing on \({\mathbb R}_-\), \(D\varphi(x)[(x', 0)] \le 0\). Thus, \[ \jac{\Psi}(x) \ge (1 - \alpha \varphi(x))^{\ell} \ge (1 - \alpha)^\ell > 0. \] Since \(\Psi\) is a smooth bijection whose differential is everywhere invertible, the inverse map \(\Phi = \Psi^{-1}\) is smooth; since \(\Psi\) preserves the component \(x''\) and maps every ray \(\{(t x', x'') : t \ge 0\}\) into itself, \(\Phi\) is of the form \(\Phi(x) = (\lambda(x) x', x'')\) with \(\lambda \ge 1\) smooth, and one checks that \(\Phi\) satisfies all the desired properties. \end{proof} \begin{lemma} \label{lemmaThickeningAroundDualSqueleton} Let \(\ell \in \{1,\dotsc, m\}\), \(\eta > 0\), \(0<\underline{\rho}<\rho<\overline{\rho}<1\) and \(T=\{0^\ell\}\times Q^{m-\ell}_{\rho\eta}.\) There exists a smooth function \(\lambda : {\mathbb R}^m \setminus T \to [1,\infty)\) such that if \(\Phi : {\mathbb R}^m \setminus T \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in ({\mathbb R}^\ell \times {\mathbb R}^{m - \ell})\setminus T\) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \label{item03011} \item \(\Supp{\Phi}\subset B^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho\eta}\), \label{item03012} \item \(\Phi\big((B^{\ell}_{(1-\rho)\eta}\times Q^{m-\ell}_{\underline\rho\eta})\setminus T\big)\subset (B^{\ell}_{(1-\rho)\eta}\setminus B^{\ell}_{(1-\overline\rho) \eta}) \times Q^{m-\ell}_{\underline{\rho}\eta} \), \label{item03013} \item for every \(j\in {\mathbb N}_{*}\) and for every \(x = (x', x'') \in (B^\ell_{(1-\rho)\eta} \times Q^{m-\ell}_{\rho\eta}) \setminus T\), \begin{equation*} \abs{D^j\Phi(x)}\leq \frac{C\eta}{\abs{x'}^j}, \end{equation*} for some constant \(C > 0\) depending on \(j\), \(m\), \(\underline\rho\),
\(\rho\) and \(\overline\rho\), \label{item03014} \item for every \(0 < \beta < \ell\), for every \(j \in {\mathbb N}_*\) and for every \( x \in {\mathbb R}^m \setminus T\), \[ \eta^{j-1}\abs{D^j \Phi(x)} \le C' \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\), \(\underline{\rho}\), \(\rho\) and \(\overline{\rho}\). \label{item03015} \end{enumerate} \end{lemma} \begin{proof} By scaling, we may assume that \(\eta = 1\). Given \(b > 0\), let \(\varphi : (0, \infty) \to [1, \infty)\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(0 < s \le 1-\overline\rho\), \(\varphi(s)= \dfrac{1-\overline\rho}{s}\Bigl(1+\frac{b}{\ln \frac{1}{s}}\Bigr)\), \item for \(s \ge 1-\rho\), \(\varphi(s)=1\), \item the function \(s \in (0, \infty) \mapsto s\varphi(s)\) is increasing. \end{enumerate} This is possible for any \(b > 0\) such that \[ (1-\overline\rho)\Bigl(1+\frac{b}{\ln \frac{1}{1-\overline\rho}}\Bigr) < 1 - \rho. \] Let \(\theta : {\mathbb R}^{m-\ell} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(y \in Q^{m-\ell}_{\underline{\rho}}\), \(\theta(y) = 0\), \item for \(y \in {\mathbb R}^{m-\ell} \setminus Q^{m-\ell}_{\rho}\), \(\theta(y)=1\). \end{enumerate} We now introduce for \(x=(x', x'') \in {\mathbb R}^\ell\times {\mathbb R}^{m-\ell}\), \[ \zeta(x) = \sqrt{\abs{x'}^2+\theta\bigl(x''\bigr)^2}. \] Let \(\lambda : {\mathbb R}^m \setminus T \to {\mathbb R}\) be the function defined for \(x=(x', x'') \in {\mathbb R}^m \setminus T\) by \[ \lambda(x)= \varphi(\zeta(x)). \] Since \(\zeta \ne 0\) in \({\mathbb R}^m\setminus T\), the function \(\lambda\) is well-defined and smooth. In addition, \(\lambda \ge 1\). We now check that the map \(\Phi\) defined in the statement satisfies all the required properties. 
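Before doing so, it may be helpful to record an elementary computation with the explicit formula for \(\varphi\): on \((0, 1 - \overline\rho]\),
\[
s\varphi(s) = (1 - \overline\rho)\Bigl(1 + \frac{b}{\ln \frac{1}{s}}\Bigr),
\qquad
\bigl(s\varphi(s)\bigr)^{(1)} = \frac{(1 - \overline\rho)\, b}{s \bigl(\ln \frac{1}{s}\bigr)^{2}} > 0,
\]
and since \(s^{1 - \alpha} \bigl(\ln \frac{1}{s}\bigr)^{2}\) is bounded on \((0, 1 - \overline\rho]\) for every \(\alpha < 1\), we obtain \(\bigl(s\varphi(s)\bigr)^{(1)} \ge c / s^{\alpha}\) on this interval, for some constant \(c > 0\) depending on \(\alpha\) and \(b\). This quantifies the monotonicity of \(s \mapsto s\varphi(s)\) near the origin.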
\begin{proof}[Proof of Property~\((\ref{item03011})\)] In order to check that \(\Phi\) is injective, we first observe that if \(x=(x', x''), y=(y', y'') \in B^\ell_1 \times Q^{m-\ell}_{\rho}\) and \(\Phi(x)=\Phi(y)\), then \(x''=y''\), and there exists \(t > 0\) such that \(y' = t x'\). The conclusion follows from the fact that the function \[ h: s \in [0, \infty) \longmapsto s \varphi\bigl(\sqrt{s^2+\theta(x'')^2}\bigr) \] is increasing. \end{proof} \begin{proof}[Proof of Property~\((\ref{item03012})\)] For every \(x = (x', x'') \in ({\mathbb R}^\ell\times {\mathbb R}^{m-\ell})\setminus T\), if \(x' \not\in B^\ell_{1-\rho}\) or if \(x'' \not\in Q_\rho^{m - \ell}\), then \(\zeta(x) \ge 1-\rho\). Thus, \(\lambda(x) = \varphi(\zeta(x)) = 1\) and \(\Phi(x) = x\). We then have \(\Supp{\Phi}\subset B^\ell_{1-\rho} \times Q^{m-\ell}_{\rho}\). \end{proof} \begin{proof}[Proof of Property~\((\ref{item03013})\)] We first observe that since the function \(s \in (0, \infty) \mapsto s\varphi(s)\) is increasing and \(\lim\limits_{s \to 0}{s\varphi(s)} = 1 - \overline{\rho}\), for every \(s > 0\), \[ s \varphi(s) \ge 1 - \overline{\rho}. \] Since for every \(x = (x', x'') \in (B^{\ell}_{1-\rho}\times Q^{m-\ell}_{\underline\rho})\setminus T\), we have \(\zeta(x) = \abs{x'}\), we deduce that \[ \abs{\lambda(x) x'} = \varphi(\abs{x'})\abs{x'} \ge 1-\overline\rho. \] On the other hand, since the function \(h\) defined above is increasing, \[ \abs{\lambda(x) x'} = h(\abs{x'}) \le h(1-\rho) = 1-\rho. \] We conclude that \(\lambda(x)x' \in B^\ell_{1-\rho} \setminus B^\ell_{1-\overline\rho}\). \end{proof} \begin{proof}[Proof of Property~\((\ref{item03014})\)] By the chain rule, \[ \abs{D^{j}\lambda(x)} \le C_1 \sum_{i=1}^{j} \sum_{\substack{1 \le t_1 \le \dotsc \le t_i\\ t_1 + \dotsb + t_i = j}} \abs{\varphi^{(i)}(\zeta(x))}\, \abs{D^{t_1} \zeta(x)} \dotsm \abs{D^{t_i} \zeta(x)}. 
\] For every \(i \in {\mathbb N}_{\ast}\) and for every \(s > 0\), \begin{equation*} \abs{\varphi^{(i)}(s)} \leq \frac{C_2}{s^{i+1}} \end{equation*} and for every \(x \in (B^{\ell}_1\times {\mathbb R}^{m-\ell})\setminus T\), \begin{equation*} \abs{D^i \zeta(x)} \leq \frac{C_3}{\zeta(x)^{i-1}}. \end{equation*} Thus, for every \(1 \le t_1 \le \ldots \le t_i\) such that \(t_1 + \dotsb + t_i = j\), \[ \abs{D^{t_1} \zeta(x)} \dotsm \abs{D^{t_i} \zeta(x)} \le \frac{C_4}{\zeta(x)^{t_1-1} \dotsm \zeta(x)^{t_i-1}} = \frac{C_4}{\zeta(x)^{j - i}}. \] Combining these estimates, we get \[ \begin{split} \abs{D^{j} \lambda(x)} & \le C_5 \sum_{i=1}^{j} \frac{1}{\zeta(x)^{i+1}} \frac{1}{\zeta(x)^{j - i}} = \frac{C_5 j}{\zeta(x)^{j+1}}. \end{split} \] Hence, by the Leibniz rule, for any \(x\in (B^{\ell}_{1}\times {\mathbb R}^{m-\ell})\setminus T\), \begin{equation} \label{equationEstimationDeriveeThickening} \abs{D^j\Phi(x)}\le\frac{C_6}{\zeta(x)^j}. \end{equation} Since \(\zeta(x) \ge \abs{x'}\), the conclusion follows. \end{proof} \begin{proof}[Proof of Property~\((\ref{item03015})\)] For every \( x=(x', x'') \in ({\mathbb R}^\ell\times {\mathbb R}^{m-\ell})\setminus T\) and \(v=(v', v'') \in {\mathbb R}^\ell \times {\mathbb R}^{m-\ell}\), \[ D\Phi(x)[v]=\Bigl(\varphi\bigl(\zeta(x)\bigr)v' +\varphi^{(1)}\bigl(\zeta(x)\bigr) \frac{x' \cdot v'+\theta(x'') D\theta(x'')[v'']}{\zeta(x)} x', v''\Bigr). \] The Jacobian can be computed as the determinant of a rank-one perturbation of a diagonal linear map to be \[ \begin{split} \jac \Phi(x) & =\varphi(\zeta(x))^{\ell-1}\Bigl(\varphi(\zeta(x))+\varphi^{(1)}(\zeta(x))\frac{\abs{x'}^2}{\zeta(x)} \Bigr)\\ & =\varphi(\zeta(x))^{\ell-1}\Bigl(\varphi(\zeta(x)) \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr) + \big(\varphi^{(1)}(\zeta(x))\zeta(x) + \varphi(\zeta(x)) \big)\frac{\abs{x'}^2}{\zeta(x)^2} \Bigr).
\end{split} \] Since for every \(s > 0\), \[ s \varphi^{(1)}(s) + \varphi(s) = (s\varphi(s))^{(1)} \ge 0 \] and since there exists \(c_1 > 0\) such that for every \(s > 0\), \[ \varphi(s) \ge \frac{c_1}{s}, \] we have \[ \jac \Phi(x) \ge \varphi(\zeta(x))^{\ell} \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr) \ge \frac{c_2} {\zeta(x)^\ell} \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr). \] If \(\abs{x'} \le \theta(x'')\), then \(\zeta(x) \ge \sqrt{2}\abs{x'}\) and we get \[ \jac \Phi(x) \ge \frac{c_3}{\zeta(x)^\ell}. \] On the other hand, by direct inspection, for every \(0 < \alpha < 1\), there exists a constant \(c_4 > 0\) depending on \(\alpha\) such that for every \(s > 0\), \[ s \varphi^{(1)}(s) + \varphi(s) \ge \frac{c_4}{s^\alpha}. \] Thus, \[ \jac \Phi(x) \ge \varphi(\zeta(x))^{\ell-1} \big(\varphi^{(1)}(\zeta(x))\zeta(x) + \varphi(\zeta(x)) \big)\frac{\abs{x'}^2}{\zeta(x)^2} \ge \frac{c_5}{\zeta(x)^{\ell -1 + \alpha}} \frac{\abs{x'}^2}{\zeta(x)^2}. \] If \(\abs{x'} > \theta(x'')\), then \(\zeta(x) \le \sqrt{2}\abs{x'}\) and we get \[ \jac \Phi(x) \ge \frac{c_6}{\zeta(x)^{\ell -1 + \alpha}}. \] In both cases, we deduce that for every \(0 < \beta < \ell\) and for every \( x \in {\mathbb R}^m \setminus T\), \[ \jac \Phi(x) \ge \frac{c_7}{\zeta(x)^{\beta}}. \] Thus, by estimate \eqref{equationEstimationDeriveeThickening} in the proof of property~\((\ref{item03014})\) above, when \(x\in (B^{\ell}_{1-\rho}\times Q^{m-\ell}_\rho)\setminus T\), \[ \abs{D^j\Phi(x)}\le\frac{C_6}{\zeta(x)^j} \le \frac{C_6}{(c_7)^\frac{j}{\beta}} (\jac \Phi(x))^\frac{j}{\beta}. \qedhere \] \end{proof} The proof of Lemma~\ref{lemmaThickeningAroundDualSqueleton} is complete.
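Let us also observe that property~\((\ref{item03015})\) indeed holds on all of \({\mathbb R}^m \setminus T\): outside \(B^{\ell}_{1-\rho} \times Q^{m-\ell}_{\rho}\), the map \(\Phi\) coincides with the identity, so that
\[
D\Phi(x) = \mathrm{Id}, \qquad D^j \Phi(x) = 0 \quad \text{for } j \ge 2, \qquad \jac \Phi(x) = 1,
\]
and the required inequality is immediate there.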
\end{proof} \begin{proof}[Proof of Proposition~\ref{lemmaThickeningFaceFromPrimalSkeleton}] Define \(\Phi\) to be the composition of the map \(\Phi_1\) given by Lemma~\ref{lemmaThickeningAroundPrimalSqueleton} with any parameter \(\kappa\le \frac{1-\overline{\rho}}{\sqrt{\ell}}\) together with the map \(\Phi_2\) given by Lemma~\ref{lemmaThickeningAroundDualSqueleton}; more precisely, \(\Phi=\Phi_1\circ \Phi_2\). By composition, the map \(\Phi\) is injective and \(\Supp{\Phi}\subset Q^{\ell}_{(1-\rho)\eta }\times Q^{m-\ell}_{\rho\eta}\). Moreover, the choice of \(\kappa\) implies that \(Q^{\ell}_{\kappa\eta}\subset B^{\ell}_{(1-\overline\rho)\eta}\). Hence, \[ \Phi\big((Q^{\ell}_{(1-\rho)\eta }\times Q^{m-\ell}_{\underline{\rho}\eta}) \setminus T\big) \subset (Q^{\ell}_{(1-\rho)\eta } \setminus Q^{\ell}_{(1-\overline{\rho})\eta})\times Q^{m-\ell}_{\underline{\rho}\eta}. \] By the chain rule for higher order derivatives and by the estimate of the derivatives of \(\Phi_1\) (Lemma~\ref{lemmaThickeningAroundPrimalSqueleton}, see property (\(\ref{item1454}\))), \[ \begin{split} \abs{D^{j}\Phi(x)} & \le C_1 \sum_{i=1}^{j} \sum_{\substack{1 \le t_1 \le \dotsc \le t_i\\ t_1 + \dotsb + t_i = j}} \abs{D^i\Phi_1(\Phi_2(x))}\, \abs{D^{t_1} \Phi_2(x)} \dotsm \abs{D^{t_i} \Phi_2(x)}\\ & \le C_2 \sum_{i=1}^{j} \sum_{\substack{1 \le t_1 \le \dotsc \le t_i\\ t_1 + \dotsb + t_i = j}} \frac{\abs{D^{t_1} \Phi_2(x)} \dotsm \abs{D^{t_i} \Phi_2(x)}}{\eta^{i-1}}. \end{split} \] The estimate for \(D^j \Phi\) is a consequence of the estimates of the derivatives of \(\Phi_2\) (see Lemma~\ref{lemmaThickeningAroundDualSqueleton}, property (\(\ref{item03014}\))). The estimate for \(\jac{\Phi}\) is a consequence of the estimate for \(\jac{\Phi_2}\) given by property (\(\ref{item03015}\)) of Lemma~\ref{lemmaThickeningAroundDualSqueleton} and the lower bound for \(\jac{\Phi_1}\) given by property (\(\ref{item1455}\)) of Lemma~\ref{lemmaThickeningAroundPrimalSqueleton}.
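In the last step, we also use the fact that the Jacobian is multiplicative under composition: since \(\Phi = \Phi_1 \circ \Phi_2\), the chain rule gives \(D\Phi(x) = D\Phi_1(\Phi_2(x)) \circ D\Phi_2(x)\), and taking determinants,
\[
\jac \Phi(x) = \jac \Phi_1\bigl(\Phi_2(x)\bigr)\, \jac \Phi_2(x),
\]
so that lower bounds on \(\jac \Phi_1\) and \(\jac \Phi_2\) combine directly into a lower bound on \(\jac \Phi\).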
\end{proof} \section{Proof of Theorem~\ref{theoremDensityManifoldNontrivial}} First observe that if \(u \in W^{k, p}(Q_1^m; N^n)\), then the restrictions to \(Q_1^m\) of the maps \(u_\gamma \in W^{k, p}(Q_{1 + 2\gamma}^m; N^n)\) defined for \(x \in Q_{1 + 2 \gamma}^m\) by \(u_\gamma(x) = u (x/(1 + 2 \gamma))\) converge strongly to \(u\) in \(W^{k, p}(Q_1^m; N^n)\) when \(\gamma\) tends to \(0\). We can thus assume from the beginning that \(u \in W^{k, p}(Q_{1 + 2\gamma}^m; N^n)\). We apply successively the opening, smoothing and thickening constructions to this map \(u\). We divide the proof into four parts: \begin{Part} Construction of a map \(u^\mathrm{th}_\eta \in W^{k, p}(Q^{m}_{1+\gamma}; {\mathbb R}^\nu) \cap C^\infty(Q^{m}_{1+\gamma} \setminus T^{\ell^*}_\eta; {\mathbb R}^\nu)\) such that for every \(j \in \{1, \dots, k\}\), \begin{multline*} \eta^{j} \norm{D^j u^\mathrm{th}_\eta - D^j u}_{L^p(Q^{m}_{1+\gamma})}\\ \leq \sup_{v \in B_1^m}{\eta^{j} \norm{\tau_{\psi_\eta v}(D^j u) - D^j u}_{L^p(Q^{m}_{1+\gamma})}} + C \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})}, \end{multline*} where \(\mathcal{U}^m_\eta\) is a subskeleton of \(Q^{m}_{1+\gamma}\) and \(\mathcal{T}^{\ell^*}_\eta\) is the dual skeleton of \(\mathcal{U}^\ell_\eta\). \end{Part} \textsl{Using the terminology presented in the Introduction, the subskeleton \(\mathcal{U}^m_\eta\) will be chosen to be the set of all bad cubes together with the set of good cubes which intersect some bad cube. The precise choice of \(\mathcal{U}^m_\eta\) will be made in Part~3.} \medskip Let \(\mathcal{K}^m_\eta\) be a cubication of \(Q_{1+\gamma}^m\) of radius \(0 < \eta \le \gamma\) and let \(\mathcal{U}^m_\eta\) be a subskeleton of \(\mathcal{K}^m_\eta\). Let \(0 < \rho < \frac{1}{2}\); since \(\eta \le \gamma\), we then have \[ 2\rho\eta \le \gamma. \] Given \(\ell \in \{0, \dots, m - 1\}\), we begin by opening the map \(u\) in a neighborhood of \(U^\ell_\eta\).
More precisely, let \(\Phi^\mathrm{op} : {\mathbb R}^m \to {\mathbb R}^m\) be the smooth map given by Proposition~\ref{openingpropGeneral} and consider the map \[ u^\mathrm{op}_\eta = u \circ \Phi^\mathrm{op}. \] In particular, \(u^\mathrm{op}_\eta \in W^{k, p}(Q^m_{1+2\gamma}; N^n)\) and \(u^\mathrm{op}_\eta = u\) in the complement of \(U^\ell_\eta + Q^m_{2\rho\eta}\). For every \(j \in \{1, \dots, k\}\), \begin{equation} \label{inequalityMainOpening} \begin{split} \eta^j\norm{D^j u^\mathrm{op}_\eta - D^j u}_{L^p(Q^m_{1+2\gamma})} & = \eta^j\norm{D^j u^\mathrm{op}_\eta - D^j u}_{L^p(U^\ell_\eta + Q^m_{2\rho\eta})}\\ & \le \eta^j\norm{D^j u^\mathrm{op}_\eta}_{L^p(U^\ell_\eta + Q^m_{2\rho\eta})} + \eta^j\norm{D^j u}_{L^p(U^\ell_\eta + Q^m_{2\rho\eta})}\\ & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \eta^{i} \norm{D^i u}_{L^p(U^\ell_\eta + Q^m_{2\rho\eta})}. \end{split} \end{equation} We next consider a smooth function \(\psi_\eta \in C^\infty(Q^m_{1+2\gamma})\) such that \[ \boxed{0< \psi_\eta \le \rho \eta.} \] Given a mollifier \(\varphi \in C_c^\infty(B_1^m)\), we define for every \(x\in Q^{m}_{1+\gamma + \rho\eta}\), \[ u^\mathrm{sm}_\eta(x) = (\varphi_{\psi_\eta(x)} \ast u^\mathrm{op}_\eta)(x). \] Since \(0 < \psi_\eta \le \rho\eta\), the map \(u^\mathrm{sm}_\eta : Q_{1 + \gamma+\rho\eta}^m \to {\mathbb R}^\nu\) is well defined and smooth.
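Here, as usual, \(\varphi_t(y) = \frac{1}{t^m}\, \varphi\bigl(\frac{y}{t}\bigr)\) for \(t > 0\); unfolding the definition and changing variables, for every \(x \in Q^m_{1+\gamma+\rho\eta}\),
\[
u^\mathrm{sm}_\eta(x) = \int\limits_{B^m_1} \varphi(z)\, u^\mathrm{op}_\eta\bigl(x - \psi_\eta(x) z\bigr) \,\mathrm{d} z,
\]
which only involves the values of \(u^\mathrm{op}_\eta\) on \(B^m_{\psi_\eta(x)}(x) \subset Q^m_{1 + 2\gamma}\), since \(\psi_\eta \le \rho\eta\) and \(2\rho\eta \le \gamma\).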
If \[ \boxed{\norm{D\psi_\eta}_{L^\infty(Q^m_{1+2\gamma})} \le \beta} \] for some \(\beta < 1\) and if for every \(i \in \{2, \dotsc, k\}\), \[ \boxed{ \eta^{i} \norm{D^i \psi_\eta}_{L^\infty(Q^m_{1+2\gamma})} \le \eta, } \] then by Proposition~\ref{lemmaConvolutionEstimates} with \(\omega = Q^m_{1+\gamma}\), we have for every \(j \in \{1, \dots, k\}\), \begin{multline*} \eta^{j} \norm{D^j u^\mathrm{sm}_\eta - D^j u^\mathrm{op}_\eta}_{L^p(Q^{m}_{1+\gamma})}\\ \leq \sup_{v \in B_1^m}{\eta^{j} \norm{\tau_{\psi_\eta v}(D^j u^\mathrm{op}_\eta) - D^j u^\mathrm{op}_\eta}_{L^p(Q^{m}_{1+\gamma})}} + \refstepcounter{cte} C_{\thecte} \sum_{i=1}^j \eta^{i} \norm{D^i u^\mathrm{op}_\eta}_{L^p(A)}, \end{multline*} where \( A = \bigcup\limits_{x \in Q^{m}_{1+\gamma} \cap \supp{D\psi_\eta}}B_{\psi_\eta(x)}^m(x). \) For every \(v \in B_1^m\), \begin{multline*} \eta^{j} \norm{\tau_{\psi_\eta v}(D^j u^\mathrm{op}_\eta) - D^j u^\mathrm{op}_\eta}_{L^p(Q^m_{1+\gamma})}\\ \begin{aligned} & \le \eta^{j} \norm{\tau_{\psi_\eta v}(D^j u^\mathrm{op}_\eta) - \tau_{\psi_\eta v}(D^j u)}_{L^p(Q^m_{1+\gamma})}\\ & \qquad+\eta^{j} \norm{\tau_{\psi_\eta v}(D^j u) - D^j u}_{L^p(Q^m_{1+\gamma})}+\eta^{j} \norm{D^j u^\mathrm{op}_\eta - D^j u}_{L^p(Q^m_{1+\gamma})} \end{aligned} \end{multline*} and, by the change of variable formula, \[ \norm{\tau_{\psi_\eta v}(D^j u^\mathrm{op}_\eta) - \tau_{\psi_\eta v}(D^j u)}_{L^p(Q^m_{1+\gamma})}\le \refstepcounter{cte} C_{\thecte} \norm{D^j u^\mathrm{op}_\eta - D^j u}_{L^p(Q^m_{1+2\gamma})}. \] If we further assume that \[ \boxed{\supp{D\psi_\eta}\subset U^m_\eta,} \] then since \(\psi_\eta \le \rho\eta\), we have \(A \subset U^m_\eta + Q^m_{\rho\eta}\). By Proposition~\ref{openingpropGeneral}, we then have \[ \sum_{i=1}^j \eta^{i} \norm{D^i u^\mathrm{op}_\eta}_{L^p(A)} \le \sum_{i=1}^j \eta^{i} \norm{D^i u^\mathrm{op}_\eta}_{L^p(U^m_\eta + Q^m_{\rho\eta})} \le \refstepcounter{cte} C_{\thecte} \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})}. 
\] Thus, for every \(j \in \{1, \dots, k\}\), \begin{multline} \label{inequalityMainSmoothening} \eta^{j} \norm{D^j u^\mathrm{sm}_\eta - D^j u^\mathrm{op}_\eta}_{L^p(Q^m_{1+\gamma})}\\ \le \sup_{v \in B_1^m}{\eta^{j} \norm{\tau_{\psi_\eta v}(D^j u) - D^j u}_{L^p(Q^m_{1+\gamma})}} + \refstepcounter{cte} C_{\thecte} \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})}. \end{multline} Given \(0 < \underline{\rho} < \rho\), we apply thickening to the map \(u^\mathrm{sm}_\eta\) in a neighborhood of \(U^\ell_\eta\) of size \(\underline{\rho}\eta\). More precisely, denote by \(\Phi^\mathrm{th} : {\mathbb R}^m \to {\mathbb R}^m\) the smooth map given by Proposition~\ref{propositionthickeningfromaskeleton} with the parameter \(\underline{\rho}\) and let \[ u^\mathrm{th}_\eta = u^\mathrm{sm}_\eta \circ \Phi^\mathrm{th}. \] Then, \(u^\mathrm{th}_\eta = u^\mathrm{sm}_\eta\) in the complement of \(U^m_\eta + Q^m_{\underline{\rho}\eta}\). Assuming in addition that \[ \boxed{\ell + 1 > kp,} \] then by Corollary~\ref{corollaryEstimateThickening}, \(u^\mathrm{th}_\eta \in W^{k, p}(K^m_\eta; {\mathbb R}^\nu)\) and for every \(j \in \{1, \dots, k\}\), \[ \begin{split} \eta^j\norm{D^j u^\mathrm{th}_\eta - D^j u^\mathrm{sm}_\eta}_{L^p(K^m_\eta)} & \leq \eta^j\norm{D^j u^\mathrm{th}_\eta - D^j u^\mathrm{sm}_\eta}_{L^p(U^m_\eta + Q^m_{\underline{\rho}\eta})}\\ & \leq \eta^j\norm{D^j u^\mathrm{th}_\eta}_{L^p(U^m_\eta + Q^m_{\underline{\rho}\eta})} + \eta^j\norm{D^j u^\mathrm{sm}_\eta}_{L^p(U^m_\eta + Q^m_{\underline{\rho}\eta})}\\ & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \eta^{i} \norm{D^i u^\mathrm{sm}_\eta}_{L^p(U^m_\eta + Q^m_{\underline{\rho}\eta})}. 
\end{split} \] Thus, by Proposition~\ref{lemmaConvolutionEstimates} and by Proposition~\ref{openingpropGeneral}, \begin{equation} \label{inequalityMainThickening} \begin{split} \eta^j\norm{D^j u^\mathrm{th}_\eta - D^j u^\mathrm{sm}_\eta}_{L^p(K^m_\eta)} & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \eta^{i} \norm{D^i u^\mathrm{op}_\eta}_{L^p(U^m_\eta + Q^m_{(\underline \rho + \rho)\eta})}\\ & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \eta^{i} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})}. \end{split} \end{equation} By the triangle inequality, we deduce from \eqref{inequalityMainOpening}, \eqref{inequalityMainSmoothening} and \eqref{inequalityMainThickening} that for every \(j \in \{1, \dots, k\}\), \begin{multline*} \eta^{j} \norm{D^j u^\mathrm{th}_\eta - D^j u}_{L^p(K^m_\eta)}\\ \leq \sup_{v \in B_1^m}{\eta^{j} \norm{\tau_{\psi_\eta v}(D^j u) - D^j u}_{L^p(Q^{m}_{1+\gamma})}} + C \sum_{i=1}^j \eta^{i} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})}. \end{multline*} This gives the estimate we claimed since \(K^{m}_{\eta}=Q^m_{1+\gamma}\). We observe that \(u^\mathrm{th}_\eta\) is smooth except on \((U^m_\eta + Q^m_{\underline\rho \eta}) \cap T^{\ell^*}_\eta\) where \(T^{\ell^*}_\eta\) is the dual skeleton corresponding to the cubication \(\mathcal{K}^{m}_\eta\). \qed \medskip The map \(u^\mathrm{th}_\eta\) need not have its values on the manifold \(N^n\), so we need to estimate the distance between the image of \(u^\mathrm{th}_\eta\) and \(N^n\). 
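This is what allows one to project back onto the manifold later on: if \(\iota > 0\) is such that the nearest point projection \(\Pi\) onto \(N^n\) is well defined and smooth on the tubular neighborhood of \(N^n\) of radius \(\iota\), then, since \(u^\mathrm{th}_\eta\) is smooth outside \(T^{\ell^*}_\eta\),
\[
\sup_{x \in K^m_\eta \setminus T^{\ell^*}_\eta} \dist{\bigl(u^\mathrm{th}_\eta(x), N^n\bigr)} \le \iota \quad \text{implies} \quad \Pi \circ u^\mathrm{th}_\eta \in C^\infty(K^m_\eta \setminus T^{\ell^*}_\eta; N^n).
\]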
\begin{Part} The directed Hausdorff distance from the image of the map \(u^\mathrm{th}_\eta\) to the manifold \(N^n\) satisfies the estimate \begin{multline*} \Dist_{N^n}{(u^\mathrm{th}_\eta(K^m_\eta\setminus T^{\ell^*}_\eta))}\\ \begin{aligned} \le \max \biggl\{ &\max_{\sigma^m \in \mathcal{K}^m_\eta \setminus \mathcal{E}^m_\eta} \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)},\\ & \sup_{x \in U^\ell_\eta + Q_{\underline{\rho}\eta}^m}\frac{C''}{\abs{Q_{s}^m}^2} \int\limits_{Q_{s}^m(x)}\int\limits_{Q_{s}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z\biggr\}, \end{aligned} \end{multline*} where \(\mathcal{E}^m_\eta\) is a subskeleton of \(\mathcal{U}^m_\eta\), and this estimate implies that for every \(\eta > 0\) sufficiently small, the image of \(u^\mathrm{th}_\eta\) is contained in a small tubular neighborhood of \(N^n\). \end{Part} \textsl{The subskeleton \(\mathcal{E}^m_\eta\) will be chosen in Part~3 as the set of bad cubes and \(\mathcal{K}^m_\eta \setminus \mathcal{E}^m_\eta\) will be the set of good cubes.} \medskip We first observe that by Proposition~\ref{propositionthickeningfromaskeleton} \((\ref{itempropositionthickeningfromaskeleton2})\), \(\Phi^\textrm{th}(K^m_\eta\setminus (T^{\ell^\ast}\cup U^m_\eta)) \subset K^m_\eta\setminus U^m_\eta\) while by Proposition~\ref{propositionthickeningfromaskeleton}~\((\ref{itempropositionthickeningfromaskeleton3})\), \(\Phi^\textrm{th}(U^m_\eta\setminus T^{\ell^\ast})\subset U^\ell_\eta+Q^{m}_{\underline{\rho}\eta}\). Hence, \[ \Phi^\mathrm{th}(K^m_\eta \setminus T^{\ell^*}_\eta) \subset (K^m_\eta \setminus U^m_\eta) \cup (U^\ell_\eta + Q^m_{\underline{\rho}\eta}). \] Given a set \(S \subset {\mathbb R}^\nu\), we denote by \(\Dist_{N^n} {(S)}\) the directed Hausdorff distance from \(S\) to \(N^n\), \[ \Dist_{N^n}{(S)} = \sup{\big\{ \dist{(x, N^n)} : x \in S \big\}}. 
\] With this notation we have \[ \Dist_{N^n}{(u^\mathrm{th}_\eta(K^m_\eta\setminus T^{\ell^*}_\eta))} \le \Dist_{N^n}\Bigl(u^\mathrm{sm}_\eta \big((K^m_\eta \setminus U^m_\eta) \cup (U^\ell_\eta + Q^m_{\underline{\rho}\eta})\big) \Bigr). \] Since the image of the map \(u^\mathrm{op}_\eta\) obtained by opening \(u\) is contained in \(N^n\) (see Lemma~\ref{lemmaOpeningLp}), for every \(x \in K^m_\eta\) we have \[ \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le \frac{1}{\abs{Q_{\psi_\eta(x)}^m}} \int\limits_{Q_{\psi_\eta(x)}^m(x)} \abs{u^\mathrm{sm}_\eta(x) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} z. \] On the other hand, since \(u^\mathrm{sm}_\eta\) is the convolution of \(u^\mathrm{op}_\eta\) with a mollifier, \[ \begin{split} \abs{u^\mathrm{sm}_\eta(x) - u^\mathrm{op}_\eta(z)} &\le \frac{1}{\psi_\eta(x)^m} \int\limits_{B_{\psi_\eta(x)}^m(x)} \varphi \Bigl(\frac{x - y}{\psi_\eta(x)}\Bigr) \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\\ &\le \frac{\setcounter{cte}{1} C_{\thecte}}{\abs{Q_{\psi_\eta(x)}^m}} \int\limits_{Q_{\psi_\eta(x)}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y . \end{split} \] Thus, \begin{equation} \label{equationDistanceBasic} \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le \frac{C_{\thecte}}{\abs{Q_{\psi_\eta(x)}^m}^2} \int\limits_{Q_{\psi_\eta(x)}^m(x)}\int\limits_{Q_{\psi_\eta(x)}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z. \end{equation} Since \(N^n\) is a compact subset of \( {\mathbb R}^\nu \), \(u\) is bounded. By the Gagliardo-Nirenberg interpolation inequality (see \cite{Gagliardo, Nirenberg1959}), \(D u \in L^{kp}(Q^m_{1 + 2\gamma})\).
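The next estimate combines the Poincar\'e-Wirtinger inequality with H\"older's inequality; we record the computation, valid for every \(v \in W^{1, kp}(Q^m_r)\) on a cube of radius \(r > 0\):
\[
\frac{1}{\abs{Q^m_r}^2} \int\limits_{Q^m_r}\int\limits_{Q^m_r} \abs{v(y) - v(z)} \,\mathrm{d} y\,\mathrm{d} z
\le \frac{C r}{\abs{Q^m_r}} \int\limits_{Q^m_r} \abs{Dv}
\le \frac{C r}{\abs{Q^m_r}^{\frac{1}{kp}}} \norm{Dv}_{L^{kp}(Q^m_r)}
= \frac{C'}{r^{\frac{m}{kp} - 1}} \norm{Dv}_{L^{kp}(Q^m_r)},
\]
since \(\abs{Q^m_r} = (2r)^m\); it will be applied to \(v = u^\mathrm{op}_\eta\) on cubes of radius \(r = \psi_\eta(x)\).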
By the Poincar\'e-Wirtinger inequality, \[ \frac{1}{\abs{Q_{\psi_\eta(x)}^m}^2} \int\limits_{Q_{\psi_\eta(x)}^m(x)}\int\limits_{Q_{\psi_\eta(x)}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z \le \frac{\refstepcounter{cte} C_{\thecte}}{\psi_\eta(x)^{\frac{m}{kp} - 1}} \norm{Du^\mathrm{op}_\eta}_{L^{kp}(Q_{\psi_\eta(x)}^m(x))}. \] Since \(\psi_\eta \le \rho \eta\), if \(\sigma^m \in \mathcal{K}^m_\eta\) is such that \(x \in \sigma^m\), then \(Q_{\psi_\eta(x)}^m(x) \subset \sigma^m + Q_{\rho\eta}^m\). Hence, \[ \begin{split} \dist{(u^\mathrm{sm}_\eta(x), N^n)} & \le \frac{\refstepcounter{cte} C_{\thecte}}{\psi_\eta(x)^{\frac{m}{kp} - 1}} \norm{Du^\mathrm{op}_\eta}_{L^{kp}(Q_{\psi_\eta(x)}^m(x))} \\ & \le \frac{C_{\thecte}}{\psi_\eta(x)^{\frac{m}{kp} - 1}} \norm{Du^\mathrm{op}_\eta}_{L^{kp}(\sigma^m + Q_{\rho\eta}^m)}. \end{split} \] Thus, by Addendum~\ref{addendumW1kp} to Proposition~\ref{openingpropGeneral}, \[ \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le \frac{\refstepcounter{cte} C_{\thecte}}{\psi_\eta(x)^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}. \] We rewrite this estimate for every \(x \in K^m_\eta\) as \begin{equation} \label{equationDistanceKU} \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le \Big(\frac{\eta}{\psi_\eta(x)}\Big)^{\frac{m}{kp} - 1} \frac{C_{\thecte}}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}. \end{equation} If \(x\in (U^{\ell}_\eta+Q^{m}_{\underline{\rho}\eta}) \cap U^{m}_{\eta}\), then \(x\in \sigma^m\) for some cube \(\sigma^m \in \mathcal{U}^m_\eta\). If \[ \boxed{\psi_\eta (x) \le (\rho - \underline\rho) \eta,} \] then \(Q_{\psi_\eta(x)}^m(x) \subset U^\ell_\eta + Q^m_{{\rho}\eta}\). 
By Addendum~\ref{addendumVMO} to Proposition~\ref{openingpropGeneral}, we have \begin{multline*} \frac{1}{\abs{Q_{\psi_\eta(x)}^m}^2} \int\limits_{Q_{\psi_\eta(x)}^m(x)}\int\limits_{Q_{\psi_\eta(x)}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z \\ \le (\psi_\eta(x))^{1 - \frac{\ell}{kp}} \frac{\refstepcounter{cte} C_{\thecte}}{\eta^{\frac{m - \ell}{kp}}} \norm{Du}_{L^{kp}(\sigma^m + Q^m_{2\rho\eta})}. \end{multline*} Therefore, \begin{equation*} \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le (\psi_\eta(x))^{1 - \frac{\ell}{kp}} \frac{C_{\thecte}}{\eta^{\frac{m - \ell }{kp}}} \norm{Du}_{L^{kp}(\sigma^m + Q^m_{2\rho\eta})}. \end{equation*} We rewrite this estimate for every \(x\in (U^{\ell}_\eta+Q^{m}_{\underline{\rho}\eta}) \cap U^{m}_{\eta}\) as \begin{equation}\label{equationDistanceKUbis} \dist{(u^\mathrm{sm}_\eta(x), N^n)} \le \Big(\frac{\psi_\eta(x)}{\eta}\Big)^{1 - \frac{\ell}{kp}} \frac{C_{\thecte}}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q^m_{2\rho\eta})}. \end{equation} We now describe the function \(\psi_\eta\) that we shall take. Given two parameters \(0 < s < t\) and given a function \(\zeta \in C^\infty(Q^m_{1+2\gamma})\), we define \[ \psi_\eta = t\zeta+s (1 - \zeta). \] More precisely, let \(\mathcal{E}^m_\eta\) be a subskeleton of \(\mathcal{U}^m_\eta\) such that \[ \boxed{E^m_\eta \subset \Int{U^m_\eta}} \] in the relative topology of \(Q^{m}_{1+\gamma}\). Since \(\dist{(E^m_\eta, K^m_\eta \setminus U^m_\eta)} \ge \eta\), we take a function \(\zeta \in C^\infty(K^m_\eta)\) such that \begin{enumerate}[$(i)$] \item \(0 \le \zeta \le 1\) in \(K^m_\eta\), \item \(\zeta = 1\) in \(K^m_\eta \setminus U^m_\eta\), \item \(\zeta = 0\) in \(E^m_\eta\), \item for every \(j \in \{1, \dots, k\}\), \(\eta^j\norm{D^j\zeta}_{L^\infty} \le \tilde C\), for some constant \(\tilde C > 0\) depending only on \(m\). 
\end{enumerate} Thus, \(\supp{D\psi_\eta} \subset U_\eta^m\) and \[ \eta^j\norm{D^j\psi_\eta}_{L^\infty} \le \tilde C t. \] In order to apply Proposition~\ref{lemmaConvolutionEstimates} and to have \(\psi_\eta \le (\rho - \underline{\rho}) \eta\), we choose \[ t = \min \Bigl\{ \frac{\kappa}{\tilde C}, \rho - \underline{\rho}\Bigr\} \, \eta, \] for some fixed number \(0 < \kappa < 1\). Since \(\psi_\eta = t \) in \(K^m_\eta \setminus U^m_\eta\) and \(t \ge c\eta\) for some constant \(c > 0\) independent of \(\eta\), we have from \eqref{equationDistanceKU}, \[ \Dist_{N^n}{\big(u^\mathrm{sm}_\eta(K^m_\eta \setminus U^m_\eta)\big)} \le \max_{\sigma^m \in \mathcal{K}^m_\eta \setminus \mathcal{U}^m_\eta} \frac{\refstepcounter{cte} C_{\thecte}}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}. \] Since \(\psi_\eta = s \) in \(E^m_\eta\), we have from \eqref{equationDistanceBasic}, \begin{multline*} \Dist_{N^n}{\Bigl(u^\mathrm{sm}_\eta\big((U^\ell_\eta + Q^m_{\underline{\rho}\eta})\cap E^{m}_{\eta}\big)\Bigr)}\\ \le \sup_{x \in U^\ell_\eta + Q_{\underline{\rho}\eta}^m}\frac{C_1}{\abs{Q_{s}^m}^2} \int\limits_{Q_{s}^m(x)}\int\limits_{Q_{s}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z. \end{multline*} Finally, if \[ \boxed{\ell \le kp,} \] then by \eqref{equationDistanceKUbis} and by the estimate \(\psi_\eta(x) \leq t = \refstepcounter{cte} C_{\thecte} \eta\), we get \[ \Dist_{N^n}{\Bigl(u^\mathrm{sm}_\eta\big((U^\ell_\eta + Q^m_{\underline{\rho}\eta})\cap (U^{m}_{\eta} \setminus E^m_{\eta})\big)\Bigr)} \le \max_{\sigma^m \in \mathcal{U}^m_\eta \setminus \mathcal{E}^m_\eta} \frac{\refstepcounter{cte} C_{\thecte}}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)}.
\] Since we have already required that \(\ell+1>kp\), we are thus led to take \[ \boxed{\ell = \floor{kp}.} \] We deduce that \begin{multline*} \Dist_{N^n}{(u^\mathrm{th}_\eta(K^m_\eta\setminus T^{\ell^*}_\eta))}\\ \begin{aligned} \le \max \biggl\{ &\max_{\sigma^m \in \mathcal{K}^m_\eta \setminus \mathcal{E}^m_\eta} \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)},\\ & \sup_{x \in U^\ell_\eta + Q_{\underline{\rho}\eta}^m}\frac{C''}{\abs{Q_{s}^m}^2} \int\limits_{Q_{s}^m(x)}\int\limits_{Q_{s}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z\biggr\}. \end{aligned} \end{multline*} This gives the estimate we claimed. The nearest point projection \(\Pi\) onto \(N^n\) is well-defined and smooth on a tubular neighborhood of \(N^n\) of radius \(\iota > 0\). We now choose the subskeleton \(\mathcal{E}^m_\eta\) used in the definition of \(\zeta\) and \(\psi_\eta\) as the set of cubes \(\sigma^m \in \mathcal{K}^m_\eta\) such that \[ \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^{m})} > \iota. \] Thus, \[ \max_{\sigma^m \in \mathcal{K}^m_\eta \setminus \mathcal{E}^m_\eta} \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\sigma^m + Q_{2\rho\eta}^m)} \le \iota. \] We then take the subskeleton \(\mathcal{U}^m_\eta\) used in the constructions of opening and thickening as the set of cubes \(\sigma^m \in \mathcal{K}^m_\eta\) which intersect some cube in \(\mathcal{E}^m_\eta\); in particular \(E^m_\eta \subset \Int{U^m_\eta}\) in the relative topology of \(Q_{1 + \gamma}^m\). In view of the uniform limit of Addendum~\ref{addendumVMO} to Proposition~\ref{openingpropGeneral}, since \(\ell \le kp\), for every \(s > 0\) small enough, \[ \sup_{x \in U^\ell_\eta + Q_{\underline{\rho}\eta}^m}\frac{C''}{\abs{Q_{s}^m}^2} \int\limits_{Q_{s}^m(x)}\int\limits_{Q_{s}^m(x)} \abs{u^\mathrm{op}_\eta(y) - u^\mathrm{op}_\eta(z)} \,\mathrm{d} y\,\mathrm{d} z \le \iota.
\] We conclude that \(u^\mathrm{th}_\eta(K^m_\eta \setminus T^{\ell^*}_\eta)\) is contained in a tubular neighborhood of \(N^n\) of radius \(\iota\). \qed \begin{Part} The maps \(\Pi \circ u^\mathrm{th}_\eta\) converge to \(u\) in \(W^{k, p}(Q_1^m; N^n)\) as \(\eta\) tends to \(0\). \end{Part} Using the estimate from Part~1, we show that for every \(j \in \{1, \dots, k\}\), \[ \lim\limits_{\eta \to 0} \norm{D^j u^\mathrm{th}_\eta - D^j u}_{L^p(Q^m_{1+\gamma})} = 0. \] By continuity of the translation operator in \(L^p\) (see the remark following Proposition~\ref{lemmaConvolutionEstimatesLp}), \begin{equation}\label{equationLpConvergenceTranslates} \lim_{\eta \to 0} \sup_{v \in B_1^m}{\norm{\tau_{\psi_\eta v}(D^j u) - D^j u}_{L^p(Q^{m}_{1+\gamma})}} = 0. \end{equation} We now need to show that \[ \lim_{\eta \to 0} \sum_{i=1}^j \eta^{i-j} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})} = 0. \] By the Gagliardo-Nirenberg interpolation inequality, for every \(i \in \{1, \dots, k-1\}\), \(D^i u \in L^{\frac{kp}{i}}(Q^m_{1 + 2\gamma})\). By H\"older's inequality, for every \(i \in \{1, \dots, k\}\) we have \[ \begin{split} \eta^{i-j} \norm{D^i u}_{L^p(U^m_\eta + Q^m_{2\rho\eta})} & \le \eta^{i-j} \abs{U^m_\eta + Q^m_{2\rho\eta}}^{\frac{k-i}{kp}} \norm{D^i u}_{L^{\frac{kp}{i}}(U^m_\eta + Q^m_{2\rho\eta})}\\ & = \eta^{k-j} \left(\frac{\abs{U^m_\eta + Q^m_{2\rho\eta}}}{\eta^{kp}} \right)^{\frac{k - i}{kp}} \norm{D^i u}_{L^{\frac{kp}{i}}(U^m_\eta + Q^m_{2\rho\eta})}. \end{split} \] In view of this estimate, it suffices to show that \(\abs{U^m_\eta + Q^m_{2\rho\eta}} = O(\eta^{kp})\) as \(\eta \to 0\). We observe that \(\abs{U^m_\eta + Q^m_{2\rho\eta}}\) satisfies the following estimate in terms of the number of elements \(\#\mathcal{U}^m_\eta\) of the subskeleton \(\mathcal{U}^m_\eta\), \[ \bigabs{U^m_\eta + Q^m_{2\rho\eta}} \le 2^m (\eta + 2\rho\eta)^m (\#\mathcal{U}^m_\eta) = \setcounter{cte}{1} C_{\thecte} \eta^m (\#\mathcal{U}^m_\eta).
\] Note that for every cube \(\sigma^m \in \mathcal{U}^m_\eta\), if \(\tau^m \in \mathcal{E}^m_\eta\) intersects \(\sigma^m\), then \(\tau^m + Q_{2\rho\eta}^m \subset \sigma^m + Q^m_{2(1 + \rho)\eta}\). Denoting \(\sigma^m\) by \(Q^m_\eta(a)\), we have \( \tau^m + Q_{2\rho\eta}^m \subset Q^m_{\alpha\eta} (a), \) where \(\alpha = 3 + 2 \rho\), whence \[ \tau^m + Q_{2\rho\eta}^m \subset Q^m_{\alpha\eta} (a) \cap Q_{1 + 2\gamma}^m. \] By the definition of \(\mathcal{E}^m_\eta\), \[ \iota < \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(\tau^m + Q_{2\rho\eta}^m)} \le \frac{C'}{\eta^{\frac{m}{kp} - 1}} \norm{Du}_{L^{kp}(Q^m_{\alpha\eta} (a) \cap Q_{1 + 2\gamma}^m)}. \] Thus, for every \(Q^m_{\eta} (a) \in \mathcal{U}^m_\eta\), \[ 1 < \frac{\refstepcounter{cte} C_{\thecte}}{\eta^{m - kp}} \int\limits_{Q^m_{\alpha\eta} (a) \cap Q_{1 + 2\gamma}^m} \abs{Du}^{kp}. \] Since the cubes \(Q^m_{\alpha\eta} (a)\) intersect each other finitely many times and the number of overlaps depends only on \(\alpha\) and on the dimension \(m\), \[ \#\mathcal{U}^m_\eta \le \frac{C_{\thecte}}{\eta^{m - kp}} \sum_{Q^m_{\eta} (a) \in \mathcal{U}^m_\eta} \int\limits_{Q^m_{\alpha\eta} (a) \cap Q_{1 + 2\gamma}^m} \abs{Du}^{kp} \le \frac{\refstepcounter{cte} C_{\thecte}}{\eta^{m - kp}} \int\limits_{Q^m_{1 + 2 \gamma}} \abs{Du}^{kp}. \] We deduce that \[ \bigabs{U^m_\eta + Q^m_{2\rho\eta}} \le \refstepcounter{cte} C_{\thecte} \eta^m \frac{1}{\eta^{m - kp}} \int\limits_{Q^m_{1 + 2 \gamma}} \abs{Du}^{kp} = C_{\thecte} \eta^{kp} \int\limits_{Q^m_{1 + 2 \gamma}} \abs{Du}^{kp}. \] This means that \[ \limsup_{\eta \to 0}{\frac{\bigabs{U^m_\eta + Q^m_{2\rho\eta}}}{\eta^{kp}}} < \infty. \] Hence, by Lebesgue's dominated convergence theorem, \[ \lim_{\eta \to 0}\norm{D^i u}_{L^{\frac{kp}{i}}(U^m_\eta + Q^m_{2\rho\eta})} = 0.
\] In view of \eqref{equationLpConvergenceTranslates} and the estimate from Part~1, we have \(\lim\limits_{\eta \to 0} \norm{D^j u^\mathrm{th}_\eta - D^j u}_{L^p(Q^m_{1+\gamma})} = 0\). Recall that \(u^\mathrm{th}_\eta = u^\mathrm{sm}_\eta\) in the complement of \(U^m_\eta + Q^m_{\underline\rho\eta}\). Since \(u^\mathrm{sm}_\eta \to u\) in measure and \(\abs{U^m_\eta + Q^m_{\underline\rho\eta }}\to 0\) as \(\eta \to 0\), \(u^\mathrm{th}_\eta \to u\) in measure as \(\eta \to 0\). Hence, \(u^\mathrm{th}_\eta\) converges to \(u\) in \(L^p(Q^m_{1+\gamma})\) and \[ \lim_{\eta \to 0}{\norm{u^\mathrm{th}_\eta - u}_{W^{k, p}(Q_{1+\gamma}^m)}} = 0. \] Therefore, \[ \lim_{\eta \to 0}{\norm{\Pi\circ u^\mathrm{th}_\eta - u}_{W^{k, p}(Q_{1+\gamma}^m)}} = 0. \] This gives the conclusion of this part. \qed \begin{Part} The map \(\Pi \circ u^\mathrm{th}_\eta\) belongs to the class \(R_{\ell^*}(Q_1^m; N^n)\). \end{Part} It suffices to prove the pointwise estimates of \(D^j (\Pi\circ u^\mathrm{th}_\eta)\). Since \(\Pi \circ u^\mathrm{th}_\eta = (\Pi\circ u^\mathrm{sm}_\eta) \circ \Phi^\mathrm{th}\) and the map \(\Pi \circ u^\mathrm{sm}_\eta\) is smooth in \(K^m_\eta\), by the chain rule for higher order derivatives, \begin{align*} \abs{D^j (\Pi\circ u^\mathrm{th}_\eta)} & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \sum_{\substack{1 \le \alpha_1 \le \dotsc \le \alpha_i\\\alpha_1 + \dots + \alpha_i = j}} \abs{D^i(\Pi \circ u^\mathrm{sm}_\eta)} \abs{D^{\alpha_1}\Phi^\mathrm{th}} \dotsm \abs{D^{\alpha_i}\Phi^\mathrm{th}}\\ & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \sum_{\substack{1 \le \alpha_1 \le \dotsc \le \alpha_i\\\alpha_1 + \dots + \alpha_i = j}} \abs{D^{\alpha_1}\Phi^\mathrm{th}} \dotsm \abs{D^{\alpha_i}\Phi^\mathrm{th}}. 
\end{align*} By Proposition~\ref{propositionthickeningfromaskeleton}~\((\ref{itempropositionthickeningfromaskeleton5})\), we have for \(x \in K^m_\eta \setminus T^{\ell^*}_\eta\), \begin{align*} \abs{D^j (\Pi\circ u^\mathrm{th}_\eta)(x)} & \le \refstepcounter{cte} C_{\thecte} \sum_{i = 1}^j \sum_{\substack{1 \le \alpha_1 \le \dotsc \le \alpha_i\\\alpha_1 + \dots + \alpha_i = j}} \frac{\eta}{\big(\dist{(x, T^{\ell^*}_\eta)}\big)^{\alpha_1}} \dotsm \frac{\eta}{\big(\dist{(x, T^{\ell^*}_\eta)}\big)^{\alpha_i}}\\ & \le \frac{\refstepcounter{cte} C_{\thecte}}{\big(\dist{(x, T^{\ell^*}_\eta)}\big)^{j}}. \end{align*} This concludes the proof of the theorem. \qed \section{Tools for the proof of Theorem~\ref{theoremDensityManifoldMain}} \subsection{Continuous extension property} From Theorem~\ref{theoremDensityManifoldNontrivial} we have been able to approximate a map by another map which is smooth except on a dual skeleton of dimension \(\floor{kp}^*\). We would like to modify our approximation near this singular set in order to obtain a smooth map. An important tool will be the following: \begin{proposition} \label{propositionSmoothExtension} Let \(\mathcal{K}^m\) be a skeleton of radius \(\eta > 0\), \(\ell \in \{0, \ldots, m-1 \}\), \(\mathcal{T}^{\ell^*}\) be the dual skeleton of \(\mathcal{K}^\ell\) and let \(u \in C^\infty(K^m \setminus T^{\ell^*}; N^n)\). If there exists \(f \in C^0(K^m; N^n)\) such that \(f|_{K^\ell} = u|_{K^\ell}\), then for every \(0 < \mu < 1\), there exists \(v \in C^\infty(K^m; N^n)\) such that \(v = u\) on \(K^m \setminus (T^{\ell^*} + Q^m_{\mu\eta})\). 
\end{proposition} In the proof of Proposition~\ref{propositionSmoothExtension}, we shall rely on the fact that \(K^\ell\) is a homotopy retract of \(K^m \setminus T^{\ell^*}\), that is, there exists a continuous retraction of \(K^m \setminus T^{\ell^*}\) onto \(K^\ell\) which is homotopic to the identity map in \(K^m \setminus T^{\ell^*}\): \begin{fact} \label{factHomotopyRetraction} There exists a continuous homotopy \(H_\ell : [0, 1] \times (K^m \setminus T^{\ell^*}) \to K^m \setminus T^{\ell^*}\) such that \begin{enumerate}[$(i)$] \item for every \(x \in K^m\setminus T^{\ell^*}\), \(H_\ell(0, x) = x\), \label{item15431} \item for every \(x \in K^m \setminus T^{\ell^*}\), \(H_\ell(1, x) \in K^\ell\), \label{item15432} \item for every \(x \in K^\ell\), \(H_\ell(1, x) = x\). \label{item15433} \end{enumerate} \end{fact} \begin{proof}[Proof of Proposition~\ref{propositionSmoothExtension}] Given \(0 < \underline\delta < \delta < \overline\delta < \mu\), let \(\varphi : K^m \to [0, 1]\) be a continuous function such that \begin{enumerate}[\(-\)] \item for every \(x \in K^m \setminus (T^{\ell^*} + Q_{\overline\delta\eta}^m)\), \(\varphi(x) = 0\), \item for every \(x \in \partial(T^{\ell^*} + Q_{\delta\eta}^m)\), \(\varphi(x) = 1\), \item for every \(x \in T^{\ell^*} + Q_{\underline\delta \eta}^m\), \(\varphi(x) = 0\). \end{enumerate} We define \(w : K^m \to N^n\) by \[ w(x) = \begin{cases} (u \circ H_\ell)(\varphi(x), x) & \text{if \(x \in K^m \setminus (T^{\ell^*} + Q_{\delta\eta}^m)\) },\\ (f \circ H_\ell)(\varphi(x), x) & \text{if \(x \in (T^{\ell^*} + Q_{\delta\eta}^m) \setminus T^{\ell^*}\)},\\ f(x) & \text{if \(x \in T^{\ell^*}\)}. \end{cases} \] By properties~\((\ref{item15431})\) and \((\ref{item15432})\) of Fact~\ref{factHomotopyRetraction}, \(w\) is well-defined and continuous on \(K^m\), and \(w = u\) on \(K^m \setminus (T^{\ell^*} + Q_{\overline\delta\eta}^m)\). Let \(\overline w : {\mathbb R}^m \to {\mathbb R}^\nu\) be a continuous extension of \(w\).
Given a mollifier \(\varphi \in C_c^\infty(B_1^m)\) and \(\iota > 0\), there exists a nonnegative function \(\psi \in C^\infty({\mathbb R}^m)\) such that \begin{enumerate}[\(-\)] \item \(\supp{\psi} \subset T^{\ell^*} + Q_{\mu\eta}^m\), \item \(\psi > 0\) in a neighborhood of \(T^{\ell^*} + Q_{\overline\delta\eta}^m\), \item \(\norm{\varphi_\psi \ast \overline w - \overline w}_{L^\infty({\mathbb R}^m)} \le \iota\). \end{enumerate} Choosing \(\iota\) so small that the nearest point projection \(\Pi\) onto \(N^n\) is well-defined and smooth on a tubular neighborhood of \(N^n\) of radius \(\iota\), the map \(\Pi \circ (\varphi_\psi \ast \overline w)\) restricted to \(K^m\) satisfies all the required properties. \end{proof} The natural question that arises is whether a continuous extension of \(u|_{K^\ell}\) to \(K^m\) exists. This property depends on the skeleton \(\mathcal{K}^m\) and on the manifold \(N^n\). \begin{proposition} \label{propositionContinuousExtensionProperty} Let \(\mathcal{K}^m\) be a skeleton of radius \(\eta > 0\) and \(\ell \in \{0, \ldots, m-1\}\). If \(K^m\) is a cube and if \(\pi_\ell(N^n)=\{0\}\), then for every \(u \in C^{0}(K^\ell; N^n)\) there exists \(f \in C^{0}(K^m; N^n)\) such that \(f|_{K^\ell}=u\). \end{proposition} We will use the fact that it is always possible to find a continuous extension, regardless of \(N^n\), by losing one dimension. This property has been introduced as the \(\ell\) extension property by Hang and Lin \cite{Hang-Lin}*{Definition 2.3}. \begin{proposition} \label{lemmaContinuousExtensionProperty} Let \(\mathcal{K}^m\) be a skeleton of radius \(\eta > 0\) and \(\ell \in \{0, \ldots, m-1\}\). If \(K^m\) is a cube, then for every \(u \in C^{0}(K^{\ell+1}; N^n)\), there exists \(g\in C^{0}(K^m; N^n)\) such that \(g|_{K^{\ell}}= u|_{K^{\ell}}\).
\end{proposition} In the proof of Proposition~\ref{lemmaContinuousExtensionProperty}, we shall use the fact that if \(K^m\) is a cube, then the identity map on \(K^{\ell}\) is homotopic to a constant with respect to \(K^{\ell+1}\): \begin{fact} \label{factHomotopyConstant} If \(K^m\) is a cube, then there exists a continuous homotopy \(G_\ell : [0, 1] \times K^{\ell} \to K^{\ell+1}\) such that \begin{enumerate}[$(i)$] \item for every \(x \in K^{\ell}\), \(G_\ell(0, x) = x\), \item there exists \(a \in K^{\ell}\) such that for every \(x \in K^{\ell}\), \(G_\ell(1, x) = a\). \end{enumerate} \end{fact} \begin{proof}[Proof of Proposition~\ref{lemmaContinuousExtensionProperty}] Let \(\varphi : K^m \to [0, 1]\) be a continuous function such that \begin{enumerate}[\(-\)] \item for every \(x \in K^{\ell}\), \(\varphi(x) = 0\), \item for every \(x \in T^{\ell^*}\), \(\varphi(x) = 1\). \end{enumerate} We define \(g : K^m \to N^n\) by \[ g(x) = \begin{cases} u \bigl(G_{\ell}(\varphi(x), H_{\ell}(1, x))\bigr) & \text{if \(x \in K^m \setminus T^{\ell^*}\)},\\ u(a) & \text{if \(x \in T^{\ell^*}\)}, \end{cases} \] where \(H_{\ell} : [0, 1] \times (K^m \setminus T^{\ell^*}) \to K^m \setminus T^{\ell^*}\) is the homotopy retraction of Fact~\ref{factHomotopyRetraction}. The map \(g\) is continuous; indeed, as \(x\) approaches \(T^{\ell^*}\), \(\varphi(x) \to 1\) and, by uniform continuity of \(G_\ell\), \(g(x) \to u(a)\). By property~\((\ref{item15433})\) of Fact~\ref{factHomotopyRetraction} and property \((i)\) of Fact~\ref{factHomotopyConstant}, we have for every \(x \in K^{\ell}\), \(g(x) = u(x)\). \end{proof} \begin{proof}[Proof of Proposition~\ref{propositionContinuousExtensionProperty}] Let \(u \in C^{0}(K^\ell; N^n)\). Since \(\pi_\ell(N^n)= \{0\}\), for every \(\sigma^{\ell + 1} \in \mathcal{K}^{\ell+1}\), the restriction \(u|_{\partial\sigma^{\ell + 1}}\) has a continuous extension \(u_{\sigma^{\ell + 1}}\) to \(\sigma^{\ell + 1}\). Let \(v : K^{\ell + 1} \to N^n\) be the map defined for every \(x \in K^{\ell + 1}\) by \(v(x) = u_{\sigma^{\ell + 1}}(x)\), where \(\sigma^{\ell + 1} \in \mathcal{K}^{\ell + 1}\) is such that \(x \in \sigma^{\ell + 1}\).
The map \(v\) is well-defined and continuous; moreover, \(v|_{K^{\ell}} = u\). By Proposition~\ref{lemmaContinuousExtensionProperty} applied to \(v\), there exists \(f : K^m \to N^n\) such that \(f|_{K^{\ell}} = v|_{K^{\ell}}\); hence \(f\) is a continuous extension of \(u\) to \(K^m\). \end{proof} \subsection{Shrinking} Given a map \(u \in W^{k, p} (K^m; {\mathbb R}^\nu)\) whose energy is controlled outside a neighborhood of the dual skeleton \(T^{\ell^*}\), we are going to construct for every \(\tau > 0\) a map \(u \circ \Phi\) whose energy will be controlled on the whole \(K^m\) when \(\tau\) is small enough. This shrinking construction is very similar to the thickening construction. In both cases, the dimension of the dual skeleton \(T^{\ell^*}\) must satisfy \(\ell^* < m - kp\), or equivalently \(\ell + 1 > kp\), since \(\ell^* = m - \ell - 1\). The main differences are that shrinking only acts in a neighborhood of the dual skeleton \(T^{\ell^*}\) and does not create singularities. Shrinking can be thought of as a desingularized thickening and requires more careful estimates. As for thickening, we begin by constructing the diffeomorphism \(\Phi\) independently of \(u\): \begin{proposition} \label{lemmaThickeningFaceNearDualSkeletonGlobal} Let \(\ell \in \{0, \dotsc, m-1\}\), \(\eta > 0\), \(0 < \mu <\frac{1}{2}\), \(0 < \tau < \frac{1}{2}\), \(\mathcal{S}^m\) be a cubication of \({\mathbb R}^m\) of radius \(\eta\) and \(\mathcal{T}^{\ell^*}\) be the dual skeleton of \(\mathcal{S}^\ell\).
There exists a smooth map \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) such that \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \label{itempropositionshrinkingfromaskeleton15} \item for every \(\sigma^m \in \mathcal{S}^m\), \(\Phi(\sigma^m)\subset \sigma^m\), \label{itempropositionshrinkingfromaskeleton2} \item \(\Supp{\Phi}\subset T^{\ell^*} + Q^m_{2\mu\eta}\) and \(\Phi\big(T^{\ell^*} + Q^m_{\tau\mu\eta}\big) \supset T^{\ell^*} + Q^m_{\mu\eta}\), \label{itempropositionshrinkingfromaskeleton1} \item for every \(0 < \beta < \ell + 1\), for every \(j \in {\mathbb N}_*\) and for every \(x\in {\mathbb R}^m\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C > 0\) depending on \(\beta\), \(j\) and \(m\), \label{itempropositionshrinkingfromaskeleton5} \item for every \(0 < \beta < \ell + 1\), for every \(j \in {\mathbb N}_*\) and for every \(x\in \Phi^{-1}(T^{\ell^*} + Q^m_{\mu\eta})\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C' \tau^{j(\frac{\ell + 1}{\beta} - 1)} \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\) and \(m\). \label{itempropositionshrinkingfromaskeleton4} \end{enumerate} \end{proposition} As a consequence of the estimates of Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal}, we have the following \(W^{k, p}\) estimates that will be applied in the proof of Theorem~\ref{theoremDensityManifoldMain} with \(\ell = \floor{kp}\). \begin{corollary} \label{corollaryShrinkingWkpEstimate} Let \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) be the map given by Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal} and let \(\mathcal{K}^m\) be a subskeleton of \(\mathcal{S}^m\). 
If \(\ell+1 > kp\), then for every \(u \in W^{k, p}(K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta}); {\mathbb R}^\nu)\), \(u \circ \Phi \in W^{k, p}(K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta}); {\mathbb R}^\nu)\) and for every \(j \in \{1, \dotsc, k\}\), \begin{multline*} (\mu \eta)^{j} \norm{D^j(u \circ \Phi)}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}))}\\ \begin{aligned} &\leq C'' \sum_{i=1}^j (\mu\eta)^{i} \norm{D^i u}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta}))}\\ & \qquad + C'' \tau^{\frac{\ell+1-kp}{p}}\sum_{i=1}^j (\mu\eta)^{i} \norm{D^i u}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))}, \end{aligned} \end{multline*} for some constant \(C'' > 0\) depending on \(m\), \(k\) and \(p\). \end{corollary} \begin{proof} We first establish the estimate for a map \(u\) in \(C^{\infty}(K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta}) ; {\mathbb R}^{\nu})\). By the chain rule for higher-order derivatives, for every \(j \in \{1, \dotsc, k\}\) and for every \(x \in K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta})\), \[ \abs{D^j (u \circ \Phi) (x)}^p \le \setcounter{cte}{1} C_{\thecte} \sum_{i=1}^j \sum_{\substack{1 \le t_1 \le \dotsc \le t_i\\ t_1 + \dotsb + t_i = j}} \abs{D^i u(\Phi(x))}^p \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i} \Phi(x)}^p. \] As in the proof of Corollary~\ref{corollaryEstimateThickening}, by property~\((\ref{itempropositionshrinkingfromaskeleton5})\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal} with \(\beta = jp\), if \(1 \le t_1 \le \dotsc \le t_i\) and \(t_1 + \dotsb + t_i = j \), then for every \(x \in K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta})\), \[ \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i}\Phi(x)}^p \le \refstepcounter{cte} C_{\thecte} \frac{\jac{\Phi}(x)}{(\mu\eta)^{(j-i)p}} \] and this implies \[ (\mu\eta)^{jp}\abs{D^j (u \circ \Phi) (x)}^p \le \refstepcounter{cte} C_{\thecte} \sum_{i=1}^j (\mu\eta)^{ip}\abs{D^i u(\Phi(x))}^p \jac{\Phi}(x). \] Let \(\sigma^m \in \mathcal{K}^m\).
Since \(\Phi\) is injective, by the change of variable formula, \begin{multline*} \int\limits_{\Phi^{-1}(\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta}))} (\mu\eta)^{jp}\abs{D^j (u \circ \Phi)}^p\\ \begin{aligned} & \le C_{\thecte} \sum_{i=1}^j \int\limits_{\Phi^{-1}(\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta}))} (\mu\eta)^{ip}\abs{(D^i u) \circ \Phi}^p \jac{\Phi}\\ &\le C_{\thecte} \sum_{i=1}^j \int\limits_{\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta})} (\mu\eta)^{ip}\abs{D^i u}^p. \end{aligned} \end{multline*} Let \(0 < \beta < \ell + 1\). If \(1 \le t_1 \le \dotsc \le t_i\) and \(t_1 + \dotsb + t_i = j \), then by property~\((\ref{itempropositionshrinkingfromaskeleton4})\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal} we have for every \(x \in \Phi^{-1}(K^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))\), \begin{multline*} \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i}\Phi(x)}^p \\ \begin{aligned} & \le \refstepcounter{cte} C_{\thecte} \tau^{t_1 p(\frac{\ell + 1}{\beta} - 1)} \frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{t_1 p}{\beta}}{(\mu\eta)^{(t_1 - 1)p}} \dotsm \tau^{t_i p(\frac{\ell + 1}{\beta} - 1)} \frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{t_i p}{\beta}}{{(\mu\eta)^{(t_i - 1)p}}}\\ & = C_{\thecte} \tau^{jp(\frac{\ell + 1}{\beta} - 1)}\frac{\bigl(\jac{\Phi}(x)\bigr)^\frac{jp}{\beta}}{(\mu\eta)^{(j-i)p}}. \end{aligned} \end{multline*} Taking \(\beta = jp\), which is allowed since \(jp \le kp < \ell + 1\), we have \[ \abs{D^{t_1} \Phi(x)}^p \dotsm \abs{D^{t_i}\Phi(x)}^p \le C_{\thecte} \tau^{\ell + 1 - jp} \frac{\jac{\Phi}(x)}{(\mu\eta)^{(j-i)p}} \] and this implies \[ (\mu\eta)^{jp}\abs{D^j (u \circ \Phi) (x)}^p \le \refstepcounter{cte} C_{\thecte} \tau^{\ell + 1 - jp} \sum_{i=1}^j (\mu\eta)^{ip}\abs{D^i u(\Phi(x))}^p \jac{\Phi}(x).
\] Since \(\Phi\) is injective, by the change of variable formula, \begin{multline*} \int\limits_{\Phi^{-1}(\sigma^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))} (\mu\eta)^{jp}\abs{D^j (u \circ \Phi)}^p \\ \begin{aligned} & \le C_{\thecte} \tau^{\ell + 1 - jp} \sum_{i=1}^j \int\limits_{\Phi^{-1}(\sigma^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))} (\mu\eta)^{ip}\abs{(D^i u) \circ \Phi}^p \jac{\Phi}\\ & = C_{\thecte} \tau^{\ell + 1 - jp} \sum_{i=1}^j \int\limits_{\sigma^m \cap (T^{\ell^*} + Q^m_{\mu\eta})} (\mu\eta)^{ip} \abs{D^i u}^p. \end{aligned} \end{multline*} Since \(\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \subset \Phi^{-1}\big(\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta})\big)\) (indeed, \(\Phi(\sigma^m) \subset \sigma^m\), and since \(\Phi\) is injective and coincides with the identity outside \(T^{\ell^*} + Q^m_{2\mu\eta}\), it maps \(T^{\ell^*} + Q^m_{2\mu\eta}\) into itself), by additivity of the integral we then have \begin{multline*} \int\limits_{\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta})} (\mu\eta)^{jp}\abs{D^j (u \circ \Phi)}^p\\ \le C_3 \sum_{i=1}^j \int\limits_{\sigma^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta})} (\mu\eta)^{ip} \abs{D^i u}^p\\ + C_{\thecte} \tau^{\ell + 1 - jp} \sum_{i=1}^j \int\limits_{\sigma^m \cap (T^{\ell^*} + Q^m_{\mu\eta})} (\mu\eta)^{ip} \abs{D^i u}^p. \end{multline*} Summing over all faces \(\sigma^m \in \mathcal{K}^m\), we deduce the estimate for smooth maps. By density of smooth maps in \(W^{k, p}(K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta}); {\mathbb R}^\nu)\), we deduce that for every \(u\) in \(W^{k, p}(K^m \cap(T^{\ell^*} + Q^m_{2\mu\eta}); {\mathbb R}^\nu)\), the function \(u \circ \Phi\) also belongs to this space and satisfies the estimate above. \end{proof} We first describe the construction of the map \(\Phi\) in the case of a single \(\ell\)-dimensional cube. \begin{proposition} \label{lemmaThickeningFaceNearDualSkeleton} Let \(\ell \in \{1, \dotsc, m\}\), \(\eta > 0\), \(0 < \underline{\mu} < \mu < \overline{\mu} < 1\) and \(0 <\tau < \underline\mu/\mu\).
There exists a smooth function \(\lambda : {\mathbb R}^m\to [1,\infty)\) such that if \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell} \) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \label{itemlemmaThickeningFaceNearDualSkeletonidentity0} \item \(\Supp{\Phi}\subset Q^\ell_{\mu\eta} \times Q^{m-\ell}_{(1-\mu)\eta}\), \label{itemlemmaThickeningFaceNearDualSkeletonidentity} \item \(\Phi\big(Q^\ell_{\tau\mu\eta} \times Q^{m-\ell}_{(1-\overline{\mu})\eta} \big) \supset Q^\ell_{\underline{\mu}\eta} \times Q^{m-\ell}_{(1-\overline{\mu})\eta}\), \label{itemlemmaThickeningFaceNearDualSkeletonidentity1} \item for every \(0 < \beta < \ell\), for every \(j \in {\mathbb N}_*\) and for every \(x\in {\mathbb R}^m\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C > 0\) depending on \(\beta\), \(j\), \(m\), \(\mu/\underline{\mu}\) and \(\overline{\mu}/ \mu\), \label{itemlemmaThickeningFaceNearDualSkeletonidentity2} \item for every \(\beta > 0\), for every \(j \in {\mathbb N}_*\) and for every \(x\in Q^{\ell}_{\tau\mu\eta}\times Q^{m-\ell}_{(1-\overline\mu)\eta}\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C' \tau^{j(\frac{\ell}{\beta} - 1)} \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\), \(\mu/\underline{\mu}\) and \(\overline{\mu}/ \mu\). \label{itemlemmaThickeningFaceNearDualSkeletonidentity3} \end{enumerate} \end{proposition} We postpone the proof of Proposition~\ref{lemmaThickeningFaceNearDualSkeleton} and we proceed to establish Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal}. 
\begin{proof}[Proof of Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal}] We first introduce finite sequences \( (\mu_i)_{\ell\leq i \leq m}\) and \((\nu_i)_{\ell+1\leq i \leq m}\) such that \[ 0<\mu_\ell=\mu<\nu_{\ell+1}<\mu_{\ell+1}<\dotsc<\mu_{m-1}<\nu_m<\mu_m\leq 2\mu. \] Let \(\Phi_m = \mathrm{Id}\). Using downward induction, we shall define maps \(\Phi_i:{\mathbb R}^m\to {\mathbb R}^m\) for \(i \in \{\ell, \dotsc, m-1\}\) such that \(\Phi_i\) satisfies the following properties: \begin{enumerate}[(a)] \item \(\Phi_i\) is injective, \label{item1281} \item for every \(\sigma^m \in \mathcal{S}^m\), \(\Phi_i(\sigma^m) \subset \sigma^m\), \label{item1282} \item \(\Supp{\Phi_i}\subset T^{i^*} + Q^m_{2 \mu\eta}\), \item for every \(r \in \{i^*, \dots, m - 1\}\), \(\Phi_i \big(T^{r} + Q^m_{\tau\mu\eta} \big) \supset T^{r} + Q^m_{\tau\mu\eta}\), \label{item1283} \item \(\Phi_i \big(T^{i^*} + Q^m_{\tau\mu\eta} \big) \supset T^{i^*} + Q^m_{\mu_i\eta}\), \label{item1284} \item for every \(0 < \beta < i+1\), for every \(j \in {\mathbb N}_*\) and for every \(x \in {\mathbb R}^m\), \[ (\mu\eta)^{j-1} \abs{D^j \Phi_i(x)} \le C \bigl(\jac{\Phi_i}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C > 0\) depending on \(\beta\), \(j\), and \(m\), \label{item1285} \item for every \(0 < \beta < i+1\), for every \(j \in {\mathbb N}_*\) and for every \(x\in \Phi_i^{-1}(T^{i^*} + Q^m_{\mu_i\eta})\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi_i(x)} \le C' \tau^{j(\frac{i+1}{\beta} - 1)} \bigl(\jac{\Phi_i}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), and \(m\). \label{item1286} \end{enumerate} The map \(\Phi_\ell\) will satisfy the conclusion of the proposition. \medskip Let \( i \in \{\ell+1, \dotsc, m\}\) and let \(\Theta_{i}\) be the map obtained from Proposition~\ref{lemmaThickeningFaceNearDualSkeleton} where \(\ell\), \(\underline\mu\), \(\mu\), \(\overline\mu\) and \(\tau\) are replaced by \(i\), \(\mu_{i - 1}\), \(\nu_{i}\), \(\mu_{i}\) and \(\frac{\tau\mu}{\nu_i}\), respectively.
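These parameters are admissible for Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}: by the choice of the sequences,
\[
0 < \mu_{i-1} < \nu_i < \mu_i \le 2\mu < 1,
\]
and, since \(\tau < 1\) and \(\mu = \mu_\ell \le \mu_{i-1}\),
\[
\frac{\tau\mu}{\nu_i} < \frac{\mu_{i-1}}{\nu_i}.
\]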
Given \(\sigma^i \in \mathcal{S}^{i}\), we may identify \(\sigma^i\) with \(Q^{i}_{\eta} \times \{0^{m-i}\}\) and \(T^{(i-1)^*} \cap (\sigma^i + Q_{2\mu\eta}^m)\) with \(\{0^{i}\} \times Q_{2\mu\eta}^{m-i}\). The map \(\Theta_i\) induces by isometry a map which we shall denote by \(\Theta_{\sigma^i}\). Let \(\Psi_i : {\mathbb R}^m \to {\mathbb R}^m\) be defined for every \(x \in {\mathbb R}^m\) by \[ \Psi_i(x):=\begin{cases} \Theta_{\sigma^i}(x) & \text{if } x \in \sigma^i + Q^{m}_{(1-\nu_i)\eta} \text{ for some } \sigma^i \in \mathcal{S}^i,\\ x&\text{otherwise}. \end{cases} \] We explain why \(\Psi_i\) is well-defined. Since \(\Theta_{\sigma^i}\) coincides with the identity map on \(\partial\sigma^i + Q^m_{(1-\nu_{i})\eta}\), for every \(\sigma^i_1, \sigma^i_2 \in \mathcal{S}^{i}\), if \(x \in (\sigma_1^i + Q^{m}_{(1-\nu_{i})\eta}) \cap (\sigma_2^i + Q^{m}_{(1-\nu_{i})\eta})\) and \(\sigma_1^i \ne \sigma_2^i\), then \[ \Theta_{\sigma_1^i}(x) = x = \Theta_{\sigma_2^i}(x). \] One also verifies that \(\Psi_i\) is smooth. Assuming that \(\Phi_i\) has been defined satisfying properties \eqref{item1281}--\eqref{item1286}, we let \[ \Phi_{i-1}=\Psi_i \circ \Phi_i. \] We check that \(\Phi_{i-1}\) satisfies all required properties. Up to an exchange of coordinates, for every \(\sigma^i \in \mathcal{S}^i\), we may assume that \(\sigma^i = Q_\eta^i \times \{0^{m-i}\}\) and \(\Theta_{\sigma^i}\) can be written as \(\Theta_{\sigma^i}(x) = (\lambda(x)x', x'')\), with \(\lambda(x) \ge 1\). Hence, for every \(0 < s \le 1\) and every \(r\in \{0, \dotsc, m-1\}\), \begin{equation} \label{equation1218} \Psi_i(T^r + Q_{s\eta}^m) \supset T^r + Q_{s\eta}^m. \end{equation} Moreover, in the new coordinates, the set \begin{equation} \label{equation1627} (\sigma^i \times Q^{m-i}_{\eta}) \cap \big( (T^{(i-1)^*} + Q^m_{\tau\mu\eta}) \setminus (T^{i^*} + Q^m_{\mu_i\eta}) \big) \end{equation} becomes \begin{equation} \label{equation1628} Q^i_{\tau\mu\eta} \times Q^{m-i}_{(1 - \mu_i)\eta}.
\end{equation} In view of properties~\((\ref{itemlemmaThickeningFaceNearDualSkeletonidentity0})\) and~\((\ref{itemlemmaThickeningFaceNearDualSkeletonidentity1})\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}, \[ \Theta_{\sigma^i}(Q^i_{\tau\mu\eta} \times Q^{m-i}_{(1 - \mu_i)\eta}) \supset Q^i_{\mu_{i-1}\eta} \times Q^{m-i}_{(1 - \mu_i)\eta}. \] Since this property holds for every \(\sigma^i \in \mathcal{S}^i\), \begin{equation} \label{equation1219} \Psi_i\big( (T^{(i-1)^*} + Q^m_{\tau\mu\eta}) \setminus (T^{i^*} + Q^m_{\mu_i\eta}) \big) \supset (T^{(i-1)^*} + Q^m_{\mu_{i-1}\eta}) \setminus (T^{i^*} + Q^m_{\mu_i\eta}) . \end{equation} \begin{proof}[Proof of Property \eqref{item1283}] Let \(r \in \{(i-1)^*, \dots, m-1\}\). By induction hypothesis and by equation~\eqref{equation1218} with \(s = \tau\mu\), \[ \Phi_{i-1}(T^r + Q^m_{\tau\mu\eta}) \supset \Psi_i(T^r + Q^m_{\tau\mu\eta}) \supset T^r + Q^m_{\tau\mu\eta}. \qedhere \] \end{proof} \begin{proof}[Proof of Property \eqref{item1284}] By induction hypothesis (properties~\eqref{item1283} and \eqref{item1284}), \[ \Phi_{i}(T^{(i-1)^*} + Q^m_{\tau\mu\eta}) \supset (T^{(i-1)^*} + Q^m_{\tau\mu\eta})\cup (T^{i^*} + Q^m_{\mu_i\eta}). \] Thus, \[ \Phi_{i-1}(T^{(i-1)^*} + Q^m_{\tau\mu\eta}) \supset \Psi_i (T^{(i-1)^*} + Q^m_{\tau\mu\eta}) \cup \Psi_i (T^{i^*} + Q^m_{\mu_i\eta}). \] By inclusion~\eqref{equation1219} and by inclusion~\eqref{equation1218} with \(r = i^*\) and \(s = \mu_i\), \[ \begin{split} \Phi_{i-1}(T^{(i-1)^*} + Q^m_{\tau\mu\eta}) & \supset \big((T^{(i-1)^*} + Q^m_{\mu_{i-1}\eta}) \setminus (T^{i^*} + Q^m_{\mu_i\eta})\big) \cup \big(T^{i^*} + Q^m_{\mu_i\eta}\big) \\ & = T^{(i-1)^*} + Q^m_{\mu_{i-1}\eta}. \end{split} \] This gives the conclusion. \end{proof} \begin{proof}[Proof of Property \eqref{item1286}] Let \(j \in {\mathbb N}_*\) and \(0 < \beta < i\). 
By the chain rule for higher order derivatives, we have for every \(x \in {\mathbb R}^m\), \[ \abs{D^j\Phi_{i-1}(x)} \le C_1 \sum_{r=1}^j \sum_{\substack{1 \le t_1 \le \dotsc \le t_r\\ t_1 + \dotsb + t_r = j}}\abs{D^r \Psi_i(\Phi_i(x))}\, \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r} \Phi_i(x)}. \] Let \(x \in \Phi_{i-1}^{-1}(T^{(i-1)^*} + Q^{m}_{\mu_{i-1}\eta})\). By induction hypothesis (property~\eqref{item1285}), for every \(r \in \{1, \dots, j\}\), if \(1 \le t_1 \le \ldots \le t_r\) and \(t_1 + \dotsb + t_r = j \), then \[ \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r}\Phi_i(x)} \le C_2 \frac{(\jac{\Phi_i}(x))^\frac{j}{\beta}}{(\mu\eta)^{j-r}}. \] If in addition \(x \in \Phi_{i-1}^{-1}\big((T^{(i-1)^*} + Q^{m}_{\mu_{i-1}\eta}) \setminus (T^{i^*} + Q^{m}_{\mu_{i}\eta})\big)\), then \(\Phi_i(x) \in \Psi_{i}^{-1}\big((T^{(i-1)^*} + Q^{m}_{\mu_{i-1}\eta}) \setminus (T^{i^*} + Q^{m}_{\mu_{i}\eta})\big)\). By the correspondence between the sets given by \eqref{equation1627} and \eqref{equation1628}, by inclusion~\eqref{equation1219}, and by property~\((\ref{itemlemmaThickeningFaceNearDualSkeletonidentity3})\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}, we have for every \(0 < \alpha < i\), \[ \abs{D^{r} \Psi_i(\Phi_i(x))} \le C_3 \tau^{r(\frac{i}{\alpha} - 1)} \frac{\big(\jac{\Psi_i}(\Phi_i(x))\big)^\frac{r}{\alpha}}{(\mu\eta)^{r-1}}. \] Take \(\alpha = \beta\frac{r}{j}\). Since \(r \le j\) and \(\tau \le 1\), we get \[ \abs{D^{r} \Psi_i(\Phi_i(x))} \le C_3 \tau^{r(\frac{ij}{\beta r} - 1)} \frac{\big(\jac{\Psi_i}(\Phi_i(x))\big)^\frac{j}{\beta}}{(\mu\eta)^{r-1}} \le C_3 \tau^{j(\frac{i}{\beta} - 1)} \frac{\big(\jac{\Psi_i}(\Phi_i(x))\big)^\frac{j}{\beta}}{(\mu\eta)^{r-1}}. 
\] Thus, for every \(x \in \Phi_{i-1}^{-1}\big((T^{(i-1)^*} + Q^{m}_{\mu_{i-1}\eta}) \setminus (T^{i^*} + Q^{m}_{\mu_{i}\eta})\big)\), \[ \begin{split} \abs{D^j\Phi_{i-1}(x)} & \le C_4 \tau^{j(\frac{i}{\beta} - 1)} \frac{\big(\jac{\Psi_i}(\Phi_i(x))\big)^\frac{j}{\beta}}{(\mu\eta)^{r-1}} \frac{(\jac{\Phi_i}(x))^\frac{j}{\beta}}{(\mu\eta)^{j-r}}\\ & = C_4 \tau^{j(\frac{i}{\beta} - 1)} \frac{\bigl(\jac{\Phi_{i-1}}(x)\bigr)^\frac{j}{\beta}}{(\mu\eta)^{j-1}}. \end{split} \] On the other hand, if \(x \in \Phi_{i-1}^{-1}(T^{i^*} + Q^{m}_{\mu_{i}\eta})\), then \(\Phi_i(x) \in \Psi_{i}^{-1}(T^{i^*} + Q^{m}_{\mu_{i}\eta})\). By inclusion \eqref{equation1218} with \(r = i^*\) and \(s = \mu_i\), and since \(\Psi_i\) is injective, \(\Phi_i(x) \in T^{i^*} + Q^{m}_{\mu_{i}\eta}\). By induction hypothesis (property \eqref{item1286}), we deduce that for every \(r \in \{1, \dots, j\}\), if \(1 \le t_1 \le \ldots \le t_r\) and \(t_1 + \dotsb + t_r = j \), then \[ \abs{D^{t_1} \Phi_i(x)} \dotsm \abs{D^{t_r}\Phi_i(x)} \le C_5 \tau^{j(\frac{i}{\beta} - 1)} \frac{(\jac{\Phi_i}(x))^\frac{j}{\beta}}{(\mu\eta)^{j-r}}. \] By property~\((\ref{itemlemmaThickeningFaceNearDualSkeletonidentity2})\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}, \[ \abs{D^r\Psi_{i}(\Phi_i(x))} \le C_6 \frac{(\jac{\Psi_i}(\Phi_i(x)))^\frac{j}{\beta}}{(\mu\eta)^{r-1}}. \] We deduce as above that \[ \abs{D^j\Phi_{i-1}(x)} \le C_7 \tau^{j(\frac{i}{\beta} - 1)} \frac{(\jac{\Phi_{i-1}}(x))^\frac{j}{\beta}}{(\mu\eta)^{j-1}}. \] This gives the conclusion. \end{proof} The other properties can be checked as in the proof of Proposition~\ref{propositionthickeningfromaskeleton}. By downward induction, we conclude that properties \eqref{item1281}--\eqref{item1286} hold for every \(i \in \{\ell, \dots, m-1\}\). In particular, the map \(\Phi_\ell\) satisfies properties \((i)\)--\((v)\) of Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal}.
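Throughout the proof of property~\eqref{item1286}, we have also used the multiplicativity of the Jacobian under composition: for every \(x \in {\mathbb R}^m\),
\[
\jac{\Phi_{i-1}}(x) = \jac{\Psi_i}(\Phi_i(x)) \, \jac{\Phi_i}(x),
\]
which follows from the chain rule \(D\Phi_{i-1}(x) = D\Psi_i(\Phi_i(x)) \circ D\Phi_i(x)\).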
\end{proof} We need a couple of lemmas in order to prove Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}: \begin{lemma} \label{lemmaThickeningAroundPrimalSqueletonmu} Let \(\ell \in \{1, \dotsc, m\}\), \(\eta > 0\), \(0<\underline{\mu}<\mu<\overline{\mu}<1\) and \(0 < \kappa < \underline{\mu}/\mu\). There exists a smooth function \(\lambda : {\mathbb R}^m \to [1,\infty)\) such that if \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell} \) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is a diffeomorphism, \item \(\Supp{\Phi}\subset Q^\ell_{\mu\eta} \times Q^{m-\ell}_{(1-\mu)\eta}\), \item \(\Phi\bigl( Q^\ell_{\kappa\mu\eta} \times Q^{m-\ell}_{(1-\overline{\mu})\eta } \bigr) \supset Q^\ell_{\underline{\mu} \eta} \times Q^{m-\ell}_{(1-\overline{\mu})\eta }\), \item for every \(j\in {\mathbb N}_{*}\) and for every \(x\in {\mathbb R}^m\), \begin{equation*} (\mu\eta)^{j-1}\abs{D^j\Phi(x)}\leq C, \end{equation*} for some constant \(C > 0\) depending on \(j\), \(m\), \(\mu / \underline{\mu}\), \(\overline{\mu}/\mu\) and \(\kappa\), \item for every \(x\in {\mathbb R}^m\), \[ C' \le \jac \Phi(x) \le C'', \] for some constants \(C', C'' > 0\) depending on \(m\), \(\mu / \underline{\mu}\), \(\overline{\mu}/\mu\) and \(\kappa\). \end{enumerate} \end{lemma} \begin{proof} By scaling, we may assume that \(\mu\eta=1\). Let \(\psi : {\mathbb R} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item the function \(\psi\) is nonincreasing on \({\mathbb R}_+\) and nondecreasing on \({\mathbb R}_-\), \item for \(\abs{t} \le \underline{\mu}/\mu\), \(\psi(t)=1\), \item for \(\abs{t} \ge 1\), \(\psi(t)=0\).
\end{enumerate} Let \(\theta : {\mathbb R} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(\abs{t}\le \frac{1-\overline{\mu}}{\mu}\), \(\theta(t)=1\), \item for \(\abs{t} \ge \frac{1-\mu}{\mu}\), \(\theta(t)=0\). \end{enumerate} Since \(\frac{1-\mu}{\mu}-\frac{1-\overline{\mu}}{\mu}= \overline{\mu}/ \mu - 1\), we may require that for every \(j\in {\mathbb N}_*\) and for every \(t\geq 0\), \(\abs{D^j \theta (t)}\leq C\), for some constant \(C > 0\) depending only on \(j\) and \(\overline{\mu}/\mu\). Let \(\varphi : {\mathbb R}^m \to {\mathbb R}\) be the function defined for \(x=(x_1, \dotsc, x_m) \in {\mathbb R}^m\) by \[ \textstyle \varphi(x) = \prod\limits_{i=1}^\ell \psi(x_i) \prod\limits_{i=\ell+1}^m \theta (x_i). \] Let \(\Psi : {\mathbb R}^m \to {\mathbb R}^m\) be the function defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) by \[ \Psi(x)= \big((1 - \alpha \varphi(x))x', x''\big), \] where \(\alpha \in {\mathbb R}\). In particular, for every \(x = (x', x'') \in Q^\ell_{\underline\mu/\mu} \times Q^{m - \ell}_{\frac{1 - \overline\mu}{\mu}}\), \(\Psi(x) = ((1 - \alpha) x', x'')\). Taking \(\alpha=1-\frac{\kappa\mu}{\underline{\mu}}\), so that \(1 - \alpha = \frac{\kappa\mu}{\underline{\mu}}\), we deduce that \(\Psi\) is a bijection between \(Q^\ell_{\underline\mu/\mu} \times Q^{m-\ell}_{\frac{1 - \overline\mu}{\mu}}\) and \(Q^\ell_{\kappa} \times Q^{m-\ell}_{\frac{1 - \overline\mu}{\mu}}\). As in Lemma~\ref{lemmaThickeningAroundPrimalSqueleton} we can prove that \(\Phi=\Psi^{-1}\) satisfies the required properties. \end{proof} \begin{lemma} \label{lemmaThickeningAroundDualSqueletonmu} Let \(\ell \in \{1,\dotsc, m\}\), \(\eta > 0\), \(0<\underline{\mu}<\mu<\overline{\mu}<1\) and \(0 < \tau < \underline{\mu}/\mu\).
There exists a smooth function \(\lambda : {\mathbb R}^m \to [1, \infty)\) such that if \(\Phi : {\mathbb R}^m \to {\mathbb R}^m\) is defined for \(x = (x', x'') \in {\mathbb R}^\ell \times {\mathbb R}^{m - \ell}\) by \[ \Phi(x) = (\lambda(x)x', x''), \] then \begin{enumerate}[$(i)$] \item \(\Phi\) is injective, \item \(\Supp{\Phi}\subset Q^\ell_{\mu\eta} \times Q^{m-\ell}_{(1-\mu)\eta}\), \item \(\Phi(B^{\ell}_{\tau\mu\eta}\times Q^{m-\ell}_{(1-\overline{\mu})\eta})\supset B^{\ell}_{\underline{\mu} \eta} \times Q^{m-\ell}_{(1-\overline{\mu})\eta} \), \label{item2169} \item for every \(0 < \beta < \ell\), for every \(j \in {\mathbb N}_*\) and for every \(x\in {\mathbb R}^m\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C \bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C > 0\) depending on \(\beta\), \(j\), \(m\), \(\mu/\underline{\mu}\) and \(\overline{\mu}/\mu\), \item for every \(\beta > 0\), for every \(j \in {\mathbb N}_*\) and for every \(x\in B^{\ell}_{\tau\mu\eta}\times Q^{m-\ell}_{(1-\overline\mu)\eta}\), \[ (\mu\eta)^{j-1}\abs{D^j \Phi(x)} \le C' \tau^{j(\frac{\ell}{\beta}- 1)}\bigl(\jac{\Phi}(x)\bigr)^\frac{j}{\beta}, \] for some constant \(C' > 0\) depending on \(\beta\), \(j\), \(m\), \(\mu/\underline{\mu}\) and \(\overline{\mu}/\mu\). \label{item1132} \end{enumerate} \end{lemma} \begin{proof} By scaling, we may assume that \(\mu\eta = 1\). Given \(\varepsilon > 0\) and \(b > 0\), let \(\varphi : (0, \infty) \to [1, \infty)\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(0 < s \le \tau \sqrt{1 + \varepsilon}\), \(\varphi(s)= \dfrac{\underline{\mu}/\mu}{s} \sqrt{1 + \varepsilon} \, \Bigl(1+\frac{b}{\ln \frac{1}{s}}\Bigr)\), \item for \(s \ge 1\), \(\varphi(s)=1\), \item the function \(s \in (0, \infty) \mapsto s\varphi(s)\) is increasing. 
\end{enumerate} Note that such a function \(\varphi\) exists if we take \(\varepsilon > 0\) such that \[ (\underline{\mu}/\mu) \sqrt{1 + \varepsilon} < 1 \] and thus \(\tau \sqrt{1 + \varepsilon} < 1\) and if we take \(b > 0\) such that \[ (\underline{\mu}/{\mu}) \sqrt{1 + \varepsilon} \, \Bigl(1+\frac{b}{\ln \frac{1}{(\underline{\mu}/{\mu})\sqrt{1 + \varepsilon}}}\Bigr) < 1. \] Let \(\theta : {\mathbb R}^{m-\ell} \to [0, 1]\) be a smooth function such that \begin{enumerate}[\(-\)] \item for \(y \in Q^{m-\ell}_{\frac{1-\overline{\mu}}{\mu}}\), \(\theta(y) = 0\), \item for \(y \in {\mathbb R}^{m-\ell} \setminus Q^{m-\ell}_{\frac{1 - {\mu}}{\mu}}\), \(\theta(y)=1\). \end{enumerate} We now introduce for \(x=(x', x'') \in {\mathbb R}^\ell\times {\mathbb R}^{m-\ell}\), \[ \zeta(x) = \sqrt{\abs{x'}^2 + \theta\bigl(x''\bigr)^2 + \varepsilon\tau^2}. \] Let \(\lambda : {\mathbb R}^m\to {\mathbb R}\) be the function defined for \(x \in {\mathbb R}^m\) by \[ \lambda(x)= \varphi(\zeta(x)). \] As in the proof of Lemma~\ref{lemmaThickeningAroundDualSqueleton}, one may check that the map \(\Phi\) defined in the statement satisfies all the required properties: \begin{proof}[Proof of statement \((\ref{item2169})\)] Let \(x \in B^\ell_{{\underline{\mu}}/{\mu}} \times Q^{m-\ell}_{\frac{1 - \overline{\mu}}{\mu}}\). If \(x' = 0\), then \(\Phi(0, x'') = (0, x'') = x\). We may thus assume that \(x' \ne 0\). For every \(s \ge 0\), \[ \Phi(sx', x'') = \Big( s \varphi(\sqrt{s^2\abs{x'}^2 + \varepsilon \tau^2}) x', x'' \Big). \] Consider the function \(h : [0, \infty) \to {\mathbb R}\) defined by \[ h(s) = s \varphi(\sqrt{s^2 + \varepsilon \tau^2}). \] Then, \[ \Phi(sx', x'') = \Big( h(s\abs{x'}) \frac{x'}{\abs{x'}}, x'' \Big). \] We have \(h(0) = 0\) and \(h(\tau) > \underline{\mu}/\mu \ge \abs{x'}\). By the intermediate value theorem, there exists \(t \in (0, \tau )\) such that \(h(t) = \abs{x'}\). Thus, \(t \frac{x'}{\abs{x'}} \in B_\tau^\ell\) and \(\Phi(t \frac{x'}{\abs{x'}}, x'') = x\).
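The inequality \(h(\tau) > \underline{\mu}/\mu\) used above follows from the explicit formula for \(\varphi\): since \(\tau\sqrt{1 + \varepsilon} < 1\),
\[
h(\tau) = \tau \varphi\bigl(\tau\sqrt{1 + \varepsilon}\bigr) = \frac{\underline{\mu}}{\mu} \Bigl(1 + \frac{b}{\ln \frac{1}{\tau\sqrt{1 + \varepsilon}}}\Bigr) > \frac{\underline{\mu}}{\mu},
\]
because \(b > 0\) and \(\ln \frac{1}{\tau\sqrt{1 + \varepsilon}} > 0\).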
\end{proof} \begin{proof}[Proof of statement \((\ref{item1132})\)] Proceeding as in the proof of Lemma~\ref{lemmaThickeningAroundDualSqueleton}, one gets for every \(x \in B^{\ell}_{1}\times {\mathbb R}^{m-\ell}\), \begin{equation*} \abs{D^j\Phi(x)}\le\frac{\setcounter{cte}{1} C_{\thecte}}{\zeta(x)^j}. \end{equation*} Since \(\zeta(x) \ge \tau\sqrt{\varepsilon}\), we deduce that \begin{equation*} \abs{D^j\Phi(x)}\le\frac{C_{\thecte}}{(\tau\sqrt{\varepsilon})^j} \le \frac{\refstepcounter{cte} C_{\thecte}}{\tau^j}. \end{equation*} On the other hand, \[ \jac \Phi(x) =\varphi(\zeta(x))^{\ell-1}\Bigl(\varphi(\zeta(x)) \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr) + \big(\varphi^{(1)}(\zeta(x))\zeta(x) + \varphi(\zeta(x)) \big)\frac{\abs{x'}^2}{\zeta(x)^2} \Bigr). \] Since for every \(s > 0\), \(\varphi^{(1)}(s)s + \varphi(s) \ge 0\), and since \(\varphi(s) \ge \frac{(\underline{\mu}/\mu)\sqrt{1+\varepsilon}}{s}\) for \(0 < s \le \tau\sqrt{1+\varepsilon}\), we have for every \(x\) such that \(\zeta(x) \le \tau\sqrt{1+\varepsilon}\), \[ \jac \Phi(x) \ge \varphi(\zeta(x))^{\ell} \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr) \ge \frac{\refstepcounter{cte} C_{\thecte}}{\zeta(x)^\ell} \Bigl(1 - \frac{\abs{x'}^2}{\zeta(x)^2}\Bigr). \] If \(x\in B^{\ell}_{\tau}\times Q^{m-\ell}_{\frac{1-\overline{\mu}}{\mu}}\), then \(\zeta(x) \leq \tau \sqrt{1 + \varepsilon}\) and \(\zeta(x)^2 \ge (1 + \varepsilon)|x'|^2\). Thus, \[ \jac \Phi(x) \ge \frac{C_{\thecte}} {(\tau \sqrt{1 + \varepsilon})^\ell} \frac{\varepsilon}{1 + \varepsilon} = \frac{\refstepcounter{cte} C_{\thecte}}{\tau^\ell}. \] In particular, for every \(\beta > 0\), \(1 \le C \tau^{\frac{j\ell}{\beta}} \bigl(\jac{\Phi}(x)\bigr)^{\frac{j}{\beta}}\), and combining this with the estimate of \(\abs{D^j\Phi}\), we obtain \[ \abs{D^j\Phi(x)} \le \frac{C_2}{\tau^j} \le C' \tau^{j(\frac{\ell}{\beta} - 1)} \bigl(\jac{\Phi}(x)\bigr)^{\frac{j}{\beta}}, \] which is statement \((\ref{item1132})\) since \(\mu\eta = 1\). \end{proof} In order to establish the remaining properties stated in Lemma~\ref{lemmaThickeningAroundDualSqueletonmu}, we only need to repeat the proof of Lemma~\ref{lemmaThickeningAroundDualSqueleton} with obvious modifications.
\end{proof} \begin{proof}[Proof of Proposition~\ref{lemmaThickeningFaceNearDualSkeleton}] Define \(\Phi\) to be the composition of the map \(\Phi_1\) given by Lemma~\ref{lemmaThickeningAroundPrimalSqueletonmu} with \(\kappa=\frac{\underline{\mu}}{\mu\sqrt{\ell}}\) and the map \(\Phi_2\) given by Lemma~\ref{lemmaThickeningAroundDualSqueletonmu}; more precisely, \(\Phi=\Phi_1\circ \Phi_2\). The properties of \(\Phi\) can be established as in the case of thickening. \end{proof} \section{Proof of Theorem~\ref{theoremDensityManifoldMain}} Let \(\mathcal{K}^m\) be a cubication of \(Q_1^m\) of radius \(\eta > 0\) and let \(\mathcal{T}^{\ell^*}\) be the dual skeleton with respect to \(\mathcal{K}^\ell\) for some \(\ell \in \{0, \dots, m-1\}\). \begin{claim} Let \(v \in C^\infty(K^m \setminus T^{\ell^*}; N^n) \cap W^{k, p}(K^m; N^n)\). If \(\pi_\ell(N^n) = \{0\}\) and if \(\ell^* < m - kp\), then there exists a family of smooth maps \(v^\mathrm{sh}_{\tau_\mu, \mu} : K^m \to N^n\) such that \[ \lim_{\mu \to 0}{\norm{ v^\mathrm{sh}_{\tau_\mu, \mu} - v}_{W^{k, p}(K^m)}} = 0. \] \end{claim} This claim is a removable singularity property of topological nature for \(W^{k, p}\) maps. Theorem~\ref{theoremDensityManifoldMain} follows from Theorem~\ref{theoremDensityManifoldNontrivial} and this claim. Indeed, by Theorem~\ref{theoremDensityManifoldNontrivial} the class of maps \(v\) in the claim is dense in \(W^{k, p}(K^m; N^n)\) when \(\ell = \lfloor kp \rfloor\). Since the maps \(v^\mathrm{sh}_{\tau_\mu, \mu}\) are smooth and converge to \(v\) in \(W^{k, p}\), we deduce that smooth maps are dense in \(W^{k, p}(K^m; N^n)\). \begin{proof}[Proof of the Claim] Assuming that \(\pi_\ell(N^n) = \{0\}\), we can modify \(v\) in a neighborhood of \(T^{\ell^*}\) in order to obtain a smooth map \(v^\mathrm{ex}_\mu : K^m \to N^n\).
More precisely, for every \(0 < \mu < 1\), by Proposition~\ref{propositionSmoothExtension} and Proposition~\ref{propositionContinuousExtensionProperty}, there exists \(v^\mathrm{ex}_\mu \in C^\infty(K^m; N^n)\) such that \(v^\mathrm{ex}_\mu = v\) in \(K^m \setminus (T^{\ell^*} + Q_{\mu\eta}^m)\). Although \(v\) and \(v^\mathrm{ex}_\mu\) coincide in a large set, \(\norm{v^\mathrm{ex}_\mu}_{W^{k, p}(K^m)}\) can be much larger than \(\norm{v}_{W^{k, p}(K^m)}\) since the extension is of topological nature and does not take into account the values of \(v\) in a neighborhood of \(T^{\ell^*}\). In order to get a better extension of \(v\), we have to shrink \(T^{\ell^*} + Q_{\mu\eta}^m\) into a smaller neighborhood of \(T^{\ell^*}\). Assume that \(\mu < \frac{1}{2}\) and take \(0< \tau< \frac{1}{2}\). Let \(\Phi^\mathrm{sh}_{\tau, \mu} :{\mathbb R}^m \to {\mathbb R}^m\) be the smooth diffeomorphism given by Proposition~\ref{lemmaThickeningFaceNearDualSkeletonGlobal}. Define \[ v^\mathrm{sh}_{\tau, \mu}= (v^\mathrm{ex}_\mu \circ \Phi^\mathrm{sh}_{\tau, \mu}). \] In particular \(v^\mathrm{sh}_{\tau, \mu} \in C^\infty(K^m; N^n)\). Since \(v^\mathrm{sh}_{\tau, \mu} = v\) in the complement of \(T^{\ell^\ast} + Q^{m}_{2\mu\eta}\), for every \(j \in {\mathbb N}_*\), \begin{align*} \norm{D^j v^\mathrm{sh}_{\tau, \mu} - D^j v}_{L^p(K^m)} & = \norm{D^j v^\mathrm{sh}_{\tau, \mu} - D^j v}_{L^p(K^m\cap(T^{\ell^\ast} + Q^{m}_{2\mu\eta}))}\\ & \le \norm{D^j v^\mathrm{sh}_{\tau, \mu}}_{L^p(K^m\cap(T^{\ell^\ast} + Q^{m}_{2\mu\eta}))} + \norm{D^j v}_{L^p(K^m\cap(T^{\ell^\ast} + Q^{m}_{2\mu\eta}))}. 
\end{align*} If \(\ell^* < m-kp\), or equivalently if \(\ell + 1 > kp\), then by Corollary~\ref{corollaryShrinkingWkpEstimate} we have for every \(j \in \{1, \dotsc, k\}\), \begin{multline*} (\mu \eta)^{j} \norm{D^jv^\mathrm{sh}_{\tau, \mu}}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}))}\\ \leq \setcounter{cte}{1} C_{\thecte} \sum_{i=1}^j (\mu\eta)^{i} \norm{D^i v^\mathrm{ex}_\mu}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) \setminus (T^{\ell^*} + Q^m_{\mu\eta}))}\\ \qquad + C_{\thecte} \tau^{\frac{\ell+1-kp}{p}}\sum_{i=1}^j (\mu\eta)^{i} \norm{D^i v^\mathrm{ex}_\mu}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))}. \end{multline*} Since \(v^\mathrm{ex}_{\mu} = v\) in the complement of \(T^{\ell^\ast} + Q^{m}_{\mu\eta}\), we deduce that \begin{multline*} (\mu \eta)^{j} \norm{D^j v^\mathrm{sh}_{\tau, \mu} - D^j v}_{L^p(K^m)} \\ \le \refstepcounter{cte} C_{\thecte} \sum_{i=1}^j (\mu\eta)^{i} \norm{D^i v}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) )}\\ + C_1 \tau^{\frac{\ell+1-kp}{p}}\sum_{i=1}^j (\mu\eta)^{i} \norm{D^i v^\mathrm{ex}_\mu}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))}. \end{multline*} We show that \begin{equation} \label{equationConvergenceNearDualSkeleton} \lim_{\mu \to 0}{\sum_{i=1}^j (\mu\eta)^{i-j} \norm{D^i v}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{2\mu\eta}) )}} = 0. \end{equation} Since \(N^n\) is a compact subset of \({\mathbb R}^\nu\), \(v\) is bounded. By the Gagliardo-Nirenberg interpolation inequality, for every \(i \in \{1, \dots, k-1\}\), \(D^i v \in L^{\frac{kp}{i}}(K^m)\). 
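For the reader's convenience, we recall one standard form of the interpolation inequality being invoked here, stated on the cube with an additive zeroth-order term to account for the bounded domain (see \cite{Gagliardo} and \cite{Nirenberg1959}): for every \(i \in \{1, \dotsc, k - 1\}\),
\[
\norm{D^i v}_{L^{\frac{kp}{i}}(K^m)}
\le C \Bigl( \norm{v}_{L^\infty(K^m)}^{1 - \frac{i}{k}} \norm{D^k v}_{L^p(K^m)}^{\frac{i}{k}} + \norm{v}_{L^\infty(K^m)} \Bigr).
\]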
By H\"older's inequality, for every \(i\in \{1, \dotsc, k\}\) we then have \begin{multline*} (\mu\eta)^{i-j} \norm{D^i v}_{L^p(K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta}))} \\ \begin{aligned} & \le (\mu\eta)^{i-j} \bigabs{K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta})}^{\frac{k-i}{kp}} \norm{D^i v}_{L^{\frac{kp}{i}}(K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta}))}\\ & = \eta^{i-j} \mu^{k - j + (\ell + 1 - kp) \frac{k - i}{kp}} \bigg( \frac{\bigabs{K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta})}}{\mu^{\ell + 1}} \bigg)^\frac{k-i}{kp} \norm{D^i v}_{L^{\frac{kp}{i}}(K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta}))}. \end{aligned} \end{multline*} Since \(\bigabs{K^m\cap( T^{\ell^\ast} + Q^{m}_{2\mu\eta})}\leq \refstepcounter{cte} C_{\thecte} \mu^{\ell+1}\), the limit follows. For every \(0 < \mu < \frac{1}{2}\), take \(0 < \tau_\mu < \frac{1}{2}\) such that \begin{equation} \label{equationConvergenceTau} \lim_{\mu \to 0}{\tau_\mu^{\frac{\ell+1-kp}{p}}\sum_{i=1}^j (\mu\eta)^{i-j} \norm{D^i v^\mathrm{ex}_\mu}_{L^p(K^m \cap (T^{\ell^*} + Q^m_{\mu\eta}))}} = 0. \end{equation} From \eqref{equationConvergenceNearDualSkeleton} and \eqref{equationConvergenceTau}, we deduce that for every \(j \in \{1, \dots, k\}\), \[ \lim_{\mu \to 0}{\norm{D^j v^\mathrm{sh}_{\tau_\mu, \mu} - D^j v}_{L^{p}(K^m)}} = 0. \] Since \(v^\mathrm{sh}_{\tau_\mu, \mu}\) converges in measure to \(v\) as \(\mu\) tends to \(0\), we then have \[ \lim_{\mu \to 0}{\norm{ v^\mathrm{sh}_{\tau_\mu, \mu} - v}_{W^{k, p}(K^m)}} = 0. \] This establishes the claim. \end{proof} \section{Concluding remarks} \subsection{Other domains} Our proof can be adapted to more general domains \(\Omega \subset {\mathbb R}^m\). In order to apply the extension argument at the beginning of the proof of Theorem~\ref{theoremDensityManifoldNontrivial}, it suffices that \(\Omega\) is starshaped. Concerning Theorem~\ref{theoremDensityManifoldMain}, the crucial tool is the extension property of Proposition~\ref{lemmaContinuousExtensionProperty}. 
This can be enforced by assuming that \[ \pi_0(\Omega) = \ldots = \pi_{\floor{kp} - 1}(\Omega) = \{0\}. \] This contains in particular the case where \(\Omega\) is starshaped. Another option is to require that for some CW-complex structure, \(\overline{\Omega}\) has the \(\floor{kp} - 1\) extension property with respect to \(N^n\). More precisely, for every \(u \in C^0(\overline{\Omega}^{\floor{kp}}; N^n)\), the restriction \(u \vert_{\overline{\Omega}^{\floor{kp}-1}}\) of \(u\) to the skeleton of \(\overline\Omega\) of dimension \(\floor{kp} - 1\) has a continuous extension to \(\overline{\Omega}\). It can be shown that this property does not depend on the CW-complex structure of \(\overline{\Omega}\) (see the remark following \cite{Hang-Lin}*{Definition 2.3}). \subsection{Complete manifolds} Our argument also works for complete manifolds \(N^n\) that are embedded in \({\mathbb R}^\nu\) and for which there exists a projection \(\Pi\) defined on a uniform neighborhood of size \(\iota\) around \(N^n\). The compactness of \(N^n\) ensures, via the Gagliardo-Nirenberg interpolation inequality, that for every \(i \in \{1, \dotsc, k-1\}\), \(D^i u \in L^\frac{kp}{i}(Q^m_1)\). This inequality still holds if the assumption \(u \in L^\infty\) is replaced by \(u \in W^{1, kp}\). In this case, one proves that if \(\pi_{\floor{kp}}(N^n) = \{0\}\), then for every \( u \in W^{k, p}(Q^m_1; N^n) \cap W^{1, kp}(Q^m_1; N^n), \) there exists a family of maps \(u_\eta \in C^\infty(Q^m_1; N^n)\) such that for every \(i \in \{1, \dotsc, k\}\), \[ \lim_{\eta \to 0} \norm{D^i u_\eta - D^i u}_{L^\frac{kp}{i}(Q^m_1)} = 0 \] and \(u_\eta\) converges to \(u\) in measure as \(\eta\) tends to \(0\). Hence, \[ \lim_{\eta \to 0} \norm{u_\eta - u}_{W^{k, p}(Q^m_1) \cap W^{1, kp}(Q^m_1)} = 0. \] \section*{Acknowledgments} The authors would like to thank Petru Mironescu for interesting discussions and for his encouragement.
The second (ACP) and third (JVS) authors were supported by the Fonds de la Recherche scientifique---FNRS. \begin{bibdiv} \begin{biblist} \bib{Bethuel}{article}{ author={Bethuel, Fabrice}, title={The approximation problem for Sobolev maps between two manifolds}, journal={Acta Math.}, volume={167}, date={1991}, pages={153--206}, } \bib{BethuelZheng}{article}{ author={Bethuel, Fabrice}, author={Zheng, Xiao Min}, title={Density of smooth functions between two manifolds in Sobolev spaces}, journal={J. Funct. Anal.}, volume={80}, date={1988}, pages={60--75}, } \bib{Bousquet-Ponce-VanSchaftingen}{article}{ author={Bousquet, Pierre}, author={Ponce, Augusto C.}, author={Van Schaftingen, Jean}, title={A case of density in \(W^{2,p}(M;N)\)}, journal={C. R. Math. Acad. Sci. Paris}, volume={346}, date={2008}, pages={735--740}, } \bib{Brezis-Li}{article}{ author={Brezis, Haim}, author={Li, Yanyan}, title={Topology and Sobolev spaces}, journal={J. Funct. Anal.}, volume={183}, date={2001}, pages={321--369}, } \bib{Brezis-Mironescu}{article}{ author={Brezis, H.}, author={Mironescu, P.}, title={On some questions of topology for $S^1$-valued fractional Sobolev spaces}, journal={RACSAM. Rev. R. Acad. Cienc. Exactas F\'\i s. Nat. Ser. A Mat.}, volume={95}, date={2001}, pages={121--143}, } \bib{Campanato1963}{article}{ author={Campanato, S.}, title={Propriet\`a di h\"olderianit\`a di alcune classi di funzioni}, journal={Ann. Scuola Norm. Sup. Pisa (3)}, volume={17}, date={1963}, pages={175--188}, } \bib{Escobedo}{article}{ author={Escobedo, Miguel}, title={Some remarks on the density of regular mappings in Sobolev classes of \(S^M\)-valued functions}, journal={Rev. Mat. Univ. Complut. 
Madrid}, volume={1}, date={1988}, pages={127--144}, } \bib{Gagliardo}{article}{ author={Gagliardo, Emilio}, title={Ulteriori propriet\`a di alcune classi di funzioni in pi\`u variabili}, journal={Ricerche Mat.}, volume={8}, date={1959}, pages={24--51}, } \bib{Gastel-Nerf}{article}{ author={Gastel, Andreas}, author={Nerf, Andreas J.}, title={Density of smooth maps in \(W^{k,p}(M,N)\) for a close to critical domain dimension}, journal={Ann. Global Anal. Geom.}, volume={39}, date={2011}, pages={107--129}, } \bib{Hajlasz}{article}{ author={Haj{\l}asz, Piotr}, title={Approximation of Sobolev mappings}, journal={Nonlinear Anal.}, volume={22}, date={1994}, pages={1579--1591}, } \bib{Hang-Lin_2001}{article}{ author={Hang, Fengbo}, author={Lin, Fanghua}, title={Topology of Sobolev mappings}, journal={Math. Res. Lett.}, volume={8}, date={2001}, pages={321--330}, } \bib{Hang-Lin}{article}{ author={Hang, Fengbo}, author={Lin, Fanghua}, title={Topology of Sobolev mappings, II}, journal={Acta Math.}, volume={191}, date={2003}, pages={55--107}, } \bib{Hardt-Riviere}{unpublished}{ author={Hardt, Robert}, author={Rivi\`ere, Tristan}, title={Bubbling phenomena and weak convergence for maps in \(W^{2, 2}(B^5; S^3)\)}, } \bib{Jones1980}{article}{ author={Jones, Peter W.}, title={Extension theorems for BMO}, journal={Indiana Univ. Math. J.}, volume={29}, date={1980}, pages={41--66}, } \bib{Mironescu}{article}{ author={Mironescu, Petru}, title={On some properties of \(S^1\)-valued fractional Sobolev spaces}, conference={ title={Noncompact problems at the intersection of geometry, analysis, and topology}, }, book={ series={Contemp. Math.}, volume={350}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={2004}, pages={201--207}, } \bib{Nirenberg1959}{article}{ author={Nirenberg, L.}, title={On elliptic partial differential equations}, journal={Ann. Scuola Norm. Sup. 
Pisa (3)}, volume={13}, date={1959}, pages={115--162}, } \bib{Ponce-VanSchaftingen}{article}{ author={Ponce, Augusto C.}, author={Van Schaftingen, Jean}, title={Closure of smooth maps in \(W^{1,p}(B^3;S^2)\)}, journal={Differential Integral Equations}, volume={22}, date={2009}, pages={881--900}, } \bib{SchoenUhlenbeck}{article}{ author={Schoen, Richard}, author={Uhlenbeck, Karen}, title={Boundary regularity and the Dirichlet problem for harmonic maps}, journal={J. Differential Geom.}, volume={18}, date={1983}, pages={253--268}, } \bib{Stein}{book}{ author={Stein, Elias M.}, title={Singular integrals and differentiability properties of functions}, series={Princeton Mathematical Series, No. 30}, publisher={Princeton University Press}, place={Princeton, N.J.}, date={1970}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Elliptic flow is described by the differential second Fourier coefficient of the azimuthal momentum distribution $v_{2}({\cal D})=\langle \cos(2\phi) \rangle_{\cal D}$ \cite{Ollit92,Bar94,PosVol}. The brackets denote averaging over many particles and events, and ${\cal D}$ represents a phase-space window in the $(p_{T},y)$ plane in which $v_{2}$ is calculated. The azimuthal angle $\phi$ is measured with respect to the reaction plane defined by the impact parameter vector $\vec b$ and the beam direction. For non-central collisions ($b \neq 0$), $v_{2}$ is an important observable due to its sensitivity to the EoS, and through it to a possible phase transition to the QGP. Since we could identify protons via $dE/dx$ only at low $p_{T}$, the $v_{2}$ of the $\Lambda$ is important because it is a baryon as well. In comparison to the elliptic flow of pions and $K^{0}_{S}$ mesons, the $\Lambda$ flow can be used to check the mass ordering effect and for comparison to hydrodynamical predictions. Testing the differential flow measurements of different particle species against different scaling scenarios may yield additional information about the origin of flow. As strangeness enhancement has been suggested as a signature of the deconfined stage \cite{Koch}, understanding $\phi$ and $K_{S}^{0}$ meson production is important, as they involve hidden and open strangeness, respectively. The study of $\phi$ yields in different decay channels is important in light of a possible modification of the $\phi$ mass, width and branching ratios near the phase boundary. \section{Experiment} The CERES experiment consists of two radial Silicon Drift Detectors (SDD), two Ring Imaging CHerenkov (RICH) detectors and a radial drift Time Projection Chamber (TPC). The CERES spectrometer covers $\eta=2.05-2.70$ with full azimuthal acceptance. The two SDDs are located at 10 and 13~cm downstream of a segmented Au target. They were used for the tracking and vertex reconstruction.
The purpose of the RICH detectors is electron identification. The new radial-drift TPC operated inside a magnetic field with a maximal radial component of 0.5~T, providing a precise determination of the momentum. Charged particles emitted from the target are reconstructed by matching track segments in the SDD and in the TPC using a momentum-dependent matching window. A more detailed description of the CERES experiment can be found in \cite{Mar}. For the flow analysis, we used 30$\cdot 10^{6}$ Pb+Au events at 158~AGeV/c collected during the year 2000 data-taking period. Of these, $91.2\%$ were triggered on $\sigma/\sigma_{geo} \le 7\%$, and $8.3\%$ on $\sigma/\sigma_{geo} \le 20\%$. The $\phi$ meson analysis in the kaon (dilepton) channel used 24$\cdot 10^{6}$ (18$\cdot 10^{6}$) events taken with the most central trigger. \section{Methods of strange particle reconstruction} \label{Methods} The $\Lambda$ particles were reconstructed via the decay channel $\Lambda \rightarrow p+\pi^{-}$ with a $BR=63.9\%$ and $c\tau=7.89$~cm \cite{PPB04}. Due to the late decay of the $\Lambda$ particle, only those TPC tracks without a match to an SDD track were chosen as candidates for $\Lambda$ daughters. Partial particle identification (PID) was performed using $dE/dx$ information from the TPC by applying a $\pm 1.5 \sigma$ ($+1 \sigma$) window around the momentum-dependent Bethe-Bloch value for pions (protons). On the pair level, a $p_{T}$-dependent opening angle cut is applied, in addition to a cut in the Armenteros-Podolanski variables ($q_{T} \le 0.125$~GeV/c and $0 \le \alpha \le 0.65$) to suppress $K_{S}^{0}$. With these cuts, values of $S/B\approx0.04$ and $S/\sqrt{B}\approx500$ were obtained \cite{Jovan}. The $K_{S}^{0}$ particles were reconstructed via the decay channel $K_{S}^{0}\rightarrow \pi^{+}+\pi^{-}$ with a $BR=68.95\%$ and $c\tau=2.68$~cm \cite{PPB04}.
Partial PID for $\pi^{+}$ and $\pi^{-}$ was performed by applying a $\pm 1.5 \sigma$ window around the momentum-dependent Bethe-Bloch energy loss value for pions. Since the $K_{S}^{0}$ particle comes from the primary vertex, fake track combinations can be suppressed by a cut (0.02~cm) on the radial distance between the primary vertex and the point where the back-extrapolated momentum vector of the $K_{S}^{0}$ candidate intersects the $x-y$ plane. In addition, a cut of 1~cm on the z-position of the secondary vertex was applied. In this approach, the values of $S/B \approx 0.92$ and $S/\sqrt{B} \approx 500$ were obtained \cite{Wilrid,Jovan}. In order to remove the effect of autocorrelations, tracks which were chosen as candidates for daughter particles were not used for the determination of the reaction plane orientation. In the case of $\Lambda$ particle reconstruction, the combinatorial background was determined by ten random rotations of positive daughter tracks around the beam axis and constructing the invariant mass distribution, while in the case of $K^{0}_{S}$ particle reconstruction, the mixed-event technique was used. $\Lambda$ ($K^{0}_{S}$) particles were reconstructed in $y$-$p_{T}$-$\phi$ bins. We used the area under the peak, obtained by fitting the invariant mass distribution with a Gaussian, to measure the yield of $\Lambda$ ($K^{0}_{S}$) in a given bin. Plotting the yield versus $\phi$ for different $p_{T}$ and $y$ values, one can construct the $dN_{\Lambda(K^{0}_{S})}/d\phi$ distribution. Fitting these distributions with the function $c[1+2v_{2}'\cos(2\phi)]$, it is possible to extract the observed differential $v_{2}'$ values. The obtained $v_{2}'$ coefficients were corrected for the reaction plane resolution via $v_{2}=v_{2}'/\sqrt{2\langle \cos[2(\Phi_{a}-\Phi_{b})]\rangle}$ \cite{PosVol}. Here, $\Phi_{a}$ and $\Phi_{b}$ denote the azimuthal orientations of reaction planes reconstructed from two random subevents.
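As an illustration of the extraction procedure just described, the following minimal numerical sketch fits a toy azimuthal distribution with $c[1+2v_{2}'\cos(2\phi)]$ and applies the resolution correction; the bin counts, the modulation, and the subevent correlation below are invented for the example and are not CERES data.

```python
import numpy as np

# Toy azimuthal yield with a known observed modulation v2' = 0.05.
nbins = 8
phi = (np.arange(nbins) + 0.5) * np.pi / nbins  # bin centers
v2_obs_true = 0.05
yields = 1.0e4 * (1.0 + 2.0 * v2_obs_true * np.cos(2.0 * phi))

# Fit c * [1 + 2 v2' cos(2 phi)]: the model is linear in (c, 2*c*v2'),
# so an ordinary least-squares fit is enough for this sketch.
A = np.column_stack([np.ones_like(phi), np.cos(2.0 * phi)])
c, b = np.linalg.lstsq(A, yields, rcond=None)[0]
v2_obs = b / (2.0 * c)

# Correct for the finite reaction-plane resolution,
# v2 = v2' / sqrt(2 <cos 2(Phi_a - Phi_b)>), with subevents a and b.
cos_ab = 0.048  # assumed subevent correlation (resolution ~ 0.31)
v2 = v2_obs / np.sqrt(2.0 * cos_ab)
```

With an assumed resolution near the upper end of the quoted 0.16--0.31 range, the correction raises the observed coefficient by roughly a factor of three.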
In the case of the $\pi^{\pm}$ elliptic flow analysis, subevents are formed from positive and negative pions separately. Using the method of subevents, correction factors were calculated for different centrality bins. In all three analyses ($\Lambda$, $K^{0}_{S}$ and $\pi^{\pm}$), similar values were obtained. The corresponding resolution ranges from about 0.16 to 0.31, depending on the centrality. Due to the small statistics of strange particles, the differential elliptic flow analysis was performed for only two centrality classes. The huge statistics of $\pi^{\pm}$ allowed us to perform the differential elliptic flow analysis in six centrality bins. As we used the combination of data taken with different triggers, the centrality is characterized by a weighted mean centrality $\langle \frac{\sigma}{\sigma_{geo}} \rangle$ calculated using the numbers of TPC tracks as statistical weights \cite{Jovan}. \section{Results} Fig.~\ref{fig:hydro} shows the resulting $p_{T}$ dependences of $v_{2}$ for the three particle species. An increase of the elliptic flow magnitude {\it vs} $p_{T}$ is visible for all three particle species. In the case of $\Lambda$ elliptic flow, the absolute systematic error $\Delta v_2$, estimated from two different ways of $\Lambda$ reconstruction, is $+0.001 \atop -0.007$ for $p_{T} < 1.6$~GeV/c and $+0.00 \atop -0.02$ for $p_{T} > 1.6$~GeV/c, which is small compared to the statistical errors. Particles are accepted as $\pi^{\pm}$ if their TPC $dE/dx$ is within a $\pm 1.5\sigma$ window around the nominal Bethe-Bloch value for pions. The HBT contribution to the $\pi^{\pm}$ elliptic flow is subtracted using the procedure described in \cite{Dinh99}. Separately calculated elliptic flow of $\pi^{+}$ and $\pi^{-}$ shows that the averaged difference between them is $\approx 0.003$ in both $\eta$ and $y$, which can be attributed to the contamination of protons in the $\pi^{+}$ sample.
Comparing results obtained from two independent analysis methods, we concluded that the overall absolute systematic error in the $\pi^{\pm}$ elliptic flow measurements is not larger than 0.0036. \begin{figure}[h] \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./hydro_SQMLambda_SC.eps} \end{minipage} \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./hydro_SQMK0S_SC.eps} \end{minipage} \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./hydro_SQMPion_SC.eps} \end{minipage} \caption{The $\Lambda$ (left), $K^{0}_{S}$ (middle) and $\pi^{\pm}$ (right) elliptic flow $\it vs$ transverse momentum in semicentral events. Hydrodynamical predictions are presented for two freeze-out temperatures: $T_{f}=120$~MeV (solid) and $T_{f}=160$~MeV (dotted). \label{fig:hydro}} \end{figure} The elliptic flow results are compared with the hydrodynamical calculations done by P. Huovinen based on \cite{Kolb01,Pasi05}. The calculation was done in 2+1 dimensions with initial conditions fixed via a fit to the $p_{T}$ spectra of negatively charged particles and protons in Pb+Pb collisions at 158~A GeV/c \cite{Kolb99}. The underlying EoS assumes a first-order phase transition to a QGP at a critical temperature of $T_{c}=165$~MeV. The hydrodynamical predictions were calculated with two freeze-out temperatures, $T_{f}=120$~MeV and $T_{f}=160$~MeV. The model prediction with the lower freeze-out temperature of $T_{f}=120$~MeV overpredicts the data, while rather good agreement can be achieved with the higher freeze-out temperature of $T_{f}=160$~MeV (this is, however, not the preferred value considering the proton $p_{T}$ spectra).
\begin{figure}[h] \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./SPS_RHIC_CERES_SQM.eps} \end{minipage} \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./RHIC_CERES_K_SQM.eps} \end{minipage} \begin{minipage}[c]{.33\textwidth} \includegraphics[height=5.2cm]{./compare_all_SQM.eps} \end{minipage} \caption{Comparison of $\Lambda$ (left) and $K^{0}_{S}$ (middle) elliptic flow measured by CERES, STAR and NA49. Comparison between the elliptic flow magnitude of the $\pi^{\pm}$, low-$p_{T}$ protons, $\Lambda$, and $K_{S}^{0}$ in semicentral events (right). \label{fig:compare_RHIC_NA49}} \end{figure} A comparison of the CERES data to results from NA49 \cite{Stef05} at the same energy ($\sqrt{s_{NN}}=17$~GeV) and to STAR results \cite{Oldenburg05} at $\sqrt{s_{NN}}=200$~GeV is shown in Fig.~\ref{fig:compare_RHIC_NA49}. The NA49 and CERES data are in very good agreement. After rescaling the STAR results to the centrality used in the CERES experiment, the $v_{2}$ values measured at RHIC are $15-20\%$ higher due to the higher beam energy. In Fig.~\ref{fig:compare_RHIC_NA49} (right), the elliptic flow magnitudes of the $\pi^{\pm}$, $K_{S}^{0}$, low-momentum protons, and $\Lambda$ measured by CERES are compared. A mass-ordering effect is observed. At small $p_{T}$, up to $\approx1.5$~GeV/c, $v_{2}(\Lambda)<v_{2}(K_{S}^{0})<v_{2}(\pi^{\pm})$. In the region of high $p_{T}$, above $\approx2$~GeV/c, the tendency is the opposite. As the proton and the $\Lambda$ hyperon have similar masses and three valence quarks each, the $v_{2}$ of low-momentum identified protons is considered a natural continuation of the $\Lambda$ $v_{2}(p_{T})$ dependence in the region of small $p_{T}$. The indication of a possible undershoot to negative values is tantalizing but not significant in view of the statistical errors.
\begin{figure}[h] \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.2cm] {./scaled_all_SQM.eps} \end{minipage} \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.2cm] {./scaled_ytfs_all_SQM.eps} \end{minipage} \caption{Comparison between the elliptic flow magnitude of $\pi^{\pm}$, low-$p_{T}$ protons, $\Lambda$, and $K_{S}^{0}$ scaled by the number of constituent quarks (left) and plotted versus the $y^{fs}_{T}$ variable (right). \label{fig:scaled_all_new}} \end{figure} Fig.~\ref{fig:scaled_all_new} (left) shows the scaled elliptic flow magnitude $v_{2}/n_{q}$ for $\pi^{\pm}$, $K_{S}^{0}$, low-$p_{T}$ protons and $\Lambda$ plotted against $p_{T}/n_{q}$ in semicentral events. Here, $n_{q}$ denotes the number of constituent quarks. There is an indication that high-$p_{T}$ particles ($p_{T}>1.5$~GeV/c) show scaling behavior. A similar behavior is observed by the STAR experiment at RHIC \cite{Oldenburg05}. This is consistent with the coalescence mechanism, where co-moving quarks with high $p_{T}$ form hadrons. In this case, scaling by the number of constituent quarks reveals the original momentum-space azimuthal anisotropy formed at the early stage of the collision. Within the Buda-Lund model of hydrodynamics \cite{Cso96}, a scaling of the elliptic flow of different particle species has been suggested \cite{Cso03,Csa04} when the transverse rapidity is used instead of the transverse momentum. We use their scaling variable $y^{fs}_{T}$ \cite{Taran05} and show, in Fig.~\ref{fig:scaled_all_new} (right), the results for $\pi^{\pm}$, $K_{S}^{0}$, low-$p_{T}$ protons and $\Lambda$ in semicentral events. Within statistical errors, a reasonable scaling is observed for all particles. This may indicate a hydrodynamic behavior of matter created in central heavy-ion collisions at the highest SPS energy.
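The constituent-quark scaling used above amounts to assuming a common quark-level curve $f$ with $v_{2}(p_{T}) \approx n_{q}\, f(p_{T}/n_{q})$. The short sketch below illustrates the resulting collapse of $v_{2}/n_{q}$ versus $p_{T}/n_{q}$; the functional form of $f$ is an arbitrary assumption chosen only for the illustration, not a fit to the measured points.

```python
import numpy as np

# Assumed common quark-level flow curve f (purely illustrative).
def f(x):
    return 0.10 * np.tanh(1.5 * x)

# NCQ scaling ansatz: v2(pT) = nq * f(pT / nq),
# with nq = 2 for mesons (e.g. K0s) and nq = 3 for baryons (e.g. Lambda).
pt = np.linspace(0.4, 3.2, 8)
v2 = {2: 2 * f(pt / 2), 3: 3 * f(pt / 3)}

# Plotting v2/nq against pT/nq collapses both species onto f.
for nq in (2, 3):
    assert np.allclose(v2[nq] / nq, f(pt / nq))

# Under this ansatz the baryon v2 exceeds the meson v2 at high pT,
# matching the reversed ordering observed above ~2 GeV/c.
assert v2[3][-1] > v2[2][-1]
```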
\begin{figure}[h] \begin{minipage}[h]{.49\textwidth} \includegraphics[height=5.5cm] {./sqm_k0_001.eps} \end{minipage} \begin{minipage}[h]{.49\textwidth} \includegraphics[height=5.5cm] {./sqm_k0_002.eps} \end{minipage} \caption{Transverse momentum and rapidity $K_{S}^{0}$ spectra from the $K^{0}_{S}$ analysis performed without PID and without secondary vertex reconstruction \cite{Radom}. \label{fig:kaons}} \end{figure} Two independent analyses of the $K_{S}^{0}$ spectra were done using the CERES data \cite{Radom,Wilrid}. The first one, performed without PID and without secondary vertex reconstruction, is based on TPC information only \cite{Radom}. A cut in the Armenteros-Podolanski plane was used in order to suppress $\Lambda$ contamination. The $p_{T}$ and $y$ spectra are shown in Fig.~\ref{fig:kaons}. An alternative approach to the $K_{S}^{0}$ reconstruction was performed without PID but with secondary vertex reconstruction \cite{Wilrid}, as already described in Section~\ref{Methods}. In both analyses, Pb+Au events taken with the most central trigger were used. The $K_{S}^{0}$ transverse momentum spectrum obtained with this analysis \cite{Wilrid} is shown in Fig.~\ref{fig:Ludolphs} (left). The invariant multiplicity was fitted with an exponential fall-off with transverse mass $m_{t}$. The yields and the inverse slope parameter $T$ of the $p_{T}$ spectra from the two analyses are in good agreement. A comparison with results from other experiments is shown in Fig.~\ref{fig:Ludolphs} (right). In order to match the centrality of the CERES experiment, results \begin{figure}[h] \begin{minipage}[h]{.49\textwidth} \includegraphics[height=6.0cm] {./K0_pt_Jovan.eps} \end{minipage} \begin{minipage}[h]{.49\textwidth} \includegraphics[height=6.0cm] {./K0_dNdy_Jovan.eps} \end{minipage} \caption{Left: Transverse momentum $K_{S}^{0}$ spectrum from the $K^{0}_{S}$ analysis performed with secondary vertex reconstruction \cite{Wilrid}.
Right: A comparison between CERES results (red circles \cite{Wilrid} and blue squares \cite{Radom}), published (open triangles) \cite{NA49} and preliminary (open crosses) \cite{NA49_nuclexp} NA49 results and NA57 data (green diamonds) \cite{NA57}. The black dotted line represents a fit to the charged kaon yield measured by NA49, while the blue (green) dotted line corresponds to a fit to the $K_{S}^{0}$ yield measured by CERES (NA57). \label{fig:Ludolphs}} \end{figure} from the NA49 \cite{NA49,NA49_nuclexp} and NA57 \cite{NA57} experiments are slightly rescaled. A rather good agreement between the NA49 analysis of charged kaons and the CERES $K^{0}_{S}$ results in shape and yield was found. The difference in the yield is only 5\%. The rapidity distribution of $K^{0}_{S}$ observed by NA49 shows a shape similar to the one from CERES (represented by the blue dotted line in Fig.~\ref{fig:Ludolphs} (right)) and a relatively good agreement in the yield. Within the CERES acceptance, the results agree with the NA57 data, although the NA57 fit does not. The CERES experiment enabled, for the first time at the SPS, a simultaneous study of the leptonic and charged-kaon decay modes of the $\phi$ meson, which may shed light on the $\phi$ puzzle \cite{PRLMarin}. In order to obtain the $p_{T}$ spectrum of $\phi$ mesons, the invariant \begin{figure}[h] \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.6cm] {./fig_phi_invm_ceres.eps} \end{minipage} \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.6cm] {./data_trento_mix_scaledphi.eps} \end{minipage} \caption{Left: $K^{+}K^{-}$ invariant mass spectrum after background subtraction in $1.5$~GeV/c $<p^{\phi}_{T}<1.75$~GeV/c and $2.2<y^{\phi}<2.4$. Right: $e^{+}e^{-}$ invariant mass spectrum compared to the hadron decay cocktail (solid line) and to a model calculation assuming the dilepton yield from the QGP phase and an in-medium spread $\rho$ (dashed line).
\label{fig:ceres}} \end{figure} mass distributions of $K^{+}K^{-}$ pairs were constructed. The corresponding distributions of the combinatorial background were calculated using the mixed-event technique. An example is shown in Fig.~\ref{fig:ceres} (left). To study $\phi$ mesons in the dilepton ($e^{+}e^{-}$) decay mode, electrons are identified using the RICH detectors and the TPC $dE/dx$. The main difficulties of reconstructing the $\phi$ meson in the dilepton channel are the low branching ratio and the huge combinatorial background. Details of how to reduce the combinatorial background are explained in \cite{Mar,Misko,Sergej}. The $e^{+}e^{-}$ invariant-mass spectrum, corrected for the efficiency and normalized to the number of charged particles in the acceptance, is shown in Fig.~\ref{fig:ceres} (right). In the same figure are shown the expectations from the hadron decay cocktail \cite{Sako}, as well as a model calculation where the cocktail $\rho$ contribution is replaced by an explicit in-medium modification combined with continuous $\pi\pi$ annihilation \cite{Rap}. The latter accounts very well for the data. The inverse slope parameter of $T=$273$\pm$9(stat)$\pm$10(syst)~MeV and a rapidity density $dN/dy$ of 2.05$\pm$0.14(stat)$\pm$0.25(syst) in the $K^{+}K^{-}$ mode and $T=$306$\pm$82(stat)$\pm$40(syst)~MeV and $dN/dy=$2.04$\pm$0.49(stat)$\pm$0.32(syst) in the dilepton mode are in good agreement within errors. The data do not support a possible enhancement of the $\phi$ yield in the dilepton over the hadronic channel by a factor larger than 1.6 at the 95\% CL. \begin{figure}[h] \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.6cm] {./fig_phi_pt_eekk.eps} \end{minipage} \begin{minipage}[h]{.48\textwidth} \includegraphics[height=5.6cm] {./fig_phi_mt_eept.eps} \end{minipage} \caption{Left: Acceptance and efficiency corrected $p_{T}$ spectrum of $\phi$ measured in the $K^{+}K^{-}$ (open circles) and $e^{+}e^{-}$ (closed circles) decay modes.
Right: Scaled $m_{T}$ distribution of $\phi$ mesons reconstructed in the $K^{+}K^{-}$ (triangles) and $e^{+}e^{-}$ (circles) decay channels compared to the results from NA49 (squares) and NA50 (diamonds). \label{fig:marin}} \end{figure} The $p_{T}$ dependence of the $\phi$ meson yield measured in the $K^{+}K^{-}$ and $e^{+}e^{-}$ decay channels, corrected for the acceptance and efficiency, is shown in Fig.~\ref{fig:marin} (left). The results are in very good agreement. After accounting for the slightly different measurement conditions, a comparison between CERES results and the existing Pb+Pb systematics \cite{Ror} is shown in Fig.~\ref{fig:marin} (right). The CERES results are in good agreement with the results from NA49 measured in the kaon channel. On the other hand, CERES data in the $K^{+}K^{-}$ channel do not agree with NA50 results. \section*{References}
\section{The Spectral Containment Property}\label{sec:eigenvaluecontainment} We would like to relate the hyperbolicity cone of a homogeneous stable polynomial to the hyperbolicity cone of its minor lift. Recall from \Cref{def:eigenvalue_containment} that a homogeneous multiaffine stable polynomial $p$ has the \emph{spectral containment property} if for any $X \in H(P)$, there is some vector $\lambda$ consisting of the eigenvalues of $X$ with appropriate multiplicity so that $\lambda \in H(p)$. Elementary symmetric polynomials have the spectral containment property, and in this section we will show that several other polynomials have it as well. The remainder of this section is devoted to proving some sufficient conditions for the spectral containment property, as well as showing some connections between this property and the Schur--Horn theorem. \subsection{Schur--Horn Theorem and stable linear functions} Recall that a linear homogeneous polynomial $p(x) = a_1x_1 + \dots + a_nx_n$ is stable if and only if either $a_i \ge 0$ for each $i \in [n]$, or $a_i \le 0$ for each $i \in [n]$. Moreover, in this case $H(p) = \{x \in \mathbb{R}^n : p(x) \ge 0\}$. These are the simplest stable polynomials, and yet it is not completely trivial to show that they have the spectral containment property. \begin{theorem}\label{thm:lineareigprop} Every stable linear homogeneous polynomial has the spectral containment property. \end{theorem} In order to prove this, we will use Schur's contribution to the Schur--Horn theorem. \begin{theorem}[Schur]\label{thm:schur_horn} Let $p : \mathbb{R}^n \rightarrow \mathbb{R}$ be a homogeneous linear function, and let $P$ be the associated minor lift. Let $A$ be an $n \times n$ symmetric matrix and let $\lambda$ be an eigenvalue vector for $A$. Let $\mathfrak{S}_n$ denote the symmetric group, which acts on $\mathbb{R}^n$ by permuting coordinates. Let $\textnormal{O}(n)$ denote the orthogonal group of $n\times n$ matrices.
Then \[ \max_{\pi \in \mathfrak{S}_n} p(\pi(\lambda)) = \max_{U \in \textnormal{O}(n)} P(UAU^{\intercal}). \] \end{theorem} \begin{proof}[Proof of \Cref{thm:lineareigprop}] Suppose that $A \in H(P)$, which is equivalent to $P(A) \ge 0$. By the Schur--Horn theorem, there is some eigenvalue vector of $A$, say $\lambda$, so that $p(\lambda) \ge P(A) \ge 0$. Thus, there is an eigenvalue vector of $A$ contained in $H(p)$, as desired. \end{proof} We will see in \Cref{sec:schurhornprop} that if an appropriate generalization of the Schur--Horn theorem holds, then we would be able to show the spectral containment property for a large class of polynomials. \subsection{Operations Preserving the Spectral Containment Property} In this section we prove that the spectral containment property is preserved under some simple operations involving adjoining a new variable. \begin{lemma} Let $q\in\mathbb{R}[x_1,\ldots,x_n]$ be stable, multiaffine and homogeneous. Let $p\in\mathbb{R}[x_0,\ldots,x_n]$ be defined by $p(x_0, \dots, x_n) = q(x_1, \dots, x_n)$. If $q$ has the spectral containment property, then $p$ has the spectral containment property. \end{lemma} \begin{proof} First note that $x = (x_0,\ldots,x_n) \in H(p)$ if and only if $(x_1,\ldots,x_n) \in H(q)$. Let $X \in H(P)$; we can divide $X$ into blocks as \[ X = \begin{pmatrix} X_{00} & v^{\intercal}\\ v & M \end{pmatrix}. \] Here, $M$ is equal to $X|_{[n]}$, and $v$ is some element of $\mathbb{R}^n$. If $I_n$ denotes the $n\times n$ identity matrix, we can see from the definition of $P$ that $P(X + tI_{n+1}) = Q(M + tI_{n})$. Therefore, for $t \ge 0$, $Q(M + tI_n) = P(X+tI_{n+1}) \ge 0$, which implies $M \in H(Q)$. Let $\lambda(M)$ and $\lambda(X)$ be eigenvalue vectors of $M$ and $X$ respectively, with the property that the entries of $\lambda(M)$ and $\lambda(X)$ appear in increasing order.
The Cauchy interlacing inequalities say that \[ \lambda_{0}(X) \le \lambda_{1}(M) \le \lambda_{1}(X) \le \lambda_{2}(M) \le \lambda_{2}(X) \le \dots \le \lambda_{{n}}(M) \le \lambda_{{n}}(X). \] Thus for $i\in[n]$ we can write $\lambda_i(X)=\lambda_i(M)+\epsilon_i$ for some $\epsilon_i\geq0$. Since $q$ has the spectral containment property, there is a permutation $\sigma$ such that $(\lambda_{\sigma(i)}(M))_{1\leq i\leq n}\in H(q)$. Since the hyperbolicity cone of the stable polynomial $q$ is convex and contains the nonnegative orthant, we also have $(\lambda_{\sigma(i)}(X))_{1\leq i\leq n}=(\lambda_{\sigma(i)}(M)+\epsilon_{\sigma(i)})_{1\leq i\leq n}\in H(q)$. This implies that $(\lambda_0(X),\lambda_{\sigma(1)}(X),\ldots,\lambda_{\sigma(n)}(X))\in H(p)$. \end{proof} The spectral containment property is also preserved when multiplying by a new variable. \begin{prop}\label{lem:multiplyx0} Let $q\in\mathbb{R}[x_1,\ldots,x_n]$ be stable, multiaffine and homogeneous. Let $p\in\mathbb{R}[x_0,\ldots,x_n]$ be defined by $p(x_0, \dots, x_n) = x_0q(x_1, \dots, x_n)$. If $q$ has the spectral containment property, then $p$ has the spectral containment property. \end{prop} Before we show this, we need another lemma. Let $X$ be a matrix written in block form as \[ X = \begin{pmatrix} X_{00} & v^{\intercal}\\ v & M \end{pmatrix} \] with $X_{00} \neq 0$. We write $X / 0 := M - X_{00}^{-1}vv^{\intercal}$ for the Schur complement. \begin{lemma}\label{lem:sup} Let $q \in \mathbb{R}[x_1, \dots, x_n]$ be stable, multiaffine and homogeneous, and let $p = x_0 q \in \mathbb{R}[x_0, \dots, x_n]$. If $X \in H(P)$ and $X_{00} > 0$, then $X / 0 \in H(Q)$. \end{lemma} \begin{proof} Note that a vector $x = (x_0,x_1,\dots,x_n) \in H(p)$ if and only if $x_0 \ge 0$ and $(x_1,\dots,x_n) \in H(q)$. Recall the determinant formula for Schur complements: for any matrix $X$ in the block form above with $X_{00} \neq 0$, \[ \det(X) = X_{00} \det(X / 0).
\] Also, it is not hard to see from the definition that if $S\subseteq \{0,1,\dots, n\}$ and $0\in S$, then \[ X|_{S} / 0 = X/0|_{(S\setminus 0)}\, , \] that is, Schur complements interact naturally with taking submatrices. Since every monomial of $p = x_0q$ is divisible by $x_0$, we have $a_S = 0$ unless $0 \in S$, and therefore \[ P(X) = \sum_{S \subseteq \{0, \dots, n\}} a_S \det(X|_S) = \sum_{S \subseteq \{0, \dots, n\}} a_S X_{00}\det((X /0)|_{S\setminus 0}) = X_{00} Q(X / 0). \] Thus, if $X \in H(P)$ and $X_{00} > 0$, then \[ Q(X / 0) = \frac{P(X)}{X_{00}} \ge 0. \] We can strengthen this result by noting that if we let $J$ be the block diagonal matrix given by \[ J = \begin{pmatrix} 0 & 0\\ 0 & I_n \end{pmatrix}, \] then $J \in H(P)$, since it is in particular positive semidefinite. It is clear from the definition that $X / 0 + tI_{n} = (X+tJ) / 0$. Thus, we have that for all $t \ge 0$, \[ Q(X / 0 + tI_{n}) = Q((X+tJ) / 0) = \frac{P(X + tJ)}{X_{00}} \ge 0, \] which implies that $X / 0 \in H(Q)$. \end{proof} \begin{proof}[Proof of \Cref{lem:multiplyx0}] First assume that $X_{00} > 0$. By \Cref{lem:sup} and the spectral containment property for $q$, there is an ordering of the eigenvalues of $X / 0$ so that $\lambda(X / 0) \in H(q)$. Now, we can write \[ X = \begin{pmatrix} 0 & 0 \\ 0 & X / 0 \end{pmatrix} + \begin{pmatrix} X_{00} & v^{\intercal} \\ v & X_{00}^{-1}vv^{\intercal} \end{pmatrix}, \] where the second term is a rank 1 positive semidefinite matrix. Let $X' = \begin{pmatrix} 0 & 0 \\ 0 & X / 0 \end{pmatrix}$. Note that $X'$ is block diagonal, so that if $\lambda(X')$ is an eigenvalue vector for $X/0$, then $0 \oplus \lambda(X')$ is an eigenvalue vector for $X'$. In particular, by ordering the entries appropriately, $0 \oplus \lambda(X') \in H(p)$, from our characterization of $H(p)$ in terms of $H(q)$. By the Weyl inequalities, there is an ordering of the eigenvalues of $X$ so that $\lambda_i(X) \ge \lambda_i(X')$ for each $i$.
This implies that \[ \lambda(X) = (0\oplus \lambda(X')) + u \] where $u$ is a nonnegative vector. Since $H(p)$ is a convex cone that contains both $0\oplus \lambda(X')$ and the nonnegative orthant, it follows that $\lambda(X) \in H(p)$. The case of $X_{00}=0$ follows from continuity of eigenvalues. Observe that if $X$ is in the interior of $H(P)$, then $X_{00} > 0$, and also, since the eigenvalues of a symmetric matrix vary continuously with the matrix, the property of having an eigenvalue vector in $H(p)$ is closed. Therefore, approximating an arbitrary $X \in H(P)$ by matrices in the interior of $H(P)$, which is nonempty, we conclude that there is an eigenvalue vector of $X$ in $H(p)$. \end{proof} \subsection{Polynomials Interlacing an Elementary Symmetric Polynomial} The spectral containment property can be proved more easily for polynomials which interlace some elementary symmetric polynomial. Before stating the main result, we note that the minor lift map preserves interlacing. \begin{lemma} Let $p,q\in\mathbb{R}[x_1,\ldots,x_n]$ be stable, multiaffine and homogeneous. Let $P,Q$ be the associated minor lifts. Then $p$ interlaces $q$ if and only if $P$ interlaces $Q$. \label{lem:interlacingpreservers} \end{lemma} \begin{proof} Assume that $p$ interlaces $q$. Then by the multivariate Hermite--Biehler Theorem \cite[Thm.~5.3]{MR2353258} we have that $p+iq$ is stable. Let $A$ be a symmetric $n\times n$ matrix. We have to show that $P(tI+A)$ interlaces $Q(tI+A)$. From \cite[Thm.~1.3]{weyl} we see that the linear operator $T_A$ that sends a multiaffine polynomial $p$ to the polynomial $P(\Diag(x_1, \dots, x_n) + A)$, where $P$ denotes the minor lift of $p$, is a stability preserver. Thus $T_A(p+iq)$ is stable. Substituting $t$ for all variables in $T_A(p+iq)$ shows that $P(tI+A)+iQ(tI+A)$ is stable. Now the claim follows from another application of the Hermite--Biehler Theorem. The other direction is clear, since $p$ and $q$ are the respective restrictions of $P$ and $Q$ to the diagonal matrices.
\end{proof} \begin{lemma} \label{lem:interlacing_e_k} Suppose that $p$ is a stable, multiaffine and homogeneous polynomial of degree $d$, and that $e_{d-1}$ interlaces $p$. Further suppose that for any $X \in H(P)$, there is some eigenvalue vector $\lambda$ of $X$ such that $p(\lambda) \ge P(X)$. Then $p$ has the spectral containment property. \end{lemma} \begin{proof} We first note the fact that if $p$ is any hyperbolic polynomial, and $q$ interlaces $p$, then $x$ is in the interior of $H(p)$ if and only if $x$ is in $H(q)$ and $p(x) > 0$. This follows easily from considering the bivariate case. Let $X$ be in the interior of $H(P)$. We first want to show that there is an eigenvalue vector of $X$ that is contained in $H(p)$; the case for general $X$ will then follow from the fact that the eigenvalues of a symmetric matrix are continuous as a function of the entries of the matrix. Since $e_{d-1}$ interlaces $p$, by \Cref{lem:interlacingpreservers}, we have that $E_{d-1}$ interlaces $P$. From this, we conclude that since $X \in H(P)$, $X$ is contained in $H(E_{d-1})$, and so any vector of eigenvalues of $X$ is contained in $H(e_{d-1})$. Let $\lambda$ be an eigenvalue vector of $X$ such that $0 < P(X) \le p(\lambda)$; this $\lambda$ must then lie in the interior of $H(p)$, as desired. \end{proof} In \Cref{lem:interlacers_open}, we show that there is an open neighborhood of $e_{d}$ in the space of multiaffine forms of degree $d$ in which every stable form is interlaced by $e_{d-1}$. This implies that if we have a stable multiaffine polynomial $p$ which is sufficiently close to $e_{d}$, then $p$ will have the spectral containment property as long as for any $X \in H(P)$, there is some eigenvalue vector $\lambda$ so that $p(\lambda) \ge P(X)$. We will apply this lemma in a few cases, together with some variational characterizations of eigenvalues, to show the spectral containment property for some special kinds of polynomials.
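The proof above passes between membership of $X$ in $H(E_{d-1})$ and membership of its eigenvalue vector in $H(e_{d-1})$; this rests on the classical identity $E_k(X) = e_k(\lambda(X))$, i.e., the sum of the $k\times k$ principal minors of a symmetric matrix equals the $k$-th elementary symmetric polynomial of its eigenvalues. The following small numerical sanity check of this identity is our own illustration and is not part of the text (the helper names are ours):

```python
import numpy as np
from itertools import combinations

def e_k(vals, k):
    # k-th elementary symmetric polynomial evaluated at the entries of vals
    return sum(np.prod([vals[i] for i in S])
               for S in combinations(range(len(vals)), k))

def E_k(X, k):
    # minor lift of e_k: the sum of all k x k principal minors of X
    n = X.shape[0]
    return sum(np.linalg.det(X[np.ix_(S, S)])
               for S in combinations(range(n), k))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = (A + A.T) / 2                      # a random real symmetric matrix
lam = np.linalg.eigvalsh(X)            # its eigenvalue vector

for k in range(1, 6):
    assert np.isclose(E_k(X, k), e_k(lam, k))
```

In particular $E_1(X)=\operatorname{tr}(X)$ and $E_n(X)=\det(X)$, matching $e_1$ and $e_n$ of the eigenvalues.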
\begin{lemma}\label{lem:sosmult} Let $p,q$ be multiaffine polynomials of degrees $d+1$ and $d$, respectively, and let $a\in\mathbb{R}^n$. There exist multiaffine polynomials $m_1,\ldots, m_s, n_1,\ldots,n_s$ of degree $d$ such that $$\operatorname{D}_ap\cdot q-p\cdot \operatorname{D}_a q=m_1n_1+\ldots+m_sn_s.$$ \end{lemma} \begin{proof} This is straightforward. \end{proof} \begin{prop}\label{lem:interlacers_open} There is an open neighborhood $U$ of $e_{d+1}$ in the vector space of multiaffine forms of degree $d+1$ such that every stable multiaffine $p\in U$ of degree $d+1$ is interlaced by $e_d$. \end{prop} \begin{proof} Let $I$ be the ideal generated by all multiaffine polynomials of degree $d$ and let $V$ be the degree $2d$ part of $I^2$. Let $\Sigma\subset V$ be the set of all polynomials that can be written as a sum of squares of multiaffine polynomials of degree $d$. It follows from the proof of \cite[Thm.~6.2]{kummer2020spectral} that $\operatorname{D}_e e_{d+1}\cdot e_{d}-e_{d+1}\cdot \operatorname{D}_e e_{d}$ is in the interior of $\Sigma$ (with respect to the Euclidean topology on $V$). Thus it follows from \Cref{lem:sosmult} that there is an open neighborhood $U$ of $e_{d+1}$ such that for every stable multiaffine $p\in U$ the polynomial $\operatorname{D}_e p\cdot e_{d}-p\cdot \operatorname{D}_e e_{d}$ is in $\Sigma$. Thus $e_d$ interlaces $p$ by \cite[Thm.~2.1]{interlacers}. \end{proof} \subsection{Generalized Schur--Horn Property and the Spectral Containment Property}\label{sec:schurhornprop} We say that an $n$-variate multiaffine homogeneous polynomial $p$ has the \textbf{Schur--Horn property} if for any $n\times n$ symmetric matrix $X$ with eigenvalue vector $\lambda$, \[ \max_{\pi \in \mathfrak{S}_n} p(\pi(\lambda)) = \max_{U \in O(n)} P(UXU^{\intercal}). \] The Schur--Horn property for $p$ is equivalent to the fact that for any $n\times n$ symmetric matrix $X$ with eigenvalue vector $\lambda$, \[ \max_{\pi \in \mathfrak{S}_n} p(\pi(\lambda)) \ge P(X).
\] Another equivalent formulation states that $p$ has the Schur--Horn property if and only if the maximum of $P(UXU^{\intercal})$ as $U$ varies over $O(n)$ is obtained for some $U$ such that $UXU^{\intercal}$ is diagonal. The Schur--Horn theorem states that any linear homogeneous polynomial has the Schur--Horn property. We now relate the Schur--Horn property and the spectral containment property. \begin{theorem}\label{thm:schur_horn_to_eigenvalue} Let $p$ be a homogeneous multiaffine form of degree $d$. If $p$ has the Schur--Horn property, and $e_{d-1}$ interlaces $p$, then $p$ has the spectral containment property. \end{theorem} \begin{proof} It is clear that if $p$ has the Schur--Horn property, then in particular, for any $X \in H(P)$, there is some eigenvalue vector $\lambda$ so that $p(\lambda) \ge P(X)$. Therefore, $p$ has the spectral containment property by \Cref{lem:interlacing_e_k}. \end{proof} Using the Schur--Horn property and our previous lemmas, we can show that a family of stable polynomials has the spectral containment property. \begin{lemma}\label{lem:schur_horn_add} If $p$ is a degree $d$ homogeneous multiaffine polynomial with the Schur--Horn property, then $e_d(x) + p$ also has the Schur--Horn property. \end{lemma} \begin{proof} It can easily be seen that if $X$ is an $n\times n$ symmetric matrix with eigenvalue vector $\lambda$, then \begin{align*} \max_{\pi \in \mathfrak{S}_n} (e_d(\pi(\lambda)) + p(\pi(\lambda))) &= e_d(\lambda) + \max_{\pi \in \mathfrak{S}_n} p(\pi(\lambda))\\ & = E_d(X) + \max_{U \in O(n)} P(UXU^{\intercal})\\ & = \max_{U \in O(n)} \left( E_d(UXU^{\intercal}) + P(UXU^{\intercal}) \right). \end{align*} Here the second equality uses $e_d(\lambda) = E_d(X)$ together with the Schur--Horn property of $p$, and the third that $E_d$ is invariant under orthogonal conjugation. This gives the desired result. \end{proof} \begin{lemma} If $p$ is a degree $d$ homogeneous multiaffine polynomial with the Schur--Horn property, then for $\epsilon > 0$ sufficiently small, $e_d(x) + \epsilon p$ has the spectral containment property.
\end{lemma} \begin{proof} By \Cref{lem:interlacers_open}, we see that for $\epsilon$ sufficiently small, $e_d(x) + \epsilon p$ is interlaced by $e_{d-1}$. Moreover, by \Cref{lem:schur_horn_add}, we see that $e_d(x) + \epsilon p$ has the Schur--Horn property. Therefore, by \Cref{thm:schur_horn_to_eigenvalue}, we see that $e_d(x) + \epsilon p$ has the spectral containment property. \end{proof} We now give some examples of polynomials with the Schur--Horn property. \subsection{The Schur--Horn Property for Degree $n-1$ Polynomials} \begin{theorem}\label{thm:inverseSchurHorn} If $p \in \mathbb{R}[x_1, \dots, x_n]$ is a degree $n-1$ multiaffine homogeneous polynomial, then $p$ has the Schur--Horn property. \end{theorem} \begin{proof} Write $p(x) = \sum_{i=1}^n a_i \prod_{j\in [n] \setminus i}x_j$. In this case, \[ P(X) = \sum_{i=1}^n a_i\det(X|_{[n] \setminus i}). \] Recall that the dual of $p(x)$ was defined in \Cref{sec:minor_lift}, as \[ p^*(x) = \sum_{i=1}^n a_i x_i. \] Abusing notation, we define $P^*$ to be \[ P^*(X) = \sum_{i=1}^n a_i X_{ii}. \] Define the adjugate matrix of an invertible $X$ by $\Adj(X) = \det(X) X^{-1}$; it extends continuously to all matrices. By Cramer's rule, the diagonal entries of the adjugate matrix are given by \[\Adj(X)_{ii} = \det(X|_{[n] \setminus i}).\] Hence, using \Cref{rmk:minor_lift_dual}, we see that $P^*(\Adj(X)) = P(X)$. The eigenvalues of $\Adj(X)$ are of the form $\mu_j = \prod_{i \in [n] \setminus j} \lambda_i$, where $\lambda$ is an eigenvalue vector of $X$. We see then that $p^*(\mu) = p(\lambda)$. Now we apply the Schur--Horn theorem to the linear form $p^*$ and the matrix $\Adj(X)$ to see that \[ \max_{\pi \in \mathfrak{S}_n} p^*(\pi(\mu)) = \max_{U \in O(n)} P^*(U^{\intercal}\Adj(X)U). \] Applying our identities relating the vector $\mu$ to $\lambda$, we see that \[ \max_{\pi \in \mathfrak{S}_n} p(\pi(\lambda)) = \max_{U \in O(n)} P(U^{\intercal}XU). \] \end{proof} From this, we immediately obtain a corollary.
\begin{cor} There is an open set $U$ in the space of degree $n-1$ homogeneous multiaffine polynomials, such that $U$ contains $e_{n-1}$ and every element of $U$ is stable and has the spectral containment property. \end{cor} \subsection{Extensions of Elementary Symmetric Polynomials and the Schur--Horn Property} We note that it is unclear whether the Schur--Horn property is preserved by adjoining extra variables. We show that this holds for elementary symmetric polynomials. \begin{prop}\label{prop:SH} Fix natural numbers $d \le k \le n$. Let $p = \pm e_d(x_1, \dots, x_k) \in \mathbb{R}[x_1, \dots, x_n]$. Then $p$ has the Schur--Horn property. \end{prop} We can reduce this to the classical Schur--Horn theorem. To do this, we require a lemma involving a construction, which is referred to in \cite[Chapter 3]{marvin1973finite} as a derivation of a matrix $X$. \begin{lemma}\label{lem:derivation} For any $n\times n$ symmetric matrix $X$ with eigenvalue vector $\lambda$, there exists a $\binom{n}{k}\times \binom{n}{k}$ symmetric matrix $D^{k,d} X$ with the following two properties: \begin{itemize} \item The eigenvalues of $D^{k,d} X$ are precisely those real numbers of the form \[e_d(\lambda_{s_1}, \lambda_{s_2}, \dots, \lambda_{s_k}),\] where we range over all possible values of $s_1, \dots, s_k\in [n]$ so that $s_1< s_2< \dots< s_k$. \item The diagonal entries of $D^{k,d} X$ are precisely those of the form $E_d(X|_{S})$, where $S$ ranges over the size $k$ subsets of $[n]$. \end{itemize} \end{lemma} \begin{proof}[Proof of \Cref{lem:derivation}] We will define $D^{k,d}X$ in terms of wedge powers.
If we regard $X$ as an endomorphism from $\mathbb{R}^n$ to $\mathbb{R}^n$, then $D^{k,d}X$ is defined as an endomorphism of $\wedge^k \mathbb{R}^n$ by letting \[ D^{k,d}X(v_1\wedge v_2 \wedge \dots \wedge v_k) = \sum_{S \in \binom{[k]}{d}} w_{S,1}\wedge w_{S,2} \wedge \dots \wedge w_{S,k}, \] where \[ w_{S, j} = \begin{cases} Xv_j & \text{ if }j\in S\\ v_j & \text{ if }j \not \in S\end{cases}. \] It is not hard to see that if $v_1, \dots, v_k$ are linearly independent eigenvectors of $X$ with eigenvalues $\lambda_1, \dots, \lambda_k$ respectively, then $v_1 \wedge \dots \wedge v_k$ is an eigenvector of $D^{k,d}X$ with eigenvalue $e_d(\lambda_1, \dots, \lambda_k)$, and this clearly implies the first property in \Cref{lem:derivation}. On the other hand, if we use the natural basis of $\wedge^k \mathbb{R}^n$ given by $\{e_{s_1} \wedge e_{s_2} \wedge \dots \wedge e_{s_k}\}$, where $e_i$ is a standard basis vector, and $s_1 < s_2 <\dots <s_k$, then this basis is orthogonal under the natural inner product of $\wedge^k \mathbb{R}^n$, and also \[ (e_{s_1} \wedge e_{s_2} \wedge \dots \wedge e_{s_k})^{\intercal}D^{k,d}X(e_{s_1} \wedge e_{s_2} \wedge \dots \wedge e_{s_k}) = E_{d}(X|_{\{s_1, \dots, s_k\}}). \] This clearly implies the second property of \Cref{lem:derivation}. \end{proof} \begin{proof}[Proof of \Cref{prop:SH}] The classical Schur--Horn theorem, applied to the matrix $D^{k,d}X$ from \Cref{lem:derivation}, implies that for any symmetric matrix $X$, \[ \max_{s_1 < s_2 < \dots < s_k} e_d(\lambda_{s_1}, \lambda_{s_2}, \dots, \lambda_{s_k}) \ge \max_{S \in \binom{[n]}{k}} E_d(X|_{S}) \ge E_d(X|_{\{1,\dots,k\}}), \] and also that \[ \min_{s_1 < s_2 < \dots < s_k} e_d(\lambda_{s_1}, \lambda_{s_2}, \dots, \lambda_{s_k}) \le \min_{S \in \binom{[n]}{k}} E_d(X|_{S}) \le E_d(X|_{\{1,\dots,k\}}). \] The first statement implies the Schur--Horn property for $e_d(x_1, \dots, x_k)$, and the second implies the Schur--Horn property for $-e_d(x_1, \dots, x_k)$.
\end{proof} \subsection{A Small Example of the Schur--Horn Property} We give one more example of the Schur--Horn property, which is noteworthy because our proof does not appeal to the classical Schur--Horn theorem. \begin{lemma} The polynomial $x_1(x_2+x_3) \in \mathbb{R}[x_1, x_2, x_3, x_4]$ has the Schur--Horn property. \end{lemma} \begin{remark} The polynomial $x_1(x_2+x_3) \in \mathbb{R}[x_1, x_2, x_3]$ clearly has the Schur--Horn property, by \Cref{thm:inverseSchurHorn}, but it is not clear that this remains the case if we introduce a new variable. \end{remark} \begin{proof} Let $D = \Diag(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$. Let $U$ be in $\SO(4)$, and write its columns as \[ U = \begin{pmatrix} v & w & z & y \end{pmatrix}. \] It is not hard to see via an explicit computation that \[ P(UDU^{\intercal}) = \det \begin{pmatrix} \sum_{i=1}^4 \lambda_i v_i^2 & \sum_{i=1}^4 \lambda_i w_iv_i\\ \sum_{i=1}^4 \lambda_i w_iv_i & \sum_{i=1}^4 \lambda_i w_i^2\\ \end{pmatrix} + \det \begin{pmatrix} \sum_{i=1}^4 \lambda_i v_i^2 & \sum_{i=1}^4 \lambda_i z_iv_i\\ \sum_{i=1}^4 \lambda_i z_iv_i & \sum_{i=1}^4 \lambda_i z_i^2\\ \end{pmatrix}. \] We expand this out by multilinearity of the determinant to obtain \begin{align} \sum_{i=1}^4 \sum_{j=1}^4 \lambda_i \lambda_j \left(\det \begin{pmatrix} v_i^2 & w_jv_j\\ w_iv_i & w_j^2\\ \end{pmatrix} + \det \begin{pmatrix} v_i^2 & z_jv_j\\ z_iv_i & z_j^2\\ \end{pmatrix}\right)\label{eq:multilinear}\\ = \sum_{i=1}^4 \sum_{j<i} \lambda_i \lambda_j \left(\det \begin{pmatrix} v_i^2 & w_jv_j\\ w_iv_i & w_j^2\\ \end{pmatrix} + \det \begin{pmatrix} v_i^2 & z_jv_j\\ z_iv_i & z_j^2\\ \end{pmatrix} + \det \begin{pmatrix} v_j^2 & w_iv_i\\ w_jv_j & w_i^2\\ \end{pmatrix} + \det \begin{pmatrix} v_j^2 & z_iv_i\\ z_jv_j & z_i^2\\ \end{pmatrix}\right)\label{eq:weird}\\ = \sum_{i=1}^4 \sum_{j<i} \lambda_i \lambda_j \left(\det \begin{pmatrix} v_i & w_i\\ v_j & w_j\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_i & z_i\\ v_j & z_j\\ \end{pmatrix}^2 \right). \end{align} We can think of this as a polynomial \[ \gamma(\lambda) = \sum_{i = 1}^4 \sum_{j < i}\gamma_{i,j} \lambda_i \lambda_j, \] where \[ \gamma_{i,j} = \det \begin{pmatrix} v_i & w_i\\ v_j & w_j\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_i & z_i\\ v_j & z_j\\ \end{pmatrix}^2. \] We make the following claim: \begin{lemma} \label{lmma:convex_hull} $\gamma(\lambda)$ is in the convex hull of the polynomials \[ a_{i,j,k}(\lambda) = \lambda_i(\lambda_j + \lambda_k) \] where $\{i, j, k\} \in \binom{[4]}{3}$. \end{lemma} To see that \Cref{lmma:convex_hull} implies the Schur--Horn property for $p = x_1(x_2+x_3)$, observe that for any $i,j,k$, \[ a_{i,j,k}(\lambda) \le \max_{\pi \in S_4} p(\lambda_{\pi(1)},\lambda_{\pi(2)} ,\lambda_{\pi(3)} ,\lambda_{\pi(4)}). \] Hence, in particular, any convex combination of the $a_{i,j,k}$ will be upper bounded by this same quantity. Therefore, since every symmetric matrix is diagonalizable, we have that for any symmetric matrix $X$, and any orthogonal $U$, \[ P(U^{\intercal}XU) \le \max_{\pi \in S_4} p(\lambda_{\pi(1)},\lambda_{\pi(2)} ,\lambda_{\pi(3)} ,\lambda_{\pi(4)}). \] The opposite inequality is easy to see by choosing $U$ to be an orthogonal matrix diagonalizing $X$. \end{proof} It remains to show \Cref{lmma:convex_hull}. \begin{proof}[Proof of \Cref{lmma:convex_hull}] Since we work in dimension $4$, we can find the inequalities defining this convex hull explicitly using computational methods. It turns out that the polynomial $\gamma$ is in the convex hull of the $a_{i,j,k}$ if and only if for each $\{i,j\} \in \binom{[4]}{2}$, \[ \gamma_{i,j} \ge 0, \] if $\{i,j,k,l\} = \{1,2,3,4\}$, then \[ \gamma_{i,j} + \gamma_{k,l} \le 1, \] and \[ \sum_{\{i,j\} \in \binom{[4]}{2}} \gamma_{i,j} = 2. \] For our particular value of $\gamma_{i,j}$, it is easy to see that it is the sum of two squares and hence nonnegative.
Further notice that the sum of all of the coefficients of $\gamma$ is $\gamma(1,1,1,1)$, so that returning to the definition of the polynomial $\gamma$, \[ \gamma(1,1,1,1) = P(I) = 2. \] It remains to show that \[ \gamma_{i,j} + \gamma_{k,l} \le 1. \] Notice that \[ \gamma_{i,j} + \gamma_{k,l} = \det \begin{pmatrix} v_i & w_i\\ v_j & w_j\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_i & z_i\\ v_j & z_j\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_k & w_k\\ v_l & w_l\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_k & z_k\\ v_l & z_l\\ \end{pmatrix}^2. \] We prove that this is at most 1. Recall that \[ U = \begin{pmatrix} v & w & z & y \end{pmatrix} = \begin{pmatrix} v_1 & w_1 & z_1 & y_1\\ v_2 & w_2 & z_2 & y_2\\ v_3 & w_3 & z_3 & y_3\\ v_4 & w_4 & z_4 & y_4\\ \end{pmatrix}. \] For $S, T \subseteq [4]$ with $|S| = |T|$, let $U|_{S,T}$ denote the submatrix of $U$ with rows indexed by $S$ and columns indexed by $T$. The Jacobi complementary minors theorem for matrix inverses implies that \[ \det(U|_{S, T}) = \pm\det(U)\det(U^{-1}|_{T^c, S^c}) = \pm\det(U^{\intercal}|_{T^c, S^c}) = \pm\det(U|_{S^c, T^c}). \] We now see that \[ \det \begin{pmatrix} v_i & w_i\\ v_j & w_j \end{pmatrix}^2= \det \begin{pmatrix} z_k & y_k\\ z_l & y_l \end{pmatrix}^2. \] Similarly, we must have \[ \det \begin{pmatrix} v_i & z_i\\ v_j & z_j \end{pmatrix}^2= \det \begin{pmatrix} w_k & y_k\\ w_l & y_l \end{pmatrix}^2. \] Let \[ M = \begin{pmatrix} v_k & w_k & z_k & y_k\\ v_l & w_l & z_l & y_l\\ \end{pmatrix}. \] Notice that since $U$ is orthogonal, these two rows of $U$ are orthonormal, and so $M M^{\intercal}= I_2$.
Applying the Cauchy--Binet theorem to $\det(MM^{\intercal}) = \det(I_2)$, and writing $M_S$ for the $2\times 2$ submatrix of $M$ whose columns are indexed by $S$, we see that \begin{align*} 1 &= \det(MM^{\intercal}) \\ &= \sum_{S \in \binom{[4]}{2}} \det(M_{S})\det(M_{S}^{\intercal})\\ &= \sum_{S \in \binom{[4]}{2}} \det(M_{S})^2\\ & \ge \det \begin{pmatrix} z_k & y_k\\ z_l & y_l \end{pmatrix}^2 + \det \begin{pmatrix} w_k & y_k\\ w_l & y_l\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_k & w_k\\ v_l & w_l\\ \end{pmatrix}^2 + \det \begin{pmatrix} v_k & z_k\\ v_l & z_l\\ \end{pmatrix}^2\\ &= \gamma_{i,j} + \gamma_{k,l} \, , \end{align*} where the last equality uses the two identities above. The result now follows. \end{proof} \section{Introduction}\label{sec:intro_new}% A homogeneous polynomial $p \in \mathbb{R}[\x] := \mathbb{R}[x_1,\dots,x_n]$ is called \Def{hyperbolic} with respect to $\a \in \mathbb{R}^n$ if $p(\a) \neq 0$ and $p_{\a}(t) := p(\v-t \a) \in \mathbb{R}[t]$ has only real roots for all $\v \in \mathbb{R}^n$. The \emph{hyperbolicity cone} $H_{\a}(p)$ of a polynomial $p$ hyperbolic with respect to $\a\in\mathbb{R}^n$ is the set of all $\v\in\mathbb{R}^n$ such that $p(\v-t\a)$ has only nonnegative roots. Originally conceived in the context of partial differential equations~\cite{Ga51}, hyperbolic polynomials were discovered to yield deep results in (non-)linear algebra, combinatorics, and optimization; see, for example,~\cite{MR3754960, borcea2008applications, BB09stabilitypreserver, MR3098077,SanSau, MR2738906}. \newcommand\1{\mathbf{1}}% A fundamental family of hyperbolic polynomials is given by the \Def{elementary symmetric polynomials} \[ e_k(\x) \ := \ \sum_{J} \prod_{i \in J}x_i\,, \] where $J$ ranges over all $k$-element subsets of $[n] := \{1,\dots,n\}$. The elementary symmetric polynomials are \Def{stable}: a multivariate polynomial $p\in\mathbb{R}[\x]$ is \Def{stable} if for all complex numbers $z_1,\ldots,z_n$ lying in the open upper half-plane, we have $p(z_1,\ldots,z_n)\neq0$.
If $p$ is homogeneous, then it is stable if and only if it is hyperbolic with respect to all $\a \in \mathbb{R}^n_{>0}$, and we denote by $H(p) = H_{\1}(p)$ its hyperbolicity cone with respect to the vector $\1 = (1,\dots,1)$. Let $\X$ denote an $n\times n$ matrix of indeterminates, and for any $J \subseteq [n]$, we let $\X_J$ denote the principal submatrix of $\X$ indexed by $J$. We can then define a polynomial \[ E_k(\X) \ := \ \sum_{J} \det(\X_J) \, , \] where again $J$ ranges over all $k$-element subsets of $[n]$. It turns out that these polynomials do not vanish on the \Def{Siegel upper half-plane}, i.e., the set of all complex symmetric matrices with positive definite imaginary part. Such polynomials are called \Def{Dirichlet--G\r{a}rding}~\cite{harvey2009hyperbolic} or \Def{PSD-stable}~\cite{JorgensTheobald}. For a homogeneous polynomial $P$ this property is equivalent to being hyperbolic with respect to any positive definite matrix, and we denote by $H(P)$ its hyperbolicity cone (taken with respect to the identity matrix). When the context is clear, we will simply refer to PSD-stable polynomials $P(\X)$ as stable polynomials. The starting point of our paper is the observation that $E_k(\X)$ is closely related to $e_k(x)$. For instance, if $\X = \Diag(x_1, \dots, x_n)$ is the diagonal matrix with diagonal entries $X_{ii} = x_i$, then $E_k(\X) = e_k(x_1, \dots, x_n)$. To generalize this observation, let $\R^{n \times n}_{\mathrm{sym}}$ be the vector space of real symmetric $n \times n$-matrices and let $\mathbb{R}[\X]$ be the ring of polynomials on it, where we regard $\X$ as an $n\times n$ matrix of indeterminates. A polynomial $P(\X) \in \mathbb{R}[\X]$ is called a \Def{linear principal minor polynomial} or \Def{lpm-polynomial} if $P(\X)$ is of the form \[ P(\X) \ = \ \sum_{J} c_J \det(\X_J) \, , \] where $J$ ranges over all subsets of $[n]$.
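The diagonal restriction just described is easy to verify symbolically. The following sketch is our own illustration and not part of the text (the helper name \texttt{lpm} is ours): it checks that an lpm polynomial evaluated at $\Diag(x_1,\dots,x_4)$ reduces to $\sum_J c_J \prod_{i\in J} x_i$, both for $E_2$ and for an asymmetric choice of coefficients.

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x1:5')                  # x1, x2, x3, x4
D = sp.diag(*x)                         # the diagonal matrix Diag(x1, ..., x4)

def lpm(X, coeffs):
    """Linear principal minor polynomial: sum over J of c_J * det(X|_J)."""
    return sp.expand(sum(c * X[list(J), list(J)].det()
                         for J, c in coeffs.items()))

# E_2: all 2x2 principal minors, each with coefficient 1
c_E2 = {J: 1 for J in combinations(range(4), 2)}
e2 = sum(x[i] * x[j] for i, j in combinations(range(4), 2))
assert sp.expand(lpm(D, c_E2) - e2) == 0

# an asymmetric lpm polynomial: det(X|_{12}) + 3*det(X|_{134})
c = {(0, 1): 1, (0, 2, 3): 3}
p = x[0]*x[1] + 3*x[0]*x[2]*x[3]
assert sp.expand(lpm(D, c) - p) == 0
```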
The first natural question we pursue is which interesting properties are shared by a homogeneous lpm polynomial $P(\X)$ and its diagonal restriction $p(\x)$. We show that $P(\X)$ is PSD-stable if and only if $p(\x)$ is stable. We obtain a similar result for the related concept of Lorentzian polynomials. We prove these facts using the theory of stability preservers \cite{weyl}. Having established these basic facts, we generalize classical determinantal inequalities from linear algebra, such as the Hadamard--Fischer and Koteljanskii inequalities, to the setting of stable lpm polynomials. This generalizes the Hadamard-type inequalities for $k$-positive matrices obtained in~\cite{Hadamard-kPos}. Another interesting consequence of the above results is that they yield a construction of a new class of hyperbolic polynomials. Using lpm polynomials we construct a hyperbolic cubic in 6 variables which has a Rayleigh difference that is not a sum of squares. The smallest previously known example, which has $43$ variables, was constructed by Saunderson in \cite{soshyperbolic}. Finally, we study whether the eigenvalue vector $\lambda$ of a matrix $X$ lying in the hyperbolicity cone of a stable lpm polynomial $P(\X)$ lies in the hyperbolicity cone of $p(x)$, and show how this is related to a potential generalization of the classical Schur--Horn theorem \cite{schur23, horn54}. We now discuss our results in detail. \section{The Minor Lift Map and Stability Preservers}\label{sec:minor_lift} Our goal in this section is to prove \Cref{thm:minor_lift_stable}. We first explain how to construct the minor lift map via partial derivatives of the determinant. Let $p\in\mathbb{R}[\x]$ be a multiaffine polynomial. The \emph{dual} of $p$ is \begin{equation}\label{eqn:dual} p^*(x) := x_1\cdot x_2\cdots x_n\cdot p\Bigl(\frac{1}{x_1}, \frac{1}{x_2}, \dots, \frac{1}{x_n}\Bigr).
\end{equation} For any polynomial $p \in \mathbb{R}[x_1, \dots, x_n]$, we consider the differential operator $p^*\left(\frac{\partial}{\partial X_{11}}, \frac{\partial}{\partial X_{22}}, \dots, \frac{\partial}{\partial X_{nn}}\right)$. For instance, if $p= x^S = \prod_{i \in S}x_i$ is a monomial, then the associated differential operator is $\prod_{i \notin S}\frac{\partial}{\partial X_{ii}}$. Applying the differential operator associated to $x^S$ to $\det(X)$ yields \[ \Bigl(\prod_{i \notin S}\frac{\partial}{\partial X_{ii}}\Bigr) \det(\X) = \det(X_{S}). \] By linearity, we then obtain that \[ P = \left(p^*\left(\frac{\partial}{\partial X_{11}}, \frac{\partial}{\partial X_{22}}, \dots, \frac{\partial}{\partial X_{nn}}\right)\right) \det(X) \, . \] This formulation of the minor lift map will allow us to easily apply the theory of stability preservers. \begin{remark}\label{rmk:minor_lift_dual} The minor lift operation interacts nicely with dualization. If $p$ is a multiaffine polynomial, then \[ \Phi(p^*)|_X = \det(X)\cdot\Phi(p)|_{X^{-1}}. \] Here, $\cdot|_X$ denotes the evaluation of a polynomial at $X$. This result follows directly from the Jacobi complementary minors identity, found in \cite{MR1411115}, which states that $\det(X|_{S^c}) = \det(X^{-1}|_S) \det(X)$. This is a matrix analogue of~\eqref{eqn:dual}. \end{remark} Before we go on, we need the following facts about hyperbolicity cones that can be found in \cite{MR2738906}. \begin{lemma}\label{lem:hyp1} Let $p\in\mathbb{R}[\x]$ be a homogeneous polynomial and $K\subset\mathbb{R}^n$ a cone. The following are equivalent: \begin{enumerate} \item $p$ is hyperbolic with respect to all $\a\in K$, and \item $p(\v+\textnormal{i}\a)\neq0$ for all $\v\in\mathbb{R}^n$ and $\a\in K$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:hyp2} Let $p\in\mathbb{R}[\x]$ be hyperbolic with respect to $\a\in\mathbb{R}^n$. 
Then $p$ is hyperbolic with respect to every point in the connected component of $\{\v\in\mathbb{R}^n:\, p(\v)\neq0\}$ that contains $\a$. \end{lemma} Our first step is the following observation: \begin{lemma}\label{lem:psd_stable} Let $P\in\mathbb{R}[\X]$ be a homogeneous polynomial. Then $P$ is PSD-stable if and only if the following two conditions hold: \begin{enumerate} \item $P(A)\neq0$ for all positive definite matrices $A$; \item $P(\Diag(x_1, \dots, x_n) + M)\in\mathbb{R}[\x]$ is stable for every real symmetric matrix $M$. \end{enumerate} \end{lemma} \begin{proof} First assume that $P$ is PSD-stable and let $A$ be a positive definite matrix. By definition we have $P(\mathrm{i} A)\neq0$. Since $P$ is homogeneous, this implies that $P(A)\neq0$. Further let $z_i=a_i+ \mathrm{i} b_i$ in the upper half-plane. Then $P(\Diag(z_1, \dots, z_n) + M)$ is nonzero for any real symmetric matrix $M$, since $\Diag(b_1, \dots, b_n)$ is a positive definite matrix. For the other direction we first observe that condition (2) implies that $P$ is hyperbolic with respect to the identity matrix. Indeed, the univariate polynomial $P(tI+M)$ is stable and thus real-rooted for every real symmetric matrix $M$. Now condition (1) together with Lemmas \ref{lem:hyp1} and \ref{lem:hyp2} imply the claim. \end{proof} \begin{proof}[Proof of \Cref{thm:minor_lift_stable}] Let $p\in\mathbb{R}[\x]$ be multiaffine, homogeneous and stable. Then by \cite[Thm.~6.1]{halfplane} all nonzero coefficients of $p$ have the same sign. Without loss of generality assume that all are positive. Then $P=\Phi(p)$ is clearly positive on positive definite matrices since the minors of a positive definite matrix are positive. 
Thus by \Cref{lem:psd_stable}, it remains to show that $$P(\Diag(x_1, \dots, x_n) + M)=\left(p^*\left(\frac{\partial}{\partial x_{1}}, \frac{\partial}{\partial x_{2}}, \dots, \frac{\partial}{\partial x_{n}}\right)\right) \det(\Diag(x_1, \dots, x_n) + M)$$ is stable for every real symmetric matrix $M$. The polynomial $\det(\Diag(x_1, \dots, x_n) + M)$ is stable, and $p^*$ is stable as well by \cite[Prop.~4.2]{halfplane}. Thus the polynomial $P(\Diag(x_1, \dots, x_n) + M)$ is also stable by \cite[Thm.~1.3]{weyl}. Let $A \in \PSD_k \subseteq \R^{n \times n}_{\mathrm{sym}}$ be $k$-locally PSD. Then for every $k$-subset $S \subseteq [n]$, we have $\det((A+t\I)|_S) > 0$ for all $t > 0$. Hence, if $p$ has degree $k$ with all coefficients positive, then $P(A-t\I) > 0$ for all $t < 0$, so all roots of $t\mapsto P(A-t\I)$ are non-negative. This implies that $A \in H(P)$. \end{proof} \begin{remark} Given a multiaffine homogeneous stable polynomial $p\in\mathbb{R}[x_1,\ldots,x_n]$, the minor lift map gives a hyperbolic polynomial $P$ in the entries of a symmetric $n\times n$ matrix whose restriction to the diagonal equals $p$. Such polynomials can also be constructed for stable polynomials that are not necessarily multiaffine. Since we are mainly interested in multiaffine polynomials, we only briefly sketch one possible such construction. For a stable homogeneous polynomial $p\in\mathbb{R}[x_1,\ldots,x_n]$ one can find a multiaffine stable polynomial $q\in\mathbb{R}[z_{11},\ldots,z_{1d_1},\ldots, z_{nd_n}]$ such that we can recover $p$ from $q$ by substituting each variable $z_{ij}$ by $x_i$, see \cite[\S 2.5]{halfplane}. This polynomial $q$ is called a \emph{polarization} of $p$. If we restrict the minor lift of $q$ to suitable block-diagonal matrices, we obtain a hyperbolic polynomial with the desired properties for $p$.
\end{remark} \begin{remark} Using \cite[Thm.~3.2]{branden2020lorentzian} one can show that the analogous statement to \Cref{thm:minor_lift_stable} for \emph{Lorentzian polynomials}, a recent generalization of stable polynomials, holds as well. \end{remark} \section{Hyperbolic polynomials and sums of squares}\label{sec:soshyperbolicity} Let $p\in\mathbb{R}[x]$ be hyperbolic with respect to $v \in \mathbb{R}^n$ and let $a,b \in H_v(p)$. Then the mixed derivative \[ \Delta_{a,b}(p)=\operatorname{D}_ap\cdot\operatorname{D}_bp-p\cdot\operatorname{D}_a\operatorname{D}_bp \] is globally nonnegative by Theorem 3.1 in \cite{interlacers}. If some power $p^r$ has a definite symmetric determinantal representation, i.e., can be written as \[ p^r=\det(x_1A_1+\cdots + x_n A_n) \] for some real symmetric (or complex Hermitian) matrices $A_1,\dots,A_n$ with $v_1A_1+\cdots+v_nA_n$ positive definite, then $\Delta_{a,b}(p)$ is even a sum of squares \cite[Cor.~4.3]{interlacers}. Therefore, any instance where $\Delta_{a,b}(p)$ is not a sum of squares gives an example of a hyperbolic polynomial none of whose powers has a definite symmetric determinantal representation. Another source of interest in such examples comes from the point of view taken in \cite{soshyperbolic}, as these give rise to families of polynomials that are not sums of squares but whose nonnegativity can be certified via hyperbolic programming. Saunderson \cite{soshyperbolic} characterized all pairs $(d,n)$ for which there exists such a hyperbolic polynomial $p\in\mathbb{R}[x]=\mathbb{R}[x_1,\ldots,x_n]$ of degree $d$, except when $d=3$. In this section we will construct an explicit hyperbolic cubic $p$ in $6$ variables for which there are two points $a,b$ in the hyperbolicity cone such that $\Delta_{a,b}(p)$ is not a sum of squares.
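For comparison, a case where the mixed derivative \emph{is} a sum of squares: $e_2(x_1,x_2,x_3)=\det\begin{pmatrix}x_1+x_3 & x_3\\ x_3 & x_2+x_3\end{pmatrix}$ is a definite symmetric determinantal representation (the matrix at $(1,1,1)$ is positive definite), so $\Delta_{a,b}(e_2)$ is a sum of squares for all $a,b$ in the hyperbolicity cone. The following sympy sketch exhibits an explicit certificate for $a=b=(1,1,1)$; it is an illustration only and is not used in the construction below.

```python
# Sanity check (an illustration, not part of the construction below):
# for p = e_2(x1, x2, x3) and a = b = (1, 1, 1), the mixed derivative
# Delta_{a,b}(p) is a sum of squares, as guaranteed by [interlacers, Cor. 4.3].
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
p = x1*x2 + x1*x3 + x2*x3  # e_2, hyperbolic with respect to (1, 1, 1)

def D_e(f):
    # directional derivative in the direction e = (1, 1, 1)
    return sum(sp.diff(f, v) for v in xs)

Delta = sp.expand(D_e(p)*D_e(p) - p*D_e(D_e(p)))

# Explicit SOS certificate: Delta = 3*(x1^2 + x2^2 + x3^2) + (x1 + x2 + x3)^2
sos = 3*(x1**2 + x2**2 + x3**2) + (x1 + x2 + x3)**2
print(sp.expand(Delta - sos) == 0)  # True
```
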
\begin{remark} If there are two points $a,b$ in the closed hyperbolicity cone of $p$ such that $\Delta_{a,b}(p)$ is not a sum of squares, then there are also such points in the interior of the hyperbolicity cone, as the cone of sums of squares is closed. \end{remark} \begin{remark} In \cite{soshyperbolic} Saunderson constructs a hyperbolic cubic in $43$ variables whose \emph{B\'ezout matrix} is not a matrix sum of squares. This is the smallest such example that has been known so far. The top left entry of the B\'ezout matrix is the mixed derivative that we are studying. Thus if the latter is not a sum of squares, then the B\'ezout matrix is not a matrix sum of squares. \end{remark} Consider the complete graph $K_4$ on $4$ vertices. We define the spanning tree polynomial of $K_4$ as the element of $\mathbb{R}[x_e : e \in E(K_4)]$ given by \[ t_{K_4}(x) \ = \ \sum_{\tau} \prod_{e \in \tau}x_e \, , \] where $\tau \subset E(K_4)$ ranges over all edge sets of spanning trees of $K_4$. The polynomial $t_{K_4}$ is multiaffine, homogeneous and stable \cite[Thm.~1.1]{halfplane}. Let $T$ be its minor lift. Finally, let $p$ be the polynomial obtained by evaluating $T$ at the matrix of indeterminates \[ A=\bordermatrix{ & 12 & 13 & 14 & 23 & 24 & 34 \cr &x_{1}&0&0&0&0&0\cr &0&x_{2}&a&b&c&0\cr &0&a&x_{2}&c&b&0\cr &0&b&c&x_{2}&a&0\cr &0&c&b&a&x_{2}&0\cr &0&0&0&0&0&x_{3}}. \] Thus $p$ is hyperbolic with respect to every positive definite matrix that can be obtained by specializing entries of $A$ to some real numbers. In particular, the polynomial $$W=\frac{\partial p}{\partial x_{1}}\cdot\frac{\partial p}{\partial x_{3}}-p\cdot\frac{\partial^2 p}{\partial x_{1}\partial x_{3}}$$ is nonnegative. We will show that it is not a sum of squares. We first study the real zero set of $W$.
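The explicit expansion of $W$ used in the proof of \Cref{lem:idealj} below can be checked by computer. The following sympy sketch is an independent verification, not part of the argument; the only assumption, spelled out in the comments, is that the rows and columns of $A$ are indexed by the edges of $K_4$ in the order $12,13,14,23,24,34$, and that the spanning trees of $K_4$ are exactly the $3$-element edge sets that do not form a triangle.

```python
# Verification sketch (not part of the proof): compute W directly from the
# definitions and compare with the explicit expansion used in the text.
# Assumption: rows/columns of A are indexed by the edges of K4 in the
# order 12, 13, 14, 23, 24, 34, matching the bordermatrix above.
import itertools
import sympy as sp

x1, x2, x3, a, b, c = sp.symbols('x1 x2 x3 a b c')
A = sp.Matrix([
    [x1, 0,  0,  0,  0,  0],
    [0,  x2, a,  b,  c,  0],
    [0,  a,  x2, c,  b,  0],
    [0,  b,  c,  x2, a,  0],
    [0,  c,  b,  a,  x2, 0],
    [0,  0,  0,  0,  0,  x3],
])

# Spanning trees of K4: every 3-element edge set that is not a triangle.
triangles = [{0, 1, 3}, {0, 2, 4}, {1, 2, 5}, {3, 4, 5}]
trees = [S for S in itertools.combinations(range(6), 3)
         if set(S) not in triangles]
assert len(trees) == 16  # Cayley's formula: K4 has 4^{4-2} = 16 spanning trees

# p = T(A): the minor lift of t_{K4} evaluated at A
p = sum(A[list(S), list(S)].det() for S in trees)

W = sp.expand(sp.diff(p, x1) * sp.diff(p, x3) - p * sp.diff(p, x1, x3))
expected = 4*(a**2*b**2 + a**2*c**2 + b**2*c**2 + c**4
              - 8*a*b*c*x2 + 2*a**2*x2**2 + 2*b**2*x2**2)
print(sp.expand(W - expected) == 0)  # True
```
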
\begin{lemma}\label{lem:idealj} The polynomial $W$ is contained in the ideals $J_1,J_2,J_3$ and $J_4$ where \begin{enumerate} \item $J_1$ is generated by all $2\times2$ minors of $A$, \item $J_2$ is generated by all off-diagonal entries of $A$, \item $J_3$ is generated by $a, c$ and $x_2$, \item $J_4$ is generated by $b, c$ and $x_2$. \end{enumerate} \end{lemma} \begin{proof} Part (1) follows from the fact that both $p$ and $\frac{\partial p}{\partial x_{1}}$ are in $J_1$. The other claims are apparent since \[\frac{1}{4}W=a^2 b^2+a^2 c^2+b^2 c^2+c^4-8 a b c x_2+2 a^2 x_2^2+2 b^2 x_2^2.\qedhere\] \end{proof} \begin{defi} An ideal $I$ in a ring $R$ is called \Def{real radical} if $g_1^2+\cdots+g_r^2\in I$ implies $g_1,\ldots,g_r\in I$ for all $g_1,\ldots,g_r\in R$. \end{defi} \begin{lemma}\label{lem:realradj} The ideal $J=\bigcap_{k=1}^4J_k$ is real radical. \end{lemma} \begin{proof} It suffices to show that each $J_k$ is a radical ideal such that the real points are Zariski dense in its zero set. This is clear for $J_2,J_3$ and $J_4$. Using Macaulay2 \cite{M2} we checked that $J_1$ is radical. Moreover, the primary decomposition of $J_1$ shows that the zero set of $J_1$ is a union of linear spaces. This implies that the real zeros of $J_1$ are dense as well. \end{proof} \begin{theorem} The polynomial $W$ is not a sum of squares. \end{theorem} \begin{proof} Assume for the sake of contradiction that $W=g_1^2+\cdots+g_r^2$ for some polynomials $g_i$. Since $W\in J$ by Lemma \ref{lem:idealj}, Lemma \ref{lem:realradj} implies that each $g_i$ is in $J$. Thus $W$ is even in the ideal $J\cdot J$. Using Macaulay2 \cite{M2} one checks that this is not the case. \end{proof} \begin{remark} In the terminology of \cite{soshyperbolic} this shows in particular that $p$ is neither SOS-hyperbolic nor weakly SOS-hyperbolic. \end{remark} \section{Open problems}\label{sec:openproblems} Our work sparks a wide range of open problems. We mention some of them here.
For several of these problems, we presented proofs for some special cases, whereas the general case remains open. \subsection{Hyperbolic Schur-Horn Theorem} In \Cref{sec:hadamard-fischer} we proved hyperbolic generalizations of the Hadamard-Fischer inequality and of Koteljanskii's inequality, namely \Cref{thm:hadamard-fischer} and \Cref{thm:kota}. Here we present a potential generalization of another classical result from linear algebra, the Schur-Horn theorem. The Schur-Horn theorem already appeared in our section on the spectral containment property, where it plays a major role and where a generalized version, the Schur-Horn property, was introduced. Here we propose a different generalization of the Schur-Horn theorem in terms of hyperbolic polynomials, formulated in the language of majorization. Given polynomials $p$ and $q$ of the same degree, both hyperbolic with respect to the direction $v$, we say that $p$ majorizes $q$ in direction $v$ if for all $x \in \mathbb{R}^n$, the roots of $p(x-tv)$ (as a polynomial in $t$) majorize the roots of $q(x-tv)$. Recall that given $\alpha,\beta\in\mathbb{R}^k$, $\alpha$ majorizes $\beta$ if $\sum_{i=1}^k \alpha_i=\sum_{i=1}^k \beta_i$ and the following holds: let $\alpha',\beta'$ be obtained from $\alpha,\beta$ by reordering coordinates such that $\alpha'_1\ge\cdots\ge\alpha'_k$ and $\beta'_1\ge\cdots\ge\beta'_k$; then $\sum_{i=1}^m \alpha'_i\ge\sum_{i=1}^m \beta'_i$ for each $1\le m<k$. Equivalently, $\alpha$ majorizes $\beta$ if and only if $\beta\in\conv(\mathfrak{S}_k(\alpha))$, where the symmetric group $\mathfrak{S}_k$ acts on $\alpha$ by permuting its coordinates. In this language, we can restate the Schur direction of the Schur-Horn theorem as follows: \begin{lemma}[Schur] $\det(X)$ majorizes $\det(\diag(X))$ in the identity direction. \end{lemma} We conjecture that this holds for all homogeneous PSD-stable lpm-polynomials. \begin{conj} Let $P$ be a homogeneous PSD-stable lpm-polynomial. Then $P(X)$ majorizes $P(\diag(X))$ in the identity direction.
\end{conj} Recall that for $1\le k\le n$ we defined $E_k(X)=\sum_{|S|=k}\det X_S$ to be the minor lift of the degree $k$ elementary symmetric polynomial, i.e., the sum of all $k\times k$ principal minors of $X$. We are able to prove this conjecture for rescalings of $E_k$. Our proof will use the following result from \cite{BB10majorizationpreserver}, which follows from Theorem 1 of their paper. \begin{prop}\label{prop:derivative-maj} If $p$ majorizes $q$ in direction $v$, then $D_v p$ majorizes $D_v q$ in direction $v$, where $D_v$ denotes the directional derivative in the $v$ direction. \end{prop} Now we are ready to state and prove our result. \begin{prop} Let $D$ be any positive diagonal matrix, and $P(X)=E_k(D^{-1/2}XD^{-1/2})$. Then $P(X)$ majorizes $P(\diag(X))$ in the identity direction. \end{prop} \begin{proof} First notice that $\det(X)$ majorizes $\det(\diag(X))$ in the $D$ direction, i.e., the roots of $\det(X-tD)$ majorize the roots of $\det(\diag(X)-tD)$ for any $X$. This follows from applying Schur's theorem to the symmetric matrix $D^{-1/2}XD^{-1/2}$, since we have $\det(X-tD)=\det(D)\det(D^{-1/2}XD^{-1/2}-tI)$ and similarly $\det(\diag(X)-tD)=\det(D)\det(D^{-1/2}\diag(X)D^{-1/2}-tI)$. Also notice that $D^{-1/2}\diag(X)D^{-1/2}=\diag(D^{-1/2}XD^{-1/2})$. Now we apply \Cref{prop:derivative-maj} $(n-k)$ times with $p=\det(X)$, $q=\det(\diag(X))$ and $v=D$. This shows that $p^{(k)}(X)=\sum_{|S|=k}\det(X_S)\prod_{i\notin S}D_{ii}$ majorizes $q^{(k)}(X)=\sum_{|S|=k}\det(\diag(X)_S)\prod_{i\notin S}D_{ii}$ in the $D$ direction. Computing $p^{(k)}(X-tD)$ we have \begin{align*} p^{(k)}(X-tD)&=\sum_{|S|=k}\det(X_S-t D_S)\prod_{i\notin S}D_{ii}\\ &=\sum_{|S|=k}\det (D_S)\det(D_S^{-1/2}X_S D_S^{-1/2}-t I_S)\prod_{i\notin S}D_{ii}\\ &=\det(D)\sum_{|S|=k}\det(D_S^{-1/2}X_S D_S^{-1/2}-t I_S)\\ &=\det(D)E_k(D^{-1/2}XD^{-1/2}-tI). \end{align*} Similarly, $q^{(k)}(X-tD)=\det(D)E_k(D^{-1/2}\diag(X)D^{-1/2}-tI)$.
This shows that the roots of $E_k(D^{-1/2}XD^{-1/2}-tI)$ majorize the roots of $E_k(D^{-1/2}\diag(X)D^{-1/2}-tI)$. This completes the proof. \end{proof} We may also formulate a hyperbolic generalization of Horn's theorem, which we conjecture to be true but for which we do not have any results. \begin{conj} Let $P$ be any degree $k$ lpm-polynomial. Let $\lambda,\mu\in \mathbb{R}^k$ be such that $\lambda$ majorizes $\mu$. Then there exists a symmetric matrix $X$ such that the roots of $P(X-tI)$ are given by $\lambda$, and the roots of $P(\diag(X)-tI)$ are given by $\mu$. \end{conj} \subsection{Spectral containment property and the Schur-Horn property} We showed that many polynomials have the spectral containment property. Based on these examples and additional computational evidence we conjecture the following: \begin{conj} All homogeneous multiaffine stable polynomials have the spectral containment property. \end{conj} Several special cases of this conjecture are of particular interest, and we state them separately. \begin{conj} All quadratic homogeneous multiaffine stable polynomials have the spectral containment property. \end{conj} This case is of special interest because quadratic multiaffine polynomials have especially simple minor lifts. Namely, if \[ p(x) = \sum_{i \neq j}a_{ij}x_ix_j, \] then \[ P(X) = p(\diag(X)) - \sum_{i \neq j}a_{ij}X_{ij}^2. \] It is therefore plausible that this conjecture could be proved (or disproved) by exploiting this special structure. \begin{conj}\label{conj:diagonal_rescaling} Let $D$ be a positive definite diagonal matrix, and let $p(x) = e_k(Dx)$. Then $p(x)$ has the spectral containment property. \end{conj} Again, this is of special interest because of its relation to diagonal congruence, as we now explain. \begin{lemma} Let $p$ be a homogeneous, multiaffine stable polynomial, let $D$ be a positive definite diagonal matrix, and let $q = p(Dx)$. Then $x \in H(q)$ if and only if $Dx \in H(p)$, and $X \in H(Q)$ if and only if $D^{1/2}XD^{1/2} \in H(P)$.
\end{lemma} \begin{proof} We have $x \in H(q)$ if and only if $q(x + t\vec{1}) \ge 0$ for all $t \ge 0$. This is equivalent to the statement that $p(D(x + t\vec{1})) = p(Dx + t\diag(D)) \ge 0$ for all $t \ge 0$. Notice though that if $D$ is positive definite, then $\diag(D)$ is in the interior of the hyperbolicity cone of $p$. Therefore, $p(Dx + t\diag(D)) \ge 0$ for all $t \ge 0$ if and only if $Dx \in H(p)$. Similarly, if $p(x) = \sum_{S\subseteq [n]} a_S \prod_{i \in S}x_i$, we see that $q(x) = \sum_{S\subseteq [n]} \bigl(\prod_{i \in S} D_{ii}\bigr) a_S \prod_{i \in S}x_i$. Therefore, \[ Q(X) = \sum_{S \subseteq [n]} \Bigl(\prod_{i\in S}D_{ii}\Bigr) a_S \det(X|_S) = \sum_{S \subseteq [n]} a_S \det((D^{1/2}XD^{1/2})|_S) = P(D^{1/2}XD^{1/2}). \] We thus have that \[ Q(X+tI) = P(D^{1/2}(X+tI)D^{1/2}) = P(D^{1/2}XD^{1/2} + tD). \] Because $D$ is positive definite, it is in the interior of $H(P)$, and therefore, $P(D^{1/2}XD^{1/2} + tD) \ge 0$ for all $t \ge 0$ if and only if $D^{1/2}XD^{1/2} \in H(P)$. This implies the result. \end{proof} From this we see that \Cref{conj:diagonal_rescaling} is equivalent to the statement that for any $X \in H(E_k)$ and any positive definite diagonal matrix $D$ there exists an eigenvalue vector $\lambda$ of $D^{1/2}XD^{1/2}$ such that $D^{-1}\lambda \in H(e_k)$. This gives a quantitative relationship between the eigenvalues of a symmetric matrix $X$ and those of $D^{1/2}XD^{1/2}$, which are of fundamental interest in a number of situations. The Schur-Horn property is another interesting property of a multiaffine polynomial. Once again, despite computer search, we are unable to find an example of a multiaffine homogeneous polynomial that does not have the Schur-Horn property. From this, we conjecture: \begin{conj} All homogeneous multiaffine polynomials have the Schur-Horn property. \end{conj} \section{Our results in detail}\label{sec:results} Our discussion of lpm polynomials can also be viewed from a different perspective.
A polynomial $p \in \mathbb{R}[\x] := \mathbb{R}[x_1,\dots,x_n]$ is \Def{multiaffine} if it is a linear combination of square-free monomials $x^J = \prod_{j \in J} x_j$ for $J \subseteq [n]$. We define a linear map $\Phi$ from the vector subspace of multiaffine polynomials in $x_1,\ldots,x_n$ to the vector space of lpm polynomials, which we call the \Def{minor lift map}, as follows. The minor lift of \[ p(\x) = \sum_{J \subseteq [n]} a_J \prod_{i \in J} x_i \] is the polynomial $P = \Phi(p)$ given by \[ P(X) = \sum_{J \subseteq [n] } a_J \det(\X_J). \] We note that $\deg(\Phi(p))=\deg(p)$ and that $\Phi(p)$ is homogeneous if and only if $p$ is homogeneous. When it is unambiguous, we will use lower case letters such as $p$ to denote a homogeneous multiaffine polynomial $p \in \mathbb{R}[x_1, \dots, x_n]$, and use the corresponding upper case letters for the minor lift, so that $P$ is equal to $\Phi(p)$. \newcommand\PSD{\mathrm{PSD}}% \subsection{Properties of the minor lift map and constructions} Our first result is that the minor lift map sends stable polynomials to PSD-stable polynomials. Even stronger, let us call a matrix $A$ \Def{$k$-locally PSD} if every principal $k\times k$-submatrix $A_J$ of $A$ is positive semidefinite. The collection $\PSD_k$ of $k$-locally PSD matrices is a closed convex cone and $\PSD_d \subset \PSD_{d-1} \subset \cdots \subset \PSD_1$. \begin{theorem}\label{thm:minor_lift_stable} Let $p$ be a homogeneous multiaffine polynomial of degree $k$. If $p$ is stable, then $P = \Phi(p)$ is hyperbolic with $\PSD_k \subseteq H(P)$. In particular, $P$ is PSD-stable. \end{theorem} For $A \in \R^{n \times n}_{\mathrm{sym}}$, let $\pi(A) = (A_{11},A_{22},\dots,A_{nn})$ be the projection to the diagonal. A first implication for the associated hyperbolicity cones is as follows. \begin{cor}\label{cor:diag} Let $p$ be a homogeneous multiaffine stable polynomial and $P = \Phi(p)$. If $A \in H(P)$, then $p(\pi(A)) \geq P(A)$ and $\pi(A) \in H(p)$.
\end{cor} Using Theorem \ref{thm:minor_lift_stable}, we are able to construct interesting new hyperbolic polynomials. Given a hyperbolic polynomial $p$ and points $\a,\v$ in the hyperbolicity cone of $p$, the \emph{Rayleigh difference} $\Delta_{\v,\a}(p)=D_{\v}p \cdot D_{\a} p - p \cdot D_{\v} D_{\a} p$ is a polynomial nonnegative on $\mathbb{R}^n$ \cite{interlacers}. If $\Delta_{\v,\a}(p)$ is not a sum of squares, this has interesting implications for determinantal representations, and it yields a hyperbolic certificate of nonnegativity of $\Delta_{\v,\a}(p)$ that cannot be recovered by sums of squares. Saunderson \cite{soshyperbolic} characterized all pairs $(d,n)$ for which there exists such a hyperbolic polynomial $p\in\mathbb{R}[x_1,\ldots,x_n]$ of degree $d$, except when $d=3$, where the smallest known example with a Rayleigh difference that is not a sum of squares depends on 43 variables. We are able to reduce the number of variables to 6. See \Cref{sec:soshyperbolicity} for more details. \begin{theorem}\label{thm:nonsoshyperbolic} There exists an (explicit) degree-3 hyperbolic polynomial $p$ in $6$ variables and vectors $\v,\a \in H(p)$ such that the Rayleigh difference $\Delta_{\v,\a}(p)$ is not a sum of squares. \end{theorem} \subsection{Hyperbolic determinantal inequalities} We generalize some well-known theorems from linear algebra to the setting of lpm polynomials. Note that the cone of positive semidefinite matrices is precisely the hyperbolicity cone of $\det(\X)$, which is the minor lift of $e_n(x) = x_1\cdots x_n$. For our generalizations, we replace the determinant by the minor lift of a homogeneous multiaffine stable polynomial, and the cone of positive semidefinite matrices by the hyperbolicity cone of the minor lift. Hadamard's inequality is a classical result comparing the determinant of any positive semidefinite matrix with the product of its diagonal entries.
\begin{theorem*}[Hadamard's inequality] Let $A$ be an $n\times n$ positive semidefinite matrix. Then $\det (A)\le \prod_{i=1}^n A_{ii}$. \end{theorem*} An equivalent statement of this inequality is as follows: if $V$ is any, not necessarily symmetric, real $n\times n$-matrix with columns $\v_1,\dots, \v_n$, then $|\det(V)| \le \prod_{i=1}^n \|\v_i\|_2$. This yields a geometric interpretation, since the absolute value of the determinant is the volume of an $n$-dimensional parallelepiped with edges $\v_1,\dots,\v_n$. Fischer's inequality generalizes Hadamard's inequality, and relates the determinant of a positive semidefinite matrix to its principal minors. Let $\Pi = \{S_1, \dots, S_m\}$ be a partition of the set $[n]$ into $m$ disjoint subsets. Given such a partition, we write $i \sim j$ if $i,j \in S_k$ for some $k=1,\dots,m$. Let $\mathcal{D}_{\Pi}$ be the vector space of symmetric matrices that are \Def{block diagonal} with respect to $\Pi$: \[ \mathcal{D}_{\Pi} = \{A \in \R^{n \times n}_{\mathrm{sym}} : A_{ij} = 0 \textnormal{ if } i \not \sim j \}. \] Let $\pi_{\Pi}$ be the orthogonal projection from $\R^{n \times n}_{\mathrm{sym}}$ onto the subspace $\mathcal{D}_{\Pi}$. \begin{theorem*}[Fischer's inequality] Let $A$ be a positive semidefinite matrix. Then \[ \det (\pi_{\Pi}(A)) \ \geq \ \det (A) \, . \] \end{theorem*} \noindent Observe that Hadamard's inequality is simply Fischer's inequality with the partition $\Pi = \{ \{1\},\dots,\{n\}\}$. We now give a hyperbolic generalization of the Fischer-Hadamard inequality. For $P = \Phi(e_k)$, our hyperbolic Hadamard inequality was obtained in~\cite{Hadamard-kPos}. \begin{theorem}[Hyperbolic Fischer--Hadamard inequality]\label{thm:hadamard-fischer} Let $P$ be a homogeneous PSD-stable lpm-polynomial and $\Pi$ a partition. Then \[ P(\pi_{\Pi}(A)) \ \ge \ P(A) \] holds for all $A \in H(P)$.
\end{theorem} The classical Fischer--Hadamard inequality is a consequence of a more general inequality known as Koteljanskii's inequality, which handles the case of overlapping blocks \cite{Koteljanskii}. \begin{theorem*}[Koteljanskii's inequality] Let $S$ and $T$ be two subsets of $[n]$ and $A$ be a positive semidefinite $n \times n$ matrix. Then \[ \det(A_S) \det(A_T) \ \ge \ \det(A_{S \cup T}) \det(A_{S \cap T}) \, . \] \end{theorem*} While we were not able to generalize Koteljanskii's inequality in a way that implies the hyperbolic Fischer--Hadamard inequality, we found a hyperbolic generalization of Koteljanskii's inequality, which uses a different interpretation of what it means to take a minor of a matrix. \begin{defi}\label{def:restriction} Given a degree $k$ homogeneous lpm polynomial $P$ and $T\subseteq [n]$ with $|T|\ge n-k$, we define the \Def{restriction} \[ P|_T \ := \ \Bigl(\prod_{i\in[n]\setminus T}\frac{\partial }{\partial X_{ii}}\Bigr)P \, , \] where we take partial derivatives with respect to the diagonal variables not in $T$. \end{defi} With this definition we can state the hyperbolic Koteljanskii inequality, which is related to the negative lattice condition in \cite{BBL09negativedependence}: \begin{theorem}[Hyperbolic Koteljanskii inequality]\label{thm:kota} Let $P$ be a homogeneous PSD-stable lpm-polynomial and $S,T\subseteq [n]$. Then \[ P|_S (A) P|_T (A) \ \ge \ P|_{S\cup T} (A) P|_{S\cap T} (A) \] holds for all $A\in H(P)$. \end{theorem} \subsection{Spectral containment property} If $A$ is an $n\times n$ symmetric matrix, we say that $\lambda \in \mathbb{R}^n$ is an eigenvalue vector of $A$ if the entries of $\lambda$ are precisely the eigenvalues of $A$ with appropriate multiplicities. Note that the set of eigenvalue vectors of a symmetric matrix $A$ is invariant under permutations of the coordinates. We recall the example of the $k$-th elementary symmetric polynomial $e_k(\x)$ and its minor lift $E_k(\X)$ from the introduction.
It is well-known that $E_k(A)=e_k(\lambda)$ where $\lambda$ is an eigenvalue vector of $A$. In particular, it follows that $A\in H(E_k)$ implies that $\lambda \in H(e_k)$. Notice that since $e_k$ is invariant under permutations of coordinates, the order in which we list the eigenvalues of $A$ in an eigenvalue vector does not matter. This motivates the following definition. \begin{defi}\label{def:eigenvalue_containment} A homogeneous multiaffine stable polynomial $p \in \mathbb{R}[x_1,\dots,x_n]$ has the \Def{spectral containment property} if for any $A \in H(P) \subset \R^{n \times n}_{\mathrm{sym}}$, there is an eigenvalue vector $\lambda \in \mathbb{R}^n$ of $A$ such that $\lambda \in H(p)$. \end{defi} \begin{remark} One could make the stronger requirement in \Cref{def:eigenvalue_containment} that for all $A \in H(P)$ \emph{all} eigenvalue vectors of $A$ lie in $H(p)$, but this seems to be too restrictive; we do not have any examples of polynomials besides the elementary symmetric polynomials with this stronger property. \end{remark} We now give a number of polynomials which have the spectral containment property: \begin{theorem}\label{thm:eigenvalue_containment} The following classes of polynomials have the spectral containment property: \begin{enumerate} \item The elementary symmetric polynomials $e_1,\dots,e_n$. \item For any $n \ge k \ge d$, and any $|\varepsilon|$ sufficiently small, $e_d(x_1, \dots, x_n) + \varepsilon e_d(x_1, \dots, x_k)$. \item Stable linear polynomials. \item Any degree $n-1$ stable polynomial that interlaces $e_{n-2}$. \item $e_2(x_1, x_2, x_3, x_4) - \varepsilon(x_1x_2 + x_1x_3)$ for $\varepsilon$ sufficiently small. \end{enumerate} Moreover, if $p$ has the spectral containment property, and $x_0$ is a variable not used in $p$, then $x_0p$ has the spectral containment property.
\end{theorem} While this property may seem mysterious, we conjecture that it is in fact ubiquitous: \begin{conj}\label{conj:eigenvalue_containment_strong} Every homogeneous multiaffine stable polynomial has the spectral containment property. \end{conj} If \Cref{conj:eigenvalue_containment_strong} is true, then \Cref{thm:minor_lift_stable} implies that for every $k$-locally PSD matrix $A$ and every homogeneous multiaffine stable polynomial $p$ of degree $k$, some eigenvalue vector of $A$ is contained in $H(p)$. This may seem like a very strong condition on the eigenvalues of $A$, but as we show below it is equivalent to the fact that every eigenvalue vector of $A$ is contained in $H(e_k)$, which we already observed above. Let $\mathfrak{S}_n$ denote the symmetric group on $n$ letters and let it act on $\mathbb{R}^n$ by permuting coordinates. \begin{theorem}\label{thm:permutation} Let $e_k\in\mathbb{R}[\x]$ be the elementary symmetric polynomial of degree $k$ and $h\in\mathbb{R}[\x]$ be a nonzero homogeneous multiaffine stable polynomial of degree $k$. If $\v \in H(e_k)$, then there exists a permutation $\tau \in \mathfrak{S}_n$ such that $\tau(\v) \in H(h)$. \end{theorem} In \Cref{sec:schurhornprop} we will also show that \Cref{conj:eigenvalue_containment_strong} would be implied in many cases by another conjecture generalizing the classical Schur--Horn Theorem. \section{Hyperbolic Hadamard-Fischer Inequality}\label{sec:hadamard-fischer} Our goal in this section is to prove \Cref{thm:hadamard-fischer}. We start by making some general observations about supporting hyperplanes of the hyperbolicity cone: \begin{lemma}\label{lem:dual} Let $p\in\mathbb{R}[x]$ be hyperbolic with respect to $a\in\mathbb{R}^n$ and $H_a(p)$ the corresponding hyperbolicity cone. Assume that $p(a)>0$ and that $p$ is reduced in the sense that all its irreducible factors are pairwise coprime.
Then we have the following: \begin{enumerate} \item For all $v\in H_a(p)$ the linear form $L_v=\langle\nabla p(v), x\rangle$ is nonnegative on $H_a(p)$. \item If $v\in\partial H_a(p)$, then $L_v(v)=0$. \item If $b\not\in H_a(p)$, then there exists $v\in\partial H_a(p)$ such that $L_v(b)<0$. \end{enumerate} \end{lemma} \begin{proof} Part (2) is just Euler's identity since $p$ vanishes on $\partial H_a(p)$. If $\nabla p(v) = 0$, then (1) is trivial. Otherwise, the hyperplane $\{ x : \langle\nabla p(v), x\rangle = 0 \}$ is tangent to $\partial H_a(p)$ at $v$. Since $H_a(p)$ is convex and $p$ is positive on the interior of $H_a(p)$, this implies that $L_v(x) \ge 0$ for all $x \in H_a(p)$. In order to prove (3), we first note that by our assumption on $p$, the set of points $c\in\partial H_a(p)$ where $\nabla p(c) = 0$ is nowhere dense. Thus if $b\not\in H_a(p)$, then there is a point $e$ in the interior of $H_a(p)$ such that the line segment $[e,b]$ intersects $\partial H_a(p)$ in a smooth point $v$. Since $L_v(e)>0$ and $L_v(v)=0$, we have $L_v(b)<0$. \end{proof} We now apply the above observations to lpm polynomials. Recall that for a partition $\Pi = \{S_1, \dots, S_m\}$ of $[n]$, we denote by $\mathcal{D}_{\Pi}$ the vector space of block diagonal symmetric matrices with blocks given by $\Pi$, and that $\pi_{\Pi}$ is the orthogonal projection of $\R^{n \times n}_{\mathrm{sym}}$ onto the subspace $\mathcal{D}_{\Pi}$. Further recall that we write $a\sim b$ for $a,b \in [n]$ if $a, b \in S_k$ for some $k=1,\dots,m$. \begin{lemma}\label{lem:offblock} Fix a partition $\Pi=\{S_1, \dots, S_m\}$ of $[n]$ and let $B\subseteq [n]$ be any subset. Then for any $\sigma\in \mathfrak{S}_B$, we have $|\{b\in B \mid b\not\sim \sigma(b) \}|\ne 1$. \end{lemma} \begin{proof} For $b \in B$, consider the orbit $b, \sigma(b),\sigma^2(b), \dots, \sigma^{t-1}(b), \sigma^t(b) = b$.
If $b \in S_k$ but the orbit is not fully contained in $S_k$, then there are $0 \le r < s < t$ such that $\sigma^r(b),\sigma^{s+1}(b) \in S_k$ but $\sigma^{r+1}(b),\sigma^{s}(b) \not\in S_k$. Hence $\sigma^r(b)\not\sim\sigma(\sigma^r(b))$ and $\sigma^{s}(b)\not\sim\sigma(\sigma^{s}(b))$, so every orbit contributes either no element or at least two elements $b'$ with $b'\not\sim\sigma(b')$. Summing over all orbits proves the claim. \end{proof} \begin{lemma}\label{lem:derioff} Let $P$ be an lpm polynomial. If $A \in \mathcal{D}_{\Pi}$, then $\nabla P(A) \in \mathcal{D}_{\Pi}$. \end{lemma} \begin{proof} Since $P$ is a sum of terms of the form $a_B\det(X_B)$ with $B \subseteq [n]$, it suffices to prove the claim for $P=\det(X_B)$. In that case, this is equivalent to saying that if $A \in \mathcal{D}_{\Pi}$ and $i\not\sim j$, then \[ \Bigl(\frac{\partial}{\partial X_{ij}}\det(X_B)\Bigr)(A) = 0. \] Now $\det X_B=\sum_{\sigma\in \mathfrak{S}_B} \mathrm{sgn}(\sigma) \prod_{i\in B} X_{i,\sigma(i)}$ and Lemma~\ref{lem:offblock} applied to each term yields the claim. \end{proof} The preceding lemma allows us to show that the hyperbolicity cone of a hyperbolic lpm polynomial is closed under projections onto $\mathcal{D}_{\Pi}$. \begin{lemma}\label{lem:lpm_projection} Let $P$ be a homogeneous PSD-stable lpm polynomial. If $A \in H(P)$, then $\pi_{\Pi}(A) \in H(P)$. \end{lemma} \begin{proof} Let $P_{\Pi}$ be the restriction of the polynomial $P$ to $\mathcal{D}_{\Pi}$, i.e. $P_{\Pi} = P\circ\iota$ where $\iota : \mathcal{D}_{\Pi} \rightarrow \R^{n \times n}_{\mathrm{sym}}$ is the inclusion map. We have $H(P) \cap \mathcal{D}_{\Pi} = H(P_{\Pi})$. For $A\in H(P)$ we thus have to show that $\pi_{\Pi}(A) \in H(P_\Pi)$. By \Cref{lem:dual} this is equivalent to $\langle\nabla P_\Pi(B), \pi_\Pi(A)\rangle\ge0$ for all $B\in H(P_\Pi)$. But by the previous lemma we have $\langle\nabla P_\Pi(B), \pi_\Pi(A)\rangle=\langle\nabla P(B), A\rangle$, which is nonnegative by \Cref{lem:dual} since $A \in H(P)$. \end{proof} We are now able to show the hyperbolic Fischer--Hadamard inequality. Our proof technique is inspired by the proof of \cite[Thm.~5]{Garding59ineq}.
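Before turning to the proof, here is a quick numeric illustration of \Cref{thm:hadamard-fischer} for $P=E_2$ (a sanity check only, not used in the argument). It relies on the standard identity $E_2(X)=\tfrac12\bigl((\operatorname{tr}X)^2-\operatorname{tr}(X^2)\bigr)$; the $0$-based indices in the partition are an artifact of the code.

```python
# Numeric illustration of the hyperbolic Fischer--Hadamard inequality for
# P = E_2, the sum of all 2x2 principal minors:
#   E_2(X) = ((tr X)^2 - tr(X^2)) / 2.
# A random PSD matrix lies in H(E_2), since PSD matrices are 2-locally PSD.
import numpy as np

rng = np.random.default_rng(0)

def E2(X):
    return 0.5 * (np.trace(X)**2 - np.trace(X @ X))

M = rng.standard_normal((5, 5))
A = M.T @ M                      # positive semidefinite, hence in H(E_2)

# Projection onto block-diagonal matrices for the partition {{0,1},{2,3,4}}
mask = np.zeros((5, 5))
for block in [[0, 1], [2, 3, 4]]:
    mask[np.ix_(block, block)] = 1.0
proj = mask * A                  # entrywise mask = orthogonal projection

print(E2(proj) >= E2(A))  # True
```

In this special case the gap is simply $\sum_{i<j,\, i\not\sim j} A_{ij}^2$, since $A$ and $\pi_\Pi(A)$ have the same trace, so the inequality can also be seen directly.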
\begin{proof}[Proof of \Cref{thm:hadamard-fischer}] Without loss of generality, we can assume that $P(I)>0$. If $A$ is on the boundary of $H(P)$, then $P(A)=0$ and we are done since $\pi_{\Pi}(A)\in H(P)$ implies $P(\pi_{\Pi}(A)) \geq 0$. Therefore, we may assume that $A$ is in the interior of $H(P)$. In this case, let $\epsilon>0$ be sufficiently small such that $A-\epsilon I\in H(P)$; then $\pi_\Pi(A)-\epsilon I=\pi_\Pi(A-\epsilon I)$ is also in $H(P)$. This shows that $\pi_{\Pi}(A)$ is in the interior of $H(P)$ and $P(\pi_{\Pi}(A))>0$. Thus $P$ is hyperbolic with respect to $A$ and $q(t) = P(tA+\pi_{\Pi}(A))\in\mathbb{R}[t]$ is real rooted with negative roots. Let $d$ be the degree of $q(t)$ and let $-\lambda_1,\ldots,-\lambda_d$ be the roots of $q(t)$, with each $\lambda_i>0$. We consider the following coefficients of $q(t)$: \begin{itemize} \item[--] The coefficient of $t^d$ is $P(A)$. \item[--] The coefficient of $t$ is $dP(\pi_{\Pi}(A))$, since $\frac{d}{dt}q(t)|_{t=0} = \langle \nabla P|_{\pi_{\Pi}(A)}, A\rangle$, and by \Cref{lem:derioff}, $\langle \nabla P|_{\pi_{\Pi}(A)}, A\rangle = \langle \nabla P|_{\pi_{\Pi}(A)}, \pi_{\Pi}(A)\rangle = dP(\pi_{\Pi}(A))$. This last equality is due to Euler's identity. \item[--] The constant coefficient is $P(\pi_{\Pi}(A))$. \end{itemize} Thus we have $e_{d-1}(\lambda)=\frac{dP(\pi_{\Pi}(A))}{P(A)}$, and $e_d(\lambda)=\lambda_1\cdots\lambda_d=\frac{P(\pi_{\Pi}(A))}{P(A)}$. Since all $\lambda_i$ are positive, from the AM-GM inequality we have $$\frac{P(\pi_{\Pi}(A))}{P(A)}=\frac{e_{d-1}(\lambda)}{d}\geq (\lambda_1\cdots\lambda_d)^{\frac{d-1}{d}}=\left(\frac{P(\pi_{\Pi}(A))}{P(A)}\right)^{\frac{d-1}{d}}.$$ Since $\frac{P(\pi_{\Pi}(A))}{P(A)}>0$, this yields $\frac{P(\pi_{\Pi}(A))}{P(A)}\ge1$, that is, $P(\pi_{\Pi}(A))\ge P(A)$. This proves the claim. \end{proof} When $P(X)=\det X$, then $H(P)$ is the cone of positive semidefinite matrices and our theorem implies the well-known Fischer's inequality: \begin{cor}[Fischer's inequality] If $A$ is positive semidefinite, then $\det \pi_{\Pi}(A)\geq \det A$.
\end{cor} \begin{remark} The usual statement of Fischer's inequality corresponds to the case of two blocks. This is equivalent to our multi-block version since principal submatrices of a positive semidefinite matrix are also positive semidefinite. \end{remark} In the case where $\Pi=\{\{1\},\ldots,\{n\}\}$, \Cref{thm:hadamard-fischer} and \Cref{lem:lpm_projection} imply \Cref{cor:diag}. We also get the following strengthening of \Cref{thm:hadamard-fischer}. \begin{cor} \label{thm:incfh} Let $P$ be a homogeneous and PSD-stable lpm-polynomial. If $A \in H(P)$, then the polynomial $P((1-t)A + t\pi_{\Pi}(A))$ is monotonically increasing for $t \in [0,1]$. \end{cor} \begin{proof} The polynomial $q(t) = P(tA + (\pi_{\Pi}(A) - A))$ is real rooted, and \[ P((1-t) A + t\pi_{\Pi}(A)) = q^*(t) \] so that $q^*(t)$ is real rooted. Because both $A$ and $\pi_{\Pi}(A)$ are in $H(P)$, we have $q^*(t) \ge 0$ for $t \in [0,1]$. Hence, by interlacing, $\frac{d}{dt} q^*(t)$ has at most one root in the interval $[0,1]$. If there were a root of $\frac{d}{dt} q^*(t)$ in the interval $[0, 1)$, then at such a root $q^*(t)$ would have a maximum on this interval. This is a contradiction to the fact that $q^*(t)$ is maximized at $t = 1$ by \Cref{thm:hadamard-fischer}. Hence, we have that $\frac{d}{dt}q^*(t)$ must in fact be nonnegative on this interval. \end{proof} \section{Hyperbolic Koteljanskii inequality}\label{sec:koteljanskii} Koteljanskii's inequality \cite{Koteljanskii} states that for any $n\times n$ positive semidefinite matrix $A$ and $S,T\subseteq [n]$, $\det A_S \det A_T\ge \det A_{S\cap T} \det A_{S\cup T}$. This is a generalization of the Hadamard--Fischer inequality. Later this inequality was proven to hold for other classes of (possibly non-symmetric) matrices \cite{JN14koteljanskii}. In this section we prove \Cref{thm:kota}, a generalization of Koteljanskii's inequality, where the determinant can be replaced by a PSD-stable lpm polynomial.
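For the classical determinant case, the inequality is easy to check numerically. Below is a minimal sketch in Python (the helper names are ours, and the convention $\det A_\emptyset = 1$ is assumed):

```python
import numpy as np

def principal_minor(A, S):
    """det(A_S): determinant of the principal submatrix of A indexed by S."""
    S = sorted(S)
    if not S:
        return 1.0  # by convention, the determinant of the empty matrix is 1
    return float(np.linalg.det(A[np.ix_(S, S)]))

def koteljanskii_gap(A, S, T):
    """det(A_S) det(A_T) - det(A_{S cap T}) det(A_{S cup T}); nonnegative when A is PSD."""
    S, T = set(S), set(T)
    return (principal_minor(A, S) * principal_minor(A, T)
            - principal_minor(A, S & T) * principal_minor(A, S | T))

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T  # a random positive semidefinite matrix
assert koteljanskii_gap(A, {0, 1, 2}, {1, 2, 3}) >= -1e-9
```

The hyperbolic generalization proved below replaces each principal minor $\det A_S$ by the corresponding restriction $P|_S(A)$ of a PSD-stable lpm polynomial.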
First we need the hyperbolic counterpart of the fact that principal submatrices of a positive semidefinite matrix are again positive semidefinite, and hence have nonnegative determinant. For this we use Renegar derivatives~\cite{Renegar06derivative}. \begin{theorem}\label{thm:Renegar} Let $p$ be a polynomial, hyperbolic with respect to $v$. Let $D_v p$ denote the directional derivative of $p$ in direction $v$. Then $D_v p$ is also hyperbolic with respect to $v$. Furthermore, their hyperbolicity cones satisfy $H_v(p)\subseteq H_v(D_v p)$. \end{theorem} Recall from \Cref{def:restriction} that $P|_T=(\prod_{i\in[n]\setminus T}\frac{\partial }{\partial X_{ii}})P$. Then we have: \begin{cor}\label{cor:subdet} Let $P$ be a homogeneous PSD-stable lpm polynomial of degree $k$ and $A\in H(P)$. Let $T\subseteq [n]$ with $|T|\ge n-k$. Then $P|_T$ is PSD-stable as well and $A\in H(P|_T)$. \end{cor} Now we use the result from \cite{BBL09negativedependence} on negative dependence. For any polynomial $p\in\mathbb{R}[x]$ and index set $S\subseteq [n]$ we denote $\partial^S p=(\prod_{i\in S}\frac{\partial }{\partial x_{i}})p$. \begin{theorem}[{\cite[Sect.~2.1 and Thm.~4.9]{BBL09negativedependence}}]\label{thm:NLC} Let $p$ be a multiaffine stable polynomial with nonnegative coefficients. Then $p$ satisfies the nonnegative lattice condition: for all $S,T\subseteq [n]$ \[ \partial^S p(0) \partial^T p(0) \ \ge \ \partial^{S\cup T} p(0)\partial^{S\cap T}p(0) \, . \] \end{theorem} This theorem directly implies the generalization of Koteljanskii's inequality. \begin{proof}[Proof of \Cref{thm:kota}] Without loss of generality assume that $P(I)>0$. Let $P_A(x)=P(A+\Diag(x))\in \mathbb{R}[x_1,...,x_n]$. It is clear that $P_A$ is multiaffine and $\partial^S P_A(0)=P|_S(A)$ for all $S\subseteq [n]$. It follows from Corollary \ref{cor:subdet} that $P_A$ is stable and has nonnegative coefficients. Thus by Theorem \ref{thm:NLC} it satisfies the nonnegative lattice condition, i.e. 
for all $S,T\subseteq [n],\partial^S P_A(0) \partial^T P_A(0)\ge \partial^{S\cup T} P_A(0)\partial^{S\cap T}P_A(0)$. This completes the proof. \end{proof} \section{The permutation property} The goal of this section is to prove \Cref{thm:permutation}. It says that given any point $v$ in the hyperbolicity cone of $e_k$ and any other homogeneous stable multiaffine polynomial $h$ of the same degree, some permutation of the coordinates of $v$ is in the hyperbolicity cone of $h$. We call this remarkable property of $e_k$ the \emph{permutation property}. We first need some preparation. \begin{lemma} Assume that the homogeneous stable polynomials $g,h\in\mathbb{R}[x_1,\ldots,x_n]$ have nonnegative coefficients and a common interlacer. Then $f=g+h$ is stable. If $v$ is in the hyperbolicity cone of $f$, then $v$ is in the hyperbolicity cone of $g$ or in the hyperbolicity cone of $h$. \end{lemma} \begin{proof} Let $e$ be the all-ones vector. The univariate polynomials $F=f(te-v), G=g(te-v)$ and $H=h(te-v)$ have a common interlacer. Further, all roots of $F$ are nonnegative. The existence of a common interlacer implies that $G$ and $H$ have at most one negative root each. Assume for the sake of a contradiction that both $G$ and $H$ have a negative root. Then $G$ and $H$ have the same (nonzero) sign at the smallest root of $F$. This contradicts $F=G+H$. Thus either $G$ or $H$ has only nonnegative roots, which implies the claim. \end{proof} \begin{lemma} Let $h\in\mathbb{R}[x_1,\ldots,x_n]$ be homogeneous, multiaffine and stable. Let $\tau\in\mathfrak{S}_n$ be a transposition. Then $h$ and $\tau(h)$ have a common interlacer. \end{lemma} \begin{proof} Without loss of generality assume that $\tau=(12)$ and let $g=\tau(h)$. We can write $$h=A\cdot x_1\cdot x_2+B\cdot x_1+C\cdot x_2+D$$for some multiaffine $A,B,C,D\in\mathbb{R}[x_3,\ldots,x_n]$.
Then the polynomial $$\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2}\right)h=A\cdot(x_1+x_2)+B+C=\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2}\right)g$$is a common interlacer of $h$ and $g$. \end{proof} \begin{cor}\label{cor:transcone} Let $h\in\mathbb{R}[x_1,\ldots,x_n]$ be homogeneous, multiaffine and stable. Let $\tau\in\mathfrak{S}_n$ be a transposition, $g=\tau(h)$ and $f=\lambda g+\mu h$ for some nonnegative $\lambda, \mu\in\mathbb{R}$. Then $\textnormal{C}(f,e)\subset\textnormal{C}(g,e)\cup\textnormal{C}(h,e)$. \end{cor} \begin{proof} This is a direct consequence of the two preceding lemmas. \end{proof} Let $\mathbb{Q}[\mathfrak{S}_n]$ be the group algebra of the symmetric group $\mathfrak{S}_n$ on $n$ elements, i.e. $\mathbb{Q}[\mathfrak{S}_n]$ is the vector space over $\mathbb{Q}$ with basis $e_g$ for $g\in\mathfrak{S}_n$ whose ring structure is defined by extending $e_g\cdot e_h:=e_{g\cdot h}$ linearly. In $\mathbb{Q}[\mathfrak{S}_n]$ we have the identity \begin{equation}\label{eq:factor} \prod_{j=2}^n\prod_{i=1}^{j-1} \left(1+\frac{1}{j-i}\cdot e_{(ij)}\right)=\sum_{g\in\mathfrak{S}_n}e_g, \end{equation} see for example \cite[p.~192]{nazarov}. From this we obtain our desired theorem. \begin{theorem} Let $\sigma_d\in\mathbb{R}[x_1,\ldots,x_n]$ be the elementary symmetric polynomial of degree $d$ and $h\in\mathbb{R}[x_1,\ldots,x_n]$ any other nonzero homogeneous multiaffine stable polynomial of degree $d$. If $v$ is in the hyperbolicity cone of $\sigma_d$, then $\tau(v)$ is in the hyperbolicity cone of $h$ for some permutation $\tau\in\mathfrak{S}_n$. \end{theorem} \begin{proof} We have $c\cdot\sigma_d=(\sum_{g\in\mathfrak{S}_n}e_g)h$ for some nonzero scalar $c\in\mathbb{R}$. Thus by \Cref{eq:factor} we can write $$c\cdot\sigma_d=\left(\prod_{i=1}^r(1+\lambda_ie_{\tau_i})\right)h$$ for some positive $\lambda_i\in\mathbb{R}$, transpositions $\tau_i\in\mathfrak{S}_n$ and $r=\binom{n}{2}$.
We define $h_k=\left(\prod_{i=1}^k(1+\lambda_ie_{\tau_i})\right)h$ for $k=0,\ldots,r$. Since $h_k=h_{k-1}+\lambda_k\tau_k(h_{k-1})$, \Cref{cor:transcone} implies that if $v$ is in the hyperbolicity cone of $h_k$, then either $v$ or $\tau_k(v)$ is in the hyperbolicity cone of $h_{k-1}$. Since $h_r=c\cdot\sigma_d$ and $h_0=h$, this argument shows that if $v$ is in the hyperbolicity cone of $\sigma_d$, then $(\tau_{i_1}\circ\cdots\circ\tau_{i_s})(v)$ is in the hyperbolicity cone of $h$ for some $1\leq i_1<\cdots<i_s\leq r$. \end{proof}
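As a sanity check, the group-algebra identity \Cref{eq:factor} driving this proof can be verified computationally for small $n$. Below is a minimal sketch in Python (the encoding of permutations and the left-to-right multiplication order are our conventions):

```python
from fractions import Fraction
from itertools import permutations

def compose(g, h):
    """Group product g*h: apply the permutation h first, then g."""
    return tuple(g[h[x]] for x in range(len(g)))

def multiply(a, b):
    """Multiply two group-algebra elements stored as {permutation: coefficient}."""
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            k = compose(g, h)
            out[k] = out.get(k, Fraction(0)) + cg * ch
    return out

def factorization_holds(n):
    """Check prod_{j=2}^n prod_{i=1}^{j-1} (1 + e_{(ij)}/(j-i)) = sum_g e_g in Q[S_n]."""
    identity = tuple(range(n))
    lhs = {identity: Fraction(1)}
    for j in range(2, n + 1):
        for i in range(1, j):
            t = list(range(n))
            t[i - 1], t[j - 1] = t[j - 1], t[i - 1]  # the transposition (i j), 0-indexed
            lhs = multiply(lhs, {identity: Fraction(1), tuple(t): Fraction(1, j - i)})
    rhs = {p: Fraction(1) for p in permutations(range(n))}
    return lhs == rhs

assert all(factorization_holds(n) for n in (2, 3, 4))
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in comparing the two sides.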
\section{Introduction} The success of deep learning in computer vision \citep{alexnet, resnet}, natural language processing \citep{bert, gpt3}, game playing \citep{alphago, atari1, EfficientZero}, and more continues to motivate a growing body of applications of deep learning across an increasingly wide variety of domains. In particular, deep learning is now routinely applied to few-shot learning -- a research challenge that assesses a model's ability to learn to adapt to new tasks, new distributions, or new environments. This has been the main research area where meta-learning algorithms have been applied -- since such a strategy seems promising in a small data regime due to its potential to \textit{learn to learn} or \textit{learn to adapt}. However, it was recently shown \citep{rfs} that a transfer learning model with a fixed embedding can match and outperform many modern sophisticated meta-learning algorithms on numerous few-shot learning benchmarks \citep{Chen2019, Chen, Dhillon2019, Huang2019}. This growing body of evidence -- coupled with these surprising results in meta-learning -- raises the question of whether researchers are applying meta-learning with the right inductive biases \citep{inductive_bias, ShaiShalevShwartz2014} and designing appropriate benchmarks for meta-learning. Our evidence suggests this is not the case. In this work, we show that when the task diversity -- a novel measure of variability across tasks -- is low, MAML (Model-Agnostic Meta-Learning) \citep{maml} learned solutions have the same accuracy as transfer learning (i.e., a supervised learned model with a fine-tuned final linear layer). We want to emphasize the importance of doing such an analysis fairly: with the same architecture, same optimizer, and all models trained to convergence. We hypothesize this was lacking in previous work \citep{Chen2019, Chen, Dhillon2019, Huang2019}.
This empirical equivalence remained true even as the model size changed -- thus further suggesting this equivalence is more a property of the data than of the model. Therefore, we suggest taking a problem-centric approach to meta-learning by applying Marr's levels of analysis \citep{marr_for_ml, marr1982} to few-shot learning -- to identify the family of problems suitable for meta-learning. Marr emphasized the importance of understanding the computational problem being solved and not only analyzing the algorithms or hardware that attempt to solve it. An example given by Marr is marveling at the rich structure of bird feathers without also understanding that the problem they solve is flight. Similarly, there has been analysis of MAML solutions and transfer learning without putting the problem such solutions should solve into perspective \citep{Raghu, rfs}. Therefore, in this work, we hope to clarify some of these results by partially placing the current state of affairs in meta-learning in a problem-centric view. In addition, an important novelty of our analysis is that we make the analysis of intrinsic properties of the data the driving force. \textbf{Our contributions} are summarized as follows: \begin{enumerate} \item \textit{We propose a novel metric that quantifies the \textbf{intrinsic diversity} of the data of a few-shot learning benchmark}. We call it the \textit{diversity coefficient}. It enables analysis of meta-learning algorithms through a \textit{problem-centric framework}. It also goes beyond counting the number of classes, the number of data points, or the number of concatenated data sets -- and instead quantifies the expected diversity/variability of tasks in a few-shot learning benchmark.
\item \textit{We analyze the two most prominent few-shot learning benchmarks -- MiniImagenet and Cifar-fs -- and show that their diversity is low.} These results hold across different ways of measuring the diversity coefficient, suggesting that our approach is robust. In addition, we also show quantitatively that the tasks sampled from them are highly homogeneous. \item With this context, we partially clarify the surprising results from \citep{rfs} by comparing their transfer learning method against models trained with MAML \citep{maml}. \textit{In particular, when making a fair comparison, the transfer learning method with a fixed feature extractor fails to outperform MAML.} We define a fair comparison as one where the two methods are compared using the same architecture (backbone), same optimizer, and all models trained to convergence. We also show that their final layers make similar predictions according to neural network distance techniques: distance-based Singular Value Canonical Correlation Analysis (SVCCA), Projection Weighted CCA (PWCCA), Linear Centered Kernel Analysis (LINCKA), and Orthogonal Procrustes Distance (OPD). This equivalence holds even as the model size increases. \item Interestingly, we also find that even in the regime where task diversity is low (in MiniImagenet and Cifar-fs), the features extracted by supervised learning and MAML are different -- implying that the mechanism by which they function is different despite the similarity of their final predictions. \item \textit{As an actionable conclusion, we provide a metric that can be used to analyze the intrinsic diversity of the data in few-shot learning benchmarks and therefore build more thoughtful environments to drive research in meta-learning}. In addition, our evidence suggests the following test to predict the empirical equivalence of MAML and transfer learning: if the task diversity is low, then transfer learned solutions might fail to outperform meta-learned solutions.
This test is easy to run because our diversity coefficient can be computed with the Task2Vec method \citep{task2vec} using a pre-trained neural network. In addition, our synthetic experiments -- which also test the high diversity regime -- provide preliminary evidence that the diversity coefficient might be predictive of the difference in performance between transfer learning and MAML. \end{enumerate} We hope that this line of work inspires a problem-centric first approach to meta-learning -- which appears to be especially sensitive to the properties of the problem in question. Therefore, we hope future work takes a more thoughtful and \textbf{quantitative} approach to benchmark creation -- instead of focusing only on making huge data sets. \section{Background}\label{background} In this section, we provide a summary of the background needed to understand our main results. \textbf{Model-Agnostic Meta-Learning (MAML):} The MAML algorithm \citep{maml} attempts to meta-learn an initialization of parameters for a neural network so that it is primed for fast gradient descent adaptation. It consists of two main optimization loops: 1) an outer loop used to prime the parameters for fast adaptation, and 2) an inner loop that does the fast adaptation. During meta-testing, only the inner loop is used to adapt the representation learned by the outer loop. \textbf{Transfer Learning with Union Supervised Learning (USL):} Previous work \citep{rfs} shows that an initialization trained with supervised learning, on a union of all tasks, can outperform many sophisticated methods in meta-learning.
In particular, their method consists of two stages: 1) first they use a union of all the labels in the few-shot learning benchmark during meta-training and train with standard supervised learning (SL), then 2) during meta-testing, they use an inference method common in transfer learning: extract a fixed feature from the neural network and fully fine-tune the final classification layer (i.e., the head). Note that our experiments only consider the case where the final layer is a regularized Logistic Regression trained with LBFGS (the Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm). \textbf{Distances for Deep Neural Network Feature Analysis:} To compute the distance between neural networks we use the distance versions of Singular Value Canonical Correlation Analysis (SVCCA) \citep{svcca}, Projection Weighted Canonical Correlation (PWCCA) \citep{pwcca}, Linear Centered Kernel Analysis (LINCKA) \citep{cka} and Orthogonal Procrustes Distance (OPD) \citep{opd}. These distances are in the interval $[0, 1]$ and, while not necessarily formal distance metrics, are guaranteed to be zero when their inputs are equal and nonzero otherwise. This is true because SVCCA, PWCCA, and LINCKA are based on similarity metrics and OPD is already a distance. Note that we use the formula $d(X, Y) = 1 - sim(X, Y)$ for our distance metrics, where $sim$ is one of the SVCCA, PWCCA, or LINCKA similarity metrics and $X, Y$ are matrices of activations (called layer matrices). The distance between two models is computed by choosing a layer and then comparing the features/activations after adaptation for that layer given a batch of tasks represented as a support and query set. A more thorough overview of these metrics for the analysis of internal representations of convolutional neural networks (CNNs) can be found in the appendix, section \ref{cca_background}.
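To make the distance computation concrete, the linear CKA variant can be written in a few lines. Below is a minimal sketch (the function name is ours; it is a simplified stand-in for the library implementations we actually use):

```python
import numpy as np

def linear_cka_distance(X, Y):
    """1 - linear CKA similarity between two layer matrices (examples x features)."""
    X = X - X.mean(axis=0)  # center each feature across examples
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    norm = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return 1.0 - hsic / norm  # similarity lies in [0, 1], so the distance does too
```

By Cauchy--Schwarz the similarity is at most one, and it is invariant to orthogonal transformations and isotropic scaling of the activations, which is what makes it suitable for comparing layers of different networks.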
\textbf{Task2Vec Embeddings for Distance Computation between Tasks:} The diversity coefficient we propose is the expectation of the distance between tasks (explained in more detail in section \ref{def_div}). Therefore, it is essential to define the distance between different pairs of tasks. We focus on the cosine distance between Task2Vec (vectorial) embeddings as in \citep{task2vec}. Therefore, we provide a summary of the Task2Vec method to compute task embeddings. The vectorial representation of tasks provided by Task2Vec \citep{task2vec} is the vector of diagonal entries of the Fisher Information Matrix (FIM) given a fixed neural network as a feature extractor -- also called a \textbf{probe network} -- after fine-tuning the final classification layer to the task. The authors explain this is a good vectorial representation of tasks because: 1) it approximately indicates the most informative weights for solving the current task (up to a second-order approximation); and 2) for rich probe networks like CNNs, the diagonal is more computationally tractable. We choose Task2Vec because the original authors provide extensive evidence that their embeddings correlate with semantic and taxonomic relations between different visual classes -- making it a convincing embedding for tasks \citep{task2vec}. The Task2Vec embedding of task $\tau$ is the diagonal of the following matrix: \begin{equation}\label{fim} \hat F_{D_\tau, f_w} = \hat F(D_\tau, f_w ) = \mathbb E_{x, y \sim \hat p(x \mid \tau) p(y \mid x, f_w)}[ \nabla_w \log p(y \mid x, f_w) \nabla_w \log p(y \mid x, f_w)^{\top}] \end{equation} where $f_w$ is the neural network used as a feature extractor with architecture $f$ and weights $w$, $\hat p(x \mid \tau)$ is the empirical distribution defined by the training data $D_{\tau} = \{ (x_i, y_i ) \}^n_{i=1}$ for task $\tau$, and $p(y \mid x, f_w)$ is a deep neural network trained to approximate the (empirical) posterior $\hat p(y \mid x, \tau)$.
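To make the construction concrete: for a toy probe consisting of a linear softmax head on fixed features, the diagonal of equation \ref{fim} has a closed form. The sketch below is our own simplified illustration -- not the full Task2Vec pipeline, which uses a deep probe network and Monte Carlo estimates:

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def task2vec_diag(W, X):
    """Diagonal of the FIM of a softmax head W (classes x d) over features X (n x d).

    For log p(y|x) = log softmax(Wx)_y, the gradient w.r.t. W[k, j] is
    (1{y=k} - p_k) x_j and E_{y ~ p}[(1{y=k} - p_k)^2] = p_k (1 - p_k), so the
    diagonal entry for W[k, j] is E_x[p_k (1 - p_k) x_j^2].
    """
    P = softmax(X @ W.T)              # (n, classes) predictive probabilities
    V = P * (1.0 - P)                 # per-class Bernoulli variances
    F = V.T @ (X ** 2) / X.shape[0]   # (classes, d) empirical expectation over x
    return F.ravel()                  # the task embedding

def cosine_distance(u, v):
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
```

Two tasks are then compared via `cosine_distance` between their embeddings; since the entries are nonnegative, this distance lies in $[0, 1]$.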
We'd like to emphasize that there is a dependence on the target labels since Task2Vec fixes the feature extractor (using $f_w$) and then fits the final layer (or ``head") to approximate the task posterior distribution $\hat p(y \mid x, \tau)$. In addition, it's important to have a fixed probe network to make different embeddings comparable \citep{task2vec}. \section{Definition of the Diversity Coefficient}\label{def_div} The diversity coefficient aims to measure the intrinsic diversity (or variability) of tasks in a few-shot learning benchmark. At a high level, the diversity coefficient is the expected distance between a pair of different tasks \textbf{given a fixed probe network}. In this work, we choose the distance to be the cosine distance between vectorial representations (i.e. embeddings) of tasks according to Task2Vec, as described in section \ref{background}. Using a fixed probe network is essential because: 1) a fixed probe network means that the distances between different tasks are comparable, as discussed in the original Task2Vec work \citep{task2vec}; and 2) since we are computing the distance between different tasks, we need to make sure the difference comes from intrinsic properties of the data and not from a different source, e.g. if one used different models then this might confound the source of variability in our metric.
We define the \textbf{diversity coefficient} of a few-shot learning benchmark $B$ as follows: \begin{equation}\label{true_diversity_coeff1} \hat div(B) = {\mathbb E}_{\tau_1 \sim \hat p(\tau \mid B), \tau_2 \sim \hat p(\tau \mid B)} {\mathbb E}_{D_1 \sim \hat p(x_1, y_1 \mid \tau_1), D_2 \sim \hat p(x_2, y_2 \mid \tau_2) } \left[ d(\hat F_{D_1, f_w}, \hat F_{D_2, f_w} ) \right] \end{equation} where $f_w$ is the neural network used as a feature extractor with architecture $f$ and weights $w$, $\tau_1, \tau_2$ are tasks sampled from the empirical distribution of tasks $\hat p(\tau \mid B)$ for the current benchmark $B$ (i.e. a batch of tasks with their data sets $\mathcal D = \{(\tau_i, D_{\tau_i}) \}^{N}_{i=1}$), a task $\tau_i$ is the probability distribution $p(x, y \mid \tau)$ of the data, $\hat p(x \mid \tau)$ is the empirical distribution defined by the training data $D_{\tau} = \{ (x_i, y_i ) \}^n_{i=1}$ for task $\tau$, and $d$ is a distance metric (for us, the cosine distance). We also remind the reader that a task in this setting is an n-way, k-shot few-shot learning task. Therefore, each task has n classes, each sampled with k examples, used for the adaptation. We'd like to emphasize that the adaptation here is only to fine-tune the final layer according to the Task2Vec method for the correct computation of the FIM. Therefore, in this setting we combine the support and query set, as the split is not relevant for the computation of the task embedding using Task2Vec. Note that the above formulation can be easily adapted to any distance function between tasks, and is not necessarily specific to using the FIM or cosine distance.
For example, given the true distributions for tasks, one can use real distances between probability distributions, e.g. the Hellinger distance. In addition, one can use a distance function other than the cosine distance -- but we choose it in accordance with the original work on Task2Vec \citep{task2vec}. \section{Experiments} This section explains the experiments backing up our main results outlined in our list of contributions. Experimental details are provided in the supplementary section \ref{experimental_details} and the learning curves displaying the convergence for a fair comparison are in supplementary section \ref{fair_comparison}. \subsection{The Diversity Coefficient of MiniImagenet and Cifar-fs} To put our analysis into a problem-centric framework, we first analyze the problem to be solved through the lens of the diversity coefficient. Recall that the diversity coefficient aims to quantify the intrinsic variation (or distance) of tasks in a few-shot learning benchmark. We show that the diversity coefficient of the popular MiniImagenet and Cifar-fs benchmarks is low with good confidence intervals using four different probe networks in table \ref{div_mi_table}. We argue it's low because the diversity values are approximately in the interval $[0.06,0.117]$ -- given that the minimum and maximum values would be $0.0$ and $1.0$ for the cosine distance. In addition, the individual distances between pairs of tasks are low and homogeneous, as shown in the heat maps in the supplementary section \ref{heat_maps}, figures \ref{heat_map_resnets_divs_mi} and \ref{heat_map_resnets_divs_cifarfs}.
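Given Task2Vec embeddings for a batch of tasks, the estimator in equation \ref{true_diversity_coeff1} reduces to a mean over pairwise cosine distances between different tasks; a minimal sketch (the function name is ours):

```python
import numpy as np

def diversity_coefficient(embeddings):
    """Mean cosine distance over all (N^2 - N)/2 pairs of distinct task embeddings."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize each embedding
    sims = E @ E.T                                    # pairwise cosine similarities
    iu = np.triu_indices(len(E), k=1)                 # pairs of *different* tasks only
    return float(np.mean(1.0 - sims[iu]))
```

With embeddings of 500 tasks, this averages over the $(500^2 - 500)/2 = 124{,}750$ pairwise distances reported in table \ref{div_mi_table}.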
\begin{table}[!h] \centering \begin{tabular}{lll} \toprule Probe Network & Diversity on MI & Diversity on Cifar-fs \\ \midrule Resnet18 (pt) & 0.117 $\pm$ 2.098e-5 & 0.100 $\pm$ 2.18e-5 \\ Resnet18 (rand) & 0.0955 $\pm$ 1.29e-5 & 0.103 $\pm$ 1.05e-5 \\ Resnet34 (pt) & 0.0999 $\pm$ 1.95e-5 & 0.0847 $\pm$ 3.06e-5 \\ Resnet34 (rand) & 0.0620 $\pm$ 8.12e-6 & 0.0643 $\pm$ 9.64e-6 \\ \bottomrule \end{tabular} \caption{ \textbf{ The diversity coefficient of MiniImagenet (MI) and Cifar-fs is low.} The diversity coefficient was computed using the cosine distance between different standard 5-way, 20-shot classification tasks from the few-shot learning benchmark using the Task2Vec method described in section \ref{def_div}. We used 20 shots (number of examples per class) since we can use the whole task data to compute the diversity coefficient (no splitting into support and query set is required for the diversity coefficient). We used Resnet18 and Resnet34 networks as probe networks -- both pre-trained on ImageNet (indicated as ``pt" in the table) and randomly initialized (indicated as ``rand" in the table). We observe that both types of networks and weights give similar diversity results. All confidence intervals were at 95\%. To compute results, we used 500 few-shot learning tasks and only compared pairs of different tasks. This results in $(500^2 - 500)/2 = 124,750$ pair-wise distances used to compute the diversity coefficient. } \label{div_mi_table} \end{table} \subsection{Low Diversity Correlates with Equivalence of MAML and Transfer Learning}\label{mini_imagenet_sl_vs_ml} Now that we have placed ourselves in a problem-centric framework and shown that the diversity coefficient of the popular MiniImagenet and Cifar-fs benchmarks is low -- we proceed to show the failure of transfer learning (with USL) to outperform MAML.
Crucially, the analysis was done using a fair comparison: using the same model architecture, optimizer, and training all models to convergence -- details in section \ref{experimental_details}. We used the five-layer CNN used in \citep{maml, ravi} and Resnet12 as in \citep{rfs}. We provide evidence that in the setting of low diversity: \begin{enumerate} \item The accuracies of an adapted MAML meta-learner and an adapted USL pre-trained model are statistically equivalent, except for one result where transfer learning with USL is worse. This is shown in table \ref{sl_vs_maml_accuracy_comparison_table} and figure \ref{all_archs_all_data_sets_maml_vs_tl_mi_cirfarfs_5cnn_resnet12rfs_perf_comp_bar_fig}. \item The distance for the classification layer decreases sharply according to four distance-based metrics -- SVCCA, PWCCA, LINCKA, and OPD -- as shown in figure \ref{prediction_adapted_sl_vs_adapted_maml}. This implies the predictions of the two are similar. \end{enumerate} For the first point, we emphasize that tables \ref{div_mi_table} and \ref{sl_vs_maml_accuracy_comparison_table} taken together support our central hypothesis: that models trained with meta-learning are not inferior to transfer learning models (using USL) when the diversity coefficient is low. Careful inspection reveals that the methods have the same meta-test accuracy with intersecting confidence intervals -- making the results statistically significant across few-shot benchmarks and architectures. The one exception is the third set of bar plots, where transfer learning with USL is in fact worse. For the second point, refer to figure \ref{prediction_adapted_sl_vs_adapted_maml} and observe that as the depth of the network increases, the distance between the activation layers of a model trained with MAML vs. USL increases until it reaches the final classification layer -- where all four metrics display a noticeable dip.
In particular, PWCCA considers the two prediction layers identical (approximately zero distance). This final point is particularly interesting because PWCCA is weighted according to the CCA weights that stabilize with the final predictions of the network. This means that the PWCCA distance value is reflective of what the network actually learned and gives a more reliable distance metric (for details, refer to the appendix section \ref{pwcca}). This is important because it supports our main hypothesis: that at prediction time there is an equivalence between transfer learning and MAML when the diversity coefficient is low. \begin{table*}[h!] \begin{tabular}{lll} \toprule Meta-train Initialization & Adaptation at Inference & Meta-test Accuracy \\ \midrule Random & no adaptation & 19.3 $\pm$ 0.80 \\ MAML0 & no adaptation & 20.0 $\pm$ 0.00 \\ USL & no adaptation & 15.0 $\pm$ 0.26 \\ \midrule Random & MAML5 adaptation & 34.2 $\pm$ 1.16 \\ \textbf{MAML5} & \textbf{MAML5 adaptation} & \textbf{62.4 $\pm$ 1.64} \\ USL & MAML5 adaptation & 25.1 $\pm$ 0.98 \\ \midrule Random & MAML10 adaptation & 34.1 $\pm$ 1.23 \\ \textbf{MAML5} & \textbf{MAML10 adaptation} & \textbf{62.3 $\pm$ 1.50} \\ USL & MAML10 adaptation & 25.1 $\pm$ 0.97 \\ \midrule Random & Adapt Head only (with LR) & 40.2 $\pm$ 1.30 \\ MAML5 & Adapt Head only (with LR) & 59.7 $\pm$ 1.37 \\ \textbf{USL} & \textbf{Adapt Head only (with LR)} & \textbf{60.1 $\pm$ 1.37} \\ \bottomrule \end{tabular} \centering \caption{ \textbf{ MAML trained representations and supervised trained representations have statistically equivalent meta-test accuracy on MiniImagenet -- which has low diversity.} The transfer model's adaptation is labeled as ``Adapt Head only (with LR)" -- where ``LR" stands for the ``Logistic Regression" used in \citep{rfs}. More precisely, we used Logistic Regression (LR) with LBFGS with the default value for the l2 regularization parameter given by Python's Sklearn.
Note that an increase in inner steps from 5 to 10 with the MAML5 trained model does not provide an additional meta-test accuracy boost, consistent with previous work \citep{Miranda2020}. Note that the fact that the MAML5 representation matches the USL representation when both use the same adaptation method is not surprising -- given that: 1) previous work has shown that the distance between the body of an adapted MAML model and the unadapted MAML model is minimal (which we reproduce in \ref{feature_extractor_sl_vs_maml_vs_adapted_maml} in the green line), and 2) a MAML5 adaptation is only 5 steps of MAML while LR fully converges the prediction layer. We want to highlight that only the MAML5 model achieved the maximum meta-test performance of 0.6 with the MAML5 adaptation -- suggesting that the USL and MAML5 meta-learning algorithms might learn different representations. For USL to have a fair comparison during meta-test time when using the MAML adaptation, we provide the MAML final layer learned initialization parameters to the USL model (but any initialization is fine due to convexity when using a fixed feature extractor). This is needed since during meta-training USL is trained with a union of all the labels (64) -- so it does not even have the right output size of 5 for few-shot prediction. Meta-testing was done in the standard 5-way, 5-shot regime. } \label{sl_vs_maml_accuracy_comparison_table} \end{table*} \begin{figure}[h!] \centering \includegraphics[width=0.45\linewidth] {figs/all_archs_all_data_sets_maml_vs_tl_mi_cirfarfs_5cnn_resnet12rfs_perf_comp_bar.png} \caption{ \textbf{MAML trained models and union supervised trained (USL) models have statistically equivalent meta-test accuracy for MiniImagenet and Cifar-fs with Resnet12 and five-layer CNNs.} This holds for both the Resnet12 architecture used in \citep{rfs} and the 5-layer CNN (indicated as ``5CNN") in \citep{ravi}. Results used a (meta) batch size of 100 tasks and 95\% confidence intervals.
All MAML models were trained with 5 inner steps during meta-training. ``MAML5" and ``MAML10" in the bar plot indicate the adaptation method used at test time, i.e. we used 5 inner steps and 10 inner steps at test time. MiniImagenet is abbreviated as ``MI" in the figure. } \label{all_archs_all_data_sets_maml_vs_tl_mi_cirfarfs_5cnn_resnet12rfs_perf_comp_bar_fig} \end{figure} \begin{figure*}[h!] \centering \includegraphics[width=0.8\linewidth]{figs/lr_sl_vs_adapted_maml_adapted_maml_all_metrics_head_adaptation.png} \caption{ \textbf{The classification layers of transfer learning and a MAML5 model decrease in distance -- implying similar predictions.} More precisely, an initialization trained with 5 inner steps (MAML5) has an increasingly similar head (classifier) after adaptation with MAML5 compared to the classifier layer of the Union Supervised Learning (USL) model that has been adapted only at the final layer. In particular, the USL model has been adapted with Logistic Regression (LR) with LBFGS with the default value for the l2 regularization parameter given by Python's Sklearn (as in \citep{rfs}). We showed this trend with four different distance metrics -- SVCCA, PWCCA, LINCKA, and OPD -- as referenced in section \ref{background}. Observe that according to PWCCA, the distance between the predictions is zero. This is true because the distance of the classification layer (indicated as ``head" in the figure) is zero. The architecture used here is a five-layer CNN as in \citep{maml, ravi} with the same setup. The benchmark used for this analysis is MiniImagenet. } \label{prediction_adapted_sl_vs_adapted_maml} \end{figure*} \subsection{Is the Equivalence of MAML and Transfer Learning related to Model Size or Low Diversity?} An alternative hypothesis to explain the equivalence of transfer learning (with USL) and MAML could be the capability of large neural networks to be better meta-learners in general.
Inspired by the impressive ability of large language models to be few-shot (or even zero-shot) learners \citep{gpt3, foundation_models, clip, bert} -- we hypothesized that perhaps the meta-learning capabilities of deep learning models are a function of the model size. If this were true, then we would expect the difference in meta-test accuracy between MAML and USL to be larger for smaller models and to decrease as the model size increases. Once the two models were of the same size but large enough, we hypothesized that the meta-test accuracy would be the same. We tested this to rule out that our observations were a consequence of the model size. The results were negative, and surprisingly the equivalence between MAML and USL seems to hold even as the model size increased -- strengthening our hypothesis that the low task diversity might be a bigger factor explaining our observations. We show this in figure \ref{model_size}, and we want to draw attention to the fact that this statistical equivalence holds even when using only four filters -- the case where we expected the biggest difference. \begin{figure*}[h!] \centering \includegraphics[width=0.8\linewidth]{figs/model_size.png} \caption{ \textbf{The meta-test accuracy of MAML and transfer learning using USL is similar in a statistically significant way -- regardless of the model size}. In this experiment, we used the MiniImagenet benchmark and the five layer CNN used in \citep{maml, ravi}, and only increased the number of filters, using sizes 4, 8, 16, and 32. We made sure the comparison was fair by using the same architecture and optimizer, and by training all models to convergence. During meta-training, the MAML model was trained using 5 inner steps. The legends indicating MAML5 and MAML10 refer to the number of inner steps used at test time. We used a (meta) batch size of 100 tasks.
} \label{model_size} \end{figure*} \subsection{MAML learns a different base model compared to Union Supervised Learned models -- even in the presence of low task diversity} The first four layers in figure \ref{prediction_adapted_sl_vs_adapted_maml} show how large the distance between a MAML representation and a USL representation is. In particular, it is much larger than the distance values in the range $[0,0.1]$ from previous work that compared MAML vs. adapted MAML \citep{Raghu}. We reproduced this, and indeed MAML vs. adapted MAML has a small difference (even smaller in our experiments) -- supporting our observation that the representations learned by MAML and USL differ at the feature extractor layers, even when the task diversity is low. Results are statistically significant. \subsection{Synthetic Experiments showing closeness of MAML and Transfer Learning as Diversity Changes} In this section, we show the closeness of MAML and transfer learning (with USL) in synthetic experiments for the low and high diversity regimes in Figure \ref{meta_test_vs_task2vec_div}. In the low diversity regime, the two methods are equivalent in a statistically significant way -- which supports the main claims of our paper. As the diversity increases, however, the difference between USL and MAML increases (in favor of USL). This will be explored further in future work. The task is the usual n-way, k-shot task, but the data comes from Gaussians, and the meta-learners are tasked with classifying which Gaussian the data points came from in a few-shot learning manner. Benchmarks are created by sampling Gaussian distributions with means moving away from the origin as the benchmark changes. Therefore, the Gaussian benchmark with the highest diversity coefficient has Gaussians that are the furthest from the origin.
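A minimal sketch of this benchmark construction is given below. This is our own illustrative code, not the original experimental code; the function and parameter names (e.g. \texttt{sample\_gaussian\_task}, \texttt{mean\_radius}) are assumptions for illustration.

```python
import numpy as np

def sample_gaussian_task(n_way=5, k_shot=10, dim=2, mean_radius=1.0, rng=None):
    """Sample one n-way, k-shot task where each class is an isotropic
    Gaussian; pushing the class means further from the origin (larger
    mean_radius) mimics a higher-diversity benchmark."""
    rng = rng if rng is not None else np.random.default_rng()
    # Draw random unit directions and scale them to distance mean_radius.
    directions = rng.normal(size=(n_way, dim))
    means = mean_radius * directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # k_shot samples per class, centered on each class mean.
    x = np.concatenate([rng.normal(loc=m, scale=1.0, size=(k_shot, dim)) for m in means])
    y = np.repeat(np.arange(n_way), k_shot)
    return x, y

x, y = sample_gaussian_task(mean_radius=5.0, rng=np.random.default_rng(0))
print(x.shape, y.shape)  # (50, 2) (50,)
```

A meta-learner is then trained on a stream of such tasks, with \texttt{mean\_radius} fixed per benchmark so that each benchmark has a distinct diversity coefficient.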
We computed the task diversity coefficients using the Task2Vec method as outlined in Section \ref{def_div}, using a random 3-layer fully connected probe network described in Section \ref{experimental_details}. \begin{figure*}[h!] \centering \includegraphics[width=0.8\linewidth]{figs/task2vec_gaussian_mamlusl.png} \caption{ \textbf{The meta-test accuracy of MAML and transfer learning using USL is similar in a statistically significant way in the low diversity regime on the 5-way, 10-shot Gaussian benchmarks}. MAML models were trained with 5 inner steps. MAML5 and MAML10 indicate the adaptation procedure at test time. Results used a (meta) batch-size of 500 tasks and 95\% confidence intervals. As the diversity of the benchmark increases, the Gaussian tasks are sampled further away from the origin. Note that, as the diversity increases, the difference between USL and MAML increases (in favor of USL). } \label{meta_test_vs_task2vec_div} \end{figure*} \section{Related Work} Our work proposes a problem-centric framework for the analysis of meta-learning algorithms, inspired by previous puzzling results \citep{rfs}. We propose to use a pair-wise distance between tasks and analyze how this metric might correlate with meta-learning performance. The closest line of work is the long line of work by \citep{task2vec}, who suggest methods to analyze the complexity of a task, propose asymmetric distance metrics for data sets, study the reachability of tasks with SGD, provide ways to embed entire data sets, and more \citep{task2vec, DynamicDistance, DynamicsReachability, ComplexityTasks}. We expect this line of work to be very fruitful and hope that more people adopt tools like the ones they suggest and we propose in this paper before researching or deploying meta-learning algorithms. We hope this helps meta-learning methods succeed in practice -- since cognitive science suggests meta-learning is a powerful method humans use to learn \citep{Lake2016}.
In the future, we hope to compare \citep{task2vec}'s distance metrics between tasks with ours to provide a further unified understanding of meta-learning and transfer learning. A contrast between their work and ours is that we focus our analysis on a meta-learning perspective applied to few-shot learning -- while their focus is understanding transfer learning methods between data sets. Our analysis of the feature extractor layers is identical to the analysis by \citep{Raghu}. They showed that MAML works mainly via feature re-use rather than rapid learning, i.e., that a model trained with MAML changes very little after the MAML adaptation. The main differences of their work from ours are: 1) we compare MAML trained models against union supervised learned (USL) models instead of only comparing MAML against adapted MAML, and 2) we explicitly analyze properties of the data sets. In addition, we use a large set of distance metrics for our analysis, including SVCCA, PWCCA, LINCKA, and OPD as proposed by \citep{svcca, pwcca, cka, opd}. Our work is most influenced by previous work suggesting modern meta-learning requires rethinking \citep{rfs}. The main difference of our work from theirs is that we analyzed the internal representations of the meta-learning algorithms and contextualized these with quantifiable metrics of the problem being solved. Unlike their work, we focused on a fair comparison between meta-learning methods by ensuring the same neural network backbone was used. Another difference is that they gained further accuracy by using distillation -- a method we did not analyze and leave for future work. A related line of work \citep{empericalstudypresentation2020, Miranda2020} first showed that there exist synthetic data sets capable of exhibiting higher degrees of adaptation compared to the original work by \citep{Raghu}. The difference is that they did not compare MAML models against transfer learning methods as we did here.
Instead, they focused on comparing adapted MAML models vs. unadapted MAML models. Another related line of work concerns the predictability of adversarial transferability and transfer learning, shown both theoretically and with extensive experiments \citep{Liang2021}. The main difference between their work and ours is that they focus their analysis mainly on transfer learning, while we concentrated on meta-learning for few-shot learning. In addition, we did not consider adversarial transferability -- while that was a central piece of their analysis. Further related work is outlined in the supplementary section \ref{related_work2}. \section{Discussion and Future Work} In this work, we presented a problem-centric framework for comparing transfer learning methods with meta-learning algorithms -- using USL and MAML as the representatives of transfer learning and meta-learning methods, respectively. We showed that the diversity coefficient of the popular MiniImagenet and Cifar-fs benchmarks is low, and that under a fair comparison MAML is very similar to transfer learning (with USL) at test time. This remained true even when changing the model size -- ruling out the alternative hypothesis that the equivalence of MAML and transfer learning with USL held only for large models. Instead, this strengthens our hypothesis that the diversity of the data might be the driving factor. The equivalence of MAML and USL was also replicated in synthetic experiments. Therefore, we challenge the suggestion from previous work \cite{rfs} that a good embedding alone can be more effective than sophisticated meta-learning -- especially in the low diversity regime -- and instead suggest that this observation might be due to a lack of good principles for designing meta-learning benchmarks.
In addition, our synthetic experiments show a promising scenario where we can systematically differentiate meta-learning algorithms from transfer learning algorithms -- which supports our actionable suggestion to use the diversity coefficient to effectively study meta-learning and transfer learning algorithms. We hope to study this in more depth in the future with real and synthetic data. In addition, this problematizes the observation that fo-MAML on Meta-Dataset \citep{Triantafillou2019} is better than transfer learning solutions -- since our synthetic experiments show MAML is not better than USL in the high diversity regime. To further problematize, we want to point out that meta-learning methods were not better than transfer learning as observed by \citep{bscd_fsl} and in our synthetic experiments. This means that further research is needed on both types of benchmarks -- especially from a problem-centric perspective with quantitative methods like the ones we suggest. We also have theoretical results from a statistical decision perspective in the supplementary section \ref{statistical_decision_theory_for_meta_learning} that inspired this work and suggest that when the distance between tasks is zero, the predictions of transfer learning, meta-learning, and even a fixed model with no adaptation are all equivalent (under the $\ell_2$ loss). The results are theoretically limited because we can only reason about the case where the diversity is exactly zero, but they nevertheless provide an interesting perspective to study and inspire empirical work. We hope this work inspires the community in meta-learning and machine learning to construct benchmarks from a problem-centric perspective -- benchmarks that go beyond only large scale data sets -- and instead use quantitative metrics for the construction of such research challenges.
\section{Introduction}\label{sec:intro} \emph{Top-$k$ queries}~\cite{DBLP:conf/cikm/HuangWQZCH11,Ilyas:2008:STK:1391729.1391730,4221738} and \emph{skyline queries}~\cite{borzsony2001skyline,papadias2003optimal,tan2001efficient} have been used traditionally to return a representative subset $\mathcal{S}$ of a database $\mathcal{D}$ to a query user when the entire database is too large to be explored fully by the query user. These two types of queries suffer from either requiring a predefined \emph{utility function} to model the query user's preference over the data objects, or returning an unbounded number of data objects. Recent studies~\cite{kessler2015k,nanongkai2010regret,zeighami2016minimizing} aim to overcome these limitations with a new type of query, the \emph{$k$-regret query}, which returns a size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$ that minimizes the \emph{maximum regret ratio} of any query user. The concept of \emph{regret} comes from microeconomics~\cite{ligett2011beating}. Intuitively, if a query user had selected the local optimal object in $\mathcal{S}$, and were later shown the overall optimal object in the entire database $\mathcal{D}$, the query user may regret. The $k$-regret query uses the \emph{regret ratio} to model how regretful the query user may be, which is the level of difference in the \emph{optimality} between the local optimal object in $\mathcal{S}$ and the overall optimal object in $\mathcal{D}$. Here, the optimality is computed by a utility function. The $k$-regret query does not require any specific utility function to be given. Instead, it can return a set that minimizes the maximum regret ratio for a family of utility functions such as the linear summation functions.
\begin{table} \centering \begin{small} \caption{A Computer Database}\label{tbl:example} \vspace{-2mm} \BlankLine\BlankLine \begin{tabular}{ccc} \toprule \multicolumn{1}{c}{Computer} & CPU ($p_i.c_1$) & Brand recognition ($p_i.c_2$)\\ \midrule $p_1$ & 2.3 & 80\\ $p_2$ & 1.7 & 90\\ $p_3$ & 2.8 & 50\\ $p_4$ & 2.1 & 55\\ $p_5$ & 2.1 & 50\\ $p_6$ & 3.0 & 55\\ \bottomrule \end{tabular} \end{small} \vspace{-5mm} \end{table} To illustrate the $k$-regret query, consider an online computer shop with a database $\mathcal{D}$ of computers as shown in Table~\ref{tbl:example}. There are six computers, i.e., $\mathcal{D} = \{p_1, p_2, ..., p_6\}$. Every computer $p_i$ has two attributes: CPU clock speed and brand recognition, denoted as $p_i.c_1$ and $p_i.c_2$, respectively. Here, brand recognition represents how well a brand is recognized by the customers. A larger value means that the brand is better recognized. Since the entire database may be too large to be shown in full, the shop considers showing only a size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$ on the front page as a recommendation. Such a subset may be $\mathcal{S} = \{p_1, p_3, p_5\}$ (i.e., $k = 3$). When a customer visits the shop, assume that her preference can be expressed as a utility function $f_1(p_i) = 0.5 \cdot p_i.c_1 + 0.5 \cdot p_i.c_2$. Then the customer may purchase $p_1$ from the recommended subset $\mathcal{S}$ since $p_1$ has the largest utility value $f_1(p_1) = 0.5 \cdot p_1.c_1 + 0.5 \cdot p_1.c_2 = 0.5\times 2.3+0.5\times 80 = 41.15 > f_1(p_3) = 26.40 > f_1(p_5) = 26.05$. Note that another computer $p_2 \in \mathcal{D}$ exists with an even larger utility value $f_1(p_2) = 0.5\times 1.7+0.5\times 90=45.85$. If the customer later sees $p_2$, she may regret. Her regret ratio is computed as $\displaystyle \frac{f_1(p_2) - f_1(p_1)}{f_1(p_2)} \approx 10.25\%$.
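This regret-ratio computation can be reproduced with a short script (a sketch over the database in Table~\ref{tbl:example}; the weight vector $(0.5, 0.5)$ corresponds to $f_1$):

```python
# Computer database from Table 1: computer -> (CPU, brand recognition).
D = {"p1": (2.3, 80), "p2": (1.7, 90), "p3": (2.8, 50),
     "p4": (2.1, 55), "p5": (2.1, 50), "p6": (3.0, 55)}
S = ["p1", "p3", "p5"]  # the recommended size-3 subset

def linear_utility(point, w1, w2):
    """Additive utility f(p) = w1 * c1 + w2 * c2."""
    c1, c2 = point
    return w1 * c1 + w2 * c2

def regret_ratio(w1, w2):
    """Regret ratio of the best object in S relative to the best in D."""
    best_in_d = max(linear_utility(p, w1, w2) for p in D.values())
    best_in_s = max(linear_utility(D[name], w1, w2) for name in S)
    return (best_in_d - best_in_s) / best_in_d

print(round(100 * regret_ratio(0.5, 0.5), 2))  # f_1's customer: 10.25 (%)
```

The same helper applies to any other weight vector, so it also covers customers with different linear preferences.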
For another customer with a different utility function $f_2(p_i) = 0.99 \cdot p_i.c_1 + 0.01 \cdot p_i.c_2$, the computer in $\mathcal{S}$ that best suits her preference is $p_3$, i.e., $f_2(p_3) = 0.99 \times 2.8+0.01\times 50 \approx 3.27 > f_2(p_1) \approx 3.08 > f_2(p_5) \approx 2.58$. Meanwhile, the overall best computer in $\mathcal{D}$ is $p_6$, i.e., $f_2(p_6) = 0.99 \times 3.0+0.01\times 55 = 3.52$. If the customer purchases $p_3$, her regret ratio will be $\displaystyle \frac{f_2(p_6) - f_2(p_3)}{f_2(p_6)} \approx 7.05\%$. Since every customer may have a different preference and hence a different utility function, it is impractical to satisfy multiple customers with a single set $\mathcal{S}$. Instead of trying to satisfy some customer with a certain utility function, the $k$-regret query aims to generate a subset $\mathcal{S}$ that minimizes the \emph{maximum regret ratio} for a (possibly infinite) family of utility functions. Existing studies on the $k$-regret query have focused on \emph{additive utility functions}, where the overall utility of an object is computed as the sum of the utility in each attribute of the object. The linear summation functions $f_1$ and $f_2$ are examples. They can be written in a more general form: $f(p_i) = \sum_{j=1}^{d}\alpha_j \cdot p_i.c_j$, where $d$ denotes the number of attributes, and $\alpha_j$ is the \emph{weight} of attribute~$j$. Studies~\cite{kessler2015k,nanongkai2010regret} have shown that the maximum regret ratio of the $k$-regret query with additive utility functions can be bounded. However, to the best of our knowledge, so far, no existing bound has been obtained for the $k$-regret query with \emph{multiplicative utility functions} (MUFs). In this paper, we break the barrier of bounding the maximum regret ratio for the $k$-regret query with MUFs. An MUF computes the overall utility of an object as the product of the utility in each attribute, i.e., $f(p_i) = \prod_{j=1}^{d} p_i.c_j^{\alpha_j}$.
It is more expressive, especially in modeling the \emph{diminishing marginal rate of substitution} (DMRS)~\cite{varian1992microeconomic}. The DMRS is a common phenomenon. It refers to the fact that, as the utility in an attribute $j$ gets larger, the extent to which this utility can make up (substitute) for the utility in any other attribute $j'$ decreases (diminishes). For example, a higher CPU clock speed may make up for a less known brand. However, as the CPU clock speed gets higher, the extent to which it can make up for the brand recognition decreases, since most customers only need a moderate CPU rather than an extremely fast one. This phenomenon cannot be modeled by an additive utility function such as $f_1$ or $f_2$, where a given amount of increase in attribute 1 always makes up for a fixed amount of utility in attribute 2, and vice versa. As a result, $f_1$ and $f_2$ favor objects with maximum values in certain attributes (e.g., $p_2$ in $c_2$ and $p_6$ in $c_1$). An MUF such as $f_3(p_i) = p_i.c_1^{0.5} \cdot p_i.c_2^{0.5}$, on the other hand, models the DMRS much better. It favors objects with a good (but not necessarily the best) utility in more attributes, e.g., $p_1$, which is reasonably good in both attributes, suits $f_3$ the best. The higher expressive power of MUFs also brings significant challenges in bounding the maximum regret ratio for them. It is much more difficult to tightly bound the product of a series of exponential expressions. In this study, we overcome these challenges with an algorithm named \emph{MinVar}. We show that this algorithm can obtain a maximum regret ratio bounded between $\displaystyle \Omega(\frac{1}{k^2})$ and $\displaystyle O(\ln(1+\frac{1}{k^{\frac{1}{d-1}}}))$ for the $k$-regret query with MUFs.
To showcase the applicability of the MinVar algorithm in real world scenarios, we apply it on $k$-regret queries with a special family of MUFs, the \emph{Cobb-Douglas functions}, which are used extensively in economics studies~\cite{10.2307/1811556,DIAMOND19801,vilcu2011geometric}. As a by-product, we obtain a new upper bound $\displaystyle O(\frac{1}{k^{\frac{1}{d-1}}})$ of the maximum regret ratio for the $k$-regret query with the \emph{Constant Elasticity of Substitution (CES) functions}~\cite{vilcu2011geometric}. This type of function is closely related to the Cobb-Douglas functions. Our upper bound is tighter than a previously obtained upper bound~\cite{kessler2015k}. To summarize, this paper makes the following contributions: \begin{itemize} \item We are the first to study the $k$-regret query with multiplicative utility functions. \item We propose an algorithm named MinVar to process the query. Based on this algorithm, we obtain bounds of the maximum regret ratio for the $k$-regret query with multiplicative utility functions. We introduce an extra heuristic based on redundancy elimination to further lower the maximum regret ratio, which results in an improved algorithm named RF-MinVar. \item We showcase the applicability of the proposed algorithms on a special type of multiplicative utility functions, the Cobb-Douglas functions, and a closely related type of functions, the CES functions. \item We perform extensive experiments using both real and synthetic data to verify the effectiveness and efficiency of the proposed algorithms. The results show that the regret ratios obtained by the proposed algorithms are constantly small. Meanwhile, the proposed algorithms are more efficient than the baseline algorithms. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:relatedwork} reviews related studies. Section~\ref{sec:preliminaries} presents the basic concepts.
Section~\ref{sec:minvar} describes the proposed algorithms to process $k$-regret queries with MUFs, and derives the bounds of the maximum regret ratio. Section~\ref{sec:casestudy} applies the proposed algorithms on two special types of utility functions, the Cobb-Douglas function and the CES function, and shows their bounds of the maximum regret ratio. Section~\ref{sec:exp} presents experimental results and Section~\ref{sec:conclusions} concludes the paper. \section{Related Work}\label{sec:relatedwork} We review two groups of related studies: the skyline queries and the $k$-regret queries. \textbf{Skyline queries.} The skyline query~\cite{borzsony2001skyline} is an earlier attempt to generate a representative subset $\mathcal{S}$ of the entire database $\mathcal{D}$ without the need of specifying any utility functions. This query assumes a database $\mathcal{D}$ of $d$-dimensional points ($d \in \mathbb{N}_+$). Let $p_i$ and $p_j$ be two points in $\mathcal{D}$ where $i\neq j$. Point $p_i$ is said to \emph{dominate} point $p_j$ if and only if $p_i$ is at least as preferable in every dimension and strictly more preferable in at least one, i.e., $\forall l \in [1..d], p_i.c_l \ge p_j.c_l$ and $\exists l \in [1..d], p_i.c_l > p_j.c_l$, where $p_i.c_l$ ($p_j.c_l$) denotes the coordinate of $p_i$ ($p_j$) in dimension $l$. Here, the ``$\ge$'' operator represents the user preference relationship. A point with a larger coordinate in dimension $l$ is deemed more preferable in that dimension. The skyline query returns the subset $\mathcal{S} \subseteq \mathcal{D}$ where each point is \emph{not} dominated by any other point in $\mathcal{D}$. The skyline query can be answered by a two-layer nested loop over the points in $\mathcal{D}$, filtering those dominated by other points. The points remaining after the filtering are the \emph{skyline points}, which form the query answer. More efficient algorithms have been proposed in the literature~\cite{papadias2003optimal,tan2001efficient}. While the skyline query does not require a utility function, it suffers from having no control over the size of the returned set $\mathcal{S}$.
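The nested-loop skyline computation just described can be sketched as follows (illustrative only; it uses the standard dominance test of being at least as good in every dimension and strictly better in at least one):

```python
def dominates(p, q):
    """True if p dominates q: p is at least as good in every dimension
    and strictly better in at least one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def skyline(points):
    """Two-layer nested loop: keep points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Computer database from Table 1: (CPU, brand recognition).
data = [(2.3, 80), (1.7, 90), (2.8, 50), (2.1, 55), (2.1, 50), (3.0, 55)]
print(skyline(data))  # p1, p2, and p6 survive the filtering
```

On the example database, $p_1$, $p_2$, and $p_6$ are the skyline points; the more efficient algorithms cited above return the same set with fewer comparisons.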
In the worst case, the entire database $\mathcal{D}$ may be returned. Studies have tried to overcome this limitation by combining the skyline query with the top-$k$ query. For example, Xia et al.~\cite{xia2008skylining} introduce the \emph{$\varepsilon$-skyline query} which adds a \emph{weight} to each dimension of the data points to reflect users' preference towards the dimension. The weights create a built-in rank for the points which can be used to answer the \emph{top-$k$ skyline query}. Chan et al.~\cite{chan2006high} rank the points by the \emph{skyline frequency}, i.e., how frequently a point appears as a skyline point when different numbers of dimensions are considered. A few other studies extract a representative subset of the skyline points. Lin et al.~\cite{lin2007selecting} propose to return the $k$ points that together dominate the largest number of non-skyline points as the \emph{$k$ most representative skyline subset}. Tao et al.~\cite{tao2009distance} select \emph{$k$ representative skyline points} based on the distance between the skyline points instead. While the studies above return a size-$k$ subset of the entire database, the maximum regret ratio of such a subset is unbounded. \textbf{$K$-regret queries.} Nanongkai et al.~\cite{nanongkai2010regret} introduce the concept of \emph{regret minimization} to top-$k$ query processing and propose the \emph{$k$-regret query}. The advantage of this query type is that it does not require query users to specify their utility functions. Instead, it can find the subset $\mathcal{S}$ that minimizes the \emph{maximum regret ratio} for a whole family of utility functions. Nanongkai et al. propose the \emph{CUBE} algorithm to process the $k$-regret query for a family of linear summation utility functions, i.e., each utility function $f$ is in the form of $f(p_i) = \sum_{j = 1}^{d} \alpha_j \cdot p_i.c_j$ where $\alpha_j$ denotes the \emph{weight} of dimension $j$. 
The CUBE algorithm is efficient, but the maximum regret ratio it obtains is quite large in practice. To obtain a smaller maximum regret ratio, in a different paper~\cite{nanongkai2012interactive}, Nanongkai et al. propose an interactive algorithm where query users are involved to guide the search for answers with smaller regret ratios. Peng and Wong~\cite{peng2014geometry} advance the $k$-regret query studies by utilizing geometric properties to improve the query efficiency. Chester et al.~\cite{chester2014computing} also consider linear summation utility functions, and propose to compute the \emph{$k$-regret minimizing sets}, which is NP-hard. Kessler Faulkner et al.~\cite{kessler2015k} build on top of the CUBE algorithm and propose three algorithms, \emph{MinWidth}, \emph{Area-Greedy}, and \emph{Angle}. These three algorithms can process $k$-regret queries with the ``concave'', ``convex'', and CES utility functions. Nevertheless, the ``concave'' and ``convex'' utility functions considered are restricted to \emph{additive forms} (See Braziunas and Boutilier~\cite{braziunas2007minimax} and Keeney and Raiffa~\cite{keeney1993decisions} for more details on \emph{additive utilities} and \emph{additive independence}). They are in fact summations over a set of concave and convex functions. The CES utility functions also sum over a set of terms. In this paper, we move the study on $k$-regret queries from additive utility functions to multiplicative utility functions. We present an algorithm that can produce answers with bounded maximum regret ratios for $k$-regret queries with multiplicative utility functions. As a by-product, we also obtain a new upper bound on the maximum regret ratio for the CES utility functions which is tighter than a previously obtained upper bound~\cite{kessler2015k}. Zeighami and Wong~\cite{zeighami2016minimizing} propose to compute the \emph{average regret ratio}. 
They do not assume any particular type of utility functions, but use sampling to obtain a few utility functions for the computation. This study is less relevant to our work and is not discussed further. \section{Preliminaries}\label{sec:preliminaries} \begin{table} \centering \begin{small} \caption{Frequently Used Symbols}\label{tab:symbols} \BlankLine\BlankLine \begin{tabular}{cl} \toprule Symbol & Description \\ \midrule $\mathcal{D}$ & A database\\ $n$ & The cardinality of $\mathcal{D}$\\ $d$ & The dimensionality of $\mathcal{D}$\\ $k$ & The $k$-regret query parameter\\ $\mathcal{S}$ & A size-$k$ subset selected from $\mathcal{D}$\\ $p_i$& A point in $\mathcal{D}$\\ $p_i.c_j$& The coordinate of $p_i$ in dimension $j$\\ $t$ & The number of intervals that the data domain \\ & is partitioned into in each dimension\\ \bottomrule \end{tabular} \end{small} \vspace{-5mm} \end{table} We present basic concepts and a problem definition in this section. The symbols frequently used in the discussion are summarized in Table~\ref{tab:symbols}. We consider a database $\mathcal{D}$ of $n$ data objects. Every data object $p_i \in \mathcal{D}$ is a $d$-dimensional point in $\mathbb{R}^{d}_+$, where $d$ is a positive integer and the coordinates of the points are all positive numbers. We use $p_i.c_j$ to denote the coordinate of $p_i$ in dimension $j$. This coordinate represents the \emph{utility} of the data object in dimension $j$. A larger value of $p_i.c_j$ denotes a larger utility, and hence $p_i$ is more preferable in dimension $j$. A query parameter $k$ is given. It specifies the size of the answer set $\mathcal{S} \ (\mathcal{S} \subseteq \mathcal{D})$ to be returned by a $k$-regret query. We assume $d \le k \le n$. Let $f: \mathcal{D} \rightarrow \mathbb{R}_+$ be a function that models the utility of a data object, i.e., how preferable the data object is to a query user. The \emph{gain} of a query user over a set $\mathcal{S}$ models the utility of the set to the query user.
\BlankLine \begin{defn}[Gain] Given a utility function $f$ of a query user and a subset of data objects $\mathcal{S} \subseteq \mathcal{D}$, the \emph{gain} of the query user over $\mathcal{S}$, $gain(\mathcal{S}, f)$, is defined as the maximum utility of any object in $\mathcal{S}$, i.e., $$gain(\mathcal{S},f)=\max_{p_i \in \mathcal{S}}f(p_i)$$ \end{defn} Continue with the example shown in Table~\ref{tbl:example}. If $\mathcal{S} = \{p_1, p_3, p_5\}$ and $f_3(p_i) = p_i.c_1^{0.5} \cdot p_i.c_2^{0.5}$, then $gain(\mathcal{S},f_3) = \max_{p_i \in \mathcal{S}}f_3(p_i) = f_3(p_1) = 2.3^{0.5} \times 80^{0.5} \approx 13.56$. By definition, when $\mathcal{S}=\mathcal{D}$, the gain of the query user is maximized. However, the entire database $\mathcal{D}$ is usually too large to be returned to or fully explored by the query user. When $\mathcal{S}\subset \mathcal{D}$, the query user may potentially suffer from ``losing'' some gain comparing with the case where $\mathcal{S}=\mathcal{D}$. This loss of gain models how much the query user may \emph{regret} if she selects the local optimal object from $\mathcal{S}$ and is later shown the overall optimal object in $\mathcal{D}$. \begin{defn}[Regret] Given the utility function $f$ of a query user and a subset of data objects $\mathcal{S} \subseteq \mathcal{D}$, the \emph{regret} of the query user if she selects the optimal object from $\mathcal{S}$, $regret_{\mathcal{D}}(\mathcal{S}, f)$, is defined as the difference between the gain of the user over $\mathcal{D}$ and her gain over $\mathcal{S}$, i.e., $$regret_{\mathcal{D}}(\mathcal{S},f) = gain(\mathcal{D},f) - gain(\mathcal{S}, f)$$ \end{defn} The \emph{regret ratio} is a relative measure of the regret. 
\begin{defn}[Regret Ratio] Given the utility function $f$ of a query user and a subset of data objects $\mathcal{S} \subseteq \mathcal{D}$, the \emph{regret ratio} of the query user if she selects the optimal object from $\mathcal{S}$, $r\_ratio_{\mathcal{D}}(\mathcal{S}, f)$, is defined as $regret_{\mathcal{D}}(\mathcal{S},f)$ over $gain(\mathcal{D},f)$, i.e., \begin{align*} r\_ratio_\mathcal{D}(\mathcal{S},f)&=\frac{regret_\mathcal{D}(\mathcal{S},f)}{gain(\mathcal{D},f)}=\frac{regret_\mathcal{D}(\mathcal{S},f)}{regret_\mathcal{D}(\mathcal{S},f)+gain(\mathcal{S},f)}\\ &=\frac{regret_\mathcal{D}(\mathcal{S},f)}{regret_\mathcal{D}(\mathcal{S},f)+\max_{p_j\in \mathcal{S}}f(p_j)}\\ &=\frac{\max_{p_i\in \mathcal{D}}f(p_i)-\max_{p_j\in \mathcal{S}}f(p_j)}{\max_{p_i \in \mathcal{D}}f(p_i)} \end{align*} \end{defn} When $\mathcal{S} = \{p_1, p_3, p_5\}$ and $f_3(p_i) = p_i.c_1^{0.5} \cdot p_i.c_2^{0.5}$, we have $gain(\mathcal{S},f_3) = gain(\mathcal{D},f_3) = f_3(p_1) \approx 13.56$. Thus, $regret_{\mathcal{D}}(\mathcal{S},f_3) = 0$, and $r\_ratio_\mathcal{D}(\mathcal{S},f_3) = 0\%$. For a different utility function $f_4(p_i)= p_i.c_1^{0.99} \cdot p_i.c_2^{0.01}$, $gain(\mathcal{S},f_4) = f_4(p_3) \approx 2.88$ and $gain(\mathcal{D},f_4) = f_4(p_6) \approx 3.09$. Thus, $regret_{\mathcal{D}}(\mathcal{S},f_4) \approx 0.21$, and $r\_ratio_\mathcal{D}(\mathcal{S},f_4) \approx 6.69\%$. Given a family of utility functions $\mathcal{F}$, the \textit{maximum regret ratio} formulates how regretful a query user can possibly be if she uses any utility function in $\mathcal{F}$ and is given the set $\mathcal{S}$.
\begin{defn}[Maximum Regret Ratio] Given a family of utility functions $\mathcal{F}$ and a subset of data objects $\mathcal{S} \subseteq \mathcal{D}$, the \emph{maximum regret ratio} if a query user with any utility function in $\mathcal{F}$ selects the optimal object in $\mathcal{S}$, $mr\_ratio_{\mathcal{D}}(\mathcal{S}, \mathcal{F})$, is defined as the supremum of the regret ratio over any utility function, i.e., \begin{align*} mr\_ratio_\mathcal{D}(\mathcal{S},\mathcal{F})&=\sup_{f\in\mathcal{F}}r\_ratio_\mathcal{D}(\mathcal{S},f)\\ &=\sup_{f\in\mathcal{F}}\frac{\max_{p_i\in \mathcal{D}}f(p_i)-\max_{p_j\in \mathcal{S}}f(p_j)}{\max_{p_i \in \mathcal{D}}f(p_i)} \end{align*} \end{defn} Here, the supremum is used instead of the maximum because the family of utility functions $\mathcal{F}$ may be infinite. Continue with the example above. If $\mathcal{F} = \{f_3, f_4\}$ then $mr\_ratio_\mathcal{D}(\mathcal{S},\mathcal{F}) = \max \{0\%, 6.69\%\} = 6.69\%$. The \emph{$k$-regret} query aims to return the size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$ that \emph{minimizes} the maximum regret ratio. \begin{defn}[$K$-Regret Query] Given a family of utility functions $\mathcal{F}$, the \emph{$k$-regret} query returns a size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$, such that the maximum regret ratio over $\mathcal{S}$ is smaller than or equal to that over any other size-$k$ subset $\mathcal{S}' \subseteq \mathcal{D}$. Formally, $$\forall \mathcal{S}' \subseteq \mathcal{D} \text{ with } |\mathcal{S}'| = k: mr\_ratio_\mathcal{D}(\mathcal{S},\mathcal{F}) \le mr\_ratio_\mathcal{D}(\mathcal{S}',\mathcal{F})$$ \end{defn} Specific utility functions are not always available because the query users are not usually known in advance and their utility functions may not be specified precisely. The $k$-regret query does not require any specific utility functions to be given.
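As a quick numerical check of the definitions above (a sketch; here $\mathcal{F}$ is the finite family $\{f_3, f_4\}$ from the example, whereas the paper considers infinite families):

```python
from math import prod

# Computer database from Table 1: computer -> (CPU, brand recognition).
D = {"p1": (2.3, 80), "p2": (1.7, 90), "p3": (2.8, 50),
     "p4": (2.1, 55), "p5": (2.1, 50), "p6": (3.0, 55)}
S = ["p1", "p3", "p5"]

def muf(point, alphas):
    """Multiplicative utility f(p) = prod_j c_j ** alpha_j."""
    return prod(c ** a for c, a in zip(point, alphas))

def gain(names, alphas):
    return max(muf(D[name], alphas) for name in names)

def regret_ratio(alphas):
    return (gain(D, alphas) - gain(S, alphas)) / gain(D, alphas)

def max_regret_ratio(family):
    """Maximum regret ratio over a finite family of MUFs."""
    return max(regret_ratio(alphas) for alphas in family)

family = [(0.5, 0.5), (0.99, 0.01)]  # exponents of f_3 and f_4
print(regret_ratio((0.5, 0.5)))      # 0.0: p1 is optimal both in S and in D
print(max_regret_ratio(family))      # ~0.067, i.e. f_4's regret ratio
```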
Instead, the query considers a family of functions of a certain form such as the linear functions~\cite{nanongkai2010regret}, i.e., $f(p_i) = \sum_{j=1}^{d} \alpha_j \cdot p_{i}.c_{j}$ where $\alpha_j$ is the \emph{weight} of dimension $j$. The $k$-regret query minimizes the maximum regret ratio of any utility function of such form (without knowing the value of $\alpha_j$). Our contribution to the study of $k$-regret queries is the consideration of a family of \emph{multiplicative utility functions} (MUFs). \begin{defn}[Multiplicative Utility Function] We define a \emph{multiplicative utility function (MUF)} $f$ to be a utility function of the following form: $$f(p_i) = \prod_{j = 1}^{d} p_i.c_j^{\alpha_j},$$ where $\alpha_j \ge 0$ is a function parameter and $\sum_{j=1}^{d}\alpha_j \le 1$. \end{defn} \begin{defn}[$K$-Regret Query with MUFs] The \emph{$k$-regret query with MUFs} takes a database $\mathcal{D}$ of $d$-dimensional points and a set $\mathcal{F}$ of MUFs as the input, where the parameter values $\{\alpha_1, \alpha_2, ..., \alpha_d\}$ of each MUF may be different. The query returns a size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$, such that the maximum regret ratio $mr\_ratio_\mathcal{D}(\mathcal{S},\mathcal{F})$ is minimized. \end{defn} We assume that $p_i.c_j$ has been normalized into the range of $(1,2]$ to simplify the derivation of the maximum regret ratio bounds. This can be done by a normalization function $\displaystyle \mathcal{N}(p_i.c_j) = 1+ \frac{p_i.c_j}{\max_{p_i \in \mathcal{D}}\{p_i.c_j\}}$. In fact, it is common to normalize the data domain in different dimensions into the same range, so that utility values of different dimensions become more comparable. Note that our derivation of the bounds still holds without this assumption, although the bounds may become less concise.
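These definitions translate directly into code. The following is a minimal Python sketch (the helper names and toy data are ours, not part of the paper) of the normalization function $\mathcal{N}$, MUF evaluation, and the (maximum) regret ratio for a finite family of MUFs:

```python
import math

def normalize(D):
    """Map each coordinate c into (1, 2] via N(c) = 1 + c / column_max
    (assumes strictly positive utilities)."""
    d = len(D[0])
    col_max = [max(p[j] for p in D) for j in range(d)]
    return [tuple(1 + p[j] / col_max[j] for j in range(d)) for p in D]

def muf(alphas):
    """Build an MUF f(p) = prod_j p[j] ** alphas[j]."""
    return lambda p: math.prod(x ** a for x, a in zip(p, alphas))

def gain(points, f):
    return max(f(p) for p in points)

def regret_ratio(D, S, f):
    # (max_D f - max_S f) / max_D f, as in the definition of the regret ratio
    return (gain(D, f) - gain(S, f)) / gain(D, f)

def max_regret_ratio(D, S, F):
    # the supremum reduces to a maximum for a finite family F
    return max(regret_ratio(D, S, f) for f in F)

# Toy data, already in (1, 2]; for illustration only.
D = [(1.2, 1.9), (1.8, 1.3), (1.5, 1.5)]
S = D[:1]
F = [muf((0.5, 0.5)), muf((0.99, 0.01))]
```

For $\mathcal{S} = \mathcal{D}$ every regret ratio is 0; for the toy $\mathcal{S}$ above, the second, heavily skewed function determines the maximum regret ratio.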
\textbf{Scale invariance.} It has been shown~\cite{kessler2015k,nanongkai2010regret} that $k$-regret queries with additive utility functions are \emph{scale invariant}, i.e., scaling the data domain in any dimension does not change the maximum regret ratio of a set $\mathcal{S}$. This property is preserved in $k$-regret queries with MUFs. For an MUF $f(p_i) = \prod_{j = 1}^{d} p_i.c_j^{\alpha_j}$, we can scale each dimension by a factor $\lambda_j > 0$, resulting in a new MUF $f'(p_i) = \prod_{j = 1}^{d} (\lambda_j \cdot p_i.c_j)^{\alpha_j} = \prod_{j = 1}^{d} \lambda_j^{\alpha_j} \cdot \prod_{j = 1}^{d}p_i.c_j^{\alpha_j} = (\prod_{j = 1}^{d} \lambda_j^{\alpha_j}) f(p_i)$. Such scaling does not affect the regret ratio (and hence the maximum regret ratio), i.e., $r\_ratio_{\mathcal{D}}(\mathcal{S},f') = r\_ratio_{\mathcal{D}}(\mathcal{S},f)$: \begin{align*} &r\_ratio_\mathcal{D}(\mathcal{S},f')\\ &=\frac{\max_{p_i\in \mathcal{D}}f'(p_i)-\max_{p_j\in \mathcal{S}}f'(p_j)}{\max_{p_i \in \mathcal{D}}f'(p_i)}\\ &=\frac{(\prod_{j = 1}^{d} \lambda_j^{\alpha_j})(\max_{p_i\in \mathcal{D}}f(p_i)-\max_{p_j\in \mathcal{S}}f(p_j))}{(\prod_{j = 1}^{d} \lambda_j^{\alpha_j})\max_{p_i \in \mathcal{D}}f(p_i)}\\ &=\frac{\max_{p_i\in \mathcal{D}}f(p_i)-\max_{p_j\in \mathcal{S}}f(p_j)}{\max_{p_i \in \mathcal{D}}f(p_i)}=r\_ratio_{\mathcal{D}}(\mathcal{S},f) \end{align*} In what follows, for conciseness, we refer to the regret, the regret ratio, and the maximum regret ratio of a query user simply as the regret, the regret ratio, and the maximum regret ratio, respectively. \section{The K-Regret Query with Multiplicative Utility Functions}\label{sec:minvar} We propose an algorithm named \emph{MinVar} to process $k$-regret queries with MUFs in Section~\ref{sec:thealgo}. We derive the bounds of the maximum regret ratios of the query answers returned by MinVar in Sections~\ref{sec:upperbound} and~\ref{sec:lowerbound}.
We further improve the maximum regret ratios of the query answers through a heuristic-based algorithm in Section~\ref{sec:heuristic}. \subsection{The MinVar Algorithm}\label{sec:thealgo} MinVar shares a similar overall algorithmic approach with that of CUBE~\cite{nanongkai2010regret} and MinWidth~\cite{kessler2015k}, which were proposed to process $k$-regret queries with additive utility functions. As summarized in Algorithm~\ref{alg:minvar}, MinVar first finds the optimal point $p^*_i$ in dimension $i$ for the first $d-1$ dimensions, i.e., $p_i^*.c_i$ is the largest utility in dimension $i$ ($i = 1, 2, ..., d-1$). These $d-1$ points are added to $\mathcal{S}$ (Lines~1~to~5). Then, the algorithm partitions each dimension $i$ of the data domain into $t = \lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor$ intervals for the first $d-1$ dimensions (Lines 6 to 8). Here, the value of $t$ is chosen so that we can obtain sufficient points to be added to $\mathcal{S}$ to create a size-$k$ subset. The $t$ intervals in each of the first $d-1$ dimensions together partition the $d$-dimensional data space into $t^{d-1}$ buckets. The algorithm selects one point $s^*$ in each bucket that has the largest utility $s^*.c_d$ in dimension $d$, and adds $s^*$ to $\mathcal{S}$ (Lines~9~to~12). There are $t^{d-1} = k-d+1$ points added in this step. Together with the $d-1$ points previously added, $\mathcal{S}$ now has $k$ points, which are then returned as the query answer (Line 13). Figure~\ref{fig:minvar} gives an example. Suppose $d = 3$ and $k = 6$; then $t = \lfloor (6-3+1)^{\frac{1}{3-1}} \rfloor = 2$. We first add the two points $p_1^*$ and $p_2^*$ to $\mathcal{S}$, which have the largest utility in dimensions 1 and 2, respectively. Then the data domain in each of dimensions 1 and 2 is partitioned into $t=2$ intervals, forming $t^{d-1}= 2^{3-1} = 4$ buckets in the data space.
Four more points $s_1^*, s_2^*, s_3^*$, and $s_4^*$ are added to $\mathcal{S}$, each being the point with the largest utility in dimension 3 within its bucket. \begin{figure} \vspace{-2mm} \centering \includegraphics[width=2.5in]{minvar.eps} \vspace{-2mm} \caption{The MinVar algorithm}\label{fig:minvar} \vspace{-4mm} \end{figure} \vspace{-2mm} \begin{algorithm} \begin{small} \caption{MinVar} \label{alg:minvar} \KwIn{$\mathcal{D}=\{p_1, p_2, ... , p_n\}$: a $d$-dimensional database; $k$: the size of the answer set.} \KwOut{$\mathcal{S}$: a size-$k$ subset of $\mathcal{D}$.} $\mathcal{S} \leftarrow \emptyset$\; \For {$i = 1, 2,..., d-1$} { Find $p^*_{i}$ which has the largest utility $ p^*_{i}.c_i$ in dimension $i$\; $c_i^\tau \leftarrow p^*_{i}.c_i$\; $\mathcal{S} \leftarrow \mathcal{S} \cup \{p^*_{i}\}$\; } $t \leftarrow \lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor$\; \For{$i=1,2,...,d-1$}{ $bps[i] \leftarrow FindBreakpoints(\mathcal{D}, t, n, i, c_i^\tau)$\; } \For{each $(d-1)$-integer combination $1 \le j_1 \le t, 1 \le j_2 \le t, ..., 1 \le j_{d-1} \le t$} { $B \leftarrow \{p \in \mathcal{D} | \forall i \in [1..d-1]: bps[i][j_i].lo \le p.c_i \le bps[i][j_i].hi \}$\; $s^* \leftarrow argmax_{p\in B} \ p.c_d$\; $\mathcal{S} \leftarrow \mathcal{S} \cup \{s^*\}$\; } \textbf{return} $\mathcal{S}$\; \end{small} \end{algorithm} \vspace{-3mm} \textbf{The FindBreakpoints algorithm.} Our main contribution in MinVar lies in the sub-algorithm \emph{FindBreakpoints}, which finds the breakpoints that partition each dimension $i$ of the data domain into $t$ intervals (Line 8). The intuition of the algorithm is as follows. The optimal point $p^*$ for any utility function $f$ must lie in one of the buckets created by MinVar. Let this bucket be $\mathcal{B}$. The algorithm selects a point $s^*$ from $\mathcal{B}$ to represent this bucket and adds it to $\mathcal{S}$. If $p^*$ is selected to be $s^*$, then the regret ratio is $0$.
To maximize the worst-case probability of $p^*$ being selected, we should partition dimension $i$ such that each interval contains the same number of points. Otherwise, if the intervals are skewed and $p^*$ lies in a large interval, its probability of being selected is small. \begin{figure} \vspace{-2mm} \centering \includegraphics[width=2.5in]{intervals.eps} \vspace{-4mm} \caption{Creating the intervals in a dimension}\label{fig:breakpoints} \vspace{-4mm} \end{figure} In CUBE~\cite{nanongkai2010regret}, the data domain is simply broken evenly, i.e., each interval has the same size of $\displaystyle \frac{c_i^\tau}{t}$, where $c_i^\tau$ denotes the largest utility in dimension $i$ (assuming that the data domain starts at 0). When the data points are not uniformly distributed, the probability of $p^*$ being selected to represent its bucket is small. Figure~\ref{fig:breakpoints} gives an example where $d=2$. The two intervals created by CUBE in dimension 1 are highly unbalanced in the number of points in each interval. The point $p^*$ has a large utility in both dimensions and may be optimal for many MUFs. However, it will not be selected by CUBE to represent its bucket, since it falls in a dense bucket and there is another point $s^*$ with a larger utility in dimension 2. In MinWidth~\cite{kessler2015k}, the data domain is broken with a greedy heuristic. This heuristic leaves out some \emph{empty intervals} with no data points, and uses a binary search to determine the minimal interval width such that $t$ intervals of such width can cover the rest of the data domain. This algorithm handles sparse data better, but the equi-width intervals still do not handle skewed data well. Figure~\ref{fig:breakpoints} shows two intervals created by MinWidth which are still unbalanced (one has 5 points and the other has 3). The two points $p^*$ and $s^*$ are still in the same bucket. 
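The contrast with CUBE and MinWidth can be made concrete. The following Python function is a simplified sketch (our own naming, 0-based indexing, and default step) of the balanced partitioning idea: it aims for intervals of roughly $\lceil n/t \rceil$ points each, subject to the width cap $c_i^\tau/t$, and relaxes the per-interval point count by $inc$ whenever the cap prevents the $t$ intervals from covering all points.

```python
import bisect
import math

def find_breakpoints(values, t, c_max, inc=1):
    """Split one dimension into t intervals holding ~ceil(n/t) points each,
    with interval width capped at c_max / t (simplified sketch)."""
    vals = sorted(values)
    n = len(vals)
    cap = c_max / t
    delta = 0
    while True:
        bps, lo = [], 0
        for _ in range(t):
            if lo >= n:                       # no points left for this interval
                bps.append((vals[-1], vals[-1]))
                continue
            hi_cap = min(lo + math.ceil(n / t) - 1 + delta, n - 1)
            # largest hi <= hi_cap with vals[hi] - vals[lo] <= cap
            hi = bisect.bisect_right(vals, vals[lo] + cap, lo, hi_cap + 1) - 1
            bps.append((vals[lo], vals[hi]))
            lo = hi + 1
        if lo >= n:                           # all points covered: done
            return bps
        delta += inc                          # allow more points per interval
```

On uniform data this behaves like equi-width partitioning, while on skewed data (such as the example in Figure~\ref{fig:breakpoints}) the interval counts stay balanced.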
\vspace{-2mm} \begin{algorithm} \begin{small} \caption{FindBreakpoints}\label{alg:bps} \KwIn{$\mathcal{D}=\{p_1, p_2, ... , p_n\}$: a $d$-dimensional database; $t$: number of intervals; $n:$ size of $\mathcal{D}$; $i$: dimension number for which the breakpoints are to be computed; $c_i^\tau$: the largest utility in dimension $i$.} \KwOut{$bps[i]$: an array of $t$ pairs of breakpoints.} Sort $\mathcal{D}$ ascendingly on dimension $i$; let the sorted point sequence be $p_1', p_2', ..., p_n'$\; $hi \leftarrow 0,\ \delta \leftarrow 0$\; \While{$hi \neq n$}{ $lo \leftarrow 1$\; \For{j $=1,2, ..., t$} { $hi' \leftarrow \min\{lo+\lceil n/t\rceil-1+\delta, n\}$\; Find the largest ${hi} \in [lo..hi']$ such that $p'_{hi}.c_i - p'_{lo}.c_i \displaystyle{\leq\frac{c_i^\tau}{t}}$\; $bps[i][j].lo \leftarrow p'_{lo}.c_i$\; $bps[i][j].hi \leftarrow p'_{hi}.c_i$\; $lo \leftarrow hi +1$\; } $\delta \leftarrow \delta + inc$\; } \textbf{return} $bps[i]$\; \end{small} \end{algorithm} \vspace{-3mm} To overcome these limitations, our FindBreakpoints algorithm adaptively uses $t$ variable-width intervals such that the number of points in each interval is as close to $\displaystyle \lceil \frac{n}{t}\rceil$ as possible (cf. Figure~\ref{fig:breakpoints}, where the two gray intervals created by MinVar contain $\displaystyle \lceil \frac{8}{2}\rceil = 4$ points each; $p^*$ will be selected to represent its bucket). To help derive the maximum regret ratio bounds in the following subsections, we also require that the width of each interval does not exceed $\displaystyle \frac{c_i^\tau}{t}$.\footnote{It should be $\frac{c_i^\tau-1}{t}$ to be exact where 1 is the lower bound of the data domain. We write $ \frac{c_i^\tau}{t}$ here to keep it consistent with CUBE and to ease the discussion.} Under this constraint it is not always possible to create $t$ intervals with exactly $\displaystyle \lceil \frac{n}{t}\rceil$ points in each interval. 
We allow $\displaystyle \lceil \frac{n}{t}\rceil + \delta$ data points in each interval, where $\delta$ is a parameter that will be adaptively chosen by the algorithm. At start, $\delta = 0$. Algorithm~\ref{alg:bps} summarizes the FindBreakpoints algorithm. This algorithm first sorts the data points in ascending order of their coordinates in dimension~$i$. The sorted points are denoted as $p'_1, p'_2, ..., p'_n$ (Line~1). The algorithm then creates $t$ intervals, where $lo$ and $hi$ represent the subscript lower and upper bounds of the data points to be put into one interval, respectively. At start, $lo = 1$ (Line~4). Between $lo$ and $lo+ \lceil \frac{n}{t}\rceil-1 + \delta$, the algorithm finds the largest subscript $hi$ such that $p'_{hi}.c_i - p'_{lo}.c_i$ is bounded by $\displaystyle{\frac{c_i^{\tau}}{t}}$. Then we have obtained the two breakpoints of the first interval, $bps[i][1].lo = p'_{lo}.c_i$ and $bps[i][1].hi = p'_{hi}.c_i$, where $bps[i]$ is an array to store the intervals in dimension $i$. We update $lo$ to be $hi+1$, and repeat the process above to create the next interval (Lines 5 to 10). When $t$ intervals are created, if they cover all the $n$ points, we have successfully created the intervals for dimension~$i$. Otherwise, we need to allow a larger number of points in one interval. We increase $\delta$ by $inc$, which is a system parameter (Line 11), and repeat the above procedure to create $t$ intervals until all $n$ points are covered. Then we return the interval array $bps[i]$ (Line 12). Note that the algorithm always terminates, because when $\delta$ increases to $n$, the algorithm will simply create intervals with width $\displaystyle{\frac{c_i^\tau}{t}}$. The $t$ intervals must cover the entire data domain and hence cover all $n$ points. \textbf{Complexity.} FindBreakpoints uses a database $\mathcal{D}$ of $n$ points and an array $bps[i]$ to store $t$ intervals.
The space complexity is $O(n+t)$, where $t=\lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor$ is usually small. The inner loop of FindBreakpoints (Lines 5 to 10) has $t$ iterations. In each iteration, computing $hi$ requires a binary search between $p'_{lo}$ and $p'_{hi'}$, which takes $O(\log n)$ time. Thus, the inner loop takes $O(t\log n)$ time. The outer loop has $\displaystyle \frac{n}{inc}$ iterations in the worst case. Together, FindBreakpoints takes $O(\displaystyle \frac{tn\log n}{inc})$ time. MinVar uses a database $\mathcal{D}$ of $n$ points, an answer set $\mathcal{S}$ of size $k$, and a $(d-1)\times t$ two-dimensional array $bps$. An array of size $t^{d-1} = k-d+1$ is also needed to help select the points $s^*$ in the $k-d+1$ buckets. The space complexity is $O(n+k+dt)$. The first loop of MinVar (Lines 2 to 5) takes $O(nd)$ time. The second loop (Lines 7 and 8) calls FindBreakpoints $d-1$ times, which takes $O(\displaystyle \frac{tdn\log n}{inc})$ time. The third loop (Lines 9 to 12) finds a point $s^*$ in each of the $k-d+1$ buckets. A linear scan on the database $\mathcal{D}$ is needed for this task. For each point $p$ visited, we need a binary search on each of the $d-1$ arrays $bps[i]$ to identify the bucket of $p$, and to update the selected point $s^*$ in that bucket if needed. This takes $O(nd\log t)$ time. Overall, MinVar takes $O(nd + \displaystyle \frac{tdn\log n}{inc} + nd\log t)$ time. Here, $\displaystyle \frac{n}{inc}$ is a controllable parameter of the system. In the experiments, we set $inc = 0.01\%n$. The time complexity then simplifies to $O(nd\log t)$. \subsection{Upper Bound}\label{sec:upperbound} We derive an upper bound for the maximum regret ratio of a set $\mathcal{S}$ returned by MinVar. \begin{thm} Let $\mathcal{F} = \{f| f(p_i)=\prod^d_{j=1} p_i.c_j^{\alpha_j}\}$ be a set of MUFs, where $\alpha_j\geq 0, \sum_{j=1}^{d}\alpha_j \le 1$, and $1 \le p_i.c_j \le 2$.
The maximum regret ratio $mr\_ratio_{\mathcal{D}}(\mathcal{S}, \mathcal{F})$ of an answer set $\mathcal{S}$ returned by MinVar satisfies $$mr\_ratio_{\mathcal{D}}(\mathcal{S}, \mathcal{F}) \le {\ln(1+\frac{1}{t})}$$ Here, $t = \lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor$. \end{thm} \begin{proof} We prove the theorem by showing that for each function $f \in \mathcal{F}$, the regret ratio $r\_ratio_{\mathcal{D}}(\mathcal{S}, f) \le {\ln(1+\frac{1}{t})}$, which means that the maximum regret ratio of $\mathcal{F}$ must be less than or equal to $\ln(1+\frac{1}{t})$. Let $p^*$ be the point in $\mathcal{D}$ with the largest utility computed by $f$, i.e., $$p^*= \underset{p_i\in \mathcal{D}}{\mathrm{argmax}} f(p_i)$$ Let $s^*$ be the point in $\mathcal{S}$ that is selected by MinVar in the same bucket where $p^*$ lies. We have: \begin{align*} regret_\mathcal{D}(\mathcal{S},f)& = \max_{p_i\in\mathcal{D}}f(p_i) - \max_{p_i\in\mathcal{S}}f(p_i)\\ &\le f(p^*)-f(s^*)\\ &=\prod^d_{j=1} p^*.c_j^{\alpha_j}-\prod^d_{j=1} s^*.c_j^{\alpha_j}\\ &=\exp{(\ln \prod^d_{j=1} p^*.c_j^{\alpha_j})}-\exp{(\ln \prod^d_{j=1} s^*.c_j^{\alpha_j})} \end{align*} Since $g(x) = e^x$ is convex, based on Lagrange's Mean Value Theorem, we have $e^x-e^y\leq(x-y)\cdot e^x$ for $x\geq y$. Thus, \begin{align*} regret_\mathcal{D}(\mathcal{S},f)& \le (\ln \prod^d_{j=1} p^*.c_j^{\alpha_j} - \ln \prod^d_{j=1} s^*.c_j^{\alpha_j})\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j}\\ &=(\sum_{j=1}^d \alpha_j\ln p^*.c_j-\sum_{j=1}^d \alpha_j\ln s^*.c_j)\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j}\\ &=\left[\sum_{j=1}^{d} \alpha_j(\ln p^*.c_j-\ln s^*.c_j) \right]\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j} \end{align*} Since MinVar selects the point in a bucket with the largest value in dimension $d$, we know that $p^*.c_d\leq s^*.c_d$ and hence $\ln p^*.c_d\leq \ln s^*.c_d$, i.e., $\ln p^*.c_d - \ln s^*.c_d \le 0$.
Thus, we can remove the utility in dimension $d$ from the computation and relax the regret to be: \begin{align*} regret_\mathcal{D}(\mathcal{S},f)&\leq \left[\sum_{j=1}^{d-1} \alpha_j(\ln p^*.c_j-\ln s^*.c_j) \right] \cdot \prod_{j=1}^d p^*.c_j^{\alpha_j} \\ &=\left[\sum_{j=1}^{d-1} \alpha_j \ln (\frac{ p^*.c_j}{s^*.c_j}) \right]\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j} \end{align*} Since $s^*$ is selected from the same bucket where $p^*$ lies in, $p^*.c_j-s^*.c_j$ must be constrained by the bucket size in dimension $j$, which is $\displaystyle \frac{c_j^{\tau}-1}{t}$ where $c_j^{\tau}$ and 1 are the largest and smallest utility values in dimension $j$, i.e., $$\forall j \in [1..d-1], p^*.c_j-s^*.c_j\leq \frac{c_j^{\tau}-1}{t}$$ Thus, $$\frac{p^*.c_j}{s^*.c_j}\leq1+\frac{c_j^\tau-1}{t\cdot s^*.c_j}$$ Since $1 < s^*.c_j\leq c_j^\tau \le 2$, we have $$\frac{p^*.c_j}{s^*.c_j}<1+\frac{1}{t}$$ Therefore, \begin{align*} regret_\mathcal{D}(\mathcal{S},f)&< \left[\sum_{j=1}^{d-1} \alpha_j \ln (1+\frac{1}{t}) \right]\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j} \end{align*} For the regret ratio $r\_ratio_\mathcal{D}(\mathcal{S},f)$ we have \begin{align*} r\_ratio_\mathcal{D}(\mathcal{S},f)&=\frac{regret_\mathcal{D}(\mathcal{S},f)}{gain(\mathcal{D}, f)}\\ &< \frac{ \left[\sum_{j=1}^{d-1} \alpha_j \ln (1+\frac{1}{t}) \right]\cdot \prod_{j=1}^d p^*.c_j^{\alpha_j}}{ \prod_{j=1}^d p^*.c_j^{\alpha_j}}\\ &=\sum_{j=1}^{d-1} \alpha_j \ln (1+\frac{1}{t})=\ln(1+\frac{1}{t})^{\sum_{j=1}^{d-1}\alpha_j}\\ &\leq \ln(1+\frac{1}{t}) \end{align*} \vspace{-3mm} \end{proof} In the theorem, $\displaystyle{t=\lfloor (k-d+1)^{\frac{1}{d-1}}\rfloor}$. This means that as $k$ increases, the maximum regret ratio is expected to decrease; when $d$ increases, the maximum regret ratio is expected to increase. Intuitively, if we return more points (a larger $k$), then the probability of returning the points that better satisfy the utility functions is higher, and hence the regret ratios would be smaller. 
If there are more dimensions (a larger $d$), then more regret may be accumulated over the dimensions, and hence the regret ratios would be larger. This shows that the upper bound obtained has good explanatory power. For simplicity, we say that the upper bound is on the order of $O(\ln(1+\frac{1}{k^{\frac{1}{d-1}}}))$. To give an example, consider $\mathcal{F} = \{f_3, f_4\}$ where $f_3(p_i) = p_i.c_1^{0.5} \cdot p_i.c_2^{0.5}$ and $f_4(p_i) = p_i.c_1^{0.99} \cdot p_i.c_2^{0.01}$. Then $d = 2$. Let $k = 3$, which means $t = 2$. The upper bound of the maximum regret ratio is then $\ln(1+\frac{1}{t}) = \ln \frac{3}{2} \approx 40.55\%$. As $k$ increases (e.g., to $20$), this upper bound will decrease (e.g., to $\ln \frac{20}{19} \approx 5.13\%$). \subsection{Lower Bound}\label{sec:lowerbound} We derive a lower bound of the maximum regret ratio by showing that, given a family of MUFs $\mathcal{F}$, it is impossible to bound the regret ratio below $\displaystyle \Omega(\frac{1}{k^2})$ for a database $\mathcal{D}$ of 2-dimensional points (i.e., $d=2$). \begin{thm} Given $k>0$, there must be a database $\mathcal{D}$ of 2-dimensional points such that the maximum regret ratio of any size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$ over a family of MUFs $\mathcal{F}$ is at least $\displaystyle \Omega(\frac{1}{k^2})$. \end{thm} \begin{proof} We assume a data space of $(1,e] \times (1,e]$ in this proof. Consider an infinite set $\mathcal{D}$ of 2-dimensional points, where each point $p$ satisfies $$\begin{cases} p.c_1=e^{\cos\theta}\\ p.c_2=e^{\sin\theta} \end{cases}0 < \theta\leq\frac{\pi}{2}$$ Given a size-$k$ subset $\mathcal{S} \subseteq \mathcal{D}$, each point $s_i \in \mathcal{S}$ corresponds to a $\theta_i \in (0, \displaystyle \frac{\pi}{2}]$ where $s_i.c_1=e^{\cos\theta_i}$ and $s_i.c_2=e^{\sin\theta_i}$.
Assume that the points $s_1, s_2, ..., s_k$ in $\mathcal{S}$ are sorted in ascending order of their corresponding $\theta_i$ values, i.e., $0< \theta_1 \le \theta_2 \le ... \le \theta_k \le \displaystyle \frac{\pi}{2}$. Further, let $\theta_0=0$ and $\theta_{k+1}=\displaystyle \frac{\pi}{2}$. Consider a polar coordinate system as illustrated in Figure~\ref{fig:lower}. Then $\theta_i \ (i\in[0..k+1])$ can be represented as a point on a unit circle. Based on the pigeonhole principle, there must be some $j \ (j\in[0..k])$ such that $$\theta_{j+1}-\theta_{j}\geq\frac{\pi}{2(k+1)}$$ Let $\theta^*$ be in the middle of $\theta_{j}$ and $\theta_{j+1}$, i.e., $$\theta^* = \displaystyle \frac{\theta_{j}+\theta_{j+1}}{2}$$ We construct an MUF $f$ where the optimal point $p^*$ corresponds to $\theta^*$, i.e., $p^*.c_1=e^{\cos\theta^*}$ and $p^*.c_2=e^{\sin\theta^*}$, and prove the theorem based on the regret ratio of $f$. Consider an MUF $f(p)=p.c_1^{\cos\theta^*}\cdot p.c_2^{\sin \theta^*}$. \begin{align*} \ln f(p)&=\ln{(p.c_1^{\cos\theta^*}\cdot p.c_2^{\sin\theta^*})}\\ &=\cos\theta^*\cdot \ln p.c_1+\sin\theta^*\cdot \ln p.c_2\\ &=\cos\theta^*\cdot \cos\theta+\sin\theta^*\cdot \sin \theta \end{align*} Let $g(\theta) = \cos\theta^*\cdot \cos\theta+\sin\theta^*\cdot \sin \theta$. By letting $g'(\theta) = 0$ we obtain $\theta = \theta^*$, which means that $\ln f(p)$ is maximum when $\theta = \theta^*$, and $f(p)$ is maximum when $p = p^*.$ $$\ln f(p^*) = \cos^2\theta^*+\sin^2\theta^* = 1; \quad \quad f(p^*) = e.$$ \begin{figure} \centering \includegraphics[width=2.0in]{lowerbound.eps} \vspace{-2mm} \caption{Lower bound illustration}\label{fig:lower} \vspace{-3mm} \end{figure} Meanwhile, let $s_i$ be the optimal point for $f$ in $\mathcal{S}$. 
Since there are no other points in $\mathcal{S}$ between $\theta_j$ and $\theta_{j+1}$, $$|\theta_i-\theta^*|=\Delta \geq \theta_{j+1} -\theta^* = \theta^* -\theta_j \ge \frac{\pi}{4(k+1)}$$ We consider the case where $\theta_i-\theta^*=\Delta$. The other case, where $\theta^*-\theta_i=\Delta$, is symmetric; we omit it for conciseness. \begin{align*} \ln f(s_i)&=\ln{(s_i.c_1^{\cos\theta^*}\cdot s_i.c_2^{\sin\theta^*})}\\ &=\cos(\theta^*+\Delta)\cdot \cos\theta^*+\sin(\theta^*+\Delta)\cdot \sin\theta^*\\ &=(\cos\theta^*\cos\Delta-\sin\theta^*\sin\Delta)\cdot\cos\theta^*+\\ &\quad\ (\sin\theta^*\cos\Delta+\cos\theta^*\sin\Delta)\cdot\sin\theta^*\\ &=\cos^2\theta^*\cdot\cos\Delta+\sin^2\theta^*\cdot\cos\Delta\\ &=\cos\Delta \end{align*} Thus, $f(s_i) = e^{\cos\Delta}$, and $r\_ratio_{\mathcal{D}}(\mathcal{S}, f)$ satisfies $$r\_ratio_{\mathcal{D}}(\mathcal{S}, f) = \frac{f(p^*) - f(s_i)}{f(p^*) } = \frac{e-e^{\cos\Delta}}{e}=1-e^{\cos\Delta-1}$$ Based on the Maclaurin series, we have $$e^{\cos{\Delta}-1}=1-\frac{\Delta^2}{2}+\frac{\Delta^4}{6}-\cdots$$ Thus, $$r\_ratio_{\mathcal{D}}(\mathcal{S}, f) =\frac{\Delta^2}{2}-\frac{\Delta^4}{6}+\cdots$$ We already know that $$\Delta\geq\frac{\pi}{4(k+1)}$$ Therefore, $$r\_ratio_{\mathcal{D}}(\mathcal{S}, f) \ge \frac{\pi^2}{32(k+1)^2}-o(\frac{1}{k^4})$$ This means that $r\_ratio_{\mathcal{D}}(\mathcal{S}, f)$ is at least $\displaystyle \Omega(\frac{1}{k^2})$. \end{proof} \subsection{Heuristic to Lower the Regret Ratio}\label{sec:heuristic} The upper bound derived in Section~\ref{sec:upperbound} suggests that the maximum regret ratio decreases as $t$ increases. This parameter determines the number of buckets from which the points in the answer set $\mathcal{S}$ are selected. The parameter itself is determined by $d$ and $k$ together, which are fixed once a database $\mathcal{D}$ and a $k$-regret query are given.
In this subsection, we propose a heuristic to increase the value of $t$ without changing $d$ or $k$, aiming to obtain lower regret ratios. This heuristic is based on the \emph{dominance} relationship~\cite{borzsony2001skyline}. Let $p_i$ and $p_j$ be two points in a set $\mathcal{S}$ where $i\neq j$. Point $p_i$ is said to dominate point $p_j$ if and only if $ \forall l \in [1..d], p_i.c_l \ge p_j.c_l $. If $p_i$ dominates $p_j$, then $f(p_i) \ge f(p_j)$ holds for \emph{any} MUF $f$. Thus, once $p_i$ has been added to $\mathcal{S}$, $p_j$ can be discarded. We call $p_j$ a \emph{redundant point}. Based on this observation, we modify the MinVar algorithm to remove the redundant points from $\mathcal{S}$, aiming to obtain a redundancy-free set $\mathcal{S}$. We call this modified algorithm \emph{RF-MinVar}. As summarized in Algorithm~\ref{alg:rf}, RF-MinVar uses the same procedure as that of MinVar to compute an answer set $\mathcal{S}$ based on a given $t$ (Lines 1 to 5 and 8 to 14). After that, RF-MinVar removes the redundant points from $\mathcal{S}$ (Line~15). This is done by a simple scan to check for any points being dominated. When the redundant points are removed, there may be a few vacancies open in $\mathcal{S}$. To fill up the vacancies, we wrap the procedure of computing $\mathcal{S}$ with a loop (Line~7). In each iteration we increase the value of $t$ by 1 (Line~8), which creates more buckets and leads to more points being added to $\mathcal{S}$. If $|\mathcal{S}| \ge k$ after removing the redundant points, or a predefined number of iterations $itr_{max}$ is reached, the loop terminates. Then, if $|\mathcal{S}| < k$, we randomly select points in $\mathcal{D}$ to fill up $\mathcal{S}$ (Line 17); if $|\mathcal{S}| > k$, only the first $k$ points are kept. We then return the set $\mathcal{S}$ (Line 18). \vspace{-3mm} \begin{algorithm} \begin{small} \caption{RF-MinVar} \label{alg:rf} \KwIn{$\mathcal{D}=\{p_1, p_2, ...
, p_n\}$: a $d$-dimensional database; $k$: the size of the answer set.} \KwOut{$\mathcal{S}$: a size-$k$ subset of $\mathcal{D}$.} $\mathcal{S} \leftarrow \emptyset$\; \For {$i = 1, 2,..., d-1$} { Find $p^*_{i}$ which has the largest utility $ p^*_{i}.c_i$ in dimension $i$\; $c_i^\tau \leftarrow p^*_{i}.c_i$\; $\mathcal{S} \leftarrow \mathcal{S} \cup \{p^*_{i}\}$\; } $itr \leftarrow 0$\; \While{$|\mathcal{S}|< k$ and $itr < itr_{max}$}{ $t \leftarrow \lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor + itr$\; \For{$i=1,2,...,d-1$}{ $bps[i] \leftarrow FindBreakpoints(\mathcal{D}, t, n, i, c_i^\tau)$\; } \For{each $(d-1)$-integer combination $1 \le j_1 \le t, 1 \le j_2 \le t, ..., 1 \le j_{d-1} \le t$} { $B \leftarrow \{p \in \mathcal{D} | \forall i \in [1..d-1]: bps[i][j_i].lo \le p.c_i \le bps[i][j_i].hi \}$\; $s^* \leftarrow argmax_{p\in B} \ p.c_d$\; $\mathcal{S} \leftarrow \mathcal{S} \cup \{s^*\}$\; } Eliminate redundant points from $\mathcal{S}$\; $itr++$\; } $\mathcal{S} \leftarrow \mathcal{S} \cup \{k - |\mathcal{S}|$ random points in $\mathcal{D}$ that are not in $\mathcal{S}\}$\; \textbf{return} First $k$ points in $\mathcal{S}$\; \end{small} \end{algorithm} \vspace{-3mm} \textbf{Discussion.} RF-MinVar keeps the non-dominated points in $\mathcal{S}$ computed when $t = \lfloor (k-d+1)^{\frac{1}{d-1}} \rfloor$. Thus, it retains the upper bound of the maximum regret ratio derived in Section~\ref{sec:upperbound}. RF-MinVar can reduce the actual regret ratios obtained, as it may also return the points computed with larger $t$ values. The extra cost to scan and remove the redundant points is minimal, as $k$ is usually a very small number (e.g., less than 100). The main cost of RF-MinVar is in the loop to iterate through multiple values of $t$ and compute $\mathcal{S}$. In the experiments, we use $itr_{max}$ to control the maximum number of iterations. We observe that $itr_{max} = 11$ is sufficient in the data sets tested. 
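The redundant-point elimination in Line 15 amounts to a pairwise dominance scan. A minimal Python sketch (helper names are ours): a point is redundant exactly when a distinct point in $\mathcal{S}$ is at least as large in every dimension.

```python
def dominates(p, q):
    """p dominates q iff p is at least as large in every dimension."""
    return all(pc >= qc for pc, qc in zip(p, q))

def remove_redundant(S):
    """Drop every point dominated by a distinct point in S.

    Exact duplicates are kept; any MUF scores a dominated point
    no higher than its dominator, so nothing of value is lost.
    """
    return [p for p in S
            if not any(q != p and dominates(q, p) for q in S)]
```

The quadratic scan is negligible in practice since $|\mathcal{S}|$ is at most $k$ plus the bucket points of one iteration.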
\section{Case Studies}\label{sec:casestudy} In this section, we showcase the applicability of the MinVar algorithm by deriving the maximum regret ratio bounds when applying MinVar to $k$-regret queries with two special types of utility functions: the Cobb-Douglas function and the Constant Elasticity of Substitution (CES) function. \subsection{The K-Regret Query with Cobb-Douglas Functions} The Cobb-Douglas function was first proposed as a production function to model the relationship between multiple inputs and the amount of output generated~\cite{10.2307/1811556}. It was later generalized~\cite{vilcu2011geometric} and used as a utility function~\cite{DIAMOND19801}. \begin{defn}[Cobb-Douglas function] A generalized \emph{Cobb-Douglas function} with $d$ inputs $x_1, x_2, ..., x_d$ is a mapping $\mathcal{X}:\mathbb{R}_+^d\rightarrow\mathbb{R}_+$, $$\mathcal{X}(x_1,x_2, ..., x_d)=A\prod^d_{j=1}x_j^{\alpha_j}$$ Here, $A>0$ and $\alpha_j\geq0$ are the function parameters. \end{defn} The generalized Cobb-Douglas function is very similar to the MUF introduced in Section~\ref{sec:preliminaries}. The $d$ inputs can be seen as a data point of $d$ dimensions, where input $x_j$ is the utility in dimension $j$. MinVar can process the $k$-regret query with Cobb-Douglas functions straightforwardly. To derive an upper bound of the maximum regret ratio for a set of Cobb-Douglas functions $\mathcal{F} = \{\mathcal{X}_1, \mathcal{X}_2, ..., \mathcal{X}_n\}$, we transform each function $\mathcal{X}_i$ to an MUF by scaling the parameter $A$ to 1. It can be shown straightforwardly that this scaling does not affect the regret ratio or the maximum regret ratio. Assume that $x_j$ has been normalized into the range of $(1,2]$.
Then the regret ratio upper bound derived in Section~\ref{sec:upperbound} applies to the function $\mathcal{X}_i$, i.e., $$r\_ratio_\mathcal{D}(\mathcal{S}, \mathcal{X}_i) \le \ln(1+\frac{1}{t})^{\sum_{j=1}^{d-1}\alpha_j}$$ Here, each function $\mathcal{X}_i$ has a different set of parameters $\{\alpha_1, \alpha_2, ..., \alpha_d\}$. If $\sum^d_{j=1}\alpha_j\leq1$ holds for every $\mathcal{X}_i \in \mathcal{F}$, then the maximum regret ratio is bounded by $$mr\_ratio_\mathcal{D}(\mathcal{S}, \mathcal{F}) \le \ln(1+\frac{1}{t})$$ Otherwise, the maximum regret ratio is bounded by $$mr\_ratio_\mathcal{D}(\mathcal{S}, \mathcal{F}) \le \ln(1+\frac{1}{t})^{\alpha^\tau},$$ $\alpha^\tau = \max \{\sum_{j=1}^{d-1}\mathcal{X}_i.\alpha_j|\mathcal{X}_i \in \mathcal{F}, \mathcal{X}_i.\alpha_j \text{ is a parameter of } \mathcal{X}_i\}$. Similarly, the lower bound $\Omega(\frac{1}{k^2})$ of the maximum regret ratio derived in Section~\ref{sec:lowerbound} also applies. \subsection{The K-Regret Query with CES Functions} The CES function is closely related to the Cobb-Douglas function. It is also used as a production function as well as a utility function~\cite{varian1992microeconomic,vilcu2011some}. \begin{defn}[CES function] A generalized \emph{CES function} with $d$ inputs $x_1, x_2, ..., x_d$ is a mapping $\mathcal{X}:\mathbb{R}_+^d\rightarrow\mathbb{R}_+$, $$\mathcal{X}(x_1,x_2, ..., x_d)=A(\sum^d_{j=1}\alpha_j x_j^{\rho})^{\frac{\gamma}{\rho}}$$ Here, $A>0$, $\alpha_j\geq0$, $\rho<1$ ($\rho\ne 0$), and $\gamma>0$ are the function parameters. \end{defn} When $\rho$ approaches 0 in the limit, the CES function will become a Cobb-Douglas function. The MinVar algorithm can also process $k$-regret queries with CES utility functions. 
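The relationship between the two function families can be checked numerically. The sketch below is our own illustration (with $A = \gamma = 1$ and weights summing to 1, the case in which the limit holds): a CES utility approaches the corresponding Cobb-Douglas utility as $\rho \to 0$ from either side.

```python
import math

def ces(alphas, rho, p):
    """CES utility with A = gamma = 1."""
    return sum(a * x ** rho for a, x in zip(alphas, p)) ** (1 / rho)

def cobb_douglas(alphas, p):
    return math.prod(x ** a for a, x in zip(alphas, p))

point = (1.5, 1.8)      # toy point in (1, 2]
weights = (0.3, 0.7)    # weights summing to 1
```

Evaluating `ces(weights, rho, point)` for shrinking `rho` converges to `cobb_douglas(weights, point)`.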
To derive bounds for the maximum regret ratio, we simplify and rewrite the CES function $\mathcal{X}$ as a function $f$ of the following form (assuming that $A = \gamma=1$): $$f(p_i)=(\sum^d_{j=1}\alpha_j \cdot p_i.c_j^b)^{\frac{1}{b}}$$ Here, $0<b<1$ and $\alpha_j \ge 0$. It has been shown in an earlier paper~\cite{kessler2015k} that the maximum regret ratio for $k$-regret queries with CES utility functions is bounded between $\displaystyle \Omega (\frac{1}{bk^2})$ and $\displaystyle O(\frac{1}{bk^\frac{b}{d-1}})$. The lower bound also applies to our MinVar algorithm. In what follows, we derive a new upper bound which is tighter. We first derive a new upper bound for the regret ratio for a single CES utility function. \begin{thm} Let $f(p_i)=(\sum^d_{j=1}\alpha_j \cdot p_i.c_j^b)^{\frac{1}{b}}$ be a CES utility function, where $0<b<1$ and $\alpha_j \ge 0$. The regret ratio $r\_ratio_{\mathcal{D}}(\mathcal{S}, f)$ of a set $\mathcal{S}$ returned by MinVar satisfies $$r\_ratio_{\mathcal{D}}(\mathcal{S}, f) \le \displaystyle{\frac{(d-1)^{\frac{1}{b}}} {t+(d-1)^{\frac{1}{b}}}}$$ \end{thm} \begin{proof} Let $p^*$ be the point in $\mathcal{D}$ with the largest utility computed by $f$, and $s^*$ be the point in $\mathcal{S}$ that is selected in the same bucket where $p^*$ lies. We have: \begin{align*} regret_\mathcal{D}(\mathcal{S},f)& = \max_{p_i\in\mathcal{D}}f(p_i) - \max_{p_i\in\mathcal{S}}f(p_i)\\ &\le f(p^*)-f(s^*)\\ &=(\sum_{j=1}^{d}\alpha_j \cdot p^*.c_j^b)^{\frac{1}{b}}-(\sum_{j=1}^{d}\alpha_j \cdot s^*.c_j^b)^{\frac{1}{b}} \end{align*} Since $g(x) = x^\frac{1}{b}$ is convex when $0 < b < 1$, we have $g(x) - g(y) \le (x-y)g'(x)$.
Thus, \begin{align*} &regret_\mathcal{D}(\mathcal{S},f)\\ &\leq(\sum_{j=1}^{d}\alpha_j \cdot p^*.c_j^b-\sum_{j=1}^{d}\alpha_j \cdot s^*.c_j^b)\cdot \frac{1}{b}\cdot (\sum_{j=1}^{d}\alpha_j \cdot p^*.c_j^b)^{\frac{1}{b}-1}\\ &= \frac{1}{b}\left[\sum_{j=1}^{d}\alpha_j(p^*.c_j^b-s^*.c_j^b) \right](\sum_{j=1}^{d}\alpha_j \cdot p^*.c_j^b)^{\frac{1}{b}-1}\\ &\leq \frac{1}{b} \left[\sum_{j=1}^{d-1}\alpha_j(p^*.c_j^b-s^*.c_j^b) \right](\sum_{j=1}^{d-1}\alpha_j \cdot p^*.c_j^b)^{\frac{1}{b}-1} \end{align*} Consider another function $h(x) = x^b$, which is concave when $0<b<1$, and whose derivative $h'(x)$ is monotonically decreasing. According to Lagrange's Mean Value Theorem, there must exist some $\xi$ between $x$ and $y$ such that $x^b-y^b=(x - y)\cdot b\cdot \xi ^{b-1}$. Thus, we have \begin{align*} p^*.c_j^b-s^*.c_j^b&\leq |p^*.c_j-s^*.c_j|\cdot b\cdot (\min\{p^*.c_j,s^*.c_j\})^{b-1}\\ &\le \frac{c_j^\tau}{t}\cdot b\cdot {c_j^\tau}^{b-1} = \frac{b}{t}{c_j^\tau}^b \end{align*} \begin{figure*}[!htb] \minipage[c]{0.5\linewidth} \centering \subfloat[Cobb-Douglas]{\includegraphics[height=1.7in]{nba_k_cb.eps}} \subfloat[CES]{\includegraphics[height=1.7in]{nba_k_ces.eps}} \vspace{-2mm} \caption{Varying $k$ (NBA)} \label{fig:k_nba} \endminipage\hfill \minipage[c]{0.5\linewidth} \centering \subfloat[Cobb-Douglas]{\includegraphics[height=1.7in]{st_k_cb.eps}} \subfloat[CES]{\includegraphics[height=1.7in]{st_k_ces.eps}} \vspace{-2mm} \caption{Varying $k$ (Stocks)} \label{fig:k_stocks} \endminipage \vspace{-4mm} \end{figure*} Thus, \begin{align*} &regret_\mathcal{D}(\mathcal{S},f)\\ &\le \frac{1}{b}(\sum_{j=1}^{d-1}\alpha_j \frac{b}{t}{c_j^\tau}^b ) (\sum_{j=1}^{d-1}\alpha_j \cdot p^*.c_j^b)^{\frac{1}{b}-1} \\ &\leq \frac{(d-1)}{t}\cdot \max_{j \in [1..d-1]}\{\alpha_j {c_j^\tau}^{b} \}\cdot \left[ (d-1)\cdot \max_{j \in [1..d-1]}\{\alpha_j{c_j^\tau}^b\}\right]^{\frac{1}{b}-1} \\ &= \frac{(d-1)^{\frac{1}{b}} }{t} (\max_{j \in [1..d-1]}\{\alpha_j{c_j^\tau}^b\})^{\frac{1}{b}} \\ &\leq
\frac{(d-1)^{\frac{1}{b}} }{t} (\max_{j \in [1..d-1]}\{\sum_{l=1}^{d}\alpha_l\cdot {p^*_j.c_l}^b\})^{\frac{1}{b}}\\ &=\frac{(d-1)^{\frac{1}{b}} }{t} \max_{j \in [1..d-1]}\{(\sum_{l=1}^{d}\alpha_l \cdot {p^*_j.c_l}^b)^{\frac{1}{b}}\} \end{align*} Let $\sigma = \displaystyle \frac{(d-1)^{\frac{1}{b}} }{t}$. Then, $$regret_\mathcal{D}(\mathcal{S},f) \le \sigma \max_{j \in [1..d-1]}\{(\sum_{l=1}^{d}\alpha_l \cdot {p^*_j.c_l}^b)^{\frac{1}{b}}\}$$ Since $p_j^* \ (j \in [1..d-1])$ is in $\mathcal{S}$, we have \begin{align*} \max_{p_i\in \mathcal{S}}f(p_i) &\geq \max_{j \in [1..d-1]}f(p_j^*) = \max_{j \in [1..d-1]}\{ ( \sum^d_{l=1}{\alpha_l \cdot p^*_j.c_l}^b)^{\frac{1}{b}} \}\\ &\ge \frac{1}{\sigma}regret_\mathcal{D}(\mathcal{S},f). \end{align*} The regret ratio $r\_ratio_{\mathcal{D}}(\mathcal{S}, f)$ is hence bounded by \begin{align*} r\_ratio_{\mathcal{D}}(\mathcal{S}, f)&=\frac{regret_\mathcal{D}(\mathcal{S},f)}{regret_\mathcal{D}(\mathcal{S},f)+\max_{p_i\in \mathcal{S}}f(p_i)}\\ &= \frac{1}{1+{\frac{\max_{p_i\in \mathcal{S}}f(p_i)}{regret_\mathcal{D}(\mathcal{S},f)}}}\\ &\leq \frac{1}{1+\frac{1}{\sigma}}=\frac{\sigma}{1+\sigma} = \frac{(d-1)^{\frac{1}{b}}} {t+(d-1)^{\frac{1}{b}}} \end{align*} \vspace{-2mm} \end{proof} Therefore, given a set $\mathcal{F}$ of CES functions, the maximum regret ratio $mr\_ratio_{\mathcal{D}}(\mathcal{S}, \mathcal{F})$ satisfies: $$mr\_ratio_{\mathcal{D}}(\mathcal{S}, \mathcal{F}) \leq \frac{(d-1)^{\frac{1}{b}}} {t+(d-1)^{\frac{1}{b}}}$$ We can see from this bound that, when $k$ decreases or $d$ increases, the maximum regret ratio is expected to increase. For simplicity, we say that this bound scales as $O(\frac{1}{k^{\frac{1}{d-1}}})$. This bound is tighter than the bound $O(\frac{1}{bk^\frac{b}{d-1}})$ obtained in~\cite{kessler2015k} since $0<b<1$.
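The behavior of this bound is easy to tabulate numerically. The Python sketch below (the function name is our own) evaluates $(d-1)^{1/b}/(t+(d-1)^{1/b})$ and shows how the bound tightens as $t$ (hence $k$) grows and loosens as $d$ grows, using sample values $d=2$ and $b=0.5$.

```python
def ces_upper_bound(d, t, b):
    # Upper bound on the maximum regret ratio for CES utilities:
    # (d-1)^(1/b) / (t + (d-1)^(1/b))
    c = (d - 1) ** (1.0 / b)
    return c / (t + c)

# d = 2, b = 0.5: (d-1)^(1/b) = 1, so the bound is 1/(t+1).
print(ces_upper_bound(2, 2, 0.5))   # 1/3
print(ces_upper_bound(2, 19, 0.5))  # 1/20 = 0.05
# A larger d makes the bound looser for the same t.
print(ces_upper_bound(4, 19, 0.5))
```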
To give an example, consider $\mathcal{F} = \{f_5, f_6\}$ where $f_5(p_i) = (0.5 p_i.c_1^{0.5} + 0.5 p_i.c_2^{0.5})^{2}$ and $f_6(p_i) =( 0.99 p_i.c_1^{0.5} +0.01 p_i.c_2^{0.5})^{2}$. Then $d = 2$. Let $k = 3$, which means $t = 2$. The upper bound on the maximum regret ratio is $\frac{(d-1)^{\frac{1}{b}}} {t+(d-1)^{\frac{1}{b}}} = \frac{1} {2+1} \approx 33.33\%$. As $k$ increases (e.g., to $20$), this upper bound will decrease (e.g., to $ \frac{1} {19+1} = 5\%$). \section{Experiments}\label{sec:exp} In this section, we evaluate the empirical performance of the two proposed algorithms, MinVar and RF-MinVar. \subsection{Settings} All the algorithms are implemented in C++, and the experiments are conducted on a computer running the OS X 10.11 operating system with a 64-bit 2.7 GHz Intel\textsuperscript{\textregistered} Core\textsuperscript{(TM)} i5 CPU and 8 GB RAM. Both real and synthetic data sets are used in the experiments. The real data sets used are the \emph{NBA}\footnote{http://www.databasebasketball.com} and the \emph{Stocks}\footnote{http://pages.swcp.com/stocks} data sets, which have been used in previous studies on $k$-regret queries~\cite{peng2014geometry, peng2015k}. After filtering out data points with null fields, we obtain 20,640 data points of 7 dimensions in the NBA data set. The Stocks data set contains 122,574 data points of 5 dimensions. The synthetic data sets are generated by the \emph{anti-correlated} data set generator~\cite{borzsony2001skyline}, which is a popular data generator used in skyline query studies~\cite{papadias2003optimal,pei2007probabilistic,tao2009distance}. We generate synthetic data sets with cardinality ranging from 100 to 1,000,000 and dimensionality ranging from 2 to 10. We vary the query parameter $k$ from 10 to 34 in the experiments. Table~\ref{tbl:setting} summarizes the parameters and the values used.
\begin{table} \vspace{-2mm} \centering \begin{small} \caption{Experimental Settings}\label{tbl:setting} \BlankLine\BlankLine \begin{tabular}{ccc} \toprule \multicolumn{1}{c}{Parameter} & Values & Default \\ \midrule Utility function & Cobb-Douglas, CES & - \\ Data set & NBA, Stocks, Anti-correlated & -\\ $n$ & 100, 1000, 10000, 100000, 1000000 & 10000 \\ $d$ & 2..10 & 3\\ $k$ & 10..34 & 20\\ \bottomrule \end{tabular} \end{small} \vspace{-4mm} \end{table} In each set of experiments, we randomly generate 10,000 different sets of parameters $\{\alpha_i| i\in[1..d], \alpha_i \in [0, 1]\}$ for each of the generalized Cobb-Douglas function and the CES function, where $\sum_{i=1}^{d}\alpha_i = 1$. The CES function has an extra parameter $b$. We generate random values of $b$ in the range of $[0.1,0.9]$. We run the algorithms on the data sets, and report the maximum regret ratio on the generated utility functions, denoted as \emph{RR} in the result figures. Five algorithms are tested in the experiments: \begin{figure*}[!htb] \minipage[c]{0.49\linewidth} \centering \subfloat[Cobb-Douglas]{\includegraphics[height=1.7in]{varing_k_cb.eps}} \subfloat[CES]{\includegraphics[height=1.7in]{varing_k_ces.eps}} \vspace{-2mm} \caption{Varying $k$ (Anti-correlated) } \label{fig:k_anti} \endminipage\hfill \minipage[c]{0.49\linewidth} \centering \subfloat[Cobb-Douglas]{\includegraphics[height=1.7in]{varing_d_cb.eps}} \subfloat[CES]{\includegraphics[height=1.7in]{varing_d_ces.eps}} \vspace{-2mm} \caption{Varying $d$ (Anti-correlated)} \label{fig:d_anti} \endminipage \vspace{-7mm} \end{figure*} \begin{figure*}[!htb] \minipage[c]{0.49\linewidth} \centering \subfloat[Cobb-Douglas]{\includegraphics[height=1.7in]{varing_n_cb.eps}} \subfloat[CES]{\includegraphics[height=1.7in]{varing_n_ces.eps}} \vspace{-2mm} \caption{Varying $n$ (Anti-correlated)} \label{fig:n_anti} \endminipage\hfill \minipage[c]{0.49\linewidth} \centering \subfloat[Varying 
$d$]{\includegraphics[height=1.7in]{time_varing_d.eps}} \subfloat[Varying $n$]{\includegraphics[height=1.7in]{time_varing_n.eps}} \vspace{-2mm} \caption{Running time tests (Anti-correlated)} \label{fig:time_anti} \endminipage \vspace{-5mm} \end{figure*} \begin{itemize} \item \emph{MinVar} is the algorithm proposed in Section~\ref{sec:thealgo}. We use $inc = 0.01\%n$ to control the number of iterations that the sub-algorithm FindBreakpoints runs for. \item \emph{RF-MinVar} is the improved MinVar algorithm powered by the heuristic proposed in Section~\ref{sec:heuristic}. We find that $itr_{max}=11$ is sufficient to handle the data sets tested. The results obtained are based on this setting. \item \emph{MinWidth} is an algorithm proposed by Kessler Faulkner et al.~\cite{kessler2015k} with bounded maximum regret ratios for $k$-regret queries with CES functions. \item \emph{Area-Greedy} is a greedy algorithm proposed by Kessler Faulkner et al.~\cite{kessler2015k} with good practical maximum regret ratios (but no bounds) for CES functions. Note that we do not compare with \emph{Angle}~\cite{kessler2015k} which is another greedy algorithm because it has been shown to produce larger regret ratios than those of Area-Greedy. \item \emph{MaxDom} is a greedy algorithm proposed by Lin et al.~\cite{lin2007selecting} that returns the $k$ representative skyline points which dominate the largest number of other points. \end{itemize} \subsection{Results} \textbf{Effect of $k$.} We first test the effect of $k$ on the maximum regret ratio. Figures~\ref{fig:k_nba} and~\ref{fig:k_stocks} show the result where $k$ is varied from 10 to 34 on the two real data sets. Note that Figure~\ref{fig:k_nba} shows up to $k = 22$ because the maximum regret ratios of the algorithms have become stable beyond this point. We also omit MinWidth as it is known to produce larger maximum regret ratios than those of Area-Greedy~\cite{kessler2015k}. 
We can see from these two figures that, as $k$ increases, the maximum regret ratios of the algorithms either decrease or stay stable. This confirms the upper bound derived. It is expected, as a larger $k$ means more points are returned and a higher probability of satisfying more utility functions. On the NBA data set, the proposed algorithm MinVar outperforms the baseline algorithms by more than an order of magnitude on the Cobb-Douglas functions, and by more than a factor of five on the CES functions (note the logarithmic scale). This is because the algorithm selects the $k$ points from a much more evenly divided data space, and the points selected are much closer to the optimal points for the various utility functions. Powered by the redundant-point removal heuristic, RF-MinVar achieves even smaller maximum regret ratios, as it avoids selecting the dominated points which do not contribute to the regret ratios. On the Stocks data set, similar patterns are observed. MinVar outperforms the baseline algorithms in most cases, and RF-MinVar outperforms the baseline algorithms in all cases tested. These results confirm the superiority of the proposed algorithms. In Figure~\ref{fig:k_anti} we show the result of varying $k$ on a synthetic data set. On this data set, MaxDom obtains the smallest maximum regret ratios. This is because the data points in this synthetic data set follow an anti-correlated distribution. The data points tend \emph{not} to be dominated by each other and tend to be optimal for different utility functions. It is intrinsically difficult to find a small number of data points that are optimal for a large number of utility functions under such a data distribution. MaxDom is designed to handle this type of data while our algorithms are not. Note that, even under such an extreme case, the maximum regret ratios obtained by the proposed algorithms are still very close to those obtained by MaxDom, i.e., the difference is less than 0.04 (cf. Figure~\ref{fig:k_anti} (b)).
Also, \emph{MaxDom does not have a bound on the maximum regret ratio obtained while our algorithms do.} Meanwhile, the proposed algorithms both again outperform the other baseline algorithm, Area-Greedy. \textbf{Effect of $d$.} Next we test the algorithm performance by varying the number of dimensions $d$ from 2 to 10 with synthetic data. Figure~\ref{fig:d_anti} shows the result. Similar to Figure~\ref{fig:k_anti}, on the synthetic data, MaxDom obtains the smallest maximum regret ratios, while those of the proposed algorithms are very close. Another observation is that, as $d$ increases, the maximum regret ratios increase overall for all the algorithms. This again confirms the bounds obtained, and is expected as the difference between the optimal points in $\mathcal{D}$ and $\mathcal{S}$ accumulates as there are more dimensions. Fluctuations are observed in the maximum regret ratios. This is because the maximum regret ratios are quite small already (i.e., less than 0.20). A change in the data set may have a random impact on the maximum regret ratios, even though the overall trend is increasing. \textbf{Effect of $n$.} We further test the scalability of the proposed algorithms by varying the data set cardinality from 100 to 1,000,000. The comparative performance of the algorithms is shown in Figure~\ref{fig:n_anti}, which again is similar to that shown in Figure~\ref{fig:d_anti}. Note that, while the bounds on the \emph{maximum} regret ratio do not change when $n$ changes, the \emph{actual} regret ratios obtained by the algorithms may change as $n$ changes. This is natural, as the optimal points have a smaller probability of being returned when there are more points in the data set. However, this change in the actual regret ratios is still bounded. We observe that, for the synthetic data sets with up to 1,000,000 data points, the maximum regret ratios of the proposed algorithms are below 0.10. This confirms the scalability of the proposed algorithms.
\textbf{Running time results.} We also show the running time of the algorithms on the synthetic data sets in Figure~\ref{fig:time_anti}. Note that the utility functions are irrelevant in this set of experiments, as the algorithms do not rely on any particular utility function. We see that the proposed algorithms are the fastest in almost all the cases tested. MinWidth is the only baseline algorithm with a relatively close performance, but it has much larger maximum regret ratios. The proposed algorithms outperform the other two baseline algorithms by more than an order of magnitude. MaxDom is the slowest. It takes almost 500 seconds to process 1,000,000 data points (cf. Figure~\ref{fig:time_anti}~(b)), while the proposed algorithms can finish in just over a second. This again confirms the scalability of the proposed algorithms. \vspace{-5mm} \section{Conclusions}\label{sec:conclusions} We overcame the barrier of multiplicative utility functions and presented an algorithm named MinVar to process $k$-regret queries with such functions. We showed that MinVar can produce query answers with a bounded maximum regret ratio. In particular, when applied to $k$-regret queries with Cobb-Douglas functions, MinVar achieved a maximum regret ratio bounded between $\displaystyle \Omega(\frac{1}{k^2})$ and $\displaystyle O(\ln(1+\frac{1}{k^{\frac{1}{d-1}}}))$; when applied to $k$-regret queries with Constant Elasticity of Substitution functions, MinVar achieved a maximum regret ratio bounded between $\displaystyle \Omega (\frac{1}{bk^2})$ and $\displaystyle O(\frac{1}{k^{\frac{1}{d-1}}})$. We further proposed a heuristic to lower the maximum regret ratio obtained by MinVar, resulting in an improved algorithm named RF-MinVar. We performed extensive experiments using both real and synthetic data to evaluate the performance of both algorithms. The results showed that the regret ratios of the answers produced by MinVar and RF-MinVar are consistently small.
Meanwhile, both algorithms are more efficient than the baseline algorithms. This study opens up opportunities to explore $k$-regret queries with various other types of multiplicative utility functions. It would also be interesting to see how $k$-regret queries with a mix of both additive and multiplicative utility functions can be answered with a bounded regret ratio. \vspace{-6mm} \section*{Acknowledgments} We thank Dr. Ashwin Lall for sharing the code of MinWidth and Area-Greedy, and providing feedback on our work. \bibliographystyle{abbrv} \small
\section{Introduction} Understanding interface kinetics in materials is an important theoretical and practical problem, since this process influences the microstructure, such as the grain size, the texture, and the interface type. From the theoretical point of view, the study of processes that involve surface and interface properties has gained significant interest in non-equilibrium statistical mechanics \cite{barabasi,krug}. In particular, dislocations \cite{zaiser,zapperi} and grain boundaries \cite{moretti,moretti2} provide a concrete example of driven elastic manifolds in random media \cite{kardar}. Other examples of this general problem are domain walls in ferromagnets \cite{lemerle,zapperi2}, flux lines in type II superconductors \cite{bhattacharya,surdeanu}, contact lines \cite{schaffer,rolley} and crack fronts \cite{bouchaud,schmittbuhl}. From the point of view of applications, understanding grain boundary kinetics is of great importance for polycrystalline materials, since the resulting grain microstructure determines material properties such as strength, hardness, resistance to corrosion, conductivity, etc. \cite{sutton}. Hence the ambitious goal of these studies is to be able to control the microstructural properties of polycrystals. Several approaches have been employed in the literature to study grain boundary kinetics. Ref.~\cite{trautt} employs molecular dynamics (MD) simulations with appropriate interatomic interactions to study the diffusion of grain boundaries at the atomic scale. The method makes it possible to quantify the mobility of grain boundaries and to compare the results with experiments \cite{trautt}. While MD simulations provide a very accurate description of the dynamics, the method suffers from numerical limitations and it is difficult to reach the asymptotic regime. An alternative method is provided by the Langevin approach, in which the grain boundary is assumed to evolve stochastically in an external potential \cite{risken}.
The dynamics of the underlying crystalline medium enters the problem only through the noise term (due to lattice vibrations) and the periodic potential (Peierls-Nabarro). Hence, the equations of motion of the atoms or molecules are not directly relevant. Indeed, there is experimental evidence supporting a separation of time scales in plastic flow \cite{ananthakrishna}, and it is thus possible to integrate out the fast degrees of freedom (atomic vibrations) and consider only the slow ones (dislocation positions). Here we study the evolution of a grain boundary (GB) in a crystalline material by the Langevin approach. The GB is treated as an array of interacting dislocations performing a thermally activated motion in a periodic (Peierls-Nabarro) potential. Similar models have been employed in the past to study the conductivity of superionic conductors \cite{fulde,dieterich}, the relaxational dynamics of rotators \cite{marchesoni} and Josephson tunneling junctions \cite{risken}. Notice that the crucial role played by long-range stresses is often disregarded in analyzing GB deformation. On the other hand, it has been shown in Ref.~\cite{moretti} that a surface tension approximation for the GB stiffness is inappropriate and one has to consider non-local interactions explicitly. The present model incorporates this effect in the equations of motion. We simulate the set of Langevin equations numerically to describe the GB kinetics and its fluctuations. The results are in good agreement with MD simulations \cite{trautt} and allow us to clarify the origin of the short-time deviations from the diffusive behavior observed in Ref.~\cite{trautt}. In addition, a linearized version of the model can be treated analytically, and the asymptotic results are found to be in good agreement with the simulations. The manuscript is organized as follows: in Sec. II we introduce the model, which is first studied in the flat GB limit in Sec. III. Sec.
IV presents numerical simulations of the full flexible GB problem and Sec. V discusses the continuum theory. Sec. VI is devoted to conclusions. \section{The model} To study the GB dynamics we consider a phenomenological mesoscopic approach. We consider in particular the case of a regularly spaced low-angle grain boundary schematized as an array of straight dislocations that interact with each other through long-range stress fields and with the crystalline Peierls-Nabarro (PN) potential. The GB is composed of $N$ dislocations whose configurations are repeated {\it ad infinitum} because of periodic boundary conditions along the $y$ direction. Each dislocation has a Burgers vector of modulus $b$ parallel to the $x$ axis, and the distance between two adjacent dislocations along the $y$ direction is fixed to be $a$. Each straight dislocation interacts with the lattice and with the other dislocations through long-range stress fields. The effect of the lattice on the $n$-th dislocation can be decomposed as the sum of three contributions:\\ \begin{itemize} \item $F_{PN}(x_n)=-\mathcal{A}\frac{\mbox{\normalsize{$\mu b$}}}{\mbox{\normalsize{$2\pi r_0$}}}\sin(\frac{\mbox{\normalsize{$2\pi x_n$}}}{\mbox{\normalsize{$b$}}})$, the PN force, where $\mathcal{A}$ is the area of the GB, $\mu$ is the shear modulus and $r_0$ the inter-atomic distance; \item -$\gamma \dot{x}_n(t)$, the average effect of the lattice fluctuations, where $\gamma$ is the viscosity coefficient; \item $\gamma\eta_n(t)$, the impulsive effect of the lattice fluctuations, assumed to be Gaussian by the central limit theorem and uncorrelated in space and time: $\langle\eta_n(t)\rangle=0$, $\langle\eta_n(t)\eta_m(t')\rangle=D\delta_{nm}\delta(t-t')$, where $D$ is the diffusion coefficient \cite{risken}.
\end{itemize} \vspace{0.5cm} The long-range stress field exerted by all the other dislocations on the $n$-th one, the Peach-Koehler force $F_{PK}^{n,N}({\bf x},{\bf y})$, is computed using the method of image dislocations to comply with periodic boundary conditions along the $y$ direction. Making use of the calculations in \cite{hirth,friedel}, one finds the following expression \begin{equation} \begin{array}{l} \fl F_{PK}^{n,N}({\bf x},{\bf y})= -\frac{\mbox{\normalsize{$\mu b^2\pi$}}}{\mbox{\normalsize{$N^2a^2(1-\nu)$}}}\sum_{m=1}^{N}(x_n-x_m)\cdot\\ \\ \cdot\frac{\mbox{\normalsize{$\{\cosh[2\pi(x_n-x_m)/Na]\cos[2\pi(y_n-y_m)/Na]-1\}$}}} {\mbox{\normalsize{$\{\cosh[2\pi(x_n-x_m)/Na]-\cos[2\pi(y_n-y_m)/Na]\}^2$}}}, \end{array} \end{equation} \vspace{0.5cm} where $\nu$ is Poisson's ratio, $y_n=n\cdot a$ and $y_m=m\cdot a$. Finally, the overdamped Langevin equation \cite{risken} for the GB reads \begin{equation}\label{general_GB} \gamma\dot{x}_n(t)=F_{PN}(x_n)+F_{PK}^{n,N}({\bf x},{\bf y})+\gamma\eta_n(t) , \end{equation} for $n=1,...,N$, or rather \begin{equation}\label{flexible_GB} \begin{array}{l} \fl \dot{x}_n(t) = -\mathcal{A}\frac{\mbox{\normalsize{$\mu b$}}}{\mbox{\normalsize{$2\pi r_0\gamma$}}}\sin(\frac{\mbox{\normalsize{$2\pi x_n$}}}{\mbox{\normalsize{$b$}}})-\frac{\mbox{\normalsize{$\mu b^2\pi$}}}{\mbox{\normalsize{$N^2a^2(1-\nu)\gamma$}}}\sum_{m=1}^{N}(x_n-x_m)\cdot\\ \\ \cdot\frac{\mbox{\normalsize{$\{\cosh[2\pi(x_n-x_m)/Na]\cos[2\pi(y_n-y_m)/Na]-1\}$}}} {\mbox{\normalsize{$\{\cosh[2\pi(x_n-x_m)/Na]-\cos[2\pi(y_n-y_m)/Na]\}^2$}}}+\eta_n(t) . \end{array} \end{equation} \vspace{0.5cm} To indicate the amplitudes of the $F_{PN}$ and $F_{PK}$ forces, we introduce the parameters $A_{PN}=\mathcal{A}\mu b/2\pi r_0\gamma$ and $A_{PK}=\mu b^2\pi/a^2(1-\nu)\gamma$, respectively.
The key quantities that we consider in order to characterize the dynamics of the GB are: \begin{itemize} \item the mean-square displacement of the center of mass, $\Delta x_{cm}(t)=\langle x_{cm}^2(t)\rangle-\langle x_{cm}(t)\rangle^2=\langle \overline{x_n(t)}^2\rangle-\langle \overline{x_n(t)}\rangle^2$ where $x_{cm}(t)=\overline{x_n}=1/N\sum_{n=1}^{N}x_n(t)$; \item the mean-square width $W^2(t)=\overline{\langle x_n^2\rangle}-\langle(\overline{x_n})^2\rangle$. \end{itemize} In the following, we first analyze the case of a flat GB, for which a comparison with the MD simulation approach of Ref.~\cite{trautt} is made. Next we consider the full flexible description of the GB. Finally, we discuss a linearized version of the model that can be treated analytically. \section{Flat grain boundary} For many applications a good approximation is to consider a flat GB with a single degree of freedom, for which $F_{PK}^{n,N}({\bf x},{\bf y})=0$ and $x_n(t)=x_{cm}(t)$ for $n=1,...,N$. In other words, the flat GB is described by the following equation \begin{equation}\label{flat_GB} \dot{x}_{cm}(t)=-\mathcal{A}\frac{\mu b}{2\pi r_0\gamma}\sin(\frac{2\pi x_{cm}}{b})+\eta(t) , \end{equation} where the correlation properties of the thermal fluctuations are: $\langle\eta(t)\rangle=0$ and $\langle\eta(t)\eta(t')\rangle=D\delta(t-t')$. This type of equation, also known as the Kramers equation with a periodic potential, has been extensively studied in the literature \cite{risken}. In particular, the mean-square displacement is known to display a combination of oscillatory and diffusive behavior \cite{risken,ferrando}. Different dynamical regimes are found as the potential strength or the friction varies \cite{ferrando}. In fact, we show next that this simple model allows us to understand the short-time deviations from diffusive behavior observed in MD simulations \cite{trautt}.
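Eq.~\ref{flat_GB} can be integrated with a simple Euler-Maruyama scheme. The following Python sketch estimates the variance of $x_{cm}$ at a fixed time over an ensemble of runs; all parameter values, and the function name itself, are illustrative choices of ours, not the values fitted to the MD data.

```python
import math
import random

def msd_flat_gb(A_PN=0.4, b=3 * math.pi, D=0.25, dt=1e-3, steps=2000,
                runs=200, seed=1):
    """Euler-Maruyama integration of dx/dt = -A_PN sin(2 pi x / b) + eta(t),
    with <eta(t) eta(t')> = D delta(t - t'); returns the ensemble estimate
    of <x^2> - <x>^2 at the final time (illustrative parameters only)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        x = 0.0
        for _ in range(steps):
            # Gaussian increment of variance D * dt plus the drift term
            x += -A_PN * math.sin(2 * math.pi * x / b) * dt \
                 + rng.gauss(0.0, math.sqrt(D * dt))
        finals.append(x)
    mean = sum(finals) / runs
    return sum((xi - mean) ** 2 for xi in finals) / runs

# With A_PN = 0 the motion is free and the variance is close to D * t;
# the PN force suppresses the spreading at early times.
print(msd_flat_gb(A_PN=0.0), msd_flat_gb())
```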
Integrating Eq.~\ref{flat_GB} with the initial condition $x_{cm}(0)=0$ by means of computer simulations (fitting $D$ and $\gamma$ with the condition $D\gamma=D_R/\mathcal{M}$ \cite{risken}, where $\mathcal{M}$ is the mobility and $D_R$ is the renormalized diffusion coefficient considered in \cite{trautt}), we have compared the mean-square displacement $\Delta x_{cm}(t)$ to the one obtained from the MD simulation in Ref.~\cite{trautt}. In Fig.~\ref{graphic1} this comparison is displayed together with the mean-square displacement of the renormalized free Brownian motion (described by the equation $\dot{x}=\eta(t)$ with $\langle\eta(t)\rangle=0$ and $\langle\eta(t)\eta(t')\rangle=D_R\delta(t-t')$). The agreement between the two simulations is extremely good. For longer times ($t>80$~ps), the mean-square displacement tends to that of the renormalized Brownian motion. Hence, explicitly taking into account the sinusoidal Peierls-Nabarro force in the Langevin equation allows us to describe the mean-square displacement at early times of the dynamics. \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure1.eps} \end{center} \caption{Mean-square displacement $\Delta x_{cm}(t)$ of the flat grain boundary: comparison between the molecular dynamics simulation \cite{trautt} (green line) and the Langevin approach simulation (dashed blue line). Both types of simulation predict a linear dependence on time of $\Delta x_{cm}(t)$ for long times, represented in the figure by the renormalized free Brownian motion (straight black line).} \label{graphic1} \end{figure} One can deduce in a simplified, intuitive way the temporal evolution of the mean-square displacement starting from the transition probability density $P$ for small times (small $\tau$) \cite{risken} \begin{equation} P(x,t+\tau|x',t)=\frac{1}{2\sqrt{\pi D\tau}}\ \ \mbox{\Large{$e$}}^{\mbox{\large{$-\frac{[x-x'-F_{PN}(x)\tau]^2}{4D\tau}$}}}.
\end{equation} Next we consider the transition probability $P_{01}(x_0+\Delta x_1,t_0+\tau|x_0,t_0)$ to run from the point $x_0$ at time $t_0$ to the point $x_1=x_0+\Delta x_1$ at time $t_1=t_0+\tau$, and $P_{12}(x_0+\Delta x_1+\Delta x_2,t_0+2\tau|x_0+\Delta x_1,t_0+\tau)$ to run from $x_1$ at $t_1$ to $x_2=x_1+\Delta x_2$ at $t_2=t_1+\tau$ \begin{equation}\left\{\begin{array}{rll} P_{01} & = & \mbox{\Large{$\frac{1}{2\sqrt{\pi D\tau}}$}} \ \ \mbox{\Large{$e$}}^{\mbox{\large{$-\frac{[\Delta x_1-F_{PN}(x_0)\tau]^2}{4D\tau}$}}}\\ &&\\ P_{12} & = & \mbox{\Large{$\frac{1}{2\sqrt{\pi D\tau}}$}} \ \ \mbox{\Large{$e$}}^{\mbox{\large{$-\frac{[\Delta x_2-F_{PN}(x_0+\Delta x_1)\tau]^2}{4D\tau}$}}}.\\ \end{array}\right. \end{equation} For a free Brownian motion ($F_{PN}(x)=0$) the condition $P_{01}=P_{12}$ implies $\Delta x_1=\Delta x_2$, and the stochastic displacements are space independent. If we impose this condition in the presence of a periodic force $F_{PN}(x)=-dU_{PN}(x)/dx$, we obtain \begin{equation}\begin{array}{l} P_{01}=P_{12} \ \ \Rightarrow \ \ \Delta x_1-F_{PN}(x_0)\tau=\Delta x_2-F_{PN}(x_0+\Delta x_1)\tau \ \ \Rightarrow\\ \\ \Rightarrow \ \ \Delta x_2=\Delta x_1\left[1+\left.\mbox{\large{$\frac{dF_{PN}(x)}{dx}$}}\right|_{x_0}\tau\right] , \end{array} \end{equation} and then \begin{equation} \Delta x_2 \gtrless\Delta x_1 \ \ \ \ \mbox{if} \ \ \ \ \left.\frac{dF_{PN}}{dx}\right|_{x_0}\gtrless 0 \ \ \ \ \mbox{or rather} \ \ \ \ \left.\frac{d^2U_{PN}}{dx^2}\right|_{x_0}\lessgtr 0 . \end{equation} This result implies that, with the initial condition $x_{cm}(0)=0$, if the potential $U_{PN}(x)$ is convex (concave) the mean-square displacement curve is concave (convex). In the case of the PN potential, we find indeed that the mean-square displacement curve should display upward and downward deviations from the straight line corresponding to a renormalized free Brownian motion, depending on $d^2U_{PN}(x)/dx^2$.
These deviations decrease with time, so that for large times the curve should approach a straight line \cite{risken}. \section{Flexible grain boundary} A more general description of the GB considers its internal deformation, and the dynamics is described by Eq.~\ref{flexible_GB}. The dynamical behavior of the GB depends on the amplitudes of the three terms on the right-hand side of Eq.~\ref{flexible_GB}. \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure2.eps} \end{center} \caption{Mean-square width of the grain boundary, $W^2(t)$, for $N=32$ in the two typical situations: (continuous line) the GB exfoliates ($W^2(t)$ increases with time) because the noise is high enough with respect to $A_{PK}$ and $A_{PN}$; (dashed line) the GB reaches a stationary state ($W^2(t)$ saturates after a certain time) because the noise is small enough with respect to $A_{PK}$ or $A_{PN}$.} \label{graphic8} \end{figure} The parameters that characterize the behavior of the GB are $a$, $b$, $A_{PK}$, $A_{PN}$ and $D$. Varying the values of these parameters, in the long-time limit the GB can either exfoliate (when the noise $D$ is high enough with respect to $A_{PK}$ and $A_{PN}$) or reach a stationary state (when the noise $D$ is small compared to $A_{PK}$ or $A_{PN}$). The asymptotic behavior can be read off from the width $W^2(t)$, which keeps increasing when the GB exfoliates and saturates when the GB remains stable. In Fig.~\ref{graphic8} the comparison between these two typical situations is displayed in the case of $N=32$, $a=b=3\pi$, $A_{PN}=0,0.1,0.2,0.4$, $D=0.25$ and $A_{PK}=0.14$ for the case in which the GB remains stable, while $A_{PK}=0.0896$ for the case in which the GB exfoliates. In what follows, we analyze the dynamical behavior of the stable GB for $a=b=3\pi$, $A_{PN}=0,0.4$, $A_{PK}=0.14$ and $D=0.25$.
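A minimal Euler-Maruyama integration of Eq.~\ref{flexible_GB} can be sketched as follows, transcribing the Peach-Koehler kernel as printed and excluding the self-interaction term from the sum; the time step, number of steps and function name are our own illustrative choices, so the sketch reproduces the qualitative behavior rather than the reported curves.

```python
import math
import random

def gb_width(N=32, a=3 * math.pi, b=3 * math.pi, A_PN=0.4, A_PK=0.14,
             D=0.25, dt=1e-2, steps=500, seed=1):
    """Integrate the flexible-GB Langevin equation and return the
    mean-square width W^2 at the final time. Since y_n - y_m = (n - m) a,
    the cos argument reduces to 2 pi (n - m) / N."""
    rng = random.Random(seed)
    x = [0.0] * N
    for _ in range(steps):
        new_x = []
        for n in range(N):
            f_pk = 0.0
            for m in range(N):
                if m == n:          # exclude the self-interaction
                    continue
                dx = x[n] - x[m]
                u = 2 * math.pi * dx / (N * a)
                v = 2 * math.pi * (n - m) / N
                num = math.cosh(u) * math.cos(v) - 1.0
                den = (math.cosh(u) - math.cos(v)) ** 2
                f_pk += dx * num / den
            f_pk *= -A_PK / N ** 2  # prefactor of the PK sum in Eq. (3)
            f_pn = -A_PN * math.sin(2 * math.pi * x[n] / b)
            new_x.append(x[n] + (f_pn + f_pk) * dt
                         + rng.gauss(0.0, math.sqrt(D * dt)))
        x = new_x
    mean = sum(x) / N
    return sum(xi * xi for xi in x) / N - mean * mean

# With D = 0 a flat boundary stays flat; with noise, W^2 stays of order
# one in this pinned regime rather than growing without bound.
print(gb_width())
```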
In Fig.~\ref{graphic4} the mean-square displacement of the GB center of mass, $\Delta x_{cm}(t)$, is displayed with and without the PN force in log-log scale for $N=32,64,128,256,512$. \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure3.eps} \includegraphics[clip=true,width=10cm]{figure4.eps} \end{center} \caption{Mean-square displacement of the center of mass of the grain boundary, $\Delta x_{cm}(t)$, for the Langevin approach simulation without the PN force (a) and with the PN force (b). $\Delta x_{cm}(t)$ for $N=32,64,128,256,512$ is displayed in log-log scale.} \label{graphic4} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure5.eps} \includegraphics[clip=true,width=10cm]{figure6.eps} \end{center} \caption{Mean-square width of the grain boundary, $W^2(t)$, for the Langevin approach simulation without the PN force (a) and with the PN force (b). $W^2(t)$ for $N=32,64,128,256,512$ is displayed in log-log scale.} \label{graphic2} \end{figure} For long times in both cases we have a linear behavior $\Delta x_{cm}(t)\sim t$, but for short times, in the presence of the PN force, there is a clear deviation from linearity. This result confirms the conclusion of the previous section, that the PN force is the cause of the deviation from linearity of $\Delta x_{cm}(t)$ at short times observed in Ref.~\cite{trautt}. Next we characterize the morphology of the GB through the width $W^2(t)$. In Fig.~\ref{graphic2}, $W^2(t)$ for $N=32,64,128,256,512$ is displayed with and without the PN force in log-log scale. In the absence of the PN force (Fig.~\ref{graphic2}a) the time dependence of $W^2(t)$ is qualitatively similar to the same case with the linearized PK force discussed in the next section, while in the presence of the PN force, for $A_{PN}=0.4$ (Fig.~\ref{graphic2}b), $W^2(t)$ exhibits a plateau at intermediate times.
\section{Continuum Theory} In the continuum limit ($a\rightarrow 0$, $N\rightarrow\infty$ and $L=Na=const.$) it is possible to derive analytic expressions for $W^2(t)$ at short and long times in the absence of the PN force, by linearizing the PK force. The equation of motion for $F_{PN}=0$ and linearized $F_{PK}$ is \begin{equation}\label{continuum_GB} \dot{x}_n(t)=-\frac{\mu b^2}{2\pi(1-\nu)\gamma}\sum_{m=1}^{N}\frac{x_n-x_m}{(y_n-y_m)^2}+\eta_n(t) . \end{equation} To obtain the short-time behavior it is sufficient to rewrite Eq.~\ref{continuum_GB} as a generalized Ornstein-Uhlenbeck process \cite{risken} \begin{equation}\label{O-U} \dot{x}_n(t)=\sum_{m=1}^{N}g_{nm}x_m + \eta_n(t) . \end{equation} The general solution of Eq.~\ref{O-U} is \begin{equation} x_n(t)=\int_{0}^{t}\sum_{m=1}^{N}G_{nm}(t')\eta_m(t-t')dt' , \end{equation} with $\{G_{nm}(t)\}=\hat{G}(t)=e^{\hat{g}t}=\mathbbm{1}+\hat{g}t+\hat{g}^2t^2/2+...$ (where $\mathbbm{1}=\{\delta_{ij}\}$). From the definition of $W^2(t)$ it follows that \begin{equation}\label{W2} W^2(t)=\frac{D}{N}\sum_{n,m=1}^{N}\int_{0}^{t}G_{nm}^2(t')dt'-\frac{D}{N^2} \sum_{n,m,l=1}^{N}\int_{0}^{t}G_{nm}(t')G_{lm}(t')dt' . \end{equation} Inserting the Taylor expansion of the matrix $\hat{G}$ in Eq.~\ref{W2}, one obtains for short times ($t\ll1/\|\hat{g}\|$) that $W^2(t)=(1-1/N)Dt+o(t)$, and in the continuum limit $W^2(t)=Dt+o(t)$. To obtain the long-time behavior of $W^2(t)$, we rewrite Eq.~\ref{continuum_GB} in Fourier space \cite{moretti,chui}. Employing the decomposition $x_m=1/L\sum_k e^{-ikam}x_k$, we obtain \begin{equation}\label{continuum_fourier} \fl \dot{x}_k=-\frac{\mu b^2}{2\pi(1-\nu)\gamma a^2L} \sum_{m=-\infty}^{\infty}e^{ikam} \sum_{n=-\infty}^{\infty} \frac{\sum_{k'}(e^{-ik'am}-e^{-ik'an})x_{k'}}{(m-n)^2}+\eta_k .
\end{equation} The first term on the right-hand side of Eq.~\ref{continuum_fourier} can be rewritten as \begin{equation} -\frac{\mu b^2}{2\pi(1-\nu)\gamma a^2L}\sum_{k'}x_{k'}\sum_{m=-\infty}^{\infty}e^{i(k-k')am} (\sum_{d=-\infty}^{\infty}\frac{1-e^{ik'ad}}{d^2}) , \end{equation} where $d=m-n$. Using the results \begin{equation} \sum_{d=1}^{\infty}\frac{1}{d^2}=\frac{\pi^2}{6} , \ \ \ \ \sum_{d=1}^{\infty} \frac{\cos(cd)}{d^2}=\frac{\pi^2}{6}-\frac{\pi|c|}{2}+\frac{c^2}{4} , \end{equation} we obtain \begin{equation} \sum_{d=-\infty}^{\infty}\frac{1-e^{ik'ad}}{d^2}=2\sum_{d=1}^{\infty}\frac{1-\cos(k'ad)}{d^2} =\pi|k'|a-\frac{k'^2a^2}{2} , \end{equation} so that Eq.~\ref{continuum_fourier} becomes \begin{equation} \dot{x}_k=-\frac{\mu b^2}{2\pi(1-\nu)\gamma a^2}(\pi|k|-\frac{k^2a}{2})x_k+\eta_k . \end{equation} In the long-time limit (long wavelengths, small $k$) the $k^2$ term can be neglected. Finally, in the continuum limit we replace $y_n$, $y_m$ by the continuum variables $y$, $y'$ and set $\langle\eta(y,t)\eta(y',t')\rangle=aD\delta(y-y')\delta(t-t')$. \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure7.eps} \end{center} \caption{Comparison between the mean-square width $W^2(t)$ for linearized $F_{PK}$ and $F_{PN}=0$ in the case of $N=32$, computed by numerical simulation, and the continuum theoretical predictions in the short- and long-time limits. The black line represents the simulated data, the green line the long-time theoretical prediction with fitted parameters and the blue line the short-time theoretical prediction.} \label{graphic6} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[clip=true,width=10cm]{figure8.eps} \end{center} \caption{Size dependence of the saturation value of the mean-square width $W^2_s$, with logarithmic scale on the abscissa (x-axis).
As can be seen in the figure, in the case in which the PN force is absent ($A_{PN}=0$) the relation is logarithmic ($W^2_s\sim\log N$), while in the presence of the PN force ($A_{PN}\ne 0$) a slight deviation from the logarithmic dependence is observed.} \label{graphic7} \end{figure} Thus Eq.~\ref{continuum_fourier} becomes \begin{equation}\label{k} \dot{x}_k=-\alpha|k|x_k+\eta_k , \end{equation} where $\alpha=\mu b^2/[2(1-\nu)\gamma a^2]$. Eq.~\ref{k} can be solved exactly, and $W^2(t)$ is given by \cite{krug,krug2} \begin{equation}\label{W_exact} W^2(t)=\frac{aD}{2\pi\alpha}[\ln(\frac{L}{a})+\ln(1-e^{-4\pi\alpha t/L})] . \end{equation} To reproduce the continuum limit by simulations of Eq.~\ref{continuum_GB} we would need a very large GB, with $N\gg 512$. Thus a comparison between Eq.~\ref{W_exact} and the simulations for small $N$ is possible only by introducing effective parameters in Eq.~\ref{W_exact}. In Fig.~\ref{graphic6}, $W^2(t)$ computed by the simulations with $N=32$ (for which the best statistics are available) is compared with the fitted theoretical predictions for short and long times. Eq.~\ref{W_exact} also predicts that the saturation value of the width ($W^2_s$) exhibits a logarithmic dependence on the GB length $L=Na$: $W^2_s\sim\log N$. This result is also confirmed by numerical simulations in the case in which $F_{PK}$ is not linearized, showing that $W_s^2$ increases logarithmically with $N$ when $F_{PN}=0$ (see Fig.~\ref{graphic7}). In the presence of a periodic potential ($F_{PN}> 0$), however, we observe a deviation from the logarithmic growth at large $N$. This suggests that the Peierls-Nabarro potential may set a limit to the GB roughness. \section{Summary and Discussion} We have investigated the diffusion of a regularly spaced low-angle grain boundary in a crystalline material.
A typical computational method to describe the dynamics of the grain boundary is to perform deterministic molecular dynamics simulations with appropriate interatomic interactions \cite{trautt}. Here we have employed the over-damped Langevin approach to obtain a long-time description of the dynamics and, in particular, to perform a comparison with molecular dynamics simulations for a specific material \cite{trautt}. The first result is the interpretation of the early-time behavior of the mean-square displacement $\Delta x_{cm}(t)$. The deviation of $\Delta x_{cm}(t)$ at early times from the renormalized Brownian motion, which holds for long times, can be interpreted as the effect on the dislocations of the periodicity of the lattice, which gives rise to the Peierls-Nabarro potential. Secondly, the description of the dynamics ($\Delta x_{cm}(t)$) and the morphology ($W^2(t)$) of the grain boundary by means of over-damped Langevin equations is in good qualitative agreement with the behavior of real materials, so this approach can be considered a useful tool for these studies. \ack We thank P. Moretti for useful discussions. \section*{References}
\section{Introduction} Segmentation of organs or abnormal regions is a fundamental task in clinical applications, such as diagnosis, intervention and treatment planning. Deep learning techniques are driving progress in automating the segmentation task under the full-supervision paradigm \cite{milletari2016v,cciccek20163d}. Training these models, however, relies on a large amount of pixel-level annotations, which require expensive clinical expertise \cite{cheplygina2019not}. Semi-supervised learning (SSL) techniques alleviate the annotation scarcity by leveraging unlabeled data together with a small amount of labeled data. Current semi-supervised segmentation methods typically utilize the unlabeled data either in the form of pseudo labels \cite{bai2017semi,zheng2020cartilage}, regularization \cite{nie2018asdnet,cui2019semi,peng2020deep} or knowledge priors \cite{zheng2019semi,he2020dense}. For instance, self-training methods \cite{bai2017semi} generate pseudo labels from unlabeled data, which are used to retrain the network iteratively. A wide range of regularization-based methods has been explored for semi-supervised segmentation using adversarial learning \cite{nie2018asdnet,chaitanya2019semi}, consistency learning \cite{bortsova2019semi,yu2019uncertainty,li2020transformation,luo2021semi}, or co-training \cite{peng2020deep,xia20203d,wang2021self}. Adversarial methods encourage the segmentations of unlabeled images to be closer to those of the labeled images. In contrast, consistency and co-training methods encourage two or more segmentation predictions, either from the same or different networks, to be consistent under different perturbations of the input data. Such consistency-based methods are popular in semi-supervised learning due to their simplicity.
Consequently, self-ensembling \cite{laine2016temporal} and mean teacher-based \cite{tarvainen2017mean} methods are often used in semi-supervised segmentation of medical images \cite{cui2019semi,bortsova2019semi,li2020transformation}. However, the predictions they generate from unlabeled images may not always be reliable. To alleviate this issue, uncertainty-aware regularization methods \cite{yu2019uncertainty,sedai2019uncertainty,wang2020double,wang2021tripled,luo2021efficient} have been proposed to gradually add reliable target regions of the predictions. This uncertainty scheme is also employed in co-training \cite{xia20203d} and self-training \cite{zheng2020cartilage} approaches to obtain reliable predictions. Although these methods perform well in low-labeled data regimes, their high computational cost and complex training procedures might limit their applicability in practice. For instance, the uncertainty estimation is approximated via Monte-Carlo Dropout \cite{gal2016dropout} or an ensemble, which requires multiple predictions per image. Co-training methods require two or more networks to be trained simultaneously, whereas self-training-based methods rely on costly iterations. Lastly, adversarial training is challenging in terms of convergence \cite{salimans2016improved}. Prior-based methods in semi-supervised segmentation typically incorporate anatomical knowledge of the target object into the training of the model. For instance, He \textit{et al.} \cite{he2020dense} encode the unlabeled images with an autoencoder and combine the learnt features as prior knowledge in the segmentation networks. Recent attempts use signed distance maps (SDM) as shape constraints during training \cite{li2020shape,xue2020shape,wang2021tripled}. For instance, Li \textit{et al.} \cite{li2020shape} propose predicting the SDM as an additional task and enforcing consistency with an adversarial loss.
Zheng \textit{et al.} \cite{zheng2019semi} exploit a probabilistic atlas in their loss function. These knowledge-based methods require either an additional task to constrain the shape prior or aligned images. These limitations motivate our approach, which leverages a learnt labeling representation to approximate the uncertainty. Our main idea is to mimic a shape prior by learning a representation from segmentation masks, such that each prediction is mapped into a set of plausible segmentations. In contrast to \cite{zheng2019semi}, our approach does not require aligned images. The mapped segmentation is subsequently used to estimate the uncertainty maps that guide the segmentation network. We hypothesize that the proposed uncertainty estimates are more robust than those derived from the entropy variance, which requires a multiple-inference strategy. \paragraph{\bf Our contributions.} We propose a novel way to estimate the pixel-wise uncertainty to guide the training of a segmentation model. In particular, we integrate a pre-trained denoising autoencoder (DAE) into the training, whose goal is to leverage a learnt labeling representation on the unlabeled data. The DAE maps the segmentation predictions into a set of plausible segmentation masks. Then, we approximate the uncertainty by computing the pixel-wise difference between a predicted segmentation and its DAE reconstruction. In contrast to commonly used uncertainty-based approaches, our uncertainty map needs a single inference from the DAE model, reducing the computational complexity. Our method is extensively evaluated on the 2018 Atrial Segmentation Challenge dataset \cite{xiong2021global}. The results demonstrate the superiority of our approach over the state-of-the-art. \begin{figure*}[t!] \centering \includegraphics[width=0.975\linewidth]{Arch_v2.pdf} \caption{Overview of our uncertainty estimation from labeling representation for semi-supervised segmentation.
A pre-trained labeling representation (DAE) is integrated into the training of the mean teacher method; it maps the teacher predictions $p^t$ into plausible segmentations $\hat{p}^t$. The uncertainty map ($U$) is subsequently estimated from the teacher and DAE predictions, guiding the student model.} \label{fig:arch} \end{figure*} \section{Method} \label{sec:methods} The schematic of the proposed labeling representation-based uncertainty estimation is shown in Fig.~\ref{fig:arch}. The main idea is to exploit a labeling representation that maps the predictions of the segmentation network into a set of plausible masks. The reconstructed segmentations are subsequently employed to estimate an uncertainty map. Following the current literature \cite{yu2019uncertainty}, we adopt a mean teacher approach to train the segmentation network. These steps are detailed next. \subsection{Mean Teacher Formulation} The standard semi-supervised setting consists of $N$ labeled and $M$ unlabeled samples in the training set, where $N \ll M$. Let $D_L = \{(x_i, y_i)\}_{i=1}^N$ and $D_U = \{(x_i)\}_{i=N+1}^{N+M}$ denote the labeled and unlabeled sets, where an input volume is represented as $x_i \in R^{H \times W \times D}$ and its corresponding segmentation mask is $y_i \in \{0,1,...,C\}^{H \times W \times D}$, with $C$ being the number of classes. We use the common mean teacher approach for semi-supervised segmentation, which consists of a student ($S$) and a teacher ($T$) model, both having the same segmentation architecture. The overall objective function is defined as follows: \begin{equation} \label{eq:learner} \mathcal{L} = \underset{\theta_s}{\text{min}} \sum_{i=1}^N \mathcal{L}_s(f(x_i; \theta_s), y_i) + \lambda_c \sum_{i=1}^{N+M} \mathcal{L}_c(f(x_i; \theta_s, \eta), f(x_i; \theta_t, \eta{'} )), \end{equation} where $f(\cdot)$ denotes the segmentation network, and $\theta_s$ and $\theta_t$ are the learnable weights of the student and teacher models.
The supervised loss $\mathcal{L}_s$ measures the segmentation quality on the labeled data, whereas the consistency loss $\mathcal{L}_c$ measures the prediction consistency of the student and teacher models for the same input volume $x_i$ under different perturbations ($\eta$, $\eta{'}$). The balance between the supervised and unsupervised losses is controlled by a ramp-up weighting coefficient $\lambda_c$. In the mean teacher training, the student model parameters are optimized with stochastic gradient descent (SGD), whereas an exponential moving average (EMA) is employed at each training step $t$, i.e., $\theta_t = \alpha \theta_{t-1} + (1-\alpha) \theta_s$, to update the teacher model parameters. Note that $\alpha$ is the smoothing coefficient of the EMA that controls the update rate. \subsection{Labeling Representation Prior} Incorporating an object shape prior into deep segmentation models is not straightforward. One of the reasons is that, in order to integrate such prior knowledge during training, one needs to augment the learning objective with a differentiable term, which in the case of complex shapes is not trivial. To circumvent these difficulties, a simpler solution is to resort to an autoencoder trained with pixel-wise labels, which can represent anatomical priors and be used as a global regularizer during training. This strategy has been adopted for fully supervised training in \cite{oktay2017anatomically} and as a post-processing step in \cite{larrazabal2020post} to correct segmentation predictions. Motivated by this, we represent the available labels in a non-linear latent space using a denoising autoencoder (DAE) \cite{vincent2010stacked}, which mimics a shape prior. The DAE model consists of an encoder $f_e(\cdot)$ and a decoder module $f_d(\cdot)$ with a $d$-dimensional latent space, as shown in Fig.~\ref{fig:arch}.
The DAE is trained to reconstruct the clean labels $y_i$ from their corrupted versions $\tilde{y}_i$, which can be achieved with a mean squared error loss: $\frac{1}{H \times W \times D}\sum_v ||f_d(f_e(\tilde{y}_{i}))_v - y_{i,v}||^2$. \subsection{Uncertainty from a Labeling Representation} The role of the uncertainty is to gradually update the student model with reliable target regions from the teacher predictions. Our proposed method estimates the uncertainty directly from the labeling representation network $f_d(f_e(\cdot))$, requiring only one inference step. First, we map the prediction of the teacher model $p^t_i$ with the DAE model to produce a plausible segmentation $\hat{p}^t_i$. We subsequently estimate the uncertainty as the pixel-wise difference between the DAE output and the prediction, i.e., $U_i = ||\hat{p}^t_i - p^t_i||^2$. Then, the reliable target weight for the consistency loss is obtained as $e^{-\gamma U_i}$, similarly to \cite{luo2021efficient}, where $\gamma$ is an uncertainty weighting factor empirically set to 1. Finally, our consistency loss is defined as: \begin{equation} \mathcal{L}_c(p^s_i, p^t_i) = \frac{\sum_v e^{-\gamma U_{i,v}} ||p^s_{i,v} - p^t_{i,v}||^2}{\sum_v e^{-\gamma U_{i,v}}} , \end{equation} where $v$ denotes a voxel. We jointly optimize the consistency loss $\mathcal{L}_c$ and the supervised loss $\mathcal{L}_s$, where $\mathcal{L}_s$ uses the cross-entropy and Dice losses. \section{Results} Our proposed method is compared with state-of-the-art semi-supervised segmentation methods \cite{yu2019uncertainty,li2020shape,luo2021semi,luo2021efficient} \footnote{\small{We use the official implementations provided by the baseline methods to run the experiments.}}. We group the uncertainty-based methods to assess the effectiveness of our uncertainty estimation for segmentation. For a fair comparison, all experiments are run three times with a fixed set of seeds on the same machines, and their average results are reported.
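Before turning to the results, the uncertainty estimate and the weighted consistency loss just described can be sketched in a few lines. The sketch below operates on flattened voxel probabilities as plain Python lists for clarity; in practice these would be tensor operations on the network outputs.

```python
import math

def dae_uncertainty(p_teacher, p_dae):
    """Voxel-wise uncertainty U_v = (p_hat_v - p_v)^2 between the teacher
    prediction and its DAE reconstruction."""
    return [(q - p) ** 2 for p, q in zip(p_dae, p_teacher)]

def weighted_consistency(p_student, p_teacher, p_dae, gamma=1.0):
    """Uncertainty-weighted consistency loss:
    sum_v exp(-gamma U_v) (p^s_v - p^t_v)^2 / sum_v exp(-gamma U_v)."""
    u = dae_uncertainty(p_teacher, p_dae)
    w = [math.exp(-gamma * uv) for uv in u]
    num = sum(wv * (s - t) ** 2 for wv, s, t in zip(w, p_student, p_teacher))
    return num / sum(w)
```

When the DAE reconstruction agrees with the teacher everywhere, the weights are uniform and the loss reduces to a plain mean squared difference between student and teacher.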
\paragraph{\bf Dataset and Evaluation Metrics.} Our method is evaluated on the Left Atrium (LA) dataset from the 2018 Atrial Segmentation Challenge \cite{xiong2021global}. The dataset consists of 100 3D MR volumes of the LA with an isotropic resolution of 0.625$mm^3$ and corresponding segmentation masks. In our experiments, we use an 80/20 training/testing split and apply the same preprocessing as in \cite{yu2019uncertainty,li2020shape,luo2021semi}. The training set is partitioned into $N$/$M$ labeled/unlabeled splits, fixed across all methods for each setting. We employ the Dice Similarity Coefficient (DSC) and the 95\% Hausdorff Distance (HD) to assess segmentation performance. \paragraph{\bf Implementation and Training details.} Following \cite{yu2019uncertainty,li2020shape,luo2021semi}, we use V-net \cite{milletari2016v} as the backbone architecture for the teacher, student and DAE models. For the DAE model, the skip connections are removed and a dense layer is added at the bottleneck. The student model is trained by an SGD optimizer with an initial learning rate ($lr$) of 0.1 and momentum 0.9 for 6000 iterations, with cosine annealing \cite{loshchilov2016sgdr} decay. The teacher weights are updated by an EMA with an update rate of $\alpha$=0.99, as in \cite{tarvainen2017mean}. The consistency weight is updated with a Gaussian warm-up function $\lambda_c = \beta * e^{-5(1-t/t_{max})^2}$, where $t$ and $t_{max}$ denote the current and maximum training iterations, and $\beta$ is set to 0.1, as in \cite{yu2019uncertainty}. The DAE model is also trained with SGD with $lr$=0.1 and momentum 0.9, dividing the $lr$ by 2 every 5000 iterations. Inputs to both the segmentation and DAE networks are randomly cropped to size $112 \times 112 \times 80$, and standard online data augmentation techniques, such as random flipping and rotation, are employed.
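The EMA teacher update and the Gaussian warm-up of the consistency weight described above are one-liners; the following sketch uses flat parameter lists (a deep-learning framework would apply the same update to tensors).

```python
import math

def ema_update(theta_teacher, theta_student, alpha=0.99):
    """theta_t <- alpha * theta_t + (1 - alpha) * theta_s."""
    return [alpha * wt + (1.0 - alpha) * ws
            for wt, ws in zip(theta_teacher, theta_student)]

def consistency_weight(t, t_max, beta=0.1):
    """Gaussian warm-up: lambda_c = beta * exp(-5 (1 - t/t_max)^2)."""
    return beta * math.exp(-5.0 * (1.0 - t / t_max) ** 2)
```

The warm-up keeps the unsupervised term negligible early in training, when the teacher is still unreliable, and ramps it up to $\beta$ at $t=t_{max}$.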
In addition, the input labels to the DAE model are corrupted with random swapping of pixels around class boundaries, morphological operations (erosion and dilation), resizing, and adding/removing shapes. The batch size is set to 4 in both networks. Each batch for the segmentation network contains two labeled and two unlabeled volumes. At test time, segmentation predictions are generated with a sliding-window strategy, and the method is evaluated at the last iteration, as in \cite{yu2019uncertainty}. Our experiments were run on an NVIDIA RTX A6000 GPU with PyTorch 1.8.0+cu111. \paragraph{\bf Comparison with the state-of-the-art.} We now compare our method with relevant semi-supervised segmentation approaches under the 10\% and 20\% labeled data settings and report the results in Tables~\ref{table:10results}-\ref{table:20results}. Non-uncertainty-based methods, such as MT \cite{tarvainen2017mean}, DCT \cite{luo2021semi}, and SASSnet \cite{li2020shape}, are grouped in the middle of each table, while the uncertainty-based methods UAMT \cite{yu2019uncertainty}, URPC \cite{luo2021efficient} \footnote{Note that URPC \cite{luo2021efficient} uses a multi-scale 3D U-Net \cite{cciccek20163d} architecture.} and our method are grouped at the bottom. The upper and lower bounds from the backbone architecture V-net \cite{milletari2016v} are reported at the top. In the first setting, 10\% of the training data is labeled, and the remaining images are used as unlabeled data. From Table~\ref{table:10results}, we can observe that leveraging unlabeled data improves the lower bound for all baselines. The uncertainty-based baselines improve the segmentation performance by 1\% in Dice score compared to non-uncertainty-based baselines. However, their performance degrades in terms of HD by up to 5mm. Among the baseline methods, UAMT and DCT achieve the best Dice and HD scores, respectively. Compared to these best-performing baselines, our method brings 1.5\% and 0.8mm improvements in Dice and HD scores.
Moreover, uncertainty estimation in our method requires a single inference from the labeling representation, whereas UAMT uses $K$=8 inferences per training step to obtain an uncertainty map. Furthermore, we also validate our method in the 20\% labeled data scenario, whose results are reported in Table~\ref{table:20results}. The results demonstrate a similar trend to the 10\% experiments. The uncertainty-based baselines improve by 1\% in terms of Dice, while their HD degrades by up to 1mm, compared to non-uncertainty-based methods. Our method improves over the best-performing baseline in both Dice and HD scores. In particular, our method improves the HD score by 2.5mm compared to the best-performing baseline (SASSnet). Visual results of the different segmentation methods are depicted in Fig.~\ref{fig:seg_results}. In the top row of the figure, SASSnet produces holes in the segmentation; their method employs a post-processing tool to improve the segmentation, which is avoided here for a fair comparison. DCT captures the challenging top-right region in its segmentation; however, the prediction is under-segmented and noisy. Among the uncertainty-based methods, UAMT improves the segmentation, and URPC produces smooth segmentation boundaries. Our method improves the segmented region further compared to URPC. In the case of the 20\% labeled data experiments, all methods improve the segmentation due to having access to more labels during training, while the boundary regions are either under- or over-segmented. Our method produces better and smoother segmentations, which can be attributed to the knowledge derived from the labeling representation. \begin{table}[h!] \centering \addtolength{\tabcolsep}{9pt} \caption{Segmentation results on the LA test set for 10\% labeled data experiments averaged over three runs.
Uncertainty-based methods, with $K$ the number of inferences, are grouped at the bottom; $K$ = `-' indicates non-uncertainty methods.} \scalebox{0.9}{ \begin{tabular}{l | c | c | c c} \toprule \bf Methods & \bf \#K & \bf $N$/$M$ & \textbf{DSC (\%)} & \textbf{HD (mm)} \\ \midrule Upper bound & - & 80/0 & 91.23 $\pm$ 0.44 & 6.08 $\pm$ 1.84 \\ Lower bound & - & 8/0 & 76.07 $\pm$ 5.02 & 28.75 $\pm$ 0.72 \\ \midrule MT \cite{tarvainen2017mean} & - & 8/72 & 78.22 $\pm$ 6.89 & 16.74 $\pm$ 4.80 \\ SASSnet \cite{li2020shape} & - & 8/72 & 83.70 $\pm$ 1.48 & 16.90 $\pm$ 1.35 \\ DCT \cite{luo2021semi} & - & 8/72 & 83.10 $\pm$ 0.26 & 12.62 $\pm$ 1.44 \\ \midrule UAMT \cite{yu2019uncertainty} & 8 & 8/72 & 85.09 $\pm$ 1.42 & 18.34 $\pm$ 2.80 \\ URPC \cite{luo2021efficient} & 1 & 8/72 & 84.47 $\pm$ 0.31 & 17.11 $\pm$ 0.60 \\ Ours & 1 & 8/72 & \bf 86.58 $\pm$ 1.03 & \bf 11.82 $\pm$ 1.42 \\ \bottomrule \end{tabular}} \label{table:10results} \end{table} \begin{table}[h!] \centering \addtolength{\tabcolsep}{9pt} \caption{Segmentation results on the LA test set for 20\% labeled data experiments averaged over three runs.
Uncertainty-based methods, with $K$ the number of inferences, are grouped at the bottom; $K$ = `-' indicates non-uncertainty methods.} \scalebox{0.9}{ \begin{tabular}{l | c | c | c c} \toprule \bf Methods & \bf \#K & \bf $N$/$M$ & \textbf{DSC (\%)} & \textbf{HD (mm)}\\ \midrule Upper bound & - & 80/0 & 91.23 $\pm$ 0.44 & 6.08 $\pm$ 1.84 \\ Lower bound & - & 16/0 & 81.46 $\pm$ 2.96 & 23.61 $\pm$ 4.94 \\ \midrule MT \cite{tarvainen2017mean} & - & 16/64 & 86.06 $\pm$ 0.81 & 11.63 $\pm$ 3.4 \\ SASSnet \cite{li2020shape} & - & 16/64 & 87.81 $\pm$ 1.45 & 10.18 $\pm$ 0.55 \\ DCT \cite{luo2021semi} & - & 16/64 & 87.35 $\pm$ 1.26 & 10.25 $\pm$ 2.49 \\ \midrule UAMT \cite{yu2019uncertainty} & 8 & 16/64 & 87.78 $\pm$ 1.03 & 11.1 $\pm$ 1.91 \\ URPC \cite{luo2021efficient} & 1 & 16/64 & 88.58 $\pm$ 0.10 & 13.1 $\pm$ 0.60 \\ Ours & 1 & 16/64 & \bf 88.60 $\pm$ 0.82 & \bf 7.61 $\pm$ 0.78 \\ \bottomrule \end{tabular}} \label{table:20results} \end{table} \begin{figure*}[t!] \centering \includegraphics[width=0.9\linewidth]{Segmentation_results_v2.pdf} \caption{\textbf{Qualitative comparison under the 10\% and 20\% annotation settings.} DSC (\%) and HD (mm) scores are given at the top of each image. Predictions are shown in red and ground truth in blue.} \label{fig:seg_results} \end{figure*} \paragraph{\bf Ablation Study.} To validate the effectiveness of our uncertainty estimation on segmentation performance, two experiments are conducted by adopting the thresholding strategy and the entropy scheme from UAMT. In particular, the thresholding strategy is used in the consistency loss, whereas entropy is used to estimate the uncertainty; the results are reported in Table~\ref{table:ablation}. Compared to UAMT, our threshold and entropy experiments significantly improve the segmentation performance in HD and Dice scores, while our proposed method (L2-based exponential uncertainty) achieves the best performance. These results show the merit of our labeling representation for uncertainty estimation.
Furthermore, we report the ablation on the uncertainty weight $\gamma$ and the consistency weight $\beta$ in Table~\ref{table:ablation_beta_gamma}. The results demonstrate that $\gamma$=1 is best for our method, while with $\beta$=1 our method further improves the Dice and HD scores; however, we use $\beta$=0.1 in all experiments for a fair comparison. Overall, for most of the $\gamma$ and $\beta$ values, our method is consistently better than the UAMT baseline, demonstrating the robustness of our approach. \begin{table}[h!] \centering \caption{Effectiveness of our proposed uncertainty estimation on segmentation results.} \scalebox{0.9}{ \begin{tabular}{l | c | c c} \toprule \bf Methods & \bf $N$/$M$ & \textbf{DSC (\%)} & \textbf{HD (mm)} \\ \midrule UAMT \cite{yu2019uncertainty} & 8/72 & 85.09 $\pm$ 1.42 & 18.34 $\pm$ 2.80 \\ Ours (Threshold) & 8/72 & 85.39 $\pm$ 0.91 & 12.96 $\pm$ 3.05 \\ Ours (Entropy) & 8/72 & 85.92 $\pm$ 1.52 & \bf 11.16 $\pm$ 0.82 \\ Ours & 8/72 & \bf 86.58 $\pm$ 1.03 & 11.82 $\pm$ 1.42 \\ \bottomrule \end{tabular}} \label{table:ablation} \end{table} \begin{table}[h!]
\centering \addtolength{\tabcolsep}{2.0pt} \caption{Evaluating the $\gamma$ and $\beta$ values under the 10\% annotation setting.} \scalebox{0.9}{ \begin{tabular}{l c c | l c c} \toprule \bf $\gamma$, $\beta=0.1$ & \textbf{DSC (\%)} & \textbf{HD (mm)} & \bf $\beta$, $\gamma=1$ & \textbf{DSC (\%)} & \textbf{HD (mm)} \\ \midrule 0.1 & 85.30 $\pm$ 1.17 & 13.51 $\pm$ 2.66 & 0.01 & 84.89 $\pm$ 0.92 & 11.84 $\pm$ 2.79 \\ 0.5 & 85.28 $\pm$ 0.60 & 14.01 $\pm$ 4.44 & 0.05 & 85.88 $\pm$ 1.44 & 10.98 $\pm$ 1.85 \\ 1 & \bf 86.58 $\pm$ 1.03 & \bf 11.82 $\pm$ 1.42 & 0.1 & 86.58 $\pm$ 1.03 & 11.82 $\pm$ 1.42 \\ 2 & 85.84 $\pm$ 1.39 & 12.13 $\pm$ 3.43 & 0.5 & 86.54 $\pm$ 0.74 & 12.42 $\pm$ 1.31 \\ 5 & 84.87 $\pm$ 0.85 & 15.28 $\pm$ 1.76 & 1 & \bf 86.89 $\pm$ 0.6 & \bf 9.85 $\pm$ 0.82 \\ \bottomrule \end{tabular}} \label{table:ablation_beta_gamma} \end{table} \section{Conclusion} We presented a novel labeling representation-based uncertainty estimation for semi-supervised segmentation. Our method produces an uncertainty map from a labeling representation network, which guides the segmentation network toward reliable prediction regions, thereby achieving better segmentation results. The results demonstrate that the proposed method achieves the best performance compared to state-of-the-art baselines on left atrium segmentation from 3D MR volumes in two different settings. The ablation studies demonstrate the effectiveness and robustness of our uncertainty estimation compared to the entropy-based method. Our uncertainty estimation from a labeling representation can be adapted to a broader range of applications where obtaining a reliable prediction is crucial. \paragraph{\bf Acknowledgments:} This research work was partly funded by the Canada Research Chair on Shape Analysis in Medical Imaging, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Fonds de Recherche du Quebec (FQRNT). \bibliographystyle{splncs04}
\section{Introduction} In this paper we investigate a class of nonconvex and nonsmooth optimization problems, where the penalty is the composition of a nonsmooth nonconvex mapping with a linear operator and the smooth part is a least squares type term. Similar optimization problems, in the case where the operator inside the penalty coincides with the identity matrix, have attracted increasing attention due to their applications to sparsity of solutions, feature selection, and many other related fields, such as compressed sensing, signal processing, and machine learning (see e.g. \cite{7,14}). The convex nonsmooth case of the $\ell^1$ norm has gained great popularity and has been thoroughly studied. The convexity allows the formulation of efficient and globally convergent algorithms to find a numerical solution. Here we mention \cite{10,46}, where the basis pursuit and the Lasso problems were introduced to solve $\ell^1$ minimization problems. Recently, increased interest has arisen in nonconvex and nonsmooth penalties, such as the $\ell^\tau$ quasi-norm with $0 \le \tau < 1$ (see e.g. \cite{BRLORE,HIWU13,KI,KKR,LI,OHDOBRPO15}), the smoothly clipped absolute deviation (SCAD) \cite{FL,JJLR}, and the minimax concave penalty (MCP) \cite{CHZ,JJLR}. The nonconvexity has been shown to provide some advantages with respect to convex models. For example, it requires less data in order to recover the solution exactly (see e.g. \cite{CS,FLai,45}), and it tends to produce unbiased estimates for large coefficients \cite{51,FL,FP}. Note that all the previously mentioned works deal with the particular case where the operator coincides with the identity.
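For reference, the penalties mentioned above have simple closed forms. The sketch below uses the standard parametrizations of SCAD \cite{FL} (with $a>2$, commonly $a=3.7$) and MCP \cite{CHZ} (with $\gamma>1$); these are the usual conventions in the literature and are assumptions here, since the text does not restate the formulas.

```python
def scad(t, lam, a=3.7):
    """SCAD penalty: linear near 0, quadratic transition, constant beyond a*lam."""
    u = abs(t)
    if u <= lam:
        return lam * u
    if u <= a * lam:
        return (2.0 * a * lam * u - u * u - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0

def mcp(t, lam, gamma=2.0):
    """Minimax concave penalty: lam|t| - t^2/(2 gamma) up to gamma*lam, then constant."""
    u = abs(t)
    if u <= gamma * lam:
        return lam * u - u * u / (2.0 * gamma)
    return gamma * lam * lam / 2.0

def lp_quasi(t, tau):
    """The l^tau term |t|^tau, with 0 < tau < 1."""
    return abs(t) ** tau
```

All three are concave and nondecreasing in $|t|$; SCAD and MCP are constant for large $|t|$, which is what removes the bias on large coefficients mentioned above.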
Nonconvex optimization problems of the type we consider, where the operator inside the penalty is different from the identity, arise also in the modelling of cohesive fractures in continuum mechanics, where the concavity of the penalty is crucial to model the evolution of the fracture energy released as the crack opening grows. Here the operator is important to model the jump of the displacement between the two lips of the fracture. We refer to \cite{PI13,GK,A1,A2} and subsection \ref{subsecnum} for more details. The study of these problems for nonconvex penalties, including the $\ell^\tau$ quasi-norm with $0<\tau<1$, the SCAD and the MCP functionals, and for linear operators not necessarily coinciding with the identity, is also motivated by applications different from those arising in fracture mechanics. For example, in imaging the $\ell^\tau$ quasi-norm, with $0<\tau<1$, of the numerical gradient of the solution has been proposed as a nonconvex extension of the total variation (TV-like) regularizer (see e.g. \cite{HIWU13,OHDOBRPO15}) in order to reconstruct piecewise smooth solutions. The SCAD and the MCP penalties have been used for high-dimensional regression and variable selection in high-throughput biomedical studies \cite{BRHU}. We mention also that the SCAD has been proposed as a nonconvex penalty in network estimation to attenuate the bias problem \cite{FANFENG}. The main difficulties in the analysis of these problems come from the interplay between the nonsmoothness, the nonconvexity, and the coupling between coordinates described by the operator inside the penalty. Since standard algorithms are not readily available, the resolution of these problems requires the development of new analytical and numerical techniques. In the present paper we propose a monotonically convergent algorithm to solve this kind of problem.
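Abstractly, the problems considered have the form $J(x)=\tfrac12\|Ax-b\|^2+\lambda\sum_i\phi(|(\Lambda x)_i|)$ with a concave $\phi$ and a linear operator $\Lambda$, e.g. a finite-difference operator for the TV-like penalties or a jump operator in the fracture models. A minimal sketch for evaluating such an objective is given below; the names and the dense-matrix representation are illustrative choices, not the paper's notation.

```python
import math

def matvec(mat, v):
    """Dense matrix-vector product (matrix given as a list of rows)."""
    return [sum(m * x for m, x in zip(row, v)) for row in mat]

def objective(a_mat, b, lam_op, x, lam, phi):
    """J(x) = 0.5 ||A x - b||^2 + lam * sum_i phi(|(Lambda x)_i|)."""
    r = [ri - bi for ri, bi in zip(matvec(a_mat, x), b)]
    data = 0.5 * sum(ri * ri for ri in r)
    penalty = lam * sum(phi(abs(z)) for z in matvec(lam_op, x))
    return data + penalty
```

For instance, with \texttt{phi = math.sqrt} (i.e. $\tau=1/2$) and \texttt{lam\_op} a first-difference matrix, this is the nonconvex TV-like model mentioned above.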
This is an iterative procedure which solves the necessary optimality condition of a regularized version of the original problem. A remarkable property of our scheme is the strict monotonicity of the functional along the sequence of iterates. The convergence of the iteration procedure is proved under the same assumptions that guarantee existence of solutions. The performance of the scheme is successfully tested by simulating the evolution of cohesive fractures for several different test configurations. We then turn to an issue of high relevance, namely the comparison with two alternative algorithms: GIST (``General Iterative Shrinkage and Thresholding'') for $\ell^\tau$ minimization with $0<\tau<1$, and FISTA (``Fast Iterative Shrinkage-Thresholding Algorithm'') for $\ell^1$ minimization. The comparison is carried out with respect to the infimal value reached by the iteration procedure and with respect to computing time. Our results show that the monotone algorithm reaches a smaller value of the objective functional than GIST. Note that, differently from GIST, the monotone scheme solves a system of nonlinear equations at each iteration level. We remark that in \cite{LLSZ} GIST was compared with IRLS (``iteratively reweighted least squares''), another popular scheme for $\ell^\tau$ minimization with $0<\tau<1$. The results of \cite{LLSZ} show that GIST and IRLS have nearly the same performance, the only difference being speed, with GIST the faster of the two. A procedure analogous to the one proposed in the present paper was developed in \cite{GK} to solve similar problems where the nonconvex penalty coincides with the $\ell^\tau$ quasi-norm, $0<\tau\leq 1$. With respect to \cite{GK}, in the present paper we deal with more general concave penalties.
Moreover, we carry out several numerical experiments for diverse situations in cohesive fracture mechanics, comparing the behaviour of different concave penalties, namely the SCAD, the MCP and the $\ell^\tau$ penalty with $0<\tau<1$. Finally, we compare the performance of the scheme with that of GIST. Let us recall some further literature concerning nonconvex nonsmooth optimization of the type investigated in the present paper. In \cite{JJLR,J} a primal-dual active set type algorithm was developed for the case where the operator inside the penalty coincides with the identity; for more references on this case we refer to \cite{GK}. Concerning $\ell^\tau$ minimization, $0\leq\tau\leq 1$, when the operator is not the identity, other techniques have recently been investigated. Here we mention iteratively reweighted convex majorization algorithms \cite{OHDOBRPO15}, the alternating direction method of multipliers (ADMM) \cite{LI} and a Newton-type solution algorithm for a regularized version of the original problem \cite{HIWU13}. Finally we recall the paper \cite{A1}, where a novel algorithm for nonsmooth nonconvex optimization with linear constraints is proposed, consisting of a generalization of the well-known nonstationary augmented Lagrangian method for convex optimization. Convergence to critical points is proved and several tests are made for free-discontinuity variational models, such as the Mumford-Shah functional. The nonsmoothness considered in \cite{A1} does not allow for singular behaviour of the type exhibited by the $\ell^\tau$ term with $0\leq\tau<1$. The paper is structured as follows.
In Section \ref{existence}, subsection \ref{subsecass}, we state the precise assumptions; in subsection \ref{subsecex} we prove existence for the problem under consideration; in subsection \ref{subsecmon} we propose the monotone scheme to solve a regularized version of the original problem and prove its convergence; finally, in subsection \ref{subsecas} we study the asymptotic behaviour as the concavity and regularization parameters go to zero. In Section \ref{alg} we present the precise form of our scheme. In subsection \ref{subsecnum} we discuss our numerical experience for cohesive fracture evolution, and in subsection \ref{compg} we compare the performance of our scheme to that of GIST for three different test cases: the academic M-matrix example, an optimal control problem and a microscopy imaging example. \section{Existence and monotone algorithm}\label{existence} \subsection{Assumptions}\label{subsecass} We consider \begin{equation}\label{optprobphi} \min_{x \in {\mathbb R}^n}J(x)=\frac{1}{2}|A x-b|_2^2+\sum_{i=1}^r\phi(\Lambda x)_i, \end{equation} where $A \in \mathbb{M}^{m\times n}$, $\Lambda \in \mathbb{M}^{r\times n},$ $b \in {\mathbb R}^m$ and $\phi: {\mathbb R} \rightarrow {\mathbb R}^+$ satisfies \[ {\bf (H)} \begin{cases} \text{(i)}\; &\phi \mbox{ is even with } \phi(0)=0, \mbox{ nondecreasing for } t\geq 0 \mbox{ and continuous};\\ \text{(ii)}\; &\phi \mbox{ is differentiable on } ]0,\infty[;\\ \text{(iii)}\; &\phi \mbox{ is concave on } {\mathbb R}^+;\\ \text{(iv)}\; & \mbox{there exists a neighbourhood of zero where the function } t \rightarrow \frac{\phi'(t)}{t} \mbox{ is monotone.}\\ \end{cases} \] In (iv) both monotonically increasing and monotonically decreasing functions are admitted. Throughout the rest of the paper we will use the notation $$ \Phi(\Lambda x):=\sum_{i=1}^r\phi(\Lambda x)_i.
$$ Under assumption ${\bf (H)}$, the following two cases are analysed: \begin{itemize} \item[(a)] \begin{itemize} \item[(i)] $\phi(t)$ is constant for $|t|\geq t_0$, for some $t_0>0$; \item[(ii)] $A$ is coercive, i.e. $\mbox{rank}(A)=n$. \end{itemize} \vspace{0.2cm} \item[(b)] \begin{itemize} \item[(i)] for some $\gamma>0$ it holds that $\phi(at)=a^\gamma \phi(t)$ for all $t \in {\mathbb R}$ and $a \in {\mathbb R}^+$; \item[(ii)] $ \mbox{Ker}(A)\cap\mbox{Ker}(\Lambda)=\{0\}. $ \end{itemize} \vspace{0.2cm} \end{itemize} Three popular examples of nonconvex penalties which satisfy ${\bf (H)}$ and the assumptions on $\phi$ in (a) or (b) are the following: \medskip \begin{description} \item[${\bm \ell^\tau}$] $\tau \in (0,1], \lambda >0$ \begin{equation}\label{p} \phi(t)=\lambda |t|^\tau, \end{equation} satisfying $(b)(i)$. \vspace{0.5cm} \item[SCAD] $\tau >1, \lambda >0$ \begin{equation}\label{SCAD} \phi(t)=\left\{ \begin{array}{lll} \frac{\lambda^2(\tau+1)}{2} \quad &|t|\geq \lambda \tau\\ \frac{\lambda \tau |t|-\frac{1}{2}(t^2+\lambda^2)}{\tau-1}\quad &\lambda < |t|\leq \lambda \tau\\ \lambda |t| \quad &|t|\leq \lambda, \end{array} \right.\, \end{equation} satisfying $(a)(i)$. \vspace{0.5cm} \item[MCP] $\tau>1, \lambda >0$ \begin{equation}\label{MCP} \phi(t)=\left\{ \begin{array}{lll} \lambda(|t|-\frac{t^2}{2\lambda\tau})\quad &|t|< \lambda \tau\\ \frac{\lambda^2\tau}{2}\quad &|t|\geq \lambda \tau, \end{array} \right.\, \end{equation} satisfying $(a)(i)$. \end{description} \begin{remark}\rm{ The singularity at the origin of the three penalties leads to sparsity of the solution. For the SCAD and the MCP, the derivative vanishes for large values of $|t|$ to ensure unbiasedness. Problems such as \eqref{optprobphi} with $\phi$ given by the $\ell^\tau$ quasi-norm with $\tau \in (0,1)$ were studied in \cite{GK}. For more details on the statistical properties of the $\ell^\tau$ quasi-norm, such as variable selection and the oracle property, we refer to \cite{CS,FLai,HHM,KKF}.
The SCAD (smoothly clipped absolute deviation) \cite{FL,FP} has raised interest in relation to variable selection consistency and asymptotic estimation efficiency (see \cite{FP}). It can be obtained upon integration of the following formula for $\tau>2$ $$ \phi(t)=\lambda \int_0^{|t|}\min\left(1,\frac{\max(0,\lambda \tau-|s|)}{\lambda (\tau-1)}\right) ds. $$ The MCP (minimax concave penalty) \cite{CHZ} can be recovered from the following formula $$ \phi(t)=\lambda \int_0^{|t|} \max\left (0,1-\frac{|s|}{\lambda \tau}\right) ds $$ and minimizes the maximum concavity $\sup_{0<t_1<t_2}\frac{\left(\phi'(t_1)-\phi'(t_2)\right)}{(t_2-t_1)}$ subject to the constraints $\phi'(t)=0$ for any $|t|\geq \lambda \tau$ (unbiasedness) and $\phi'(0^{\pm})=\pm \lambda$ (feature selection). The condition $\tau>1$ ensures the well-posedness of the thresholding operator. } \end{remark} \subsection{Existence}\label{subsecex} First we prove coercivity of the functional $J$ in \eqref{optprobphi} under assumptions (a) or (b). \begin{lem}\label{coer} Let assumptions \textbf{(H)} and either (a) or (b) hold. Then the functional $J$ in \eqref{optprobphi} is coercive. \end{lem} \begin{proof} Under assumption (a), the coercivity of $J$ follows trivially. Suppose now that (b) holds. Then the result follows by arguments similar to those used in \cite{GK}, Theorem 1 (where $\phi$ is the $\ell^\tau$ quasi-norm). We proceed by contradiction and suppose that $|x_k|_2\rightarrow + \infty$ while $J(x_k)$ remains bounded. For each $k$, let $x_k=t_kz_k$ be such that $t_k\geq 0$, $z_k \in {\mathbb R}^n$ and $|z_k|_2=1.
$ By (b) (i) we have $$ \Phi(\Lambda z_k)= \frac{1}{t_k^\gamma}\Phi(\Lambda x_k) $$ and thus, since $t_k \rightarrow + \infty$ and $J(x_k)$ is bounded, we have for $k \rightarrow + \infty$ $$ \displaystyle 0\leq |A z_k|_2^2+ \Phi(\Lambda z_k)=\frac{1}{ t_k^2}|A x_k|_2^2+ \frac{1}{t_k^\gamma}\Phi(\Lambda x_k)\leq \frac{1}{t_k^{\min\{2,\gamma\}}}\left( |A x_k|_2^2+\Phi(\Lambda x_k)\right) \rightarrow 0 $$ and hence $$ \lim_{k \rightarrow + \infty} |Az_k|_2^2+\Phi(\Lambda z_k)=0. $$ By compactness, the sequence $\{z_k\}$ has an accumulation point $\bar z$ such that $|\bar z|_2=1$ and $\bar z \in \mbox{Ker}(A)\cap \mbox{Ker}(\Lambda)$, which contradicts (b) (ii). \end{proof} In the following theorem we state the existence of at least one minimizer to \eqref{optprobphi} under either (a) or (b). We omit the proof since it follows directly from the continuity and coercivity of the functional in \eqref{optprobphi}. \begin{theorem}\label{thmex} Let assumptions \textbf{(H)} and either (a) or (b) hold. Then there exists at least one minimizer to problem \eqref{optprobphi}. \end{theorem} \begin{oss}\rm{ We remark that when assumption (a) (i) holds but $A$ is not coercive, existence can still be proved provided $\Lambda \in {\mathbb R}^{n\times n}$ is invertible. Indeed, by the invertibility of $\Lambda$ one can define $\bar{y}=\Lambda^{-1}\bar x$, where $\bar x$ is a minimizer of $\bar{J}(x)=\frac{1}{2}|(A\Lambda^{-1})x-b|_2^2+\Phi(x)$, and prove that $\bar{y}$ is a minimizer of \eqref{optprobphi}. The existence of a minimizer for the functional $\bar{J}$ was proved in \cite{J}, Theorem $2.1$. However, in our analysis we cover the two cases (a) and (b) since, when (a) (ii) is replaced by the invertibility of $\Lambda$, we cannot prove the coercivity of $J$, which is a key element for the convergence of the algorithm that we analyse (see the following section).
} \end{oss} \subsection{A monotone convergent algorithm}\label{subsecmon} Following \cite{KI}, in order to overcome the singularity of the function $\phi(t)$ near $t=0$, we consider for $\varepsilon>0$ the following regularized version of \eqref{optprobphi} \begin{equation} \label{optprobepsphi} \min_{x \in {\mathbb R}^n}J_\varepsilon(x) = \frac{1}{2}|A x-b|_2^2+ \Psi_\varepsilon(|\Lambda x|^2), \end{equation} where for $t \geq 0$ \begin{equation}\label{psieps} \Psi_{\varepsilon}(t)= \left\{ \begin{array}{ll} \frac{\phi'(\varepsilon)}{2\varepsilon}t+\left(1-\frac{\phi'(\varepsilon) \varepsilon}{2\phi(\varepsilon)}\right)\phi(\varepsilon) \quad &\mbox{for }\,\, 0\leq t \leq \varepsilon^2\\ \noalign{\smallskip} \phi(\sqrt{t}) \quad & \mbox{ for }\,\, t \geq \varepsilon^2, \end{array} \right.\, \end{equation} and $\Psi_\varepsilon(|\Lambda x|^2)$ is short for $\sum_{i=1}^r \Psi_\varepsilon(|(\Lambda x)_i|^2)$. Note that \begin{equation}\label{derpsi} \Psi'_\varepsilon(t)=\frac{1}{\max\left\{\frac{2\varepsilon}{\phi'(\varepsilon)},\frac{2\sqrt{t}}{\phi'(\sqrt{t})}\right\}} \geq 0 \quad \mbox{ on } (0, \infty), \end{equation} hence $\Psi_\varepsilon$ is $C^1$ and, by assumption \textbf{(H)} (iii), concave on $[0,\infty)$. Moreover the map $t \rightarrow \Psi_\varepsilon(t^2)$ is $C^1$ on $(-\infty, \infty)$. \begin{oss} \rm{ In \eqref{derpsi} we assume that the function $x \rightarrow \frac{\phi'(x)}{x}$ is decreasing. When the function $x \rightarrow \frac{\phi'(x)}{x}$ is increasing, one needs to replace the maximum in \eqref{derpsi} with the minimum, and the results below hold as well.
} \end{oss} The necessary optimality condition for \eqref{optprobepsphi} is given by \begin{equation}\label{necoptcondexp} A^*A x+\Lambda^* \frac{1}{\max\left\{\frac{\varepsilon}{\phi'(\varepsilon)},\frac{|\Lambda x|}{\phi'(|\Lambda x|)}\right\}}\Lambda x=A^*b, \end{equation} where the second addend is short for the vector with $l$-component $\sum_{i=1}^r(\Lambda^*)_{li} \frac{1}{\max\left\{\frac{\varepsilon}{\phi'(\varepsilon)},\frac{|(\Lambda x)_i|}{\phi'(|(\Lambda x)_i|)}\right\}}(\Lambda x)_i$. For convenience of exposition, in the following we write \eqref{necoptcondexp} in the more compact notation $$ A^*A x+2\Lambda^* \Psi'_\varepsilon(|\Lambda x|^2)\Lambda x=A^*b, $$ where the $l$-component of the second addend is given by $\sum_{i=1}^r(\Lambda^*)_{li} \Psi'_\varepsilon(|(\Lambda x)_i|^2)(\Lambda x)_i$. This can equivalently be expressed as \begin{equation} \label{optcondeps2m} A^*A x+2\Lambda^* \Psi'_\varepsilon(|y|^2)y=A^*b \quad \mbox{with } y=\Lambda x. \end{equation} In order to solve \eqref{optcondeps2m}, the following iterative procedure is considered: \begin{equation}\label{iter2m} A^*A x^{k+1}+2\Lambda^* \Psi'_\varepsilon(|y^k|^2)y^{k+1}=A^*b \quad \mbox{where } y^{k}=\Lambda x^{k}. \end{equation} Existence of a solution to equation \eqref{iter2m} for any $k \geq 0$ follows from the existence of a minimizer of the associated optimization problem \begin{equation}\label{minit} \min_{x\in {\mathbb R}^n} \frac{1}{2}|Ax-b|_2^2+\left|\sqrt{\Psi'_\varepsilon(|y^k|^2)} \Lambda x\right|_2^2. \end{equation} We have the following convergence result. \begin{theorem}\label{monotdec2m} Assume \textbf{(H)} and either (a) or (b). For $\varepsilon>0$, let $\{x^k\}$ be generated by \eqref{iter2m}. Then $J_{\varepsilon}(x^k)$ is strictly monotonically decreasing, unless there exists some $k$ such that $x^k = x^{k+1}$, in which case $x^k$ satisfies the necessary optimality condition \eqref{optcondeps2m}.
Moreover every cluster point of $\{x^k\}$, of which there exists at least one, is a solution of \eqref{optcondeps2m}. \end{theorem} \begin{proof} The proof strongly depends on the coercivity of the functional $J_\varepsilon$ and follows arguments similar to those of \cite[Theorem $4.1$]{KI}. Multiplying \eqref{iter2m} by $x^{k+1}-x^k$, we get \begin{eqnarray}\label{proofeq1} \frac{1}{2}|A x^{k+1}|^2-\frac{1}{2}|A x^k|^2+\frac{1}{2}|A (x^{k+1}-x^{k})|^2 &+& \left( 2\Psi'_\varepsilon(|y^k|^2)y^{k+1},y^{k+1}-y^{k}\right) \nonumber\\ &=&(A^*b,x^{k+1}-x^k). \end{eqnarray} Note that \begin{equation}\label{mon1} \left(y^{k+1},y^{k+1}-y^k\right)=\frac{1}{2}\sum_{i=1}^r\left(|y_i^{k+1}|^2-|y_i^{k}|^2+|y_i^{k+1}-y_i^{k}|^2\right). \end{equation} By assumption ${\bf (H)}\,(iii)$ the function $t \rightarrow \Psi_\varepsilon(t)$ is concave on $[0,\infty)$, and we have \begin{equation}\label{mon3} 2\Psi_\varepsilon(|y_i^{k+1}|^2)-2\Psi_\varepsilon(|y_i^{k}|^2)-\Psi'_\varepsilon(|y_i^k|^2)(|y_i^{k+1}|^2-|y_i^{k}|^2) \leq 0. \end{equation} Then, using \eqref{proofeq1}-\eqref{mon3}, we get \begin{equation}\label{442m} J_\varepsilon(x^{k+1}) +\frac{1}{2}|A(x^{k+1}-x^k)|_2^2+\sum_i\Psi'_\varepsilon(|y_i^k|^2)|y_i^{k+1}-y_i^{k}|^2 \leq J_\varepsilon(x^k). \end{equation} From \eqref{442m} and the coercivity of $J_\varepsilon$, it follows that $\{x^k\}_{k=1}^\infty$ and thus $\{y^{k}\}_{k=1}^\infty$ are bounded. Consequently, from \eqref{442m} and \eqref{derpsi}, there exists a constant $\kappa>0$ such that \begin{equation}\label{45} J_\varepsilon(x^{k+1}) +\frac{1}{2}|A(x^{k+1}-x^k)|_2^2+\kappa|y^{k+1}-y^{k}|_2^2 \leq J_\varepsilon(x^k). \end{equation} Conditions (a) (ii) and (b) (ii) respectively imply that $J_\varepsilon(x^k)$ is strictly decreasing unless $x^k=x^{k+1}$. In the latter case, from \eqref{iter2m} we infer that $x^k$ solves \eqref{optcondeps2m}, which concludes the first part of the theorem.
From \eqref{45}, we conclude that \begin{equation}\label{46} \sum_{k=0}^\infty \left(|A(x^{k+1}-x^k)|_2^2+\kappa|y^{k+1}-y^{k}|_2^2\right) <\infty. \end{equation} Since $\{x^k\}_{k=1}^\infty$ is bounded, there exist a subsequence and $\bar x \in {\mathbb R}^n$ such that $x^{k_l}\rightarrow \bar x$. By \eqref{46} we get $$ \lim_{k\rightarrow \infty}\left(|A(x^{k+1}-x^k)|_2^2+\kappa|y^{k+1}-y^{k}|_2^2\right)=0. $$ Then, using the coercivity of $A$ under assumption (a), and the fact that $\mbox{Ker}(A)\cap \mbox{Ker}(\Lambda)=\{0\}$ under assumption (b), we conclude that $$ \lim_{k\rightarrow \infty}(x^{k+1}-x^{k})=0 $$ and hence $x^{k_l+1} \rightarrow \bar x$. We can now pass to the limit with respect to $k$ in \eqref{iter2m} to obtain that $\bar x$ is a solution to \eqref{optcondeps2m}. \end{proof} In the following proposition we establish the convergence of \eqref{optprobepsphi} to \eqref{optprobphi} as $\varepsilon$ goes to zero. \begin{proposition} Assume \textbf{(H)} and either (a) or (b). For each $\varepsilon>0$, denote by $x_\varepsilon$ a solution to \eqref{optprobepsphi}. Then any cluster point of $\{x_\varepsilon\}_{\varepsilon>0}$, of which there exists at least one, is a solution of \eqref{optprobphi}. \end{proposition} \begin{proof} From the coercivity of $J_\varepsilon$ it follows that $\{x_\varepsilon\}_{\varepsilon}$ is bounded for $\varepsilon$ small. Hence there exist a subsequence and $\bar x \in {\mathbb R}^n$ such that $x_{\varepsilon_l} \rightarrow \bar x$. By property \textbf{(H)} (i) of $\phi$, we have \begin{equation}\label{Hi} \lim_{t\rightarrow 0 }\phi(t)=0 \quad \mbox{ and } \quad \phi'(t)\geq 0 \quad \forall t > 0.
\end{equation} Moreover, by the concavity of the function $\phi$ we have $$ \phi(t)-\phi(s)\leq \phi'(s)(t-s) \quad \mbox{for } s \in(0,\infty), t \in [0,\infty), $$ and by choosing $s=\varepsilon$ and $t=0$ we get $\phi'(\varepsilon)\varepsilon\leq\phi(\varepsilon)$, so that by \eqref{Hi} \begin{equation}\label{psiepseps} \phi'(\varepsilon)\varepsilon \rightarrow 0 \,\,\mbox{ as } \varepsilon \rightarrow 0. \end{equation} By the definition of $\Psi_\varepsilon$, \eqref{Hi} and \eqref{psiepseps} we obtain that $\Psi_\varepsilon(t)$ converges uniformly to $\phi(\sqrt{t})$ as $\varepsilon \rightarrow 0$, that is, $$ \sup_{t\in [0,\infty)}\left|\Psi_\varepsilon(t) -\phi(\sqrt{t})\right| \rightarrow 0\quad \mbox{ as } \varepsilon \rightarrow 0, $$ from which we obtain \begin{equation}\label{limpsieps} \Psi_\varepsilon(|\Lambda x_\varepsilon|^2)=\sum_{i=1}^r\Psi_\varepsilon(|(\Lambda x_\varepsilon)_i|^2) \rightarrow \sum_{i=1}^r \phi(\Lambda \bar x)_i=\Phi(\Lambda\bar x)\quad \mbox{ as } \varepsilon \rightarrow 0. \end{equation} Since $x_\varepsilon$ solves \eqref{optprobepsphi}, by letting $\varepsilon\rightarrow 0$ and using \eqref{limpsieps}, we easily get that $\bar x$ is a solution of \eqref{optprobphi}. \end{proof} \subsection{Asymptotic behaviour as $\lambda \rightarrow 0^{+}$ and $\tau\rightarrow 0^{+}$ for the power law}\label{subsecas} We discuss the asymptotics as $\lambda$ and $\tau$ go to zero in \eqref{optprobphi} for $\phi(t)=\lambda|t|^\tau, \tau \in (0,1]$, which we restate for convenience \begin{equation}\label{optprobphias} \min_{x \in {\mathbb R}^n}\frac{1}{2}|A x-b|_2^2+\lambda |\Lambda x|_\tau^\tau, \end{equation} where $A, b, \Lambda$ are as in \eqref{optprobphi}, $\tau \in (0,1], \lambda >0$ and $$ |\Lambda x|_\tau^\tau=\sum_{i=1}^r |(\Lambda x)_i|^\tau. $$ First we analyse the convergence as $\lambda \rightarrow 0$ for any fixed $\tau >0$.
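For later numerical reference, the penalties \eqref{p}, \eqref{SCAD}, \eqref{MCP} and the smoothing \eqref{psieps} admit a direct implementation. The following Python sketch (function names and default parameter values are ours, chosen only for illustration) can be used to check continuity at the respective breakpoints:

```python
import numpy as np

def phi_lp(t, lam=1.0, tau=0.5):
    # l^tau penalty (p): phi(t) = lam |t|^tau
    return lam * np.abs(t) ** tau

def phi_scad(t, lam=1.0, tau=3.7):
    # SCAD penalty (SCAD), tau > 1
    a = np.abs(t)
    return np.where(a <= lam, lam * a,
                    np.where(a <= lam * tau,
                             (lam * tau * a - 0.5 * (a ** 2 + lam ** 2)) / (tau - 1.0),
                             0.5 * lam ** 2 * (tau + 1.0)))

def phi_mcp(t, lam=1.0, tau=2.0):
    # MCP penalty (MCP), tau > 1
    a = np.abs(t)
    return np.where(a < lam * tau,
                    lam * (a - a ** 2 / (2.0 * lam * tau)),
                    0.5 * lam ** 2 * tau)

def psi_eps(t, phi, dphi, eps):
    # smoothing (psieps): quadratic in t below eps^2, phi(sqrt(t)) above;
    # the constant term is phi(eps) - phi'(eps)*eps/2, matching the definition
    t = np.asarray(t, dtype=float)
    linear = dphi(eps) / (2.0 * eps) * t + phi(eps) - 0.5 * dphi(eps) * eps
    return np.where(t <= eps ** 2, linear, phi(np.sqrt(np.maximum(t, eps ** 2))))
```

Both branches of \eqref{psieps} evaluate to $\phi(\varepsilon)$ at $t=\varepsilon^2$, in agreement with the $C^1$ regularity of $\Psi_\varepsilon$ noted above.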
We denote by $P$ the orthogonal projection of ${\mathbb R}^m$ onto $\mbox{Ker}(A^{*})$ and set $\tilde{b} = (I-P)b \in \mbox{Rg}(A)$. Then $$ |Ax -b|_2^2=|Ax-\tilde{b}|_2^2+|Pb|_2^2. $$ For $\tau>0$ fixed consider the problem \begin{equation}\label{minnorm} \min |\Lambda x|_\tau^\tau \quad \mbox{ subject to } Ax -\tilde{b}=0. \end{equation} \begin{theorem} Let $\mbox{rank}(A)=n$. For $\tau>0$ fixed, let $x_{\lambda}$ be a minimizer of \eqref{optprobphias} for any given $\lambda >0$, and let $\tilde{x} \in {\mathbb R}^n$ be such that $A\tilde{x}=\tilde{b}$. Then every cluster point of solutions $x_{\lambda}$ to \eqref{optprobphias} as $\lambda \rightarrow 0^{+}$ is a solution to \eqref{minnorm}. \end{theorem} \begin{proof} By optimality of $x_\lambda$ we have \begin{equation}\label{G} \frac{1}{2}|Ax_{\lambda} - \tilde{b}|_2^2+\lambda |\Lambda x_\lambda|_\tau^\tau \leq \frac{1}{2}|A\tilde{x}-\tilde{b}|_2^2+\lambda |\Lambda \tilde{x}|_\tau^\tau=\lambda |\Lambda \tilde{x}|_\tau^\tau, \end{equation} from which we conclude that $\lim_{\lambda \rightarrow 0^{+}} |Ax_{\lambda} -\tilde{b}|_2^2=0$. Since $\mbox{rank}(A)=n$, the sequence $\{x_\lambda\}_{\lambda >0}$ is bounded uniformly in $\lambda$. Then there exist $\bar x \in {\mathbb R}^n$ and a subsequence $x_{\lambda_l} \rightarrow \bar x$ satisfying $A \bar x =\tilde{b}$. From \eqref{G} we also have $$ |\Lambda x_\lambda|_\tau^\tau\leq |\Lambda \tilde{x}|_\tau^\tau \quad \mbox{ for all } \tilde x \mbox{ satisfying } A\tilde{x}=\tilde{b}. $$ Taking the limit as $\lambda \rightarrow 0^{+}$ along the subsequence, we conclude that $\bar x$ is a solution to \eqref{minnorm}. \end{proof} Now we prove the convergence as $\tau \rightarrow 0$, for any fixed $\lambda >0$, of \eqref{optprobphias} to the related $\ell^0$-problem \begin{equation}\label{optprobphi0} \min_{x \in {\mathbb R}^n}\frac{1}{2}|Ax-b|_2^2+\lambda |\Lambda x|_0, \end{equation} where for any $x \in {\mathbb R}^n$ $$ |x|_0=\sum_{k=1}^n|x_k|^0=\mbox{ number of nonzero elements of } x.
$$ The precise statement is given in the following theorem. \begin{theorem} Let $\mbox{rank}(A)=n$ and let $\lambda >0$ be fixed. Then any cluster point (of which there exists at least one) of solutions $\{x_{\tau}\}$ to \eqref{optprobphias}, as $\tau \rightarrow 0^{+}$, is a solution of \eqref{optprobphi0}. \end{theorem} \begin{proof} Since $J(x_\tau)\leq J(0)=\frac{1}{2}|b|_2^2$, the sequence $\{x_\tau\}$ is bounded because $\mbox{rank}(A)=n$; this ensures the existence of a converging subsequence (denoted by the same symbol) $\{x_\tau\}\rightarrow \bar x$ for some $\bar x\in {\mathbb R}^n$. For any fixed $i \in \{1,\cdots, r\}$, denote $y_\tau=|(\Lambda x_\tau)_i|$ and $\bar y=|(\Lambda \bar x)_i|$, and notice that $y_\tau \rightarrow \bar{y}$ as $\tau \rightarrow 0$. Then if $\bar y=0$, we can assume $y_\tau <1$ for $\tau$ small enough and we conclude $$ y_\tau^\tau<y_\tau\rightarrow 0 \quad \mbox{ as } \tau \rightarrow 0. $$ If $\bar y>0$ we have $$ \log(y_\tau^\tau)=\tau\log(y_\tau) \rightarrow 0 \quad \mbox{ as } \tau \rightarrow 0 $$ and thus $$ y_\tau^\tau \rightarrow 1 \quad \mbox{ as } \tau \rightarrow 0. $$ By using the above arguments for all $i=1,\cdots, r$, we have $$ |(\Lambda x_\tau)_i|^\tau \rightarrow |(\Lambda \bar x)_i|^0 \quad \mbox{ as } \tau \rightarrow 0 $$ and then we conclude \begin{equation}\label{limtau} |\Lambda x_\tau|^{\tau}_\tau\rightarrow |\Lambda \bar x|_0 \quad \mbox{ as } \tau \rightarrow 0. \end{equation} By the optimality of $x_\tau$ we have $$ \frac{1}{2}|Ax_\tau-b|_2^2+\lambda |\Lambda x_\tau|^{\tau}_\tau\leq \frac{1}{2}|Ax-b|_2^2+\lambda |\Lambda x|_\tau^\tau, \quad \mbox{ for all } x \in {\mathbb R}^n. $$ Then the proof follows by taking the limit $\tau\rightarrow 0$ and using \eqref{limtau} to obtain $$ \frac{1}{2}|A\bar x-b|_2^2+\lambda |\Lambda \bar x|_0\leq \frac{1}{2}|Ax-b|_2^2+\lambda |\Lambda x|_0, \quad \mbox{ for all } x \in {\mathbb R}^n. $$ \end{proof} \section{Algorithm and numerical results}\label{alg} For convenience we recall the algorithm in the following form. \begin{algorithm}[h!]
\caption{Monotone algorithm with $\varepsilon$-continuation strategy} \begin{algorithmic}[1] \STATE Initialize $x^0$, $\varepsilon^0$, and set $y^{0}=\Lambda x^0$. Set $k=0$; \REPEAT \STATE Solve for $x^{k+1}$ $$ A^*A x^{k+1}+\Lambda^*\frac{1}{\max\left\{\frac{\varepsilon}{\phi'(\varepsilon)},\frac{|y^k|}{\phi'(|y^k|)}\right\}}\Lambda x^{k+1}=A^*b. $$ \STATE Set $y^{k+1}=\Lambda x^{k+1}$. \STATE Set $k=k+1$. \UNTIL{the stopping criterion is fulfilled}. \STATE Reduce $\varepsilon$ and repeat from step 2. \end{algorithmic} \end{algorithm} \begin{remark}\rm{ Note that an $\varepsilon$-continuation strategy is adopted: the procedure is run for an initial value $\varepsilon^0$ and $\varepsilon$ is then decreased down to a prescribed value. More specifically, in all our experiments $\varepsilon$ is initialized with $10^{-1}$ and decreased down to $10^{-12}$. \\ } \end{remark} \begin{remark}\rm{ The stopping criterion is based on the $l^\infty$-norm of the residual of equation \eqref{optcondeps2m}, and the tolerance is set to $10^{-3}$ in all the following examples, except for the fracture problem where it is of the order of $10^{-15}$. } \end{remark} In the following subsection we present our numerical results in cohesive fracture mechanics. Then in subsection \ref{compg} the performance of our algorithm is compared to two other schemes for nonconvex and nonsmooth optimization problems. \subsection{Application to quasi-static evolution of cohesive fracture models}\label{subsecnum} In this section we focus on the numerical realization of quasi-static evolutions of cohesive fractures. This kind of problem requires the minimization of an energy functional with two components: the elastic energy and the cohesive fracture energy. The underlying idea is that the fracture energy is released gradually with the growth of the crack opening.
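For fixed $\varepsilon$, steps 2--6 of \textbf{Algorithm 1} amount to repeatedly solving the linear system $(A^*A+\Lambda^*W_k\Lambda)x^{k+1}=A^*b$ with the diagonal weight $W_k=\mathrm{diag}\big(1/\max\{\varepsilon/\phi'(\varepsilon),\,|y_i^k|/\phi'(|y_i^k|)\}\big)$. A minimal numpy sketch for the $\ell^\tau$ penalty (problem data and naming are ours, purely for illustration), which also records the monotone decrease of $J_\varepsilon$ guaranteed by Theorem \ref{monotdec2m}:

```python
import numpy as np

def dphi_lp(t, lam=1.0, tau=0.5):
    # derivative of the l^tau penalty phi(t) = lam |t|^tau for t > 0
    return lam * tau * t ** (tau - 1.0)

def J_eps(A, Lam, b, x, eps, lam, tau):
    # regularized objective (optprobepsphi) for the l^tau penalty
    t = (Lam @ x) ** 2
    d, p = dphi_lp(eps, lam, tau), lam * eps ** tau
    linear = d / (2.0 * eps) * t + p - 0.5 * d * eps
    pen = np.where(t <= eps ** 2, linear, lam * np.maximum(t, eps ** 2) ** (tau / 2.0))
    return 0.5 * np.sum((A @ x - b) ** 2) + np.sum(pen)

def monotone_sweep(A, Lam, b, eps, lam=1.0, tau=0.5, maxit=100, tol=1e-10):
    # steps 2-6 of Algorithm 1 for fixed eps: at each iteration solve the
    # linear system (A^T A + Lam^T W_k Lam) x = A^T b, with the diagonal
    # weight W_k = diag( 1 / max{ eps/phi'(eps), |y_i^k|/phi'(|y_i^k|) } )
    x = np.zeros(A.shape[1])
    hist = [J_eps(A, Lam, b, x, eps, lam, tau)]
    for _ in range(maxit):
        # clamping |y_i| to max(|y_i|, eps) is an exact evaluation of the weight
        # for l^tau, since phi'(t)/t is decreasing there (see the remark above)
        a = np.maximum(np.abs(Lam @ x), eps)
        w = 1.0 / np.maximum(eps / dphi_lp(eps, lam, tau), a / dphi_lp(a, lam, tau))
        x_new = np.linalg.solve(A.T @ A + Lam.T @ (w[:, None] * Lam), A.T @ b)
        hist.append(J_eps(A, Lam, b, x_new, eps, lam, tau))
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x, hist
```

The clamping $|y_i^k|\mapsto \max\{|y_i^k|,\varepsilon\}$ avoids evaluating $\phi'$ at zero while leaving the weight unchanged, because below $\varepsilon$ the $\varepsilon$-branch of the maximum is active.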
The cohesive energy, denoted by $\theta$, is assumed to be a monotone nondecreasing function of the jump amplitude of the displacement, denoted by $\llbracket u \rrbracket$. Cohesive energies were introduced independently by Dugdale \cite{DG} and Barenblatt \cite{BA}; we refer to \cite{PI13} for more details on the models. Let us just remark that the two models differ mainly in the behaviour of the derivative $\theta'(\llbracket u\rrbracket)$, that is, the \textit{bridging force}, as a function of the crack amplitude $\llbracket u \rrbracket$. In Dugdale's model this force keeps a constant value up to a critical value of the crack opening and then drops to zero. In Barenblatt's model, the dependence of the force on $\llbracket u \rrbracket$ is continuous and decreasing. \\ In this section we test the $\ell^\tau$ term, $0<\tau<1$, as a model for the cohesive energy. In particular, the cohesive energy is not differentiable at zero, and the bridging force tends to infinity as the jump amplitude goes to zero; note also that the bridging force goes to zero as the jump amplitude goes to infinity.\\ We denote by $u\,:\, \Omega \rightarrow {\mathbb R}$ the displacement function. The deformation of the domain is driven by an external force, which we express in terms of an external displacement function $g\,:\,\Omega\times [0,T] \rightarrow {\mathbb R}$. We require that the displacement $u$ coincides with the external deformation on the boundary, that is $$ u|_{\partial \Omega}=g|_{\partial \Omega}. $$ We denote by $\Gamma$ the point of the (potential) crack, and by $\theta(\llbracket u \rrbracket)_\Gamma$ the value of the cohesive energy $\theta$ at the crack amplitude of the displacement $\llbracket u \rrbracket$ on $\Gamma$. Since we are in a quasi-static setting, we introduce the time discretization $0=t_0<t_1< \cdots <t_T=T$ and look for the equilibrium configurations, which are minimizers of the energy of the system.
This means that for each $i \in \{0, \cdots, T\}$ we need to minimize the energy of the system $$ J(u)=\frac{1}{2}\int_{\Omega\backslash \Gamma}|a(x)\nabla u|^2 dx +\theta(\llbracket u \rrbracket)_\Gamma $$ with respect to a given boundary datum $g$: $$ u^*\in \argmin_{u=g(t_i) \mbox{ on } \partial \Omega} J(u). $$ The function $a(\cdot)$ measures the degree of homogeneity of the material; e.g. $a(x)\equiv 1$ means that the material is homogeneous. In our experiments we consider three different types of cohesive energy, given by the $\ell^\tau$, $\tau \in (0,1)$, SCAD and MCP penalties as defined in \eqref{p}, \eqref{SCAD}, \eqref{MCP}, respectively. In paragraphs \ref{onedim} and \ref{twodim} we show our results for one-dimensional and two-dimensional experiments, respectively. \subsubsection{One-dimensional experiments}\label{onedim} We consider the one-dimensional domain $\Omega=[0,1]$ and we choose the point of crack as the midpoint $\Gamma=0.5$. We divide $\Omega$ into $2N$ intervals and approximate the displacement function by a function $u_h$ that is piecewise linear on $\Omega\backslash \Gamma$ and has two degrees of freedom on $\Gamma$ to represent correctly the two lips of the fracture; we denote by $u_{N}^{-}$ the one on $[0,0.5]$ and by $u_{N}^{+}$ the one on $[0.5,1]$. We discretize the problem in the following way \begin{equation}\label{fractdiscr} J_h(u_h)=\frac{1}{2} \sum_{i=1}^{2N} 2N |a_i(u_i -u_{i-1})|^2+ \theta(\llbracket u_N \rrbracket), \end{equation} where for $i\leq N$ we identify $u_N=u_N^-$, for $i>N$ we identify $u_N=u_N^+$, and $a_i$ denotes the nodal value of the piecewise linear approximation of the material inhomogeneity function. We remark that the jump of the displacement is not taken into account in the sum, and that the gradient of $u$ is approximated by first order finite differences. The Dirichlet condition is applied on $\partial \Omega=\{0,2l\}$, with $2l=1$, and the external displacement is chosen as $$ u(0,t)=0, \quad u(2l,t)=2lt.
$$ To enforce the boundary condition in the minimization process, we add it to the energy functional as a penalization term. Hence we solve the following unconstrained minimization problem \begin{equation}\label{ff1} \min N|A u_h -b|_2^2+\theta(\llbracket u_N \rrbracket), \end{equation} where the operator $A \in {\mathbb R}^{(2N+1)\times (2N+1)}$ is given by $A=RD_\gamma$, where $R\in {\mathbb R}^{(2N+1)\times (2N+1)}$ is the diagonal operator with entries $R_{ii}=a_i$ and $$ D_\gamma=\left[\begin{array}{c} \bar D\\ 0\, \,\cdots\,\, 0 \, \,\gamma\end{array} \right]. $$ Here $\bar D \in {\mathbb R}^{2N\times (2N+1)}$ denotes the backward finite difference operator $D \, : \,{\mathbb R}^{2N+1} \rightarrow {\mathbb R}^{2N+1}$ without the $(N+1)$-th row, where $D$ is defined as \begin{equation}\label{Dcontrol} D=\left(\begin{array}{ccccc} 1& 0& 0& \cdots& 0\\ -1& 1& 0& \cdots& 0\\\vspace{0.2cm}\\ 0& \cdots& 0&-1&1\ \end{array}\right). \end{equation} Moreover $b \in {\mathbb R}^{2N+1}$ in \eqref{ff1} is given by $b=(0,\cdots, 0,\gamma t_i)'$ and $\gamma$ is the penalization parameter. To compute the jump between the two lips of the fracture, we introduce the operator $D_f:{\mathbb R}^{2N+1} \rightarrow {\mathbb R}$ defined as $D_f=(0,\cdots, -1,1,0,\cdots,0)$, where $-1$ and $1$ are in the $N$-th and $(N+1)$-th positions, respectively. Then we write the functional \eqref{ff1} as follows: \begin{equation}\label{ff2} \min N|A u_h -b|_2^2+ \theta( D_fu). \end{equation} We consider the three different penalizations given by the $\ell^\tau$, $\tau \in (0,1)$, the SCAD and the MCP penalties. Note that $\mbox{Ker}(A)=\{0\}$, hence assumptions (a)(ii) and (b)(ii) are satisfied and existence of a minimizer for \eqref{ff2} is guaranteed. Our numerical experiments were conducted with a discretization into $2N$ intervals with $N=100$. The time step in the time discretization of $[0,T]$ with $T=3$ is set to $dt=0.01$. The parameters of the energy functional $J_h(u_h)$ are set to $\lambda=1, \gamma=50$.
We remark that in the following experiments the material function $a(x)$ was always chosen as the identity. For tests with more general $a(x)$, we refer to the two-dimensional experiments reported in the following subsection. In Figures $1$ and $2$ we report the results obtained by \textbf{Algorithm $1$} for the models $\ell^\tau$ and SCAD, respectively. In each figure we show time frames representing the evolution of the crack for different values of the parameter $\tau$. Each time frame consists of three different time steps $(t_1, t_2, t_3)$, where $t_2$ and $t_3$ are chosen as the first instants at which the prefracture and the fracture appear, respectively. We observe the three phases that we expect from a cohesive fracture model: \begin{itemize} \item \textit{Pure elastic deformation}: the jump amplitude is zero and the gradient of the displacement is constant in $\Omega \backslash \Gamma$; \item \textit{Prefracture}: the two lips of the fracture do not touch each other, but they are not free to move; the elastic energy is still present. \item \textit{Fracture}: the two parts are free to move. In this final phase the gradient of the displacement (and hence the elastic energy) is zero. \end{itemize} We remark that the formation of the crack occurs earlier for smaller values of $\tau$. As we see in Figure $1$, for $\tau=.01$ prefracture and fracture are reached at $t=.3$ and $t=1.5$ respectively. As $\tau$ is increased to $\tau=.1$, prefracture and fracture occur at $t=1$ and $t=3$ respectively. We observe the same phenomenon for the SCAD (see Figure $2$). We tested our algorithm also for the MCP model, where no prefracture phase can be observed, that is, the displacement breaks almost instantaneously to reach the complete fracture. Finally we remark that in our experiments the residual is of order $10^{-16}$ and the number of iterations is small, e.g. $12$ and $15$ for $\tau=.01$ and $\tau=.1$, respectively.
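The one-dimensional operators entering \eqref{ff1}-\eqref{ff2} can be assembled in a few lines. The following numpy sketch (our own naming, small $N$ for illustration) also verifies that $\mbox{Ker}(A)=\{0\}$ and that $D_f$ extracts the jump:

```python
import numpy as np

def operators_1d(N, gamma, a=None):
    # assemble A = R * D_gamma and D_f of (ff1)-(ff2); n = 2N+1 unknowns,
    # the two lips of the crack are entries N and N+1 (1-based)
    n = 2 * N + 1
    if a is None:
        a = np.ones(n)                               # homogeneous material
    D = np.eye(n) - np.diag(np.ones(n - 1), -1)      # backward differences (Dcontrol)
    Dbar = np.delete(D, N, axis=0)                   # drop the (N+1)-th row: no jump term
    D_gamma = np.vstack([Dbar, np.r_[np.zeros(n - 1), gamma]])  # boundary penalty row
    A = np.diag(a) @ D_gamma
    Df = np.zeros((1, n))
    Df[0, N - 1], Df[0, N] = -1.0, 1.0               # jump [[u_N]] = u_N^+ - u_N^-
    return A, Df
```

With $a\equiv 1$ the matrix $A$ is triangular up to the appended penalty row, and a rank check confirms its invertibility.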
\subsubsection{Two-dimensional experiments}\label{twodim} We consider the two-dimensional domain $\Omega=(0,1)\times (0,1)$ and we choose the one-dimensional subdomain $\{0.5\}\times (0,1)$ as the line of crack. We proceed with the discretization of the problem analogously to subsection \ref{onedim}, that is, we divide $[0,1]$ into $2N$ intervals and approximate the displacement function by a function $u_h$ that is piecewise linear in $\Omega\backslash \Gamma$ and has two degrees of freedom on $\Gamma$ to represent correctly the two lips of the fracture. Define the operator $A \in {\mathbb R}^{((m-2)(2m-1)+m^2)\times m^2}$ by $$ A=\left[\begin{array}{c} R^1D_1\\ R^2D_2\\ \gamma I_{m^2}\end{array} \right], $$ where $m=2N+2$, $R^1 \in {\mathbb R}^{(m-1)(m-2)\times (m-1)(m-2)}$ and $R^2 \in {\mathbb R}^{m(m-2)\times m(m-2)}$ are two diagonal operators approximating the degree of homogeneity of the material, and $D_1\in {\mathbb R}^{(m-1)(m-2)\times m^2}$, $D_2\in {\mathbb R}^{m(m-2)\times m^2}$ are obtained by deleting from $G_1$ and $G_2$ the rows of differences crossing the crack; in MATLAB notation, $$ D_1=G_1 \mbox{ with } G_1((m-1)N+1: (m-1)N+2(m-1),\,:)=[\;], \quad D_2=G_2 \mbox{ with } G_2(mN+1:mN+m,\,:)=[\;], $$ where $G_1, G_2 \in {\mathbb R}^{m(m-1)\times m^2}$ are defined by the Kronecker products $$ G_1=\mathrm{kron}(I_m,D), \quad G_2=\mathrm{kron}(D,I_m), $$ and $D \in {\mathbb R}^{(m-1)\times m}$ denotes the following backward finite difference operator \begin{equation}\label{Dcontrol2d} D=\left(\begin{array}{cccccc} -1& 1& 0& \cdots & \cdots& 0\\0 &-1& 1& 0& \cdots& 0\\ & & \ddots& \ddots& & \\ 0& \cdots& \cdots& 0&-1&1 \end{array}\right). \end{equation} Again we enforce the boundary condition by adding it to the energy functional as a penalization term. Hence, we solve the following unconstrained minimization problem \begin{equation}\label{ff12dim} \min |A u_h -b|_2^2+\theta(D_f u_h), \end{equation} where $b \in {\mathbb R}^{(m-2)(2m-1)+m^2}$ in \eqref{ff12dim} is given by $b=(0,\cdots, \gamma g(t_i))'$, $g(t_i)$ is the discretization of the boundary datum $g$ at time $t_i$, and $\gamma$ is the penalization parameter.
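The Kronecker-product construction above, including the MATLAB-style row deletion that removes the differences crossing the crack line, can be sketched in NumPy as follows (the function name is ours; indices are translated from 1-based to 0-based):

```python
import numpy as np

def build_2d_differences(N):
    """Assemble G1 = kron(I_m, D), G2 = kron(D, I_m) and delete the
    rows of differences that would cross the crack line."""
    m = 2 * N + 2
    # backward difference operator D in R^{(m-1) x m}: rows (-1, 1, 0, ...)
    D = -np.eye(m - 1, m) + np.eye(m - 1, m, k=1)
    G1 = np.kron(np.eye(m), D)          # differences along one direction
    G2 = np.kron(D, np.eye(m))          # differences along the other
    # 1-based rows (m-1)N+1 : (m-1)N+2(m-1) and mN+1 : mN+m, shifted to 0-based
    rows1 = np.arange((m - 1) * N, (m - 1) * N + 2 * (m - 1))
    rows2 = np.arange(m * N, m * N + m)
    D1 = np.delete(G1, rows1, axis=0)
    D2 = np.delete(G2, rows2, axis=0)
    return D1, D2
```

The deletions leave exactly $(m-1)(m-2)$ and $m(m-2)$ rows, matching the dimensions stated above.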
Moreover, the jump across the crack is represented by the operator $D_f\in {\mathbb R}^{m\times m^2}$ defined as follows $$ D_f=[0_{m,mN}, -I_m, I_m, 0_{m,m^2-mN-2m}], $$ where by $0_{r,s}$ we denote the null matrix of dimension $r\times s$. Our numerical experiments were conducted with a discretization into $2N$ intervals with $N=80$. The time step in the time discretization of $[0,T]$ with $T=3$ is set to $dt=0.01$. The parameters of the energy functional $J_h(u_h)$ are set to $\lambda=1$, $\gamma=50$. We perform two different series of experiments, with boundary data resulting from evaluating on $\partial \Omega$ the functions $$ g_1(t)(x)=(2x_1-0.5)t \quad \mbox{ for every } t \in [0,1], x=(x_1,x_2) \in \Omega, $$ and $$ g_2(t)(x)=2t \cos(4(x_2-0.5))(x_1-0.5)\quad \mbox{ for every } t \in [0,1], x=(x_1,x_2) \in \Omega. $$ In Figures 3--6 we show the results obtained with boundary datum $g_1$ for each of the considered models, that is, $\ell^\tau$, SCAD and MCP, and in Figure 7 the results with boundary datum $g_2$ for the $\ell^\tau$ model. In the case of boundary datum $g_2$ we tested our algorithm also on the SCAD and MCP models, obtaining results similar to the ones shown in Figure 7. In these first experiments, the diagonal operators $R^1, R^2$ are taken as the identity, that is, we assume a homogeneous material. As expected from a cohesive fracture model, we observe the three phases of pure elastic deformation, prefracture and fracture. Moreover, prefracture and fracture are reached at different times for different values of $\tau$; typically they occur earlier for smaller values of $\tau$. When the boundary datum is $g_2$, that is, not constant in the $y$ direction, we note that the fracture is reached first in the part of the fracture line corresponding to the part of the boundary where the datum is larger. In Figures 8, 9 and 10 we test the algorithm in the case of a non-homogeneous material.
In Figure 8 we show the result for a two-material specimen, that is, we took \begin{equation}\label{R1} \left\{ \begin{array}{lll} R^1_{ii}=600 & i=1, \cdots, (m-1)(N+1), \\ R^1_{ii}=1 & i=(m-1)(N+1)+1, \cdots, 2(m-1)(N+1), \end{array} \right. \end{equation} \begin{equation}\label{R2} \left\{ \begin{array}{lll} R^2_{ii}=600 & i=1, \cdots, mN, \\ R^2_{ii}=1 & i=mN+1, \cdots, 2mN. \end{array} \right. \end{equation} Note that, for the above values of $R^1, R^2$, the two sides of the specimen show an asymmetric behaviour, namely the displacement is flatter where the material function is larger (that is, where $R_{ii}=600$). In Figures 9 and 10 we report the results when $R^1, R^2$ are the discretizations of the following function \begin{equation}\label{Rr} \left\{ \begin{array}{lll} r(x,y)=400\exp(y) & \mbox{ for } x\leq 0.5, \\ r(x,y)=400y & \mbox{ otherwise.} \end{array} \right. \end{equation} Note that in Figure 10 the boundary datum is chosen as $$ g_3(t)=\frac{1}{100}\cos(2(y-0.5))(x-0.5). $$ As expected from the choice of $R^1, R^2$, we observe an asymmetric behaviour of the fracture in the $y$ direction, namely the specimen breaks first where the material function is larger. \subsection{Comparison with GIST}\label{compg} In this section we present the results of experiments comparing the performance of \textbf{Algorithm 1} with two other algorithms for nonconvex and nonsmooth minimization. We first compare with the GIST (General Iterative Shrinkage and Thresholding) algorithm for $\ell^\tau$, $\tau<1$, minimization. We took advantage of the fact that for GIST\footnote{The reference paper is \cite{tuia}; the toolbox can be found at https://github.com/rflamary/nonconvex-optimization.} an open source toolbox is available, which facilitated an unbiased comparison.
Moreover, in \cite{LLSZ} several tests were made to compare GIST and IRLS (Iteratively Reweighted Least Squares), showing that the two algorithms have nearly the same performance, the only significant difference being speed, with GIST the faster of the two. Concerning $\ell^1$-minimization based algorithms, we compared our algorithm with FISTA (Fast Iterative Shrinkage-Thresholding Algorithm); see subsection \ref{mimrec}. We remark that the results of \cite{LLSZ} show no particular differences in the performance of the algorithm for different values of $\tau$, except that the speed becomes much worse for $\tau$ near $1$, say $\tau=0.9$. Motivated also by these observations, the comparisons described in the following were made for one fixed value of $\tau$. The comparison is carried out through the following three examples: an academic M-matrix problem, an optimal control problem, and a microscopy imaging reconstruction example. The monotone algorithm is stopped when the $\ell^\infty$-residual of the optimality condition \eqref{optcondeps2m} is of the order of $10^{-3}$ in the $M$-matrix and optimal control problems and of the order of $10^{-8}$ in the imaging example. GIST is terminated if the relative change of two consecutive objective function values is less than $10^{-5}$ or the number of iterations exceeds $1000$. We remark that no significant changes were observed by setting a tolerance lower than $10^{-5}$ or a larger maximal number of iterations for GIST. Since both GIST and FISTA solve problem \eqref{optprobphi} when the operator $\Lambda$ coincides with the identity, we also make this choice in the following subsections. Finally, we remark that the three examples were already analysed in \cite{GK} with different aims.
\subsubsection{M-matrix example} We consider \begin{equation} \label{optprobM} \min_{x \in {\mathbb R}^{n^2}}J(x)= \min_{x \in {\mathbb R}^{n^2}}\frac{1}{2}|A x-b|_2^2+\lambda |x|^\tau_\tau, \end{equation} where $A$ is the forward finite difference gradient $$ A=\left(\begin{array}{c} G_1\\G_2\end{array}\right), $$ with $G_1, G_2 \in {\mathbb R}^{n(n+1)\times n^2}$ given by $$ G_1=I \otimes D, \quad G_2=D \otimes I, $$ where $I$ is the $n\times n$ identity matrix, $\otimes$ denotes the tensor product, $D=(n+1)\tilde{D}$, and $\tilde{D} \in {\mathbb R}^{(n+1)\times n}$ is $$ \left(\begin{array}{ccccc} 1& 0& 0& \cdots& 0\\ -1& 1& 0& \cdots& 0\\ & \ddots& \ddots& & \\ 0& \cdots& 0&-1&1\\0&\cdots&0&0&-1 \end{array}\right). $$ Then $A^T A$ is an $M$-matrix coinciding with the $5$-point star discretization, on a uniform mesh on a square, of the Laplacian with Dirichlet boundary conditions. Moreover, \eqref{optprobM} can be equivalently expressed as \begin{equation} \label{optprob2} \min_{x \in {\mathbb R}^{n^2}}\frac{1}{2}|A x|_2^2-(x,f)+\lambda |x|^\tau_\tau, \end{equation} where $f=A^T b$. If $\lambda=0$ this is the discretized variational form of the elliptic equation \begin{equation} \label{elleq} -\Delta y=f \mbox{ in } \Omega, \quad y=0 \mbox{ on } \partial \Omega. \end{equation} For $\lambda>0$ the variational problem \eqref{optprob2} gives a sparsity enhancing solution of the elliptic equation \eqref{elleq}, that is, the displacement $y$ will be $0$ where the forcing $f$ is small. Our tests are conducted with $f$ chosen as the discretization of $f=10 x_1\sin(5x_2) \cos(7 x_1)$. The initialization is chosen as the solution of the corresponding non-sparse optimization problem.\\ We remark that in \cite{GKproc} and \cite{GK} the algorithm was also tested in the same situation for different values of $\tau$ and $\lambda$, showing, in particular and consistent with our expectations, that the sparsity of the solution increases with $\lambda$.
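The claim that $A^TA$ is the 5-point star discretization of the Dirichlet Laplacian can be checked numerically on a small grid; the sketch below builds $\tilde D$, $G_1$, $G_2$ exactly as above (the grid size $n=5$ is an illustrative choice):

```python
import numpy as np

n = 5
Dt = np.zeros((n + 1, n))                 # \tilde D, including boundary rows
Dt[0, 0] = 1.0
for i in range(1, n):
    Dt[i, i - 1], Dt[i, i] = -1.0, 1.0
Dt[n, n - 1] = -1.0
D = (n + 1) * Dt                          # scaled forward differences
G1 = np.kron(np.eye(n), D)
G2 = np.kron(D, np.eye(n))
A = np.vstack([G1, G2])

# 5-point Laplacian as the Kronecker sum of the 1D Dirichlet Laplacian
T = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
L5 = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))
assert np.allclose(A.T @ A, L5)
```

The identity holds because $\tilde D^T\tilde D$ is the tridiagonal $(-1,2,-1)$ matrix, so $A^TA = I\otimes D^TD + D^TD\otimes I$ is precisely the Kronecker sum giving the 5-point stencil.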
Here we focus on the comparison between the performance of \textbf{Algorithm 1} and that of GIST. In order to compare the two schemes, we focus on the value of the unregularized functional $J$ in \eqref{optprobM} reached by both algorithms, the time needed to attain it, and the number of iterations. Our tests were conducted for $\tau=0.5$ and $\lambda$ incrementally increasing from $10^{-3}$ to $0.3$; see the following tables. The parameter $\varepsilon$ was decreased from $10^{-1}$ to $10^{-6}$. These values are reported in the first and second tables below for GIST and \textbf{Algorithm 1}, respectively. We observe that \textbf{Algorithm 1} always achieves lower values of the functional $J$, but in a longer time. The number of iterations needed by \textbf{Algorithm 1} is smaller than that needed by GIST for small values of $\lambda$, more precisely for $\lambda <0.1$. This suggests, consistent with our expectation, that the monotone scheme is slower than GIST mainly because it solves a nonlinear equation at each iteration. We carried out a further test in order to measure the timing performance of \textbf{Algorithm 1}: the algorithm is stopped as soon as the value of $J$ achieved by GIST is reached. In the third table we report the time, the number of iterations, the value of $J$, and the value of $\varepsilon$ reached. We observe that the time is almost always smaller than that of GIST, except for values of $\lambda$ greater than or equal to $0.15$; moreover, for these values, the differences in time are very small. \begin{table}[h!]
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} {\bf $\lambda$ } & 0.01& 0.05 & 0.10&0.15&0.2 & 0.3 \\ \noalign{\smallskip} \hline \noalign{\smallskip} J & 246.324 & 264.232 & 285.26& 303.685 & 319.737 &338.998\\ time & 0.563 & 0.701 & 0.444 &0.468 & 0.461 & 0.61\\ iterations & 293& 384 & 249&247 & 216 & 209 \\ \noalign{\smallskip}\hline \end{tabular} \caption{M-matrix example. Value of J, time, iterations of GIST.} \end{center} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} {\bf $\lambda$ } & 0.01& 0.05 & 0.10&0.15&0.2 & 0.3 \\ \noalign{\smallskip} \hline \noalign{\smallskip} J & 246.186 & 263.92 & 284.079& 301.327 & 315.553 & 331.71 \\ time & 10.92 & 26.142 & 56.397& 33.021 & 124.624 & 31.423 \\ iterations & 149 & 361 & 779 & 456 & 1722 & 433\\ \noalign{\smallskip}\hline \end{tabular} \caption{M-matrix example. Value of J, time, iterations of \textbf{Algorithm 1}.} \end{center} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} {\bf $\lambda$}& 0.001 & 0.01& 0.05 & 0.1&0.15&0.2 \\ \noalign{\smallskip} \hline \noalign{\smallskip} J$_{\mbox{\footnotesize{GIST}}}$ &242.158 & 246.324 & 264.232 & 285.26 & 303.685 & 319.737 \\ iter$_{\mbox{\footnotesize{mon}}}$ & 1 & 1 & 5 &5 & 6 & 7 \\ time$_{\mbox{\footnotesize{mon}}}$ & 0.085 & 0.082& 0.39& 0.387 &0.478& 0.673 \\ time$_{\mbox{\footnotesize{GIST}}}$ & 0.445 & 0.563 & 0.701 & 0.444 & 0.468 &0.461 \\ \noalign{\smallskip} \hline \end{tabular} \caption{M-matrix example. 
Value of the functional, iterations, and time at which \textbf{Algorithm 1} overcomes the value attained by GIST.} \end{center} \end{table} \subsubsection{Optimal control problem} We consider the linear control system $$ \frac{d}{dt} y(t)=\mathcal{A} y(t)+B u(t), \quad y(0)=0, $$ that is, \begin{equation}\label{LCSfinalstate} y(T)=\int_0^T e^{\mathcal{A}(T-s)} B u(s) ds, \end{equation} where the linear closed operator $\mathcal{A}$ generates a $C_0$-semigroup $e^{\mathcal{A}t}$, $t\geq 0$, on the Hilbert space $X$. More specifically, we consider the one-dimensional controlled heat equation for $y=y(t,x)$: \begin{equation}\label{actionscontrol} y_t=y_{xx}+b_1(x)u_1(t)+b_2(x)u_2(t), \quad x \in (0,1), \end{equation} with homogeneous boundary conditions $y(t,0)=y(t,1)=0$ and thus $X=L^2(0,1)$. The differential operator $\mathcal{A}y=y_{xx}$ is discretized in space by the second order finite difference approximation with $n=49$ interior spatial nodes ($\Delta x=\frac{1}{50}$). We use two time dependent controls $\overrightarrow u=(u_1,u_2)$ with corresponding spatial control distributions $b_i$ chosen as step functions: $$ b_1(x)=\chi_{(.2,.3)}, \quad b_2(x)=\chi_{(.6,.7)}. $$ The control problem consists in finding the control function $\overrightarrow u$ that steers the state $y(0)=0$ to a neighbourhood of the desired state $y_d$ at the terminal time $T=1$. We discretize the problem in time by the mid-point rule, i.e. \begin{equation}\label{midpoint} A \overrightarrow u=\sum_{k=1}^m e^{\mathcal{A}\left(T-t_{k}-\frac{\Delta t}{2}\right)} (B \overrightarrow u)_k \Delta t, \end{equation} where $\overrightarrow u=(u_1^1,\cdots, u_1^m,u_2^1,\cdots, u_2^m)$ is a discretized control vector whose coordinates represent the values at the mid-points of the intervals $(t_k,t_{k+1})$. Note that in \eqref{midpoint} we denote by $B$ a suitable rearrangement of the matrix $B$ in \eqref{LCSfinalstate} with some abuse of notation. A uniform step-size $\Delta t=\frac{1}{50}$ ($m=50$) is utilized.
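The control-to-state map \eqref{midpoint} can be sketched as follows. This is an illustrative NumPy reconstruction under our assumptions: the matrix exponential of the symmetric finite-difference Laplacian is evaluated via its spectral decomposition, and the column ordering follows the stacking $(u_1^1,\dots,u_1^m,u_2^1,\dots,u_2^m)$:

```python
import numpy as np

n, m = 49, 50                                        # spatial nodes, time steps
dx, dt, T = 1 / 50, 1 / 50, 1.0
x = np.linspace(dx, 1 - dx, n)                       # interior grid points
# second-order finite-difference Laplacian with Dirichlet conditions
Lap = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / dx ** 2
w, V = np.linalg.eigh(Lap)                           # Lap = V diag(w) V^T

def heat_semigroup(t):
    """e^{t Lap} via the spectral theorem."""
    return (V * np.exp(t * w)) @ V.T

b1 = ((x > 0.2) & (x < 0.3)).astype(float)           # step control distributions
b2 = ((x > 0.6) & (x < 0.7)).astype(float)

# column k of A: state at T produced by a unit control on the k-th interval
A = np.zeros((n, 2 * m))
for k in range(m):
    E = heat_semigroup(T - (k + 0.5) * dt) * dt      # mid-point rule weight
    A[:, k] = E @ b1
    A[:, m + k] = E @ b2
```

Controls acting late in the time horizon decay less under the heat semigroup, so the corresponding columns of $A$ carry larger norms.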
The solution of the control problem is based on the sparsity formulation \eqref{optprobphi}, where $\Lambda=I$, $\phi_{\lambda, \tau}(x)=\lambda |x|^{\tau}$, and $b$ in \eqref{optprobphi} is the discretized target function chosen as the Gaussian distribution $y_d(x)=0.4\,\mathrm{exp}(-70(x-.7)^2)$ centered at $x=.7$. That is, we apply our algorithm to the discretized optimal control problem in time and space, where $x$ from \eqref{optprobphi} is the discretized control vector $u \in {\mathbb R}^{2m}$, which is mapped by $A$ to the discretized output $y$ at time $1$ by means of \eqref{midpoint}. Moreover, $b$ from \eqref{optprobphi} is the discretization of $y_d$ with respect to the spatial grid $\Delta x$. The parameter $\varepsilon$ was initialized with $10^{-3}$ and decreased down to $10^{-8}$. As in the previous subsection, we compare the values of the functional, the time and the number of iterations. The experiments are carried out for $\tau=0.5$ and $\lambda$ in the interval $[10^{-3}, 0.2]$. We report only the values for the second control $u_2$, since the first control $u_1$ is always zero (as expected). As can be seen from the following tables, the same kind of remarks as in the previous subsection apply: GIST is faster but less accurate than \textbf{Algorithm 1}, and \textbf{Algorithm 1} reaches the value attained by GIST more rapidly. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} $\lambda$& 0.0001 & 0.001& 0.01 & 0.2\\ \noalign{\smallskip} \hline \noalign{\smallskip} J & 0.044 & 0.073 & 0.599 & 0.599\\ time & 0.296 & 0.047 & 0.04 & 0.037 \\ iterations & 222& 157& 3& 3 \\ \noalign{\smallskip} \hline \end{tabular} \caption{Optimal control problem. Value of J, time, iterations of GIST. } \end{center} \end{table} \begin{table}[h!]
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} $\lambda$ & 0.0001& 0.001&0.01 & 0.2 \\ \noalign{\smallskip} \hline \noalign{\smallskip} J & 0.042 & 0.068& 0.185 & 0.599\\ time& 11.758 & 15.140 & 14.866 & 12.501 \\ iterations & 35 & 28 & 32 & 27\\ \noalign{\smallskip} \hline \end{tabular} \caption{Optimal control problem. Value of J, time, iterations of \textbf{Algorithm 1}.} \end{center} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\noalign{\smallskip} $\lambda$& 0.0001 & 0.001& 0.01 & 0.2\\ \noalign{\smallskip} \hline \noalign{\smallskip} J$_{\mbox{\footnotesize{mon}}}$ & 0.043 & 0.071 & 0.185 & 0.599 \\ iter$_{\mbox{\footnotesize{mon}}}$ & 1 & 1 & 5 &5 \\ time$_{\mbox{\footnotesize{mon}}}$ & 2.2 & 0.1& 0.39& 0.025 \\ time$_{\mbox{\footnotesize{GIST}}}$ & 0.296 & 0.047 & 0.04 & 0.037 \\ \noalign{\smallskip} \hline \end{tabular} \caption{Optimal control problem. Value of J, iterations, and time at which \textbf{Algorithm 1} overcomes the value attained by GIST.} \end{center} \end{table} \subsubsection{Compressed sensing approach for microscopy image reconstruction}\label{mimrec} We compare \textbf{Algorithm 1} and GIST on a microscopy imaging problem; in particular we focus on the STORM (stochastic optical reconstruction microscopy) method, which is based on the stochastic switching and high-precision detection of single molecules to achieve an image resolution beyond the diffraction limit. The literature on STORM has been growing rapidly; see e.g. \cite{RBZ,BPS,HGM,HBZ}. We refer in particular to \cite{GK} for a detailed description of the method and for further references.
Our approach is based on the following constrained minimization problem: \begin{equation}\label{minprobimage} \min_{x \in {\mathbb R}^n} |x|^\tau_{\tau} \quad \mbox{ such that } \, \,|A x-b|_2 \leq \varepsilon, \end{equation} where $\tau \in (0,1]$, $x$ is the up-sampled, reconstructed image, $b$ is the experimentally observed image, and $A$ is the impulse response (of size $m\times n$, where $m$ and $n$ are the numbers of pixels in $b$ and $x$, respectively). $A$ is usually called the point spread function (PSF) and describes the response of an imaging system to a point source or point object. Problem \eqref{minprobimage} can be reformulated as: \begin{equation}\label{minprobimagebeta} \min_{x \in {\mathbb R}^n} \frac{1}{2}|Ax-b|^2_2+\lambda |x|^\tau_\tau. \end{equation} First we tested the procedure for images of the same resolution; in particular, the conventional and the true images are both $128\times 128$ pixel images. Then the algorithm was tested in the case of a $16\times 16$ pixel conventional image and a $128 \times 128$ true image. The values of the impulse response $A$ and the measured data $b$ were chosen according to the literature; in particular, $A$ was taken as the Gaussian PSF matrix with variance $\sigma=8$ and size $3\sigma=24$, and $b$ was simulated by convolving the impulse response $A$ with a random $0$-$1$ mask over the image and adding white random noise so that the signal-to-noise ratio is $.01$. We carried out several tests with the same data for different values of $\tau$ and $\lambda$. We report only our results for $\tau=.1$, with $\lambda=10^{-6}$ and $\lambda=10^{-9}$ for the same- and different-resolution cases, respectively, since for these values the best reconstructions were achieved. We focus on two different types of images: a sparse $0$-$1$ cross-like image and the standard phantom image.
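The simulated data described above can be sketched as follows. This is a small-scale illustration, not the paper's actual setup: the grid size, number of emitters, Gaussian width and the direct-summation convolution are our choices, and the noise is simply additive white noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(size, sigma):
    """Normalized separable 2D Gaussian point-spread function."""
    r = np.arange(size) - (size - 1) / 2
    g = np.exp(-r ** 2 / (2 * sigma ** 2))
    psf = np.outer(g, g)
    return psf / psf.sum()

def blur(x, psf):
    """'Same'-size 2D convolution by direct summation over the emitters."""
    n = x.shape[0]
    s = psf.shape[0]
    h = s // 2
    b = np.zeros_like(x)
    for i, j in zip(*np.nonzero(x)):
        for di in range(s):
            for dj in range(s):
                ii, jj = i + di - h, j + dj - h
                if 0 <= ii < n and 0 <= jj < n:
                    b[ii, jj] += x[i, j] * psf[di, dj]
    return b

n, k = 32, 5
x_true = np.zeros((n, n))                            # sparse 0-1 emitter mask
x_true.flat[rng.choice(n * n, size=k, replace=False)] = 1.0
b = blur(x_true, gaussian_psf(8, 2.0)) + 0.01 * rng.standard_normal((n, n))
```

Since each emitter contributes a unit-mass Gaussian spot, the noiseless blurred image of an interior emitter sums to one, which gives a quick sanity check on the PSF normalization.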
In order to compare the performance of \textbf{Algorithm 1} and the GIST algorithm, we focus on the number of surplus emitters (Error+) and missed emitters (Error-) recovered in the case of the cross image with different resolutions. The errors are computed as an average over six recoveries for different values of the noise. The plots of the errors against the noise are reported in Figures 11 and 12 for \textbf{Algorithm 1} and GIST, respectively. We remark that these quantities are typically used as a measure of the efficacy of the reconstruction method; see for example \cite{DP} (where, under certain conditions, a linear decay with respect to the noise is proven) and \cite{C}. The results show that with GIST the Error- is always $197$, whereas with \textbf{Algorithm 1} it is always under $53$, and even smaller for small values of the noise. On the other hand, the Error+ with GIST is always $0$, while with \textbf{Algorithm 1} it is zero for small values of the noise and then monotonically increasing until it reaches $175$ when the noise is equal to $0.1$. Consistent with expectations, the plots for \textbf{Algorithm 1} show a linear decay with respect to the noise, in contrast to the behaviour shown by GIST. Moreover, the results found by \textbf{Algorithm 1} lead to more accuracy in the recovery, in the sense that the number of missed emitters is smaller, whereas GIST seems to lead to sparser solutions (since its Error+ is $0$). Finally, we remark that in the case of the cross image GIST is faster than our algorithm, consistent with the results presented in the previous subsections and as expected, since our algorithm solves a nonlinear equation for each minimization problem. On the other hand, in the case of the standard phantom image GIST turns out to be far slower than \textbf{Algorithm 1}. In Figure 13 we report the results obtained in the same situation by FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) for $\ell^1$ minimization.
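The two error counts can be computed from the pixel supports of the recovered and true images as in the sketch below; the detection threshold `thr` is our assumption, since the text does not state how a recovered pixel is classified as an emitter:

```python
import numpy as np

def emitter_errors(x_rec, x_true, thr=0.5):
    """Return (Error+, Error-): surplus and missed emitters."""
    rec = np.asarray(x_rec) > thr
    true = np.asarray(x_true) > thr
    error_plus = int(np.sum(rec & ~true))    # recovered but not present
    error_minus = int(np.sum(~rec & true))   # present but not recovered
    return error_plus, error_minus
```

Averaging these counts over several noise realizations, as done above, then gives the curves plotted against the noise level.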
We remark that with FISTA the Error+ is always above $400$, whereas with \textbf{Algorithm $1$} it is zero for small values of the noise. This shows that \textbf{Algorithm 1} leads to more sparsity than FISTA, consistent with our expectations since FISTA is based on $\ell^1$ minimization. \section{Conclusions} We have developed a monotonically convergent algorithm for a class of nonconvex nonsmooth optimization problems arising in the modelling of fracture mechanics and in imaging reconstruction, including the $\ell^\tau$, $\tau \in (0,1]$, the smoothly clipped absolute deviation (SCAD), and the minimax concave (MCP) penalties. Theoretically, we established the existence of a minimizer of the original problem under assumptions implying coercivity of the functional. We then derived necessary optimality conditions for a regularized version of the original problem. The optimality conditions for the regularized problem were solved through a monotonically convergent scheme based on an iterative procedure. We proved the convergence of the iterative procedure under the same assumptions that guarantee existence. A remarkable result is the strict monotonicity of the functional along the sequence of iterates generated by the scheme. Moreover, we proved the convergence of the regularized problem to the original one as the regularization parameter goes to zero. The efficiency and accuracy of the procedure were verified by numerical tests simulating the evolution of cohesive fractures and microscopy imaging. An issue of high relevance to us was the comparison of the scheme with two alternative algorithms: the GIST (General Iterative Shrinkage and Thresholding) algorithm for $\ell^\tau$ minimization with $\tau \in (0,1)$, and the FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) for $\ell^1$ minimization.
We first compared with GIST, focusing on the infimal value reached by the iterative procedure and on the computing time. Our results showed that the monotone algorithm reaches a smaller value of the objective functional than GIST, therefore achieving better accuracy. Finally, we compared our scheme with FISTA in sparse recovery related to microscopy imaging. The results showed that the monotone scheme leads to more sparsity than FISTA, as expected since FISTA is based on $\ell^1$ minimization. \begin{figure}[!ht] \centering \subfloat[$\footnotesize{t=.2}$]{\includegraphics[height=4cm, width=1.7cm]{fractp01okk1.png}} \subfloat[$\footnotesize{t=.3}$]{\includegraphics[height=4cm, width=1.7cm]{fractp01okk2.png}} \subfloat[$\footnotesize{t=1.5}$]{\includegraphics[height=4cm, width=1.7cm]{fractp01okk3.png}} \hspace{0.5cm} \subfloat[$\footnotesize{t=.9}$] { \includegraphics[height=4cm, width=1.7cm]{fractp1okk1.png} } \subfloat[$\footnotesize{t=1}$] { \includegraphics[height=4cm, width=1.7cm]{fractp1okk2.png} } \subfloat[$\footnotesize{t=3}$] { \includegraphics[height=4cm, width=1.7cm]{fractp1okk3.png} } \caption{Three time-step evolution of the displacement for $\tau=.01$, $t = .2, .3, 1.5$ (left), $\tau=.1$, $t=.9, 1, 3$ (right).
Results obtained by \textbf{Algorithm $1$}.} \end{figure} \vspace{0.3cm} \begin{figure}[!ht] \centering \subfloat[$\footnotesize{t=1}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD21.png}} \subfloat[$\footnotesize{t=2.1}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD22.png}} \subfloat[$\footnotesize{t=2.2}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD23.png}} \subfloat[$\footnotesize{t=2.5}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD24.png}} \hspace{0.4cm} \subfloat[$\footnotesize{t=.1}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD11.png}} \subfloat[$\footnotesize{t=2.1}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD12.png}} \subfloat[$\footnotesize{t=2.2}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD13.png}} \subfloat[$\footnotesize{t=2.5}$]{\includegraphics[height=4cm, width=1.7cm]{SCAD14.png}} \caption{Four time-step evolution of the displacement for the SCAD model, $\tau=20$, $t=1, 2.1, 2.2, 2.5$ (left), $\tau=10$, $t = .1, 2.1, 2.2, 2.5$ (right). Results obtained by \textbf{Algorithm $1$}.} \end{figure} \vspace{1cm} \begin{figure}[!ht] \subfloat[$t=0.1, \tau=0.001$] { \includegraphics[height=4.5cm, width=4.5cm]{p0011.png} } \subfloat[$t=0.8, \tau=0.001$] { \includegraphics[height=4.5cm, width=4.5cm]{p0012.png} } \subfloat[$t=0.8, \tau=0.001$] { \includegraphics[height=4.5cm, width=4.5cm]{p0013.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, with $\tau=0.001$, $R^1=R^2=I$, and boundary datum $g=g_1$} \end{figure} \vspace{2cm} \begin{figure}[!ht] \subfloat[$t=0.1, \tau=0.01$] { \includegraphics[height=4.5cm, width=4.5cm]{p011.png} } \subfloat[$t=0.35, \tau=0.01$] { \includegraphics[height=4.5cm, width=4.5cm]{p012.png} } \subfloat[$t=0.8, \tau=0.01$] { \includegraphics[height=4.5cm, width=4.5cm]{p013.png} } \newline \hspace{-5cm} \subfloat[$t=0.1, \tau=0.0001$] { \includegraphics[height=4.5cm, width=4.5cm]{p00011.png} } \subfloat[$t=0.8, \tau=0.0001$] { \includegraphics[height=4.5cm, width=4.5cm]{p00012.png} } 
\subfloat[$t=0.8, \tau=0.0001$] { \includegraphics[height=4.5cm, width=4.5cm]{p00013.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, comparison between $\tau=0.01$ and $\tau=0.0001$, $R^1=R^2=I$, boundary datum $g=g_1$.} \end{figure} \vspace{1cm} \begin{figure}[!ht] \subfloat[$t=1.5, \tau=20$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t201.png} } \subfloat[$t=1.7, \tau=20$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t202.png} } \subfloat[$t=2.5, \tau=20$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t203.png} } \subfloat[$t=3, \tau=20$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t204.png} } \newline \hspace{-5cm} \subfloat[$t=1, \tau=15$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t151.png} } \subfloat[$t=2.8, \tau=15$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t152.png} } \subfloat[$t=3, \tau=15$] { \includegraphics[height=3.5cm, width=3.5cm]{SCADg1t153.png} } \caption{Displacement for the $\mbox{SCAD}$ model, comparison between $\tau=20$ and $\tau=15$, $R^1=R^2=I$, boundary datum $g=g_1$.} \end{figure} \begin{figure}[!ht] \subfloat[$t=1.5, \tau=20$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t201.png} } \subfloat[$t=1.8, \tau=20$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t202.png} } \subfloat[$t=2.3, \tau=20$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t203.png} } \newline \hspace{-5cm} \subfloat[$t=1.5, \tau=10$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t101.png} } \subfloat[$t=1.56, \tau=10$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t102.png} } \subfloat[$t=2, \tau=10$] { \includegraphics[height=3.3cm, width=3.5cm]{MCPg1t103.png} } \caption{Displacement for the MCP model, comparison between $\tau=20$ and $\tau=10$, $R^1=R^2=I$, boundary datum $g=g_1$.} \end{figure} \begin{figure}[!ht] \subfloat[$t=0.4, \tau=0.001$] { \includegraphics[height=3.2cm, width=3.5cm]{p001g21.png} } \subfloat[$t=2, \tau=0.001$] { \includegraphics[height=3.2cm, 
width=3.5cm]{p001g22.png} } \subfloat[$t=3, \tau=0.001$] { \includegraphics[height=3.2cm, width=3.5cm]{p001g23.png} } \newline \hspace{-5cm} \subfloat[$t=0.4, \tau=0.0001$] { \includegraphics[height=3.2cm, width=3.5cm]{p0001g21.png} } \subfloat[$t=2, \tau=0.0001$] { \includegraphics[height=3.2cm, width=3.5cm]{p0001g22.png} } \subfloat[$t=3, \tau=0.0001$] { \includegraphics[height=3.2cm, width=3.5cm]{p0001g23.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, comparison between $\tau=0.001$ and $\tau=0.0001$, $R^1=R^2=I$, boundary datum $g=g_2$.} \end{figure} \vspace{-0.4cm} \begin{figure}[!ht] \subfloat[$t=0.9, \tau=0.01$] { \includegraphics[height=3.2cm, width=3.5cm]{inh11.png} } \subfloat[$t=1.1, \tau=0.01$] { \includegraphics[height=3.2cm, width=3.5cm]{inh12.png} } \subfloat[$t=1.5, \tau=0.01$] { \includegraphics[height=3.2cm, width=3.5cm]{inh13.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, $\tau=0.01$, $R^1, R^2$ given by \eqref{R1}-\eqref{R2}, boundary datum $g=g_1$.} \end{figure} \vspace{-0.4cm} \begin{figure}[!ht] \subfloat[$t=1.5, \tau=0.1$] { \includegraphics[height=3.2cm, width=3.5cm]{inh21.png} } \subfloat[$t=2, \tau=0.1$] { \includegraphics[height=3.2cm, width=3.5cm]{inh22.png} } \subfloat[$t=3, \tau=0.1$] { \includegraphics[height=3.2cm, width=3.5cm]{inh23.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, $\tau=0.1$, $R^1,R^2$ given by \eqref{Rr}, boundary datum $g=g_1$.} \end{figure} \begin{figure}[!ht] \subfloat[$t=0.2, \tau=0.1$] { \includegraphics[height=3.5cm, width=3.5cm]{mistery.png} } \subfloat[$t=1.5, \tau=0.1$] { \includegraphics[height=3.5cm, width=3.5cm]{inNc2.png} } \subfloat[$t=3, \tau=0.1$] { \includegraphics[height=3.5cm, width=3.5cm]{inNc3.png} } \caption{Displacement, $\theta(\cdot)=|\cdot|_\tau^\tau$, $\tau=0.1$, $R^1,R^2$ given by \eqref{Rr}, boundary datum $g=g_3$.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[height=3cm, width=16cm]{ERR2.png} \caption{Error+ (surplus of
emitters), Error- (missed emitters) against noise. Results obtained by \textbf{Algorithm 1}, $\tau=.5, \lambda=10^{-6}$. } \end{figure} \begin{figure} \centering \includegraphics[height=3cm, width=16cm]{ERR1.png} \caption{Error+ (surplus of emitters), Error- (missed emitters) against noise. Results obtained by GIST, $\tau=.5, \lambda=10^{-6}$.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[height=3cm, width=16cm]{ERR3.png} \caption{Error+ (surplus of emitters), Error- (missed emitters) against noise. Results obtained by the FISTA, $ \lambda=10^{-6}$.} \end{figure}
\section{introduction} \label{sec:intro} The first detection of a binary-black hole merger, GW150914, opened the door to gravitational-wave astronomy \citep{ligovirgo2016}. The masses of the black holes, $\sim 29$ and $\sim 36M_\odot$, are larger than $\sim 10$--$15M_\odot$ previously expected from galactic observations \citep{ozel_pnm2010,kreidberg_bfk2012}, and this finding prompts a vigorous debate regarding their origin. At the same time, \citet{ligovirgo2016-3} suggest a very high merger rate of binary black holes in our Universe based on a part of Advanced LIGO's O1 observation run. These facts immediately mean that space-based gravitational-wave detectors, such as evolved Laser Interferometric Space Antenna \citep[eLISA: see][for LISA Pathfinder]{lisapf2016}, would have a fair chance to detect extragalactic stellar-mass binary black holes as well as galactic compact binaries \citep{ligovirgo2016-2,sesana2016}. Observing gravitational waves at low frequency will be important to understand the origin of massive black holes like GW150914. Facing massive black holes with $\gtrsim 30M_\odot$ unexpected for end products of stellar evolution with the solar metallicity \citep{ligovirgo2016-2}, possible formation channels of the binary are actively discussed. One plausible scenario is the evolution of low-metallicity stars with weak stellar winds in an isolated field binary \citep[see][for reviews]{postnov_yungelson2014}. Another scenario is dynamical formation in dense stellar environments like galactic nuclei or globular clusters \citep[see, e.g.,][]{benacquista_downing2013,rodriguez_mpchr2015,rodriguez_hckr2016}. It is also pointed out that a binary of primordial black holes satisfying current observational constraints is also consistent with GW150914 \citep{bird_cmakkrr2016,sasaki_sty2016}. 
Because these scenarios predict different distributions of binary parameters such as the eccentricity, which is determined to higher accuracy at lower frequency \citep[see section 1 of][for the discussion]{nishizawa_bks2016}, multiple detections could statistically clarify the formation scenario \citep[see also][]{breivik_rlkr2016,nishizawa_bks2016-2}. Moreover, precise localization of the binary by the annual modulation of the detector, together with the distance estimation, will be beneficial for determining the host galaxy, whose identification is also invaluable for inferring the origin. In this paper, we study prospects for detecting extragalactic binary black holes by eLISA, enhancing the previous investigation for Galactic binary black holes by one of the authors \citep{seto2016}. The possibility of detecting extragalactic binary black holes was mentioned briefly by the LIGO Scientific Collaboration immediately after the detection of GW150914 \citep{ligovirgo2016-2}. Various authors conducted follow-up studies of this possibility by Monte Carlo simulations \citep{nishizawa_bks2016,sesana2016,vitale2016}, primarily focusing on the aspects of multiband gravitational-wave astronomy, i.e., simultaneous detections of the same binary by eLISA and ground-based interferometric detectors such as Advanced LIGO. Here, we analytically evaluate the expected number of detections and put more emphasis than previous studies on extragalactic binary black holes that do not merge during the operation of eLISA, because such binaries inevitably dominate the detections. We also show that a monochromatic approximation works well to derive semiquantitative features of observational prospects for non-merging binaries. Our assumptions and parameter choices are summarized as follows. We treat all the binary black holes as circular, and denote the gravitational-wave frequency by $f$, which is twice the orbital frequency. 
This is justified to the accuracy of our discussion, because the eccentricity is not expected to be very high. We apply the quadrupole formula for point masses neglecting the black hole spin as well as higher order post-Newtonian effects, and the cosmological redshift is also neglected. We take the fiducial chirp mass of binary black holes to be $\mathcal{M} = 28 M_\odot$ according to the estimate from GW150914 \citep{ligovirgo2016}, and later show that our estimate applies approximately to a distribution of chirp masses once it is averaged with the weight $\mathcal{M}^{10/3}$. We take the fiducial comoving merger rate to be $R = \SI{100}{Gpc^{-3}.yr^{-1}}$ motivated by the estimate from a part of O1 \citep{ligovirgo2016-3}. \section{search for extragalactic binary black holes} In this section, we derive the frequency distribution of the expected number of detections for extragalactic binary black holes by eLISA with various planned sensitivities. We take the fiducial value of the observation period $T$ to be \SI{3}{yr} in the same manner as \citet{seto2016} and the fiducial value of the detection threshold for the signal-to-noise ratio $\rho_\mathrm{thr}$ to be 8. \subsection{formulation} The signal-to-noise ratio $\rho$ for a binary with the initial frequency $f_i$ at the start of the observation is given by \begin{equation} \rho^2 = 4 \int_{f_i}^{f_f} \frac{| \tilde{h} (f) |^2}{(3/20)S(f)} df , \label{eq:snr} \end{equation} where $S(f)$ is the one-sided, sky-averaged noise spectral density of eLISA such as those provided in \citet{elisa2012,klein_etal2016}. The factor of $3/20$ is introduced to derive an effective non-sky-averaged noise spectral density \citep{berti_bw2005,nishizawa_bks2016}. The frequency $f_f$ at the end of the observation is given by \begin{equation} f_f ( f_i , T ) = \left( f_i^{-8/3} - \frac{256 \pi^{8/3} G^{5/3} \mathcal{M}^{5/3} T}{5 c^5} \right)^{-3/8} , \end{equation} as long as the binary does not merge within the observation period. 
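As a numerical sanity check on this frequency evolution (not part of the original derivation; physical constants are rounded), the following Python sketch evaluates $f_f(f_i, T)$ for the fiducial parameters, together with the initial frequency at which the term in parentheses vanishes, i.e. the binary merges exactly at the end of the observation:

```python
import math

# Rounded physical constants (SI units)
G = 6.674e-11       # gravitational constant
c = 2.998e8         # speed of light
M_SUN = 1.989e30    # solar mass [kg]
YR = 3.156e7        # year [s]

CHIRP = 28 * M_SUN  # fiducial chirp mass
T = 3 * YR          # fiducial observation period

# Coefficient of T in the f_i^{-8/3} evolution law
K = 256 * math.pi**(8 / 3) * G**(5 / 3) * CHIRP**(5 / 3) / (5 * c**5)

def f_end(f_i, T):
    """Frequency at the end of an observation of length T, starting at f_i.
    Returns math.inf if the binary merges within T."""
    arg = f_i**(-8 / 3) - K * T
    return arg**(-3 / 8) if arg > 0 else math.inf

# Initial frequency for which the binary merges exactly at time T
f_merge = (K * T)**(-3 / 8)

f_f_low = f_end(5e-3, T)   # a 5 mHz binary barely evolves over 3 yr
```

For $\mathcal{M} = 28 M_\odot$ and $T = \SI{3}{yr}$ this yields a merger frequency near \SI{19}{\milli\hertz}, while a binary at \SI{5}{\milli\hertz} drifts only slightly over the whole observation.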
The frequency above which the merger occurs within the observation period $T$ is given by \begin{align} f_\mathrm{merge} (T) & = \frac{5^{3/8} c^{15/8}}{8 \pi G^{5/8} \mathcal{M}^{5/8} T^{3/8}} \\ & = \SI{19.2}{\milli\hertz} \left( \frac{\mathcal{M}}{28M_\odot} \right)^{-5/8} \left( \frac{T}{\SI{3}{yr}} \right)^{-3/8} , \label{eq:fmerge} \end{align} and such binaries will serve as interesting targets of multiband gravitational-wave astronomy \citep{sesana2016,vitale2016,nishizawa_bks2016}. If the binary merges within the observation period, namely $f_i \ge f_\mathrm{merge}(T)$, or $f_f$ becomes higher than \SI{1}{\hertz}, above which the detector noise would not be well characterized, we set $f_f$ in equation \eqref{eq:snr} to be \SI{1}{\hertz}. The gravitational-wave spectral amplitude for a binary at a distance $D$ is given by \begin{equation} \left| \tilde{h}(f) \right|^2 = \frac{5 G^{5/3} \mathcal{M}^{5/3}}{24 \pi^{4/3} c^3 D^2 f^{7/3}} \times \frac{3}{4} \left[ F_+^2 \left( \frac{1 + \mu^2}{2} \right)^2 + F_\times^2 \mu^2 \right] , \end{equation} where $F_+$ and $F_\times$ are the antenna pattern functions, which depend on the sky location and polarization angle, and $\mu = \cos \imath$ is the cosine of the inclination angle $\imath$. We explicitly separate the factor $(\sqrt{3}/2)^2$ accounting for the opening angle \ang{60} of eLISA from the antenna pattern functions. The antenna pattern functions also change with time due to the annual rotation of the detector plane. Because we focus on the case that $T \ge \SI{1}{yr}$, we take the time average of the antenna pattern functions in advance to set $\langle F_+^2 \rangle_t = \langle F_\times^2 \rangle_t = 1/5$ for individual binaries. This is a reasonable approximation at least for a six-link configuration \citep{seto2004}.\footnote{For a binary with a short signal duration, the average over the sky location is less than 0.221. 
Given the convex nature of the function $x^{3/2}$ that relates the signal-to-noise ratio and the effective volume, the actual value should be in the range $[0.2,0.221]$, resulting in an error of less than 17\% for our effective volume below.} Meanwhile, even for a four-link configuration, we can extend the procedure developed in \citet{seto2014} to show that the averaging with respect to the polarization angle has a negligible effect, at the $\sim 1\%$ level, on the detectable volume $V ( f_i , T )$ estimated below. We assume a four-link configuration of eLISA in this study, and the result for a six-link configuration may be estimated simply by multiplying $\tilde{h}(f)$ by a factor of $\sqrt{2}$ and scaling all the relevant quantities accordingly. Then, the signal-to-noise ratio relevant for our study is expressed as \begin{align} \rho^2 & = \frac{A^2}{D^2} \left[ \left( \frac{1 + \mu^2}{2} \right)^2 + \mu^2 \right] I_7 ( f_i , T ), \label{eq:snrav} \\ A^2 & \equiv \frac{G^{5/3} \mathcal{M}^{5/3}}{8 \pi^{4/3} c^3} , \label{eq:snrfac} \\ I_7 ( f_i , T ) & \equiv \int_{f_i}^{f_f} \frac{f^{-7/3}}{(3/20)S(f)} df . \end{align} Note that $I_7$ defined above depends on the initial frequency $f_i$ and the observation period $T$. The maximum detectable distance within which the signal-to-noise ratio is larger than a given threshold $\rho_\mathrm{thr}$ depends on the inclination angle, initial frequency, and observation period as \begin{equation} D_\mathrm{thr} = \frac{A}{\rho_\mathrm{thr}} \left[ \left( \frac{1 + \mu^2}{2} \right)^2 + \mu^2 \right]^{1/2} I_7 ( f_i , T )^{1/2} . \end{equation} The detectable volume averaged over the inclination angle becomes a function of the initial frequency and observation period as \begin{equation} V ( f_i , T ) = \frac{4\pi}{3} \times \frac{1}{2} \int_{-1}^1 D_\mathrm{thr}^3 d \mu , \end{equation} and the effective range (or sensemon range) of eLISA is also defined by $D_\mathrm{eff} \equiv ( 3V / 4\pi )^{1/3}$. 
Using \citep[see also][]{schutz2011} \begin{equation} \frac{1}{2} \int_{-1}^1 \left[ \left( \frac{1 + \mu^2}{2} \right)^2 + \mu^2 \right]^{3/2} d \mu \approx 0.822 , \label{eq:ave} \end{equation} the detectable volume averaged over the inclination angle is calculated as \begin{equation} V ( f_i , T ) = \frac{4 \pi}{3} \times 0.822 \frac{A^3}{\rho_\mathrm{thr}^3} I_7 ( f_i , T )^{3/2} , \label{eq:volume} \end{equation} and the effective range is given by \begin{equation} D_\mathrm{eff} ( f_i , T ) = 0.937 \frac{A}{\rho_\mathrm{thr}} I_7 ( f_i , T )^{1/2} . \label{eq:range} \end{equation} Hereafter, we denote the initial frequency simply by $f$ anticipating no confusion would arise. The frequency distribution of the expected number of detections for binary black holes can be calculated as \begin{equation} \frac{dN}{d \ln f} = V (f,T) \frac{dn}{d \ln f} , \label{eq:exact} \end{equation} where $dn/d \ln f$ is the number-density distribution of the binaries. For a collection of identical binary black holes, the distribution of the number density $n$ in each logarithmic frequency interval should be proportional to the time that binaries spend in the interval. Here, we specifically consider the collection of binary black holes similar to GW150914. Using the comoving merger rate $R = dn/dt$, the distribution is written as \begin{align} \frac{dn}{d \ln f} & = \frac{f}{\dot{f}} R = \frac{5 c^5 R}{96 \pi^{8/3} G^{5/3} \mathcal{M}^{5/3} f^{8/3}} \label{eq:distrib} \\ & = \SI{4.57e-6}{Mpc^{-3}} \notag \\ & \times \left( \frac{f}{\SI{10}{\milli\hertz}} \right)^{-8/3} \left( \frac{\mathcal{M}}{28M_\odot} \right)^{-5/3} \left( \frac{R}{\SI{100}{Gpc^{-3}.yr^{-1}}} \right) . \end{align} The distribution of the expected number of detections, $dN/d \ln f$, is obtained by multiplying equations \eqref{eq:volume} and \eqref{eq:distrib}, where it is proportional to $R / \rho_\mathrm{thr}^3$ as expected. 
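As an illustrative cross-check (not part of the original derivation; physical constants are rounded), both the inclination average in equation \eqref{eq:ave} and the normalization of equation \eqref{eq:distrib} can be reproduced in a few lines of Python:

```python
import math

# Midpoint-rule evaluation of (1/2) * int_{-1}^{1} [((1+mu^2)/2)^2 + mu^2]^{3/2} dmu;
# the leading 1/2 and the interval length 2 cancel, leaving a plain mean
N = 100000
avg = sum(
    (((1 + mu * mu) / 2)**2 + mu * mu)**1.5
    for mu in ((-1 + (i + 0.5) * 2 / N) for i in range(N))
) / N

# dn/dln f at f = 10 mHz for chirp mass 28 Msun and R = 100 Gpc^-3 yr^-1
G, c = 6.674e-11, 2.998e8
M_SUN, YR, MPC = 1.989e30, 3.156e7, 3.086e22
chirp = 28 * M_SUN
R = 100 / ((1e3 * MPC)**3 * YR)          # comoving merger rate in SI units
f = 10e-3
dn_dlnf = (5 * c**5 * R
           / (96 * math.pi**(8 / 3) * G**(5 / 3) * chirp**(5 / 3) * f**(8 / 3)))
dn_dlnf_mpc = dn_dlnf * MPC**3           # convert m^-3 -> Mpc^-3
```

The quadrature recovers the stated 0.822, and the density comes out near $\SI{4.6e-6}{Mpc^{-3}}$ at \SI{10}{\milli\hertz}, consistent with the normalization quoted in equation \eqref{eq:distrib}.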
The distance $D_\mathrm{near}$ to the nearest and thus loudest binary black holes in each logarithmic frequency interval can be guessed from this number-density distribution via the condition \begin{equation} \frac{4\pi}{3} D_\mathrm{near}^3(f) \frac{dn}{d \ln f} = 1 , \end{equation} and it is found to be \begin{align} D_\mathrm{near} (f) & = \frac{3^{2/3} 2 \pi^{5/9} G^{5/9} \mathcal{M}^{5/9} f^{8/9}}{5^{1/3} c^{5/3} R^{1/3}} \\ & = \SI{37.4}{Mpc} \notag \\ & \times \left( \frac{f}{\SI{10}{\milli\hertz}} \right)^{8/9} \left( \frac{\mathcal{M}}{28 M_\odot} \right)^{5/9} \left( \frac{R}{\SI{100}{Gpc^{-3}.yr^{-1}}} \right)^{-1/3} . \end{align} \begin{figure} \includegraphics[width=.95\linewidth]{fig1.pdf} \caption{Effective range $D_\mathrm{eff}$ for 3-yr observations of eLISA and the distance $D_\mathrm{near}$ to the nearest binary black holes similar to GW150914. The curves for $D_\mathrm{eff}$ labelled by NGO and N2A5 are calculated with the noise curve of \citet{elisa2012} and that of \citet{klein_etal2016}, respectively. Galactic binary white dwarfs are taken into account as the foreground for the latter according to \citet{klein_etal2016}. The vertical dotted line marks $f_\mathrm{merge} ( \SI{3}{yr} ) = \SI{19.2}{\milli\hertz}$.} \label{fig:range} \end{figure} To indicate the relevant distance scale of the problem, we show the effective range $D_\mathrm{eff} (f)$ for two representative noise curve models of eLISA \citep{elisa2012,klein_etal2016} and the distance to the nearest binary $D_\mathrm{near} (f)$ in Fig.~\ref{fig:range}. This figure shows that the relevant range is local with $\lesssim \SI{350}{Mpc}$. The expected number of detections becomes lower than unity for the frequency range where $D_\mathrm{eff} \lesssim D_\mathrm{near}$, and thus the actual number will fluctuate significantly. 
This fluctuation will especially be relevant for the detection of merging binary black holes satisfying $f > f_\mathrm{merge} (T)$ by low-sensitivity configurations such as NGO \citep{elisa2012}, with which $D_\mathrm{eff} / D_\mathrm{near}$ never exceeds 2 irrespective of the frequency. \subsection{Frequency distribution of the expected number of detections} \begin{figure} \includegraphics[width=.95\linewidth]{fig2.pdf} \caption{Frequency distribution of the expected number of detections for binary black holes similar to GW150914 in each logarithmic frequency interval for 3-yr observations of eLISA. The curve labeled by NGO is calculated with the noise curve of \citet{elisa2012}, and the others are with those of \citet{klein_etal2016}. Galactic binary white dwarfs are taken into account as the foreground for N2 configurations according to \citet{klein_etal2016}. The vertical dotted line marks $f_\mathrm{merge} ( \SI{3}{yr} ) = \SI{19.2}{\milli\hertz}$.} \label{fig:dndlnf} \end{figure} Fig.~\ref{fig:dndlnf} shows the distribution, $dN/d \ln f$ (equation \ref{eq:exact}), calculated with various noise curve models of eLISA \citep{elisa2012,klein_etal2016}. For the models of \citet{klein_etal2016}, N2 has a weaker acceleration noise than N1, and the sensitivity is improved primarily at low frequency. A1, A2, and A5 correspond to arm lengths of \SI{1e6}{\kilo\meter}, \SI{2e6}{\kilo\meter}, and \SI{5e6}{\kilo\meter}, respectively, and a long arm length improves the sensitivity at intermediate frequency. This figure shows that the noise curve has a critical impact on the number of detections and also that the majority of the detected binaries will not merge within the observation period in any case. The number of merging binary black holes satisfying $f \ge f_\mathrm{merge} (T)$ is not affected significantly by the sensitivity at low frequency, as found from the comparisons between N1 and N2 configurations. 
This is expected, because additional detections of merging binary black holes require improvement of the sensitivity at $f \ge f_\mathrm{merge} (T)$. On another front, a long arm length increases the detections of merging binary black holes significantly, as found from the comparisons among A1, A2, and A5 configurations. Both improvements enhance the number of total detections. \begin{figure} \includegraphics[width=.95\linewidth]{fig3.pdf} \caption{Fraction of detected binary black holes that merge within the observation period of eLISA to all the detected binary black holes as a function of the observation period. The curve for N2A1 mostly overlaps with that for NGO due to the similar shape of the noise curve.} \label{fig:fraction} \end{figure} The dominance of non-merging binary black holes is always the case for reasonable observation periods as shown in Fig.~\ref{fig:fraction}. This figure shows that merging binary black holes become dominant only for an improbably long observation period of $T \gtrsim \SI{5}{yr}$ with the N1 configurations, which are less sensitive at low frequency \citep[see also][]{sesana2016}. The fraction will be no larger than $\sim 10\%$ for the N2 configurations because of their high sensitivity at low frequency, which drastically increases the detections of non-merging binary black holes. The fraction is nearly the same for NGO and N2A1, because they have very similar noise curves except for the overall normalization. \begin{figure*} \begin{tabular}{cc} \includegraphics[width=.47\linewidth]{fig4a.pdf} & \includegraphics[width=.47\linewidth]{fig4b.pdf} \end{tabular} \caption{Expected number of all the detected binary black holes (left) and merging binary black holes (right) as functions of the observation period of eLISA. 
Note the different scale of the vertical axes.} \label{fig:number} \end{figure*} \begin{table} \caption{The expected number of all the detected binary black holes and merging binary black holes for representative values of the observation period, $T$.} \centering \begin{tabular}{c|cccccc} \hline Model & \SI{1}{yr} & \SI{2}{yr} & \SI{3}{yr} & \SI{4}{yr} & \SI{5}{yr} & \SI{10}{yr} \\ \hline \hline NGO & 1.2 & 3.4 & 6.2 & 9.5 & 13.2 & 36.2 \\ (merge) & 0.07 & 0.3 & 0.7 & 1.3 & 2.0 & 7.9 \\ N1A1 & 1.4 & 3.7 & 6.5 & 9.6 & 13.0 & 32.3 \\ (merge) & 0.3 & 1.1 & 2.5 & 4.4 & 6.6 & 21.6 \\ N1A2 & 6.0 & 16.5 & 29.5 & 44.3 & 60.3 & 153 \\ (merge) & 0.6 & 3.3 & 8.1 & 15.2 & 23.9 & 88.8 \\ N1A5 & 22.2 & 61.9 & 112 & 170 & 235 & 621 \\ (merge) & 0.8 & 5.3 & 15.1 & 31.2 & 53.2 & 249 \\ N2A1 & 4.7 & 13.2 & 24.0 & 36.6 & 50.8 & 139 \\ (merge) & 0.3 & 1.2 & 2.8 & 5.0 & 7.9 & 31.0 \\ N2A2 & 27.6 & 77.6 & 142 & 217 & 302 & 835 \\ (merge) & 0.6 & 3.5 & 9.1 & 17.7 & 29.2 & 135 \\ N2A5 & 174 & 492 & 903 & 1390 & 1940 & 5420 \\ (merge) & 0.8 & 5.7 & 17.0 & 36.9 & 66.3 & 401 \\ \hline \end{tabular} \label{table:number} \end{table} As shown in Fig.~\ref{fig:number}, the number of merging binary black holes, as well as the number of all the detected binary black holes, is always larger for the N2 configurations than for the N1 configurations. We also present the numbers for representative values of $T$ in Table \ref{table:number}. This figure shows that the number of detections for extragalactic stellar-mass binary black holes could be comparable to that for supermassive binary black holes \citep{klein_etal2016}. This figure also shows that the number of (merging or total) detected binaries increases faster than linearly with the observation period, $T$. As we explain in the next section, the total number is expected to increase approximately as $T^{3/2}$ for the case that dominant sources do not evolve significantly, i.e., the monochromatic approximation applies well. This is the case for eLISA. 
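The approximate $T^{3/2}$ growth can already be read off Table~\ref{table:number}; as a small sketch (values copied from the table), comparing the 1-yr and 3-yr columns of the total counts:

```python
# Total expected detections from Table 1 for T = 1 yr and T = 3 yr
totals = {
    "NGO":  (1.2, 6.2),
    "N1A5": (22.2, 112.0),
    "N2A5": (174.0, 903.0),
}
prediction = 3**1.5   # the T^{3/2} scaling predicts a factor ~5.2
ratios = {name: t3 / t1 for name, (t1, t3) in totals.items()}
```

For all three configurations the measured ratio stays within a few per cent of $3^{3/2} \approx 5.2$.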
This situation should be contrasted with compact binary coalescences for ground-based detectors, the number of which should be proportional to $T$. In reality, the increase could be even faster for eLISA, because the long-term observation will allow us to remove more confusion noise caused by Galactic compact binaries. The total expected number of detections varies by two orders of magnitude, from $\sim 6$ (NGO) to $\sim 900$ (N2A5) for $T=\SI{3}{yr}$, among different detector configurations. The prospect for multiband gravitational-wave astronomy crucially depends on the detector arm length. If the arm length is $\sim \SI{1e6}{\kilo\meter}$ (A1), the number of binaries that merge within the observation period is $O(1)$ at best. Even if we count binaries that merge within $\sim \SI{10}{yr}$ after the shutdown of eLISA, the number will be only $\lesssim 30$. By contrast, the arm length of $\sim \SI{5e6}{\kilo\meter}$ (A5) would increase the number of merging binary black holes by a factor of several. Thus, the long arm length of eLISA is highly desired for fruitful multiband gravitational-wave astronomy. Reducing the acceleration noise adds little to multiband observations. Our estimates agree approximately with those of \citet{sesana2016} derived by Monte Carlo simulations with a distribution of chirp masses. \section{monochromatic approximation} In the low-frequency regime where the binary evolution is negligible during the observation period, the monochromatic approximation works well to estimate various aspects of detectable signals. The signal-to-noise ratio of gravitational waves is approximated by \begin{align} \rho^2 & \approx 4 \frac{| \tilde{h} (f) |^2}{(3/20)S(f)} \dot{f} T \\ & = \frac{12 \pi^{4/3} G^{10/3} \mathcal{M}^{10/3}}{5 c^8 D^2} \left[ \left( \frac{1 + \mu^2}{2} \right)^2 + \mu^2 \right]\frac{f^{4/3} T}{(3/20)S(f)} , \end{align} where the time and polarization-angle averages are assumed. 
After averaging over the inclination angle, the frequency distribution of the expected number of detections becomes \begin{equation} \frac{dN}{d \ln f} \approx 0.822 \times \frac{\pi^{1/3} G^{10/3} \mathcal{M}^{10/3} R}{3^{1/2} 5^{1/2} c^7 \rho_\mathrm{thr}^3} \frac{f^{-2/3} T^{3/2}}{[(3/20)S(f)]^{3/2}} \label{eq:mono} \end{equation} in the monochromatic approximation. A remarkable point is that the expected number of detections is proportional to $\mathcal{M}^{10/3}$. This expression implies that our simple estimate should also be valid approximately for the case that the chirp mass has a distribution, once we replace $\mathcal{M}$ by the averaged chirp mass defined by \begin{equation} \langle \mathcal{M} \rangle \equiv \left[ \int p ( \mathcal{M} ) \mathcal{M}^{10/3} d \mathcal{M} \right]^{3/10} , \label{eq:average} \end{equation} where $p ( \mathcal{M} )$ is the probability distribution of the chirp mass. For example, if the realistic typical chirp mass were $\sim 9M_\odot$, corresponding to a $10M_\odot$--$10M_\odot$ binary, the number of detections would be smaller than the current estimate by a factor of $\sim 40$--50. This dependence should be compared with $\mathcal{M}^{5/2}$ expected for ground-based detectors (see equations \ref{eq:snrav} and \ref{eq:snrfac}), which observe chirping signals. The distribution scales as $\propto T^{3/2}$ because of the relation $\rho \propto T^{1/2}$. This means that a longer operation of eLISA increases the number of detected binary black holes faster than linearly in time, in contrast to compact binary coalescences for ground-based detectors. The longer operation will further increase the detections of non-merging binary black holes by removing more confusion noise caused by Galactic compact binaries, and thus $T^{3/2}$ is conservative. 
While this effect will not be relevant to merging binary black holes at high frequency, their detections should also increase even faster than $T^{3/2}$, because the longer operation of eLISA allows binaries at lower frequency to merge within the observation period. \begin{figure} \includegraphics[width=.95\linewidth]{fig5.pdf} \caption{Comparison of the frequency distribution of the expected number of detections obtained by the exact expression, equation \eqref{eq:exact} (solid curve), and that by the monochromatic approximation, equation \eqref{eq:mono} (dashed curve). The results for NGO (black) and N2A5 (red) configurations are shown assuming 3-yr observations. The vertical dotted line marks $f_\mathrm{merge} ( \SI{3}{yr} ) = \SI{19.2}{\milli\hertz}$.} \label{fig:mono} \end{figure} Fig.~\ref{fig:mono} compares $dN/d \ln f$ obtained by the exact integration, equation \eqref{eq:exact}, and that by the monochromatic approximation, equation \eqref{eq:mono}, for $T=\SI{3}{yr}$. The agreement is quite good at frequencies lower than $f_\mathrm{merge} (T)$, particularly for N2A5. The deviation is only by a factor of less than 2 at $f_\mathrm{merge} (T)$ even for NGO. This clearly shows that the monochromatic approximation works quite well to estimate the number of non-merging binary black holes. By contrast, the monochromatic approximation significantly overestimates the number of merging binary black holes at high frequency. The breakdown of this approximation at $f \gtrsim f_\mathrm{merge} (T)$ is inevitable, because the binary evolution necessarily becomes important. Still, given that the detections are dominated by non-merging binaries, the monochromatic approximation is useful to derive the semiquantitative dependence of characteristic quantities on relevant parameters and to evaluate detector performance by a concise calculation. 
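Returning to the chirp-mass dependence of equation \eqref{eq:mono}: the $\mathcal{M}^{10/3}$ scaling and the weighted average of equation \eqref{eq:average} can be illustrated with a toy calculation (the two-component weights below are hypothetical, chosen only for illustration):

```python
# M^{10/3} suppression when moving from the GW150914-like chirp mass of
# 28 Msun down to ~9 Msun (a 10-10 Msun binary): a factor of ~40-50
suppression = (28 / 9)**(10 / 3)

# Weighted average chirp mass <M> = [ sum p(M) M^{10/3} ]^{3/10}
# for a hypothetical two-component distribution (illustrative weights only)
masses = [9.0, 28.0]
probs = [0.9, 0.1]
m_avg = sum(p * m**(10 / 3) for p, m in zip(probs, masses))**(3 / 10)
```

Even with only 10\% of binaries at the heavy mass, the steep $\mathcal{M}^{10/3}$ weighting pulls the effective chirp mass well above the light component.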
\begin{figure} \includegraphics[width=.95\linewidth]{fig6.pdf} \caption{Tangential line with the slope $-2/9$ to the square root of $S(f)$. The black and red curves are for NGO and N2A5, respectively.} \label{fig:tangent} \end{figure} The frequency at which the distribution, $dN/d \ln f$, peaks will dominate the detections, and it can be evaluated graphically by drawing a tangential line with an appropriate slope to the noise spectral density. Specifically, equation \eqref{eq:mono} shows that the peak frequency is obtained as a contact point of a tangential line with the slope $-2/9$, i.e., $\propto f^{-2/9}$, to the square root of the noise spectral density, $S(f)^{1/2}$, as shown in Fig.~\ref{fig:tangent}. The same can be done by taking the minimum of $S(f) f^{4/9}$. \section{localization error} We briefly estimate the localization error using empirical formulae derived in \citet{takahashi_seto2002} adopting the monochromatic approximation and Fisher analysis \citep[see also][]{cutler1998,vecchio_wickham2004}. The errors of the sky location and the distance to the source for $T \gtrsim \SI{2}{yr}$ are, respectively, given by (recall $\SI{1}{deg^2} \approx \SI{3e-4}{sr}$) \begin{align} \Delta \Omega (f) & \sim \SI{7.1e-4}{sr} \left( \frac{\rho}{10} \right)^{-2} \left( \frac{f}{\SI{10}{\milli\hertz}} \right)^{-2} , \label{eq:sky} \\ \frac{\Delta D}{D} & \sim 0.2 \left( \frac{\rho}{10} \right)^{-1} . \end{align} Here, the empirical formula of $\Delta \Omega (f)$ is applicable to $f \gtrsim \SI{2}{\milli\hertz}$ where the source is localized primarily by the Doppler shift associated with the annual motion of the detector \citep{takahashi_seto2002}. Thus, the formula is appropriate for the frequency range in which we are interested in this study (see Fig.~\ref{fig:range}). 
Assuming the error volume $\Delta V$ to have an elliptical shape in conformity with the Fisher analysis, we find \begin{align} \Delta V (f) & \sim \frac{4}{3} D^2 \Delta \Omega (f) \Delta D \\ & \sim \SI{200}{Mpc^3} \left( \frac{D}{\SI{100}{Mpc}} \right)^3 \left( \frac{\rho}{10} \right)^{-3} \left( \frac{f}{\SI{10}{\milli\hertz}} \right)^{-2} . \label{eq:errvol} \end{align} While this expression is valid for any monochromatic source, the signal-to-noise ratio, $\rho$, is determined by $D$ and $f$ for binary black holes with given binary parameters such as the chirp mass and the inclination angle. In this section, we focus on the nearest and thus loudest binaries located at $D_\mathrm{near}(f)$, because the error volume should be the smallest for such systems and the host galaxy should be determined most accurately. To derive the signal-to-noise ratio, we average the inclination angle somewhat arbitrarily in the same manner as is done in the previous section, equation \eqref{eq:ave}. This averaging has a useful feature that the signal-to-noise ratio for the nearest binary becomes $\rho_\mathrm{thr} D_\mathrm{eff} / D_\mathrm{near}$. The error volume for the nearest binary $( \Delta V )_\mathrm{near}$ is found to be proportional to $D_\mathrm{near}^6 D_\mathrm{eff}^{-3} f^{-2}$ irrespective of the precise averaging procedure, and furthermore in the monochromatic approximation, proportional to $f^{4/3} S(f)^{3/2}$. This implies that the smallest error volume is achieved at the minimum of $S(f) f^{8/9}$ or equivalently the contact point of $S(f)^{1/2}$ and a tangential line with the slope $-4/9$. For eLISA noise curves, the minimum of $( \Delta V )_\mathrm{near}$ is very close to the maximum of $dN/d \ln f$. \begin{figure} \includegraphics[width=.95\linewidth]{fig7.pdf} \caption{Error volume $\Delta V(f)$ for the nearest and thus loudest binaries located at $D_\mathrm{near} (f)$ for 3-yr observations of eLISA. 
The black and red curves are for NGO and N2A5, respectively. It should be cautioned that the result below $\sim \SI{2}{\milli\hertz}$ is poorly described by the empirical formula adopted in this study, equation \eqref{eq:sky}. The vertical dotted line marks $f_\mathrm{merge} ( \SI{3}{yr} ) = \SI{19.2}{\milli\hertz}$.} \label{fig:volume} \end{figure} Fig.~\ref{fig:volume} shows the error volume for the nearest and thus loudest binaries in each logarithmic frequency interval. This figure is drawn using the exact integration, equation \eqref{eq:exact}, rather than the monochromatic approximation, equation \eqref{eq:mono}, and the results are nearly identical below $f_\mathrm{merge}(T)$. Assuming a typical number density of galaxies of $\sim \SI{0.01}{Mpc^{-3}}$, this figure strongly suggests that we may be able to determine the host galaxy of many binary black holes with very high confidence. Because massive binary black holes like GW150914 are hardly expected to be localized accurately by ground-based detectors, which crucially rely on high-frequency signals with $f \gtrsim \SI{100}{\hertz}$ via triangulation \citep{fairhurst2009}, the exquisite accuracy of localization by eLISA will be invaluable to study the origin of massive binary black holes. The accurate determination of the host galaxy will also be important to study the cosmology using binary black holes as standard sirens \citep{schutz1986}. Our estimate refers only to the statistical error. Given the high precision of localization by eLISA, other sources of error, such as the inaccuracy of templates, would require serious consideration \citep{cutler_vallisneri2007}. Furthermore, the error volume estimated above is not accurate at high frequency, because the monochromatic approximation used to derive equation \eqref{eq:errvol} breaks down. The estimate for $f \lesssim \SI{2}{\milli\hertz}$ is also inaccurate, because the empirical formula does not apply. 
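As a numerical cross-check of the normalization in equation \eqref{eq:errvol} (not part of the original text), the empirical formulas can be combined at the fiducial point $D = \SI{100}{Mpc}$, $\rho = 10$, $f = \SI{10}{\milli\hertz}$:

```python
# Error volume Delta V ~ (4/3) D^2 DeltaOmega DeltaD, combining the
# empirical localization formulas at D = 100 Mpc, rho = 10, f = 10 mHz
D_mpc, rho, f = 100.0, 10.0, 10e-3
d_omega = 7.1e-4 * (rho / 10)**(-2) * (f / 10e-3)**(-2)   # sky error [sr]
d_dist = 0.2 * (rho / 10)**(-1) * D_mpc                   # distance error [Mpc]
d_volume = (4 / 3) * D_mpc**2 * d_omega * d_dist          # error volume [Mpc^3]

# With a galaxy number density of ~0.01 Mpc^-3, this volume contains
# only a couple of galaxies on average
n_galaxies = 0.01 * d_volume
```

The product reproduces the $\sim \SI{200}{Mpc^3}$ normalization, corresponding to about two galaxies in the error volume for the assumed number density.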
While the statistical error in the frequency range where most of the binaries are detected is handled properly in our discussion, it would be interesting to study the localization error in a more comprehensive manner along the line of our concise calculation to complement Monte Carlo simulations \citep{nishizawa_bks2016,sesana2016,vitale2016}. In addition, the eccentricity estimation will also be important to clarify the nature of massive binary black holes, and thus it would be useful to assess the estimation error in a quantitative manner. The eccentricity may be estimated via the amplitude of the third harmonic mode in the monochromatic limit \citep{seto2016} or via the matched-filtering analysis at high frequency, for which \citet{nishizawa_bks2016} gave error distributions by Monte Carlo simulations combined with the Fisher analysis. We leave such extensions for future study. \section{summary} We investigate the prospects for detecting extragalactic stellar-mass binary black holes by eLISA, motivated by the first direct detection of gravitational waves, GW150914. eLISA might observe as many binary black holes similar to GW150914 as supermassive binary black holes. The detection will be dominated by binaries that do not merge within the observation period $T$ of eLISA, particularly when the low-frequency acceleration noise is improved. The total number of detections varies by about two orders of magnitude depending on the detector sensitivity, and a long-arm detector is highly desired for promoting multiband gravitational-wave astronomy by increasing the detections of merging binary black holes. Scientific returns from a long operation grow faster than linearly in time. Specifically, the total number of detections increases as or faster than $T^{3/2}$, and the number of merging binary black holes increases much faster than $T^{3/2}$. 
A monochromatic approximation works well to derive various features of observational prospects, especially for the case that the sensitivity is high at low frequency so that the detections of non-merging binary black holes are numerous. The expected number of detections is found to be proportional to $\mathcal{M}^{10/3}$ by the monochromatic approximation, and our estimate will be applicable to binary black holes with chirp-mass distribution if we use the appropriately weighted average chirp mass (equation \ref{eq:average}). The error volume of the localization can be so small that the host galaxy is determined with high confidence. This ability of eLISA will be invaluable to clarify the origin of massive binary black holes. \section*{Acknowledgements} This work is supported by JSPS Kakenhi Grant-in-Aid for Research Activity Start-up (No.~15H06857), for Scientific Research (No.~15K65075) and for Scientific Research on Innovative Areas (No.~24103006). Koutarou Kyutoku is supported by the RIKEN iTHES project. \bibliographystyle{mn2e}
\section{Introduction} Traditionally, atom tracking is used in chemistry to understand the underlying reactions and interactions behind some chemical or biological system. In practice, atoms are usually tracked using isotopic labeling experiments. In a typical isotopic labeling experiment, one or several atoms of some educt molecule of the chemical system we wish to examine are replaced by an isotopic equivalent (e.g. $^{12}$C is replaced with $^{13}$C). These compounds are then introduced to the system of interest, and the resulting product compounds are examined, e.g. by mass spectrometry \parencite{isotope-ms} or nuclear magnetic resonance \parencite{isotope-nmr}. By determining the positions of the isotopes in the product compounds, information about the underlying reactions might then be derived. From a theoretical perspective, characterizing a formal framework to track atoms through reactions is an important step to understand the possible behaviors of a chemical or biological system. In this contribution, we introduce such a framework based on concepts rooted in semigroup theory. Semigroup theory can be used as a tool to analyze biological systems such as metabolic and gene regulatory networks \parencite{nehaniv2015symmetry,egri2008hierarchical}. In particular, Krohn-Rhodes theory \parencite{rhodes2009applications} was used to analyze biological systems by decomposing a semigroup into simpler components. The networks are modeled as state automata (or ensembles of automata), and their characteristic semigroup, i.e., the semigroup that characterizes the transition function of the automaton \parencite{algebraic-automata}, is then decomposed using Krohn-Rhodes decompositions or, if not computationally feasible, the holonomy decomposition variant \parencite{egri2015computational}. The result is a set of symmetric natural subsystems and an associated hierarchy between them, which can then be used to reason about the system. 
In \parencite{IsotopeLabel2019}, algebraic structures were employed for modeling atom tracking: graph transformation rules are iteratively applied to sets of undirected graphs (molecules) in order to generate the hyper-edges (the chemical reactions) of a directed hypergraph (the chemical reaction network) \parencite{andersen2016software,chemicalmotifs}. A semigroup is defined by using the (partial) transformations that naturally arise from modeling chemical reactions as graph transformations. Utilizing this particular semigroup, so-called pathway tables can be constructed, detailing the orbit of single atoms through different pathways to help with the design of isotopic labeling experiments. In this work, we show that we can gain a deeper understanding of the analyzed system by considering how atoms move in relation to each other. To this end, we briefly introduce useful terminology in Section \ref{sec:preliminaries}, found in graph transformation theory as well as semigroup theory. In Section \ref{sec:chemical-networks-and-algebraic-structures} we show how the possible trajectories of a subset of atoms can be intuitively represented as the (right) Cayley graph \parencite{denes1966connections} of the associated semigroup of a chemical network. Moreover, we define natural subsystems of a chemical network in terms of reversible atom configurations and show how they naturally relate to the strongly connected components of the corresponding Cayley graph. We show the usefulness of our approach in Section \ref{sec:results} by using the constructions defined in Section \ref{sec:chemical-networks-and-algebraic-structures} to differentiate chemical pathways, based on the atom trajectories derived from each pathway. We then show how the Cayley graph additionally provides a natural handle for the analysis of cyclic chemical systems such as the TCA cycle \parencite{biochem-textbook}.
\section{Preliminaries} \label{sec:preliminaries} \smallskip \noindent \textbf{Graphs:} In this contribution we consider directed as well as undirected connected graphs $G=(V,E)$ with vertex set $V(G)\coloneqq V$ and edge set $E(G)\coloneqq E$. A graph is vertex or edge labeled if its vertices or edges, respectively, are equipped with a labeling function. If it is both vertex and edge labeled, we simply call the graph labeled. We write $l(x)$ for the label of a vertex $(x \in V(G))$ or an edge $(x \in E(G))$. Given two (un)directed graphs $G$ and $G'$ and a bijection $\varphi: V(G) \rightarrow V(G')$, we say that $\varphi$ is edge-preserving if $(v, u) \in E(G)$ if and only if $(\varphi(v), \varphi(u)) \in E(G')$. Additionally, if $G$ and $G'$ are labeled, $\varphi$ is label-preserving if $l(v) = l(\varphi(v))$ for any $v \in V(G)$ and $l(v, u) = l(\varphi(v), \varphi(u))$ for any $(v, u) \in E(G)$. The bijection $\varphi$ is an isomorphism if it is edge-preserving and, in the case that $G$ and $G'$ are labeled, label-preserving. If $G = G'$, then $\varphi$ is also an automorphism. Given a (directed) graph $G$, we call $G$ (strongly) connected if there exists a path from any vertex $u$ to any vertex $v$. We call a subgraph $H$ of $G$ a (strongly) connected component if $H$ is a maximal (strongly) connected subgraph. Since the motivation of this work is rooted in chemistry, it is sometimes more natural to talk about the undirected labeled graphs as molecules, their vertices as atoms (with labels defining the atom type), and their edges as bonds (whose labels distinguish single, double, triple, and aromatic bonds, for instance), while still using common graph terminology for mathematical precision. \noindent \textbf{Graph Transformations:} As molecules are modeled as undirected labeled graphs, it is natural to think of chemical reactions as graph transformations, where a set of educt graphs is transformed into a set of product graphs.
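As an aside, the edge- and label-preserving conditions above translate directly into code. The following is a minimal sketch (using plain dictionaries rather than any particular graph library; all names are illustrative, not part of any established API) that checks whether a given bijection between labeled undirected graphs is an isomorphism:

```python
def is_isomorphism(phi, G, H):
    """Check whether the bijection phi: V(G) -> V(H) is edge- and
    label-preserving, i.e., an isomorphism of labeled graphs.
    A graph is a dict with 'vlabel' (vertex -> atom type) and
    'elabel' (frozenset({u, v}) -> bond label, undirected edges)."""
    # vertex labels must be preserved
    if any(G["vlabel"][v] != H["vlabel"][phi[v]] for v in G["vlabel"]):
        return False
    # edges must map exactly onto edges, with matching bond labels
    mapped = {frozenset(phi[x] for x in e): lab
              for e, lab in G["elabel"].items()}
    return mapped == H["elabel"]

# toy molecule fragment C-C-O with single bonds
G = {"vlabel": {1: "C", 2: "C", 3: "O"},
     "elabel": {frozenset({1, 2}): "-", frozenset({2, 3}): "-"}}
H = {"vlabel": {"a": "C", "b": "C", "c": "O"},
     "elabel": {frozenset({"a", "b"}): "-", frozenset({"b", "c"}): "-"}}
print(is_isomorphism({1: "a", 2: "b", 3: "c"}, G, H))  # True
print(is_isomorphism({1: "c", 2: "b", 3: "a"}, G, H))  # False
```

The second call fails because it maps a carbon onto an oxygen, violating label preservation even though the edge structure is preserved.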
We model such transformations using the double pushout (DPO) approach. For a detailed overview of the DPO approach and its variations see \parencite{habel2001double}. Here, we will use DPO as defined in \parencite{andersen2016software}, which specifically describes how to model chemical reactions as rules in a DPO framework. A rule $p$ describing a transformation of a graph pattern $L$ into a graph pattern $R$ is denoted as a span $L \xleftarrow{l}{} K \xrightarrow{r}{} R$, where $K$ is the subgraph of $L$ remaining unchanged during rewriting and $l$ and $r$ are the subgraph morphisms from $K$ to $L$ and from $K$ to $R$, respectively. The rule $p$ can be applied to a graph $G$ if and only if (i) $L$ can be embedded in $G$ (i.e., $L$ is subgraph monomorphic to $G$) and (ii) graphs $D$ and $H$ exist such that the diagram depicted in Fig.~\ref{fig:derivation} commutes. \begin{figure}[h] \centering \begin{tikzpicture} \node[] (L) at (0, 0){L}; \node[] (K) at (1.5, 0){K}; \node[] (R) at (3, 0){R}; \node[] (G) at (0, -1.5){G}; \node[] (D) at (1.5, -1.5){D}; \node[] (H) at (3, -1.5){H}; \draw[->] (K) -- node[above] {l} (L); \draw[->] (K) -- node[above] {r} (R); \draw[->] (L) -- node[right] {$m$} (G); \draw[->] (K) -- node[right] {} (D); \draw[->] (R) -- node[right] {} (H); \draw[->] (D) -- node[below] {l'} (G); \draw[->] (D) -- node[below] {r'} (H); \end{tikzpicture} \caption{A direct derivation.} \label{fig:derivation} \end{figure} The graphs $D$ and $H$ are unique if they exist \parencite{habel2001double}. The graph $H$ is the result of rewriting $G$ with respect to the rule $p$. We call the application of $p$ on $G$ to obtain $H$ via the map $m: L \rightarrow G$ a direct derivation and denote it as $G \xRightarrow{p,m}{} H$, or $G \xRightarrow{p}{} H$ if $m$ is not important. We note that $m$ is not necessarily unique, i.e., there might exist a different map $m'$ such that $G \xRightarrow{p,m'}{} H$.
For a DPO rule $p$ to model chemistry, we follow the modeling in \parencite{chemicalmotifs}, and impose 3 additional conditions that $p$ must satisfy. (i) All graph morphisms must be injective (i.e., they describe subgraph isomorphisms). (ii) The restriction of the graph morphisms $l$ and $r$ to the vertices must be bijective, ensuring atoms are conserved through a reaction. (iii) Changes in charges and edges (chemical bonds) must conserve the total number of electrons. In the above framework, a chemical reaction is a direct derivation $G \xRightarrow{p,m}{} H$, where the connected components of $G$ and $H$ correspond to the educt and product molecules, respectively. Conditions (i) and (ii) ensure that $l$ and $r$, and by extension $l'$ and $r'$, are bijective mappings when restricted to the vertices. As a consequence, we can track each atom through a chemical reaction modeled as a direct derivation by the map $l'^{-1}\circ r'$. We note that, like $m$, the maps $l'$ and $r'$ might not be unique for a given direct derivation $G \xRightarrow{p}{} H$. We define the set of all such maps $l'^{-1}\circ r'$ for all possible maps $l'$ and $r'$ obtained from $G \xRightarrow{p}{} H$ as $tr(G \xRightarrow{p}{} H)$. An example of a direct derivation representing a chemical reaction is depicted in Fig.~\ref{fig:direct-der}. \begin{figure} \centering \includegraphics[width=1\textwidth]{./figs/2.pdf} \caption{An example of a direct derivation. The mappings $l$, $r$, $l'$ and $r'$ are implicitly given by the depicted positions of the atoms. Given a chemical network, each hyper-edge directly corresponds to such a direct derivation. \label{fig:direct-der} } \end{figure} \noindent \textbf{Chemical Networks:} We consider a directed hypergraph where each edge $e=(e^+,e^-)$ is a pair of subsets of vertices. Moreover, we let $Y_e = e^+\cup e^-$ denote the set of vertices that are comprised in the start-vertex $e^+$ and end-vertex $e^-$ of $e$.
In short, a chemical network $\mathrm{CN}$ is a hypergraph where each vertex is a connected graph representing a molecule and each hyper-edge a rule application corresponding to a chemical reaction. Hence, every hyper-edge $e$ of $\mathrm{CN}$ corresponds to a set of direct derivations transforming the in-going vertices of $e$ into its out-going vertices. For a given set of edges $E$ of $\mathrm{CN}$, let $\mathcal{D}$ be the set of all direct derivations that can be obtained from $E$. Then, $tr(E) = \bigcup_{G\xRightarrow{p}{} H\in \mathcal{D}} tr(G\xRightarrow{p}{} H)$ and $tr(\mathrm{CN}) = tr(E(\mathrm{CN}))$.\\ \noindent \textbf{Semigroups and transformation semigroups:} A \emph{semigroup} is a pair $(S, \circ)$, where $S$ is a set and $\circ: S \times S\rightarrow S$ an associative binary operator on $S$. We often write $ab$ for the product $a\circ b$. A semigroup that contains the identity element $1$ (i.e., $s1 = s = 1s$ for all $s \in S$) is a \emph{monoid}. The \emph{order} of a semigroup $S$ is its cardinality $|S|$. A subset $A \subseteq S$ is said to \emph{generate $S$}, or is called a generating set for $S$, in symbols $\langle A \rangle = S$, if all elements of $S$ can be expressed as a finite product of elements in $A$. Given a non-empty finite set $X$, a \emph{transformation} on $X$ is an arbitrary map $f: X \rightarrow X$ that assigns to \emph{every} element $x\in X$ some element $f(x)\in X$. The identity transformation on $X$ is denoted $1_X$. If $X=\{1,\dots,n\}$, we often use the notation $(i_1, i_2, \dots, i_n)$ for the transformation $f(j)=i_j$, $1\leq j\leq n$. Note, the elements $i_1, i_2, \dots, i_n$ need not necessarily be pairwise distinct. Let $T$ be the set of all possible transformations on $X$. If $S\subseteq T$ and $S$ is closed under function composition $\circ$, then $(S,\circ)$ forms a semigroup, also called a \emph{transformation semigroup}. A transformation monoid is a transformation semigroup that contains the identity $1_X$.
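The one-line notation and the closure of a generating set can be sketched in a few lines of Python. This is a minimal illustration (function names are hypothetical, not from any library); composition is read left to right, so $(st)(j) = t(s(j))$:

```python
def compose(s, t):
    """Left-to-right composition of transformations on {1, ..., n}
    in one-line notation: (s t)(j) = t(s(j))."""
    return tuple(t[x - 1] for x in s)

def generate_monoid(generators, n):
    """Close a set of transformations under composition, adding the
    identity so the result is a transformation monoid."""
    identity = tuple(range(1, n + 1))
    elements, frontier = {identity}, {identity}
    while frontier:
        # every element is a product of generators, so right-multiplying
        # the frontier by the generators eventually reaches all of them
        new = {compose(s, a) for s in frontier for a in generators}
        frontier = new - elements
        elements |= frontier
    return elements

# a 3-cycle and a transposition generate all 3! = 6 permutations of {1,2,3}
S = generate_monoid([(2, 3, 1), (2, 1, 3)], 3)
print(len(S))  # 6
```

The same breadth-first closure, run on the atom maps $tr(\mathrm{CN})$, yields the characteristic monoid introduced in the next section.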
To emphasize that $S$ is a collection of transformations on $X$, we will use the notation $(X,S)$ for transformation semigroups and say that $S$ \emph{acts on} $X$. Given a tuple $\bar{z} = (z_1, z_2, \dots, z_n)$ of $n$ distinct elements of $X$ and a transformation semigroup $(X,S)$, the orbit of $\bar{z}$ is defined as $\mathcal{O}(\bar{z}, S) = \{(s(z_1), \dots, s(z_n))\mid s \in S\}.$ In what follows, we use the notation $y\in t= (i_1, i_2, \dots, i_n)$ to indicate that $y = i_j$ for some $j$, $1\leq j\leq n$. Given a transformation semigroup $(X,S)$ with generating set $A$, in symbols $S=\langle A \rangle$, we will employ the \emph{(right) Cayley graph} $\mathrm{Cay}(S, A)$ of $S$ and $A$ with vertex set $S$ and edge set $E(\mathrm{Cay}(S, A)) = \{(s, sa) \mid s\in S, a\in A\}$. In addition, every edge $(s, sa)$ of $\mathrm{Cay}(S, A)$ obtains the label $l_a$, that is, the unique label associated with the generator $a$ in $A$. Similarly, the \emph{projected} Cayley graph $\mathrm{PCay}(S, A, \bar{z})$ is defined for tuples $\bar{z}$: It has vertex set $\mathcal{O}(\bar{z}, S)$ and for all $s\in \mathcal{O}(\bar{z}, S)$ and for all $a\in A$ there is an edge $(s, sa)$ with label $l_a$. The \emph{free semigroup} $\Sigma^+$ is the semigroup of all non-empty finite strings over the alphabet $\Sigma$, with concatenation as the associative binary operator. Adding the empty string $\epsilon$ results in the free monoid $\Sigma^* = \Sigma^+ \cup \{\epsilon\}$. \section[Chemical Networks and Algebraic Structures]{Chemical Networks and their Algebraic Structures} \label{sec:chemical-networks-and-algebraic-structures} \subsection{Characteristic Monoids}\label{sec:charsg} Assume we are given a chemical network $\mathrm{CN}$, i.e., a hypergraph modeling some chemistry.
As we are interested in tracking the possible movements of atoms in $\mathrm{CN}$, we are inherently interested in the reactions of $\mathrm{CN}$, i.e., in its edge set $E(\mathrm{CN})$. Indeed, atoms can only reconfigure to construct new molecules under the execution of some reaction. We will refer to the execution of a reaction as an \emph{event}. The possible reconfigurations of atoms caused by a single event are given by the set of atom maps $tr(\mathrm{CN})$, constituting a set of (partial) transformations on $X = \bigcup_{M \in V(\mathrm{CN})}V(M)$. Note that a vertex $M\in V(\mathrm{CN})$ corresponds to an entire molecule, for which $V(M)$ denotes the set of atoms (=labeled vertices). A transformation $t$ on $X$ describes the position (i.e., in what molecule and where in the molecule the atom is found) of each atom in $X$ when $X$ is transformed by $t$. In what follows, we will sometimes refer to such transformations on $X$ as \emph{atom states}, as the transformations encapsulate the ``state'' of the network, i.e., the position of each atom. To track the possible movement of atoms through a chemical network, we must consider sequences of events. \begin{definition}[Event Traces] Let $\Sigma$ be an alphabet containing a unique identifier $t$ for each atom map in $tr(\mathrm{CN})$. Then, an \emph{event trace} is an element of the free monoid $\Sigma^*$. \end{definition} The free monoid $\Sigma^*$ contains all possible sequences of events that can move the atoms of $X$. Note that $\Sigma^*$ does not track the actual atoms through event traces.
For this, we use the following structure: \begin{definition}[Characteristic Monoids] \label{def:charsg} Let the characteristic monoid of $\mathrm{CN}$ be defined as the transformation monoid $ \mathcal{S}(\mathrm{CN}) = (X, \langle tr(\mathrm{CN}) \cup 1_X \rangle).$ Moreover, given a set of edges $E \subseteq E(\mathrm{CN})$, and the set of atoms $Y \subseteq X$ found in $E$ (that is, $Y= \cup_{e\in E} Y_e$), we let the characteristic monoid of $E$ be defined as $ \mathcal{S}(E) = (Y, \langle tr(E) \cup 1_Y \rangle). $ \end{definition} Let $\sigma: \Sigma \rightarrow tr(\mathrm{CN})$ be the function that maps each identifier of $\Sigma$ to its corresponding atom map in $tr(\mathrm{CN})$. Given an event trace $t = t_1t_2\dots t_n \in \Sigma^*$, we let the events of $t$ refer to their corresponding transformations in $tr(\mathrm{CN})$ when acting on an element $s\in\mathcal{S}(\mathrm{CN})$, i.e., $st = s\sigma(t_1)\sigma(t_2)\dots \sigma(t_n) \in \mathcal{S}(\mathrm{CN})$. Every event trace $t \in \Sigma^*$ gives rise to a member of $\mathcal{S}(\mathrm{CN})$, in particular the transformation $1_Xt$, which represents the resulting atom state obtained from moving atoms according to $t$. Hence, there is a homomorphism from $\Sigma^*$ to $\mathcal{S}(\mathrm{CN})$, meaning that $\mathcal{S}(\mathrm{CN})$ captures all possible movements of atoms through reactions of $\mathrm{CN}$. Often, we are only interested in tracking the movement of a small number of atoms. Let $\bar{z}$ be a tuple of distinct elements from $X$ that we want to track. Then, there is again a homomorphism from $\Sigma^*$ to $\mathcal{O}(\bar{z}, \mathcal{S}(\mathrm{CN}))$. Namely, for a given event trace $t \in \Sigma^*$, we can track the atoms of $\bar{z}$ as the atom state $1_{\{x\ |\ x \in \bar{z}\}}t$ corresponding to an element in the orbit $\mathcal{O}(\bar{z}, \mathcal{S}(\mathrm{CN}))$, if we treat the element as a (partial) transformation.
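Operationally, the homomorphism from $\Sigma^*$ into $\mathcal{S}(\mathrm{CN})$ amounts to folding composition over a word. A minimal sketch (with a hypothetical `sigma` table standing in for the map $\sigma$ that resolves event identifiers to transformations):

```python
from functools import reduce

def compose(s, t):
    # left-to-right composition in one-line notation: (s t)(j) = t(s(j))
    return tuple(t[x - 1] for x in s)

def apply_trace(trace, sigma, n):
    """Resolve an event trace (a word over the event alphabet) to the
    atom state 1_X t_1 t_2 ... t_k in the characteristic monoid."""
    identity = tuple(range(1, n + 1))
    return reduce(compose, (sigma[e] for e in trace), identity)

# two toy events on X = {1, 2, 3, 4}
sigma = {"t1": (3, 4, 3, 4), "t2": (1, 2, 2, 1)}
print(apply_trace(["t1", "t2"], sigma, 4))  # (2, 1, 2, 1)
print(apply_trace([], sigma, 4))            # the identity (1, 2, 3, 4)
```

The empty trace maps to $1_X$, reflecting that $\Sigma^*$ is a monoid and the homomorphism preserves the identity.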
As a result, $\mathcal{O}(\bar{z}, \mathcal{S}(\mathrm{CN}))$ characterizes the possible movements of the atoms in $\bar{z}$, and we will refer to its elements as atom states, similarly to elements in $\mathcal{S}(\mathrm{CN})$, as they conceptually represent the same thing. We note that the above definitions are not unlike some of the core definitions within algebraic automata theory \parencite{algebraic-automata}. Here, the possible inputs of an automaton are often defined in terms of strings obtained from the free monoid on the alphabet of the automaton. The characteristic semigroup is then defined as the semigroup that characterizes the possible state transitions. In the same vein, we can view our notion of event traces as the possible ``inputs'' to our chemical network CN that move some initial configuration of atoms $1_X$. The characteristic monoid of CN then characterizes the possible movements of atoms through event traces. In what follows we let $\mathrm{Cay}(\mathrm{CN})$ denote the Cayley graph $\mathrm{Cay}(\mathcal{S}(\mathrm{CN}), tr(\mathrm{CN}) \cup 1_X)$. Similarly, given a tuple of atoms $\bar{z}$, we let $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ denote the projected Cayley graph $\mathrm{PCay}(\mathcal{S}(\mathrm{CN}), tr(\mathrm{CN})\cup 1_X, \bar{z})$. We note that, by Def.~\ref{def:charsg}, $\mathcal{S}(\mathrm{CN})$ is generated by the set $tr(\mathrm{CN})\cup 1_X$, and hence $\mathrm{Cay}(\mathrm{CN})$ and $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ are well defined. Since the transformation $1_X$ will always result in a loop on every vertex of the (projected) Cayley graph, and conveys no meaningful information, we will refrain from including any edge arising from $1_X$. We can illustrate the relation between atom states using the Cayley graph $\mathrm{Cay}(\mathrm{CN})$.
More precisely, there is an edge from atom state $a$ to atom state $b$, $a,b \in \mathcal{S}(\mathrm{CN})$, with label $t$ whenever the event $t$ moves the atoms in $a$ to $b$, i.e., $at = b$. It is natural to relate $\Sigma^*$ to $\mathrm{Cay}(\mathrm{CN})$. Namely, any path in $\mathrm{Cay}(\mathrm{CN})$ corresponds directly to an event trace in $\Sigma^*$. Hence, where $\Sigma^*$ encapsulates the ``inputs'' of the chemical network and $\mathcal{S}(\mathrm{CN})$ contains the possible atom states derived from $\Sigma^*$, the Cayley graph $\mathrm{Cay}(\mathrm{CN})$ captures \emph{how} atom states from $\mathcal{S}(\mathrm{CN})$ can be created by event traces. \begin{figure}[t] \centering \subfloat[] { \includegraphics[width=0.29\textwidth]{./figs/3a.pdf} \label{fig:formose-example} } \subfloat[] { \raisebox{1.5cm}{ \includegraphics[width=0.6\textwidth]{./figs/3b.pdf} \label{fig:formose-Cayley} } } \caption[]{\subref{fig:formose-example} A small example using molecules and reactions found in the Formose reaction. The carbon atoms of each molecule are labeled with a unique identifier for easy reference. \subref{fig:formose-Cayley} The Cayley graph $\mathrm{Cay}(\mathrm{CN})$ of Fig.~\ref{fig:formose-example} from the example of Sec.~\ref{sec:charsg}. From the graph, we see the longest path from $1_X$ has length $2$, meaning that any event trace can at most transform $1_X$ meaningfully twice. In fact, only two types of event traces are of interest: Either the tracked atoms are immediately moved by the reaction $r_1$ to $p_{0,2}$, or the atoms of glycolaldehyde are first moved to $p_{0,0}$ using $r_0$, and then moved to $p_{0,2}$. } \end{figure} \smallskip \textbf{Example:} \label{sec:ex_simple} As an illustrative example, consider the reaction network $\mathrm{CN}$ depicted in Fig.~\ref{fig:formose-example}. For simplicity we will use reactions $r_0$ and $r_1$ involved in the so-called Formose reaction.
We restrict ourselves to only consider the carbon atoms of all molecules, and have labeled them with a corresponding unique id for easy reference. Here, the underlying set $X=\{1,2,\dots, 8\}$ corresponds to the eight elements labeled by $1,2,\dots, 8$ in Fig.~\ref{fig:formose-example}. From $tr(\mathrm{CN})$ we get 4 transformations: $s_1 = [3,4,3,4,5,6,7,8]$, $s_2 = [4,3,3,4,5,6,7,8]$ (both obtained from $r_0$), and $s_3 = [5,6,7,8,5,6,7,8]$, $s_4 = [5,6,8,7,5,6,7,8]$ (both obtained from $r_1$), with the corresponding alphabet $\Sigma = \{s_1, s_2, s_3, s_4 \}$. For a reaction, the corresponding transformation(s) map the atoms of the educt molecules to the atoms of the product molecules, while all other atoms are mapped by the identity. The transformations describe how carbon atoms are rearranged into different configurations when an event is fired. $s_1$ and $s_2$ describe how the carbon atoms of a glycolaldehyde molecule are arranged in the molecule $p_{0,0}$ when transformed via the reaction $r_0$. In the case of $s_1$, we see that the carbons are rearranged such that $s_1(1) = 3$ and $s_1(2) = 4$. Of course, due to the symmetries in the molecule $p_{0,0}$, reaction $r_0$ also results in the mirrored transformation of $s_1$, i.e., $s_2(1) = 4$ and $s_2(2) = 3$. The characteristic monoid of $\mathrm{CN}$, $\mathcal{S}(\mathrm{CN})$, has order $9$. We illustrate the movement of atoms in $\mathrm{CN}$ by its Cayley graph $\mathrm{Cay}(\mathrm{CN})$, which is depicted in Fig.~\ref{fig:formose-Cayley}. Any path originating from the identity element corresponds to an event trace, e.g., we can track the atoms $1$ and $2$ through the event trace $s_1s_3$ as the corresponding path and realize $s_1s_3(1) = 7$ and $s_1s_3(2) = 8$.
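The numbers in this example can be checked mechanically. The sketch below (plain Python, not the actual implementation discussed in Sec.~\ref{sec:results}) generates the characteristic monoid from the four transformations and recovers its order, the orbit size of the glycolaldehyde carbons, and the trace $s_1s_3$:

```python
def compose(s, t):
    # left-to-right composition: (s t)(j) = t(s(j))
    return tuple(t[x - 1] for x in s)

# the four transformations of the example, in one-line notation
s1 = (3, 4, 3, 4, 5, 6, 7, 8)
s2 = (4, 3, 3, 4, 5, 6, 7, 8)
s3 = (5, 6, 7, 8, 5, 6, 7, 8)
s4 = (5, 6, 8, 7, 5, 6, 7, 8)
gens = [s1, s2, s3, s4]
identity = tuple(range(1, 9))

# close {1_X} under right multiplication by the generators
S, frontier = {identity}, {identity}
while frontier:
    frontier = {compose(s, a) for s in frontier for a in gens} - S
    S |= frontier

print(len(S))                          # 9, the order of S(CN)
print(len({(s[0], s[1]) for s in S}))  # 6, the orbit size of (1, 2)
t = compose(s1, s3)
print(t[0], t[1])                      # 7 8
```

The last line realizes the event trace $s_1s_3$ as a single transformation, confirming that atoms $1$ and $2$ end up at positions $7$ and $8$ in $p_{0,2}$.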
\begin{figure}[t] \centering \subfloat[] { \centering \includegraphics[width=0.24\textwidth]{figs/4a.pdf} \label{fig:formose_orbit_graph} } \subfloat[] { \raisebox{0.5cm}{ \includegraphics[width=0.45\textwidth]{./figs/4b.pdf} \label{fig:example-cycle} } } \caption[]{ \subref{fig:formose_orbit_graph} The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (1,2))$ from the example of Section \ref{sec:charsg}. As for $\mathrm{Cay}(\mathrm{CN})$, we see there are only two types of event traces of interest. However, since we are only tracking the atoms of the glycolaldehyde molecule, some atom states are effectively coalesced compared to $\mathrm{Cay}(\mathrm{CN})$. \subref{fig:example-cycle} The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (1,2))$ from the example of Sec.~\ref{sec:subsystem}. The graph shows the natural subsystems of the carbon atoms of a glycolaldehyde molecule. Vertices in the same box belong to the same natural subsystem. Note that edges between vertices in the same natural subsystem are not depicted (e.g., one of the eight hidden edges in the top-level subsystem is $(3,4)\rightarrow (1,2)$ with label $s_5$). } \end{figure} Assume now that we are only interested in tracking the carbon atoms found in the glycolaldehyde molecule. To this end, we can examine $\mathcal{O}(\bar{z}, \mathcal{S}(\mathrm{CN}))$ with $\bar{z} = (1,2)$, which contains $6$ elements, meaning there exist $6$ unique atom states for the atoms in a glycolaldehyde molecule. Again, we can study these movements using the projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (1,2))$. The resulting graph is depicted in Fig.~\ref{fig:formose_orbit_graph}. \subsection{Natural Subsystems of Atom States} \label{sec:subsystem} In the intersection between group theory and systems biology, attempts to formalize the notion of natural subsystems and hierarchical relations within such systems have been made in works such as \parencite{nehaniv2015symmetry}.
Here, natural subsystems are defined as symmetric structures arising from a biological system. Such symmetries manifest as permutation groups of the associated semigroup representing said system. In such a model, the Krohn-Rhodes decomposition or the holonomy decomposition \parencite{egri2015computational} can be used to construct a hierarchical structure on such natural subsystems of the biological system. In terms of atom tracking, however, defining natural subsystems in terms of the permutation groups in $\mathcal{S}(\mathrm{CN})$ does not have an immediately useful interpretation. Similarly, the hierarchical structure obtained from methods such as holonomy decomposition is not intuitive to interpret. Instead, when talking about natural subsystems in terms of atom tracking, we are interested in systems of reversible \emph{event traces}, i.e., event traces that do not change the original configuration of atoms. To this end, it is natural to define natural subsystems of $\mathcal{S}(\mathrm{CN})$ in terms of Green's relations \parencite{algebraic-theory-semigroup}. For elements $s_1, s_2 \in \mathcal{S}(\mathrm{CN})$, we define the reflexive transitive relation $\succeq_{\mathcal R}$ as $s_1 \succeq_{\mathcal R} s_2$, if there exists an event trace $t \in \Sigma^*$ such that $s_1 t = s_2$. In addition, we define an equivalence relation $\mathcal{R}$, where $s_1$ is equivalent to $s_2$, in symbols $s_1\mathcal{R}s_2$, whenever $s_1 \succeq_{\mathcal{R}} s_2$ and $s_2 \succeq_{\mathcal{R}} s_1$. \begin{definition}[Natural Subsystems] The natural subsystems of $\mathcal{S}(\mathrm{CN})$ are the equivalence classes induced by the $\mathcal{R}$-relation. \end{definition} The equivalence classes correspond to the strongly connected components of the Cayley graph $\mathrm{Cay}(\mathrm{CN})$ \parencite{froidure1997algorithms}.
We note that, for a tuple of atoms $\bar{z}$, the natural extension of natural subsystems to the orbit $\mathcal{O}(\bar{z}, \mathcal{S}(\mathrm{CN}))$ is simply given by the strongly connected components of its projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, \bar{z})$. The $\mathcal{R}$ relation is interesting, as the equivalence classes on $\mathcal{S}(\mathrm{CN})$ induced by the $\mathcal{R}$ relation form pools of reversible event traces. More precisely, let $s_1\mathcal{R} s_2$ for some $s_1, s_2 \in \mathcal{S}(\mathrm{CN})$, where $s_1\cdot t_{12} = s_2$ and $s_2\cdot t_{21} = s_1$ for some $t_{12},t_{21}\in \Sigma^*$. Then, the event traces $t_{12}$ and $t_{21}$ are reversible, i.e., we can re-obtain $s_1$ as $s_1t_{12}t_{21} = s_1$ and $s_2$ as $s_2t_{21}t_{12} = s_2$. Additionally, the quotient graph of the equivalence classes of the $\mathcal{R}$ relation on the Cayley graph $\mathrm{Cay}(\mathrm{CN})$ naturally forms a hierarchical relation on the atom states of $\mathcal{S}(\mathrm{CN})$ that has a useful interpretation from the point of view of chemistry, as we will see in Sec.~\ref{SEC:TCA}. \textbf{Example: } Again, consider the reaction network obtained from the Formose reaction depicted in Fig.~\ref{fig:formose-example}. We will include the transformations obtained from reaction $r_2$ in addition to the transformations listed in Sec.~\ref{sec:ex_simple}: $s_5 = [1,2,1,2,5,6,7,8]$ and $s_6 = [1,2,2,1,5,6,7,8]$ (both obtained from $r_2$). Assume we are interested in determining how the carbon atoms of a glycolaldehyde molecule can reconfigure into different molecules. The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (1,2))$ shows such configurations and is depicted in Fig.~\ref{fig:example-cycle}. Here, the atom states belonging to the same gray box are strongly connected and hence belong to the same natural subsystem.
For clarity, we have removed edges between atom states in the same subsystem, since any atom state in a subsystem can be transformed into any other state in the same subsystem. Notably, we see from Fig.~\ref{fig:example-cycle} that the atoms $1$ and $2$ in the glycolaldehyde molecule can swap positions. We could of course also realize that such a swap is possible by noticing the symmetries in the glycolaldehyde molecule and the fact that we can convert glycolaldehyde to the $p_{0,0}$ molecule and vice versa. However, such patterns become immediately apparent from the projected Cayley graph. Finally, we can derive from Fig.~\ref{fig:example-cycle} that it is only possible to leave the original subsystem by applying transformation $s_3$ or $s_4$, corresponding to reaction~$r_1$. \section{Results} \subsection{Implementation} To test the practicality of the structures introduced in the previous section, we implemented the construction of the projected Cayley graph of a set of atoms in a chemical network. The resulting implementation can be found at \url{https://github.com/Nojgaard/cat}. All code is written in Python and uses the software packages M{\O}D \parencite{andersen2016software} and NetworkX \parencite{SciPyProceedings_11} to construct the chemical networks and find the transformations used for the characteristic monoid. All figures in the following section were constructed with said implementation, and each run finished within seconds on an 8-core Intel Core i9 CPU with 64 GB memory. The most time-consuming part of the implementation was the computation of the transformations obtained from each hyper-edge in the chemical network. In contrast, the construction time of the projected Cayley graph proved to be negligible.
\label{sec:results} \subsection{Differentiating Pathways} In this section, we will explore the possibility of using the characteristic monoids of chemical networks to determine whether it is possible to distinguish between two pathways $P_1$ and $P_2$, based on the atom states of their respective characteristic monoids. The motivation stems from methods such as isotope labeling. Here, a ``labeled'' atom is a detectable isotope whose position is known in some initial molecule and can then be detected, along with its exact position, in the product molecules of some pathway. In contrast to \parencite{IsotopeLabel2019}, we will not focus on the orbits of atoms in isolation, as we would lose the ability to reason about atom positions in relation to each other. Moreover, as we will see here, the Cayley graph of the chemical network can be used to identify the exact event at which two pathways split. Given a chemical network $\mathrm{CN}$, a pathway $P$ is a set of hyper-edges (i.e., reactions) from $\mathrm{CN}$ equipped with a set of input and output molecules. We think of a pathway as a process that consumes a set of input molecules to construct a set of output molecules, using the reactions specified by $P$. In our case, a ``labeled'' atom is a point in $\mathcal{S}(\mathrm{CN})$. Given two pathways $P_1$ and $P_2$, we can characterize the possible movement of atoms by the characteristic monoids $\mathcal{S}(P_1)$ and $\mathcal{S}(P_2)$. In practice, it might not be feasible to track every atom in $\mathrm{CN}$, e.g., we may only be able to replace a few atoms with their corresponding detectable isotopes, and hence it becomes useful to consider the orbits $\mathcal{O}(\bar{z}, \mathcal{S}(P_1))$ and $\mathcal{O}(\bar{z}, \mathcal{S}(P_2))$, where $\bar{z}$ contains the atoms from the input molecules we can track. Clearly, of the atom states in $\mathcal{O}(\bar{z}, \mathcal{S}(P_1))$ and $\mathcal{O}(\bar{z}, \mathcal{S}(P_2))$, we can only expect to observe, e.g.,
in an isotope labeling experiment, the atom states that locate the tracked atoms in the output molecules. As a result, we arrive at the following observation: \begin{observation} Let $Y_i \subseteq \mathcal{O}(\bar{z}, \mathcal{S}(P_i))$, $i \in \{1,2\}$, be the atom states we can hope to observe after some isotope labeling experiment. Then, we can always distinguish between $P_1$ and $P_2$ if $Y_1\cap Y_2 = \emptyset$. \end{observation} \noindent \textbf{Example:} Consider the network $\mathrm{CN}$ depicted in Fig.~\ref{fig:anrorc-dg} modelling the creation of the product 4-phenyl-6-aminopyrimidine (denoted P) from the educt 4-(benzyloxy)-6-bromopyrimidine (denoted E) using ammonia. This well-investigated and widely used substitution mechanism (ANRORC) \parencite{anrorc} was proven, via isotope labeling, to function non-trivially through ring opening and ring closure (with an accompanying carbon replacement). \begin{figure}[t] \centering \subfloat[] { \includegraphics[width=0.75\textwidth]{./figs/5a.pdf} \label{fig:anrorc-dg} } \subfloat[] { \includegraphics[width=0.25\textwidth]{./figs/5b.pdf} \label{fig:anrorc-cay} } \caption[]{\subref{fig:anrorc-dg} The chemical network for the creation of P from E using ammonia. The dotted lines show the possible atom trajectories for the atoms $2$ and $3$, respectively. \subref{fig:anrorc-cay} The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (2,3))$. } \end{figure} Two possible pathways are modelled: the input molecules for the two pathways are the molecules E, \ch{NH3}, \ch{NH2}, while the output is the single molecule P. The first, seemingly correct but actually wrong, pathway $P_1 = \{r_3\}$ converts E and an \ch{NH3} molecule directly into P, by replacing the \ch{Br} atom with \ch{NH2}. The second pathway consists of the reactions $P_2 = \{ r_0, r_1, r_2, r_4 \}$ and models the ANRORC mechanism. Assume we wanted to devise a strategy to decide which pathway is executed in reality.
By replacing the nitrogen atoms of the E molecule with the isotope $^{13}$N, we would be able to observe where the atoms are positioned in the produced P molecule. Since we, by assumption, only label the nitrogen atoms of the E molecule, i.e., the atoms $3$ and $2$, we can look at the orbits of the characteristic monoids $\mathcal{O}((2,3), \mathcal{S}(P_1))$ and $\mathcal{O}((2,3), \mathcal{S}(P_2))$, of order $5$ and $2$, respectively. We see that both orbits contain only a single element locating $(2,3)$ in the $P$ molecule, namely the element $(14,15)$ for $\mathcal{O}((2,3), \mathcal{S}(P_1))$ and $(14,13)$ for $\mathcal{O}((2,3), \mathcal{S}(P_2))$. As the possible configurations are different for $P_1$ and $P_2$, it is hence always possible to identify whether the $P$ molecule was created by $P_1$ or $P_2$. This fact also becomes immediately apparent from the projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (2,3))$ depicted in Fig.~\ref{fig:anrorc-cay}, which shows the immediate divergence of atom states of the two pathways. \subsection{Natural Subsystems in the TCA Cycle}\label{SEC:TCA} The citric acid cycle, also known as the tricarboxylic acid (TCA) cycle or the Krebs cycle, is at the heart of many metabolic systems. The cycle is used by aerobic organisms to release stored energy in the form of ATP by the oxidation of acetyl-CoA into water and CO$_2$. The details of the TCA cycle can be found in any standard chemistry textbook, e.g., \parencite{biochem-textbook}. In \parencite{smith2016origin}, the trajectories of different carbon atoms in the TCA cycle were examined to explain the change of their oxidation states. It is well known that there is an enzymatic differentiation of the two carboxymethyl groups in citrate, which requires a rigorous stereochemical modeling of the graph grammar rules used \parencite{stereoGraTra}. Ignoring such stereochemical modeling would lead to atom mappings not occurring in nature.
We will provide a formal handle to analyze theoretically possible carbon trajectories using the algebraic constructs provided in this paper. As we will see, such structures provide intuitive interpretations for the TCA cycle. More precisely, assume we are interested in answering the following questions: what are the possible trajectories of the carbons of an oxaloacetate (OAA) molecule within the TCA cycle, (i) when ignoring the enzymatic differentiation of the two carboxymethyl groups in citrate (denoted TCA-$\square$), or (ii) when taking it into account (denoted TCA-${\tetrahedron}$)? To answer these questions, we will decompose the characteristic monoid of the TCA cycle into its natural subsystems and examine them using the projected Cayley graph. In our setting, the TCA cycle is the chemical network $\mathrm{CN}$, depicted in Fig.~\ref{fig:krebbs_cay_path}, giving rise to transformations of the underlying monoid. The network comprises 13 reactions; for simplicity, some of them are not shown. Of these 13 reactions, 7 yield exactly 1 transformation each, while the remaining 6 yield 2 possible transformations each, resulting in a total of 19 transformations. The reactions yielding multiple transformations do so due to automorphisms in molecules such as citrate and fumarate. When the enzymatic differentiation of the carboxymethyl groups in citrate is not ignored, only 4 of the 13 reactions yield 2 possible transformations, as the carbon traces to and from citrate are more constrained. In short, while both $\text{TCA-}\square$ and $\text{TCA-}\tetrahedron$ are modeled by the same network, the obtained transformations differ. More precisely, $|tr(\mathrm{CN})| = 19$ wrt. $\text{TCA-}\square$ and $|tr(\mathrm{CN})| = 17$ wrt. $\text{TCA-}\tetrahedron$. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/6.pdf} \caption[]{ A (simplified) chemical network modelling the TCA cycle.
Note that any molecules not containing carbon atoms are modelled, but not depicted here. Each carbon atom is equipped with a unique id for easy reference. } \label{fig:krebbs_cay_path} \end{figure} To start the cycle, an acetyl-CoA molecule is condensed with an OAA molecule, initiating a cycle of reactions that ends up regenerating the OAA molecule while expelling two \ch{CO2} molecules and water along the way. When an original atom is expelled from the cycle, we will consider it permanently lost. The carbon atoms of the OAA molecule that we are interested in tracking are annotated with the ids 4, 5, 6, and 7. Let $\bar{z} = (4,5,6,7)$. The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ wrt. TCA-$\square$ (resp. TCA-${\tetrahedron}$) consists of 213 (resp. 67) vertices. The full Cayley graphs are depicted in Figs.~\ref{fig:krebbs_cay_full} and \ref{fig:krebbs_cay_cycle_stereo}, respectively. When a carbon atom leaves the TCA cycle we denote it by ``$\_$''. E.g., the atom state $(\_,7,6,\_)$ should be read as: the original carbon atoms with ids $4$ and $7$ have been expelled, while the carbon atoms with ids $5$ and $6$ are located at the atoms with ids $7$ and $6$, respectively. \begin{figure}[t] \centering \subfloat [] { \label{fig:krebbs_cay_full} \includegraphics[width=0.5\textwidth]{figs/7a.pdf} } \subfloat [] { \label{fig:krebbs_cay_cycle_stereo} \includegraphics[width=0.5\textwidth]{figs/7b.pdf} } \caption[]{ \subref{fig:krebbs_cay_full} The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (4,5,6,7))$ wrt. $\text{TCA-}\square$. Non-black vertices of the same color correspond to atom states that are part of the same strongly connected component. \subref{fig:krebbs_cay_cycle_stereo} The projected Cayley graph $\mathrm{PCay}(\mathrm{CN}, (4,5,6,7))$ wrt. $\text{TCA-}\tetrahedron$. Non-black vertices of the same color correspond to atom states that are part of the same strongly connected component.
} \end{figure} We can find the natural subsystems of $\mathrm{CN}$ as the strongly connected components of $\mathrm{PCay}(\mathrm{CN}, \bar{z})$. In TCA-$\square$ (resp. TCA-${\tetrahedron}$) we find 92 (resp. 51) strongly connected components, of which 8 (resp. only 1) are non-trivial. Any non-trivial strongly connected component must invariably contain at least one tour around the TCA cycle, since this is the only way the original atoms of the OAA molecule can be reused to create another OAA molecule. Moreover, any non-trivial strongly connected component represents sequences of reactions that reuse (some of) the original atoms of the OAA molecule. To simplify $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ such that only the information on the carbon traces of the atoms of OAA is depicted, we construct the simplified projected Cayley graph, denoted $\mathrm{SCay}(\mathrm{CN}, \bar{z})$, as follows: collapse any vertex in $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ that is part of a trivial strongly connected component and whose atoms are not located in an OAA molecule; moreover, for any non-trivial strongly connected component, hide the edges between atom states in the same strongly connected component; and finally, only include atom states whose atoms are located in an OAA molecule. The resulting graphs for TCA-$\square$ and TCA-${\tetrahedron}$ are depicted in Fig.~\ref{fig:vier}. Each box in the figure represents a natural subsystem that contains an atom state where every atom is either expelled or located in an OAA molecule. When ignoring the stereochemical formation of citrate, $(\_,5,6,7)$ is a grey node in $\mathrm{SCay}(\mathrm{CN}, \bar{z})$ (i.e., a representative of a strongly connected component of $\mathrm{PCay}(\mathrm{CN}, \bar{z})$), i.e., there is a trajectory where three of the four original carbons of OAA are re-used at the same location after a TCA-$\square$ cycle turnover.
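Computing the natural subsystems is routine once the projected Cayley graph is represented as an adjacency structure. A self-contained sketch using Kosaraju's algorithm on a toy graph of atom states (the states and edges below are a hand-picked illustrative fragment, not the actual 213-vertex graph):

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm. `graph` maps each vertex to its successors;
    every vertex must appear as a key. Returns a list of vertex sets."""
    order, seen = [], set()
    for root in graph:                      # first pass: finishing order
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(graph[root]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(graph[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()
    rev = {u: [] for u in graph}            # reversed graph
    for u, vs in graph.items():
        for v in vs:
            rev[v].append(u)
    components, assigned = [], set()
    for u in reversed(order):               # second pass on reversed edges
        if u in assigned:
            continue
        scc, todo = {u}, [u]
        assigned.add(u)
        while todo:
            x = todo.pop()
            for y in rev[x]:
                if y not in assigned:
                    assigned.add(y)
                    scc.add(y)
                    todo.append(y)
        components.append(scc)
    return components

# Toy fragment of PCay(CN, (4,5,6,7)); '_' marks an expelled atom and
# `mid` is a hypothetical mid-cycle state closing the turnover loop.
s0, s1 = (4, 5, 6, 7), ('_', 5, 6, 7)
mid = (9, 10, 11, 12)
s2, s3, s4 = ('_', 6, 7, '_'), ('_', 6, 5, '_'), ('_', 5, 6, '_')
cayley = {s0: [s1, s2], s1: [mid], mid: [s1, s2, s3],
          s2: [], s3: [s4], s4: [s3]}
components = strongly_connected_components(cayley)
nontrivial = [c for c in components if len(c) > 1]   # natural subsystems
```

On this fragment the non-trivial components are $\{s_1, \textit{mid}\}$ (a turnover returning to $(\_,5,6,7)$) and the swap pair $\{(\_,6,5,\_), (\_,5,6,\_)\}$, mirroring the structure discussed above.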
However, in TCA-$\tetrahedron$, only $(\_,5,\_,\_)$ is a representative of a strongly connected component, i.e., only the carbon with id 5 of OAA can be kept at the same location when a multitude of TCA-$\tetrahedron$ turnovers are executed. If that carbon changes location, it will leave the TCA cycle after exactly two more turnovers (the natural subsystems reachable from $(\_,5,\_,\_)$ do not correspond to strongly connected components), via positions $5 \rightarrow 6 \rightarrow 4 \rightarrow \_$ or via $5 \rightarrow 6 \rightarrow 7 \rightarrow \_$. To the best of our knowledge, such investigations have not been carried out formally before. \begin{figure}[t] \centering \begin{minipage}[b]{.20\textwidth} \subfloat [] { \label{fig:tca_oaa_graph}\includegraphics[width=0.7\linewidth]{./figs/8a.pdf}} \vfill \subfloat [] {\label{fig:krebbs_stereo_scay}\includegraphics[width=1.2\linewidth]{./figs/8b.pdf}} \end{minipage} \begin{minipage}[b]{.45\textwidth} \centering \subfloat [] {\label{fig:krebbs_cay_simple}\includegraphics[width=1.1\linewidth]{figs/8c.pdf}} \end{minipage} \caption[]{\subref{fig:tca_oaa_graph} The oxaloacetate molecule. The carbon atoms are equipped with the ids 4, 5, 6, and 7. \subref{fig:krebbs_stereo_scay} The simplified projected Cayley graph $\mathrm{SCay}(\mathrm{CN}, (4,5,6,7))$ when adjusting for stereospecific citrate in $tr(\mathrm{CN})$. \subref{fig:krebbs_cay_simple} The simplified projected Cayley graph $\mathrm{SCay}(\mathrm{CN}, (4,5,6,7))$ when not considering stereospecificity. } \label{fig:vier} \end{figure} Interestingly, $\mathrm{SCay}(\mathrm{CN}, \bar{z})$, as depicted in Fig.~\ref{fig:krebbs_cay_simple}, allows us to closely examine each of the possible carbon trajectories of TCA-$\square$. E.g., the fact that the atom state $(\_,6,7,\_)$ is present in $\mathrm{SCay}(\mathrm{CN}, \bar{z})$ wrt.
$\text{TCA-}\square$ means that there exists a sequence of reactions that expels the carbons with ids $4$ and $7$, but re-uses the carbon atoms with ids $5$ and $6$ to create a new OAA molecule, where $5$ is located at $6$ and $6$ is located at $7$. Structurally, the atoms $4$ and $7$ correspond to the outer atoms of the carbon backbone of the OAA molecule, while the atoms $5$ and $6$ correspond to the inner atoms of the carbon backbone. In other words, the presence of $(\_,6,7,\_)$ means that there exists a sequence of reactions that expels the outer atoms of the carbon backbone while recycling the inner atoms. Fig.~\ref{fig:krebbs_cay_simple} gives us a rough road map to determine exactly what sequence of events must have taken place in order to end up in the atom state $(\_,6,7,\_)$. We start with the atom state $(4,5,6,7)$ and see that there is an edge directly to $(\_,6,7,\_)$, meaning that we can expel the two outer atoms in a single cycle. This is, however, not the only way we can end up with the atom state $(\_,6,7,\_)$. E.g., after one cycle we can expel the carbon with id $4$ and end up with the atom state $(\_, 5, 6, 7)$, i.e., all other atoms are still in their original positions. After another cycle we can end up in the atom state $(\_,6,7,\_)$ or $(\_,6,5,4)$. Note that $(\_, 5, 6, 7)$ is part of a non-trivial strongly connected component, meaning that there exists a sequence of reactions in the TCA cycle that ends up in the exact same atom state, i.e., we expel the carbon atom at position $4$ (which is already expelled) while keeping all other atoms at their original positions. In contrast, the atom state $(\_,6,5,4)$ is part of a trivial strongly connected component, meaning that any sequence of reactions in the TCA cycle will have to change the atom state. If any non-trivial strongly connected component in Fig.~\ref{fig:krebbs_cay_simple} contains more than one vertex, it means that we can swap between atom states after a tour in the TCA cycle.
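This road-map reading can be automated: a breadth-first search over atom states recovers a shortest sequence of transformations between two states of interest. A sketch, where the three position maps are invented stand-ins for turnover transformations (not the 19 actual ones); a tracked position absent from a map is treated as expelled:

```python
from collections import deque

def find_trajectory(start, goal, transformations):
    """Shortest sequence of transformation indices taking `start` to `goal`
    in the projected Cayley graph; returns None if `goal` is unreachable.
    Each transformation maps old position -> new position; a tracked
    position without an image is expelled ('_')."""
    def step(state, tr):
        return tuple('_' if p == '_' else tr.get(p, '_') for p in state)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while parent[state] is not None:   # walk back to the start
                state, i = parent[state]
                path.append(i)
            return path[::-1]
        for i, tr in enumerate(transformations):
            nxt = step(state, tr)
            if nxt not in parent:
                parent[nxt] = (state, i)
                queue.append(nxt)
    return None

# Hypothetical turnover transformations over positions 4..7.
transformations = [
    {5: 5, 6: 6, 7: 7},   # expels the atom at position 4, keeps the rest
    {5: 6, 6: 7},         # expels 4 and 7, shifts the inner atoms outward
    {5: 6, 6: 5},         # expels 4 and 7, swaps the inner atoms
]
path = find_trajectory((4, 5, 6, 7), ('_', 6, 7, '_'), transformations)
```

Since the positions form a finite set, the search always terminates; `path` here is the single-step trajectory `[1]`.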
As an example, consider the atom states $(\_,6,5,\_)$ and $(\_,5,6,\_)$, which are both part of the same strongly connected component. The fact that they are part of the same strongly connected component means that it is possible to swap the inner atoms of the carbon backbone during a TCA cycle. If we are interested in the exact sequence of transformations that leads to the swap, we simply examine the subgraph of $\mathrm{PCay}(\mathrm{CN}, \bar{z})$ wrt. $\text{TCA-}\square$ corresponding to that natural subsystem of $\mathrm{SCay}(\mathrm{CN}, \bar{z})$ wrt. $\text{TCA-}\square$, as illustrated in Fig.~\ref{fig:krebbs_cay_cycle}. The figure depicts all possible ways to swap the positions of the atoms with ids $5$ and $6$, as the possible paths between $(\_,5,6,\_)$ and $(\_,6,5,\_)$. Fig.~\ref{fig:krebbs_cay_path} shows one such path traversing the TCA cycle without expelling any of the remaining carbon atoms. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figs/9.pdf} \caption{ The strongly connected component of $\mathrm{PCay}(\mathrm{CN}, (4,5,6,7))$ wrt. $\text{TCA-}\square$ containing the states $(\_,6,5,\_)$ and $(\_,5,6,\_)$. } \label{fig:krebbs_cay_cycle} \end{figure} \newpage \section{Conclusion} In this work, we have extended the insights provided by \parencite{IsotopeLabel2019} by showing the natural relationship between event traces, the characteristic monoid, and its corresponding Cayley graph. The projected Cayley graph provides valuable insights into local substructures of reversible event traces. We see future steps for this approach branching in at least two directions. On one hand, these methods show obvious applications in isotopic labeling design. To this end, it is natural to extend the system to model the actual process of such experiments. E.g., when doing isotopic labeling experiments with mass spectrometry, molecules are broken into fragments, and the weight of each fragment is deduced to determine its topology.
Using our model to track where the atoms might end up in such fragments and how this affects their weight seems like a natural next step. On the other hand, a more rigorous investigation of the fundamental semigroup-theoretic properties of the characteristic monoid seems appealing. As we have shown here, understanding such relations might grant insights into the nature of the examined system. \newpage \section*{Acknowledgements} This work is supported in part by Novo Nordisk Foundation grant NNF19OC0057834 and by the Independent Research Fund Denmark, Natural Sciences, grant DFF-0135-00420B. \section*{Author Disclosure Statement} Nothing to declare. \newpage \singlespacing \printbibliography \end{document}
\section{Introduction} Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version. \subsection{Language} All manuscripts must be in English. \subsection{Dual submission} Please refer to the author guidelines on the CVPR 2020 web page for a discussion of the policy on dual submissions. \subsection{Paper length} Papers, excluding the references section, must be no longer than eight pages in length. The references section will not be included in the page count, and there is no limit on the length of the references section. For example, a paper of eight pages with two pages of references would have a total length of 10 pages. {\bf There will be no extra page charges for CVPR 2020.} Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven. \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. 
The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g.\ this line is $095.5$), although in most cases one would expect that the approximate location will be adequate. \subsection{Mathematics} Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description of how to write mathematics: \url{http://www.pamitc.org/documents/mermin.pdf}. \subsection{Blind review} Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for techreports.) Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith; it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper just asking to be rejected: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. 
\end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an acceptable paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \textit{et al.} [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors14} as additional material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324, Supplied as additional material {\tt fg324.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors14b}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. 
For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the CVPR70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \textit{et al.}. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \textit{et al.}, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. \medskip \noindent FAQ\medskip\\ {\bf Q:} Are acknowledgements OK?\\ {\bf A:} No. Leave them for the final copy.\medskip\\ {\bf Q:} How do I cite my results reported in open challenges? {\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. 
For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\ \begin{figure}[t] \begin{center} \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} \end{center} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \textit{e.g.}, meaning ``for example'', should not be a sentence-ending space. So \textit{e.g.} is correct, {\em e.g.} is not. The provided \verb'\textit{e.g.}' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\textit{et al.}'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.) However, use it only when there are three or more authors. Thus, the following is correct: `` Frobnication has been trendy lately. It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \textit{et al.}~\cite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \textit{et al.}~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. If you use the \verb'\textit{et al.}' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \textit{et al.}. For this citation style, keep multiple citations in numerical (not chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to \cite{Alpher02,Alpher03,Authors14}. 
\begin{figure*} \begin{center} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \end{center} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \section{Formatting your paper} All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page. \subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high. Page numbers should appear in the footer, centered, .75 inches from the bottom of the page, and should start at your assigned page number rather than the 4321 in the example. To do this, find the line (around line 23) \begin{verbatim} \setcounter{page}{4321} \end{verbatim} where the number 4321 is your assigned starting page. Make sure the first page is numbered, by commenting out the line (around line 46) that leaves the first page empty \begin{verbatim} \end{verbatim} \subsection{Type-style and fonts} Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. \subsection{Footnotes} Please use footnotes\footnote {This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). 
If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include the name(s) of editors of referenced books. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Frobnability \\ \hline\hline Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \hline \end{tabular} \end{center} \caption{Results. Ours is better.} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \subsection{Color} Please refer to the author guidelines on the CVPR 2020 web page for a discussion of the use of color in your document. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. 
Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: \url{https://www.computer.org/about/contact}. {\small \bibliographystyle{ieee_fullname} \section{Introduction} Image captioning is a long-standing task in artificial intelligence~\cite{farhadi2010every, kulkarni2013babytalk, kuznetsova2012collective, mitchell2012midge, yang2011corpus, fang2015captions}. The task is challenging in that it requires visual perception and recognition, and natural language generation grounded in perception and real-world knowledge ~\cite{kuznetsova2012collective, yang2011corpus}. With recent progress in computer vision~\cite{he2017mask, ren2015faster}, natural language processing~\cite{devlin2018bert,Radford2018ImprovingLU, vaswani2017attention}, and vision-language understanding~\cite{li2020oscar, sharma2018conceptual, zhou2020unified}, the performance on image captioning has been substantially improved on public benchmarks like COCO~\cite{chen2015microsoft} and Flickr30k~\cite{young2014image}. However, models trained on such datasets with limited visual concepts generalize poorly to in-the-wild images~\cite{tran2016rich}. To improve image captioning in the wild, the nocaps benchmark~\cite{agrawal2019nocaps} is developed to evaluate Novel Object Captioning (NOC)\footnote{We use ``NOC'' to represent the task of novel object captioning and ``nocaps'' to refer to the nocaps benchmark.} at scale. The training data for nocaps is the COCO dataset consisting of image-caption pairs and the Open Images dataset~\cite{kuznetsova2018open} containing bounding boxes and image-level tags. The test data consists of images selected from Open Images, containing nearly $400$ objects that are not or rarely seen in the COCO dataset. This raises the challenge of how to generate captions that describe novel objects unseen in the paired image-caption training data. 
A common strategy is to resort to alternative data sources without caption supervision. Prior works on NOC~\cite{lu2018neural, wu2018decoupled} propose to generate template sentences that can be filled in with detected visual concepts for NOC. However, the relationship between image and text is not fully explored in their frameworks. We will show that the performance of NOC can be significantly improved by pursuing image-text aligned representation learning. \begin{figure}[t] \begin{center} \includegraphics[trim=10 90 10 10, clip,width=0.46\textwidth]{figures/concept_v3.pdf} \vspace{-2mm} \figcaption{VIVO pre-training uses paired image-tag data to learn a rich visual vocabulary where image region features and tags of semantically similar objects are mapped into vectors that are close to each other. Fine-tuning is conducted on paired image-caption data that only covers a limited number of objects (in blue). During inference, our model can generalize to describe novel objects (in yellow) that are learnt during VIVO pre-training.} \label{fig:concept} \end{center} \end{figure} \xiyin{In this paper, we present VIsual VOcabulary (VIVO) pre-training that leverages large amounts of vision data without caption annotations to learn a rich visual vocabulary for NOC. As shown in Figure~\ref{fig:concept}, we define visual vocabulary as a joint embedding space where image region features and tags of semantically similar objects are mapped into vectors that are close to each other, \textit{e.g.}, ``person'' and ``man'', ``accordion'' and ``instrument''. Once the visual vocabulary is pre-trained, we can fine-tune the model using image-caption pairs for caption generation. Note that the dataset used for fine-tuning only covers a small subset of the most commonly occurring objects in the learnt visual vocabulary.
Nevertheless, our model can generalize to any images that contain similar scenes (\textit{e.g.}, people sitting on a couch in Figure~\ref{fig:concept}) with novel objects unseen in the fine-tuning dataset, like ``accordion'', thanks to the pre-trained visual vocabulary.} \xiyin{The VIVO pre-training method is motivated by the goal of learning cross-modality semantic alignment, as in conventional Vision-Language Pre-training (VLP) methods. However, unlike existing VLP models, which are pre-trained using image-caption pairs, VIVO is pre-trained on image-tag pairs. To the best of our knowledge, VIVO is the first VLP method that does not rely on caption annotations. Thus, it opens the possibility of leveraging, for VLP, many existing vision datasets originally developed for image tagging or object detection tasks, like ImageNet~\cite{deng2009imagenet}, Open Images~\cite{kuznetsova2018open}, Objects365~\cite{shao2019objects365}, etc. Moreover, we can also leverage large amounts of images, paired with machine-generated tags as weak supervision signals, for VLP. \xiyin{VIVO pre-training aims to learn a joint representation of visual and text input. We feed to a multi-layer Transformer model an input consisting of image region features and a paired image-tag set. We then randomly mask one or more tags, and ask the model to predict these masked tags conditioned on the image region features and the other tags. Given that tags are not ordered, we employ the Hungarian matching loss~\cite{stewart2016end, carion2020end} for tag prediction optimization. Extensive experiments show that VIVO pre-training significantly improves the captioning performance on NOC. In addition, our model can precisely align the object mentions in a generated caption with the regions in the corresponding image.} \iffalse In this paper, we present VIsual VOcabulary (VIVO) pre-training that leverages large-scale vision data without caption annotations to enhance the image captioning model.
We cast pre-training as a process of learning a visual vocabulary, defined as a joint image-text embedding space, where each visual concept is represented by a unique feature prototype of semantically similar image regions. Different from prior Vision-Language Pre-training (VLP)~\cite{li2020oscar}, we learn cross-modal embeddings, so that in our learned visual vocabulary, each visual concept can also be described by an object tag in text. Moreover, to the best of our knowledge, we are the first to enable VLP in the absence of caption data, and thus drastically enlarge the data amount that could be used. Motivated by the qualitative visual-text alignment in OSCAR~\cite{li2020oscar}, we quantitatively evaluate the performance of VIVO pre-training on aligning image region features with object tags, leading to a better understanding on what pre-training brings for downstream tasks. We also show that the alignment is helpful for identifying novel objects at the inference time. Extensive experiments have shown that VIVO pre-training significantly improves the captioning performance on NOC. \fi In summary, we make the following contributions. \begin{tight_itemize} \item{We propose a new VIVO pre-training method that leverages large amounts of vision data without caption annotations for vision-language representation learning.} \item{\xiyin{We develop a Hungarian matching loss with masked tag prediction to conduct pre-training with image-tag pairs.}} \item With a single model, our method achieves the new state-of-the-art result on the nocaps benchmark and surpasses the human CIDEr score. \end{tight_itemize} \section{Prior Work} \Paragraph{Image Captioning} \xiyin{Prior works on image captioning have focused on exploring different model structures and learning methods for different applications. 
For example, ~\citet{song2019connecting, wang2019hierarchical, gao2019deliberate, huang2019attention, pan2020x, guo2020normalized, cornia2020meshed} explore different attention mechanisms in captioning modeling. Other works improve the performance with reinforcement learning~\cite{rennie2017self, li2019meta, yang2020fashion} or adversarial learning~\cite{chen2019improving, dognin2019adversarial}. Different applications such as dense captioning~\cite{johnson2016densecap,yin2019context, li2019learning}, grounded captioning~\cite{ma2019learning, zhou2020more}, image captioning with reading comprehension~\cite{sidorov2020textcaps} have been studied. However, all these methods assume that most of the visual objects in test data are seen in training data. Thus, they do not work well for NOC, where the objects presented in test images are often unseen in the caption-annotated training data.} \Paragraph{Novel Object Captioning (NOC)} NOC requires a model to generate image captions that describe novel objects that are unseen in the paired image-caption training data. Since the task setting resembles that in real-world applications, it draws growing interest in the research community. The early works, such as Deep Compositional Captioner~\cite{hendricks2016deep} and Novel Object Captioner~\cite{venugopalan2017captioning}, propose to use unpaired image and sentence data to transfer knowledge among semantically similar visual concepts. Empirical evaluation on the COCO dataset by holding out $8$ novel object categories suggests that these methods might be applicable to NOC. Recent studies propose to explicitly leverage the object detection results for NOC. \citet{yao2017incorporating} use LSTM-C with a copying mechanism to assemble the detected novel objects for caption generation. 
Neural Baby Talk~\cite{lu2018neural} and Decoupled Novel Object Captioner~\cite{wu2018decoupled} generate template sentences that are later filled in with visual concepts recognized by object detectors. Similarly, Constrained Beam Search~\cite{anderson2016guided} is exploited to generate captions that contain detected novel objects~\cite{agrawal2019nocaps}. None of the aforementioned methods for NOC fully exploits the relationship between image and text, which we argue is crucial to the quality of generated captions. In this study, we pre-train a Transformer model to learn a visual vocabulary where object tags are aligned with their corresponding image feature representations in a semantic space. \Paragraph{Vision and Language Pre-training} Motivated by BERT~\cite{devlin2018bert}, many VLP methods have been proposed to learn vision-language representations by pre-training large-scale Transformer models ~\cite{lu2019vilbert,tan2019lxmert,su2019vl,chen2020uniter, zhou2020unified,li2020oscar}. Most existing VLP methods are developed for understanding tasks such as image-text retrieval and visual question answering. Only a few of them ~\cite{zhou2020unified, li2020oscar} can be applied to image captioning. But these methods use paired image-caption data for pre-training, and are not applicable to NOC. In this study, we break the dependency on image-caption pairs in VLP for the first time. The proposed VIVO pre-training learns vision-language alignment on image-tag pairs, improving the image captioning results on both NOC and the general image captioning task. \section{Proposed Method} Recent image captioning models have achieved impressive results on the tasks where large amounts of paired image-caption training data is available. But they generalize poorly to images in the wild, where there are a wide variety of visual objects that are unseen in the caption corpora for training. 
For example, the models trained on COCO Captions can faithfully describe images containing objects such as ``people'', ``dogs'', or ``a couch'', but fail to generate a reasonable caption for any image containing ``an accordion'', since that object is unseen in COCO Captions. To address this problem, we propose a weakly supervised learning approach to pre-training image captioning models on image-tag pairs, which, compared to image-caption pairs, are available in larger amounts and cover many more diverse visual objects. Our approach uses a two-stage training scheme that consists of VIVO pre-training and fine-tuning. Figure~\ref{fig:overview} illustrates our approach using an example. First, in the pre-training stage (Figure~\ref{fig:overview}(a)), an image captioning model learns to label image regions with tags (\textit{e.g.}, ``person'', ``accordion'') using image-tag pairs as training data, where the object ``accordion'' is included. Then in fine-tuning (Figure~\ref{fig:overview}(b)), given image-caption pairs and their detected object tags (\textit{e.g.}, ``person'' and ``dog''), the model learns to map an image to a (reusable) caption template (\textit{e.g.}, ``[A] holding [B] ...''), and to fill the template with the object tags to form a caption (\textit{e.g.}, ``a person holding a dog''). While the caption templates are learned from image-caption pairs, the object tags to be filled in may refer to novel visual objects that are unseen in the image-caption pairs (but seen in the image-tag data in this example). Thus, our model achieves compositional generalization, allowing for zero-shot generalization to novel objects for image captioning. As shown in Figure~\ref{fig:overview}(c), at inference time the model is able to select the template ``[A] holding [B] ...'', fill it with the object tags ``person'' and ``accordion'', the latter of which is unseen in the paired image-caption training data, and compose the caption ``a person holding an accordion''.
The model architecture is shown in Figure~\ref{fig:model_arch}. It consists of multiple Transformer layers that encode the input into a feature vector, and a linear layer with softmax that generates the text description of the visual objects in the image. In what follows, we describe in detail how the model is pre-trained and fine-tuned. \subsection{VIVO Pre-training} We pre-train the Transformer model on a large-scale dataset with abundant tags, \textit{e.g.}, the Open Images training set with $6.4K$ classes of image-level tags. Unlike many existing VLP methods that rely on image-caption pairs, VIVO pre-training is conducted solely on image-tag pairs, which are much easier to collect by either human labeling or auto tagging. The training objective is to predict the missing (masked) tags given a bag of image-level tags and image regions. We denote the training set as $\mathbb{D} = \{{\bf{I}}_i, {\bf{G}}_i\}_{i=1}^{N}$ with $N$ images and their corresponding tags, where ${\bf{G}}_i = \{g_{ij}\}_{j=1}^{L_i}$ is a set of $L_i$ image-level tags associated with the image ${\bf{I}}_i$. These tags are textual labels of the visual objects present in the image, \textit{e.g.}, ``person'', ``cat'', ``dining table'', etc. In the rest of the paper, we omit the subscript $i$ for simplicity. We use a multi-layer Transformer model to learn a joint representation for both the vision and language domains. The input to the Transformer model consists of image region features ${\bf{V}}$ and tag tokens ${\bf{T}}$, where ${\bf{V}} = \{{\bf{v}}_k\}_{k=1}^{K}$ are extracted from image $\bf{I}$ using a detector trained on the Visual Genome dataset~\cite{anderson2018bottom}, and ${\bf{T}} = \{t_j\}_{j=1}^{J}$ are the tokenized tags in $\bf{G}$. During training, some tokens are randomly masked out for the model to predict. The main difference between a caption and a set of tags is that the words in a caption are ordered while tags are not.
This unordered nature may result in ambiguity in tag prediction when two or more tags are masked out simultaneously. For example, if the masked tokens are ``dog'' and ``cat'', we can predict each token in either position, without being restricted to the original positions in the input. To resolve this issue, we propose to use the Hungarian matching loss~\cite{stewart2016end, carion2020end} to formulate tag prediction as a set-matching problem. We denote the set of $M$ masked tokens as $\tilde{{\bf{T}}} = \{t_m\}_{m=1}^{M}$, where $t_m$ is the token id in the vocabulary, and the prediction probabilities of the corresponding representations in the final layer of the Transformer as ${\bf{P}} = \{{\bf{p}}_i\}_{i=1}^{M}$, where ${\bf{p}}_i$ is the vector of classification probabilities for the $i$-th masked position. Since the target tokens in $\tilde{{\bf{T}}}$ are unordered, we need a one-to-one mapping from $\tilde{{\bf{T}}}$ to ${\bf{P}}$ such that the prediction for each masked position is assigned one of the target tokens. Once such an assignment $\alpha$ is known, the loss is defined as: \begin{equation} \label{eq:loss} L(\tilde{{\bf{T}}}, {\bf{P}}, \alpha) = \sum_{i=1}^{M} (- \log({\bf{p}}_{i}(t_{\alpha(i)}))) \end{equation} where $\alpha$ is a permutation of the $M$ indices, i.e., $\alpha(i)$ is the index of the target token assigned to the $i$-th prediction. Since the assignment is unknown, we want $\alpha$ to be the best possible mapping between $\tilde{{\bf{T}}}$ and ${\bf{P}}$. Formally, we define the best possible $\alpha$ to be the one that minimizes the following total cost among all the valid\footnote{For a tag tokenized into multiple tokens, the order of tokens within the tag cannot be changed.} permutations: \begin{equation} \label{eq:alpha} \hat{\alpha} = \underset{\alpha}{\argmin}\sum_{i=1}^{M}C( {\bf{p}}_{i}, t_{\alpha(i)}), \end{equation} where $C({\bf{p}}_i, t_m) = 1 - {\bf{p}}_i(t_m)$ is the cost of assigning the target $t_m$ to the $i$-th prediction.
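For a small number of masked tokens, the optimal assignment $\hat{\alpha}$ and the resulting loss can be illustrated with an exhaustive search over permutations, which returns the same optimum as the Hungarian algorithm (a minimal illustrative sketch, not the authors' implementation; `P` stands for the per-position softmax probabilities):

```python
import math
from itertools import permutations

def hungarian_tag_loss(P, targets):
    """P[i][t] is the predicted probability of token t at masked
    position i; `targets` is the unordered list of masked token ids."""
    M = len(targets)
    best_perm, best_cost = None, float("inf")
    # Eq. (2): find the assignment minimizing the total cost
    # C(p_i, t) = 1 - p_i(t); for small M, brute force over
    # permutations matches the Hungarian algorithm's optimum.
    for perm in permutations(range(M)):
        cost = sum(1.0 - P[i][targets[perm[i]]] for i in range(M))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    # Eq. (1): cross-entropy under the best assignment
    return sum(-math.log(P[i][targets[best_perm[i]]]) for i in range(M))
```

Even when the targets are given in a different order than the model's confident predictions, the matching assigns each prediction its best-fitting target before the log-loss is computed.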
The reason why we use $C({\bf{p}}_i, t_m)$ instead of $-\log({\bf{p}}_{i}(t_{\alpha(i)}))$ as in \eqref{eq:loss} is that it is bounded. Now we can compute the final loss as $L(\tilde{{\bf{T}}}, {\bf{P}}, \hat{\alpha})$, where $L$ is defined in \eqref{eq:loss} and $\hat{\alpha}$ is defined in \eqref{eq:alpha}. As shown in Figure~\ref{fig:overview} (a), we use bi-directional attention mask in VIVO pre-training. In order to predict a missing tag, the model will have to resort to image region features and the other tags. So it learns a joint representation containing information from both image regions and textual tags. This facilitates the cross-modality alignment between representations of image regions and tags. \iffalse The task is to assign the $M$ token ids as the targets for $M$ predictions such that the loss is minimized. To achieve this, we first search a permutation of $M$ indexes $\alpha$ where $\alpha(i)$ represents the index of the targeted token id of the $i^{th}$ masked token. The optimal assignment is computed as: \begin{equation} \hat{\alpha} = \underset{\alpha}{\argmin}\sum_{i}^{M}L(t_i, {\bf{p}}_{\alpha(i)}), \end{equation} where $L(t_i, {\bf{p}}_{\alpha(i)})$ is the loss by assigning ground-truth token id $t_i$ to the prediction of ${\bf{p}}_{\alpha(i)}$. We use standard softmax cross-entropy loss for classification. The final loss is calculated with the optimal assignment: \begin{equation} L({\bf{T}}_m, {\bf{P}}) = \sum_{i}^{M} (- \log({\bf{p}}_{\alpha(i)}(t_i))) \end{equation} As shown in Figure~\ref{fig:overview} (a), we use bi-directional attention mask to facilitate the cross-modality alignment between image regions and tags via the Hungarian matching loss in masked tag prediction. After VIVO pre-training, we expect the model to be capable of recognizing the trained visual concepts in different images. 
\fi \subsection{Fine-tuning and Inference} After pre-training, the Transformer model is fine-tuned on a dataset where both captions and tags are available, \textit{e.g.}, the COCO set annotated with tags from $80$ object classes and captions. The tags can also be automatically generated using a pre-trained tagging or detection model. Given image regions and tags, the model learns to predict the conditional caption sentence where some positions are randomly masked out. More specifically, the input to the model during fine-tuning is a triplet of image region features ${\bf{V}}$, a set of tags $\bf{T}$ and a caption ${\bf{C}}$, where ${\bf{V}}$ and $\bf{T}$ are constructed in the same way as described in pre-training, and ${\bf{C}}$ is a sequence of tokens. During fine-tuning, we randomly mask out some of the tokens in a caption sentence for prediction, and optimize the model parameters using the cross-entropy loss. To make the model generate captions from left to right at inference time, during fine-tuning we apply the uni-directional attention mask on a caption sequence to prevent the positions from attending to subsequent positions. During inference, we first extract image region features and detect tags from a given image. Then the model is applied to generate a sequence, one token at a time, until it outputs the end of sentence token or reaches the maximum length. At each step the model is auto-regressive, consuming the previously generated tokens as additional input when generating the next. In the next section, we present extensive experimental results, showing that our model can generate captions to describe novel objects and that the alignment between image regions and tags, learned from VIVO pre-training, is crucial to the model's superior performance on NOC. 
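The auto-regressive inference procedure described above can be sketched as a greedy decoding loop (an illustrative sketch under our own naming; `next_token_logits` stands in for the fine-tuned Transformer and is an assumption, not the paper's API, and the end-of-sentence id is a placeholder):

```python
# Hypothetical greedy decoding loop: generate one token at a time,
# conditioning on image regions, detected tags, and the tokens
# generated so far, until the EOS token or the maximum length.
def greedy_decode(next_token_logits, region_feats, tags,
                  eos_id=102, max_len=20):
    caption = []
    for _ in range(max_len):
        logits = next_token_logits(region_feats, tags, caption)
        token = max(range(len(logits)), key=logits.__getitem__)
        if token == eos_id:
            break
        caption.append(token)
    return caption
```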
\section{Experiments} \subsection{Experimental Settings} \Paragraph{Datasets} We use the Open Images V5 challenge training set, which has $1.7M$ images, for VIVO pre-training.
We select $500$ classes\footnote{Only $500$ out of $600$ objects are used in the challenge set, as we further refine the labels by removing classes that are ``parts'' (\textit{e.g.}, human eyes).} from bounding box annotations and $6.4K$ classes from human-verified image-level labels. The joint image-tag pairs, containing $6.4K$ unique classes in total, are used in VIVO pre-training. In the fine-tuning stage, the training data is the COCO training set of $118K$ images, each with $5$ captions. We evaluate our model on the validation and test sets of nocaps, which consist of $4.5K$ and $10.6K$ images from the Open Images validation and test sets, respectively. \Paragraph{Implementation Details} We use the object detector from UpDown~\cite{anderson2018bottom} to extract image region features, \xiyin{which are concatenated with scaled bounding boxes to form a $2054$-dimensional vector \leizhang{($2048$D for the visual features and $6$D for the bounding box encoding, including the top-left and bottom-right corners as well as the box's width and height)}. We use an object detector trained on the Open Images dataset to detect object tags for all datasets. For pre-training and fine-tuning, we also add the ground-truth tags from the training sets.} No ground-truth tags are used on the nocaps validation and test sets. The Transformer model is initialized from BERT-base~\cite{devlin2018bert}, where we add a linear layer to transform the image region features to vectors of the same size as the word embeddings. In VIVO pre-training, we use a maximum of $50$ image regions and $15$ tag tokens per image. The model is trained for $160K$ iterations (about $100$ epochs) with a batch size of $1024$ and a learning rate of $5\times10^{-5}$. In fine-tuning, we set the maximum caption length to $40$ and the maximum tag length to $30$. The model is trained for $30$ epochs with a batch size of $256$ and a learning rate of $5\times10^{-5}$, optimized using the cross-entropy loss.
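The region input described above, a $2048$-D visual feature concatenated with a $6$-D scaled box encoding, can be sketched as follows (the exact normalization is our reading of the text, not released code):

```python
# Sketch of the 2054-D region input: the 2048-D detector feature is
# concatenated with a 6-D box encoding [x1, y1, x2, y2, w, h], with
# coordinates scaled by the image width and height.
def encode_region(feat_2048, box, img_w, img_h):
    x1, y1, x2, y2 = box
    box_enc = [x1 / img_w, y1 / img_h,
               x2 / img_w, y2 / img_h,
               (x2 - x1) / img_w, (y2 - y1) / img_h]
    return list(feat_2048) + box_enc
```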
To further boost the performance, we perform the SCST optimization~\cite{rennie2017self} with a learning rate of $2\times10^{-6}$ for $5$ epochs. During inference, we use greedy decoding to generate image captions with a maximum length of $20$. \subsection{Novel Object Captioning} We compare our method with UpDown~\cite{anderson2018bottom, agrawal2019nocaps} and OSCAR\footnote{We compare with OSCAR base, whose model size is the same as ours. In fact, our model with $12$ layers and a hidden size of $768$ even outperforms the OSCAR large model.}~\cite{li2020oscar}, which holds the state-of-the-art result on the nocaps benchmark. The training data for the baselines is the COCO dataset. Following prior settings, we also report the results after our model is optimized using SCST~\cite{rennie2017self} and generates captions using Constrained Beam Search (CBS)~\cite{anderson2016guided}. The evaluation results on the nocaps \kevin{validation} and test sets are shown in Table~\ref{tab:nocaps_caption}. By leveraging VIVO pre-training on the Open Images dataset, our method achieves significant improvements over all prior works. Our plain version (VIVO) already outperforms UpDown+ELMo+CBS and OSCAR by a large margin. It is worth noting that CBS brings absolute gains of $17.8\%$ and $15.5\%$ for UpDown and OSCAR, respectively, but it only improves VIVO by $3.8\%$. This suggests that our model is more capable of generating captions with novel objects without explicitly adding any constraints. Our best results set a new state of the art and surpass the human CIDEr score on the overall dataset. \input{nocaps_table} \kevin{To quantitatively evaluate how well the model can describe novel objects,} we also calculate the F1-score following~\citet{hendricks2016deep}, where all the objects mentioned in the generated caption sentences are compared against the ground-truth object tags. Table~\ref{tab:nocaps_f1} shows the comparison with OSCAR on the nocaps validation set.
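The per-class F1-score can be sketched as follows (a simplified reading of the protocol of \citet{hendricks2016deep}, not the official evaluation code): an object counts as a true positive on an image when it is both mentioned in the generated caption and present in the ground-truth tags.

```python
# Simplified per-class F1: `mentioned_ids` is the set of image ids
# whose generated caption mentions the object; `ground_truth_ids` is
# the set of image ids where the object appears in the ground truth.
def object_f1(mentioned_ids, ground_truth_ids):
    tp = len(mentioned_ids & ground_truth_ids)
    if tp == 0:
        return 0.0
    precision = tp / len(mentioned_ids)
    recall = tp / len(ground_truth_ids)
    return 2 * precision * recall / (precision + recall)
```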
We see that VIVO substantially improves over OSCAR in F1-score, especially for out-of-domain objects. This again verifies the effectiveness of VIVO pre-training in learning to recognize novel objects for NOC. Although object tags are used in both the VIVO pre-training and fine-tuning stages, we show that the model's capability of generating captions that precisely describe novel objects at inference time is largely attributable to pre-training. We compare the distributions of object tags on COCO and nocaps, which are generated by the object detector trained on the Open Images dataset and used for fine-tuning and inference, respectively. As shown in Table~\ref{tab:nocaps_dist}, COCO has a long-tail distribution where $415$ out of $568$ categories account for only $2.43\%$ of all the tags. The under-representation of novel objects makes the trained model statistically unlikely to generate plausible captions that describe these novel objects. Therefore, our VIVO pre-training, which mitigates the data imbalance issue by leveraging diverse tags in image-tag pairs, is crucial to improving the model's generalization, as empirically demonstrated on NOC. \begin{table}[h] \begin{center} \caption{Comparison of F1-scores (in \%) on object classes of Open Images, evaluated on the nocaps validation set. There are $504$ classes in total. $105$ of them are in-domain: the $80$ common classes from COCO and $25$ objects frequently appearing in COCO Captions. The remaining $399$ classes are the out-of-domain objects.} \small \label{tab:nocaps_f1} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule model & in-domain & out-of-domain & entire \\ \midrule OSCAR~\cite{li2020oscar} & $39.5$ & $15.7$ & $20.7$ \\ VIVO & $\bf 46.3$ & $\bf 30.6$ & $\bf 33.8$ \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{Distribution of $568$ object categories on COCO training images and nocaps validation images.
Each column is a subset of object categories whose number of occurrences are below the threshold. The percentage is calculated by dividing the counts of those objects by the total counts of all objects in the dataset.} \small \label{tab:nocaps_dist} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule $\#$occur in COCO ($<=$) & $0$ & $10$ & $100$ & $1K$ & $10K$ \\ \midrule $\#$categories & $194$ & $274$ & $415$ & $522$ & $563$ \\ percentage in COCO &$0.0$ & $0.14$ & $2.43$ & $15.62$ & $64.01$ \\ percentage in nocaps &$0.24$ & $5.05$ & $15.98$ & $35.71$ & $69.91$ \\ \bottomrule \end{tabular} \end{center} \end{table} \iffalse \begin{figure}[t] \begin{center} \begin{tabular}{@{}c@{}c@{}} \includegraphics[width=0.5\columnwidth]{figures/feat_align_OID.pdf} & \includegraphics[width=0.5\columnwidth]{figures/related_feat_align_OID_0905.pdf} \\ [-1mm] (a) & (b) \\ [-2mm] \end{tabular} \end{center} \figcaption{Analysis of visual-text alignment on the Open Images validation set. (a) Feature distance of image region and tag pairs for both common and uncommon objects. (b) Relative feature distance for common and uncommon objects during VIVO pre-training.} \label{fig:alignment} \end{figure} \fi \begin{figure}[t] \begin{center} \includegraphics[trim=50 105 150 90, clip,width=0.5\textwidth]{figures/model_arch.pdf} \figcaption{\kevin{Overview of our VIVO pre-trained Transformer model. Our model consists of multiple Transformer encoder layers followed by a linear layer and a softmax layer. We use masked tag prediction to conduct pre-training. To analyze the visual-text alignment, we use the outputs of the last layer of the encoder layers to estimate the cosine similarity between the image region and tag.}} \label{fig:model_arch} \end{center} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[trim=25 125 25 15, clip,width=1\textwidth]{figures/visual_nocaps_v2.pdf} \figcaption{Image captioning results on nocaps. 
B: our baseline without VIVO pre-training. V: our approach with VIVO pre-training. \textcolor{red}{Red} text represents novel objects. For each image, we show the similarity scores of each image region to the novel objects appearing in the captions. The bounding box color is brighter when the similarity is higher. } \label{fig:nocaps} {\vspace{-4mm}} \end{center} \end{figure*} \subsection{Visual-Text Alignment} \kevin{To further understand the effects} of VIVO pre-training in learning a visual vocabulary, which aligns image regions with object tags, we show how novel object tags can be grounded in image regions in Figure~\ref{fig:nocaps}. Given images from the Open Images validation set, we extract image region features using the same object detector from UpDown and generate captions with the VIVO pre-trained captioning model. After identifying the novel objects in the generated captions, \kevin{as shown in Figure~\ref{fig:model_arch}}, we feed the novel object tags, together with the extracted image region features, to the VIVO pre-trained Transformer model. The output of the last encoder layer is used as the contextualized representation of the corresponding input. We then calculate the cosine similarity between the representations of each pair of image region and object tag. We highlight the pairs with high scores in Figure~\ref{fig:nocaps}. The result shows that our model can precisely align the mentions of these novel objects in captions with the corresponding image regions. \begin{table}[t] \begin{center} \caption{Evaluation on the COCO test set of the Karpathy split~\cite{karpathy2015deep}.
All results are based on single model with cross-entropy optimization.} \small \label{tab:cc_oid} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule pre-training & \kevin{BLEU4} & \kevin{Meteor} & \kevin{CIDEr} & \kevin{SPICE} \\ \midrule NO & $33.7$ & $27.9$ & $114.7$ & $21.2$ \\ CC (OSCAR) & $34.8$ & $\bf 28.4$ & $118.2$ & $21.6$\\ CC (OSCAR) + OI (VIVO) & $\bf 34.9$ & $\bf 28.4$ & $\bf 119.8$ & $\bf 21.7$\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{General Image Captioning} VIVO pre-training does not require the paired image-caption data for model training as in conventional VLP methods. It opens up an opportunity to leverage additional data sources to improve image captioning models. To demonstrate the effectiveness of VIVO pre-training on general image captioning tasks, we trained two versions of OSCAR, following the setting in~\citet{li2020oscar}. The first OSCAR model is trained solely on Conceptual Captions (CC)~\cite{sharma2018conceptual}, as described in \citet{li2020oscar}. The second OSCAR model is pre-trained using VIVO on Open Images (OI), and then fine-tuned on CC. As shown in Table~\ref{tab:cc_oid}, VIVO pre-training improves the model performance across all metrics evaluated on the COCO test set, especially in CIDEr score. \xiyin{We do observe, however, that the gain on the COCO benchmark is not as substantial as that on the nocaps benchmark. We conjecture that this is due to the COCO dataset containing only a small number of visual concepts and thus diminishing the benefit of learning a large visual vocabulary. \leizhang{It is also worth noting that using machine-generated image tags rather than human-written captions makes it possible to utilize potentially unlimited amounts of images, which we will pursue in our future work.}} \begin{table}[ht] \begin{center} \tabcaption{Ablation study of VIVO pre-training using different tag sizes. 
Results are evaluated on the entire validation set of nocaps.} {\vspace{0mm}} \small \label{tab:tag} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule \kevin{Tag size} & \kevin{BLEU4} & \kevin{Meteor} & \kevin{CIDEr} & \kevin{SPICE} \\ \midrule $0$ (w/o VIVO) & $18.3$ & $24.2$ & $69.6$ & $11.3$ \\ $500$ classes & $20.6$ & $25.4$ & $76.5$ & $11.9$ \\ $6.4K$ classes & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \tabcaption{Ablation study of the proposed Hungarian matching loss. Results are evaluated on the entire validation set of nocaps. } {\vspace{0mm}} \small \label{tab:loss} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule Loss & \kevin{BLEU4} & \kevin{Meteor} & \kevin{CIDEr} & \kevin{SPICE} \\ \midrule Mask only one token & $20.6$ & $25.2$ & $74.9$ & $11.8$ \\ w/o Hungarian matching & $21.0$ & $25.4$ & $75.8$ & $11.8$ \\ \kevin{w/} Hungarian matching & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Ablation Study} We select a subset of $10\%$ images from the Open Images training set to conduct an ablation study. We fine-tune with cross-entropy loss on the COCO dataset and report the performance on the nocaps validation set. \Paragraph{Using a Larger Set of Tags} We investigate whether using a larger set of tags in pre-training improves performance of the downstream image captioning task. We select $500$ classes of objects, which are used to train the object detector, from the overall $6.4K$ classes of tags to conduct VIVO pre-training. As shown in Table~\ref{tab:tag}, VIVO pre-training with $500$ classes significantly improves the performance on nocaps by $6.9\%$ compared to no pre-training. 
Expanding the labels to $6.4K$ classes can further improve the performance, although the gain is limited due to the increased diversity of objects presented in test images. \Paragraph{Using Hungarian Matching Loss} We evaluate the effectiveness of the proposed Hungarian matching in VIVO pre-training to predict a set of tags. Training without Hungarian matching reduces the tag prediction to the standard masked language modeling task, which predicts the masked tokens in the same order as that in the input sequence. In addition, we also perform VIVO pre-training by masking only one token in input, which makes word order information not useful. The evaluation results on the nocaps validation set are in Table~\ref{tab:loss}. We can see that masking only one token is not effective, and using Hungarian matching leads to the best model performance. \iffalse \Paragraph{Contrastive Loss} Most vision-language pre-training work adopts the contrastive loss used in BERT. However, our training target is a set of labels. We presume that for some images the tags are less discriminative than the sentence description, so contrastive loss does not equally serve its purpose. In Table~\ref{tab:contrast}, we compare two models, with and without contrastive loss, respectively. The former randomly gets labels of another image with a probability of $50\%$, and MLM is not applied when the pair is mismatched. The latter always gets its own labels. \begin{table}[h] \begin{center} \tabcaption{Comparison of VIVO pre-training with and without contrastive loss. 
Results are evaluated on the entire validation set of nocaps.} {\vspace{0mm}} \small \label{tab:contrast} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule model & B4 & M & C & S \\ \midrule w/ contrastive & $21.1$ & $25.4$ & $76.8$ & $11.8$ \\ w/o contrastive & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \fi \iffalse \Paragraph{Constructing Sub-sentence Sequences} We compare different approaches to constructing the input text sequence. In previous language modeling work, the input sequence is usually a tokenized sentence, starting and ending with special tokens, [CLS] and [SEP], respectively. However, our input consists of multiple tags. The combination of tags form a sub-sentence, making it different from a fluent whole sentence. We compare different ways to handle the sub-sentence input and show that our proposed method performs the best. The results of some selected methods are in Table~\ref{tab:sequence}. \begin{itemize} \item \textbf{Sentence}: concatenate tags to mimic a sentence, start with [CLS], end with [SEP]. \item \textbf{Separated}: add [SEP] to the end of each tag, then concatenate them together. \item \textbf{Joined}: concatenate tags, followed by [SEP] at the end of sequence. 
\end{itemize} \begin{table}[h] \begin{center} \tabcaption{Comparison of different ways to construct the input text sequence, which consists of multiple tags.} {\vspace{0mm}} \small \label{tab:sequence} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule input sequence & B4 & M & C & S \\ \midrule Sentence & $20.9$ & $25.3$ & $75.6$ & $11.9$ \\ Separated & $21.2$ & $25.4$ & $76.5$ & $11.9$ \\ Joined & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \fi \iffalse \begin{figure}[t] \begin{center} \begin{tabular}{@{}c@{}c@{}} \includegraphics[width=0.5\columnwidth]{figures/tsne_vivo.pdf} & \includegraphics[width=0.5\columnwidth]{figures/tsne_bertbase.pdf} \\ [-1mm] (VIVO) & (Baseline) \\ [-2mm] \end{tabular} \end{center} \figcaption{TSNE visualization.} \label{fig:tsne_alignment} \end{figure} \fi \section{Conclusions} We have presented a weakly supervised learning approach to training image captioning models in two steps. First, a Transformer-based model is pre-trained on large amounts of image-tag pairs to learn a visual vocabulary, without the need for image-caption pairs, which are harder to obtain. Then, the model is fine-tuned on image-caption pairs to learn to incorporate information from the pre-trained visual vocabulary and compose image captions that can describe novel visual objects unseen in the image-caption training data. Our experiments on the nocaps benchmark dataset demonstrate that our model achieves compositional generalization, allowing for zero-shot generalization to novel objects for image captioning. As a result, our best single model establishes a new state of the art that surpasses the human CIDEr score on nocaps.
A detailed analysis reveals that the generalization is attributable in large part to the visual vocabulary learned in model pre-training, which maps visual objects or regions with similar semantic meanings to feature vectors that are close to each other in a discrete semantic space. Since our pre-training does not need paired image-caption data, one direction for future work is to leverage large amounts of vision data, beyond the image-tag pairs used in this paper, to significantly improve the quality of the visual vocabulary. \section{Acknowledgements} We thank Jianfeng Wang, Ehsan Azarnasab, Lin Liang, Pengchuan Zhang, Xiujun Li, Chunyuan Li, Jianwei Yang, Yu Wang, Houdong Hu, Furu Wei, Dong Li for valuable discussions and comments. \section{Visual Vocabulary Visualization} To further understand the effects of VIVO, we conduct a qualitative comparison between the feature spaces learnt from the baselines and VIVO using $t$-SNE~\cite{maaten2008visualizing}. We randomly sample $30$ object categories from the nocaps validation set, and visualize the representations of the image regions and object tags. Figure~\ref{fig:tsne_comparison} shows the comparison of two baselines and VIVO. The results show that VIVO compares favorably with the baselines in visual-text alignment. We enlarge the $t$-SNE visualization results of Figure~\ref{fig:tsne_comparison}(e), Figure~\ref{fig:tsne_comparison}(c), and Figure~\ref{fig:tsne_comparison}(f) in Figure~\ref{fig:tsne_alignment_baseline_ft}, Figure~\ref{fig:tsne_alignment_VIVO}, and Figure~\ref{fig:tsne_alignment_VIVO_ft}, respectively. The results reveal some interesting findings: (\textit{i}) We observe that VIVO pre-training is helpful in learning a better cross-modality alignment compared to the baselines. (\textit{ii}) Fine-tuning with paired image-caption training data can further improve the alignment between the two modalities.
(\textit{iii}) In Figure~\ref{fig:tsne_comparison}(e), the alignment of the baseline is better for the objects that frequently occur in the caption corpora, \textit{e.g.}, motorcycle, pizza, but worse for novel objects, \textit{e.g.}, violin, drum, grape. (\textit{iv}) VIVO improves the alignment overall, especially for novel objects. \begin{figure*}[t] \begin{center} \includegraphics[trim=100 80 90 100, width=1.\textwidth]{figures/tsne_baseline_ft_new.pdf} \end{center} \figcaption{$t$-SNE visualization of the baseline with BERT initialization and fine-tuning, as shown in Figure~\ref{fig:tsne_comparison}(e). The marker ``$\times$'' with the same color indicates the same object class. We observe that the alignment is better for the objects commonly present in the caption corpora, \textit{e.g.,} pizza, motorcycle, but worse for novel objects, \textit{e.g.,} grape, violin, drum, strawberry.} \label{fig:tsne_alignment_baseline_ft} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[trim=100 80 90 100, width=1.\textwidth]{figures/tsne_vivo_pretrain.pdf} \end{center} \figcaption{$t$-SNE visualization of the VIVO pre-trained model, as shown in Figure~\ref{fig:tsne_comparison}(c). The marker ``$\times$'' with the same color indicates the same object class. With the help of VIVO pre-training, we see that the image region features and object tags are better aligned compared to the baselines. } \label{fig:tsne_alignment_VIVO} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[trim=100 80 90 100, width=1.\textwidth]{figures/tsne_vivo_ft.pdf} \end{center} \figcaption{$t$-SNE visualization of the VIVO fine-tuned model, as shown in Figure~\ref{fig:tsne_comparison}(f). The marker ``$\times$'' with the same color indicates the same object class.
Our model improves the visual-text alignment overall, especially for novel objects.} \label{fig:tsne_alignment_VIVO_ft} \end{figure*} \section{Implementation Details} Our transformer-based image captioning model consists of $12$ transformer layers for encoding and a linear layer for prediction. Note that the model does not have any decoder layer. We use WordPiece embedding~\cite{wu2016google} with a $30,000$ token vocabulary to represent input words, including both object tags and captions. For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. In addition, we use the Faster R-CNN model from UpDown~\cite{anderson2018bottom} to extract image region features and a tagging model trained on the Open Images dataset to predict tags. The transformer model is first pre-trained and then fine-tuned, and applied iteratively at inference time to generate the output. \Paragraph{Pre-training} As described in the main text of the paper, our model consumes a set of tags as textual inputs during pre-training. In addition to the ground truth labels from the Open Images training set, we also use the predictions from our tagging model to enhance the label quality and mitigate the incompleteness of the labels in the Open Images dataset. We tokenize the tags, and concatenate the tokens into a sequence. We also add the special token [SEP] at the end of the sequence. Following the masked language modeling of BERT, we randomly choose $15\%$ of the tokens for prediction, \textit{i.e.}, replacing the chosen token with (1) the [MASK] token $80\%$ of the time; (2) a random token $10\%$ of the time; (3) the unchanged token $10\%$ of the time. We concatenate the textual feature sequence and the visual feature sequence to form the input to the model. \Paragraph{Fine-tuning} The textual input encompasses a caption sentence (i.e., the ground truth of COCO Captions) and a set of tags (i.e., the predictions of the tagging model).
The sequence for the caption always starts with the [CLS] token and ends with the [SEP] token. The sequence for tags is constructed in the same way as described in pre-training. To differentiate the caption from the tags, we add a learned segment embedding to every token indicating whether it belongs to the caption or the tag sequence. In fine-tuning, we only mask tokens from the caption for prediction. The caption feature sequence, tag feature sequence and visual feature sequence are concatenated and fed into the model. \Paragraph{Inference} At inference time, the model's input contains three parts: the partial caption predicted so far, a set of predicted tags, and image region features. At the beginning, the caption part is a [CLS] token followed by a [MASK] token. We feed the three-part input to the model and obtain the prediction at the position of the [MASK] token. In the next step, we replace the previous [MASK] token with the prediction, and insert another [MASK] token at the end of the caption sequence. This step iterates until the end-of-sentence token, \textit{i.e.}, the [SEP] token, is predicted or the maximum length is reached. In this way, the model generates a caption sentence from left to right. \iffalse \section{More Ablation Study} \Paragraph{Constructing Sequence with Tags} We compare different approaches to construct the input text sequence in VIVO pre-training. In previous language modeling work, the input sequence is usually a tokenized sentence, starting and ending with special tokens, [CLS] and [SEP], respectively. However, our input consists of multiple tags. The combination of tags is different from a fluent whole sentence. We compare different ways to handle the bag of tags input and show that our proposed method performs better. The results of some selected methods are in Table~\ref{tab:sequence}. \begin{itemize} \item \textbf{Sentence}: concatenate tags to mimic a sentence, start with [CLS], end with [SEP].
\item \textbf{Separated}: add [SEP] to the end of each tag, then concatenate them together. \item \textbf{Joined}: concatenate tags, followed by [SEP] at the end of sequence. \end{itemize} \begin{table}[h] \begin{center} \tabcaption{Comparison of different ways to construct the input text sequence in VIVO pre-training, which consists of multiple tags. Results are evaluated on the entire validation set of nocaps.} {\vspace{0mm}} \small \label{tab:sequence} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule input sequence & BLEU4 & Meteor & CIDEr & SPICE \\ \midrule Sentence & $20.9$ & $25.3$ & $75.6$ & $11.9$ \\ Separated & $21.2$ & $25.4$ & $76.5$ & $11.9$ \\ Joined & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \Paragraph{Contrastive Loss} Most vision-language pre-training work adopts the contrastive loss, analogous to the Next Sentence Prediction loss used in BERT, to predict whether the image and caption are matched. However, our training target is a set of tags. We presume that for some images the tags are less discriminative than the sentence description, so contrastive loss does not equally serve its purpose. In Table~\ref{tab:contrast}, we compare two models, with and without contrastive loss, respectively. The former randomly gets labels of another image with a probability of $50\%$, and MLM is not applied when the pair is mismatched. The latter always gets its own labels. It shows that the model without contrastive loss is slightly better. \begin{table}[h] \begin{center} \tabcaption{Comparison of VIVO pre-training with and without contrastive loss. 
Results are evaluated on the entire validation set of nocaps.} {\vspace{0mm}} \small \label{tab:contrast} \begin{tabular}{l@{\hspace{2mm}}|c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \toprule model & BLEU4 & Meteor & CIDEr & SPICE \\ \midrule w/ contrastive & $21.1$ & $25.4$ & $76.8$ & $11.8$ \\ w/o contrastive & $\bf 21.2$ & $\bf 25.4$ & $\bf 77.8$ & $\bf 12.0$ \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Effects of Constrained Beam Search} While our model has achieved substantial improvement on generating captions with novel objects, there is no absolute guarantee that the generated captions always include novel object tags. So the model can still take advantage of Constrained Beam Search (CBS)~\cite{anderson2016guided}, which forces the 100 percent inclusion of selected tag words in the caption output, to further boost the performance especially on out-of-domain images. In our experiments, CBS increases the CIDEr scores on the nocaps validation set by $1.6\%$, $1.7\%$, $11.9\%$, for in-domain, near-domain and out-of-domain images, respectively. We show some successful examples of using CBS to decode captions with provided novel object tags in image (a), (b), (c) of Figure~\ref{fig:nocaps_cbs}. However, we also observe that despite the benefit of deterministic mentions of given objects brought from CBS, the model sometimes generates broken sentences due to the low probability of the CBS-enforced word, which breaks the statistics learned in the language model. Our VIVO model can alleviate, but not eliminate, this problem by predicting the words of novel objects with higher probabilities, and thus help to generate smooth and/or logically correct captions. Take the examples in Figure~\ref{fig:nocaps_cbs}. In image (d), both models fail to generate a reasonable sentence describing ``tie'' and ``miniskirt''. 
In image (e) and (f), the baseline model without VIVO pre-training seems to integrate novel objects in captions after adding CBS, the sentence is nevertheless logical nonsense. By contrast, our VIVO model correctly uses the provided constraints, \textit{i.e.}, ``beehive'' and ``cabbage'' in the CBS caption outputs, in place of ``box'' and ``plant'' in the original captions, respectively. We posit that the word embedding learned from pre-training, \textit{i.e.}, the Visual Vocabulary where representations of semantically similar objects are close to each other, facilities the vocabulary expansion to unseen objects, which is the key of generalization at inference time for CBS. \begin{figure*}[t] \begin{center} \includegraphics[trim=5 5 60 5, clip,width=1\textwidth]{figures/visual_cbs.pdf} \figcaption{Some images from the nocaps validation set and the corresponding captions generated with and without CBS. \textbf{B}: the baseline model without VIVO pre-training. \textbf{V}: our proposed model with VIVO pre-training. \textbf{B+C} (\textbf{V+C}): the baseline (VIVO) model with CBS at inference time. The constraints given to the CBS are shown in \textcolor{green}{green}. The objects generated by the original model without CBS are shown in \textcolor{red}{red}. 
} \label{fig:nocaps_cbs} {\vspace{-4mm}} \end{center} \end{figure*} \fi \begin{comment} \begin{figure*}[t] \setlength{\tabcolsep}{0.0pt} \renewcommand{\arraystretch}{2} \begin{tabular}{rccc} & Baseline (Random init) & Baseline (BERT init) & VIVO \\ \rotatebox[origin=c]{90}{\small{Pre-training}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_randominit_pretrain.pdf}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_baseline_bertbase.pdf}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_vivo_pretrain.pdf}}\\ & (a) & (b) & (c)\\ \rotatebox[origin=c]{90}{\small{Fine-tuning}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_randominit_ft.pdf}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_baseline_ft_new.pdf}} & \raisebox{-.5\height}{\includegraphics[trim=80 80 80 80,clip,height=.33\textwidth]{./figures/tsne_vivo_ft.pdf}}\\ & (d) & (e) & (f)\\ \end{tabular} \caption{Comparison of $t$-SNE visualizations of the baselines and our model. In (a) and (d), we see that the baseline with random initialization does not work well for visual-text alignment. In (b), the baseline with BERT initialization does not align the two modalities at first, but the alignment is improved after fine-tuning, as shown in (e). 
In contrast, our approach improves visual-text alignment in both pre-training and fine-tuning, as shown in (c) and (f).} \label{fig:feat-visual} \end{figure*} \end{comment}
\newcommand{\Section}[1]{\vspace{-1mm} \section{#1} \vspace{-1mm}}
\newcommand{\SubSection}[1]{\vspace{-1mm} \subsection{#1} \vspace{-1mm}}
\makeatletter \newcommand\figcaption{\def\@captype{figure}\caption} \newcommand\tabcaption{\def\@captype{table}\caption} \makeatother
\newenvironment{tight_itemize}{ \begin{itemize} \setlength{\topsep}{0pt} \setlength{\itemsep}{2pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{itemize}}
\nocopyright
\pdfinfo{ /Title (VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training) /Author (Xiaowei Hu, Xi Yin, Kevin Lin, Lijuan Wang, Lei Zhang, Jianfeng Gao, Zicheng Liu) }
\setcounter{secnumdepth}{0}
\title{VIVO: Surpassing Human Performance in Novel Object Captioning \\ with Visual Vocabulary Pre-Training}
\author{ Xiaowei Hu, Xi Yin, Kevin Lin, Lijuan Wang, Lei Zhang, \\ Jianfeng Gao, Zicheng Liu \\ }
\affiliations { Microsoft Corporation \\ \{xiaowh, keli, lijuanw, leizhang, jfgao, zliu\}@microsoft.com \\ yinxi.whu@gmail.com }
\begin{document} \maketitle
\begin{abstract} It is highly desirable yet challenging to generate image captions that can describe novel objects which are unseen in caption-labeled training data, a capability that is evaluated in the novel object captioning challenge (nocaps). In this challenge, no additional image-caption training data, other than COCO Captions, is allowed for model training. Thus, conventional Vision-Language Pre-training (VLP) methods cannot be applied.
This paper presents VIsual VOcabulary pre-training (VIVO) that performs pre-training in the absence of caption annotations. By breaking the dependency of paired image-caption training data in VLP, VIVO can leverage large amounts of paired image-tag data to learn a visual vocabulary. This is done by pre-training a multi-layer Transformer model that learns to align image-level tags with their corresponding image region features. To address the unordered nature of image tags, VIVO uses a Hungarian matching loss with masked tag prediction to conduct pre-training. We validate the effectiveness of VIVO by fine-tuning the pre-trained model for image captioning. In addition, we perform an analysis of the visual-text alignment inferred by our model. The results show that our model can not only generate fluent image captions that describe novel objects, but also identify the locations of these objects. Our single model has achieved new state-of-the-art results on nocaps and surpassed the human CIDEr score. \end{abstract} \input{nocaps_01_intro} \input{nocaps_02_prior} \input{nocaps_03_method} \input{nocaps_04_experiment} \input{nocaps_05_conclusion} \begin{small}
\section{Introduction} The discovery of the current cosmic accelerated expansion has stimulated many researchers to explore the cause of this tremendous change in cosmic history. A mysterious force known as dark energy (DE) is considered the basic ingredient responsible for this expanding phase of the universe. Dark energy is repulsive in nature, with large negative pressure, but its complete characteristics are still unknown. Modified gravity theories are considered favorable and optimistic approaches to unveil the salient features of DE. These modified theories of gravity are obtained by replacing or adding curvature invariants, as well as their corresponding generic functions, in the Einstein-Hilbert action. The Gauss-Bonnet (GB) invariant is a linear combination of quadratic invariants of the form \begin{equation}\nonumber \mathcal{G}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\xi\eta}R^{\mu\nu\xi\eta}, \end{equation} where $R,~R_{\mu\nu}$ and $R_{\mu\nu\xi\eta}$ denote the Ricci scalar, Ricci tensor and Riemann tensor, respectively. It is the second-order Lovelock scalar, with the interesting feature that it is free from spin-2 ghost instabilities while its dynamical effects do not appear in four dimensions \cite{1}. The coupling of $\mathcal{G}$ with a scalar field or the addition of a generic function $f(\mathcal{G})$ to the geometric part of the Einstein-Hilbert action are two promising ways to study the dynamics of $\mathcal{G}$ in four dimensions. Nojiri and Odintsov \cite{2} presented the second approach as an alternative for DE, referred to as $f(\mathcal{G})$ gravity, which explores the fascinating characteristics of late-time cosmological evolution. This theory is consistent with solar system constraints and has a quite rich cosmological structure \cite{3}. The curvature-matter coupling in modified theories has attracted much attention as a means to discuss the cosmic accelerated expansion. Harko et al. \cite{4} introduced such a coupling in $f(R)$ gravity, referred to as $f(R,T)$ gravity.
Recently, we have established this coupling between the quadratic curvature invariant and matter, named the $f(\mathcal{G},T)$ theory of gravity, and found that such a coupling leads to the non-conservation of the energy-momentum tensor $(T_{\mu\nu})$ \cite{5}. Furthermore, massive test particles follow non-geodesic lines of geometry due to the presence of an extra force, while dust particles follow geodesic trajectories. The stability of the Einstein universe is analyzed for both conserved as well as non-conserved $T_{\mu\nu}$ in this theory \cite{6}. Shamir and Ahmad \cite{7} applied the Noether symmetry approach to construct some cosmologically viable $f(\mathcal{G},T)$ models in the background of an isotropic and homogeneous universe. We have reconstructed the cosmic evolutionary models corresponding to phantom/non-phantom epochs, the de Sitter universe as well as the power-law solution, and analyzed their stability \cite{8}. The significant connection between gravitation and thermodynamics was established after the remarkable discovery of black hole (BH) thermodynamics with Hawking temperature as well as BH entropy \cite{9}. Jacobson \cite{10} obtained the Einstein field equations using the fundamental relation known as the Clausius relation $dQ=\mathcal{T}d\mathcal{S}$ ($\mathcal{S},~\mathcal{T}$ and $dQ$ represent the entropy, Unruh temperature and energy flux observed by an accelerated observer just inside the horizon, respectively) together with the proportionality of entropy and horizon area in the context of Rindler spacetime. Cai and Kim \cite{11} showed that the Einstein field equations can be rewritten in the form of the first law of thermodynamics for an isotropic and homogeneous universe with any spatial curvature parameter.
Akbar and Cai \cite{12} found that the Friedmann equations at the apparent horizon can be written in the form $dE=\mathcal{T}d\mathcal{S}+Wd\mathcal{V}$ ($E,~\mathcal{V}$ and $W$ are the energy, volume inside the horizon and work density, respectively) in general relativity (GR), GB gravity and the general Lovelock theory of gravity. In modified theories, an additional entropy production term appears in the Clausius relation, corresponding to the non-equilibrium behavior of thermodynamics, while no extra term appears in GB gravity, Lovelock gravity and braneworld gravity \cite{13}. The generalized second law of thermodynamics (GSLT) is of significant importance in modified theories of gravity. Wu et al. \cite{14} derived the universal condition for the validity of GSLT in modified theories of gravity. Bamba and Geng \cite{15} found that GSLT is satisfied in the effective phantom/non-phantom phase in $f(R)$ gravity. Sadjadi \cite{16} studied the second law as well as GSLT in $f(R,\mathcal{G})$ gravity for the de Sitter universe model as well as the power-law solution, with the assumption that the apparent horizon is in thermal equilibrium. Bamba and Geng \cite{17} found that GSLT holds for the FRW universe with the same temperature inside and outside the apparent horizon in generalized teleparallel theory. Sharif and Zubair \cite{18} checked the validity of the first and second laws of thermodynamics at the apparent horizon for both equilibrium as well as non-equilibrium descriptions in $f(R,T)$ gravity and found that GSLT holds in both phantom as well as non-phantom phases of the universe. Abdolmaleki and Najafi \cite{19} explored the validity of GSLT for an isotropic and homogeneous universe filled with radiation and matter, surrounded by an apparent horizon with Hawking temperature, in $f(\mathcal{G})$ gravity. In this paper, we investigate the first as well as the second law of thermodynamics at the apparent horizon of the FRW model with any spatial curvature.
The paper is organized as follows. In section \textbf{2}, we discuss the basic formalism of this gravity while the laws of thermodynamics are investigated in section \textbf{3}. Section \textbf{4} is devoted to analyzing the validity of GSLT for reconstructed $f(\mathcal{G},T)$ models corresponding to the de Sitter and power-law solutions. The results are summarized in the last section. \section{$f(\mathcal{G},T)$ Gravity} The action of $f(\mathcal{G},T)$ gravity is given by \cite{5} \begin{equation}\label{1} \mathcal{I}=\int\sqrt{-g}\left(\frac{R+f(\mathcal{G},T)}{16\pi G} +\mathcal{L}_{m}\right)d^{4}x, \end{equation} where $g,~G$ and $\mathcal{L}_{m}$ represent the determinant of the metric tensor $(g_{\mu\nu})$, the gravitational constant and the matter Lagrangian density, respectively. The variation of the action (\ref{1}) with respect to $g_{\mu\nu}$ gives the fourth-order field equations as \begin{eqnarray}\nonumber R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R&=&\frac{1}{2}g_{\mu\nu} f(\mathcal{G},T)-2RR_{\mu\nu}f_{\mathcal{G}}(\mathcal{G},T) +4R^{\xi}_{\mu}R_{\xi\nu}f_{\mathcal{G}}(\mathcal{G},T) \\\nonumber&+&4R_{\mu\xi\nu\eta}R^{\xi\eta}f_{\mathcal{G}} (\mathcal{G},T)-2R_{\mu}^{\xi\eta\delta}R_{\nu\xi\eta\delta} f_{\mathcal{G}}(\mathcal{G},T)-2Rg_{\mu\nu}\\\nonumber&\times& \nabla^{2}f_{\mathcal{G}}(\mathcal{G},T)+4R_{\mu\nu}\nabla^{2} f_{\mathcal{G}}(\mathcal{G},T)+2R\nabla_{\mu}\nabla_{\nu} f_{\mathcal{G}}(\mathcal{G},T)\\\nonumber&-&4R^{\xi}_{\nu} \nabla_{\mu}\nabla_{\xi}f_{\mathcal{G}}(\mathcal{G},T) -4R^{\xi}_{\mu}\nabla_{\nu}\nabla_{\xi}f_{\mathcal{G}}(\mathcal{G},T) +4g_{\mu\nu}R^{\xi\eta}\\\nonumber&\times&\nabla_{\xi}\nabla_{\eta} f_{\mathcal{G}}(\mathcal{G},T)-4R_{\mu\xi\nu\eta}\nabla^{\xi} \nabla^{\eta}f_{\mathcal{G}}(\mathcal{G},T)-(T_{\mu\nu} +\Theta_{\mu\nu})\\\label{2}&\times&f_{T}(\mathcal{G},T) +8\pi GT_{\mu\nu}, \end{eqnarray} where $f_{\mathcal{G}}(\mathcal{G},T)=\frac{\partial f(\mathcal{G},T)}{\partial\mathcal{G}}, ~f_{T}(\mathcal{G},T)=\frac{\partial
f(\mathcal{G},T)}{\partial T}, ~\nabla^{2}=\nabla_{\mu}\nabla^{\mu}$ ($\nabla_{\mu}$ is a covariant derivative) and $\Theta_{\mu\nu}$ has the following expression \cite{20} \begin{equation}\nonumber \Theta_{\mu\nu} =-2T_{\mu\nu}+g_{\mu\nu}\mathcal{L}_{m}-2g^{\xi\eta} \frac{\partial^{2}\mathcal{L}_{m}}{\partial g^{\mu\nu}\partial g^{\xi\eta}}. \end{equation} The variation of $\sqrt{-g}\mathcal{L}_{m}$ with respect to $g_{\mu\nu}$ yields \begin{equation}\nonumber T_{\mu\nu}=g_{\mu\nu}\mathcal{L}_{m}-2\frac{\partial \mathcal{L}_{m}}{\partial g^{\mu\nu}}, \end{equation} where we have used that $\mathcal{L}_{m}$ depends only on $g_{\mu\nu}$. The covariant derivative of Eq.(\ref{2}) gives \begin{eqnarray}\nonumber \nabla^{\mu}T_{\mu\nu}&=&-\frac{f_{T}(\mathcal{G},T)}{8\pi G-f_{T}(\mathcal{G},T)}\left[\frac{1}{2}g_{\mu\nu} \nabla^{\mu}T-(\Theta_{\mu\nu}+T_{\mu\nu})\nabla^{\mu} \ln f_{T}(\mathcal{G},T)\right.\\\label{5}&-&\left. \nabla^{\mu}\Theta_{\mu\nu}\right]. \end{eqnarray} The non-zero divergence shows that the conservation law does not hold in this gravity due to the curvature-matter coupling. The above equations show that the matter Lagrangian density and the generic function play a significant role in the dynamics of curvature-matter coupled theories. The particular forms of $f(\mathcal{G},T)$ are \begin{equation}\nonumber f(\mathcal{G},T)=f_{1}(\mathcal{G})+f_{2}(T),\quad f(\mathcal{G},T)=f_{1}(\mathcal{G})+f_{2}(\mathcal{G})f_{3}(T), \end{equation} where the first choice is considered a correction to $f(\mathcal{G})$ gravity, since it does not involve a direct non-minimal curvature-matter coupling, while the second form implies a direct coupling. Unlike $f(R,T)$ gravity \cite{4}, the choice $f(\mathcal{G},T)=\mathcal{G}+\lambda T$ ($\lambda$ is an arbitrary parameter) does not yield a nontrivial model in this gravity since $\mathcal{G}$ is a topological invariant in four dimensions.
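The topological nature of $\mathcal{G}$ can be made explicit. By the Chern-Gauss-Bonnet theorem (quoted here for illustration, in its standard normalization for a compact four-dimensional manifold), the integral of $\mathcal{G}$ computes the Euler characteristic,

```latex
\chi(M)=\frac{1}{32\pi^{2}}\int_{M}\sqrt{-g}\,\mathcal{G}\,d^{4}x,
```

so a term linear in $\mathcal{G}$ in the action contributes only a boundary term and its metric variation vanishes identically; only a nonlinear function of $\mathcal{G}$, or a coupling of $\mathcal{G}$ to matter, produces dynamical effects in four dimensions.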
It is clear from Eq.(\ref{2}) that the contribution of GB disappears for this particular choice of the model. The energy-momentum tensor for a perfect fluid, taken as the cosmic matter content, is given by \begin{equation}\label{6} T_{\mu\nu}=(\rho+P)v_{\mu}v_{\nu}-Pg_{\mu\nu}, \end{equation} where $P,~\rho$ and $v_{\mu}$ denote the pressure, energy density and four-velocity, respectively. This four-velocity satisfies the relations $v^{\xi}v_{\xi}=1$ and $v^{\xi}\nabla_{\nu}v_{\xi}=0$. In this case, the tensor $\Theta_{\mu\nu}$ with $\mathcal{L}_{m}=-P$ takes the form \begin{equation}\label{7} \Theta_{\mu\nu}=-Pg_{\mu\nu}-2T_{\mu\nu}. \end{equation} Using Eqs.(\ref{6}) and (\ref{7}), Eq.(\ref{2}) can be written in a form similar to the Einstein field equations for the dust case $(P=0)$ \begin{equation}\label{8} G_{\mu\nu}=8\pi\widetilde{G}T_{\mu\nu}^{\mathrm{eff}} =8\pi\widetilde{G}T_{\mu\nu}+T_{\mu\nu}^{\mathrm{(D)}}, \end{equation} where \begin{eqnarray}\nonumber T_{\mu\nu}^{\mathrm{(D)}}&=& \frac{1}{2}g_{\mu\nu} f(\mathcal{G},T)-2RR_{\mu\nu}f_{\mathcal{G}}(\mathcal{G},T) +4R^{\xi}_{\mu}R_{\xi\nu}f_{\mathcal{G}}(\mathcal{G},T) \\\nonumber&+&4R_{\mu\xi\nu\eta}R^{\xi\eta}f_{\mathcal{G}} (\mathcal{G},T)-2R_{\mu}^{\xi\eta\delta}R_{\nu\xi\eta\delta} f_{\mathcal{G}}(\mathcal{G},T)-2Rg_{\mu\nu} \nabla^{2}f_{\mathcal{G}}(\mathcal{G},T)\\\nonumber&+&4R_{\mu\nu}\nabla^{2} f_{\mathcal{G}}(\mathcal{G},T)+2R\nabla_{\mu}\nabla_{\nu} f_{\mathcal{G}}(\mathcal{G},T)-4R^{\xi}_{\nu} \nabla_{\mu}\nabla_{\xi}f_{\mathcal{G}}(\mathcal{G},T) \\\nonumber&-&4R^{\xi}_{\mu}\nabla_{\nu}\nabla_{\xi}f_{\mathcal{G}}(\mathcal{G},T) +4g_{\mu\nu}R^{\xi\eta}\nabla_{\xi}\nabla_{\eta} f_{\mathcal{G}}(\mathcal{G},T)\\\nonumber&-&4R_{\mu\xi\nu\eta}\nabla^{\xi} \nabla^{\eta}f_{\mathcal{G}}(\mathcal{G},T),\\\nonumber \widetilde{G}&=&GF,\quad F=1+\frac{f_{T}(\mathcal{G},T)}{8\pi G}.
\end{eqnarray} The line element for the FRW universe model is \begin{equation}\label{9} ds^{2}=dt^{2}-\frac{a^{2}(t)}{1-kr^{2}}dr^{2}-\hat{r}^{2}d\theta^{2} -\hat{r}^{2}\sin^{2}\theta d\phi^{2}, \end{equation} where $\hat{r}=a(t)r$, $a(t)$ is the scale factor depending on cosmic time and $k$ is the spatial curvature parameter corresponding to open $(k=-1)$, closed $(k=1)$ and flat $(k=0)$ geometries of the universe. The GB invariant takes the form \begin{equation}\nonumber \mathcal{G}=24(H^{2}+\dot{H})\left(H^{2}+\frac{k}{a^{2}}\right). \end{equation} Using Eqs.(\ref{6}), (\ref{8}) and (\ref{9}), we obtain the following field equations \begin{eqnarray}\nonumber 3\left(H^{2}+\frac{k}{a^{2}}\right)&=&8\pi \widetilde{G}\rho+\frac{1}{2}f(\mathcal{G},T)-12(H^{2}+\dot{H}) \left(H^{2}+\frac{k}{a^{2}}\right)\\\nonumber&\times&f_{\mathcal{G}} (\mathcal{G},T)+12H\left(H^{2}+\frac{k}{a^{2}}\right) \left(f_{\mathcal{GG}}(\mathcal{G},T)\dot{\mathcal{G}}\right. \\\label{10}&+&\left.f_{\mathcal{G}T}(\mathcal{G},T)\dot{T}\right), \\\nonumber-\left(2\dot{H}+3H^{2}+\frac{k}{a^{2}}\right)&=& -\frac{1}{2}f(\mathcal{G},T)+12(H^{2}+\dot{H})\left(H^{2}+\frac{k}{a^{2}} \right)f_{\mathcal{G}}(\mathcal{G},T)\\\nonumber&-&8H(H^{2}+\dot{H}) \left(f_{\mathcal{GG}}(\mathcal{G},T)\dot{\mathcal{G}}+f_{\mathcal{G}T} (\mathcal{G},T)\dot{T}\right)\\\nonumber&-&4\left(H^{2}+\frac{k}{a^{2}} \right)\left(f_{\mathcal{GGG}}(\mathcal{G},T)\dot{\mathcal{G}}^{2}+2f_{\mathcal{GG}T} (\mathcal{G},T)\dot{\mathcal{G}}\dot{T}\right.\\\label{11}&+&\left. f_{\mathcal{G}TT}(\mathcal{G},T)\dot{T}^{2}+f_{\mathcal{GG}}(\mathcal{G},T) \ddot{\mathcal{G}}+f_{\mathcal{G}T}(\mathcal{G},T)\ddot{T}\right), \end{eqnarray} where $H=\dot{a}/a$ is the Hubble parameter and a dot represents a derivative with respect to time.
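As a consistency check, the FRW form of the GB invariant can be verified symbolically. The following sketch (using the sympy computer algebra system; the spatially flat case $k=0$ is taken for simplicity, so the target reduces to $\mathcal{G}=24(H^{2}+\dot{H})H^{2}$) builds the curvature tensors directly from the metric:

```python
# Symbolic verification (sympy sketch) of the Gauss-Bonnet invariant for the
# spatially flat (k = 0) FRW metric ds^2 = dt^2 - a(t)^2 (dx^2 + dy^2 + dz^2).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a', positive=True)(t)
X = [t, x, y, z]
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()
N = 4

# Christoffel symbols Gamma^m_{ij}
Gam = [[[sp.simplify(sp.Rational(1, 2)*sum(
            ginv[m, l]*(sp.diff(g[l, i], X[j]) + sp.diff(g[l, j], X[i])
                        - sp.diff(g[i, j], X[l])) for l in range(N)))
         for j in range(N)] for i in range(N)] for m in range(N)]

# Riemann tensor R^r_{sij} = d_i Gamma^r_{js} - d_j Gamma^r_{is} + quadratic terms
def riemann(r, s, i, j):
    e = sp.diff(Gam[r][j][s], X[i]) - sp.diff(Gam[r][i][s], X[j])
    e += sum(Gam[r][i][l]*Gam[l][j][s] - Gam[r][j][l]*Gam[l][i][s]
             for l in range(N))
    return sp.simplify(e)

Rup = [[[[riemann(r, s, i, j) for j in range(N)] for i in range(N)]
        for s in range(N)] for r in range(N)]
# the metric is diagonal, so lowering/raising an index is just a factor of g_ii
Rlow = [[[[g[r, r]*Rup[r][s][i][j] for j in range(N)] for i in range(N)]
         for s in range(N)] for r in range(N)]

Ric = [[sp.simplify(sum(Rup[r][s][r][j] for r in range(N))) for j in range(N)]
       for s in range(N)]
Rsc = sp.simplify(sum(ginv[i, i]*Ric[i][i] for i in range(N)))

RicSq = sum(Ric[i][j]**2/(g[i, i]*g[j, j]) for i in range(N) for j in range(N))
RiemSq = sum(Rlow[r][s][i][j]**2/(g[r, r]*g[s, s]*g[i, i]*g[j, j])
             for r in range(N) for s in range(N)
             for i in range(N) for j in range(N))

GB = sp.simplify(Rsc**2 - 4*RicSq + RiemSq)

H = sp.diff(a, t)/a
target = 24*(H**2 + sp.diff(H, t))*H**2   # k = 0 case of the formula in the text
assert sp.simplify(GB - target) == 0
```

The same computation with the curved spatial slices of (\ref{9}) should reproduce the full $k$-dependent expression, at the cost of a longer symbolic run.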
We can rewrite the above equations as \begin{eqnarray}\label{12} 3\left(H^{2}+\frac{k}{a^{2}}\right)&=&8\pi\widetilde{G}\rho_{\mathrm{_{tot}}} =8\pi\widetilde{G}(\rho+\rho^{\mathrm{(D)}}),\\\label{13} -2\left(\dot{H}-\frac{k}{a^{2}}\right)&=&8\pi\widetilde{G}(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}})=8\pi\widetilde{G}(\rho+\rho^{\mathrm{(D)}}+P^{\mathrm{(D)}}), \end{eqnarray} where $\rho^{\mathrm{(D)}}$ and $P^{\mathrm{(D)}}$ are the dark source terms given by \begin{eqnarray}\nonumber \rho^{\mathrm{(D)}}&=&\frac{1}{8\pi\widetilde{G}F}\left[\frac{1}{2} f(\mathcal{G},T)-12(H^{2}+\dot{H})\left(H^{2}+\frac{k}{a^{2}}\right) f_{\mathcal{G}}(\mathcal{G},T)\right.\\\nonumber&+&\left.12H\left(H^{2} +\frac{k}{a^{2}}\right)\left(f_{\mathcal{GG}}(\mathcal{G},T)\dot{\mathcal{G}} +f_{\mathcal{G}T}(\mathcal{G},T)\dot{T}\right)\right],\\\nonumber P^{\mathrm{(D)}}&=&\frac{1}{8\pi\widetilde{G}F}\left[-\frac{1}{2} f(\mathcal{G},T)+12(H^{2}+\dot{H})\left(H^{2}+\frac{k}{a^{2}}\right) f_{\mathcal{G}}(\mathcal{G},T)-8H\right.\\\nonumber&\times&\left.(H^{2} +\dot{H})\left(f_{\mathcal{GG}}(\mathcal{G},T)\dot{\mathcal{G}} +f_{\mathcal{G}T}(\mathcal{G},T)\dot{T}\right)-4\left(H^{2}+\frac{k}{a^{2}} \right)\right.\\\nonumber&\times&\left.\left(f_{\mathcal{GGG}} (\mathcal{G},T)\dot{\mathcal{G}}^{2}+2f_{\mathcal{GG}T}(\mathcal{G},T)\dot{\mathcal{G}} \dot{T}+f_{\mathcal{G}TT}(\mathcal{G},T)\dot{T}^{2}+f_{\mathcal{GG}} (\mathcal{G},T)\ddot{\mathcal{G}}\right.\right.\\\nonumber&+&\left.\left. f_{\mathcal{G}T}(\mathcal{G},T)\ddot{T}\right)\right]. \end{eqnarray} The continuity equation for the metric (\ref{9}) becomes \begin{equation}\label{15} \dot{\rho}+3H\rho=\frac{-1}{8\pi G+f_{T}(\mathcal{G},T)} \left[\frac{1}{2}\dot{\rho}f_{T}(\mathcal{G},T)+\rho\left(f_{\mathcal{G}T} (\mathcal{G},T)\dot{\mathcal{G}}+f_{TT}(\mathcal{G},T)\dot{T}\right)\right]. \end{equation} The conservation law holds in the absence of curvature-matter coupling for both $f(\mathcal{G})$ gravity and GR.
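To make the role of the coupling term in Eq.~(\ref{15}) concrete, consider a toy dust model. The sympy sketch below assumes the illustrative (hypothetical) choice $f(\mathcal{G},T)\supset\lambda T^{2}$, for which dust gives $T=\rho$, $f_{T}=2\lambda\rho$, $f_{TT}=2\lambda$ and $f_{\mathcal{G}T}=0$; it confirms that when $\lambda\to0$ the right-hand side of Eq.~(\ref{15}) vanishes and the familiar dust solution $\rho\propto a^{-3}$ is recovered:

```python
# Sketch (sympy) of the continuity equation (15) for dust with the hypothetical
# model f(G,T) containing lam*T^2, so that T = rho, f_T = 2*lam*rho,
# f_TT = 2*lam and f_GT = 0; the model choice is illustrative only.
import sympy as sp

t = sp.Symbol('t')
Gn, lam, rho0 = sp.symbols('G_N lambda rho_0', positive=True)
a = sp.Function('a', positive=True)(t)
rho = sp.Function('rho', positive=True)(t)
H = sp.diff(a, t)/a
rhodot = sp.diff(rho, t)

fT, fTT = 2*lam*rho, 2*lam

lhs = rhodot + 3*H*rho                                   # left side of Eq. (15)
rhs = -(sp.Rational(1, 2)*rhodot*fT + rho*fTT*rhodot)/(8*sp.pi*Gn + fT)

# lam -> 0: the coupling term disappears and Eq. (15) reduces to the standard
# dust conservation law, solved by rho = rho_0 * a**(-3) for any a(t)
assert rhs.subs(lam, 0) == 0
residual = lhs.subs(rho, rho0/a**3).doit()
assert sp.simplify(residual) == 0
```

For $\lambda\neq0$ the right-hand side no longer vanishes, which is the non-conservation effect of the curvature-matter coupling noted above.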
\section{Laws of Thermodynamics} In this section, we study the laws of thermodynamics in the context of $f(\mathcal{G},T)$ gravity at the apparent horizon of the FRW universe model. \subsection{First Law} The first law of thermodynamics is based on the concept that energy remains conserved in the system but can change from one form to another. To study this law, we first find the dynamical apparent horizon determined by the relation \begin{equation}\nonumber h^{\mu\nu}\partial_{\mu}\hat{r}\partial_{\nu}\hat{r}=0, \end{equation} where $h_{\mu\nu}=\mathrm{diag}(1,\frac{-a^{2}}{1-kr^{2}})$ is a two-dimensional metric. For the isotropic and homogeneous universe model, the above relation gives the radius of the apparent horizon as \begin{equation}\nonumber \hat{r}_{A}=\left(H^{2}+\frac{k}{a^{2}}\right)^{-\frac{1}{2}}. \end{equation} Taking the time derivative of this equation and using Eq.(\ref{13}), it follows that \begin{equation}\label{3a} d\hat{r}_{A}=4\pi G\left(\rho_{\mathrm{_{tot}}}+P_{\mathrm{_{tot}}}\right)\hat{r}_{A}^{3}HFdt, \end{equation} where $d{\hat{r}_{A}}$ represents the infinitesimal change in the apparent horizon radius during the small time interval $dt$. The Bekenstein-Hawking entropy is defined as one fourth of the apparent horizon area $(\mathcal{A}=4\pi\hat{r}_{A}^{2})$ in units of Newton's gravitational constant \cite{9}. In modified theories of gravity, the entropy of stationary BH solutions with bifurcate Killing horizons is the Noether charge entropy, also known as the Wald entropy \cite{21}.
It depends on the variation of the Lagrangian density $(\mathcal{L})$ with respect to $R_{\mu\nu\xi\eta}$ as \cite{22} \begin{equation}\label{4a} \mathcal{S}=-2\pi\oint\frac{\partial\mathcal{L}}{\partial R_{\mu\nu\xi\eta}}\epsilon_{\xi\eta}\epsilon_{\mu\nu}dV_{n-2}^{2}, \end{equation} where $dV_{n-2}^{2}$ and $\epsilon_{\xi\eta}$ represent the volume element on the $(n-2)$-dimensional spacelike bifurcation surface $(\Sigma)$ and the binormal vector to $\Sigma$ satisfying the relation $\epsilon_{\xi\eta}\epsilon^{\xi\eta}=-2$. Brustein et al. \cite{23} proposed that the Wald entropy is equal to a quarter of $\mathcal{A}$ in units of the effective gravitational coupling in modified theories of gravity. Using these concepts, the Wald entropy in $f(\mathcal{G},T)$ gravity is given by \begin{equation}\label{5a} \mathcal{S}=\frac{\mathcal{A}}{4GF}\left(1-\frac{4}{\hat{r}_{A}^{2}} f_{\mathcal{G}}(\mathcal{G},T)\right). \end{equation} This formula reduces to that of $f(\mathcal{G})$ gravity for $F=1$, while the traditional entropy in GR is obtained for $f_{\mathcal{G}}=0$ \cite{24}. Taking the differential of Eq.(\ref{5a}) and using Eq.(\ref{3a}), we obtain \begin{eqnarray}\label{6a} \frac{1}{2\pi\hat{r}_{A}}d\mathcal{S}=4\pi\left(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}}\right)\hat{r}_{A}^{3}Hdt-\frac{2}{\hat{r}_{A}GF} df_{\mathcal{G}}+\frac{\hat{r}_{A}}{2G}\left(1-\frac{4}{\hat{r}_{A}^{2}}f_{\mathcal{G}}\right)d\left(\frac{1}{F}\right). \end{eqnarray} The surface gravity $(\kappa_{sg})$ determines the temperature at the apparent horizon as \cite{11} \begin{equation}\label{7a} \mathcal{T}=\frac{|\kappa_{sg}|}{2\pi}, \end{equation} where \begin{eqnarray}\nonumber \kappa_{sg}&=&\frac{1}{2\sqrt{-h}}\partial_{\mu} \left(\sqrt{-h}h^{\mu\nu}\partial_{\nu}\hat{r}\right) \\\label{8a}&=&\frac{1}{\hat{r}_{A}}\left(1-\frac{\dot{\hat{r}}_{A}} {2\hat{r}_{A}H}\right)=\frac{1}{2}\hat{r}_{A}\left(\frac{k}{a^{2}}+2H^{2}+\dot{H} \right), \end{eqnarray} where $h$ is the determinant of $h_{\mu\nu}$.
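The horizon quantities just introduced can be evaluated directly. The sketch below (an illustration with our own function names, not the paper's code) computes the apparent horizon radius, the Wald entropy of Eq.(\ref{5a}), and the horizon temperature from the first form of $\kappa_{sg}$; for flat de Sitter space $\dot{\hat{r}}_{A}=0$ so $\mathcal{T}=H_{0}/2\pi$, and for $f_{\mathcal{G}}=0$, $F=1$ the entropy reduces to the GR value $\mathcal{A}/4G$.

```python
import math

def horizon_radius(H, k, a):
    """Apparent horizon radius r_A = (H^2 + k/a^2)^(-1/2)."""
    return (H**2 + k / a**2) ** -0.5

def wald_entropy(r_A, G_N, F, f_G):
    """S = A / (4 G F) * (1 - 4 f_G / r_A^2), with horizon area A = 4 pi r_A^2."""
    A = 4 * math.pi * r_A**2
    return A / (4 * G_N * F) * (1 - 4 * f_G / r_A**2)

def horizon_temperature(r_A, r_A_dot, H):
    """T = |kappa| / (2 pi), with kappa = (1/r_A)(1 - r_A_dot / (2 r_A H))."""
    kappa = (1 - r_A_dot / (2 * r_A * H)) / r_A
    return abs(kappa) / (2 * math.pi)

H0 = 0.67
r_A = horizon_radius(H0, k=0, a=1.0)      # flat de Sitter: r_A = 1/H0
print(horizon_temperature(r_A, 0.0, H0))  # = H0 / (2 pi)
```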
Using Eqs.(\ref{6a})-(\ref{8a}), we have \begin{eqnarray}\nonumber \mathcal{T}d\mathcal{S}&=&4\pi\left(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}}\right)\hat{r}_{A}^{3}Hdt-2\pi\left(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}}\right)\hat{r}_{A}^{2}d\hat{r}_{A}-\frac{4\pi\mathcal{T}}{GF} df_{\mathcal{G}}\\\label{9a}&+&\frac{\pi}{G}\hat{r}_{A}^{2}\mathcal{T} \left(1-\frac{4}{\hat{r}_{A}^{2}}f_{\mathcal{G}}\right)d\left(\frac{1}{F}\right). \end{eqnarray} The total energy inside the apparent horizon of radius $\hat{r}_{A}$ for FRW universe model is given by \begin{equation}\nonumber E=\mathcal{V}\rho_{\mathrm{_{tot}}}=\frac{4}{3}\pi\hat{r}_{A}^{3} \rho_{\mathrm{_{tot}}}=\frac{3\mathcal{V}}{8\pi\widetilde{G}}\left(H^{2} +\frac{k}{a^{2}}\right). \end{equation} This equation shows that $E$ is directly related to $\hat{r}_{A}$, so the small displacement $d\hat{r}_{A}$ in horizon radius will cause the infinitesimal change given by \begin{eqnarray}\label{11a} dE=4\pi\rho_{\mathrm{_{tot}}}\hat{r}_{A}^{2}d\hat{r}_{A}-4\pi \left(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}}\right)\hat{r}_{A}^{3}Hdt+\frac{\hat{r}_{A}}{2G}d \left(\frac{1}{F}\right). \end{eqnarray} Using Eqs.(\ref{9a}) and (\ref{11a}), it follows that \begin{eqnarray}\nonumber \mathcal{T}d\mathcal{S}=-dE+Wd\mathcal{V}-\frac{4\pi\mathcal{T}}{GF}d f_{\mathcal{G}}+\frac{\hat{r}_{A}}{2G}\left[1+2\pi\hat{r}_{A} \mathcal{T}\left(1-\frac{4}{\hat{r}_{A}^{2}}f_{\mathcal{G}}\right)\right] d\left(\frac{1}{F}\right), \end{eqnarray} where $W=\left(\rho_{\mathrm{_{tot}}} -P_{\mathrm{_{tot}}}\right)/2$ is the work done by the system. 
The above equation can be written as \begin{equation}\label{13a} \mathcal{T}\left(d\mathcal{S}+d_{i}\mathcal{S}\right)=-dE+Wd\mathcal{V}, \end{equation} where \begin{equation}\nonumber d_{i}\mathcal{S}=\frac{4\pi}{GF}df_{\mathcal{G}}-\frac{\hat{r}_{A}} {2G\mathcal{T}}\left[1+2\pi\hat{r}_{A}\mathcal{T}\left(1- \frac{4}{\hat{r}_{A}^{2}}f_{\mathcal{G}}\right)\right] d\left(\frac{1}{F}\right), \end{equation} is interpreted as the entropy production term, which appears due to the non-equilibrium thermodynamical behavior at the apparent horizon. The non-equilibrium picture of thermodynamics implies that there is some energy change inside and outside the apparent horizon. Due to the presence of this extra term, the field equations do not obey the universal form of the first law of thermodynamics $dE=\mathcal{T}d\mathcal{S}+Wd\mathcal{V}$ in this gravity. It is mentioned here that in modified theories, this auxiliary term usually appears in the first law of thermodynamics, while it is absent in GR, GB gravity and Lovelock gravity \cite{12,13}. \subsection{Generalized Second Law} In this section, we discuss the GSLT in $f(\mathcal{G},T)$ gravity, which states that the total entropy of the system does not decrease in time, i.e., \begin{equation}\label{1b} \dot{\mathcal{S}}+\dot{\mathcal{S}}_{\mathrm{_{tot}}}+d_{i}\dot{\mathcal{S}}\geq0, \end{equation} where $\mathcal{S}_{\mathrm{_{tot}}}$ is the entropy due to energy as well as all matter contents inside the horizon and $d_{i}\dot{\mathcal{S}}=\partial_{t}(d_{i}\mathcal{S})$.
The Gibbs equation relates $\mathcal{S}_{\mathrm{_{tot}}}$ to the total energy density and pressure as \cite{14} \begin{equation}\label{2b} \mathcal{T}_{\mathrm{_{tot}}}d\mathcal{S}_{\mathrm{_{tot}}} =d(\rho_{\mathrm{_{tot}}}\mathcal{V})+P_{\mathrm{_{tot}}}d\mathcal{V}, \end{equation} where $\mathcal{T}_{\mathrm{_{tot}}}$ represents the total temperature corresponding to all matter and energy contents inside the horizon and is not equal to the apparent horizon temperature. We assume \begin{equation}\nonumber \mathcal{T}_{\mathrm{_{tot}}}=\zeta\mathcal{T},\quad 0<\zeta<1. \end{equation} This proportionality relation shows that the total temperature inside the horizon is positive and always smaller than the temperature at the apparent horizon. Using Eqs.(\ref{13a}) and (\ref{2b}) in (\ref{1b}), we obtain \begin{equation}\label{4b} \dot{\mathcal{S}}+\dot{\mathcal{S}}_{\mathrm{_{tot}}}+d_{i}\dot{\mathcal{S}} =\left(\frac{24+\hat{r}_{A}^{4}\mathcal{G}}{96\pi \zeta\hat{r}_{A}}\right)\Upsilon\geq0, \end{equation} where \begin{equation}\nonumber \Upsilon=(1-\zeta)\mathcal{V}\dot{\rho}_{\mathrm{_{tot}}}+ \left(\rho_{\mathrm{_{tot}}} +P_{\mathrm{_{tot}}}\right) \left(1-\frac{\zeta}{2}\right)\dot{\mathcal{V}}. \end{equation} Using Eqs.(\ref{12}) and (\ref{13}), the GSLT condition takes the form \begin{equation}\label{5b} \left(\frac{24+\hat{r}_{A}^{4}\mathcal{G}}{192\pi\zeta GF}\right) \hat{r}_{A}^{4}\Xi\geq0, \end{equation} where \begin{equation}\nonumber \Xi=(2-\zeta)H\left(\dot{H}-\frac{k}{a^{2}}\right)^{2} +\frac{2(1-\zeta)H}{\hat{r}_{A}}\left(\dot{H}-\frac{k}{a^2}\right) +\frac{(1-\zeta)F}{\hat{r}_{A}^{4}}\partial_{t}\left( \frac{1}{F}\right). \end{equation} It is seen that the GSLT is valid for $\mathcal{G}>0,~F>0$ and $\Xi>0$. For the flat FRW universe model, the conditions $\mathcal{G}>0,~F>0,~H>0,~\dot{H}>0$ and $\partial_{t}\left( \frac{1}{F}\right)>0$ must be satisfied to protect the GSLT in $f(\mathcal{G},T)$ gravity.
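For concreteness, $\Xi$ can be coded up directly from its definition (a sketch with our own function name and argument conventions); in the limit $\zeta\to1$ only the first term survives, reproducing the equilibrium case $\Xi=H(\dot{H}-k/a^{2})^{2}$.

```python
def xi(zeta, H, Hdot, k, a, r_A, F, dFinv_dt):
    """Xi from the GSLT condition (5b); dFinv_dt denotes d(1/F)/dt."""
    grad = Hdot - k / a**2
    return ((2 - zeta) * H * grad**2
            + 2 * (1 - zeta) * H * grad / r_A
            + (1 - zeta) * F * dFinv_dt / r_A**4)

# In the equilibrium limit zeta = 1 only the first term survives,
# so Xi = H (Hdot - k/a^2)^2 >= 0 whenever H > 0.
print(xi(1.0, H=0.67, Hdot=0.1, k=1, a=2.0, r_A=1.2, F=1.3, dFinv_dt=0.5))
```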
The equilibrium description of thermodynamics implies that the temperatures inside and at the horizon are the same, yielding \begin{equation}\nonumber \hat{r}_{A}^{4}H\left(\dot{H}-\frac{k}{a^2}\right)^{2} \left(\frac{24+\hat{r}_{A}^{4}\mathcal{G}}{192\pi\zeta GF}\right)\geq0,\quad\zeta=1. \end{equation} The validity of GSLT can be obtained for positive values of $H,~\mathcal{G}$ and $F$. \section{Validity of GSLT} Now we check the validity of GSLT for some reconstructed cosmological models in $f(\mathcal{G},T)$ gravity. \subsection{de Sitter Universe} The well-known cosmological de Sitter solution elegantly describes the evolution of the current cosmic expansion. For this model, the Hubble parameter is constant ($H(t)=H_{0}$) and the scale factor grows exponentially as $a(t)=a_{0}e^{H_{0}t}$. In the case of a dust fluid, the energy density and GB invariant are given by \begin{equation}\nonumber \rho=\rho_{0}e^{-3H_{0}t},\quad\mathcal{G}=24H_{0}^4, \end{equation} where $\rho_{0}$ is an integration constant. In this case, Eq.(\ref{5b}) takes the form \begin{eqnarray}\nonumber &&\frac{1+a_{0}^{4}H_{0}^{4}(a_{0}^{2}H_{0}^{2}+ke^{-2H_{0}t})}{\zeta(8\pi G+f_{T})^{2}}\left[2kH_{0}e^{-2H_{0}t}(b-1)(8\pi G+f_{T})\right.\\\nonumber&\times&\left.(a_{0}^{2}H_{0}^{4}+ke^{-2H_{0}t})^{-1} -3\rho_{0}H_{0}e^{-3H_{0}t}(b-1)f_{TT}+k^{2}H_{0}e^{-4H_{0}t}(2-b) \right.\\\label{2c}&\times&\left.(8\pi G+f_{T})(a_{0}^{2}H_{0}^{2}+ke^{-2H_{0}t})^{-2}\right]\geq0. \end{eqnarray} The reconstructed $f(\mathcal{G},T)$ model for the de Sitter universe is given by \cite{5} \begin{equation}\nonumber f(\mathcal{G},T)=c_{1}c_{2}e^{c_{1}\mathcal{G}}T^{-\frac{1}{2} \left(\frac{1-24c_{1}H_{0}^{4}}{1-36c_{1}H_{0}^{4}}\right)}+ c_{1}c_{2}T^{-\frac{1}{2}}-\frac{16\pi G}{3}T+6H_{0}^{2}, \end{equation} where $c_{j}$'s $(j=1,2)$ are integration constants and the standard conservation law is used in the reconstruction technique.
The continuity constraint splits the above model into the following two $f(\mathcal{G},T)$ forms \begin{eqnarray}\label{4c} f_{1}(\mathcal{G},T)&=&\frac{18c_{1}^{2}c_{2}H_{0}^{4}(32c_{1}H_{0}^{4}-1)} {(1-36c_{1}H_{0}^{2})^{2}}e^{c_{1}\mathcal{G}}T^{-\frac{1}{2} \left(\frac{1-24c_{1}H_{0}^{4}}{1-36c_{1}H_{0}^{4}}\right)}+6H_{0}^{2} ,\\\nonumber f_{2}(\mathcal{G},T)&=&\frac{18c_{1}H_{0}^{4}(1-32c_{1}H_{0}^{4})} {(1-24c_{1}H_{0}^{4})(1-30c_{1}H_{0}^{4})}\left(c_{1}c_{2}T^{-\frac{1}{2}}- \frac{16\pi G}{3}T\right)+6H_{0}^{2}.\\\label{5c} \end{eqnarray} \begin{figure} \epsfig{file=fig1.eps, width=0.5\linewidth}\epsfig{file=fig2.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{4c}). The left plot is for $c_{1}=c_{2}=1$ and right for $c_{1}=1$ with $c_{2}=-1$.} \end{figure} \begin{figure} \epsfig{file=fig3.eps, width=0.5\linewidth}\epsfig{file=fig4.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{4c}). The left plot is for $c_{1}=-1$ with $c_{2}=1$ and right for $c_{1}=c_{2}=-1$.} \end{figure} Figures \textbf{1} and \textbf{2} show the validity of GSLT for the model (\ref{4c}) in the background of the flat FRW universe model. The present day value of the Hubble parameter is $H_{0}=(67.8\pm0.9)~\mathrm{kms}^{-1}\mathrm{Mpc}^{-1}$ at the $68\%$ confidence level (C.L.), which can be taken as $0.67$ in units of $100~\mathrm{kms}^{-1}\mathrm{Mpc}^{-1}$ \cite{r1,r2}. The matter density parameter is constrained as $0.308\pm0.012$ at $68\%$ C.L., whereas the scale factor at $t_{0}=13.7~\mathrm{Gyr}$ is $a_{0}=1$ \cite{r1}. For this model, we have four parameters $c_{1},~c_{2},~\zeta$ and $t$ with fixed values of $H_{0}=0.67,~a_{0}=1$ and $\rho_{0}=0.3$. Here, we examine the validity of GSLT against the two parameters $\zeta$ and $t$ with four possible choices of the integration constants. For the case $(c_{1},c_{2})>0$, we find that the validity of GSLT holds for the considered intervals of $\zeta$ and $t$.
Figure \textbf{1} (left) indicates the validity for $c_{1}=c_{2}=1$, while the right plot corresponds to the case $c_{1}>0$ and $c_{2}<0$. The left plot of Figure \textbf{2} shows that the GSLT does not hold for $c_{1}<0$ with $c_{2}>0$, while it is satisfied for both negative values of $(c_{1},c_{2})$ as shown in the right panel. It is found that the generalized second law holds at all times only for the same signatures of the integration constants. \begin{figure} \epsfig{file=fig5.eps, width=0.5\linewidth}\epsfig{file=fig6.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{5c}). The left plot is for $c_{1}=c_{2}=1$ and right for $c_{1}=1$ with $c_{2}=-1$.} \end{figure} \begin{figure} \epsfig{file=fig7.eps, width=0.5\linewidth}\epsfig{file=fig8.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{5c}). The left plot is for $c_{1}=-1$ with $c_{2}=1$ and right for $c_{1}=c_{2}=-1$.} \end{figure} The graphical behavior of GSLT for the de Sitter $f(\mathcal{G},T)$ model (\ref{5c}) is shown in Figures \textbf{3} and \textbf{4} against the parameters $\zeta$ and $t$. In this case, we have considered $H_{0}=0.67,~a_{0}=1$ and $\rho_{0}=0.3$ with four possible signature choices of the integration constants $c_{1}$ and $c_{2}$ as in the previous model. Figure \textbf{3} (right) and Figure \textbf{4} (left) show that the GSLT holds for all considered values of $\zeta$ and $t$ with opposite signatures of $(c_{1},c_{2})$. For model (\ref{5c}), the choice of same signatures of the integration constants is ruled out since it does not provide a feasible region for which the GSLT holds. \subsection{Power-law Solution} The power-law solution is of remarkable importance in modified theories of gravity for discussing the decelerated as well as accelerated cosmic evolutionary phases, which are characterized by the scale factor as \cite{26} \begin{equation}\label{1p} a(t)=a_{0}t^{\beta},\quad H=\frac{\beta}{t},\quad\beta>0.
\end{equation} The accelerated phase of the universe is observed for $\beta>1$ while $0<\beta<1$ covers the decelerated phase including the dust $(\beta=\frac{2}{3})$ as well as radiation $(\beta=\frac{1}{2})$ dominated cosmic epochs. For this scale factor, the energy density and GB invariant become \begin{equation}\label{2p} \rho=\rho_{0}t^{-3\beta},\quad\mathcal{G}=24\frac{\beta^3}{t^4}(\beta-1). \end{equation} Using Eqs.(\ref{1p}) and (\ref{2p}) in (\ref{5b}), the validity condition for GSLT takes the form \begin{eqnarray}\nonumber &&\frac{1+a_{0}^{4}\beta^{3}(\beta-1)t^{-4}(a_{0}^{2}\beta^{2}t^{-2}+kt^{-2\beta})^{-2}} {\zeta(8\pi G+f_{T})}\left[\frac{\beta}{t}(2-\zeta)\left(\frac{\beta a_{0}^{2}t^{-2}+kt^{-2\beta}} {\beta^{2}a_{0}^{2}t^{-2}+kt^{-2\beta}}\right)^{2}\right.\\\nonumber&-&\left. 2\frac{\beta}{t}(1-\zeta)\left(\frac{\beta a_{0}^{2}t^{-2}+kt^{-2\beta}} {\beta^{2}a_{0}^{2}t^{-2}+kt^{-2\beta}}\right)-\frac{\zeta-1}{8\pi G+f_{T}}\left(96\frac{\beta^{3}}{t^{5}}(\beta-1)f_{\mathcal{G}T} \right.\right.\\\nonumber&+&\left.\left.3\beta\rho_{0}t^{-4}f_{TT}\right)\right]\geq0. \end{eqnarray} The reconstructed $f(\mathcal{G},T)$ model for the dust fluid is given by \cite{5} \begin{eqnarray}\nonumber f(\mathcal{G},T)=d_{1}d_{3}T^{d_{2}} \mathcal{G}^{\frac{1}{4}(\alpha_{1}+\alpha_{2})}+ d_{2}d_{3}T^{d_{2}}\mathcal{G}^{\frac{1}{4} (\alpha_{1}-\alpha_{2})}+d_{1}d_{2} T^{\alpha_{3}}+\alpha_{4}T+\alpha_{5}T^{\alpha_{6}}, \end{eqnarray} where $d_{l}$'s $(l=1,2,3)$ are integration constants and \begin{eqnarray}\nonumber \alpha_{1}&=&\frac{1}{2}\left[5-\beta(1+3d_{2})\right],\\\nonumber \alpha_{2}&=&\left[\frac{3}{4}\beta d_{2}\{3d_{2}\beta+2(\beta-1) -8\}+\frac{1}{4}(\beta-1)(\beta+7)+4+8d_{2}(\beta-1)\right]^{\frac{1}{2}}, \\\nonumber\alpha_{3}&=&-\frac{1}{2},\quad\alpha_{4}=-\frac{16\pi G}{3},\quad\alpha_{5}=\left(\frac{18\beta^{3}}{3\beta+4}\right) \rho_{0}^{-\frac{2}{3\beta}},\quad\alpha_{6}=\frac{2}{3\beta}, \end{eqnarray} imply that the conservation law holds.
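As a sanity check on the power-law inputs (an illustration under our own naming, not part of the reconstruction technique), one can verify numerically that the dust density $\rho=\rho_{0}t^{-3\beta}$ satisfies the standard conservation law $\dot{\rho}+3H\rho=0$ with $H=\beta/t$:

```python
def rho(t, rho0, beta):
    """Dust energy density for the power-law scale factor a = a0 t^beta."""
    return rho0 * t ** (-3 * beta)

def continuity_residual(t, rho0, beta, h=1e-6):
    """rho_dot + 3 H rho with H = beta/t; vanishes when dust is conserved."""
    rho_dot = (rho(t + h, rho0, beta) - rho(t - h, rho0, beta)) / (2 * h)
    return rho_dot + 3 * (beta / t) * rho(t, rho0, beta)

# Dust epoch beta = 2/3 with rho0 = 0.3: the residual is numerically zero.
print(continuity_residual(t=2.0, rho0=0.3, beta=2 / 3))
```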
The continuity constraint splits this model into two functions with some additional constants as \begin{eqnarray}\label{4p} f_{1}(\mathcal{G},T)&=&d_{1}d_{3}\gamma_{1}T^{d_{2}} \mathcal{G}^{\frac{1}{4}(\alpha_{1}+\alpha_{2})}+d_{1}d_{2} \gamma_{2}T^{\alpha_{3}}+\gamma_{3}T+\gamma_{4}T^{\alpha_{6}},\\\label{5p} f_{2}(\mathcal{G},T)&=&d_{2}d_{3}\gamma_{5}T^{d_{2}} \mathcal{G}^{\frac{1}{4}(\alpha_{1}-\alpha_{2})}+d_{1}d_{2} \gamma_{6}T^{\alpha_{3}}+\gamma_{7}T+\gamma_{8}T^{\alpha_{6}}, \end{eqnarray} where \begin{eqnarray}\nonumber \gamma_{1}&=&1-\frac{\alpha_{7}}{\alpha_{8}},\quad \gamma_{2}=1-\frac{\alpha_{3}^{2}}{\alpha_{8}},\quad \gamma_{3}=\alpha_{4}\left(1-\frac{1}{\alpha_{8}}\right),\quad \gamma_{4}=\alpha_{5}\left(1-\frac{\alpha_{6}^{2}}{\alpha_{8}}\right), \\\nonumber\gamma_{5}&=&1-\frac{\alpha_{8}}{\alpha_{7}},\quad \gamma_{6}=1-\frac{\alpha_{3}^{2}}{\alpha_{7}},\quad \gamma_{7}=\gamma_{4}\left(1-\frac{1}{\alpha_{7}}\right),\quad \gamma_{8}=\gamma_{5}\left(1-\frac{\alpha_{6}^{2}}{\alpha_{7}}\right), \\\nonumber\alpha_{7}&=&\frac{d_{2}}{6\beta}\left[6d_{2}\beta -3\beta+2(\alpha_{1}+\alpha_{2})\right],\quad\alpha_{8} =\frac{d_{2}}{6\beta}\left[6d_{2}\beta-3\beta+2(\alpha_{1}-\alpha_{2})\right]. \end{eqnarray} \begin{figure} \epsfig{file=fig9.eps, width=0.5\linewidth}\epsfig{file=fig10.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{4p}). The left plot is for $d_{1}=d_{3}=1$ and right for $d_{1}=1$ and $d_{3}=-1$.} \end{figure} \begin{figure} \epsfig{file=fig11.eps, width=0.5\linewidth}\epsfig{file=fig12.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{4p}). The left plot is for $d_{1}=-1$ with $d_{3}=1$ and right for $d_{1}=d_{3}=-1$.} \end{figure} The validity of GSLT for models (\ref{4p}) and (\ref{5p}) depends on five parameters $d_{1},~d_{2},~d_{3},~\zeta$ and $t$. Figures \textbf{5} and \textbf{6} show the behavior of GSLT for the power-law reconstructed $f(\mathcal{G},T)$ model (\ref{4p}).
We examine the validity against $\zeta$ and $t$ with $a_{0}=1,~\rho_{0}=0.3,~\beta=\frac{2}{3}$ and $d_{2}=-1.64285$, while the behavior of the remaining two integration constants $d_{1}$ and $d_{3}$ is investigated for four possible choices of their signatures. Figure \textbf{5} shows that the GSLT is satisfied for both cases $d_{3}>0$ and $d_{3}<0$ with $d_{1}>0$ in the considered interval of parameters $(\zeta,t)$. We also find that the validity region decreases as the values of the integration constants increase positively as well as negatively. Figure \textbf{6} shows that the GSLT does not hold for model (\ref{4p}) for either $d_{3}>0$ or $d_{3}<0$ with $d_{1}<0$. From both figures, it is found that the signature of $d_{1}$ has a dominant effect on the validity of GSLT as compared to $d_{3}$. \begin{figure} \epsfig{file=fig13.eps, width=0.5\linewidth}\epsfig{file=fig14.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{5p}). The left plot is for $d_{1}=d_{3}=1$ and right for $d_{1}=1$ and $d_{3}=-1$.} \end{figure} \begin{figure} \epsfig{file=fig15.eps, width=0.5\linewidth}\epsfig{file=fig16.eps, width=0.5\linewidth}\caption{Validity of GSLT for the model (\ref{5p}). The left plot is for $d_{1}=-1$ with $d_{3}=1$ and right for $d_{1}=d_{3}=-1$.} \end{figure} The validity of GSLT for the model (\ref{5p}) is shown in Figures \textbf{7} and \textbf{8}. For this model, the viability of the law again depends on five parameters $d_{1},~d_{2},~d_{3},~\zeta$ and $t$ while we set $a_{0}=1,~\rho_{0}=0.3,~\beta=\frac{2}{3}$ and $d_{2}=7.5$. The left panel shows that this law is satisfied for all values of $\zeta$ at initial times as well as when $\zeta$ approaches $1$ with $t\geq27$ for the case $(d_{1},d_{3})>0$, while the feasible region for $d_{1}>0$ and $d_{3}<0$ is shown in the right plot. Similarly, Figure \textbf{8} shows the regions where GSLT holds for the remaining two signature choices of $(d_{1},d_{3})$.
In this case, we observe that the validity of GSLT holds for all four possible choices of the integration constants in specific ranges of $\zeta$ and $t$. \section{Concluding Remarks} In this paper, we have investigated the first and second laws of thermodynamics in the non-equilibrium description and also checked the validity of GSLT for reconstructed models in $f(\mathcal{G},T)$ gravity. The thermodynamical laws are studied at the apparent horizon of the FRW universe model with any spatial curvature parameter $k$. We have found that the total entropy in the first law of thermodynamics involves contributions from the horizon entropy in terms of area and the entropy production term. The second term appears due to the non-equilibrium behavior, which implies that some energy is exchanged between the outside and inside of the apparent horizon. It is worth mentioning here that no such auxiliary entropy production term appears in GR, GB, Lovelock and braneworld theories of gravity \cite{12,13}. We have found the general expression for the validity of GSLT in terms of the horizon entropy, the entropy production term as well as the entropy corresponding to all matter and energy contents inside the horizon. For the non-equilibrium picture of thermodynamics, it is assumed that the temperature associated with all matter and energy contents inside the horizon is always positive and smaller than the temperature at the apparent horizon. It is found that the viability condition for this law is consistent with the universal condition for its validity in modified theories of gravity \cite{14}. We have also investigated the validity condition of GSLT for the equilibrium description of thermodynamics. The validity of this law for the reconstructed $f(\mathcal{G},T)$ models (de Sitter universe and power-law solution) for the dust fluid \cite{5,8} is also studied. The results can be summarized as follows.
\begin{itemize} \item For the de Sitter reconstructed models, it is found that the GSLT holds for model (\ref{4c}) when the integration constants $(c_{1},c_{2})$ have the same signatures, while for the second model (\ref{5c}), the feasible regions are obtained for the opposite signatures (Figures \textbf{1}-\textbf{4}). \item For the power-law reconstructed models, valid results are found when the integration constant $d_{1}$ is positive for the model (\ref{4p}), while for the model (\ref{5p}), the law holds for all possible choices of $d_{1}$ and $d_{3}$ (Figures \textbf{5}-\textbf{8}). \end{itemize} We conclude that the validity condition of GSLT is true for both reconstructed de Sitter and power-law $f(\mathcal{G},T)$ models with suitable choices of the free parameters. \vspace{1.0cm} {\bf Acknowledgment} \vspace{0.25cm} We would like to thank the Higher Education Commission, Islamabad, Pakistan for its financial support through the {\it Indigenous Ph.D. 5000 Fellowship Program Phase-II, Batch-III.}\\\\ \textbf{The authors have no conflict of interest.}
\section{Introduction} As a natural extension of image style transfer, video style transfer has recently gained interest among researchers~\cite{chen2017coherent,gupta2017characterizing,huang2017real,ruder2016artistic,ruder2018artistic,anderson2016deepmovie,chen2018stereoscopic}. Although some image style transfer methods~\cite{johnson2016perceptual,dumoulin2016learned} have achieved real-time processing speed, i.e. around or above 24 frames per second (FPS), one of the most critical issues in their stylization results is high temporal inconsistency. Temporal inconsistency, sometimes called incoherence, can be observed visually as flickering between consecutive stylized frames and inconsistent stylization of moving objects~\cite{chen2017coherent}. Figures \ref{coherence}(a) and (b) demonstrate temporal inconsistency in video style transfer. To mitigate this effect, optimization methods guided by optical flows and occlusion masks were proposed~\cite{anderson2016deepmovie,ruder2016artistic}. Although these methods can generate smooth and coherent stylized videos, it generally takes several minutes to process each video frame due to optimization on the fly. Some recent models~\cite{chen2017coherent,gupta2017characterizing,huang2017real,ruder2018artistic} improved the speed of video style transfer using optical flows and occlusion masks explicitly or implicitly, yet they failed to simultaneously guarantee real-time processing speed, high perceptual style quality, and coherent stylization. In this paper, we propose ReCoNet, a real-time coherent video style transfer network, as a solution to the aforementioned problem. ReCoNet is a feed-forward neural network which can generate coherent stylized videos with rich artistic strokes and textures at real-time speed. It stylizes videos frame by frame through an encoder and a decoder, and uses a VGG loss network~\cite{johnson2016perceptual,simonyan2014very} to capture the perceptual style of the transfer target.
It also incorporates optical flows and occlusion masks as guidance in its temporal loss to maintain temporal consistency between consecutive frames, and the effects can be observed in Figure \ref{coherence}(c). In the inference stage, ReCoNet can run far above the real-time standard on modern GPUs due to its lightweight and feed-forward network design. \begin{figure}[t] \centering \includegraphics[width=10cm]{pics/coherence.pdf} \caption{Temporal inconsistency in video style transfer. (a) The style target \textit{Edtaonisl} (Francis Picabia, 1913) and two consecutive video frames from Videvo.net~\cite{videvonet} (b) Style transfer results by Chen~\textit{et al}~\cite{chen2017coherent} (c) Style transfer results by ReCoNet. The circled regions show that our model can better suppress temporal inconsistency, while Chen~\textit{et al}'s model generates inconsistent color and noticeable flickering effects} \label{coherence} \end{figure} We find that the brightness constancy assumption~\cite{horn1974determining} in optical flow estimation may not strictly hold in real-world videos and animations, and there exist luminance differences on traceable pixels between consecutive image frames. Such luminance differences cannot be captured by temporal losses purely based on optical flows. To consider the luminance difference, we further introduce a luminance warping constraint in our temporal loss. From stylization results of previous methods~\cite{chen2017coherent,gupta2017characterizing,huang2017real,ruder2016artistic,ruder2018artistic}, we have also observed instability such as different color appearances of the same moving object in consecutive frames. With the intuition that the same object should possess the same features in high-level feature maps, we apply a feature-map-level temporal loss to our encoder. This further improves temporal consistency of our model. 
In summary, our paper makes the following contributions: \begin{itemize} \item Our model jointly incorporates perceptual style and temporal consistency in the stylized video. With a new feed-forward network design, it can achieve an inference speed over 200 FPS on a single modern GPU. Our model can reproduce various artistic styles on videos with stable results. \item We first propose a luminance warping constraint in the output-level temporal loss to specifically consider luminance changes of traceable pixels in the input video. This constraint can improve stylizing stability in areas with illumination effects and help suppress overall temporal inconsistency. \item We first propose a feature-map-level temporal loss to penalize variations in high-level features of the same object in consecutive frames. This improves stylizing stability of traceable objects in video scenes. \end{itemize} In this paper, related work for image and video style transfer will first be reviewed in Section 2. Detailed motivations, network architecture, and loss functions will be presented in Section 3. In Section 4, the experiment results will be reported and analyzed, where our model shows outstanding performance. \section{Related Work} \label{sec:related_work} Gatys~\textit{et al}~\cite{gatys2015neural,gatys2016image} first developed a neural algorithm for automatic image style transfer, which iteratively refines a random noise image into a stylized image under the constraint of a content loss and a style loss. This method inspired many later image style transfer models~\cite{johnson2016perceptual,dumoulin2016learned,selim2016painting,champandard2016semantic,ulyanov2016texture,luan2017deep,gatys2017controlling,chen2017stylebank,liao2017visual}. One of the most successful successors is the feed-forward perceptual losses model proposed by Johnson~\textit{et al}~\cite{johnson2016perceptual}, using a pre-trained VGG network~\cite{simonyan2014very} to compute perceptual losses.
Although their model has achieved both favorable perceptual quality and near real-time inference speed, severe flickering artifacts can be observed when applying this method frame by frame to videos, since temporal stability is not considered. Afterwards, Anderson~\textit{et al}~\cite{anderson2016deepmovie} and Ruder~\textit{et al}~\cite{ruder2016artistic} introduced a temporal loss function in video stylization as an explicit consideration of temporal consistency. The temporal loss involves optical flows and occlusion masks and is iteratively optimized for each frame until the loss converges. However, it generally takes several minutes for their models to process each video frame, which makes them inapplicable for real-time usage. Although Ruder~\textit{et al}~\cite{ruder2018artistic} later accelerated the inference speed, their stylization still runs far below the real-time standard. To obtain a consistent and fast video style transfer method, some real-time or near real-time models have recently been developed. Chen~\textit{et al}~\cite{chen2017coherent,chen2018stereoscopic} proposed a recurrent model that uses the feature maps of the previous frame in addition to the input consecutive frames, and involves explicit optical flow warping on feature maps in both the training and inference stages. Since this model requires optical flow estimation by \textit{FlowNetS}~\cite{dosovitskiy2015flownet} in the inference stage, its inference speed barely reaches the real-time level and the temporal consistency is susceptible to errors in optical flow estimation. Gupta~\textit{et al}~\cite{gupta2017characterizing} also proposed a recurrent model, which takes the stylized previous frame as an additional input. Although their model performs similarly to Chen~\textit{et al}'s model in terms of temporal consistency, it suffers from transparency issues and still barely reaches real-time inference speed.
Using a feed-forward network design, Huang~\textit{et al}~\cite{huang2017real} proposed a model similar to the perceptual losses model~\cite{johnson2016perceptual} with an additional temporal loss. This model is faster since it neither estimates optical flows nor uses information of previous frames in the inference stage. However, Huang~\textit{et al}'s model calculates the content loss from a deeper layer $relu4\_2$, which makes it hard to capture low-level features. Strokes and textures are also weakened in their stylization results due to a low weight ratio between the perceptual losses and the temporal loss. Noticing the strengths and weaknesses of these models, we propose several improvements in ReCoNet. Compared with Chen~\textit{et al}~\cite{chen2017coherent}'s model, our model does not estimate optical flows but involves ground-truth optical flows only in loss calculation in the training stage. This avoids optical flow prediction errors and accelerates the inference speed. Meanwhile, our model can render style patterns and textures much more conspicuously than Huang~\textit{et al}~\cite{huang2017real}'s model, which could only generate minor visual patterns and strokes besides color adjustment. Our lightweight and feed-forward network can run faster than all video stylization models mentioned above~\cite{chen2017coherent,gupta2017characterizing,huang2017real,ruder2016artistic,ruder2018artistic,anderson2016deepmovie}. \section{Method} \begin{figure}[t] \centering \includegraphics[width=12cm]{pics/pipeline.pdf} \caption{The pipeline of ReCoNet. $I_t, F_t, O_t$ denote the input image, encoded feature maps, and stylized output image at time frame $t$. $M_t$ and $W_t$ denote the occlusion mask and the optical flow between time frames $t-1$ and $t$. $Style$ denotes the artistic style image. The dashed box represents the prediction results of the previous frame, which will only be used in the training process.
Red arrows and text denote loss functions} \label{pipeline} \end{figure} The training pipeline of ReCoNet is shown in Figure \ref{pipeline}. ReCoNet consists of three modules: an encoder that converts input image frames to encoded feature maps, a decoder that generates stylized images from feature maps, and a VGG-16~\cite{simonyan2014very} loss network to compute the perceptual losses. Additionally, a multi-level temporal loss is added to the output of the encoder and the output of the decoder to reduce temporal incoherence. In the inference stage, only the encoder and the decoder will be used to stylize videos frame by frame. \subsection{Motivation} \label{sec:motivation} \subsubsection{Luminance Difference} In real-world videos, the luminance and color appearances can be different on the same object in consecutive frames due to illumination effects. In such cases, the data does not satisfy the assumption known as the brightness constancy constraint~\cite{horn1974determining}, and direct optical flow warping will ignore luminance changes in traceable pixels~\cite{dosovitskiy2015flownet,weinzaepfel2013deepflow,ilg2017flownet}. In animations, many datasets use the albedo pass to calculate ground-truth optical flows but later add illuminations, including smooth shading and specular reflections, to the final image frames; the MPI Sintel Dataset~\cite{butler2012naturalistic} is one such example. This also results in differences in luminance and color appearances. To further examine the illumination difference, we computed the absolute value of the temporal warping error $I_t - W_t(I_{t-1})$ over MPI Sintel Dataset and 50 real-world videos downloaded from Videvo.net~\cite{videvonet}, where $W$ is the forward optical flow and $I$ is the input image frame. We used \textit{FlowNet2}~\cite{ilg2017flownet} to calculate optical flows and the method of Sundaram~\textit{et al}~\cite{sundaram2010dense} to obtain occlusion masks for the downloaded videos.
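As a concrete illustration of this measurement, the following NumPy sketch computes the masked warping error $|I_t - W_t(I_{t-1})|$. The function names are ours, and the nearest-neighbour lookup is a deliberate simplification of the bilinear warping typically used in practice:

```python
import numpy as np

def warp_previous(prev, flow):
    """Backward-warp the previous frame to the current one using a (2, H, W)
    optical flow (x-displacement first). Nearest-neighbour lookup -- a
    simplification of bilinear warping, for illustration only."""
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys - flow[1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - flow[0]).astype(int), 0, w - 1)
    return prev[src_y, src_x]

def temporal_warping_error(curr, prev, flow, mask):
    """Per-pixel |I_t - W_t(I_{t-1})| on traceable pixels (mask = 1)."""
    return np.abs(curr - warp_previous(prev, flow)) * mask[..., None]
```

With a frame pair that differs only by a uniform one-pixel shift, the masked error is zero everywhere, as expected for traceable pixels without illumination change.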
Figure \ref{colorspace} demonstrates the histograms of temporal warping error in both RGB and XYZ color spaces. We can draw two conclusions based on the results. First, the RGB channels share similar warping error distributions; there is no bias toward any particular color channel. Second, besides changes in the relative luminance channel Y, the chromaticity channels X and Z in XYZ color space also contribute to the total inter-frame difference. However, since there is no exact guideline for chromaticity mapping in a particular style, we mainly consider luminance difference in our temporal loss. \begin{figure}[t] \centering \includegraphics[width=12cm]{pics/colorspace.pdf} \caption{Histograms of temporal warping error in different datasets and color spaces} \label{colorspace} \end{figure} Based on our findings, we propose a novel luminance constraint in our temporal loss to encourage the stylized frames to have the same luminance changes as the input frames. This can reduce unstable color changes under illumination effects and improve the temporal consistency of the stylized frames. Experiments in Section \ref{sec:ablation} show that this new constraint can bring significant improvements to the output perceptual quality and temporal stability. \subsubsection{Feature-map-level Temporal Loss} Another new loss function we propose, for feature-map-level consistency, is based on the intuition that the same object should preserve the same representation in high-level feature maps. Although warping frames directly at the output level may not be accurate due to illuminations, the same method can be very suitable at the feature-map level, as examined by Chen~\textit{et al}~\cite{chen2017coherent}. We use ground-truth optical flows and occlusion masks to calculate the feature-map-level temporal loss between the warped feature maps and the current ones. Experiments in Section \ref{sec:ablation} show that this new loss can improve stylization consistency on the same object in consecutive frames.
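The intuition above can be sketched as a masked squared distance between the current feature maps and the warped feature maps of the previous frame. This is a simplified NumPy sketch with names of our own choosing; the loss itself is defined formally in the Loss Functions subsection:

```python
import numpy as np

def feature_temporal_loss(feat_t, warped_feat_prev, mask):
    """Masked mean squared distance between the current feature maps and the
    flow-warped feature maps of the previous frame.
    feat_* : (C, H, W) arrays; mask : (H, W), 1 = traceable, 0 = occluded."""
    c, h, w = feat_t.shape
    sq_diff = (feat_t - warped_feat_prev) ** 2
    return float((mask * sq_diff).sum() / (c * h * w))
```

When the warped previous features match the current ones on all traceable pixels, the loss vanishes, which is exactly the "same object, same representation" property the loss encourages.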
\subsection{Network Architecture} ReCoNet adopts a pure CNN-based design. Compared to feed-forward networks in the literature~\cite{huang2017real,johnson2016perceptual}, we separate the whole network into an encoder and a decoder for different purposes. The encoder is designed to encode image frames to feature maps with aggregated perceptual information, and the feature-map-level temporal loss is computed on its output. The decoder is designed to decode feature maps to a stylized image, where we compute the output-level temporal loss. Table \ref{encoder_decoder} shows our encoder and decoder design. There are three convolutional layers and four residual blocks~\cite{he2016deep} in the encoder, and two up-sampling convolutional layers with a final convolutional layer in the decoder. We use an up-sample layer and a convolutional layer instead of one traditional deconvolutional layer in the decoder to reduce checkerboard artifacts~\cite{odena2016deconvolution}. We adopt instance normalization~\cite{ulyanov2016instance} after each convolution process to attain better stylization quality. Reflection padding is used at each convolutional layer. The loss network is a VGG-16 network~\cite{simonyan2014very} pre-trained on the ImageNet dataset~\cite{deng2009imagenet}. For each iteration, the VGG-16 network processes the input image frame, output image frame, and style target independently. The content and style losses are then computed based on the generated image features. \setlength{\tabcolsep}{4pt} \begin{table}[t] \caption{Network layer specification. Layer and output sizes are denoted as channel $\times$ height $\times$ width.
\textit{Conv, Res, InsNorm, ReLU, Tanh} denote convolutional layer, residual block~\cite{he2016deep}, instance normalization layer~\cite{ulyanov2016instance}, ReLU activation layer~\cite{nair2010rectified}, and Tanh activation layer respectively} \begin{center} \begin{tabular}{| c | c| c | c |} \hline Layer & Layer Size & Stride & Output Size\\\hline \multicolumn{4}{|c|}{Encoder}\\\hline Input & & & $3 \times 640 \times 360$\\ Conv + InsNorm + ReLU & $48 \times 9 \times 9$ & 1 & $48 \times 640 \times 360$\\ Conv + InsNorm + ReLU & $96 \times 3 \times 3$ & 2 & $96 \times 320 \times 180$\\ Conv + InsNorm + ReLU & $192 \times 3 \times 3$ & 2 & $192 \times 160 \times 90$\\ (Res + InsNorm + ReLU ) $\times 4$ & $192 \times 3 \times 3$ & 1 & $192 \times 160 \times 90$\\\hline \multicolumn{4}{|c|}{Decoder}\\\hline Up-sample & & 1/2 & $192 \times 320 \times 180$\\ Conv + InsNorm + ReLU & $96 \times 3 \times 3$ & 1 & $96 \times 320 \times 180$\\ Up-sample & & 1/2 & $96 \times 640 \times 360$\\ Conv + InsNorm + ReLU & $48 \times 3 \times 3$ & 1 & $48 \times 640 \times 360$\\ Conv + Tanh & $3 \times 9 \times 9$ & 1 & $3 \times 640 \times 360$\\\hline \end{tabular} \end{center} \label{encoder_decoder} \end{table} \setlength{\tabcolsep}{1.4pt} \subsection{Loss functions} Our multi-level temporal loss design focuses on temporal coherence at both high-level feature maps and the final stylized output. At the feature-map level, a strict optical flow warping is adopted to achieve temporal consistency of traceable pixels in high-level features. At the output level, an optical flow warping with a luminance constraint is used to simulate both the movements and luminance changes of traceable pixels. The perceptual losses design is inherited from the perceptual losses model~\cite{johnson2016perceptual}. A two-frame synergic training mechanism~\cite{huang2017real} is used in the training stage. 
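As a quick sanity check on the layer specification in Table \ref{encoder_decoder}: with reflection ("same") padding, only the stride or the up-sampling factor changes the spatial size, so the output sizes can be traced with simple arithmetic. This is a sketch with names of our own choosing:

```python
# Trace (channel, height, width) through ReCoNet's encoder/decoder, assuming
# 'same' padding so only the stride (or 2x up-sampling) changes spatial size.
def trace_shapes(hw, stages):
    """stages: list of (channels, stride) pairs; stride 0.5 denotes a 2x up-sample."""
    h, w = hw
    shapes = []
    for ch, s in stages:
        h, w = int(h / s), int(w / s)
        shapes.append((ch, h, w))
    return shapes

encoder = [(48, 1), (96, 2), (192, 2), (192, 1)]  # three convs + residual blocks
decoder = [(96, 0.5), (48, 0.5), (3, 1)]          # two upsample-convs + final conv
```

Starting from a $640 \times 360$ input, the traced shapes reproduce the sizes listed in the table, ending at $3 \times 640 \times 360$.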
For each iteration, the network generates feature maps and stylized output of the first image frame and the second image frame in two runs. Then, the temporal losses are computed using the feature maps and stylized output of both frames, and the perceptual losses are computed on each frame independently and summed up. Note again that in the inference stage, only one image frame will be processed by the network in a single run. \subsubsection{Output-level Temporal loss} The temporal losses in previous works~\cite{chen2017coherent,gupta2017characterizing,huang2017real,ruder2016artistic,ruder2018artistic} usually ignore changes in luminance of traceable pixels. Taking this issue into account, the \textit{relative luminance} $Y=0.2126R+0.7152G+0.0722B$, same as Y in XYZ color space, is added as a warping constraint for all channels in RGB color space: \begin{equation} \label{eq:temp_o} \mathcal{L}_{temp, o}(t-1, t) = \sum_c \frac{1}{D} M_t \lVert (O_t - W_t(O_{t-1}))_c - (I_t - W_t(I_{t-1}))_Y \rVert^2 \end{equation} where $c \in \{R, G, B\}$ is each of the RGB channels of the image, $Y$ the relative luminance channel, $O_{t-1}$ and $O_t$ the stylized images for the previous and current input frames respectively, $I_{t-1}$ and $I_t$ the previous and current input frames respectively, $W_t$ the ground-truth forward optical flow, $M_t$ the ground-truth forward occlusion mask (1 at traceable pixels, 0 at untraceable pixels), and $D = H \times W$ the product of the height $H$ and width $W$ of the input/output image. We apply the relative luminance warping constraint to each RGB channel equally based on the ``no bias'' conclusion in Section \ref{sec:motivation}. Section \ref{sec:ablation} further discusses different choices of the luminance constraint and the output-level temporal loss.
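A minimal NumPy sketch of Formula \ref{eq:temp_o}, assuming images in $[0, 1]$ and precomputed warped frames; the function names are ours:

```python
import numpy as np

# BT.709 relative luminance weights, as used in the paper's constraint.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def output_temporal_loss(o_t, warped_o_prev, i_t, warped_i_prev, mask):
    """Sketch of the output-level temporal loss: the change in each RGB
    channel of the stylized frames is penalized unless it matches the
    luminance change of the inputs. Images (H, W, 3); mask (H, W); D = H*W."""
    h, w, _ = o_t.shape
    luma_change = (i_t - warped_i_prev) @ LUMA  # (H, W)
    loss = 0.0
    for c in range(3):
        diff = (o_t[..., c] - warped_o_prev[..., c]) - luma_change
        loss += float((mask * diff ** 2).sum() / (h * w))
    return loss
```

When the stylized frames change by exactly the luminance change of the inputs, the loss vanishes, which is the behavior the constraint is designed to reward under illumination effects.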
\subsubsection{Feature-map-level Temporal loss} The feature-map-level temporal loss penalizes temporal inconsistency on the encoded feature maps between two consecutive input image frames: \begin{equation} \mathcal{L}_{temp, f}(t-1, t) = \frac{1}{D} M_t \lVert F_t - W_t(F_{t-1}) \rVert^2 \end{equation} where $F_{t-1}$ and $F_t$ are the feature maps output by the encoder for the previous and current input frames respectively, $W_t$ and $M_t$ the ground-truth forward optical flow and occlusion mask downscaled to the size of the feature maps, and $D = C \times H \times W$ the product of the channel size $C$, image height $H$, and image width $W$ of the encoded feature maps $F$. We use downscaled optical flows and occlusion masks to simulate temporal motions in high-level features. \subsubsection{Perceptual Losses} We adopt the content loss $\mathcal{L}_{content}(t)$, the style loss $\mathcal{L}_{style}(t)$ and the total variation regularizer $\mathcal{L}_{tv}(t)$ in the perceptual losses model~\cite{johnson2016perceptual} for each time frame $t$. The content loss and the style loss utilize feature maps at the $relu3\_3$ layer and the $[relu1\_2, relu2\_2, relu3\_3, relu4\_3]$ layers respectively. \subsubsection{Summary} The final loss function for the two-frame synergic training is: \begin{multline} \mathcal{L}(t-1, t)= \sum_{i \in \{t-1, t\}}(\alpha \mathcal{L}_{content}(i) + \beta \mathcal{L}_{style}(i) + \gamma \mathcal{L}_{tv}(i))\\ + \lambda_f \mathcal{L}_{temp, f}(t-1, t) + \lambda_o \mathcal{L}_{temp, o}(t-1, t) \end{multline} where $\alpha, \beta, \gamma, \lambda_f$ and $\lambda_o$ are hyper-parameters for the training process. \section{Experiments} \subsection{Implementation Details} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{pics/styles.pdf} \caption{Video style transfer results using ReCoNet. The first column contains two groups of three consecutive image frames in videos downloaded from Videvo.net~\cite{videvonet}.
Each group of video frames is followed by two style target images and the corresponding stylized results of the video frames. The styles are \textit{Mosaic}, \textit{Dream}, \textit{Autoportrait} (Picasso, 1907), and \textit{Candy}} \label{styles} \end{figure} We use Monkaa and FlyingThings3D in the Scene Flow datasets~\cite{mayer2016large} as the training dataset, and the MPI Sintel dataset~\cite{butler2012naturalistic} as the testing dataset. The Scene Flow datasets provide optical flows and motion boundaries for each pair of consecutive frames, from which we can also obtain occlusion masks using the method provided by Sundaram~\textit{et al}~\cite{sundaram2010dense}. The Monkaa dataset is extracted from the animation movie Monkaa and contains around 8640 frames, resembling the MPI Sintel dataset. The FlyingThings3D dataset is a large dataset of everyday objects flying along random 3D trajectories and contains around 20150 frames, resembling animated and real-world complex scenes. Following the verification process of previous works~\cite{chen2017coherent,gupta2017characterizing,huang2017real}, we use the MPI Sintel dataset to verify the temporal consistency and perceptual styles of our stylization results. All image frames are resized to $640 \times 360$. We train the model with a batch size of 2 for 30,000 steps, roughly two epochs over the training dataset. We pair up consecutive frames for the two-frame synergic training and adopt random horizontal flips on each pair. The frame pairs are shuffled in the training process. We use the Adam optimizer~\cite{kingma2015adam} with a learning rate of $ 10^{-3}$, and set the default training hyper-parameters to be $\alpha = 1, \beta = 10, \gamma = 10^{-3}, \lambda_f = 10^7, \lambda_o = 2 \times 10^3$. We implement our style transfer pipeline on PyTorch 0.3~\cite{paszkepytorch} with cuDNN 7~\cite{chetlur2014cudnn}. All tensor calculations are performed on a single GTX 1080 Ti GPU. Further details of the training process can be found in our supplementary materials.
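As an illustration, the combined two-frame objective with these default weights can be sketched as follows. The individual loss values are placeholders here; in training they come from the VGG-16 loss network and the two temporal losses:

```python
# Two-frame training objective with the paper's default hyper-parameters.
WEIGHTS = dict(alpha=1.0, beta=10.0, gamma=1e-3, lambda_f=1e7, lambda_o=2e3)

def total_loss(per_frame, temp_f, temp_o, w=WEIGHTS):
    """per_frame: (content, style, tv) loss tuples for frames t-1 and t;
    temp_f, temp_o: feature-map-level and output-level temporal losses."""
    perceptual = sum(w["alpha"] * c + w["beta"] * s + w["gamma"] * tv
                     for c, s, tv in per_frame)
    return perceptual + w["lambda_f"] * temp_f + w["lambda_o"] * temp_o
```

The large $\lambda_f$ and $\lambda_o$ reflect that the temporal losses are averaged over many pixels and are numerically small relative to the perceptual terms.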
We also download 50 videos from Videvo.net~\cite{videvonet} to verify our generalization capacity on real-world videos. Figure \ref{styles} shows style transfer results of four different styles on three consecutive video frames. We observe that the color, strokes and textures of the style target can be successfully reproduced by our model, and the stylized frames are visually coherent. \subsection{Comparison to Methods in the Literature} \label{sec:comparison} \subsubsection{Quantitative Analysis} \setlength{\tabcolsep}{4pt} \begin{table}[t] \caption{Temporal error $e_{stab}$ and average FPS in the inference stage with style \textit{Candy} on different models. Five scenes from MPI Sintel Dataset are selected for validation} \begin{center} \begin{tabular}{| c || c | c | c | c | c || c |}\hline Model & Alley-2 & Ambush-5 & Bandage-2 & Market-6 & Temple-2 & FPS\\\hline Chen~\textit{et al}~\cite{chen2017coherent} & 0.0934 & 0.1352 & 0.0715 & 0.1030 & 0.1094 & 22.5\\\hline ReCoNet & 0.0846 & 0.0819 & 0.0662 & 0.0862 & 0.0831 & 235.3\\\hline Huang~\textit{et al}~\cite{huang2017real} & 0.0439 & 0.0675 & 0.0304 & 0.0553 & 0.0513 & 216.8 \\\hline Ruder~\textit{et al}~\cite{ruder2016artistic} & 0.0252 & 0.0512 & 0.0195 & 0.0407 & 0.0361 & 0.8\\\hline \end{tabular} \end{center} \label{quant_experiment} \end{table} \setlength{\tabcolsep}{1.4pt} Table \ref{quant_experiment} shows the temporal error $e_{stab}$ of four video style transfer models on five scenes in MPI Sintel Dataset with style \textit{Candy}. $e_{stab}$ is the square root of the output-level temporal error over one whole scene: \begin{gather} e_{stab} = \sqrt{ \frac{1}{T-1} \sum_{t=2}^{T} \frac{1}{D} M_t \lVert O_t - W_t(O_{t-1}) \rVert^2 } \end{gather} where $T$ is the total number of frames. Other variables are identical to those in the output-level temporal loss. This error function verifies the temporal consistency of traceable pixels in the stylized output. All scene frames are resized to $640 \times 360$.
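The metric can be sketched in NumPy as follows, assuming the warped previous outputs and occlusion masks are precomputed; the helper names are ours:

```python
import numpy as np

def temporal_error(outputs, warped_prev, masks):
    """e_stab: RMS of the masked output-level warping error over a scene.
    outputs[k], warped_prev[k] : (H, W, 3) stylized frame t and warped
    stylized frame t-1 for each consecutive pair; masks[k] : (H, W)."""
    total = 0.0
    for o_t, w_o, m in zip(outputs, warped_prev, masks):
        h, w, _ = o_t.shape
        total += (m[..., None] * (o_t - w_o) ** 2).sum() / (h * w)
    return float(np.sqrt(total / len(outputs)))
```

A perfectly stable stylization (each warped previous output matching the current one on traceable pixels) gives $e_{stab} = 0$.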
We use a single GTX 1080 Ti GPU for computation acceleration. From the table, we observe that Ruder~\textit{et al}~\cite{ruder2016artistic}'s model is not suitable for real-time usage due to its low inference speed, despite having the lowest temporal error among all models in our comparison. Among the remaining models that reach the real-time standard, our model achieves lower temporal error than Chen~\textit{et al}~\cite{chen2017coherent}'s model, primarily because of the introduction of the multi-level temporal loss. Although our temporal error is higher than Huang~\textit{et al}~\cite{huang2017real}'s model, our model is capable of capturing strokes and minor textures in the style image while Huang~\textit{et al}'s model could not. Please refer to the qualitative analysis below for details. Another finding is that ReCoNet and Huang~\textit{et al}'s model achieve far better inference speed than the others. Compared with recurrent models~\cite{chen2017coherent,gupta2017characterizing,ruder2018artistic}, feed-forward models are easier to accelerate with parallelism since the current iteration does not need to wait for the previous frame to be fully processed. \subsubsection{Qualitative Analysis} We examine our style transfer results qualitatively against the other real-time models proposed by Chen~\textit{et al}~\cite{chen2017coherent} and Huang~\textit{et al}~\cite{huang2017real}. \begin{figure}[t] \centering \includegraphics[width=11cm]{pics/vsliterature.pdf} \caption{Qualitative comparison of style transfer results in the literature. (a) Style transfer results between Huang~\textit{et al}~\cite{huang2017real}'s model and ReCoNet on image frames. (b) Style transfer results between Chen~\textit{et al}~\cite{chen2017coherent}'s model and ReCoNet on consecutive image frames with zoom-ins of flickering regions} \label{vsliterature} \end{figure} \setlength{\tabcolsep}{4pt} \begin{table}[t] \caption{User study result.
In each of the two comparisons, we aggregate the results of all four video clips for the three questions. ``Same'' means the voter finds the results of both models similar to each other or finds it hard to prefer one over the other} \begin{center} \begin{tabular}{| c | c | c | c || c | c | c | c |}\hline Models & Q1 & Q2 & Q3 & Models & Q1 & Q2 & Q3\\\hline ReCoNet & 64 & 162 & 152 & ReCoNet & 164 & 42 & 115\\\hline Chen~\textit{et al}~\cite{chen2017coherent} & 64 & 15 & 23 & Huang~\textit{et al}~\cite{huang2017real} & 22 & 91 & 42\\\hline Same & 72 & 23 & 25 & Same & 14 & 67 & 43 \\\hline \end{tabular} \end{center} \label{user_study_result} \end{table} \setlength{\tabcolsep}{1.4pt} Figure \ref{vsliterature}(a) shows the stylization comparison between Huang~\textit{et al}'s model and ReCoNet. Although Huang~\textit{et al}'s model achieves low temporal error quantitatively and is able to capture the color information in the style image, it fails to learn much about the perceptual strokes and patterns. There are two reasons that may account for their weak perceptual styles, as shown in the two examples in the figure. First, they use a low weight ratio between perceptual losses and temporal loss to maintain temporal coherence, which noticeably reduces the quality of the output style. In ReCoNet, however, the introduction of the new temporal losses makes it possible to maintain temporal coherence with a larger perceptual-to-temporal loss ratio, leading to better-preserved perceptual styles. As shown in the first example, our stylized image reproduces the distinct color blocks in the \textit{Composition} style much better than Huang~\textit{et al}'s result, especially on the uneven sand surfaces and the sea wave. Second, Huang~\textit{et al}'s model uses feature maps from a deeper layer, $relu4\_2$, in the loss network to calculate the content loss, which makes it difficult to capture low-level features such as edges.
In the second example, although sharp bold contours are characteristic of the \textit{Girl} image, their model fails to clearly reproduce such style. Unlike Huang~\textit{et al}'s model, as shown in Figures \ref{coherence} and \ref{vsliterature}(b), Chen~\textit{et al}'s work maintains the perceptual information of both the content image and the style image well. However, from the zoom-in regions, we can find noticeable inconsistency in their stylized results, which can also be quantitatively validated by their high temporal errors. To further compare our video style transfer results with these two models, we conducted a user study. For each of the two comparisons (ReCoNet vs Huang~\textit{et al}'s and ReCoNet vs Chen~\textit{et al}'s), we chose 4 different styles on 4 different video clips downloaded from Videvo.net~\cite{videvonet}. We invited 50 people to answer (Q1) which model perceptually resembles the style image more, regarding the color, strokes, textures, and other visual patterns; (Q2) which model is more temporally consistent, with fewer flickering artifacts and more consistent color and style of the same object; and (Q3) which model is preferable overall. The voting results are shown in Table \ref{user_study_result}. Compared with Chen~\textit{et al}'s model, our model achieves much better temporal consistency while maintaining good perceptual styles. Compared with Huang~\textit{et al}'s model, our results are much better in perceptual style and overall impression, although our temporal consistency is slightly worse. This validates our previous qualitative analysis. Detailed procedures and results of the user study can be found in our supplementary materials. \subsection{Ablation Study} \label{sec:ablation} \subsubsection{Temporal Loss on Different Levels} \setlength{\tabcolsep}{4pt} \begin{table}[t] \caption{Temporal error $e_{stab}$ with style \textit{Candy} for different temporal loss settings in ReCoNet.
Five scenes from MPI Sintel Dataset are selected for validation} \begin{center} \begin{tabular}{|c || c | c | c | c | c || c |}\hline Loss Levels & Alley-2 & Ambush-5 & Bandage-2 & Market-6 & Temple-2 & Average\\\hline Feature-map only & 0.1028 & 0.1041 & 0.0752 & 0.1062 & 0.0991 & 0.0975\\\hline Output only & 0.0854 & 0.0840 & 0.0672 & 0.0868 & 0.0820 & 0.0813\\\hline Both & 0.0846 & 0.0819 & 0.0662 & 0.0862 & 0.0831 & 0.0804\\\hline \end{tabular} \end{center} \label{temporal_loss_levels} \end{table} \setlength{\tabcolsep}{1.4pt} \begin{figure}[t] \centering \includegraphics[width=11cm]{pics/featout.pdf} \caption{Temporal inconsistency in traceable objects. (a) The style target and two consecutive frames in MPI Sintel Dataset. (b) Stylized frames generated without feature-map-level temporal loss. (c) Stylized frames generated with feature-map-level temporal loss. A specific traceable region is circled for comparison} \label{featout} \end{figure} To study whether the multi-level temporal loss does help reduce temporal inconsistency and maintain perceptual style, we implement our video style transfer model on the \textit{Candy} style with three different settings: feature-map-level temporal loss only, output-level temporal loss only, and feature-map-level temporal loss plus output-level temporal loss. Table \ref{temporal_loss_levels} shows the temporal error $e_{stab}$ of these settings on five scenes in MPI Sintel Dataset. We observe that the temporal error is greatly reduced with the output-level temporal loss, while the feature-map-level temporal loss also improves temporal consistency on average. Figure \ref{featout} demonstrates a visual example of object appearance inconsistency. When only using the output-level temporal loss, the exact same object may alter its color due to changes in the surrounding environment.
With the feature-map-level temporal loss, features are preserved for the same object. \subsubsection{Luminance Difference} \begin{figure}[t] \centering \includegraphics[width=12cm]{pics/luminance.pdf} \caption{Style transfer results using three different approaches described in Section \ref{sec:ablation} to target luminance difference. The style target is \textit{Candy}, and the validation scenes are the same as in Table \ref{temporal_loss_levels} for temporal error $e_{stab}$ calculation. The total and the luminance-wise temporal error maps show the absolute value of temporal errors in all color channels and in the relative luminance channel respectively} \label{luminance} \end{figure} We compare three different approaches that do or do not take luminance difference into consideration at the output level: \begin{enumerate} \item A relative luminance warping constraint on each RGB channel (Formula \ref{eq:temp_o}); \item The output-level temporal loss computed in XYZ color space, with a relative luminance warping constraint on the Y channel: $\mathcal{L}_{temp, o} = \frac{1}{D} M_t ( \lVert (O_t - W_t(O_{t-1}))_{Y} - (I_t - W_t(I_{t-1}))_{Y} \rVert^2 + \lVert (O_t - W_t(O_{t-1}))_{X, Z}\rVert^2)$ where $X, Y, Z$ are the XYZ channels; \item No luminance constraint: $\mathcal{L}_{temp, o} = \frac{1}{D} M_t \lVert (O_t - W_t(O_{t-1}))_{R,G,B}\rVert^2$. \end{enumerate} Other variables in approaches 2 and 3 are identical to those in Formula \ref{eq:temp_o}. As shown in Figure \ref{luminance}, all three approaches can obtain pleasant perceptual styles of \textit{Candy} despite some variations in color. However, the first approach has a luminance-wise temporal error map more similar to that of the input frames compared with the other two methods, especially in the circled illuminated region. This shows that the first approach preserves luminance changes between consecutive frames similar to those in the input, and therefore leads to more natural stylized outputs.
Moreover, the total temporal error map of the first approach is also closer to zero than the results of the other two approaches, implying more stable stylized results. This is also supported numerically by the much lower overall temporal error produced by the first approach in the validation scenes. Based on both qualitative and quantitative analysis, we can conclude that adding a relative luminance warping constraint to all RGB channels generates smoother color changes in areas with illumination effects and achieves better temporal coherence. \section{Conclusions} In this paper, we present a feed-forward convolutional neural network, \textit{ReCoNet}, for video style transfer. Our model is able to generate coherent stylized videos at real-time processing speed while maintaining artistic styles perceptually similar to the style target. We propose a luminance warping constraint in the output-level temporal loss for better stylization stability under illumination effects. We also introduce a feature-map-level temporal loss to further mitigate temporal inconsistency. In future work, we plan to further investigate the possibility of utilizing both chromaticity and luminance differences in inter-frame warping results for better video style transfer algorithms. \clearpage \bibliographystyle{splncs}
\section{Introduction} Most pulsars have a narrow duty cycle of emission (5-10 \% of pulsar period). This is generally consistent with the expectations of the angular width of the polar cap, given typical viewing geometries. However, there are a small but significant number of pulsars with unusually wide profiles, for which the emission is seen over a wide range of longitudes ($\geq$ 90 degrees). This is expected from pulsars which are highly aligned, i.e. the magnetic dipole axis is almost parallel to the spin axis, and emission is seen from one magnetic pole of the pulsar. In such a case, the line of sight (LOS) is very close to both the rotation and the magnetic axes, and consequently, we sample a large region of the polar cap. This has the exciting potential for a detailed study of the distribution and behavior of emission regions located in annular rings around the magnetic axis. The study of pulsars showing systematic subpulse drift patterns provides important clues for understanding the unsolved problems of pulsar emission mechanism. Constraints from such observations can have far reaching implications for the theoretical models, as exemplified by some recent results (e.g. Deshpande \& Rankin (1999) and Gupta et al. (2004)). In this context, wide profile drifting pulsars can provide extra insights because of the presence of multiple drift bands. In this work, we have mainly concentrated on studies of two such pulsars, PSR B0818$-$41 and PSR B0826$-$34. B0818$-$41 is a relatively less studied wide profile pulsar with emission occurring for more than 180 deg of pulse longitude. It has a period of 0.545 $s$ and is relatively old, with a characteristic age of $4.57\times10^{8}$ years. The inferred dipolar magnetic field of this pulsar is $1.03\times10^{11}$ G, which is a typical value for slow pulsars. From a study of its average polarization behaviour at 660 and 1440 MHz, Qiao et al. 
(1995) predict that the pulsar must have a small inclination angle between the magnetic and rotation axes. PSR B0826$-$34 is a pulsar with one of the widest known profiles. Earlier studies of this pulsar (Durdin et al. (1979), Biggs et al. (1985), Gupta et al. (2004)) have revealed some unique properties: strong evolution of the average profile with frequency, apparent nulling for 70\% of the time, and a remarkable subpulse drift property $-$ multiple curved drift bands with frequent changes and sign reversals of the drift rate. \section{Observations and preliminary analysis} The GMRT is an aperture synthesis telescope consisting of 30 antennas, each of 45 m diameter, spread over a region of 25 km diameter. The GMRT can also be used in the tied array mode by adding the signals from the individual dishes, either coherently or incoherently (Gupta et al. 2000). We performed polarimetric observations of PSR B0818$-$41 and PSR B0826$-$34 in multiple frequency bands (157, 244, 325, 610 and 1060 MHz), at different epochs, using the phased array mode of the GMRT. Some of the observations were simultaneous at 303 and 610 MHz, and were carried out using the ``band masking'' technique, details of which are described in Bhattacharyya et al. (2008). Data were recorded at a sampling rate of 0.512 ms. During the off-line analysis the raw data were further integrated to achieve the final resolution of 2.048 ms. Average pulse profiles at different observing frequencies were obtained by dedispersion with a DM of 113.4 $pc/cm^3$ for PSR B0818$-$41, and 52.2 $pc/cm^3$ for PSR B0826$-$34. The main source of corruption of the data was found to be power line interference (50 Hz and its harmonics). The dedispersed data were put through a filtering routine which detected most (but probably not all) of the power line interference and replaced it by appropriate random noise. The dedispersed, interference-free data stretches were then synchronously folded with the topocentric pulsar period.
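To give a sense of the delay that dedispersion removes, the standard cold-plasma dispersion delay relative to infinite frequency is $t(\nu) \simeq 4.149\times10^{3}\,\mathrm{s}\times\mathrm{DM}\times\nu_{\mathrm{MHz}}^{-2}$ (constant rounded); a short sketch with helper names of our own evaluates it for the DMs quoted above:

```python
# Cold-plasma dispersion delay relative to infinite frequency:
#   t(nu) = K_DM * DM / nu_MHz**2
# K_DM ~= 4.149e3 s MHz^2 pc^-1 cm^3 is the standard pulsar constant (rounded).
K_DM = 4.149e3

def dispersion_delay(dm, f_mhz):
    """Arrival delay in seconds at frequency f_mhz for dispersion measure dm."""
    return K_DM * dm / f_mhz ** 2

def delay_between(dm, f_lo_mhz, f_hi_mhz):
    """Extra delay of the lower observing frequency relative to the higher one."""
    return dispersion_delay(dm, f_lo_mhz) - dispersion_delay(dm, f_hi_mhz)
```

For PSR B0818$-$41 (DM = 113.4), the differential delay between the 325 and 610 MHz bands is about 3.2 s, several pulse periods, which is why accurate dedispersion is essential before folding.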
Single pulse data streams were also generated from the dedispersed data, for both total intensity and full Stokes parameters. \section{Results} {\bf PSR B0818$-$41:} We confirm the remarkable subpulse drift pattern for this pulsar, first reported by us in Bhattacharyya et al. (2007). At 325 MHz, we can clearly see the simultaneous occurrence of three drift regions with two different drift rates (Fig 1a): an inner region with a steeper apparent drift rate, flanked on each side by a region of slower apparent drift rate. Similar subpulse drifting is observed in both the inner and outer regions at 610 MHz, though the drift bands are weaker (Fig 1b). The closely spaced drift bands always maintain a constant phase relationship: the subpulse emission from the inner drift region is in phase with that from the outer drift region on the right hand side, and at the same time the emission in the inner drift region is out of phase with the outer drift region situated on the left hand side. A new technique is introduced by us for resolving aliasing, utilising the constant offset ($\sim9P_1$) that is seen between the peak emission from the leading and trailing outer regions (Bhattacharyya et al. 2008b). The basic idea here is that the combination of the angular separation between the sparks in the outer ring, the time of traverse of the LOS from the leading to the trailing outer component, and the drift rate of the sparks in the outer ring should all be matched so as to produce this offset. From the results of our analysis, we find that the unaliased drift is too slow to allow this to happen. We propose that the drift rate is most likely first order aliased, and the corresponding pattern rotation period, $P_4$, is 10 s. This implies that PSR B0818$-$41 has the fastest known carousel.\\ {\bf PSR B0826$-$34:} At any given time, the simultaneous multiple subpulses present in the main pulse window follow the same drift rate and sign, at both frequencies (Figs 2a \& 2b).
The pulse regions showing different drift rates of opposite signs are always connected by a region showing a smooth transition of drift rate. The gray scale plots of the single pulses from the higher sensitivity single frequency observations at 610 and 1060 MHz show coherent drifting in the main pulse (MP) and inter pulse (IP) regions, with approximately 6 drift bands in the MP and 4 drift bands in the IP, as seen in Figs 2a \& 2b (see also Bhattacharyya et al. 2008a). The drifts in the MP and IP regions are always locked in phase. \begin{figure} \plottwo{0818_sp_reg.ps}{0818_sp_reg_610.ps} \caption{{\itshape Left (Fig 1a): Gray scale plot of the single pulse data from single frequency observations of PSR B0818$-$41 at 325 MHz.} {\itshape Right (Fig 1b): Same as Fig 1a, but at 610 MHz.}} \end{figure} \section{Interpretations} {\bf PSR B0818$-$41:} The unique drift pattern of this pulsar can be naturally explained as being created by the intersection of our LOS with two conal rings on the polar cap of a fairly aligned rotator. Based on the frequency evolution of the average profile, the observed PA swing and the results from subpulse drifting, we converged on two possible choices of emission geometry: {\bf G-1} ($\alpha=11$ deg and $\beta=-5.4$ deg) and {\bf G-2} ($\alpha=175.4$ deg and $\beta=-6.9$ deg). Whereas the features of the observed drift pattern are better reproduced by simulations of the radiation pattern using {\bf G-1}, the geometry {\bf G-2} appears to produce a better fit to the position angle swing of the linear polarisation. Though the regular drift patterns observed in Fig. 1 are quite common for this pulsar, we sometimes observe changes of the drift rates, which are almost always associated with the occurrence of nulls. However, the phase locked relation is maintained across the regions of irregular drifting or nulling.
This phase locked relation implies a common electrodynamic control between the rings.\\ {\bf PSR B0826$-$34:} The radiation pattern is interpreted to be created by the intersection of our LOS with two conal rings on the polar cap of a fairly aligned rotator (Gupta et al. (2004), Esamdin et al. (2005), Bhattacharyya et al. 2008a). The observed wide pulse profile and its remarkable evolution with frequency can be explained by the intersection of the LOS with two concentric rings of emission. At lower frequencies (e.g. 157 or 325 MHz), the LOS mostly sees the inner ring and that gives rise to the MP emission. The LOS is sufficiently far from the outer ring of emission at these frequencies that almost no inter pulse emission is observed. As the frequency increases (e.g. 610 or 1060 MHz), one starts seeing emission coming from the second, outer ring as well. As a consequence, the IP emission becomes dominant with increasing frequency. For PSR B0826$-$34 we observe frequent nulling and changes of drift rates which are simultaneous for both the inner and outer rings. Esamdin et al. (2005) applied the method of phase tracking to the single pulse data and detected a total of 13 drift bands in the pulse window. The phase locked relation between the inner and outer rings is evident from their results (Fig. 6 of Esamdin et al. (2005)). In addition, we observe significant correlation between the total energy of the main pulse and the inter pulse. With increasing pulse lag this correlation follows a similar trend as the auto correlation of the total energy of the main pulse or the inter pulse. This indicates that the conditions of the magnetosphere are similar for the inner and outer rings of emission (Bhattacharyya et al. 2008b).
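The pulse-energy correlation analysis described above amounts to computing a Pearson correlation between the MP and IP energy sequences at a range of pulse lags. A minimal sketch, using synthetic data in place of the measured energy sequences:

```python
# Pearson correlation between main-pulse (MP) and inter-pulse (IP) energy
# sequences as a function of pulse lag. The data below are synthetic:
# two noisy copies of a slow common trend, standing in for real energies.

import math, random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lag_correlation(mp, ip, max_lag):
    """Correlation of mp[i] with ip[i + lag] for lag = 0..max_lag."""
    return [pearson(mp[: len(mp) - lag], ip[lag:]) for lag in range(max_lag + 1)]

random.seed(0)
trend = [math.sin(i / 20.0) for i in range(500)]
mp = [t + 0.3 * random.gauss(0, 1) for t in trend]
ip = [t + 0.3 * random.gauss(0, 1) for t in trend]
print([round(c, 2) for c in lag_correlation(mp, ip, 3)])
```

If MP and IP share a common magnetospheric state, the zero-lag correlation is strong and decays with lag in the same way as each sequence's autocorrelation, as reported above.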
\begin{figure} \plottwo{0826_sp_610.ps}{0826_sp_1060.ps} \caption{{\itshape Left (Fig 2a): Gray scale plot of the single pulse data from single frequency observations of PSR B0826$-$34 at 610 MHz.} {\itshape Right (Fig 2b): Same as Fig 2a, but at 1060 MHz.}} \end{figure} \section{Conclusions} We have investigated multiple drift regions in the pulsars B0818$-$41 and B0826$-$34 and found that the emission from the inner and outer rings is locked in phase. Such a phase locked relation between the inner and outer rings could well be a common property for all wide profile pulsars, and maybe for all pulsars in general. The phase locked relation implies that the inner and outer rings drift with the same angular rate, that the emissions from the two rings are not independent, and that the conditions responsible for drifting are similar in both rings. This puts constraints on the theoretical models, favoring a pan-magnetospheric radiation mechanism.\\ Hence our main conclusions are:\\ $\bullet$ Wide profile pulsars provide extra insights into pulsar emission processes;\\ $\bullet$ PSR B0818$-$41 and PSR B0826$-$34 are wide profile drifting pulsars with simultaneous multiple drift regions;\\ $\bullet$ Aliasing can be resolved using the phase relationship between the leading and trailing outer regions; for PSR B0818$-$41 it indicates $P_4 \sim 10$ s, making it the fastest known carousel;\\ $\bullet$ Phase locked drift between the inner and outer rings implies a common electrodynamic control over the entire polar cap.\\
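The alias bookkeeping behind the $P_4 \sim 10$ s estimate can be sketched as follows. Subpulses are sampled once per rotation, so a drift of $d$ spark spacings per period $P_1$ is indistinguishable from a drift of $d + k$ for any integer alias order $k$, and each order implies a different carousel rotation period. This is a toy illustration only: the drift rate, spark count and period below are hypothetical placeholders, not the measured values for PSR B0818$-$41.

```python
# Toy illustration of drift-rate aliasing: a carousel drifting by d_true
# spark spacings per pulsar period is indistinguishable from one drifting
# by d_true - k (integer k), because subpulses are sampled once per P1.
# All input numbers are hypothetical placeholders, not measured values.

def p4_candidates(d_obs, n_sparks, p1, max_order=2):
    """Carousel rotation period P4 (s) for each alias order k, where the
    true drift rate is d_obs + k spark spacings per pulsar period."""
    out = {}
    for k in range(0, max_order + 1):
        d_true = d_obs + k
        out[k] = n_sparks * p1 / d_true  # time for one full carousel turn
    return out

cands = p4_candidates(d_obs=0.05, n_sparks=20, p1=0.545, max_order=2)
for k, p4 in cands.items():
    print(f"alias order {k}: P4 = {p4:.1f} s")
```

The point of the exercise: a very slow observed drift can correspond either to a very slow carousel (order 0) or, under first-order aliasing, to a much faster one, and an independent constraint such as the $\sim9P_1$ offset is needed to choose between them.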
\section{Additional details of the RGSep-based logic} The unit $u_{\sf RGsep}$ does not restrict the states or the allowed state transitions of the environment, while disallowing any action in the current thread: \[ u_{\sf RGsep} = ( \{ ([\ ], [\ ], [\ ]) \} \times ({\sf State} \times {\sf AState} \times {\sf Tokens}), ({\sf State} \times {\sf AState} \times {\sf Tokens})^2, \emptyset ) \] Since action judgements are essential for reasoning about primitive commands in our logic, we now give conditions under which they hold for views from the RGSep-based view monoid. \begin{prop}\label{prop:logic-rgaxioms} The action judgement $\sat{t}{{\sf \alpha}}{(P, R, G)}{(Q, R, G)}$ holds if: \begin{itemize} \item $\begin{multlined}[t] \forall \sigma_l, \sigma_s, \sigma'_l, \sigma'_s, \Sigma_l, \Sigma_s, \Delta_l, \Delta_s \ldotp ((\sigma_l,\Sigma_l,\Delta_l), (\sigma_s,\Sigma_s,\Delta_s)) \in P \land{} \\ \sigma'_l \bullet \sigma'_s \in \intp{t}{{\sf \alpha}}{\sigma_l \bullet \sigma_s} \implies \exists \Sigma'_l, \Sigma'_s, \Delta'_l, \Delta'_s \ldotp ((\sigma'_l,\Sigma'_l,\Delta'_l), (\sigma'_s,\Sigma'_s,\Delta'_s)) \in Q \land{}\\ ((\sigma_s,\Sigma_s,\Delta_s),(\sigma'_s,\Sigma'_s,\Delta'_s)) \in G \land \lptrans{ \Sigma_l \bullet \Sigma_s, \Delta_l \uplus \Delta_s }{ \Sigma'_l \bullet \Sigma'_s, \Delta'_l \uplus \Delta'_s }; \end{multlined}$ \item $\intp{t}{{\sf \alpha}}{\sigma} \neq \lightning \implies \forall \sigma' \ldotp \intp{t}{{\sf \alpha}}{\sigma \bullet \sigma'} = \{ \sigma'' \bullet \sigma' \mid \sigma'' \in \intp{t}{{\sf \alpha}}{\sigma} \}$. \end{itemize} \end{prop} The requirement on primitive commands in Proposition~\ref{prop:logic-rgaxioms} is similar to that in the definition of action judgements.
The difference is that in the RG-based proof system it is not necessary to require ${\sf \alpha}$ to preserve any view $r$ of the environment: since a predicate $P_r$ of any view $(P_r, R_r, G_r)$ in another thread is stable under $R_r$, it is also stable under $G \subseteq R_r$ whenever $(P, R, G) \mathbin{*} (P_r, R_r, G_r)$ is defined. Consequently, views of the environment are never invalidated by local transitions. Using the premise of Proposition~\ref{prop:logic-rgaxioms} in the {\sc Prim} rule makes it closer to the standard proof rule for the atomic step in Rely/Guarantee. \section{Compositionality properties of the safety relation} In this section, we formulate and prove compositionality properties of the safety relation. For further reference we restate the definition of a repartitioning implication: \begin{equation}\label{def:rimpl} p \Rrightarrow q \triangleq \forall r {.\,} \reif{p * r} \subseteq \reif{q * r}. \end{equation} \begin{lem}\label{lem:safe} The safety relation ${\sf safe}_t$ has the following closure properties: \begin{itemize} \item {\sc Frame}: $\forall t, C, p, q, r {.\,} \safe{t}{p}{C}{q} \implies \safe{t}{p * r}{C}{q * r}$; \item {\sc Choice}: $\forall t, C_1, C_2, p, q {.\,} \safe{t}{p}{C_1}{q} \land \safe{t}{p}{C_2}{q} \implies \safe{t}{p}{C_1 \mathbin{+} C_2}{q}$; \item {\sc Iter}: $\forall t, C, p {.\,} \safe{t}{p}{C}{p} \implies \safe{t}{p}{\iter{C}}{p}$; \item {\sc Seq}: $\forall t, C_1, C_2, p, p', q {.\,} \safe{t}{p}{C_1}{p'} \land \safe{t}{p'}{C_2}{q} \implies \safe{t}{p}{C_1 \mathbin{\,;\,} C_2}{q}$; \item {\sc Conseq}: $\forall t, C, p, p', q, q' {.\,} p' \Rrightarrow p \land \safe{t}{p}{C}{q} \land q \Rrightarrow q' \implies \safe{t}{p'}{C}{q'}$; \item {\sc Disj}: $\begin{multlined}[t] \forall t, C, p_1, p_2, q_1, q_2 {.\,} \safe{t}{p_1}{C}{q_1} \land \safe{t}{p_2}{C}{q_2} \implies{}\\ \safe{t}{p_1 \lor p_2}{C}{q_1 \lor q_2}.\end{multlined}$ \end{itemize} \end{lem} We prove all of the properties by coinduction.
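Each coinductive proof below follows the same pattern: exhibit a post-fixed point $X \subseteq F_t(X)$ and conclude $X \subseteq \gfp{F_t}$. On a finite powerset lattice both ingredients can be checked directly; the following Python sketch uses a toy successor relation as a stand-in for the actual domain of safety triples (all names here are illustrative, not the paper's $F_t$):

```python
# Toy illustration of the coinductive proof pattern: on a finite powerset
# (a complete lattice), a monotone F has a greatest fixed point, reachable
# by iterating F from the top element; and any post-fixed point X <= F(X)
# is contained in gfp(F). The operator keeps an element only if its
# successor is also kept, so gfp(F) is the coinductive "can take a step
# forever" predicate -- a stand-in for the safety construction.

succ = {0: 1, 1: 2, 2: 0, 3: 4}  # state 4 has no successor

def f(s):
    """Monotone operator: keep n iff its successor is still in s."""
    return frozenset(n for n in s if succ.get(n) in s)

def gfp(f, top):
    """Greatest fixed point by iterating F downward from the top element."""
    x = frozenset(top)
    while f(x) != x:
        x = f(x)
    return x

top = frozenset({0, 1, 2, 3, 4})
print(sorted(gfp(f, top)))         # only the cycle 0 -> 1 -> 2 -> 0 survives

# Coinduction principle: a post-fixed point is contained in the gfp.
candidate = frozenset({0, 1, 2})
assert candidate <= f(candidate)   # X <= F(X)
assert candidate <= gfp(f, top)    # hence X <= gfp(F)
```

In the proofs below the candidate $X$ is, for instance, $\fbin{{\sf safe}_t}{r}$, and showing $X \subseteq F_t(X)$ is exactly the case analysis on transitions carried out for each closure property.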
To this end, we take the fixed-point definition of ${\sf safe}_t$. We consider \[ F_t : \powerset{{\sf Views} \times {\sf Com} \times {\sf Views}} \to \powerset{{\sf Views} \times {\sf Com} \times {\sf Views}} \] defined as follows: \[ \begin{array}{rcl} F_t(X) & \triangleq & \dc{(p, C, q) \mid \forall C', {\sf \alpha} {.\,} \trans{C}{{\sf \alpha}}{C'} \implies \exists p' {.\,} \sat{t}{{\sf \alpha}}{p}{p'} \land (p', C', q) \in X} \\ & & \hfill {} \cup \dc{(p, {\sf skip}, q) \mid p \Rrightarrow q} \end{array} \] Note that the powerset of ${\sf Views} \times {\sf Com} \times {\sf Views}$ ordered by inclusion is a complete lattice, and $F_t$ is easily seen to be a monotone mapping on it. Consequently, by the Knaster--Tarski fixed-point theorem, $F_t$ has a greatest fixed point. It is easy to see that taking ${\sf safe}_t \triangleq \gfp{F_t}$ agrees with Definition~\ref{def:safety}. In the proof of Lemma~\ref{lem:safe} we use the following properties of the action judgement and the $\Rrightarrow$ relation. \begin{prop}[Locality]\label{prop:axiomlocal} \[ \begin{array}{l} \forall p, q, r {.\,} p \Rrightarrow q \implies p * r \Rrightarrow q * r;\\ \forall t, {\sf \alpha}, p, q, r {.\,} \sat{t}{{\sf \alpha}}{p}{q} \implies \sat{t}{{\sf \alpha}}{p * r}{q * r}. \end{array} \] \end{prop} \begin{prop}[Consequence]\label{prop:axiomconseq} \[ \forall t, {\sf \alpha}, p, q, p', q' {.\,} p' \Rrightarrow p \land \sat{t}{{\sf \alpha}}{p}{q} \land q \Rrightarrow q' \implies \sat{t}{{\sf \alpha}}{p'}{q'}. \] \end{prop} The proofs of Propositions~\ref{prop:axiomlocal} and \ref{prop:axiomconseq} are straightforward: both properties can be easily checked after unfolding the definitions of action judgements. \begin{prop}[Distributivity]\label{prop:axiomdist} \[ \forall t, {\sf \alpha}, p_1, p_2, q_1, q_2 {.\,} \sat{t}{{\sf \alpha}}{p_1}{q_1} \land \sat{t}{{\sf \alpha}}{p_2}{q_2} \implies \sat{t}{{\sf \alpha}}{p_1 \lor p_2}{q_1 \lor q_2}.
\] \end{prop} \begin{proof} According to the Definition~\ref{def:axiom} of the action judgement $\sat{t}{{\sf \alpha}}{p_1 \lor p_2}{q_1 \lor q_2}$, in order to prove the latter we need to demonstrate the following: \begin{multline}\label{eq:axiomdist1} \forall r, \sigma, \sigma', \Sigma, \Delta {.\,} \sigma' \in \intp{t}{{\sf \alpha}}{\sigma} \land (\sigma, \Sigma, \Delta) \in \reif{(p_1 \lor p_2) \mathbin{*} r} \implies{} \\ \exists \Sigma', \Delta' {.\,} \lptrans{\Sigma, \Delta}{\Sigma', \Delta'} \land (\sigma', \Sigma', \Delta') \in \reif{(q_1 \lor q_2) \mathbin{*} r}. \end{multline} Let us consider any view $r$, states $\sigma, \sigma', \Sigma$ and tokens $\Delta$ such that both $\sigma' \in \intp{t}{{\sf \alpha}}{\sigma}$ and $(\sigma, \Sigma, \Delta) \in \reif{(p_1 \lor p_2) \mathbin{*} r}$ hold. According to the properties of disjunction stated in equalities (\ref{eq:disj}), \[ \reif{(p_1 \lor p_2) \mathbin{*} r} = \reif{(p_1 \mathbin{*} r) \lor (p_2 \mathbin{*} r)} = \reif{p_1 \mathbin{*} r} \cup \reif{p_2 \mathbin{*} r}. \] Consequently, $(\sigma, \Sigma, \Delta) \in \reif{p_1 \mathbin{*} r} \cup \reif{p_2 \mathbin{*} r}$. Let us assume that $(\sigma, \Sigma, \Delta) \in \reif{p_1 \mathbin{*} r}$ (the other case is analogous). Then according to the action judgement $\sat{t}{{\sf \alpha}}{p_1}{q_1}$, there exist $\Sigma'$ and $\Delta'$ such that: \begin{equation}\label{eq:axiomdist2} \lptrans{\Sigma, \Delta}{\Sigma', \Delta'} \land (\sigma', \Sigma', \Delta') \in \reif{q_1 \mathbin{*} r}. \end{equation} Once again, according to the properties (\ref{eq:disj}) of disjunction: \[ \reif{q_1 \mathbin{*} r} \subseteq \reif{q_1 \mathbin{*} r} \cup \reif{q_2 \mathbin{*} r} = \reif{(q_1 \mathbin{*} r) \lor (q_2 \mathbin{*} r)} = \reif{(q_1 \lor q_2) \mathbin{*} r}, \] which together with (\ref{eq:axiomdist2}) means that $(\sigma', \Sigma', \Delta') \in \reif{(q_1 \lor q_2) \mathbin{*} r}$. 
Overall we have shown that there exist $\Sigma'$ and $\Delta'$ such that $\lptrans{\Sigma, \Delta}{\Sigma', \Delta'} $ and $(\sigma', \Sigma', \Delta') \in \reif{(q_1 \lor q_2) \mathbin{*} r}$, which concludes the proof of (\ref{eq:axiomdist1}). \end{proof} \medskip \noindent We now prove the closure properties from Lemma~\ref{lem:safe}. \mypar{Proof of {\sc Frame}} Let us define an auxiliary function: \[ \fbin{X}{r} \triangleq \dc{ (p*r, C, q*r) \mid (p, C, q) \in X}. \] Then our goal is to prove that $\fbin{{\sf safe}_t}{r} \subseteq {\sf safe}_t$. Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fbin{{\sf safe}_t}{r} \subseteq {\sf safe}_t$ holds, we demonstrate $\fbin{{\sf safe}_t}{r} \subseteq F_t(\fbin{{\sf safe}_t}{r})$. Consider any $(p', C, q') \in \fbin{{\sf safe}_t}{r}$. There necessarily are $p$ and $q$ such that $p' = p * r$, $q' = q * r$ and $(p, C, q) \in {\sf safe}_t$. Let us assume that $C = {\sf skip}$. Then $(p, C, q) \in {\sf safe}_t$ implies that $p \Rrightarrow q$. By Proposition~\ref{prop:axiomlocal}, $p * r \Rrightarrow q * r$, which is sufficient for $(p*r, {\sf skip}, q*r) \in F_t(\fbin{{\sf safe}_t}{r})$ to hold. Now let $C \not= {\sf skip}$. Since $(p, C, q) \in {\sf safe}_t$, by definition of the safety relation the following holds of every ${\sf \alpha}$, $C'$ and any transition $\trans{C}{{\sf \alpha}}{C'}$: \[ \exists p' {.\,} \sat{t}{{\sf \alpha}}{p}{p'} \land (p', C', q) \in {\sf safe}_t. \] By Proposition~\ref{prop:axiomlocal}, $\sat{t}{{\sf \alpha}}{p}{p'}$ implies $\sat{t}{{\sf \alpha}}{p*r}{p'*r}$. Also, when $(p', C', q) \in {\sf safe}_t$, it is the case that $(p'*r, C', q * r) \in \fbin{{\sf safe}_t}{r}$.
Thus, we have shown for every transition $\trans{C}{{\sf \alpha}}{C'}$ that there exists $p'' = p' * r$ such that $\sat{t}{{\sf \alpha}}{p*r}{p''}$ and $(p'', C', q*r) \in \fbin{{\sf safe}_t}{r}$: \[ \forall {\sf \alpha}, C' {.\,} \trans{C}{{\sf \alpha}}{C'} \implies \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p*r}{p''} \land (p'', C', q*r) \in \fbin{{\sf safe}_t}{r}, \] which is sufficient to conclude that $(p * r, C, q * r) \in F_t(\fbin{{\sf safe}_t}{r})$. \qed \mypar{Proof of {\sc Choice}} Let us define an auxiliary function: \[ \fbin{X}{Y} \triangleq \dc{ (p, C_1 \mathbin{+} C_2, q) \mid (p, C_1, q) \in X \land (p, C_2, q) \in Y}. \] Then our goal is to prove that $\fbin{{\sf safe}_t}{{\sf safe}_t} \subseteq {\sf safe}_t$. For convenience, we prove an equivalent inequality $\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t \subseteq {\sf safe}_t$ instead. Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t \subseteq {\sf safe}_t$ holds, we demonstrate $\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t \subseteq F_t(\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t)$. Let us consider $(p, C, q) \in {\sf safe}_t$. Since ${\sf safe}_t = \gfp{F_t}$, we know that ${\sf safe}_t = F_t({\sf safe}_t)$ holds. Then by monotonicity of $F_t$, $(p, C, q) \in F_t({\sf safe}_t) \subseteq F_t(\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t)$. Now let us consider $(p, C, q) \in \fbin{{\sf safe}_t}{{\sf safe}_t}$. There necessarily are $C_1$ and $C_2$ such that $C = C_1 \mathbin{+} C_2$, $(p, C_1, q) \in {\sf safe}_t$, and $(p, C_2, q) \in {\sf safe}_t$.
For $(p, C_1 \mathbin{+} C_2, q)$ to belong to $F_t(\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t)$, the following has to be proven for every transition $\trans{C_1 \mathbin{+} C_2}{{\sf \alpha}}{C'}$: \begin{equation}\label{eq:safechoice1} \exists p' {.\,} \sat{t}{{\sf \alpha}}{p}{p'} \land (p', C', q) \in \fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t. \end{equation} According to the rules of the operational semantics (Figure~\ref{fig:opsem}), whenever $\trans{C_1 \mathbin{+} C_2}{{\sf \alpha}}{C'}$, necessarily ${\sf \alpha} = {\sf id}$ and either $C' = C_1$ or $C' = C_2$. Let us assume that $C' = C_1$ (the other case is analogous). The action judgement $\sat{t}{{\sf id}}{p}{p}$ holds trivially. Knowing that $(p, C_1, q) \in {\sf safe}_t$, it is easy to see that (\ref{eq:safechoice1}) can be satisfied by letting $p' = p$. Consequently, $(p, C_1 \mathbin{+} C_2, q) \in F_t(\fbin{{\sf safe}_t}{{\sf safe}_t} \cup {\sf safe}_t)$, which concludes the proof. \qed \mypar{Proof of {\sc Disj}} Let $$ \fun{X} \triangleq \dc{ (p_1 \lor p_2, C, q_1 \lor q_2) \mid (p_1, C, q_1) \in X \land (p_2, C, q_2) \in X}. $$ Then our goal is to prove that $\fun{{\sf safe}_t} \subseteq {\sf safe}_t$. Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fun{{\sf safe}_t} \subseteq {\sf safe}_t$ holds, we demonstrate $\fun{{\sf safe}_t} \subseteq F_t(\fun{{\sf safe}_t})$. Let us consider $(p, C, q) \in \fun{{\sf safe}_t}$. Then there necessarily are $p_1, q_1, p_2$ and $q_2$ such that $p = p_1 \lor p_2$, $q = q_1 \lor q_2$, and $(p_1, C, q_1), (p_2, C, q_2) \in {\sf safe}_t$. From the latter we get that for any ${\sf \alpha}, C'$ and a transition $\trans{C}{{\sf \alpha}}{C'}$ the following holds: \[\begin{array}{l} \exists p'_1 {.\,} \sat{t}{{\sf \alpha}}{p_1}{p'_1} \land (p'_1, C', q_1) \in {\sf safe}_t; \\ \exists p'_2 {.\,} \sat{t}{{\sf \alpha}}{p_2}{p'_2} \land (p'_2, C', q_2) \in {\sf safe}_t. 
\end{array}\] Then it is the case that $(p'_1 \lor p'_2, C', q_1 \lor q_2) \in \fun{{\sf safe}_t}$. Moreover, $\sat{t}{{\sf \alpha}}{p_1 \lor p_2}{p'_1 \lor p'_2}$ holds by Proposition~\ref{prop:axiomdist}. Thus, we have shown for every transition $\trans{C}{{\sf \alpha}}{C'}$ that there exists $p' = p'_1 \lor p'_2$ such that $\sat{t}{{\sf \alpha}}{p_1 \lor p_2}{p'}$ and $(p', C', q_1 \lor q_2) \in \fun{{\sf safe}_t}$: \[ \forall {\sf \alpha}, C' {.\,} \trans{C}{{\sf \alpha}}{C'} \implies \exists p' {.\,} \sat{t}{{\sf \alpha}}{p_1 \lor p_2}{p'} \land (p', C', q_1 \lor q_2) \in \fun{{\sf safe}_t}, \] which is sufficient to conclude that $(p_1 \lor p_2, C, q_1 \lor q_2) \in F_t(\fun{{\sf safe}_t})$. \qed \mypar{Proof of {\sc Iter}} To do a proof by coinduction, we strengthen the {\sc Iter} property as follows: \begin{multline}\label{eq:safeiter1} \forall p, C {.\,} ((p, C, p) \in {\sf safe}_t \implies (p, \iter{C}, p) \in {\sf safe}_t) \land{} \\ (\forall p_1, C_1 {.\,} (p_1, C_1, p) \in {\sf safe}_t \land (p, C, p) \in {\sf safe}_t \implies (p_1, C_1 \mathbin{\,;\,} \iter{C}, p) \in {\sf safe}_t). \end{multline} Let us define auxiliary functions: \begin{gather*} \fun{X} \triangleq \dc{ (p, \iter{C}, p) \mid (p, C, p) \in X}\\ \funx{X} \triangleq \dc{ (p_1, C_1 \mathbin{\,;\,} \iter{C}_2, p_2) \mid (p_1, C_1, p_2), (p_2, C_2, p_2) \in X}. \end{gather*} Using them, we rewrite (\ref{eq:safeiter1}) as $\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \subseteq {\sf safe}_t$. Let $\xi = \dc{ (p, {\sf skip}, p) \mid p \in {\sf Views}}$. It is easy to see that $\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi \subseteq {\sf safe}_t$ is also an equivalent reformulation of (\ref{eq:safeiter1}), since $\xi \subseteq {\sf safe}_t$ always holds.
Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi \subseteq {\sf safe}_t$, we demonstrate $\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi \subseteq F_t(\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi)$. Consider any $(p, C, q) \in \xi$. Necessarily, $C = {\sf skip}$ and $q = p$. Note that $p \Rrightarrow p$ always holds, which by definition of $F_t$ is sufficient for $(p, {\sf skip}, p) \in F_t(\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi)$. Thus, $(p, C, q) \in F_t(\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi)$. Consider any $(p, C', q) \in \fun{{\sf safe}_t}$. Necessarily, $p = q$ and there exists a sequential command $C$ such that $C' = \iter{C}$ and $(p, C, p) \in {\sf safe}_t$. We need to show that $(p, \iter{C}, p) \in F_t(\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi)$. For the latter to hold, by definition of $F_t$ it is sufficient that for every ${\sf \alpha}$, $C''$ and a transition $\trans{C}{{\sf \alpha}}{C''}$ the following be true: \begin{equation}\label{eq:safeiter2} \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p}{p''} \land (p'', C'', p) \in \fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi. \end{equation} According to the operational semantics in Figure~\ref{fig:opsem}, when there is a transition $\trans{C}{{\sf \alpha}}{C''}$, necessarily ${\sf \alpha} = {\sf id}$ and either $C'' = {\sf skip}$ or $C'' = C \mathbin{\,;\,} \iter{C}$. Let us assume that $C'' = {\sf skip}$. Since both $(p, {\sf skip}, p) \in \xi$ and $\sat{t}{{\sf \alpha}}{p}{p}$ always hold, it is easy to see that letting $p'' = p$ satisfies (\ref{eq:safeiter2}). Now let us turn to the case when $C'' = C \mathbin{\,;\,} \iter{C}$. Note that $(p, C \mathbin{\,;\,} \iter{C}, p) \in \funx{{\sf safe}_t}$ holds by definition of $\psi$. Thus, by letting $p'' = p$ we satisfy (\ref{eq:safeiter2}). Consider $(p_1, C_0, p_2) \in \funx{{\sf safe}_t}$. 
Necessarily, there exist $C_1$ and $C_2$ such that: \begin{equation}\label{eq:safeiter3pr} C_0 = C_1 \mathbin{\,;\,} \iter{C}_2 \land (p_1, C_1, p_2) \in {\sf safe}_t \land (p_2, C_2, p_2) \in {\sf safe}_t. \end{equation} We need to show that $(p_1, C_1 \mathbin{\,;\,} \iter{C}_2, p_2) \in F_t(\fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi)$. For the latter to hold, we need to prove the following for every ${\sf \alpha}$, $C'$ and a transition $\trans{C_1 \mathbin{\,;\,} \iter{C}_2}{{\sf \alpha}}{C'}$: \begin{equation}\label{eq:safeiter3} \exists p' {.\,} \sat{t}{{\sf \alpha}}{p_1}{p'} \land (p', C', p_2) \in \fun{{\sf safe}_t} \cup \funx{{\sf safe}_t} \cup \xi. \end{equation} According to the operational semantics in Figure~\ref{fig:opsem}, when there is a transition $\trans{C_1 \mathbin{\,;\,} \iter{C}_2}{{\sf \alpha}}{C'}$, either of the following is true: \begin{itemize} \item there are $C'_1$ and a transition $\trans{C_1}{{\sf \alpha}}{C'_1}$ such that $C' = C'_1 \mathbin{\,;\,} \iter{C}_2$; \item $C_1 = {\sf skip}$, $C' = C_2$ and ${\sf \alpha} = {\sf id}$. \end{itemize} Let us assume that the former is the case. From (\ref{eq:safeiter3pr}) we know that $(p_1, C_1, p_2) \in {\sf safe}_t$, so by definition of the safety relation we get that: \[ \exists p'_1 {.\,} \sat{t}{{\sf \alpha}}{p_1}{p'_1} \land (p'_1, C'_1, p_2) \in {\sf safe}_t. \] Consequently, $(p'_1, C'_1 \mathbin{\,;\,} \iter{C}_2, p_2) \in \funx{{\sf safe}_t}$. Thus, by letting $p' = p'_1$ we can satisfy (\ref{eq:safeiter3}). Now let $C_1 = {\sf skip}$ and ${\sf \alpha} = {\sf id}$. From (\ref{eq:safeiter3pr}) we know that $(p_1, {\sf skip}, p_2) \in {\sf safe}_t$, meaning that necessarily $p_1 \Rrightarrow p_2$. It is easy to see that $p_1 \Rrightarrow p_2$ holds if and only if so does $\sat{t}{{\sf id}}{p_1}{p_2}$. Knowing that $(p_2, C_2, p_2) \in {\sf safe}_t$, we can satisfy (\ref{eq:safeiter3}) by letting $p' = p_2$. \qed \mypar{Proof of {\sc Seq}} Let $$ \fbin{X}{q'} \triangleq \dc{ (p, C_1 \mathbin{\,;\,} C_2, q) \mid (p, C_1, q') \in X \land (q', C_2, q) \in X}. $$ Then our goal is to prove that $\fbin{{\sf safe}_t}{q'} \subseteq {\sf safe}_t$. For convenience, we prove an equivalent inequality $\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t \subseteq {\sf safe}_t$ instead. Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t \subseteq {\sf safe}_t$ holds, we demonstrate $\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t \subseteq F_t(\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t)$. Let us consider any $(p, C, q) \in {\sf safe}_t$. Since ${\sf safe}_t = \gfp{F_t}$, we know that ${\sf safe}_t = F_t({\sf safe}_t)$ holds. Then by monotonicity of $F_t$, $(p, C, q) \in F_t({\sf safe}_t) \subseteq F_t(\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t)$. Now let us consider any $(p, C, q) \in \fbin{{\sf safe}_t}{q'}$. There necessarily are $C_1$ and $C_2$ such that: \begin{equation} \label{eq:safeseq1} C = C_1 \mathbin{\,;\,} C_2 \land (p, C_1, q') \in {\sf safe}_t \land (q', C_2, q) \in {\sf safe}_t. \end{equation} For $(p, C_1 \mathbin{\,;\,} C_2, q)$ to belong to $F_t(\fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t)$, the following has to be the case for every transition $\trans{C_1 \mathbin{\,;\,} C_2}{{\sf \alpha}}{C'}$: \begin{equation}\label{eq:safeseqc1} \exists p' {.\,} \sat{t}{{\sf \alpha}}{p}{p'} \land (p', C', q) \in \fbin{{\sf safe}_t}{q'} \cup {\sf safe}_t.
\end{equation} According to the rules of the operational semantics (Figure~\ref{fig:opsem}), when there is a transition $\trans{C_1 \mathbin{\,;\,} C_2}{{\sf \alpha}}{C'}$, either of the following is true: \begin{itemize} \item there exists $C'_1$ such that $C' = C'_1 \mathbin{\,;\,} C_2$ and $\trans{C_1}{{\sf \alpha}}{C'_1}$; or \item $C_1 = {\sf skip}$, ${\sf \alpha} = {\sf id}$ and $C' = C_2$. \end{itemize} Let us assume that the former is the case. From (\ref{eq:safeseq1}) we know that $(p, C_1, q') \in {\sf safe}_t$, which means that the following holds of $\trans{C_1}{{\sf \alpha}}{C'_1}$: \[ \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p}{p''} \land (p'', C'_1, q') \in {\sf safe}_t. \] When $(p'', C'_1, q') \in {\sf safe}_t$ and $(q', C_2, q) \in {\sf safe}_t$, it is the case that $(p'', C'_1 \mathbin{\,;\,} C_2, q) \in \fbin{{\sf safe}_t}{q'}$. Thus, by letting $p' = p''$ we satisfy (\ref{eq:safeseqc1}). We now consider the case when $C_1 = {\sf skip}$, ${\sf \alpha} = {\sf id}$ and $C' = C_2$. From (\ref{eq:safeseq1}) we know that $(p, {\sf skip}, q') \in {\sf safe}_t$, meaning that $p \Rrightarrow q'$, or equivalently $\sat{t}{{\sf id}}{p}{q'}$. We also know from (\ref{eq:safeseq1}) that $(q', C_2, q) \in {\sf safe}_t$. Thus, (\ref{eq:safeseqc1}) can be satisfied by letting $p' = q'$. \qed \mypar{Proof of {\sc Conseq}} Let us first show that $\safe{t}{p'}{C}{q}$ holds, when so do $p' \Rrightarrow p$ and $\safe{t}{p}{C}{q}$. When $C = {\sf skip}$, $\safe{t}{p}{C}{q}$ gives us that $p \Rrightarrow q$. It is easy to see that $p' \Rrightarrow p$ and $p \Rrightarrow q$ together imply $p' \Rrightarrow q$, which is sufficient to conclude that $\safe{t}{p'}{C}{q}$ holds. Let us assume that $C \not= {\sf skip}$.
From $\safe{t}{p}{C}{q}$ we get that the following holds of every transition $\trans{C}{{\sf \alpha}}{C'}$: \[ \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p}{p''} \land \safe{t}{p''}{C'}{q} \] Then, by applying Proposition~\ref{prop:axiomconseq} about the Consequence property of action judgements to $p' \Rrightarrow p$ and $\sat{t}{{\sf \alpha}}{p}{p''}$, we get that $\sat{t}{{\sf \alpha}}{p'}{p''}$. Together with the formula above, this allows us to conclude that $\safe{t}{p'}{C}{q}$ holds. Now let us prove that $\safe{t}{p}{C}{q'}$ holds, when so do $q \Rrightarrow q'$ and $\safe{t}{p}{C}{q}$. We define an auxiliary function: \[ \fbin{X}{q} \triangleq \dc{ (p, C, q') \mid (p, C, q) \in X \land q \Rrightarrow q'}. \] Our goal is to prove that $\fbin{{\sf safe}_t}{q} \subseteq {\sf safe}_t$. Since ${\sf safe}_t = \gfp{F_t}$, we can do a proof by coinduction: to conclude that $\fbin{{\sf safe}_t}{q} \subseteq {\sf safe}_t$ holds, we demonstrate $\fbin{{\sf safe}_t}{q} \subseteq F_t(\fbin{{\sf safe}_t}{q})$. Let us consider any $(p, C, q') \in \fbin{{\sf safe}_t}{q}$. Necessarily, $(p, C, q) \in {\sf safe}_t$ and $q \Rrightarrow q'$. When $C = {\sf skip}$, we need to show that $p \Rrightarrow q'$. Since $(p, {\sf skip}, q) \in {\sf safe}_t$, it is the case that $p \Rrightarrow q$. It is easy to see that $p \Rrightarrow q$ and $q \Rrightarrow q'$ together imply $p \Rrightarrow q'$, which is sufficient to conclude that $(p, C, q') \in F_t(\fbin{{\sf safe}_t}{q})$. Now consider the case when $C \not= {\sf skip}$.
Since $(p, C, q) \in {\sf safe}_t$, by definition of the safety relation the following holds of every ${\sf \alpha}$, $C'$ and a transition $\trans{C}{{\sf \alpha}}{C'}$: \[ \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p}{p''} \land (p'', C', q) \in {\sf safe}_t \] Knowing that $q \Rrightarrow q'$ and $(p'', C', q) \in {\sf safe}_t$, it is easy to see that $(p'', C', q') \in \fbin{{\sf safe}_t}{q}$. Thus, we have shown that: \[ \forall {\sf \alpha}, C' {.\,} \trans{C}{{\sf \alpha}}{C'} \implies \exists p'' {.\,} \sat{t}{{\sf \alpha}}{p}{p''} \land (p'', C', q') \in \fbin{{\sf safe}_t}{q}, \] which is sufficient for $(p, C, q') \in F_t(\fbin{{\sf safe}_t}{q})$ to hold. \qed \section{Proof of Lemma~\ref{lem:logic2safety}} \mypar{Lemma} $\forall t, \mathcal{P}, C, \mathcal{Q} {.\,} \infer{t}{\mathcal{P}}{C}{\mathcal{Q}} \implies \forall {\bf i} {.\,} \safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{C}{\evalf{\mathcal{Q}}{{\bf i}}}.$ We prove Lemma~\ref{lem:logic2safety} by rule induction. For that we choose an arbitrary thread identifier $t$ and demonstrate that $\forall {\bf i} {.\,} \safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{C}{\evalf{\mathcal{Q}}{{\bf i}}}$ is closed under the proof rules from Figure~\ref{fig:proofrules}. The cases of {\sc Choice}, {\sc Iter}, {\sc Seq}, {\sc Conseq}, {\sc Frame} and {\sc Disj} rules are straightforward: they trivially follow from Lemma~\ref{lem:safe} after using the properties of $\evalf{-}{{\bf i}}$ from Figure~\ref{fig:sem4frm}. The {\sc Ex} rule uses the fact that ${\sf Val}$, which is the range of $i$, is finite, which makes it possible to prove it just like the {\sc Disj} rule. It remains to consider the {\sc Prim} rule to conclude Lemma~\ref{lem:logic2safety}. Let us assume that $\forall {\bf i}' {.\,} \sat{t}{{\sf \alpha}}{\evalf{\mathcal{P}}{{\bf i}'}}{\evalf{\mathcal{Q}}{{\bf i}'}}$ holds. We need to demonstrate that so does $\forall {\bf i} {.\,} \safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{{\sf \alpha}}{\evalf{\mathcal{Q}}{{\bf i}}}$.
To conclude that the latter holds, according to Definition~\ref{def:safety} we need to prove the following for every ${\bf i}$: \begin{equation}\label{eq:l2s-prim} \forall C', {\sf \alpha}' {.\,} \trans{{\sf \alpha}}{{\sf \alpha}'}{C'} \implies \exists p' {.\,} \sat{t}{{\sf \alpha}'}{\evalf{\mathcal{P}}{{\bf i}}}{p'} \land \safe{t}{p'}{C'}{\evalf{\mathcal{Q}}{{\bf i}}} \end{equation} According to the operational semantics from Figure~\ref{fig:opsem}, the only transition from a command ${\sf \alpha}$ is $\trans{{\sf \alpha}}{{\sf \alpha}}{{\sf skip}}$. Thus, in the formula above ${\sf \alpha}' = {\sf \alpha}$ and $C' = {\sf skip}$. Note that $\safe{t}{\evalf{\mathcal{Q}}{{\bf i}}}{{\sf skip}}{\evalf{\mathcal{Q}}{{\bf i}}}$ holds trivially. Additionally, by our assumption, $\sat{t}{{\sf \alpha}}{\evalf{\mathcal{P}}{{\bf i}'}}{\evalf{\mathcal{Q}}{{\bf i}'}}$ holds for any ${\bf i}'$. Consequently, it holds for ${\bf i}' = {\bf i}$. We conclude that by letting $p' = \evalf{\mathcal{Q}}{{\bf i}}$ we satisfy (\ref{eq:l2s-prim}). \qed \section{Proof of Theorem~\ref{thm:lin}} We further refer to the assumptions of Theorem~\ref{thm:lin} as a relation ${\sf safelib}(\mathrm{\ell}, \mathcal{L}, \mathcal{P}, \mathcal{Q})$ defined as follows.
\begin{dfn}\label{def:safelib} Given a concrete library $\mathrm{\ell}$, an abstract library $\mathcal{L}$ and $\mathcal{P}, \mathcal{Q} : {\sf ThreadID} \times {\sf APCom} \to {\sf VAssn}$, we say that a relation ${\sf safelib}(\mathrm{\ell}, \mathcal{L}, \mathcal{P}, \mathcal{Q})$ holds if and only if the following requirements are met: \begin{enumerate} \item ${\sf dom}(\mathrm{\ell}) = {\sf dom}(\mathcal{L})$; \item $\forall {\bf i}, t, {\rm A}, \sigma, \Sigma, \Delta, r {.\,} (\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{P}(t, {\rm A})}{{\bf i}} \mathbin{*} r} \implies \Delta(t) = \todo{{\rm A}}$; \item $\forall {\bf i}, t, {\rm A}, \sigma, \Sigma, \Delta, r {.\,} (\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{Q}(t, {\rm A})}{{\bf i}} \mathbin{*} r} \implies \Delta(t) = \done{{\rm A}}$; \item $\begin{multlined}[t] \forall {\bf i}, t, {\rm A}, {\rm A}', r, \sigma, \Sigma, \Delta {.\,} ((\sigma, \Sigma, \Delta [t : \todo{{\rm A}}]) \in \reif{\evalf{\mathcal{P}(t, {\rm A})}{{\bf i}} \mathbin{*} r} \iff{}\\ (\sigma, \Sigma, \Delta [t : \done{{\rm A}'}]) \in \reif{\evalf{\mathcal{Q}(t, {\rm A}')}{{\bf i}} \mathbin{*} r}); \end{multlined}$ \item $\begin{multlined}[t] \forall m, a, r, t {.\,} m \in {\sf dom}(\mathrm{\ell}) \land a, r \in {\sf Val} \land t \in {\sf ThreadID} \implies{} \\ \infer{t}{ \mathcal{P}(t, \mathcal{L}(m, a, r))}{ \mathrm{\ell}(m, a, r) }{ \mathcal{Q}(t, \mathcal{L}(m, a, r)) }. \end{multlined}$ \end{enumerate} \end{dfn} To strengthen the statement of Theorem~\ref{thm:lin} as necessary for its proof, we define an auxiliary relation, a {\em thread pool invariant}. With this relation we establish a correspondence between the information about the linearization point of a thread $t$ recorded in a given view $\vV{t}$ and the sequential commands of that thread in a concrete thread pool $\tau$ and an abstract thread pool $\mathcal{T}$.
\begin{dfn}\label{def:tpinv} Given a concrete library $\mathrm{\ell}$, an abstract library $\mathcal{L}$, predicates $\mathcal{P}, \mathcal{Q} : {\sf ThreadID} \times {\sf APCom} \to {\sf VAssn}$, a concrete thread pool $\tau$, an abstract thread pool $\mathcal{T}$, a view $\vV{t}$, an interpretation of logical variables ${\bf i}$ and a map $\Delta$ from threads to tokens, we say that a thread pool invariant ${\sf inv}_t({\bf i}, \tau, \mathcal{T}, \vV{t}, \Delta)$ holds in a thread $t$ if and only if one of the following requirements is met: \begin{itemize} \item $\tau(t) = {\sf idle}$, $\mathcal{T}(t) = {\sf idle}$ and $\vV{t} \Rrightarrow \evalf{\mathcal{Q}(t, \_)}{{\bf i}}$, or \item there exist $C, r, m, a$ such that $\tau(t) = (C, r)$ and the following holds: \begin{multline*} \safe{t}{\vV{t}}{C}{\mathcal{Q}(t, \mathcal{L}(m, a, r))} \land ( (\Delta(t) = \todo{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = (\mathcal{L}(m, a, r), r) ) \lor{} \\ (\Delta(t) = \done{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = ({\sf skip}, r) ) ). \end{multline*} \end{itemize} \end{dfn} Finally, analogously to Definition~\ref{def:hsem}, we formally define the set of histories of abstract libraries.
\begin{dfn}\label{def:ahsem} We define $\hsemn{n}{\mathcal{L}, \mathcal{T}, \Sigma}$ as the set of histories such that $\hsemn{0}{\mathcal{L}, \mathcal{T}, \Sigma} \triangleq \dc{ \varepsilon }$ and: \[ \begin{array}{rcl} \hsemn{n}{\mathcal{L}, \mathcal{T}, \Sigma} & \triangleq & \{((t, \call{m}{a}) ::h) \mid m \in {\sf dom}(\mathcal{L}) \land \mathcal{T}(t) = {\sf idle} \land{} \\ & & \hfill \exists r {.\,} h \in \hsemn{n-1}{\mathcal{L}, \mathcal{T}[t : (\mathcal{L}(m, a, r), r)], \Sigma}\} \\ & & {} \cup \{h \mid \exists t, {\rm A}, \Sigma', r {.\,} \mathcal{T}(t) = ({\rm A}, r) \land \Sigma' \in \intp{t}{{\rm A}}{\Sigma} \land{} \\ & & \hfill h \in \hsemn{n-1}{\mathcal{L}, \mathcal{T}[t : ({\sf skip}, r)], \Sigma'}\} \\ & & {} \cup \{((t, \ret{m}{r}) ::h) \mid m \in {\sf dom}(\mathcal{L}) \land \mathcal{T}(t) = ({\sf skip}, r) \land{} \\ & & \hfill h \in \hsemn{n-1}{\mathcal{L}, \mathcal{T}[t : {\sf idle}], \Sigma}\} \end{array} \] We let $\hsem{\mathcal{L}, \Sigma} = \bigcup_{n \geq 0} \hsemn{n}{\mathcal{L}, (\lambda t {.\,} {\sf idle}), \Sigma}$ denote the set of all possible histories of a library $\mathcal{L}$ that start from a state $\Sigma$. \end{dfn} We are now ready to prove Theorem~\ref{thm:lin}. \mypar{Proof} Let us consider any $\mathrm{\ell}, \mathcal{L}, \mathcal{P}, \mathcal{Q}$ such that ${\sf safelib}(\mathrm{\ell}, \mathcal{L}, \mathcal{P}, \mathcal{Q})$ holds. Let us explain how we strengthen the statement of the theorem in this proof. We prove that $\forall n {.\,} \phi(n)$ holds with $\phi(n)$ formulated as follows: \begin{multline}\label{eq:thmphi} \phi(n) = \forall {\bf i}, \sigma, \Sigma, \Delta, \tau, \mathcal{T} {.\,} (\exists \vV{1}, \dots, \vV{N} {.\,} \tpinvall{{\bf i}}{\tau}{\mathcal{T}}{\vV{k}}{\Delta} \land{} \\ (\sigma, \Sigma, \Delta) \in \reif{\circledast_{t \in {\sf ThreadID}} \vV{t}}) \implies \hsemn{n}{\mathrm{\ell}, \tau, \sigma} \subseteq \hsem{\mathcal{L}, \mathcal{T}, \Sigma}.
\end{multline} \noindent Note that according to the semantics of the assertion language ${\sf Assn}$ (Figure~\ref{fig:sem4frm}): \[ \evalf{\circledast_{k \in {\sf ThreadID}} (\exists {\rm A} {.\,} \mathcal{Q}(k, {\rm A}))}{{\bf i}} = \circledast_{k \in {\sf ThreadID}} \evalf{(\exists {\rm A} {.\,} \mathcal{Q}(k, {\rm A}))}{{\bf i}}. \] With that in mind, it is easy to see that letting $\vV{k} = \evalf{(\exists {\rm A} {.\,} \mathcal{Q}(k, {\rm A}))}{{\bf i}}$ for all $k \in {\sf ThreadID}$, $\tau = (\lambda t {.\,} {\sf idle})$ and $\mathcal{T} = (\lambda t {.\,} {\sf idle})$ in (\ref{eq:thmphi}) yields the formula: \begin{multline*} (\forall {\bf i}, \sigma, \Sigma, \Delta {.\,} (\sigma, \Sigma, \Delta) \in \reif{\evalf{\circledast_{k \in {\sf ThreadID}} (\exists {\rm A} {.\,} \mathcal{Q}(k, {\rm A}))}{{\bf i}}} \implies{} \\ \bigcup_{n \geq 0} \hsemn{n}{\mathrm{\ell}, \lambda t {.\,} {\sf idle}, \sigma} \subseteq \hsem{\mathcal{L}, \lambda t {.\,} {\sf idle}, \Sigma}), \end{multline*} which coincides with the statement of the theorem. We prove $\forall n {.\,} \phi(n)$ by induction on $n$. Let us take any ${\bf i}, \sigma, \Sigma, \Delta, \tau$ and $\mathcal{T}$, and consider $\vV{1}, \dots, \vV{N}$ such that the premises of $\phi(n)$ hold: \begin{equation}\label{eq:thmpremiss} \tpinvall{{\bf i}}{\tau}{\mathcal{T}}{\vV{k}}{\Delta} \land (\sigma,\Sigma, \Delta) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}} \end{equation} We need to demonstrate that every history $h$ of the concrete library $\mathrm{\ell}$ from the set $\hsemn{n}{\mathrm{\ell}, \tau, \sigma}$ is also a history of the abstract library $\mathcal{L}$: $h \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$. By Definition~\ref{def:hsem} of $\hsemn{n}{\mathrm{\ell}, \tau, \sigma}$, if $n = 0$, then $h$ is an empty history that is trivially present in $\hsem{\mathcal{L}, \mathcal{T}, \Sigma}$. Let us now consider $n > 0$ and assume that $\phi(n-1)$ holds.
By definition of $\hsemn{n}{\mathrm{\ell}, \tau, \sigma}$, $h$ arises from one of three kinds of steps in a thread $t$: a call of a method $m$ with an argument $a$, a return from a method $m$ with a return value $r$, or a transition of the command executing in $t$. We consider each case separately. \mypar{Case \#1} There is a history $h'$, a thread $t$, a method $m \in {\sf dom}(\mathrm{\ell})$, its argument $a$ and a return value $r$ such that $h = (t, \call{m}{a}) :: h'$, $\tau(t) = {\sf idle}$ and $h' \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (\mathrm{\ell}(m, a, r), r)], \sigma}$. By Definition~\ref{def:ahsem}, to conclude that $h = (t, \call{m}{a}) :: h' \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$ it is necessary to show that $\mathcal{T}(t) = {\sf idle}$ and $h' \in \hsem{\mathcal{L}, \mathcal{T}[t : (\mathcal{L}(m, a, r), r)], \Sigma}$, which we do in the remainder of the proof of Case~\#1. According to (\ref{eq:thmpremiss}), ${\sf inv}_t({\bf i}, \tau, \mathcal{T}, \vV{t}, \Delta)$ and $(\sigma,\Sigma, \Delta) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}}$ hold. Since $\tau(t) = {\sf idle}$, only the first case of the thread pool invariant applies, so necessarily $\mathcal{T}(t) = {\sf idle}$ and $\vV{t} \Rrightarrow \evalf{\mathcal{Q}(t, \_)}{{\bf i}}$. By Definition~\ref{def:rimpl} of $\vV{t} \Rrightarrow \evalf{\mathcal{Q}(t, \_)}{{\bf i}}$, $(\sigma,\Sigma, \Delta) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}}$ implies $(\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{Q}(t, \_)}{{\bf i}} \mathbin{*} \circledast_{k \in {\sf ThreadID} \setminus \dc{t}} \vV{k}}$.
From the requirements on the predicates $\mathcal{P}$ and $\mathcal{Q}$ in ${\sf safelib}(\mathrm{\ell}, \mathcal{L}, \mathcal{P}, \mathcal{Q})$ we obtain that the following holds of $(\sigma, \Sigma, \Delta)$: \begin{itemize} \item $\Delta(t) = \done{\_}$, and \item $(\sigma, \Sigma, \Delta [t : \todo{\mathcal{L}(m, a, r)}]) \in \reif{\evalf{\mathcal{P}(t, \mathcal{L}(m, a, r))}{{\bf i}} \mathbin{*} \circledast_{k \in {\sf ThreadID} \setminus \dc{t}} \vV{k}}$. \end{itemize} Let $\vV{t}' = \evalf{\mathcal{P}(t, \mathcal{L}(m, a, r))}{{\bf i}}$ and $\vV{k}' = \vV{k}$ for all $k \not= t$. Obviously, $(\sigma, \Sigma, \Delta [t : \todo{\mathcal{L}(m, a, r)}]) \in \reif{\circledast_{k} \vV{k}'}$. Also, by Lemma~\ref{lem:logic2safety}, $\safe{t}{\evalf{\mathcal{P}(t, \mathcal{L}(m, a, r))}{{\bf i}}}{\mathrm{\ell}(m, a, r)}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$ holds. This allows us to conclude that in a thread $t$ a thread pool invariant ${\sf inv}_t({\bf i}, \tau[t : (\mathrm{\ell}(m, a, r), r)], \mathcal{T}[t : (\mathcal{L}(m, a, r), r)], \vV{t}', \Delta[t : \todo{\mathcal{L}(m, a, r)}])$ holds. Moreover, according to (\ref{eq:thmpremiss}), thread pool invariants hold in all other threads as well. We have shown that there exist $\vV{1}', \dots, \vV{N}'$ such that: \begin{multline*} (\forall k {.\,} {\sf inv}_k({\bf i}, \tau[t : (\mathrm{\ell}(m, a, r), r)], \mathcal{T}[t : (\mathcal{L}(m, a, r), r)], \vV{k}', \Delta[t : \todo{\mathcal{L}(m, a, r)}])) \land{} \\ (\sigma, \Sigma, \Delta[t : \todo{\mathcal{L}(m, a, r)}]) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}'}, \end{multline*} which by the induction hypothesis $\phi(n-1)$ implies that $h' \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (\mathrm{\ell}(m, a, r), r)], \sigma} \subseteq \hsem{\mathcal{L}, \mathcal{T}[t : (\mathcal{L}(m, a, r), r)], \Sigma}$. We have also shown that $\mathcal{T}(t) = {\sf idle}$.
By Definition~\ref{def:ahsem}, $h = (t, \call{m}{a}) :: h' \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$, which concludes the proof of Case~\#1. \mypar{Case \#2} There is a history $h'$, a thread $t$, a method $m \in {\sf dom}(\mathrm{\ell})$, its argument $a$ and a return value $r$ such that $h = (t, \ret{m}{r}) :: h'$, $\tau(t) = ({\sf skip}, r)$ and $h' \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : {\sf idle}], \sigma}$. By Definition~\ref{def:ahsem}, to conclude that $h = (t, \ret{m}{r}) :: h' \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$ it is necessary to show that $\mathcal{T}(t) = ({\sf skip}, r)$ and $h' \in \hsem{\mathcal{L}, \mathcal{T}[t : {\sf idle}], \Sigma}$, which we do in the remainder of the proof of Case~\#2. According to (\ref{eq:thmpremiss}), the thread pool invariant ${\sf inv}_t({\bf i}, \tau, \mathcal{T}, \vV{t}, \Delta)$ holds. Then the following is true: \begin{multline}\label{eq:thm2} \safe{t}{\vV{t}}{{\sf skip}}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}} \land ( (\Delta(t) = \todo{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = (\mathcal{L}(m, a, r), r) ) \lor{} \\ (\Delta(t) = \done{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = ({\sf skip}, r) ) ). \end{multline} By Definition~\ref{def:safety} of $\safe{t}{\vV{t}}{{\sf skip}}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$, $\vV{t} \Rrightarrow \evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}$ holds. Consequently, by Definition~\ref{def:rimpl}: \[ (\sigma, \Sigma, \Delta) \in \reif{\vV{t} \mathbin{*} \circledast_{k \in {\sf ThreadID} \setminus \dc{t}} \vV{k}} \subseteq \reif{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}} \mathbin{*} \circledast_{k \in {\sf ThreadID} \setminus \dc{t}} \vV{k}}. \] From the third requirement on $\mathcal{Q}$ in Definition~\ref{def:safelib}: \[ (\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}} \mathbin{*} \circledast_k \vV{k}} \implies \Delta(t) = \done{\mathcal{L}(m, a, r)}.
\] Consequently, from (\ref{eq:thm2}) we get that $\Delta(t) = \done{\mathcal{L}(m, a, r)}$ and $\mathcal{T}(t) = ({\sf skip}, r)$. Let $\vV{t}' = \evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}$ and $\vV{k}' = \vV{k}$ for $k \not= t$. It is easy to see that ${\sf inv}_t({\bf i}, \tau[t : {\sf idle}], \mathcal{T}[t : {\sf idle}], \vV{t}', \Delta)$ holds by Definition~\ref{def:tpinv}. Moreover, according to (\ref{eq:thmpremiss}), thread pool invariants hold in other threads as well. We have shown that there exist $\vV{1}', \dots, \vV{N}'$ such that: \[ (\forall k {.\,} {\sf inv}_k({\bf i}, \tau[t : {\sf idle}], \mathcal{T}[t : {\sf idle}], \vV{k}', \Delta)) \land (\sigma, \Sigma, \Delta) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}'}, \] which by the induction hypothesis $\phi(n-1)$ implies that $h' \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : {\sf idle}], \sigma} \subseteq \hsem{\mathcal{L}, \mathcal{T}[t : {\sf idle}], \Sigma}$. We have also shown that $\mathcal{T}(t) = ({\sf skip}, r)$. By Definition~\ref{def:ahsem}, $h = (t, \ret{m}{r}) :: h' \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$, which concludes the proof of Case~\#2. \mypar{Case \#3} There is a thread $t$, sequential commands $C$ and $C'$, a primitive command ${\sf \alpha}$, concrete states $\sigma$ and $\sigma'$ and a return value $r$ such that $\tau(t) = (C, r)$, $\sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'}$ and $h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (C', r)], \sigma'}$. According to (\ref{eq:thmpremiss}), ${\sf inv}_t({\bf i}, \tau, \mathcal{T}, \vV{t}, \Delta)$ holds. Consequently, there exist a method $m$ and an argument $a$ such that: \begin{multline}\label{eq:thm3} \safe{t}{\vV{t}}{C}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}} \land{}\\ ( (\Delta(t) = \todo{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = (\mathcal{L}(m, a, r), r) ) \lor{} \\ (\Delta(t) = \done{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = ({\sf skip}, r) ) ).
\end{multline} It is easy to see that whenever there is a transition $\sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'}$, there is also a stateless transition $\trans{C}{{\sf \alpha}}{C'}$. By Definition~\ref{def:safety} of $\safe{t}{\vV{t}}{C}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$, if $\trans{C}{{\sf \alpha}}{C'}$, then there exists a view $\vV{t}'$ such that $\sat{t}{{\sf \alpha}}{\vV{t}}{\vV{t}'}$ and $\safe{t}{\vV{t}'}{C'}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$. Let $\vV{k}' = \vV{k}$ for any $k \not= t$. By Definition~\ref{def:axiom} of the action judgement $\sat{t}{{\sf \alpha}}{\vV{t}}{\vV{t}'}$, for $(\sigma, \Sigma, \Delta) \in \reif{\vV{t}\mathbin{*}\circledast_{k \not= t} \vV{k}}$ and any $\sigma' \in \intp{t}{{\sf \alpha}}{\sigma}$, there exist $\Sigma', \Delta'$ such that: \begin{equation}\label{eq:thm32} \lptrans{\Sigma, \Delta}{\Sigma', \Delta'} \land (\sigma', \Sigma', \Delta') \in \reif{\vV{t}' \mathbin{*} \circledast_{k \not= t} \vV{k}}. \end{equation} Let us first consider the case when $\Delta = \Delta'$. Note that $\safe{t}{\vV{t}'}{C'}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$ holds, and according to (\ref{eq:thm3}) the following holds too: \begin{multline*} (\Delta(t) = \todo{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = (\mathcal{L}(m, a, r), r) ) \lor{} \\ (\Delta(t) = \done{\mathcal{L}(m, a, r)} \land \mathcal{T}(t) = ({\sf skip}, r) ) \end{multline*} Thus, it is easy to see that ${\sf inv}_t({\bf i}, \tau[t : (C', r)], \mathcal{T}, \vV{t}', \Delta)$ holds.
Combining this observation with (\ref{eq:thm32}), we conclude that we have demonstrated the existence of $\vV{1}', \dots, \vV{N}'$ such that: \[ (\forall k {.\,} {\sf inv}_k({\bf i}, \tau[t : (C', r)], \mathcal{T}, \vV{k}', \Delta)) \land (\sigma', \Sigma', \Delta) \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}'}, \] which by the induction hypothesis $\phi(n-1)$ implies that $h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (C', r)], \sigma'} \subseteq \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$. This concludes the proof of the case when $\Delta = \Delta'$. We now turn to the case when $\Delta \not= \Delta'$. According to (\ref{eq:thm32}), $\lptrans{\Sigma, \Delta}{\Sigma', \Delta'}$ holds, meaning that linearization points of one or more threads have been passed. Without loss of generality, we assume the case of exactly one linearization point, i.e. that $\lp{\Sigma, \Delta}{\Sigma', \Delta'}$ holds. Consequently, according to Definition~\ref{def:axiom} there exist $t'$ and ${\rm A}'$ such that: \begin{equation}\label{eq:thm33} \Sigma' \in \intp{t'}{{\rm A}'}{\Sigma} \land \Delta(t') = \todo{{\rm A}'} \land \Delta' = \Delta[t' : \done{{\rm A}'}] \end{equation} Let us consider the thread pool invariant ${\sf inv}_{t'}({\bf i}, \tau, \mathcal{T}, \vV{t'}, \Delta)$, which holds according to (\ref{eq:thmpremiss}). We show that $\tau(t') \neq {\sf idle}$. From (\ref{eq:thm33}) we know that $\Delta(t') = \todo{{\rm A}'}$.
Since, were $\tau(t')$ equal to ${\sf idle}$, the third requirement on $\mathcal{Q}$ in Definition~\ref{def:safelib} would force $\Delta(t') = \done{\_}$, contradicting $\Delta(t') = \todo{{\rm A}'}$, by Definition~\ref{def:tpinv} it can only be the case that there exist $C'', m', a', r'$ such that $\tau(t') = (C'', r')$ and the following is true: \begin{multline}\label{eq:thm34} \safe{t'}{\vV{t'}}{C''}{\evalf{\mathcal{Q}(t', \mathcal{L}(m', a', r'))}{{\bf i}}} \land{}\\ ( (\Delta(t') = \todo{\mathcal{L}(m', a', r')} \land \mathcal{T}(t') = (\mathcal{L}(m', a', r'), r') ) \lor{} \\ (\Delta(t') = \done{\mathcal{L}(m', a', r')} \land \mathcal{T}(t') = ({\sf skip}, r') ) ). \end{multline} From (\ref{eq:thm33}) and (\ref{eq:thm34}) we know that ${\rm A}' = \mathcal{L}(m', a', r')$. Consequently, $\Delta'(t') = \done{\mathcal{L}(m', a', r')} \land \mathcal{T}[t' : ({\sf skip}, r')](t') = ({\sf skip}, r')$ holds, which allows us to conclude the thread pool invariant ${\sf inv}_{t'}({\bf i}, \tau[t : (C', r)], \mathcal{T}[t' : ({\sf skip}, r')], \vV{t'}', \Delta')$ in the case $t' \neq t$. We now show that ${\sf inv}_{t}({\bf i}, \tau[t : (C', r)], \mathcal{T}[t' : ({\sf skip}, r')], \vV{t}', \Delta')$ holds, both when $t = t'$ and $t \neq t'$. Let us first assume $t = t'$ (so that $r = r'$). Then from (\ref{eq:thm3}) and (\ref{eq:thm33}) we get that ${\rm A}' = \mathcal{L}(m, a, r)$ and $\Delta'(t) = \done{\mathcal{L}(m, a, r)}$ hold. When $t \neq t'$, no abstract transition is made in $t$, so $\Delta(t) = \Delta'(t)$ and $\mathcal{T}(t) = \mathcal{T}[t' : ({\sf skip}, r')](t)$. Consequently, the following is true in both cases: \begin{multline} ( (\Delta'(t) = \todo{\mathcal{L}(m, a, r)} \land \mathcal{T}[t' : ({\sf skip}, r')](t) = (\mathcal{L}(m, a, r), r) ) \lor{} \\ (\Delta'(t) = \done{\mathcal{L}(m, a, r)} \land \mathcal{T}[t' : ({\sf skip}, r')](t) = ({\sf skip}, r) ) ).
\end{multline} Together with $\safe{t}{\vV{t}'}{C'}{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, r))}{{\bf i}}}$, those observations imply ${\sf inv}_t({\bf i}, \tau[t : (C', r)], \mathcal{T}[t' : ({\sf skip}, r')], \vV{t}', \Delta')$. Combining these observations with $(\sigma', \Sigma', \Delta') \in \reif{\vV{t}' \mathbin{*} \circledast_{k \not= t} \vV{k}}$ following from (\ref{eq:thm32}), we conclude that we have demonstrated the existence of $\vV{1}', \dots, \vV{N}'$ such that: \[ (\forall k {.\,} {\sf inv}_k({\bf i}, \tau[t : (C', r)], \mathcal{T}[t' : ({\sf skip}, r')], \vV{k}', \Delta')) \land (\sigma', \Sigma', \Delta') \in \reif{\circledast_{k \in {\sf ThreadID}} \vV{k}'}, \] which by the induction hypothesis $\phi(n-1)$ implies that $h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (C', r)], \sigma'} \subseteq \hsem{\mathcal{L}, \mathcal{T}[t' : ({\sf skip}, r')], \Sigma'}$. Now that we have demonstrated that $\mathcal{T}(t') = (\mathcal{L}(m', a', r'), r')$, $\Sigma' \in \intp{t'}{\mathcal{L}(m', a', r')}{\Sigma}$ and $h \in \hsem{\mathcal{L}, \mathcal{T}[t' : ({\sf skip}, r')], \Sigma'}$ all hold, by Definition~\ref{def:ahsem} we can conclude that $h \in \hsem{\mathcal{L}, \mathcal{T}, \Sigma}$. \qed \section{Related Work} \label{sec:related} There has been a significant amount of research on methods for proving linearizability. Due to space constraints, we do not attempt a comprehensive survey here (see~\cite{DongolD14}) and only describe the most closely related work. The existing logics for linearizability that use linearization points differ in the thread-modular reasoning method used and, hence, in the range of concurrent algorithms that they can handle. Our goal in this paper was to propose a uniform basis for designing such logics and to formalise the method they use for reasoning about linearizability in a way independent of the particular thread-modular reasoning method used.
We have only shown instantiations of our logic based on disjoint concurrent separation logic~\cite{csl} and RGSep~\cite{rgsep}. However, we expect that our logic can also be instantiated with more complex thread-modular reasoning methods, such as those based on concurrent abstract predicates~\cite{CAP} or islands and protocols~\cite{turon-popl13}. Our notion of tokens is based on the idea of treating method specifications as resources when proving atomicity, which has appeared in various guises in several logics~\cite{rgsep,rgsim-lin,tada}. Our contribution is to formalise this method of handling linearization points independently from the underlying thread-modular reasoning method and to formulate the conditions for soundly combining the two (Definition~\ref{def:axiom}, \S\ref{sec:logic}). We have presented a logic that unifies the various logics based on linearization points with helping. However, much work still remains, as this reasoning method cannot handle all algorithms. Some logics have introduced {\em speculative} linearization points to increase their applicability~\cite{turon-popl13,rgsim-lin}; our approach to helping is closely related to this, and we hope it could be extended to speculation. But there are still examples beyond this form of reasoning: for instance, there are no proofs of the Herlihy-Wing queue~\cite{linearizability} using linearization points (with helping and/or speculation). This algorithm can be shown linearizable using forwards/backwards simulation~\cite{linearizability} and more recently has been shown to only require a backwards simulation~\cite{schellhorn-cav12}. But integrating this form of simulation with the more intricate notions of interference expressible in the Views framework remains an open problem. Another approach to proving linearizability is the aspect-oriented method.
This gives a series of properties of a queue~\cite{henzinger-concur13} (or a stack~\cite{dodds-popl15}) implementation which imply that the implementation is linearizable. This method has been applied to algorithms that cannot be handled with standard linearization-point-based methods. However, the aspect-oriented approach requires a custom theorem per data structure, which limits its applicability. In this paper we concentrated on linearizability in its original form~\cite{linearizability}, which considers only finite computations and, hence, specifies only safety properties of the library. Linearizability has since been generalised to also specify liveness properties~\cite{gotsman-icalp11}. Another direction of future work is to generalise our logic to handle liveness, possibly building on ideas from~\cite{liang-lics14}. When a library is linearizable, one can use its atomic specification instead of the actual implementation to reason about its clients~\cite{filipovic-tcs}. Some logics achieve the same effect without using linearizability, by expressing library specifications as judgements in the logic rather than as the code of an abstract library~\cite{iris,icap,sergey-esop15}. It is an interesting direction of future work to determine a precise relationship between this method of specification and linearizability, and to propose a generic logic unifying the two. \section{Conclusion} We have presented a logic for proving the linearizability of concurrent libraries that can be instantiated with different methods for thread-modular reasoning. To this end, we have extended the Views framework~\cite{views} to reason about relations between programs. Our main technical contribution in this regard was to propose the requirement for axiom soundness (Definition~\ref{def:axiom}, \S\ref{sec:logic}) that ensures a correct interaction between the treatment of linearization points and the underlying thread-modular reasoning.
We have shown that our logic is powerful enough to handle concurrent algorithms with challenging features, such as helping. More generally, our work marks the first step towards unifying the logics for proving relational properties of concurrent programs. \section{Example} \input{outline-rgsep} In this section, we demonstrate how to reason about algorithms with helping using relational views. We choose a simple library $\mathrm{\ell}$ implementing a concurrent increment and prove its linearizability with the RGSep-based logic. The concrete library $\mathrm{\ell}$ has one method ${\sf inc}$, which increments the value of a shared counter ${\tt k}$ by the argument of the method. The specification of $\mathrm{\ell}$ is given by an abstract library $\mathcal{L}$. The abstract command, provided by $\mathcal{L}$ as an implementation of ${\sf inc}$, operates with an abstract counter ${\tt K}$ as follows (assuming that ${\tt K}$ is initialised to zero): \begin{lstlisting}[basicstyle=\small\ttfamily] #$\mathcal{L}({\sf inc}, a, r)$#: < __kabs := __kabs + #$a$#; assume(__kabs == #$r$#); > \end{lstlisting} That is, $\mathcal{L}({\sf inc}, a, r)$ atomically increments the counter and executes a command ${\tt assume}({\tt K} == r)$, which terminates only if the return value $r$ chosen at the invocation equals the resulting value of ${\tt K}$. This corresponds to how we specify methods' return values in \S\ref{sec:soundness}. In Figure~\ref{fig:fcproof}, we show the pseudo-code of the implementation of the method ${\sf inc}$ in a C-style language along with a proof outline. The method $\mathrm{\ell}({\sf inc}, a, r)$ takes one argument, increments a shared counter ${\tt k}$ by it and returns the increased value of the counter. Since ${\tt k}$ is shared among threads, they follow a protocol regulating the access to the counter. This protocol is based on flat combining \cite{flatcombining}, a synchronisation technique in which a single thread executes batches of sequential operations requested by other threads.
The protocol is the following. When a thread $t$ executes $\mathrm{\ell}({\sf inc}, a, r)$, it first makes the argument of the method visible to other threads by storing it in an array ${\tt arg}$, and then sets ${\tt res}[t]$ to ${\tt nil}$ to signal to other threads its intention to execute an increment with that argument. It then spins in the loop on line~\ref{line:loop}, trying to write its thread identifier into a variable ${\tt L}$ with a compare-and-swap (CAS). Out of all threads spinning in the loop, the one that succeeds in writing into ${\tt L}$ becomes a {\em combiner}: it performs the increments requested by all threads with arguments stored in ${\tt arg}$ and writes the results into the corresponding cells of the array ${\tt res}$. The other threads keep spinning and periodically checking the value of their cells in ${\tt res}$ until a non-${\tt nil}$ value appears there, meaning that a combiner has performed the requested operation and marked it as finished. The protocol relies on the assumption that ${\tt nil}$ is a value that is never returned by the method. Similarly to the specification of the increment method, the implementation in Figure~\ref{fig:fcproof} ends with a command ${\tt assume}({\tt res}[{\tt mytid}()] == r)$. The proof outline features auxiliary assertions defined in Figure~\ref{fig:preds}. In the assertions, we let $\_$ denote a value or a logical variable whose name is irrelevant. We assume that each program variable ${\tt var}$ has a unique location in the heap and denote it with $\& {\tt var}$. Values $a$, $r$ and $t$ are used in the formulas and the code as constants.
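To make the protocol above concrete, the following is a minimal executable sketch of a flat-combining counter in Python. It is our own illustration, not the paper's development: the class name, the use of a lock to model an atomic CAS on ${\tt L}$, and the choice of representing ${\tt nil}$ by \texttt{None} are assumptions. It mirrors the roles of ${\tt arg}$, ${\tt res}$, ${\tt L}$ and the combiner loop.

```python
import threading

NIL = None  # a value never returned by inc: counter results here are positive

class FlatCombiningCounter:
    """Executable sketch of the flat-combining increment protocol."""

    def __init__(self, nthreads):
        self.k = 0                          # the shared counter k
        self.L = 0                          # 0 = free; otherwise the combiner's id
        self.arg = [0] * (nthreads + 1)     # announced arguments, indexed by thread id
        self.res = [0] * (nthreads + 1)     # NIL in res[t] marks a pending request
        self._cas = threading.Lock()        # models an atomic CAS instruction on L

    def _cas_L(self, old, new):
        with self._cas:
            if self.L == old:
                self.L = new
                return True
            return False

    def inc(self, t, a):
        self.arg[t] = a                     # publish the argument ...
        self.res[t] = NIL                   # ... and announce the pending operation
        while self.res[t] is NIL:           # spin until the operation is performed
            if self._cas_L(0, t):           # this thread becomes the combiner
                for i in range(1, len(self.res)):
                    if self.res[i] is NIL:          # thread i has a pending request:
                        self.k += self.arg[i]       # linearization point of thread i
                        self.res[i] = self.k        # mark i's operation as finished
                self.L = 0                  # release the combiner role
        return self.res[t]
```

Running threads $1, \dots, 4$, each calling \texttt{inc(t, t)} once, always leaves the counter at $1+2+3+4 = 10$, and the four return values are distinct intermediate counter values, as the atomic specification demands; whichever thread wins the CAS helps the others by performing their linearization points inside its own combiner loop.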
We prove the following specification for $\mathrm{\ell}({\sf inc}, a, r)$: \[ \rginfer{t}{\mathcal{R}_t, \mathcal{G}_t}{ \nlfrac{ {\sf global} * M(t) * }{ \ltodo{t}{\mathcal{L}({\sf inc}, a, r)} } }{ \mathrm{\ell}({\sf inc}, a, r) }{ \nlfrac{ {\sf global} * M(t) * }{ \ldone{t}{\mathcal{L}({\sf inc}, a, r)} } } \] In the specification, $M(t)$ asserts the presence of ${\tt arg}[t]$ and ${\tt res}[t]$ in the shared state, and ${\sf global}$ is an assertion describing the shared state of all the threads. Thus, the pre- and postcondition of the specification differ only by the kind of token given to $t$. \begin{figure}[t] $ \begin{array}{l c l} X \mathbin{\not\mapsto} Y & \triangleq & \exists Y' {.\,} X \mathbin{\mapsto} Y' * Y \neq Y' \\[1pt] M(t) & \triangleq & \boxed{ {\sf true} * (\addr{{\tt arg}[t]} \mathbin{\mapsto} \_ * \addr{{\tt res}[t]} \mathbin{\not\mapsto} {\tt nil})} \\[1pt] {\sf task_{todo}}(t, a, r) & \triangleq & \addr{{\tt arg}[t]} \mathbin{\mapsto} a * \addr{{\tt res}[t]} \mathbin{\mapsto} {\tt nil} * \ltodo{t}{\mathcal{L}({\sf inc}, a, r)}; \\[1pt] {\sf task_{done}}(t, a, r) & \triangleq & \addr{{\tt arg}[t]} \mathbin{\mapsto} a * \addr{{\tt res}[t]} \mathbin{\mapsto} r * r \not= {\tt nil} * \ldone{t}{\mathcal{L}({\sf inc}, a, r)}; \\[1pt] {\sf kinv}(V) & \triangleq & \addr{{\tt k}} \mathbin{\mapsto} V * \addr{{\tt K}} \mathbin{\Mapsto} V \\[1pt] {\sf LI}(i, t, a, r) & \triangleq & \boxed{ \begin{array}[t]{l} {\sf true} * ((t < i \land {\sf task_{done}}(t, a, r))\lor{} \\[1pt] (t \geq i \land ({\sf task_{todo}}(t, a, r) \lor {\sf task_{done}}(t, a, r)))) \end{array}} \\[1pt] {\sf tinv}(i) & \triangleq & \addr{{\tt arg}[i]} \mathbin{\mapsto} \_ * \addr{{\tt res}[i]} \mathbin{\Mapsto} \_ \lor {\sf task_{todo}}(i, \_, \_) \lor {\sf task_{done}}(i, \_, \_) \\[1pt] {\sf global} & \triangleq & \boxed{ (\addr{{\tt L}} \mathbin{\mapsto} 0 * {\sf kinv}(\_) \lor \addr{{\tt L}} \mathbin{\not\mapsto} 0) * \circledast_{j \in {\sf ThreadID}}\,\tinv(j)}, \end{array} 
$ \caption{Auxiliary predicates. $\circledast_{j \in {\sf ThreadID}}\,{\sf tinv}(j)$ denotes ${\sf tinv}(1) \mathbin{*} {\sf tinv}(2) \mathbin{*} \cdots \mathbin{*} {\sf tinv}(N)$} \label{fig:preds} \end{figure} \FloatBarrier The main idea of the proof is to allow a thread $t$ to share the ownership of its token $\ltodo{t}{\mathcal{L}({\sf inc}, a, r)}$ with the other threads. This enables two possibilities for $t$. Firstly, $t$ may become a combiner. Then $t$ has a linearization point on line~\ref{line:lp} (when the loop index $i$ equals $t$). In this case $t$ also {\em helps} other concurrent threads by performing their linearization points on line~\ref{line:lp} (when $i \neq t$). The alternative possibility is that some other thread becomes a combiner and performs the linearization point of $t$. Thus, the method has a non-fixed linearization point, as it may occur in the code of a different thread. We further explain how the tokens are transferred. On line~\ref{line:task} the method performs the assignment \texttt{res[mytid()] := nil}, signalling to other threads the task this thread requests. At this step, the method transfers its token $\ltodo{t}{\mathcal{L}({\sf inc}, a, r)}$ to the shared state, as represented by the assertion $\boxed{{\sf true} * {\sf task_{todo}}(t, a, r)}$. In order to take into account other threads interfering with $t$ and possibly helping it, here and below we stabilise the assertion by adding a disjunct ${\sf task_{done}}(t, a, r)$. If a thread $t$ gets help from other threads, then ${\sf task_{done}}(t, a, r)$ holds, which implies that ${\tt res}[t] \neq {\tt nil}$ and $t$ cannot enter the loop on line~\ref{line:loop}. Otherwise, if $t$ becomes a combiner, it transfers ${\sf kinv}(\_)$ from the shared state to its local state to take over the ownership of the counters ${\tt k}$ and ${\tt K}$ and thus ensure that the access to the counter is governed by the mutual exclusion protocol.
At each iteration $i$ of the forall loop, \texttt{res[i] = nil} implies that ${\sf task_{todo}}(i, \_, \_)$ holds, meaning that the token of a thread $i$ is in the shared state. Consequently, on line~\ref{line:lp} a thread $t$ may use it to perform the linearization point of $i$. The actions defining the guarantee relation $\mathcal{G}_t$ of a thread $t$ are the following: \begin{enumerate} \item $\rgact{\addr{{\tt arg}[t]} \mathbin{\mapsto} \_ * \addr{{\tt res}[t]} \mathbin{\not\mapsto} {\tt nil}}{\addr{{\tt arg}[t]} \mathbin{\mapsto} a * \addr{{\tt res}[t]} \mathbin{\not\mapsto} {\tt nil}}$; \item $\rgact{\addr{{\tt arg}[t]} \mathbin{\mapsto} a * \addr{{\tt res}[t]} \mathbin{\not\mapsto} {\tt nil}}{{\sf task_{todo}}(t, a, r)}$; \item $\rgact{\addr{{\tt L}} \mathbin{\mapsto} 0 * {\sf kinv}(\_)}{\addr{{\tt L}} \mathbin{\mapsto} t}$; \item $\rgact{\addr{{\tt L}} \mathbin{\mapsto} t * {\sf task_{todo}}(T, A, R)}{\addr{{\tt L}} \mathbin{\mapsto} t * {\sf task_{done}}(T, A, R)}$; \item $\rgact{\addr{{\tt L}} \mathbin{\mapsto} t}{\addr{{\tt L}} \mathbin{\mapsto} 0 * {\sf kinv}(\_)}$; \item $\rgact{{\sf task_{done}}(t, a, r)}{\addr{{\tt arg}[t]} \mathbin{\mapsto} a * \addr{{\tt res}[t]} \mathbin{\mapsto} r}$. \end{enumerate} Of these, actions 2 and 6 specify transferring the token of a thread $t$ to and from the shared state, and action 4 describes using the shared token of a thread $T$. The rely relation of a thread $t$ is then defined as the union of all actions from the guarantee relations of the other threads and an additional action for each thread $t' \in {\sf ThreadID} \setminus \{ t \}$ allowing the client to prepare a thread $t'$ for a new method call by giving it a new token: $\rgact{\ldone{t'}{\mathcal{L}({\sf inc}, A, R)}}{\ltodo{t'}{\mathcal{L}({\sf inc}, A', R')}}$. \section{Introduction} To manage the complexity of constructing concurrent software, programmers package often-used functionality into {\em libraries} of concurrent algorithms.
These encapsulate data structures, such as queues and lists, and provide clients with a set of methods that can be called concurrently to operate on these (e.g.,~\textsf{java.util.concurrent}). To maximise performance, concurrent libraries may use sophisticated non-blocking techniques, allowing multiple threads to operate on the data structure with minimum synchronisation. Despite this, each library method is usually expected to behave as though it executes atomically. This requirement is formalised by the standard notion of correctness for concurrent libraries, {\em linearizability}~\cite{linearizability}, which establishes a form of simulation between the original {\em concrete} library and another {\em abstract} library, where each method is implemented atomically. A common approach to proving linearizability is to find a {\em linearization point} for every method of the concrete library at which it can be thought of as taking effect.\footnote{Some algorithms cannot be reasoned about using linearization points, which we discuss in \S\ref{sec:related}.} Given an execution of a concrete library, the matching execution of the abstract library, required to show the simulation, is constructed by executing the atomic abstract method at the linearization point of the concrete method. A difficulty in this approach is that linearization points are often not determined by a statically chosen point in the method code. For example, in concurrent algorithms with {\em helping}~\cite{herlihy-book}, a method may execute an operation originally requested by another method, called in a different thread; then the linearization point of the latter method is determined by an action of the former. Recent years have seen a number of program logics for proving linearizability (see~\cite{DongolD14} for a survey). To avoid reasoning about the high number of possible interleavings between concurrently executing threads, these logics often use {\em thread-modular} reasoning.
They establish protocols that threads should follow when operating on the shared data structure and reason separately about every thread, assuming that the rest follow the protocols. The logics for proving linearizability, such as~\cite{rgsep,rgsim-lin}, usually borrow thread-modular reasoning rules from logics originally designed for proving non-relational properties of concurrent programs, such as rely-guarantee~\cite{rg}, separation logic~\cite{seplogic} or combinations thereof~\cite{rgsep,lrg}. Although this leads the logics to differ in technical details, they use similar methods for reasoning about linearizability, usually based on linearization points. Despite this similarity, designing a logic for proving linearizability that uses a particular thread-modular reasoning method currently requires finding the proof rules and proving their soundness afresh. To consolidate this design space of linearization-point-based reasoning, we propose a logic for linearizability that is {\em generic}, i.e., can be instantiated with different means of thread-modular reasoning about concurrency, such as separation logic~\cite{seplogic} or rely-guarantee~\cite{rg}. To this end, we build on the recently-proposed Views framework~\cite{views}, which unifies thread-modular logics for concurrency, such as the above-mentioned ones. Our contribution is to generalise the framework to reason about relations between programs, required for proving linearizability. In more detail, assertions in our logic are interpreted over a monoid of {\em relational views}, which describe relationships between the states of the concrete and the abstract libraries and the protocol that threads should follow in operating on these. The operation of the monoid, similar to the separating conjunction in separation logic~\cite{seplogic}, combines the assertions in different threads while ensuring that they agree on the protocols of access to the state. 
The choice of a particular relational view monoid thus determines the thread-modular reasoning method used by our logic. To reason about linearization points, relational views additionally describe a set of special {\em tokens} (as in~\cite{rgsep,rgsim-lin,tada}), each denoting a one-time permission to execute a given atomic command on the state of the abstract library. The place where this permission is used in the proof of a concrete library method determines its linearization point, with the abstract command recorded by the token giving its specification. Crucially, reasoning about the tokens is subject to the protocols established by the underlying thread-modular reasoning method; in particular, their \emph{ownership} can be transferred between different threads, which allows us to deal with helping. We prove the soundness of our generic logic under certain conditions on its instantiations (Definition~\ref{def:axiom}, \S\ref{sec:logic}). These conditions represent our key technical contribution, as they capture the essential requirements for soundly combining a given thread-modular method for reasoning about concurrency with the linearization-point method for reasoning about linearizability. To illustrate the use of our logic, we present its example instantiations where thread-modular reasoning is done using disjoint concurrent separation logic~\cite{csl} and a combination of separation logic and rely-guarantee~\cite{rgsep}. We then apply the latter instantiation to prove the correctness of a sample concurrent algorithm with helping. We expect that our results will make it possible to systematically design logics using the plethora of other methods for thread-modular reasoning that have been shown to be expressible in the Views framework~\cite{csl,CAP,seplogicperm}. \section{The generic logic}\label{sec:logic} In this section, we present our framework for designing program logics for linearizability proofs.
Given a concrete method and a corresponding abstract method, we aim to demonstrate that the former has a linearization point either within its code or in the code of another thread. The idea behind such proofs is to establish a simulation between concrete and abstract methods using linearization points to determine when the abstract method has to make a transition to match a given execution of the concrete method. To facilitate such simulation-based proofs, we design our relational logic so that formulas in it denote relations between concrete states, abstract states and special {\it tokens}. Tokens are our tool for reasoning about linearization points. At the beginning of its execution in a thread $t$, each concrete method $m$ is given a token $\todo{{\rm A}_m}$ of the corresponding abstract primitive command ${\rm A}_m$. The token represents a one-time permission for the method to take effect, i.e., to perform a primitive command ${\rm A}_m$ on an abstract machine. When the permission is used, a token $\todo{{\rm A}_m}$ in a thread $t$ is irreversibly replaced with $\done{{\rm A}_m}$. Thus, by requiring that a method start its execution in a thread $t$ with a token $\todo{{\rm A}_m}$ and end with $\done{{\rm A}_m}$, we ensure that its code contains a linearization point. The tokens of all threads are described by $\Delta \in {\sf Tokens}$: \[ {\sf Tokens} = {\sf ThreadID} \rightharpoonup (\dc{ \todo{{\rm A}} \mid {\rm A} \in {\sf APCom}} \cup \dc{ \done{{\rm A}} \mid {\rm A} \in {\sf APCom}}) \] Reasoning about states and tokens in the framework is done with the help of relational {\it views}. We assume a set ${\sf Views}$, ranged over by $p$, $q$ and $r$, as well as a reification function $\reif{\ } : {\sf Views} \to \powerset{{\sf State} \times {\sf AState} \times {\sf Tokens}}$ that interprets views as ternary relations on concrete states, abstract states and indexed sets of tokens.
\begin{dfn} A relational view monoid is a commutative monoid $({\sf Views}, \mathbin{*}, u)$, where ${\sf Views}$ is an underlying set of relational views, $\mathbin{*}$ is a monoid operation and $u$ is a unit. \end{dfn} The monoid structure of relational views allows treating them as restrictions on the environment of threads. Intuitively, each thread uses views to declare a protocol that other threads should follow while operating on concrete states, abstract states and tokens. Similarly to the separating conjunction from separation logic, the monoid operation $\mathbin{*}$ (view composition) applied to a pair of views combines protocols of access to the state and ensures that they do not contradict each other. \mypar{Disjoint Concurrent Separation logic} To give an example of a view monoid, we present a structure inspired by Disjoint Concurrent Separation logic (DCSL). A distinctive feature of DCSL is that its assertions enforce a protocol, according to which threads operate on disjoint pieces of memory. We assume a set of values ${\sf Val}$, of which a subset ${\sf Loc} \subseteq {\sf Val}$ represents heap addresses. By letting ${\sf State} = {\sf AState} = ({\sf Loc} \rightharpoonup_{\rm fin} {\sf Val}) \cup \dc{\lightning}$ we represent a state as either a finite partial function from locations to values or an exceptional faulting state $\lightning$, which denotes the result of an invalid memory access. We define an operation $\bullet$ on states, which results in $\lightning$ if either of the operands is $\lightning$, or the union of partial functions if their domains are disjoint. Finally, we assume that the set ${\sf PCom}$ consists of standard heap-manipulating commands with the usual semantics \cite{seplogic,views}.
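The state composition $\bullet$ and the DCSL view composition can be prototyped directly. The following is a sketch under our own encoding, not part of the formal development: heaps are Python dicts, \texttt{FAULT} models $\lightning$, \texttt{None} marks an undefined composition, and views are lists of (state, abstract state, tokens) triples.

```python
FAULT = 'FAULT'  # models the faulting state (lightning)

def compose(s1, s2):
    """s1 . s2 on states: faults propagate; union of partial functions
    with disjoint domains; None when the composition is undefined."""
    if s1 == FAULT or s2 == FAULT:
        return FAULT
    if set(s1) & set(s2):
        return None              # overlapping domains: undefined
    return {**s1, **s2}

def star_sl(p, q):
    """The DCSL view composition *_SL: combine every pair of triples
    whose state, abstract-state and token components are compatible."""
    out = []
    for (s1, S1, D1) in p:
        for (s2, S2, D2) in q:
            if set(D1) & set(D2):          # a token is exclusively owned
                continue
            s, S = compose(s1, s2), compose(S1, S2)
            if s is None or S is None:     # heap cells must be disjoint
                continue
            out.append((s, S, {**D1, **D2}))
    return out
```

The exclusivity checks are exactly what makes helping inexpressible here: no thread can ever see, let alone use, a token held by another thread.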
We consider the view monoid $(\powerset{({\sf State} \setminus \dc{\lightning}) \times ({\sf AState} \setminus \dc{\lightning}) \times {\sf Tokens}}, \mathbin{*}_{\sf SL}, ([\ ], [\ ], [\ ]))$: the unit is a triple of nowhere-defined functions $[\ ]$, and the view composition is defined as follows: \[ p \mathbin{*}_{\sf SL} p' \triangleq \dc{(\sigma \bullet \sigma', \Sigma \bullet \Sigma', \Delta \uplus \Delta') \mid (\sigma, \Sigma, \Delta) \in p \land (\sigma', \Sigma', \Delta') \in p'}. \] In this monoid, the composition enforces a protocol of exclusive ownership of parts of the heap: a pair of views can be composed only if they do not simultaneously describe the content of the same heap cell or a token. Since tokens are exclusively owned in DCSL, they cannot be accessed by other threads, which makes it impossible to express a helping mechanism with the DCSL views. In \S\ref{sec:rgsep}, we present another instance of our framework and reason about helping in it. \mypar{Reasoning about linearization points} We now introduce {\em action judgements}, which formalise the linearization-point-based approach to proving linearizability within our framework. \begin{wrapfigure}[5]{r}{8em} \vspace{-35pt} \begin{center} $ \xymatrix @R=1.2em @C=1.5em { \sigma \ar@{-}[rr]^{\reif{p \mathbin{*} r}} \ar[dd]^{\db{{\sf \alpha}}} && \Sigma \ar@{-->}[dd]^{\db{{\rm A}}} \\ \\ \sigma' \ar@{--}[rr]^{\reif{q \mathbin{*} r}} && \Sigma' } $ \end{center} \end{wrapfigure} Let us assume that ${\sf \alpha}$ is executed in a concrete state $\sigma$ with an abstract state $\Sigma$ and a set of tokens $\Delta$ satisfying a precondition $p$. According to the action judgement $\sat{t}{{\sf \alpha}}{p}{q}$, for every update $\sigma' \in \intp{t}{{\sf \alpha}}{\sigma}$ of the concrete state, the abstract state may be changed to $\Sigma' \in \intp{t'}{{\rm A}}{\Sigma}$ in order to satisfy the postcondition $q$, provided that there is a token $\todo{{\rm A}}$ in a thread $t'$.
When the abstract state $\Sigma$ is changed and the token $\todo{{\rm A}}$ of a thread $t'$ is used, the concrete state update corresponds to a linearization point; otherwise, it is a regular transition. \begin{dfn}\label{def:axiom} The action judgement $\sat{t}{{\sf \alpha}}{p}{q}$ holds iff the following is true: \begin{multline*} \forall r, \sigma, \sigma', \Sigma, \Delta {.\,} (\sigma, \Sigma, \Delta) \in \reif{p \mathbin{*} r} \land \sigma' \in \intp{t}{{\sf \alpha}}{\sigma} \implies{} \\ \exists \Sigma', \Delta' {.\,} \lptrans{\Sigma, \Delta}{\Sigma', \Delta'} \land (\sigma', \Sigma', \Delta') \in \reif{q \mathbin{*} r}, \end{multline*} where ${\sf LP}^*$ is the reflexive-transitive closure of the following relation: \begin{equation*} \lp{\Sigma, \Delta}{\Sigma', \Delta'} \triangleq \exists t', {\rm A} \ldotp \Sigma' \in \intp{t'}{{\rm A}}{\Sigma} \land \Delta(t') = \todo{{\rm A}} \land \Delta' = \Delta[t' : \done{{\rm A}}], \end{equation*} and $f[x : a]$ denotes the function such that $f[x : a](x) = a$ and for any $y \neq x$, $f[x : a](y) = f(y)$. \end{dfn}\FloatBarrier Note that depending on pre- and postconditions $p$ and $q$, $\sat{t}{{\sf \alpha}}{p}{q}$ may encode a regular transition, a conditional or a standard linearization point. It is easy to see that the latter is the case only when in all sets of tokens $\Delta$ from $\reif{p}$ some thread $t'$ has a todo-token, and in all $\Delta'$ from $\reif{q}$ it has a done-token. Additionally, the action judgement may represent a conditional linearization point of another thread, as the ${\sf LP}$ relation allows using tokens of other threads. Action judgements have a closure property that is important for thread-modular reasoning: when $\sat{t}{{\sf \alpha}}{p}{q}$ holds, so does $\sat{t}{{\sf \alpha}}{p * r}{q * r}$ for every view $r$. That is, the execution of ${\sf \alpha}$ and a corresponding linearization point preserves every view $r$ that $p$ can be composed with.
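The ${\sf LP}$ relation and its closure ${\sf LP}^*$ admit a direct executable reading. Below is a sketch under our own encoding: abstract states are assumed hashable, each abstract command ${\rm A}$ is a function from a state to a set of states, and a token map $\Delta$ is a dict from thread ids to \texttt{('todo', A)} or \texttt{('done', A)}.

```python
def lp_step(Sigma, Delta, interp):
    """One LP transition: a thread t' holding a todo-token for A fires A
    on the abstract state and flips its token to done."""
    out = []
    for t, (kind, A) in Delta.items():
        if kind == 'todo':
            for Sigma2 in interp[A](Sigma):
                Delta2 = dict(Delta)
                Delta2[t] = ('done', A)
                out.append((Sigma2, Delta2))
    return out

def lp_star(Sigma, Delta, interp):
    """All configurations reachable via the reflexive-transitive
    closure LP*, as used in action judgements."""
    seen = {(Sigma, frozenset(Delta.items()))}
    frontier = [(Sigma, Delta)]
    while frontier:
        S, D = frontier.pop()
        for S2, D2 in lp_step(S, D, interp):
            key = (S2, frozenset(D2.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((S2, D2))
    return seen
```

For instance, with two threads both holding todo-tokens for an abstract increment, `lp_star` reaches four configurations: neither, either or both tokens spent, with the abstract counter advanced accordingly.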
Consequently, when the action judgements hold of the primitive commands and views of every thread, all threads mutually agree on each other's protocols of access to the shared memory encoded in their views. This enables reasoning about every thread in isolation with the assumption that its environment follows its protocol. Thus, the action judgements formalise the requirements that instances of our framework need to satisfy in order to be sound. In this regard, action judgements are inspired by semantic judgements of the Views Framework \cite{views}. Our technical contribution lies in formulating the essential requirements for thread-modular reasoning about linearizability of concurrent libraries with the linearization-point method and in extending the semantic judgement with them. We let a {\it repartitioning implication} of views $p$ and $q$, written $p \Rrightarrow q$, denote $\forall r {.\,} \reif{p * r} \subseteq \reif{q * r}$. A repartitioning implication $p \Rrightarrow q$ ensures that states satisfying $p$ also satisfy $q$ and additionally requires this property to preserve any view $r$. \mypar{Program logic} We are now in a position to present our generic logic for linearizability proofs via the linearization-point method. Assuming a view monoid and a reification function as parameters, we define a minimal language ${\sf Assn}$ for assertions $\mathcal{P}$ and $\mathcal{Q}$ denoting sets of views: \begin{gather*} \mathcal{P}, \mathcal{Q} \in {\sf Assn} ::= \rho \mid \mathcal{P} \mathbin{*} \mathcal{Q} \mid \mathcal{P} \lor \mathcal{Q} \mid \mathcal{P} \Rrightarrow \mathcal{Q} \mid \exists X {.\,} \mathcal{P} \mid \dots \end{gather*} The grammar includes view assertions $\rho$, whose syntax ${\sf VAssn}$ is a parameter of the framework. Formulas of ${\sf Assn}$ may contain the standard connectives from separation logic, the repartitioning implication and the existential quantification over logical variables $X$, ranging over a set ${\sf LVar}$.
\begin{figure}[t] $ \begin{array}{l c r} \evalf{\mathcal{P} \mathbin{*} \mathcal{Q}}{{\bf i}} = \evalf{\mathcal{P}}{{\bf i}} \mathbin{*} \evalf{\mathcal{Q}}{{\bf i}} & \quad\quad\quad\quad\quad & \evalf{\mathcal{P} \Rrightarrow \mathcal{Q}}{{\bf i}} = \evalf{\mathcal{P}}{{\bf i}} \Rrightarrow \evalf{\mathcal{Q}}{{\bf i}} \\[1pt] \evalf{\mathcal{P} \lor \mathcal{Q}}{{\bf i}} = \evalf{\mathcal{P}}{{\bf i}} \lor \evalf{\mathcal{Q}}{{\bf i}} & \hfill & \evalf{\exists X {.\,} \mathcal{P}}{{\bf i}} = \bigvee_{n \in {\sf Val}} \evalf{\mathcal{P}}{{\bf i}[X : n]} \end{array} $ \caption{Satisfaction relation for the assertion language ${\sf Assn}$\label{fig:sem4frm}} \end{figure} Let us assume an interpretation of logical variables ${\bf i} \in {\sf Int} = {\sf LVar} \to {\sf Val}$ that maps logical variables from ${\sf LVar}$ to values from a finite set ${\sf Val}$. In Figure~\ref{fig:sem4frm}, we define a function $\db{\cdot}_\cdot : {\sf Assn} \times {\sf Int} \to {\sf Views}$ that we use to interpret assertions. The interpretation of assertions is parametrised by a function $\db{\cdot}_\cdot : {\sf VAssn} \times {\sf Int} \to {\sf Views}$ interpreting view assertions. In order to interpret disjunction, we introduce a corresponding operation on views and require the following properties from it: \begin{equation}\label{eq:disj} \begin{array}{l@{\hspace{7em}}l} \reif{p \lor q} = \reif{p} \cup \reif{q}& (p \lor q) * r = (p * r) \lor (q * r) \end{array} \end{equation} The judgements of the program logic take the form $\infer{t}{\mathcal{P}}{C}{\mathcal{Q}}$. In Figure~\ref{fig:proofrules}, we present the proof rules, which are mostly standard. Among them, the {\sc Prim} rule is noteworthy, since it incorporates the simulation-based approach to reasoning about linearization points introduced by action judgements. The {\sc Frame} rule applies the idea of local reasoning from separation logic \cite{seplogic} to views.
The {\sc Conseq} enables weakening a precondition or a postcondition in a proof judgement and uses repartitioning implications to ensure the thread-modularity of the weakened proof judgement. \begin{figure}[t] $ \begin{array}{r@{\hspace{2pt}}c@{\hspace{2pt}}r@{\hspace{2pt}}c} \mbox{\footnotesize\sc(Prim)}\hspace{0.25cm}& \dfrac{ \forall {\bf i} {.\,} \sat{t}{{\sf \alpha}}{\evalf{\mathcal{P}}{{\bf i}}}{\evalf{\mathcal{Q}}{{\bf i}}} }{ \infer{t}{\mathcal{P}}{{\sf \alpha}}{\mathcal{Q}} } & \mbox{\footnotesize\sc(Seq)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}}{C_1}{\mathcal{P}'} \quad \infer{t}{\mathcal{P}'}{C_2}{\mathcal{Q}} }{ \infer{t}{\mathcal{P}}{C_1 \mathbin{\,;\,} C_2}{\mathcal{Q}} }\\[9pt] \mbox{\footnotesize\sc(Frame)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}}{C}{\mathcal{Q}} }{ \infer{t}{\mathcal{P} \mathbin{*} \mathcal{R}}{C}{\mathcal{Q} \mathbin{*} \mathcal{R}} } & \mbox{\footnotesize\sc(Disj)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}_1}{C}{\mathcal{Q}_1}\quad\infer{t}{\mathcal{P}_2}{C}{\mathcal{Q}_2} }{ \infer{t}{\mathcal{P}_1 \lor \mathcal{P}_2}{C}{\mathcal{Q}_1 \lor \mathcal{Q}_2} }\\[9pt] \mbox{\footnotesize\sc(Ex)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}}{C}{\mathcal{Q}} }{ \infer{t}{\exists X {.\,} \mathcal{P}}{C}{\exists X {.\,} \mathcal{Q}} } & \mbox{\footnotesize\sc(Choice)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}}{C_1}{\mathcal{Q}} \quad \infer{t}{\mathcal{P}}{C_2}{\mathcal{Q}} }{ \infer{t}{\mathcal{P}}{C_1 \mathbin{+} C_2}{\mathcal{Q}} } \\[9pt] \mbox{\footnotesize\sc(Iter)}\hspace{0.25cm}& \dfrac{ \infer{t}{\mathcal{P}}{C}{\mathcal{P}} }{ \infer{t}{\mathcal{P}}{C^\star}{\mathcal{P}} } & \mbox{\footnotesize\sc(Conseq)}\hspace{0.25cm}& \dfrac{ \mathcal{P}' \Rrightarrow \mathcal{P} \quad \infer{t}{\mathcal{P}}{C}{\mathcal{Q}} \quad \mathcal{Q} \Rrightarrow \mathcal{Q}' }{ \infer{t}{\mathcal{P}'}{C}{\mathcal{Q}'} } \end{array}$ \caption{Proof rules\label{fig:proofrules}} \end{figure} \mypar{Semantics of proof judgements} 
We give semantics to judgements of the program logic by lifting the requirements of action judgements to sequential commands. \begin{dfn}[Safety Judgement]\label{def:safety} We define ${\sf safe}_t$ as the greatest relation such that the following holds whenever $\safe{t}{p}{C}{q}$ does: \begin{itemize} \item if $C \not= {\sf skip}$, then $\forall C', {\sf \alpha} {.\,} \trans{C}{{\sf \alpha}}{C'} \implies \exists p' {.\,} \sat{t}{{\sf \alpha}}{p}{p'} \land \safe{t}{p'}{C'}{q}$, \item if $C = {\sf skip}$, then $p \Rrightarrow q$. \end{itemize} \end{dfn} \begin{lem}\label{lem:logic2safety} $\forall t, \mathcal{P}, C, \mathcal{Q} {.\,} \infer{t}{\mathcal{P}}{C}{\mathcal{Q}} \implies \forall {\bf i} {.\,} \safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{C}{\evalf{\mathcal{Q}}{{\bf i}}}.$ \end{lem} We can understand the safety judgement $\safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{C}{\evalf{\mathcal{Q}}{{\bf i}}}$ as an obligation to create a sequence of views $\evalf{\mathcal{P}}{{\bf i}} = p_1, p_2, \dots, p_{n+1} = \evalf{\mathcal{Q}}{{\bf i}}$ for each finite trace ${\sf \alpha}_1, {\sf \alpha}_2, \dots, {\sf \alpha}_n$ of $C$ to justify each transition with action judgements $\sat{t}{{\sf \alpha}_1}{p_1}{p_2}$, \dots, $\sat{t}{{\sf \alpha}_n}{p_n}{p_{n+1}}$. Thus, when $\safe{t}{\evalf{\mathcal{P}}{{\bf i}}}{C}{\evalf{\mathcal{Q}}{{\bf i}}}$ holds, it ensures that every step of the machine correctly preserves a correspondence between a concrete and abstract execution. Intuitively, the safety judgement lifts the simulation between concrete and abstract primitive commands established with action judgements to the implementation and specification of a method. In Lemma~\ref{lem:logic2safety}, we establish that the proof judgements of the logic imply the safety judgements. As a part of the proof, we show that each of the proof rules of the logic holds of safety judgements. Due to space constraints, this and other proofs are given in the extended version of the paper~\cite{ext}. 
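Read operationally, discharging a safety judgement along one finite trace amounts to threading a chain of candidate views through the trace and checking each step with an action judgement. This can be phrased as the following sketch, where the action judgement is supplied as an oracle \texttt{sat} (an assumed parameter, since its definition depends on the chosen view monoid):

```python
def check_trace(sat, t, trace, views):
    """Checks sat_t(alpha_i, p_i, p_{i+1}) for a finite trace
    alpha_1..alpha_n against candidate views p_1..p_{n+1}."""
    assert len(views) == len(trace) + 1
    return all(sat(t, a, views[i], views[i + 1])
               for i, a in enumerate(trace))
```

The safety judgement requires such a chain to exist for every trace of $C$, with the final view implying the postcondition by repartitioning.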
\section{Methods Syntax and Sequential Semantics}\label{sec:proglang} We consider concurrent programs that consist of two components, which we call {\em libraries} and {\em clients}. Libraries provide clients with a set of methods, and clients call them concurrently. We distinguish {\em concrete} and {\em abstract} libraries, as the latter serve as specifications for the former, since their methods are executed atomically. \mypar{Syntax} Concrete methods are implemented as {\em sequential commands} having the syntax: \[ C \in {\sf Com} ::= {\sf \alpha} \mid C \mathbin{\,;\,} C \mid C \mathbin{+} C \mid \iter{C} \mid {\sf skip},\quad \mbox{where } {\sf \alpha} \in {\sf PCom} \] The grammar includes {\em primitive commands} ${\sf \alpha}$ from a set ${\sf PCom}$, sequential composition $C \mathbin{\,;\,} C$, non-deterministic choice $C \mathbin{+} C$, finite iteration $\iter{C}$ (we are interested only in terminating executions) and the termination marker ${\sf skip}$. We use $\mathbin{+}$ and $(\cdot)^\star$ instead of conditionals and while loops for theoretical simplicity: as we show at the end of this section, given appropriate primitive commands the conditionals and loops can be encoded. We also assume a set ${\sf APCom}$ of {\em abstract primitive commands}, ranged over by ${\rm A}$, with which we represent methods of an abstract library.
\begin{figure}[t] ${\longrightarrow} \subseteq {\sf Com} \times {\sf PCom} \times {\sf Com}$ {\centering $\begin{array}{l@{\ }l@{\ }l} \myfrac{ \trans{C_1}{{\sf \alpha}}{C'_1} }{ \trans{C_1 \mathbin{\,;\,} C_2}{{\sf \alpha}}{C'_1 \mathbin{\,;\,} C_2} } & \myfrac{ \, }{ \trans{\iter{C}}{{\sf id}}{C; \iter{C}} } & \myfrac{ \, }{ \trans{{\sf \alpha}}{{\sf \alpha}}{{\sf skip}} } \\ \myfrac{ i \in \{1, 2\} }{ \trans{C_1 \mathbin{+} C_2}{{\sf id}}{C_i} } & \myfrac{ \, }{ \trans{{\sf skip} \mathbin{\,;\,} C}{{\sf id}}{C} } & \myfrac{ \, }{ \trans{\iter{C}}{{\sf id}}{{\sf skip}} } \end{array} $\par} \medskip ${\xtwoheadrightarrow{}} \subseteq ({\sf Com} \times {\sf State}) \times ({\sf ThreadID} \times {\sf PCom}) \times ({\sf Com} \times {\sf State})$ {\centering $\myfrac{ \sigma' \in \intp{t}{{\sf \alpha}}{\sigma} \quad \trans{C}{{\sf \alpha}}{C'} }{ \sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'} }$\par} \caption{The operational semantics of sequential commands\label{fig:opsem}} \end{figure} \mypar{Semantics} We assume a set ${\sf State}$ of concrete states of the memory, ranged over by $\sigma$, and abstract states ${\sf AState}$, ranged over by $\Sigma$. The memory is shared among $N$ threads with thread identifiers ${\sf ThreadID} = \{1, 2, \dots, N\}$, ranged over by $t$. We assume that semantics of each primitive command ${\sf \alpha}$ is given by a non-deterministic {\em state transformer} $\db{{\sf \alpha}}_t : {\sf State} \to \powerset{{\sf State}}$, where $t \in {\sf ThreadID}$. For a state $\sigma$, the set of states $\intp{t}{{\sf \alpha}}{\sigma}$ is the set of possible resulting states for ${\sf \alpha}$ executed atomically in a state $\sigma$ and a thread $t$. State transformers may have different semantics depending on a thread identifier, which we use to introduce thread-local memory cells later in the technical development. 
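The nondeterministic state-transformer semantics can be modeled executably. The following sketch uses our own encoding, with a command denoting a function from a state to the set of reachable final states; \texttt{assume} and the conditional encoding mirror the ${\tt assume}(E)$ construction the paper introduces (the iteration bound \texttt{depth} is our device for keeping the model finite).

```python
def prim(f):
    """A primitive command as a nondeterministic state transformer."""
    return f

def seq(c1, c2):                         # C1 ; C2
    return lambda s: {s2 for s1 in c1(s) for s2 in c2(s1)}

def choice(c1, c2):                      # C1 + C2
    return lambda s: c1(s) | c2(s)

def iter_(c, depth=8):                   # C*: up to `depth` unfoldings
    def run(s):
        out, frontier = {s}, {s}
        for _ in range(depth):
            frontier = {s2 for s1 in frontier for s2 in c(s1)}
            out |= frontier
        return out
    return run

def assume(pred):                        # filters states, like assume(E)
    return lambda s: {s} if pred(s) else set()

def if_(pred, c1, c2):
    # if E then C1 else C2  =  (assume(E); C1) + (assume(!E); C2)
    return choice(seq(assume(pred), c1),
                  seq(assume(lambda s: not pred(s)), c2))
```

For example, with integer states and an increment `prim(lambda s: {s + 1})`, the conditional encoding takes exactly one of the two branches on each state, since the `assume` in the other branch yields the empty set.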
Analogously, we assume that the semantics of abstract primitive commands is given by state transformers $\db{{\rm A}}_t : {\sf AState} \to \powerset{{\sf AState}}$, all of which update abstract states atomically. We also assume a primitive command ${\sf id} \in {\sf PCom}$ with the interpretation $\intp{t}{{\sf id}}{\sigma} \triangleq \{\sigma \}$, and its abstract counterpart ${\sf id} \in {\sf APCom}$. The sets of primitive commands ${\sf PCom}$ and ${\sf APCom}$ as well as the corresponding state transformers are parameters of our framework. In Figure~\ref{fig:opsem} we give the rules of the operational semantics of sequential commands, which are parametrised by the semantics of primitive commands. That is, we define a transition relation ${\xtwoheadrightarrow{}}~\subseteq~({\sf Com} \times {\sf State}) \times ({\sf ThreadID} \times {\sf PCom}) \times ({\sf Com} \times {\sf State})$, so that $\sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'}$ indicates a transition from $C$ to $C'$ updating the state from $\sigma$ to $\sigma'$ with a primitive command ${\sf \alpha}$ in a thread $t$. The rules of the operational semantics are standard. Let us show how to define traditional control flow primitives, such as an if-statement and a while-loop, in our programming language. Assuming a language for arithmetic expressions, ranged over by $E$, and a function $\db{E}_\sigma$ that evaluates expressions in a given state $\sigma$, we define a primitive command ${\tt assume}(E)$ that acts as a filter on states, choosing only those where $E$ evaluates to non-zero values. \[ \intp{t}{{\tt assume}(E)}{\sigma} \triangleq \mbox{ if } \db{E}_\sigma \not= 0 \mbox{ then } \dc{\sigma} \mbox{ else } \emptyset.
\] Using ${\tt assume}(E)$ and the C-style negation $!E$ in expressions, a conditional and a while-loop can be implemented as the following commands: \begin{gather*} {\tt if}\ E\ {\tt then}\ C_1\ {\tt else}\ C_2 \triangleq ({\tt assume}(E); C_1) \mathbin{+} ({\tt assume}(!E); C_2) \\ {\tt while}\ E\ {\tt do}\ C \triangleq ({\tt assume}(E); C)^\star; {\tt assume}(!E) \end{gather*} \section{The RGSep-based Logic\label{sec:rgsep}} In this section, we demonstrate an instance of the generic proof system that is capable of handling algorithms with helping. This instance is based on RGSep \cite{rgsep}, which combines rely-guarantee reasoning \cite{rg} with separation logic \cite{seplogic}. The main idea of the logic is to partition the state into several thread-local parts (which can only be accessed by corresponding threads) and the shared part (which can be accessed by all threads). The partitioning is defined by proofs in the logic: an assertion in the code of a thread restricts its local state and the shared state. In addition, the partitioning is dynamic, meaning that resources, such as a part of a heap or a token, can be moved from the local state of a thread into the shared state and vice versa. By transferring a token to the shared state, a thread gives to its environment a permission to change the abstract state. This allows us to reason about environment helping that thread. \mypar{The RGSep-based view monoid} Similarly to DCSL, we assume that states represent heaps, i.e. that ${\sf State} = {\sf AState} = {\sf Loc} \rightharpoonup_{\rm fin} {\sf Val} \uplus \dc{\lightning}$, and we denote all states but a faulting one with $\LState_{\sf H} = \HState_{\sf H} = {\sf Loc} \rightharpoonup_{\rm fin} {\sf Val}$. We also assume a standard set of heap-manipulating primitive commands with usual semantics. We define views as triples consisting of three components: a predicate $P$ and binary relations $R$ and $G$. 
A predicate $P \in \powerset{(\LState_{\sf H} \times \HState_{\sf H} \times {\sf Tokens})^2}$ is a set of pairs $(l, s)$ of local and shared parts of the state, where each part consists of a concrete state, an abstract state and tokens. {\em Guarantee} $G$ and {\em rely} $R$ are relations from $\powerset{({\sf State} \times {\sf AState} \times {\sf Tokens})^2}$, which summarise how individual primitive commands executed by the method's thread (in the case of $G$) and the environment (in the case of $R$) may change the shared state. Together, the guarantee and rely establish a protocol: the views of the method and of its environment must agree on each other's transitions, which allows us to reason about every thread separately, without considering the local state of other threads, assuming that they follow the protocol. The agreement is expressed with the help of a well-formedness condition on the views of the RGSep-based monoid, requiring their predicates to be {\em stable} under rely, i.e., to take into account whatever changes the environment can make: \[ {\sf stable}(P, R) \triangleq \forall l, s, s' \ldotp (l, s) \in P \land (s, s') \in R \implies (l, s') \in P. \] \noindent A predicate that is stable under rely cannot be invalidated by any state transition from rely. Stable predicates with rely and guarantee relations form the view monoid with the underlying set of views $ {\sf Views}_{\sf RGsep} = \{ (P, R, G) \mid {\sf stable}(P, R) \} \cup \{ \bot \}, $ where $\bot$ denotes a special {\em inconsistent} view with the empty reification. The reification of other views simply joins shared and local parts of the state: \[ \reif{(P, R, G)} = \{ (\sigma_l \bullet \sigma_s, \Sigma_l \bullet \Sigma_s, \Delta_l \uplus \Delta_s) \mid ((\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s)) \in P \}. \] Let the operation $\bullet$ be defined on states analogously to DCSL.
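For finite models, stability is straightforward to check mechanically. The following sketch assumes our own finite encoding: \texttt{P} is a set of (local, shared) pairs with hashable components, and \texttt{R} a set of shared-state transitions.

```python
def stable(P, R):
    """stable(P, R): P is closed under every environment transition in R
    applied to the shared component of a pair in P."""
    return all((l, s2) in P
               for (l, s) in P
               for (s1, s2) in R
               if s1 == s)
```

A predicate fails this check exactly when some rely transition can move the shared state outside of what the predicate admits, which is the situation the well-formedness condition rules out.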
Given predicates $P$ and $P'$, we let $P \mathbin{*} P'$ be a predicate denoting the pairs of local and shared states in which the local state can be divided into two substates such that one of them together with the shared state satisfies $P$ and the other together with the shared state satisfies $P'$: \[ P \mathbin{*} P' \triangleq \{ ((\sigma_l \bullet \sigma'_l, \Sigma_l \bullet \Sigma'_l, \Delta_l \uplus \Delta'_l), s) \mid ((\sigma_l, \Sigma_l, \Delta_l), s) \in P \land ((\sigma'_l, \Sigma'_l, \Delta'_l), s) \in P' \} \] We now define the monoid operation $\mathbin{*}$, which we use to compose views of different threads. When composing views $(P, R, G)$ and $(P', R', G')$ of the parallel threads, we require predicates of both to be immune to interference by all other threads and each other. Otherwise, the result is inconsistent: \[ (P, R, G) \mathbin{*} (P', R', G') \triangleq \mbox{if } G \subseteq R' \land G' \subseteq R \mbox{ then } (P \mathbin{*} P', R \cap R', G \cup G') \mbox{ else } \bot. \] That is, we let the composition of views be consistently defined when the state transitions allowed in a guarantee of one thread are treated as environment transitions in the other thread, i.e. $G \subseteq R'$ and $G' \subseteq R$. The rely of the composition is $R \cap R'$, since the predicate $P * P'$ is guaranteed to be stable only under environment transitions described by both $R$ and $R'$. The guarantee of the composition is $G \cup G'$, since other views need to take into account all state transitions either from $G$ or from $G'$. \begin{figure}[t!] 
$\vDash: ({\sf State} \times {\sf AState} \times {\sf Tokens}) \times ({\sf State} \times {\sf AState} \times {\sf Tokens}) \times {\sf Int} \times {\sf Assn}$\\[3pt] \hfill $ \begin{array}{l@{\ \ }l} \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{E \mathbin{\mapsto} F}, & \mbox{iff } \sigma_l = [\db{E}_{\bf i} : \db{F}_{\bf i}], \Sigma_l = [\ ], \mbox{ and } \Delta_l = [\ ]\\[1pt] \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{E \mathbin{\Mapsto} F}, & \mbox{iff } \sigma_l = [\ ], \Sigma_l = [\db{E}_{\bf i} : \db{F}_{\bf i}], \mbox{ and } \Delta_l = [\ ] \\[1pt] \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\ltodo{t}{{\rm A}}}, & \mbox{iff } \sigma_l = [\ ], \Sigma_l = [\ ], \mbox{ and } \Delta_l = [t : \todo{{\rm A}}]\\[1pt] \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\ldone{t}{{\rm A}}}, & \mbox{iff } \sigma_l = [\ ], \Sigma_l = [\ ], \mbox{ and } \Delta_l = [t : \done{{\rm A}}]\\[1pt] \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\boxed{\pi}}, & \mbox{iff } \sigma_l = [\ ], \Sigma_l = [\ ], \Delta_l = [\ ], \mbox{ and} \\ & \hfill \rgassert{(\sigma_s, \Sigma_s, \Delta_s), ([\ ], [\ ], [\ ]), {\bf i}}{\pi}\\[1pt] \rgassert{(\sigma_l, \Sigma_l, \Delta_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\pi * \pi'}, & \mbox{iff there exist } \sigma'_l, \sigma''_l, \Sigma'_l, \Sigma''_l, \Delta'_l, \Delta''_l \mbox{ such that}\\ & \sigma_l = \sigma'_l \bullet \sigma''_l, \Sigma_l = \Sigma'_l \bullet \Sigma''_l, \Delta_l = \Delta'_l \uplus \Delta''_l, \\ & \rgassert{(\sigma'_l, \Sigma'_l, \Delta'_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\pi}, \mbox{ and }\\ & \rgassert{(\sigma''_l, \Sigma''_l, \Delta''_l), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\pi'} \end{array} $ \caption{Satisfaction relation for a fragment of the assertion language ${\sf VAssn}$\label{fig:RGsem4frm}} \end{figure} 
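The view composition defined above can also be sketched in the same toy model (all names here are illustrative assumptions, not the paper's formalism): predicates are sets of pairs of a local heap, given as a tuple of location-value items, and an opaque shared state, while relies and guarantees are sets of opaque transition labels.

```python
BOT = 'bot'  # the inconsistent view

def pred_star(P1, P2):
    """P * P': split the local state between the predicates; share the shared state."""
    out = set()
    for (l1, s1) in P1:
        for (l2, s2) in P2:
            if s1 == s2 and not set(dict(l1)) & set(dict(l2)):  # disjoint local heaps
                out.add((tuple(sorted(dict(l1).items() | dict(l2).items())), s1))
    return out

def view_star(v1, v2):
    """Consistent only if each guarantee is contained in the other view's rely."""
    if v1 == BOT or v2 == BOT:
        return BOT
    (P1, R1, G1), (P2, R2, G2) = v1, v2
    if G1 <= R2 and G2 <= R1:
        return (pred_star(P1, P2), R1 & R2, G1 | G2)
    return BOT
```

As in the definition, the composed rely is the intersection $R \cap R'$ and the composed guarantee is the union $G \cup G'$, and the composition degenerates to $\bot$ when the protocols disagree.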
\mypar{The RGSep-based program logic} We define the view assertion language ${\sf VAssn}$ that is a parameter of the proof system. Each view assertion $\rho$ takes the form of a triple $(\pi, \mathcal{R}, \mathcal{G})$, and the syntax for $\pi$ is: \[ \begin{array}{c c l} E & ::= & a \mid X \mid E+E \mid \dots, \quad \mbox{where } X \in {\sf LVar}, a \in {\sf Val} \\ \pi & ::= & E = E \mid E \mathbin{\mapsto} E \mid E \mathbin{\Mapsto} E \mid \ltodo{t}{{\rm A}} \mid \ldone{t}{{\rm A}} \mid \boxed{\pi} \mid \pi \mathbin{*} \pi \mid \neg \pi \mid \dots \\ \end{array} \] A formula $\pi$ denotes a predicate of a view, as defined by the satisfaction relation $\models$ in Figure~\ref{fig:RGsem4frm}. There $E \mathbin{\mapsto} E$ and $E \mathbin{\Mapsto} E$ denote a concrete and an abstract state describing singleton heaps. A non-boxed formula $\pi$ denotes the view with the local state satisfying $\pi$ and the shared state unrestricted; $\boxed{\pi}$ denotes the view with the empty local state and the shared state satisfying $\pi$; and $\pi \mathbin{*} \pi'$ denotes the composition of the predicates corresponding to $\pi$ and $\pi'$. The semantics of the remaining connectives is standard. Additionally, for simplicity of presentation of the syntax, we require that boxed assertions $\boxed{\pi}$ not be nested (rather than ruling this out in the definition). The other components $\mathcal{R}$ and $\mathcal{G}$ of a view assertion are sets of {\em rely/guarantee actions} $\mathcal{A}$ with the syntax $\mathcal{A} ::= \rgact{\pi}{\pi'}$. An action $\rgact{\pi}{\pi'}$ denotes a change of a part of the shared state that satisfies $\pi$ into one that satisfies $\pi'$, while leaving the rest of the shared state unchanged.
We associate with an action $\rgact{\pi}{\pi'}$ all state transitions from the following set: \begin{multline*} \db{\rgact{\pi}{\pi'}} = \{ ((\sigma_s \bullet \sigma''_s, \Sigma_s \bullet \Sigma''_s, \Delta_s \uplus \Delta''_s), (\sigma'_s \bullet \sigma''_s, \Sigma'_s \bullet \Sigma''_s, \Delta'_s \uplus \Delta''_s)) \mid{} \\ \exists {\bf i} \ldotp \rgassert{([\ ], [\ ], [\ ]), (\sigma_s, \Sigma_s, \Delta_s), {\bf i}}{\boxed{\pi}} \land \rgassert{([\ ], [\ ], [\ ]), (\sigma'_s, \Sigma'_s, \Delta'_s), {\bf i}}{\boxed{\pi'}} \} \end{multline*} We give semantics to view assertions with the function $\evalf{\cdot}{\cdot}$ that is defined as follows: \[ \evalf{(\pi, \mathcal{R}, \mathcal{G})}{{\bf i}} \triangleq (\dc{ (l, s) \mid \rgassert{l, s, {\bf i}}{\pi}}, \bigcup\nolimits_{\mathcal{A} \in \mathcal{R}} \db{\mathcal{A}}, \bigcup\nolimits_{\mathcal{A} \in \mathcal{G}} \db{\mathcal{A}}). \] \section{Soundness}\label{sec:soundness} In this section, we formulate linearizability for libraries. We also formulate the soundness theorem, in which we state the proof obligations that are necessary to conclude linearizability. \mypar{Libraries} We assume a set of method names ${\sf Method}$, ranged over by $m$, and consider {\em a concrete library} $\mathrm{\ell} : {\sf Method} \rightharpoonup (({\sf Val} \times {\sf Val}) \to {\sf Com})$ that maps method names to commands from ${\sf Com}$, which are parametrised by a pair of values from ${\sf Val}$. For a given method name $m \in {\sf dom}(\mathrm{\ell})$ and values $a, v \in {\sf Val}$, a command $\mathrm{\ell}(m, a, v)$ is an implementation of $m$, which accepts $a$ as a method argument and either returns $v$ or does not terminate. This somewhat unusual way of specifying a method's arguments and return values significantly simplifies further development, since it does not require modelling a call stack.
Along with the library $\mathrm{\ell}$ we consider its specification in the form of an abstract library $\mathcal{L} \in {\sf Method} \rightharpoonup (({\sf Val} \times {\sf Val}) \to {\sf APCom})$ implementing a set of methods ${\sf dom}(\mathcal{L})$ atomically as abstract primitive commands $\dc{\mathcal{L}(m, a, v) \mid m \in {\sf dom}(\mathcal{L})}$ parametrised by an argument $a$ and a return value $v$. Given a method $m \in {\sf Method}$, we assume that the parametrised abstract primitive command $\mathcal{L}(m)$ is intended as a specification for $\mathrm{\ell}(m)$. \mypar{Linearizability} Linearizability assumes complete isolation between a library and its client, with interactions limited to passing values of a given data type as parameters or return values of library methods. Consequently, we are not interested in the internal steps recorded in library computations, but only in the interactions of the library with its client. We record such interactions using {\em histories}, which are traces including only events $\call{m}{a}$ and $\ret{m}{v}$ that indicate an invocation of a method $m$ with a parameter $a$ and a return from $m$ with a return value $v$; formally: \[ h::= \varepsilon \mid (t, \call{m}{a}) \mathbin{{:}{:}} h \mid (t, \ret{m}{v}) \mathbin{{:}{:}}h. \] Given a library $\mathrm{\ell}$, we generate all finite histories of $\mathrm{\ell}$ by considering $N$ threads repeatedly invoking library methods in any order and with any possible arguments. The execution of methods is described by the semantics of commands from \S~\ref{sec:proglang}. We define a {\em thread pool} $\tau : {\sf ThreadID} \to ({\sf idle} \uplus ({\sf Com} \times {\sf Val}))$ to characterise the progress of method execution in each thread. The case $\tau(t) = {\sf idle}$ corresponds to no method running in thread $t$. When $\tau(t) = (C, v)$, the command $C$ remains to be executed to finish some method returning $v$.
\begin{dfn}\label{def:hsem} We let $\hsem{\mathrm{\ell}, \sigma} = \bigcup_{n \geq 0} \hsemn{n}{\mathrm{\ell}, (\lambda t \ldotp {\sf idle}), \sigma}$ denote the set of all possible histories of a library $\mathrm{\ell}$ that start from a state $\sigma$, where for a given thread pool $\tau$, $\hsemn{n}{\mathrm{\ell}, \tau, \sigma}$ is defined as a set of histories such that $\hsemn{0}{\mathrm{\ell}, \tau, \sigma} \triangleq \dc{ \varepsilon }$ and: \[ \begin{array}{rcl} \hsemn{n}{\mathrm{\ell}, \tau, \sigma} & \triangleq & \{ ((t, \call{m}{a}) ::h) \mid a \in {\sf Val} \land m \in {\sf dom}(\mathrm{\ell}) \land \tau(t) = {\sf idle} \land{} \\ & & \hspace{9.25em} \exists v \ldotp h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (\mathrm{\ell}(m, a, v), v)], \sigma} \} \\ & & {} \cup \{ h \mid \exists t, {\sf \alpha}, C, C', \sigma', v \ldotp \tau(t) = (C, v) \land \sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'} \land{} \\ & & \hspace{9.25em} h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : (C', v)], \sigma'} \} \\ & & {} \cup \{ ((t, \ret{m}{v}) ::h) \mid m \in {\sf dom}(\mathrm{\ell}) \land \tau(t) = ({\sf skip}, v) \land{} \\ & & \hspace{9.25em} h \in \hsemn{n-1}{\mathrm{\ell}, \tau[t : {\sf idle}], \sigma} \} \end{array} \] \end{dfn} Thus, we construct the set of all finite histories inductively with all threads initially idling. At each step of generation, in any idling thread $t$ any method $m \in {\sf dom}(\mathrm{\ell})$ may be called with any argument $a$ and an expected return value $v$, which leads to adding a command $\mathrm{\ell}(m, a, v)$ to the thread pool of a thread $t$. Also, any thread $t$, in which $\tau(t) = (C, v)$, may do a transition $\sttrans{C}{\sigma}{t}{{\sf \alpha}}{C'}{\sigma'}$ changing a command in the thread pool and the concrete state. Finally, any thread that has finished execution of a method's command ($\tau(t) = ({\sf skip}, v)$) may become idle by letting $\tau(t) = {\sf idle}$. 
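Definition~\ref{def:hsem} can be made concrete in a drastically simplified executable model (an illustration, not the semantics of \S~\ref{sec:proglang}): each thread performs exactly one call, and a method body is assumed to be a single atomic step taken at return time, so histories are just interleavings of matching call/return pairs.

```python
from itertools import permutations

def histories(name, step, args, state0):
    """All call/ret histories with one call per thread; the method body `step`,
    of type (state, arg) -> (state', ret), runs atomically when the thread returns."""
    n = len(args)
    events = [(t, k) for t in range(n) for k in ('call', 'ret')]
    out = set()
    for perm in permutations(events):
        # a thread's call must precede its return
        if any(perm.index((t, 'call')) > perm.index((t, 'ret')) for t in range(n)):
            continue
        state, h = state0, []
        for t, kind in perm:
            if kind == 'call':
                h.append((t, 'call', name, args[t]))
            else:
                state, ret = step(state, args[t])
                h.append((t, 'ret', name, ret))
        out.add(tuple(h))
    return out
```

For a fetch-and-add counter with two threads, this yields the six interleavings of two call/return pairs; comparing the resulting set against the history set of an atomic specification by subset inclusion is exactly the style of check used below.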
We define $\hsemn{n}{\mathcal{L}, \mathcal{T}, \Sigma}$ analogously and let the set of all histories of an abstract library $\mathcal{L}$ starting from the initial state $\Sigma$ be $\hsem{\mathcal{L}, \Sigma} = \bigcup_{n \geq 0} \hsemn{n}{\mathcal{L}, (\lambda t \ldotp {\sf idle}), \Sigma}$. \begin{dfn} For libraries $\mathrm{\ell}$ and $\mathcal{L}$ such that ${\sf dom}(\mathrm{\ell}) = {\sf dom}(\mathcal{L})$, we say that $\mathcal{L}$ \textbfit{linearizes} $\mathrm{\ell}$ in the states $\sigma$ and $\Sigma$, written $(\mathrm{\ell}, \sigma) \sqsubseteq (\mathcal{L}, \Sigma)$, if $\hsem{\mathrm{\ell}, \sigma} \subseteq \hsem{\mathcal{L}, \Sigma}$. \end{dfn} That is, an abstract library $\mathcal{L}$ linearizes $\mathrm{\ell}$ in the states $\sigma$ and $\Sigma$, if every history of $\mathrm{\ell}$ can be reproduced by $\mathcal{L}$. The definition is different from the standard one \cite{linearizability}: we use the result obtained by Gotsman and Yang \cite{linown} stating that the plain subset inclusion on the sets of histories produced by concrete and abstract libraries is equivalent to the original definition of linearizability. \mypar{Soundness w.r.t. linearizability} We now explain proof obligations that we need to show for every method $m$ of a concrete library $\mathrm{\ell}$ to conclude its linearizability. 
Specifically, for every thread $t$, argument $a$, return value $v$, and command $\mathrm{\ell}(m, a, v)$ we require that there exist assertions $\mathcal{P}(t, \mathcal{L}(m, a, v))$ and $\mathcal{Q}(t, \mathcal{L}(m, a, v))$, for which the following Hoare-style specification holds: \begin{equation}\label{eq:soundness1} \infer{t}{ \mathcal{P}(t, \mathcal{L}(m, a, v)) }{ \mathrm{\ell}(m, a, v) }{ \mathcal{Q}(t, \mathcal{L}(m, a, v)) } \end{equation} In the specification of $\mathrm{\ell}(m, a, v)$, $\mathcal{P}(t, \mathcal{L}(m, a, v))$ and $\mathcal{Q}(t, \mathcal{L}(m, a, v))$ are assertions parametrised by a thread $t$ and an abstract command $\mathcal{L}(m, a, v)$. We require that, in all states satisfying $\mathcal{P}(t, \mathcal{L}(m, a, v))$ or $\mathcal{Q}(t, \mathcal{L}(m, a, v))$, the thread $t$ hold only the token $\todo{\mathcal{L}(m, a, v)}$ or $\done{\mathcal{L}(m, a, v)}$ respectively: \begin{multline}\label{eq:soundness2} \forall {\bf i}, t, \sigma, \Sigma, \Delta, r \ldotp{} \\[-4pt] \begin{array}{@{}l@{}} ((\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{P}(t, \mathcal{L}(m, a, v))}{{\bf i}} \mathbin{*} r} \implies \Delta(t) = \todo{\mathcal{L}(m, a, v)}) \\ {} \land ((\sigma, \Sigma, \Delta) \in \reif{\evalf{\mathcal{Q}(t, \mathcal{L}(m, a, v))}{{\bf i}} \mathbin{*} r} \implies \Delta(t) = \done{\mathcal{L}(m, a, v)}) \end{array} \end{multline} Together, (\ref{eq:soundness1}) and (\ref{eq:soundness2}) impose the requirement that a concrete method and its abstract counterpart return the same value $v$. We also require that the states satisfying the two assertions differ only in the token of thread $t$: \begin{multline}\label{eq:soundness3} \forall {\bf i}, t, {\rm A}, {\rm A}', r, \sigma, \Sigma, \Delta \ldotp (\sigma, \Sigma, \Delta [t : \done{{\rm A}}]) \in \reif{\evalf{\mathcal{Q}(t, {\rm A})}{{\bf i}} \mathbin{*} r} \iff{} \\ (\sigma, \Sigma, \Delta [t : \todo{{\rm A}'}]) \in \reif{\evalf{\mathcal{P}(t, {\rm A}')}{{\bf i}} \mathbin{*} r}.
\end{multline} \begin{thm}\label{thm:lin} For given libraries $\mathrm{\ell}$ and $\mathcal{L}$ together with states $\sigma$ and $\Sigma$, $(\mathrm{\ell}, \sigma) \sqsubseteq (\mathcal{L}, \Sigma)$ holds, if ${\sf dom}(\mathrm{\ell}) = {\sf dom}(\mathcal{L})$ and (\ref{eq:soundness1}), (\ref{eq:soundness2}) and (\ref{eq:soundness3}) hold for every method $m$, thread $t$ and values $a$ and $v$. \end{thm}
\section{Introduction} It is generally believed that quantised enveloping algebras are not enveloping algebras; Chari and Pressley, for example, assert \cite[p. 258]{ChaP} that ``there is no quantum Lie algebra'' underlying a quantised enveloping algebra $U_q(\mathfrak{g})$. The purpose of this paper is to challenge this belief. The ultimate intention is to describe all quantised enveloping algebras as genuine enveloping algebras --- that is, as associative algebras generated by a finite-dimensional system like a Lie algebra, with relations obtained by interpreting the Lie brackets as quadratic expressions like commutators. Here we succeed, for the case where $\mathfrak{g}$ is of type $A_n$, in finding generators and relations which are essentially of the desired form; moreover, they appear to yield a Poincar\'e-Birkhoff-Witt theorem for the enveloping algebra. They are based on Lie brackets defined by means of the adjoint action of a Hopf algebra on itself, the quantum Lie algebra being a certain ad-invariant subspace of the quantised enveloping algebra. A generalisation of the classical concept of Lie algebra emerges, which is close to that found by Woronowicz \cite{Wor} in his theory of non-commutative differential geometry on quantum groups. However, Woronowicz's theory yields a satisfactory deformation of the classical Lie algebra only in the case of general linear groups \cite{Diffform, SchWatZum, SS1}; in all other cases (see \cite{Jurco, Karpacz, CSSW, SS2}) the quantum Lie algebra has a different dimension from the corresponding classical one: it is always $n^2$ where $n$ is the dimension of a particular representation. In particular, quantum Lie algebras corresponding to $\mathfrak{sl} (n)$ and having dimension $n^2 - 1$, such as are described in this paper, appear to be new. The following reservations must be made: 1. 
The relations for the quantised enveloping algebra are not, as expected, quadratic-linear (deformed commutator = Lie algebra element) but homogeneous quadratic, the right-hand side containing an extra central element as a factor. To put it another way, the structure constants are multiples of a function of the Casimirs. This is an interesting feature; it means that the PBW theorem leads to a description of the vector space structure of the enveloping algebra as a space of polynomial functions not on a flat space (namely the Lie algebra) but on a hypersurface of the same dimension. Thus quantisation gives rise to curvature of the enveloping algebra. 2. The algebra generated by our quantum Lie algebra cannot be expected to be the same size as the quantised enveloping algebra, since this differs from the classical enveloping algebra in containing infinite power series (specifically, exponentials) in the generators. On the other hand, if, as is common in the mathematical literature, one takes the generators of the quantised enveloping algebra to be $q^{H_i}$ in place of the Cartan subalgebra elements $H_i$, then one loses the polynomials in the $H_i$. It is a remarkable fact \cite{JL} that the quantised enveloping algebra so defined contains a subalgebra, its locally finite part, which has the same structure as the classical enveloping algebra and is a large part of the quantised enveloping algebra. It seems likely that the enveloping algebra of the quantum Lie algebra, as defined in this paper, coincides with the locally finite part of the quantised enveloping algebra. In this paper this is proved for the case of $\mathfrak{sl}(2)$. 3. The construction of a quantum Lie algebra given here makes sense for any simple Lie algebra, but it is only for the Lie algebras of type $A_n$ that it yields a system with the same dimension as the classical Lie algebra. Thus it is only in these cases that a PBW theorem is likely to hold. 
Even here, I have no general proof of the theorem; it can be checked by explicit calculation for $A_1$ and $A_2$. 4. As indicated above, the commutator-like relations obtained from the quantum Lie algebra are not the only relations in the quantised enveloping algebra; there is also a relation between central elements, giving the extra (non-classical) central generator in terms of more familiar Casimir elements which are polynomials in Lie algebra elements. The existence of this extra relation is at present only a conjecture in all cases except $\mathfrak{sl}(2)$, for which it can be explicitly demonstrated. The organisation of the paper is as follows. The next section contains the definition of a quantum Lie algebra and the proof that such a system can be found inside the quantised enveloping algebra. This relies heavily on the theorems of Joseph and Letzter \cite{JL} concerning the locally finite part of a quantised enveloping algebra. In section 3 we complete the theory for $\mathfrak{sl}(2)_q$, proving the Poincar\'{e}-Birkhoff-Witt theorem, determining the extra relation between central elements, and proving that the algebra defined by the quantum Lie algebra relations, together with the extra relation, is the locally finite part of the quantised enveloping algebra. Section 4 contains some material on the quantum Lie algebra $\mathfrak{sl}(3)_q$, with formulae for the Lie brackets and a discussion of the rather surprising symmetry properties of the enveloping algebra. Delius et al. \cite{DelHuff, Del2} have also looked for quantum deformations of Lie algebras in the ad-invariant subspaces of quantised enveloping algebras. Their quantum Lie algebras lack the antisymmetry and Jacobi axioms that are given here; on the other hand, they retain more of the symmetry of classical Lie algebras. 
Majid \cite{Maj:Liealg} has proposed axioms for ``braided Lie algebras''; these include a Jacobi identity but no anticommutativity axiom and, not being satisfied in the classical case, do not constitute a generalisation of the Lie algebra axioms. I am grateful to Volodimir Lyubashenko for enlightening discussions. {\em Notation.}\quad I use the usual notation for coproducts in a Hopf algebra: \begin{eqnarray*} \Delta (x)&=&\sum x_{(1)}\otimes x_{(2)},\\ (\Delta \otimes \operatorname{id})\circ\Delta (x)&=&\sum x_{(1)}\otimes x_{(2)}\otimes x_{(3)}\qquad \text{etc.,} \end{eqnarray*} and for the $q$-analogue function: \[ [x]_q=\frac{q^x-q^{-x}}{q-q^{-1}}. \] \section{General Construction of Quantum Lie Algebras} \begin{defn} A {\em quantum Lie algebra} over a field $\mathbb{K}$ is a $\mathbb{K}$-vector space $L$ together with linear maps $\beta : L\otimes L \to L$ (the quantum Lie bracket, written $x \otimes y \mapsto [x,y]$) and $\gamma : L \otimes L \to L \otimes L$ (the quantum antisymmetriser) satisfying: \noindent 1. Quantum antisymmetry: For $t \in L \otimes L$, \begin{equation} \gamma(t)=0\; \Rightarrow\; \beta (t)=0. \end{equation} \noindent 2. The quantum Jacobi identity: For $x \in L$, define $\operatorname{ad} x: L \to L$ by $\operatorname{ad} x(y)=[x,y]$. Then \begin{equation} \label{antisym} \operatorname{ad} [x,y]=m\circ (\operatorname{ad} \otimes \operatorname{ad} )\circ \gamma (x \otimes y) \end{equation} where $m : \operatorname{End} L \otimes \operatorname{End} L \to \operatorname{End} L$ denotes multiplication of linear maps on $L$. \end{defn} \begin{defn} A quantum Lie algebra is {\em balanced} if it satisfies a second Jacobi identity, \noindent 3. The right quantum Jacobi identity: For $x \in L$, define $\operatorname{rad} x: L \to L$ by $\operatorname{rad} x(y)=[y,x]$. 
Then \begin{equation} \label{Jacobi} \operatorname{rad} [x,y]=\,\stackrel{\leftarrow}{m}\circ (\operatorname{rad} \otimes \operatorname{rad} )\circ \gamma (x \otimes y) \end{equation} where $\stackrel{\leftarrow}{m}:\operatorname{End} L \otimes \operatorname{End} L \to \operatorname{End} L$ denotes the opposite multiplication of linear maps: $\stackrel{\leftarrow}{m}(A\otimes B)=BA$. \end{defn} If $\gamma$ is the usual antisymmetrisation map, $\gamma (x \otimes y)=x\otimes y-y\otimes x$, the above become the usual antisymmetry and Jacobi axioms for a Lie algebra. These generalised axioms were found by Woronowicz to hold for the geometrical Lie bracket of left-invariant vector fields on a quantum Lie group \cite{Wor}; he also found that in this case the quantum antisymmetriser was of the form $\gamma = 1-\sigma$ where $\sigma$ satisfies the braid relation. This is not true of the quantum Lie algebras described in this paper; I do not know if there is a weaker axiom of this type that they satisfy. Woronowicz included only one Jacobi identity among his axioms; the geometrical construction does not obviously yield a balanced quantum Lie algebra. This is also, at first sight, true of the construction from universal enveloping algebras to be described in this paper. However, inspection of the quantum Lie algebra $\mathfrak{sl} (2)_q$ reveals that it is in fact balanced, and it seems likely that this is true more generally. \begin{defn} A {\em representation} of a quantum Lie algebra $(L, \beta, \gamma)$ is a linear map $\rho : L \to \operatorname{End} V$ for some vector space $V$, such that \begin{equation} \rho \circ \beta = m_V\circ (\rho \otimes \rho)\circ \gamma : L \otimes L \to \operatorname{End} V \end{equation} where $m_V$ denotes multiplication of linear operators on $V$. \end{defn} Then the quantum Jacobi identity \eqref{antisym} states that $\operatorname{ad}$ is a representation of $L$ on $L$ itself, as with classical Lie algebras.
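When $\gamma$ is the classical antisymmetriser, the quantum Jacobi identity reduces to $\operatorname{ad}[x,y] = \operatorname{ad} x \operatorname{ad} y - \operatorname{ad} y \operatorname{ad} x$, the usual Jacobi identity. A small numerical sketch (hypothetical helper names, not from the paper) checks this for $\mathfrak{sl}(2)$ with basis $H, E, F$:

```python
BASIS = ('H', 'E', 'F')
# structure constants of sl(2): [H,E] = 2E, [H,F] = -2F, [E,F] = H
TABLE = {('H', 'E'): {'E': 2}, ('E', 'H'): {'E': -2},
         ('H', 'F'): {'F': -2}, ('F', 'H'): {'F': 2},
         ('E', 'F'): {'H': 1}, ('F', 'E'): {'H': -1}}

def bracket(x, y):
    """Bilinear extension of the bracket to coefficient dicts basis -> number."""
    out = {b: 0 for b in BASIS}
    for bx, cx in x.items():
        for by, cy in y.items():
            for bz, c in TABLE.get((bx, by), {}).items():
                out[bz] += cx * cy * c
    return out

def vec(b):
    return {c: int(c == b) for c in BASIS}

def jacobi_ok():
    """Check ad[x,y] = ad x . ad y - ad y . ad x on all basis triples."""
    for bx in BASIS:
        for by in BASIS:
            for bz in BASIS:
                x, y, z = vec(bx), vec(by), vec(bz)
                lhs = bracket(bracket(x, y), z)          # ad[x,y](z)
                ab = bracket(x, bracket(y, z))           # ad x (ad y (z))
                ba = bracket(y, bracket(x, z))           # ad y (ad x (z))
                if any(lhs[b] != ab[b] - ba[b] for b in BASIS):
                    return False
    return True
```

In the quantum case the same shape of check applies, with the classical difference of products replaced by the $\gamma$-weighted sum of products.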
\begin{defn} The {\em universal enveloping algebra} of a quantum Lie algebra $(L, \beta , \gamma)$ is the quotient of the tensor algebra of $L$ by the ideal generated by $(\beta - \gamma)(L \otimes L)$. Then every representation of $L$ automatically generates a representation of its universal enveloping algebra. \end{defn} Let $x_1, \ldots ,x_n$ be a basis for $L$, and let $\gamma _{ij}^{kl}$ be the matrix elements of $\gamma$ with respect to the basis $\{x_i \otimes x_j\}$ of $L \otimes L$; in the classical case, \[ \gamma _{ij}^{kl}=\delta_i^k\delta_j^l - \delta_i^l\delta_j^k. \] Then the Jacobi identity can be written as \[ [\, [x_i, x_j], x_k]=\gamma_{ij}^{lm}[x_l, [x_m, x_k]\,], \] the right Jacobi identity as \[ [x_i, [x_j, x_k]\, ]=\gamma_{jk}^{lm}[\, [x_i, x_l], x_m], \] the representation property as \[ \rho ([x_i,x_j])=\gamma_{ij}^{kl}\rho(x_k)\rho(x_l) \] and the relations in the universal enveloping algebra as \[ \gamma_{ij}^{kl}x_k x_l = [x_i, x_j] = \beta_{ij}^k x_k \] where $\beta_{ij}^k$ are the structure constants of $L$ (i.e. the matrix elements of $\beta$). Let $q$ be a fixed element of the ground field $\mathbb{K}$, $\mathfrak{g}$ a simple Lie algebra over $\mathbb{K}$, and let $\mathcal{U}=\overline{U}_q(\mathfrak{g})$ be the simply-connected quantised enveloping algebra of $\mathfrak{g}$, which is defined as follows.
$\mathcal{U}$ contains a copy of the group algebra of the weight lattice of $\mathfrak{g}$ (with basis elements denoted by $q^\lambda$ where $\lambda$ is an integral weight of $\mathfrak{g}$ with respect to a Cartan subalgebra $\mathfrak{h}$) and generators $E_1, \ldots , E_r, F_1, \ldots , F_r$ corresponding to fundamental roots $H_1, \ldots , H_r$ of $\mathfrak{g}$, with relations \begin{eqnarray*} q^\lambda E_i q^{-\lambda}&=&q^{\<H_i,\lambda\>}E_i\\ q^\lambda F_i q^{-\lambda}&=&q^{-\<H_i,\lambda\>}F_i\\ E_iF_j-F_jE_i &=& \delta_{ij} \( \frac{q^{2H_i}-q^{-2H_i}}{q_i-q_i^{-1}}\) \end{eqnarray*} \begin{eqnarray*} {}[E_i,\:[E_i,\ldots ,[E_i, E_j]_{q_i^{-n+1}}]_{q_i^{-n+3}}\cdots]_{q_i^{n-1}}&=&0\\ {}[F_i,\:[F_i,\ldots ,[F_i, F_j]_{q_i^{-n+1}}]_{q_i^{-n+3}}\cdots]_{q_i^{n-1}}&=&0\\ \end{eqnarray*} where $\<,\>$ denotes the Killing form in the Cartan subalgebra $\mathfrak{h}$ (which we do not distinguish from its dual $\mathfrak{h}^*$), \[ q_i = q^{\<H_i,H_i\>},\] \[ n = \frac{2\<H_i,H_j\>}{\<H_i,H_i\>}=\text{number of roots $H_j + kH_i$ with $k>0$,} \] $$ [X,Y]_p=pXY-p^{-1}YX. \leqno{\text{and}} $$ Then $\mathcal{U}$ is a Hopf algebra with comultiplication \begin{eqnarray*} \Delta (E_i)&=&E_i\otimes q^{-H_i}+q^{H_i}\otimes E_i,\\ \Delta (F_i)&=&F_i\otimes q^{-H_i}+q^{H_i}\otimes F_i,\\ \Delta(q^\lambda )&=&q^\lambda \otimes q^\lambda \end{eqnarray*} and antipode \[ S(E_i)=-q_i^{-1}E_i,\qquad S(F_i)=-q_iF_i,\qquad S(q^\lambda )=q^{-\lambda}. \] As in any Hopf algebra, we have the adjoint representation of $\mathcal{U}$ on itself, given by $x\mapsto \operatorname{ad} x \in \operatorname{End}_\mathbb{K} \mathcal{U}$ where \[ \operatorname{ad} x (y)=\sum x_{(1)}yS(x_{(2)})\qquad\text{if}\quad\Delta(x)=\sum x_{(1)}\otimes x_{(2)}. \] This is a representation: \begin{equation} \label{rep} \operatorname{ad} (xy)=\operatorname{ad} x .
\operatorname{ad} y, \end{equation} and each $\operatorname{ad} x$ is a generalised derivation of $U$, in the sense that \begin{equation} \label{Der1} \operatorname{ad} x (yz)=\sum\operatorname{ad} x_{(1)}(y) . \operatorname{ad} x_{(2)}(z). \end{equation} The adjoint action of the generators of $U$ is given by \begin{eqnarray} \operatorname{ad} E_i(x)&=&E_i xq^{H_i}-q^{H_i -1}xE_i, \notag \\ \operatorname{ad} F_i(x)&=&F_i xq^{H_i}-q^{H_i +1}xF_i,\\ \operatorname{ad} q^\lambda (x)&=&q^\lambda xq^{-\lambda}. \notag \end{eqnarray} We use the adjoint representation to define a bracket on $\overline{U}_q (\g)$, defining \begin{equation} \label{adj} {}[x, y]=\operatorname{ad} x(y). \end{equation} Each $\operatorname{ad} x$ is also a generalised derivation of this bracket: \begin{equation} \label{Jac1} {}[x, [y, z]]=\sum [ [x_{(1)}, y], [x_{(2)}, z] ]. \end{equation} This is a kind of Jacobi identity for the adjoint bracket. The representation property gives another kind of Jacobi identity: \begin{equation}\label{Jac2} {}[\, [x, y], z]=\sum [x_{(1)}, [y, [x_{(2)}, z]\,]\,]. \end{equation} Both of these are valid in any Hopf algebra \cite{Maj:Liealg}. However, they are not suitable as replacements for the Jacobi identity in defining a notion of quantum Lie algebra, as the ad-invariant subspaces of quantised enveloping algebras which are candidates for quantum Lie algebras are not usually subcoalgebras, and so the coproduct which is required to formulate the above Jacobi identities is not an intrinsic notion. That is why we are interested in the third kind of Jacobi identity, of the form (\ref{Jacobi}), which is specific to quantised enveloping algebras. The vector space $\overline{U}_q (\g)$ does not split up nicely into irreducible ad-invariant subspaces; strictly speaking it is not a direct sum of such spaces, for there are elements with components in an infinite number of them. 
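As a numerical sanity check on such deformed commutation relations, one can verify the $U_q(\mathfrak{sl}(2))$ relations in the two-dimensional representation. A caveat: the sketch below uses the common textbook convention $K = q^H$ with $KEK^{-1} = q^2E$ and $EF - FE = (K - K^{-1})/(q - q^{-1})$, which differs from the $q^{2H_i}$ normalisation used above by a rescaling of $q$.

```python
q = 1.7  # a generic (non-root-of-unity) value

E = [[0.0, 1.0], [0.0, 0.0]]
F = [[0.0, 0.0], [1.0, 0.0]]
K = [[q, 0.0], [0.0, 1.0 / q]]    # K = q^H with H = diag(1, -1)
Ki = [[1.0 / q, 0.0], [0.0, q]]   # K^{-1}

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

# E F - F E = (K - K^{-1}) / (q - q^{-1})
comm = [[mul(E, F)[i][j] - mul(F, E)[i][j] for j in range(2)] for i in range(2)]
rhs = [[(K[i][j] - Ki[i][j]) / (q - 1.0 / q) for j in range(2)] for i in range(2)]

# K E K^{-1} = q^2 E
kek = mul(mul(K, E), Ki)
q2E = [[q * q * E[i][j] for j in range(2)] for i in range(2)]
```

At $q \to 1$ the right-hand side of the commutation relation degenerates to the classical $[E, F] = H$, as expected of a deformation.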
The {\em locally finite part} of $\mathcal{U}=\overline{U}_q (\g)$ is the set of elements for which this is not true, i.e. \begin{equation} \mathcal{F} ={\rm LF}\overline{U}_q (\g)=\{x\in \mathcal{U} : \operatorname{ad} \mathcal{U} (x) \text{ is finite-dimensional}\} \end{equation} The discovery of Joseph and Letzter was that $\mathcal{F}$ is easily described, and has the same composition as the classical enveloping algebra $U(\mathfrak{g})$: \begin{thm}[Joseph and Letzter \cite{JL}] \label{JosLet} Let $\lambda$ be a dominant integral \linebreak weight of $\mathfrak{g}$, $V_\lambda$ the simple $\mathfrak{g}$-module with highest weight $\lambda $; $V_\lambda $ can also be regarded as a simple $\mathcal{U}$-module where $\mathcal{U}=\overline{U}_q (\g)$, and $\operatorname{End}_\mathbb{K} V_\lambda $ is a $\mathcal{U}$-module by conjugation (i.e. if $x\in \mathcal{U}$ acts on $V_\lambda $ by $\rho (x)$, it acts on $\operatorname{End} _\mathbb{K} V_\lambda $ by $T\mapsto\sum\rho (x_{(1)})T\rho (Sx_{(2)})$). Then \begin{enumerate} \item $E_\lambda =\operatorname{ad} \mathcal{U} (q^{-4\lambda })$ is a finite-dimensional $\operatorname{ad}$-invariant subspace of $\mathcal{U}$, isomorphic to $\operatorname{End}_\mathbb{K} V_\lambda $ as $\mathcal{U}$-module. \item The locally finite part of $\mathcal{U}$ is \begin{equation} \mathcal{F}=\sum_\lambda E_\lambda \end{equation} where the sum extends over all dominant integral weights of $\mathfrak{g}$. \end{enumerate} \end{thm} For a dominant integral weight $\lambda $, we write $N_\lambda =\dim_\mathbb{K} V_\lambda $. 
Then our result on the existence of quantum Lie algebras in $\overline{U}_q (\g)$ is \begin{thm}\label{genLiealg} For each dominant integral weight $\lambda $, $\overline{U}_q (\g)$ contains an \linebreak $(N_\lambda ^2 - 1)$-dimensional $\operatorname{ad}$-invariant subspace $L_\lambda $, a linear map $\sigma :L_\lambda \otimes L_\lambda \to L_\lambda \otimes L_\lambda $ and a central element $C_\lambda \in \overline{U}_q (\g)$ such that \begin{eqnarray} xy - m\circ \sigma (x \otimes y) &=& C_\lambda [x, y] \quad\quad\quad (x, y\in L_\lambda ) \label{qcom1}\\ \text{i.e.}\quad\quad\quad x_i x_j - \sigma _{ij}^{kl}x_kx_l &=& C_\lambda [x_i, x_j] \label{qcom2} \end{eqnarray} where $m$ denotes multiplication in $\overline{U}_q (\g)$, $[\,,\,]$ is the adjoint bracket, and $\{x_i\}$ is a basis of $L_\lambda $. \end{thm} \begin{proof} Writing $\mathcal{U}=\overline{U}_q (\g)$, let $\overline{L}=\operatorname{ad} \mathcal{U}(q^{-4\lambda })$. Then $\overline{L}$ is invariant under $\operatorname{ad} \mathcal{U}$, and by \thmref{JosLet} it is isomorphic as a $\mathcal{U}$-module to $\operatorname{End}_\mathbb{K} V_\lambda $. This contains a one-dimensional submodule (spanned by the identity of $\operatorname{End}_\mathbb{K} V_\lambda $) and so by complete reducibility of representations of $\overline{U}_q (\g)$ \cite[p.~324]{ChaP} we can write \[ \overline{L}=\mathbb{K} C_\lambda \oplus L \] as a $\mathcal{U}$-module, for some $C_\lambda \in \overline{L}$. Then $C_\lambda $ carries the trivial representation of $\mathcal{U}$: \[ \operatorname{ad} x (C_\lambda )=\varepsilon (x)C_\lambda ,\qquad \forall x \in \mathcal{U}.\] Hence $C_\lambda $ is a central element of $\mathcal{U}$, for \[ xC_\lambda =\sum x_{(1)}C_\lambda Sx_{(2)}x_{(3)} =\sum\operatorname{ad} x_{(1)}(C_\lambda )x_{(2)} =\sum\varepsilon (x_{(1)})C_\lambda x_{(2)} =C_\lambda x . 
\] Now $\overline{L}$ is a left coideal of $\mathcal{U}$: \begin{align} x \in\overline{L}\quad &\Longrightarrow &\quad x &=\operatorname{ad} u(q^{-4\lambda })\; \text{ for some }u\in \mathcal{U} \notag\\ &\Longrightarrow &\Delta (x) &=\sum\Delta (u_{(1)})\Delta (q^{-4\lambda})\Delta (Su_{(2)}) \label{Delta}\\ &&&=\sum (u_{(1)}\otimes u_{(2)})(q^{-4\lambda }\otimes q^{-4\lambda }) (Su_{(4)}\otimes Su_{(3)}) \notag\\ &&&\in \; \mathcal{U}\otimes \overline{L}.\notag \end{align} Hence for any $x\in L$ we can write \[\Delta (x)=x_0\otimes C_\lambda + \sum u^\prime \otimes x^\prime \] with $x_0, u^\prime \in \mathcal{U}$ and $x^\prime \in L$. In fact we can choose $C_\lambda $ so that $x_0=x$: for if $q^{-4\lambda }=C_\lambda +w$ with $w\in L$ and if $x=\operatorname{ad} u(q^{-4\lambda })$, then from \eqref{Delta} \begin{eqnarray*} \Delta (x)&=&\sum u_{(1)}q^{-4\lambda}Su_{(3)}\otimes\operatorname{ad} u_{(2)}(C_\lambda +w)\\ &=&\sum u_{(1)}q^{-4\lambda} Su_{(3)}\otimes\(\varepsilon(u_{(2)})C_\lambda + \operatorname{ad} u_{(2)}(w)\)\\ &=&\sum u_{(1)}q^{-4\lambda }Su_{(2)}\otimes C_\lambda +\sum u_{(1)}q^{-4\lambda }Su_{(3)}\otimes\operatorname{ad} u_{(2)}(w) \end{eqnarray*} but $\operatorname{ad} u_{(2)}(w)\in L$ since $L$ is invariant under $\operatorname{ad} \mathcal{U}$, and the first term is $x\otimes C_\lambda $. In $L$ we have the bracket $[x, y]=\operatorname{ad} x(y)$. Define $\sigma : L\otimes L \to L\otimes L$ by \begin{equation} \label{sigma} \sigma (x\otimes y)=\sum\operatorname{ad} u^\prime (y)\otimes x^\prime \quad \text{where } \Delta (x)=x\otimes C_\lambda + \sum u^\prime\otimes x^\prime \end{equation} \begin{equation} \label{adsigma} \text{i.e.}\quad\quad\quad\sigma (x \otimes y)=\sum \operatorname{ad} x_{(1)}(y)\otimes x_{(2)} - [x, y]\otimes C_\lambda . \end{equation} Then \begin{eqnarray*} m\circ\sigma (x\otimes y)&=&\sum x_{(1)}y(Sx_{(2)})x_{(3)} - [x,y]C_\lambda \\[2pt] &=& xy - [x,y]C_\lambda . \end{eqnarray*} Since $C_\lambda $ is central, this establishes the theorem.
\end{proof} The existence of a quantum Lie algebra in $U_q(\mathfrak{sl} (n))$ follows immediately: \begin{thm} If $q$ is neither a $2n${\em th} root of unity nor a primitive twelfth root of unity, $U_q(\mathfrak{sl} (n))$ contains an $(n^2-1)$-dimensional quantum Lie algebra with the adjoint bracket. \end{thm} \begin{proof} In \thmref{genLiealg}, take $\mathfrak{g} = \mathfrak{sl} (n)$ and $\lambda $ the highest weight of the $n$-dimensional (defining) representation. Then $L_\lambda $ carries an irreducible representation of $U_q(\mathfrak{g})$ (a deformation of the adjoint representation of $\mathfrak{sl} (n)$). Since $C_\lambda $ is central, $\operatorname{ad} C_\lambda $ acts on $L_\lambda $ as a multiple of the identity; in a lemma we will show that this multiple is $q^2-1+q^{-2}$ (which vanishes only if $q$ is a primitive 12th root of unity). Assuming this for the moment, we apply ad to \eqref{qcom2} and restrict to $L=L_\lambda $ to obtain \begin{equation} \label{adL} \operatorname{ad}_L[x_i, x_j]=\gamma _{ij}^{kl}\operatorname{ad}_Lx_k \circ \operatorname{ad}_Lx_l \end{equation} where \[ \gamma = \frac{1-\sigma }{q^2-1+q^{-2}}. \] Thus the quantum Jacobi identity is satisfied in $L$ by the adjoint bracket. Also, from \eqref{qcom1}, \begin{eqnarray*} \gamma \(\sum x\otimes y\)=0&\Longrightarrow & \sigma \(\sum x\otimes y\) =\sum x\otimes y\\ &\Longrightarrow & C_\lambda \sum [x, y]=0\\ &\Longrightarrow &\sum[x, y]=0 \end{eqnarray*} since $\overline{U}_q (\g)$ contains no zero divisors \cite{JL}. Thus the adjoint bracket restricted to $L_\lambda$ has the antisymmetry property with respect to the quantum antisymmetriser $\gamma $. Completion of the proof now needs the following lemmas.
\begin{lemma}\label{Clemma} The central element in $\overline{L}$ is given by \begin{equation}\label{C} C_\lambda =\sum_{r=0}^{n-1}(-1)^r\frac{[n-r]_q}{[n]_q}K_r \end{equation} where $K_r$ is defined recursively by \begin{equation}\label{Kr} K_r=\operatorname{ad}(F_rE_r)K_{r-1},\qquad K_0=q^{-4\lambda }. \end{equation} \end{lemma} \begin{lemma} \label{adc} The adjoint action of the central element on $L$ is \begin{equation} \operatorname{ad} C_\lambda (x) = (q^2-1+q^{-2})x \quad \text{for } x\in L. \end{equation} \end{lemma} The proofs, both unenlightening calculations, are relegated to an appendix. \end{proof} Although the antisymmetriser $\gamma $ in this Lie algebra is not of the Worono\-wicz form $\gamma =1-\sigma $ where $\sigma $ satisfies the braid relation, it is of a related form. According to \ref{adL} and \ref{adsigma}, $\gamma $ is a scalar multiple of $1-\sigma $ where \[ \sigma (x\otimes y)=\overline{\sigma }(x\otimes y)-[x, y]\otimes C_\lambda \] and $\overline{\sigma} :\overline{L}\otimes\overline{L}\to\overline{L}\otimes\overline{L}$ is given by \[\overline{\sigma }(x\otimes y)=\sum\operatorname{ad} x_{(1)}(y)\otimes x_{(2)}. \] It is a straightforward exercise in Hopf algebra manipulation \cite{Wor2} to show that $\overline{\sigma }$ satisfies the braid relation. \section {The quantum Lie algebra $\mathfrak{sl} (2)_q$} In the case of $\mathfrak{g} = \mathfrak{sl} (2)$, the simply connected quantised enveloping algebra is generated by $E$, $F$ and $q^{\pm \frac{1}{2} H}$ with relations \begin{eqnarray*} q^H Eq^{-H} &=& qE,\\ q^H F q^{-H} &=& q^{-1}F,\\ EF-FE &=& \frac{q^{2H} - q^{-2H}}{q-q^{-1}}. \end{eqnarray*} The highest weight of the fundamental two-dimensional representation is (i.e. corresponds via the Killing form to) $\lambda =\frac{1}{2} H$, so in the notation of \lemref{Clemma} we have \begin{eqnarray*} K_0&=&q^{-4\lambda }\,=\,q^{-2H},\\ K_1&=&\operatorname{ad}(FE)K_0\,=\,-(q-q^{-1})(qEF - q^{-1}FE). 
\end{eqnarray*} Hence, according to \lemref{Clemma}, the central element is \begin{equation} C=q^{-2H}+\(\frac{q-q^{-1}}{q+q^{-1}}\) (qEF-q^{-1}FE). \end{equation} A basis for the quantum Lie algebra is $(X_\pm ,\:X_0)$ where \begin{eqnarray} X_+&=& \frac {\operatorname{ad} E(q^{-2H})}{q-q^{-1}}=q^{-H}E,\notag\\ X_- &=& -\frac{\operatorname{ad} F(q^{-2H})}{q-q^{-1}}=q^{-H}F,\\ X_0 &=& - \frac{\operatorname{ad} (FE)q^{-2H}}{q^2-q^{-2}}=\frac{qEF-q^{-1}FE}{q+q^{-1}} \notag\\ &=&\frac{C-q^{-2H}}{q-q^{-1}}. \notag \end{eqnarray} The adjoint brackets of these basis elements can be calculated to be \begin{equation}\label{[]} \begin{array}{ccc} {[}X_{+},X_{+}]=0, & [X_{+},X_{0}]=-q^{-1}X_{+}, & [X_{+},X_{-}]=(q+q^{-1})X_{0}, \\[5pt] {[}X_{0},X_{+}]=qX_{+},& [X_{0},X_{0}]=(q-q^{-1})X_{0},& [X_{0},X_{-}]=-q^{-1}X_{-}, \\[5pt] {[}X_{-},X_{+}]=-(q+q^{-1})X_{0}, & \:[X_{-},X_{0}]=qX_{-}, & [X_{-}, X_{-}]=0. \end{array} \end{equation} The quantum Lie algebra $\mathfrak{sl} (2)_q$ is defined by these brackets together with the quantum antisymmetriser $\gamma =(q^2-1+q^{-2})^{-1}\gamma ^\prime$ where \begin{equation}\label{qantisym} \begin{split} \gamma ^\prime (X_\pm\otimes X_\pm)&=0,\\[3pt] \gamma ^\prime (X_\pm\otimes X_0)&=q^{\mp 2}X_\pm\otimes X_0 - X_0\otimes X_\pm,\\[3pt] \gamma ^\prime (X_0\otimes X_\pm)&=q^{\pm 2}X_0\otimes X_\pm - X_\pm\otimes X_0,\\[3pt] \gamma ^\prime (X_+\otimes X_-)&=X_+\otimes X_- - X_-\otimes X_+ +(q^2-q^{-2})X_0\otimes X_0,\\ \gamma ^\prime (X_-\otimes X_+)&=-\gamma^\prime(X_+\otimes X_-),\\ \gamma ^\prime (X_0\otimes X_0)&=(q-q^{-1})^2X_0\otimes X_0 +\(\frac{q-q^{-1}}{q+q^{-1}}\) (X_+\otimes X_- - X_-\otimes X_+)\\ &=\(\frac{q-q^{-1}}{q+q^{-1}}\)\gamma ^\prime (X_+\otimes X_-). \end{split} \end{equation} We note the following properties of $\mathfrak{sl} (2)_q$: \begin{propn} \begin{enumerate} \item $\mathfrak{sl} (2)_q$ is a balanced quantum Lie algebra.
\item The quantum antisymmetriser $\gamma$ is essentially idempotent: \[ \gamma ^2 = \(\frac{q^2+q^{-2}}{q^2-1+q^{-2}}\)\gamma . \] \end{enumerate} \end{propn} \begin{proof} By calculation. \end{proof} The second of these properties is peculiar to $\mathfrak{sl} (2)_q$, reflecting the simple structure of its representations, as will become apparent when we consider $\mathfrak{sl} (3)_q$. The first, however, may be more general. The enveloping algebra of the quantum Lie algebra $U(\mathfrak{sl} (2)_q)$ --- not to be confused with the quantised enveloping algebra $U_q(\mathfrak{sl} (2))$ --- is defined by means of the brackets \eqref{[]} and the antisymmetriser \eqref{qantisym}. By redefining the generators to absorb the factor $q^2-1+q^{-2}$, it can be presented as the associative algebra generated by three elements $Y_0,\;Y_\pm$ with relations \begin{equation}\label{Y} \begin{split} qY_0Y_+ - q^{-1}Y_+Y_0 &=Y_+\\[3pt] q^{-1}Y_0Y_- - qY_-Y_0&=-Y_-\\[3pt] Y_+Y_--Y_-Y_++(q^2-q^{-2})Y_0^2&=(q+q^{-1})Y_0. \end{split} \end{equation} On the other hand, as elements of the quantised enveloping algebra $U_q(\mathfrak{sl} (2))$ the generators $X_\pm$, $X_0$ and the central element $C$ satisfy (by \linebreak \thmref{genLiealg}) \begin{eqnarray}\label{XC} q^2X_0X_+-X_+X_0&=&qCX_+,\notag\\[3pt] q^{-2}X_0X_--X_-X_0&=& - q^{-1}CX_-,\notag\\[3pt] X_+X_--X_-X_++(q^2-q^{-2})X_0^2&=&(q+q^{-1})CX_0\\[3pt] \text{and}\quad\quad\quad\quad CX_m=X_mC\qquad &\phantom{=}& \qquad (m=0,\: \pm 1).\notag \end{eqnarray} However, the central element $C$ is not independent of the other generators $X_m$: there is a further relation \begin{equation}\label{Cas} C^2=(q-q^{-1})^2\( X_0^2 + \frac{qX_-X_++q^{-1}X_+X_-}{q+q^{-1}}\) +1 \end{equation} as can be verified by direct calculation. 
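Both ``by calculation'' claims, and the relation \eqref{Cas}, lend themselves to a machine check. The following sympy sketch (not part of the paper; the variable names are mine) verifies \eqref{XC} and \eqref{Cas} in the image of the fundamental two-dimensional representation of $U_q(\mathfrak{sl} (2))$ — only a consistency check, since the relations are asserted in the algebra itself, and $C$ acts in this representation as the scalar $(q^2+q^{-2})/(q+q^{-1})$ — and verifies the idempotency $\gamma'^2=(q^2+q^{-2})\gamma'$ of the antisymmetriser as a $9\times 9$ matrix, reading the final entry of \eqref{qantisym} as giving $\gamma'(X_0\otimes X_0)$.

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# Fundamental 2-dimensional representation of U_q(sl(2)) in the conventions
# above: H = diag(1/2, -1/2), so q^H = diag(q^(1/2), q^(-1/2)).
E = sp.Matrix([[0, 1], [0, 0]])
F = sp.Matrix([[0, 0], [1, 0]])
qH = sp.diag(sp.sqrt(q), 1/sp.sqrt(q))

# Defining relations: q^H E q^-H = qE and EF - FE = (q^{2H}-q^{-2H})/(q-q^{-1}).
assert sp.simplify(qH*E*qH.inv() - q*E) == sp.zeros(2, 2)
assert sp.simplify(E*F - F*E - (qH**2 - qH.inv()**2)/(q - 1/q)) == sp.zeros(2, 2)

# Images of the quantum Lie algebra basis and of the central element C.
Xp = qH.inv()*E                                   # X_+ = q^{-H} E
Xm = qH.inv()*F                                   # X_- = q^{-H} F
X0 = (q*E*F - F*E/q)/(q + 1/q)
C = qH.inv()**2 + (q - 1/q)/(q + 1/q)*(q*E*F - F*E/q)

# Relations (XC) and the extra relation (Cas).
assert sp.simplify(q**2*X0*Xp - Xp*X0 - q*C*Xp) == sp.zeros(2, 2)
assert sp.simplify(q**-2*X0*Xm - Xm*X0 + C*Xm/q) == sp.zeros(2, 2)
assert sp.simplify(Xp*Xm - Xm*Xp + (q**2 - q**-2)*X0**2
                   - (q + 1/q)*C*X0) == sp.zeros(2, 2)
assert sp.simplify(C**2 - (q - 1/q)**2*(X0**2 + (q*Xm*Xp + Xp*Xm/q)/(q + 1/q))
                   - sp.eye(2)) == sp.zeros(2, 2)

# gamma' as a 9x9 matrix on the ordered basis
# (++, +0, +-, 0+, 00, 0-, -+, -0, --) of L tensor L.
mu = q**2 - q**-2
nu = (q - 1/q)/(q + 1/q)
cols = {
    1: {1: q**-2, 3: -1},                 # gamma'(X+ ox X0)
    3: {3: q**2, 1: -1},                  # gamma'(X0 ox X+)
    5: {5: q**-2, 7: -1},                 # gamma'(X0 ox X-)
    7: {7: q**2, 5: -1},                  # gamma'(X- ox X0)
    2: {2: 1, 6: -1, 4: mu},              # gamma'(X+ ox X-)
    6: {2: -1, 6: 1, 4: -mu},             # gamma'(X- ox X+)
    4: {2: nu, 6: -nu, 4: (q - 1/q)**2},  # gamma'(X0 ox X0)
}
G = sp.zeros(9, 9)
for j, col in cols.items():
    for i, c in col.items():
        G[i, j] = c
# Essential idempotency: gamma'^2 = (q^2 + q^-2) gamma'.
assert sp.simplify(G*G - (q**2 + q**-2)*G) == sp.zeros(9, 9)
```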
The main results of this section are that the locally finite part of $\overline{U}_q (\g)$ is isomorphic, as an algebra, to the algebra with generators $X_0,\: X_\pm ,\: C$ and relations \eqref{XC}, and as a vector space to the algebra of polynomials in four commuting variables $X_0,\: X_\pm,\: C$ subject to the relation \eqref{Cas}. We use the following notation for the various algebras:
\begin{align*}
\mathcal{A} &= \text{enveloping algebra of the quantum Lie algebra $\mathfrak{sl} (2)_q$}\\
&= \text{algebra generated by $Y_\pm ,\; Y_0$ with relations \eqref{Y}},\\
\mathcal{B} &= \text{algebra generated by $X_\pm,\; X_0,\; C$ with relations \eqref{XC}},\\
\mathcal{C} &= \text{algebra generated by $X_\pm,\; X_0,\; C$ with relations \eqref{XC} and \eqref{Cas}},\\
\mathcal{F} &= \text{locally finite part of $\overline{U}_q (\g)$}.
\end{align*}
Then $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are related by homomorphisms \begin{eqnarray*} \varphi &:& \mathcal{B} \to \mathcal{A}\quad \text{with } \varphi (C) =1\\ \psi &:& \mathcal{B}\to\mathcal{C}\quad \text{enforcing \eqref{Cas}}. \end{eqnarray*} \begin{lemma} \label{lem1} The algebra $\mathcal{A}$ has a basis of ordered monomials $Y_-^lY_0^mY_+^n$ where $l,\: m,\: n$ are non-negative integers. \end{lemma} \begin{proof} It is evident that the commutation relations \eqref{Y} allow any product of $Y$s to be expressed as a combination of ordered monomials. We have only to show that these monomials are independent. According to the diamond lemma \cite{Berg}, this will follow if it can be shown that the two methods of reducing $Y_+Y_0Y_-$ to a combination of ordered monomials both lead to the same result. This is easily checked using \eqref{Y}. \end{proof} \begin{lemma}\label{lem2} The algebra $\mathcal{B}$ has a basis of ordered monomials $C^kX_-^lX_0^mX_+^n$. \end{lemma} \begin{proof} $\mathcal{A}\otimes\mathbb{K}[C]$ contains an isomorphic copy of $\mathcal{B}$ generated by $C$ and $X_m=CY_m$.
This is spanned by monomials $C^kX_-^lX_0^mX_+^n=C^{k+l+m+n}Y_-^lY_0^mY_+^n$ and by \lemref{lem1} these are independent. \end{proof} \begin{lemma}\label{lem3} The algebra $\mathcal{C}$ is isomorphic as a vector space to the space of polynomials in four commuting variables $C,\; X_0,\; X_+,\; X_-$ subject to the relation \eqref{Cas}. \end{lemma} \begin{proof} Let \[ C_2=(q+q^{-1})X_0^2+qX_-X_++q^{-1}X_+X_-. \] Using \eqref{XC}, it can be checked that $C_2$ commutes with $X_0,\; X_+$ and $X_-$. Hence the two-sided ideal $I$ generated by $(q+q^{-1})(C^2-1)-(q-q^{-1})^2C_2$ (i.e. the kernel of the relation \eqref{Cas}) is the same as the left ideal generated by it. It follows that the monomials $X_-^lX_0^mX_+^n$ and $CX_-^lX_0^mX_+^n$ are independent modulo $I$, and therefore form a basis of $\mathcal{C}$. But these monomials also form a basis of the commutative algebra described in the lemma. \end{proof} \begin{thm} \label{locfin} The locally finite part of $\overline{U}_q(\ssll (2))$ is isomorphic to $\mathcal{C}$. \end{thm} \begin{proof} By \thmref{JosLet}, the locally finite part of $\mathcal{U}=\overline{U}_q (\g)$ is \[ \mathcal{F} = \sum_{n=0}^\infty V_n \qquad \text{where}\quad V_n=\operatorname{ad} \mathcal{U}(q^{-2nH}). \] Since $\frac{1}{2} nH$ is the highest weight of the $(n+1)$-dimensional representation of $\mathcal{U}$, $\dim V_n=(n+1)^2$. By the derivation property \eqref{Der1} of the adjoint action, \[ V_{m+n} \subseteq V_mV_n. \] $$ V_n \subseteq V_1^n. \leqno{\text{Hence}} $$ Let $\mathcal{U}^\prime$ be the subalgebra of $\mathcal{U}$ generated by $V_1$, i.e. by $C,\: X_-,\: X_+$ and $X_0$. Since the relations \eqref{XC} and \eqref{Cas} hold in $\mathcal{U}$, $\mathcal{U}^\prime$ is a quotient of $\mathcal{C}$ and therefore, by \lemref{lem3}, is spanned by monomials $X_-^kX_0^lX_+^m$ and $CX_-^kX_0^lX_+^m$. Let $W_n$ be the subspace spanned by monomials $X_-^kX_0^lX_+^m$ with $k+l+m=n$.
Then \begin{equation}\label{V1} V_1^{n+2}=V_1^n+CW_{n+1} +W_{n+2} \end{equation} as can be proved by induction, using $$ V_1W_r =CW_r +W_{r+1} $$ $$ C^2 W_r +W_{r+2}=W_r + W_{r+2} \leqno{\text{and}} $$ which follows from \eqref{Cas}. But \begin{equation}\label{V3} \dim W_r\leq \frac{1}{2} (r+1)(r+2), \end{equation} $$ \dim(V_1^{n+2})\leq\dim (V_1^n)+(n+3)^2. \leqno{\text{so}} $$ We can now prove by induction that \[ V_1^n = V_n\oplus V_{n-2}\oplus \cdots \oplus (V_0 \text{ or } V_1). \] For if this is true, then from \eqref{V1} we have \[ V_1^{n+2}\supseteq V_{n+2}\oplus V_n \oplus \cdots \] since the $V_n$ are known to be complementary in $\mathcal{U}$ \cite{JL}. By dimensions the inclusion must be equality, so the sum in \eqref{V1} is direct and also \eqref{V3} must be equality. It follows that the monomials $X_-^kX_0^lX_+^m$ and $CX_-^kX_0^lX_+^m$ are independent and form a basis of $\sum V_n=\mathcal{F}$. By \lemref{lem3}, $\mathcal{F}$ is isomorphic to $\mathcal{C}$. \end{proof} In one sense, the locally finite part of the quantised enveloping algebra is the same size as the classical enveloping algebra of $\mathfrak{sl} (2)$. Both can be written as $Z\otimes \mathcal{U}_0$ where $Z$, the centre, is a polynomial algebra in one variable and $\mathcal{U}_0$ has the same decomposition into simple modules under the adjoint action in both cases. However, the above shows that it is more appropriate to regard the locally finite part of $\overline{U}_q(\ssll (2))$ as twice as big as the classical enveloping algebra. In the classical case the centre is generated by a Casimir element which is a polynomial in the Lie algebra generators; in the quantum case the centre is generated by a square root of such a polynomial, tending to 1 in the classical limit. \section{The Quantum Lie Algebra $\mathfrak{sl} (3)_q$} This section contains some details of and observations on the quantum Lie algebra of type $A_2$. All were established by mindless calculation. 
We take $\lambda $ to be the highest weight of the triplet representation, \[ \lambda =\frac{1}{3} (2H_1 + H_2) \] (identifying weights with elements of the Cartan subalgebra by means of the Killing form, and taking $H_1$ and $H_2$ to have unit length). A convenient basis for the quantum Lie algebra $\mathfrak{sl} (3)_q$ is the following, consisting of two elements $T_1, T_2$ (scalar multiples of the $K_1, K_2$ defined in \lemref{Clemma}) and elements $X_\alpha = X_{\pm 1},\,X_{\pm2},\,X_{\pm 12}$ corresponding to the roots $\alpha = \pm H_1,\,\pm H_2,\,\pm (H_1+H_2)$ of $\mathfrak{sl} (3)$: \begin{eqnarray*} T_1&=&\frac{\operatorname{ad} (F_1E_1)q^{-4\lambda }}{q-q^{-1}} = q^{-\thrd{2}(H_1+2H_2)} (qE_1F_1-q^{-1}F_1E_1)\\ T_2&=&\operatorname{ad} (F_2E_2)T_1\\ X_1&=&\frac{\operatorname{ad} E_1(q^{-4\lambda})}{q-q^{-1}} =q^{-\frac{1}{3}(5H_1+4H_2)}E_1 \\ X_{12}&=&\operatorname{ad} E_2(X_1) =q^{-\frac{1}{3}(5H_1+H_2)}(E_2E_1-q^{-1}E_1E_2)\\ X_2&=&\operatorname{ad} F_1(X_{12})\\ X_{-1}&=&\frac{\operatorname{ad} F_1(q^{-4\lambda})}{q-q^{-1}} =-q^{-\frac{1}{3}(5H_1+4H_2)}F_1\\ X_{-12}&=&\operatorname{ad} F_2(X_{-1}) =q^{-\frac{1}{3}(5H_1+H_2)}(qF_1F_2-F_2F_1)\\ X_{-2}&=&-\operatorname{ad} E_1(X_{-12})\\ \end{eqnarray*} The adjoint action of $\overline{U}_q (\g)$ on these basis elements is: \begin{eqnarray*} \operatorname{ad}(q^{H_i})T_j&=&T_j\\ \operatorname{ad}(q^{H_i})X_{\pm j}&=&\begin{cases} q^{\pm 1}X_{\pm j} &\text{if $i=j$}\\ q^{\mp 1/2 }X_{\pm j} &\text{if $i\ne j$} \end{cases}\\ \operatorname{ad}(q^{H_i})X_{\pm 12}&=&q^{\pm 1/2}X_{\pm 12} \end{eqnarray*} \begin{equation*} \operatorname{ad} E_i(X_\alpha )=\begin{cases} X_{\alpha +H_i} &\text{if $\alpha +H_i$ is a root}\\ T_i &\text{if $\alpha =-H_i$ }\\ 0 &\text{otherwise} \end{cases} \end{equation*} \begin{equation*} \operatorname{ad} F_i(X_\alpha )=\begin{cases} X_{\alpha -H_i} &\text{if $\alpha -H_i$ is a root}\\ T_i &\text{if $\alpha =H_i$}\\ 0 &\text{otherwise} \end{cases} \end{equation*} \begin{equation*} 
\operatorname{ad} E_i(T_j)=\begin{cases} (q+q^{-1})X_i &\text{if $i=j$}\\ X_{12} &\text{if $i\ne j$} \end{cases} \end{equation*} \begin{equation*} \operatorname{ad} F_i(T_j)=\begin{cases} (q+q^{-1})X_{-i} &\text{if $i=j$}\\ X_{-12} &\text{if $i\ne j$} \end{cases} \end{equation*} {}From this the adjoint brackets between the basis elements can be calculated; the result is given in the table, in which $[X,Y]$ is given in the row labelled by $X$ and the column labelled by $Y$. If $\alpha +\beta \ne 0$, $[X_\alpha , X_\beta ]=c_{\alpha \beta }X_{\alpha +\beta}$ is represented by the coefficient $c_{\alpha \beta }$; similarly, $[T_i, X_\alpha ] = b_{i\alpha }X_\alpha $ is represented by $b_{i\alpha }$. \begin{figure}[h] \begin{tabular}{c||c|c|c|c|} &$T_1$ & $T_2$ & $X_1$ & $X_{-1}$\\ \hline \hline $T_1$ & $-(q^2-q^{-2})T_1$ & $-(q-q^{-1})T_1$ & $-q(q+q^{-1})$ & $q^{-1}(q+q^{-1})$\\ \hline $T_2$ & $-(q-q^{-1})T_1$ & & $-q$ & $q^{-1}$\\ \hline $X_1$ & $q^{-1}(q+q^{-1})$ & $q^{-1}$ & 0 & $T_1$\\ \hline $X_{-1}$ & $-q(q+q^{-1})$ & $-q$ & $-T_1$ & 0\\ \hline $X_2$ & $-q^2$ & $-q^{-1}(q^2+q^{-2})$ & $-q^{3/2}$ & 0\\ \hline $X_{-2}$ & $q^{-2}$ & $q(q^2+q^{-2})$ & 0 & $q^{-3/2}$\\ \hline $X_{12}$ & 1 & $-q^{-3}$ & 0 & $q^{1/2}$\\ \hline $X_{-12}$ & $-1$ & $q^3$ & $-q^{-1/2}$ & 0\\ \hline \end{tabular} \noindent\par\medskip \begin{tabular}{c||c|c|c|c|} &$X_2$ & $X_{-2}$ & $X_{12}$ & $X_{-12}$\\ \hline \hline $T_1$ & $q^{-2}$ & $-q^2$ & $-1$ & 1\\ \hline $T_2$ & $q(q^2+q^{-2})$ & $-q^{-1}(q^2+q^{-2})$ & $q^3$ & $-q^{-3}$\\ \hline $X_1$ & $q^{-3/2}$ & 0 & 0 & $q^{1/2}$\\ \hline $X_{-1}$ & 0 & $-q^{3/2}$ & $-q^{-1/2}$ & 0\\ \hline $X_2$ & 0 & & 0 & $-q^{-5/2}$\\ \hline $X_{-2}$ & & 0 & $q^{5/2}$ & 0\\ \hline $X_{ 12}$ & 0 & $-q^{-5/2}$ & 0 & $-q^{-1}T_1+T_2$\\ \hline $X_{-12}$ & $q^{5/2}$ & 0 & $qT_1-T_2$ & 0\\ \hline \end{tabular} \caption{The brackets of $\mathfrak{sl} (3)_q$.} \end{figure} The entries which are too long to fit into the table are \begin{eqnarray*} {[}T_2, T_2] &=& 
-(q^2-q^{-2})T_1+(q^3-q^{-3})T_2,\\ {[}X_2, X_{-2}] &=& q^{-1}(q-q^{-1})T_1 - qT_2,\\ {[}X_{-2}, X_2] &=& q(q-q^{-1})T_1+q^{-1}T_2. \end{eqnarray*} Notice that if $L$ is regarded as a module over $\mathbb{K} [q, q^{-1}]$ then, with this basis, the brackets exhibit the simple antisymmetry which was also found by Delius and H\"uffmann. Define a map of {\em $q$-conjugation} on $L$, $X \to X^\blacktriangledown$, by the requirements that it leaves the above basis elements fixed and that $\{f(q)X\}^\blacktriangledown=f(q^{-1})X^\blacktriangledown$. Then it is apparent from the table that \begin{equation} [Y, X]=-[X, Y]^\blacktriangledown \quad \text{for all } X, Y \in L. \end{equation} The quantum antisymmetriser of $\overline{U}_q (\g)$ can be described as follows. The quantum Lie algebra $L=\mathfrak{sl} (3)_q$ carries an irreducible representation of $\overline{U}_q (\g)$; as in the classical case, $L \otimes L$ splits into irreducible representations as \[ L \otimes L = M_{27} \oplus M_{10} \oplus M_{10}^* \oplus M_8^{\text{a}} \oplus M_8^{\text{s}} \oplus M_1 \] where the numerical labels 27, 10, 8, 1 indicate the dimensions of the simple modules. 
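The same antisymmetry holds, with the same $q$-conjugation, for the $\mathfrak{sl} (2)_q$ brackets \eqref{[]}; since every bracket there is a scalar multiple of a single basis element, the property $[Y, X]=-[X, Y]^\blacktriangledown$ can be spot-checked mechanically. A minimal sympy sketch (the names and encoding are mine, not the paper's):

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# Brackets of sl(2)_q from eq. ([]): each [X, Y] is coeff * (basis element).
# Encoded as (coefficient, result) with result in {'X+', 'X0', 'X-', None}.
B = {
    ('X+', 'X+'): (sp.S(0), None),
    ('X+', 'X0'): (-1/q, 'X+'),
    ('X+', 'X-'): (q + 1/q, 'X0'),
    ('X0', 'X+'): (q, 'X+'),
    ('X0', 'X0'): (q - 1/q, 'X0'),
    ('X0', 'X-'): (-1/q, 'X-'),
    ('X-', 'X+'): (-(q + 1/q), 'X0'),
    ('X-', 'X0'): (q, 'X-'),
    ('X-', 'X-'): (sp.S(0), None),
}

# q-conjugation fixes the basis elements and sends f(q) -> f(1/q);
# the claimed antisymmetry is [Y, X] = -[X, Y]^conj.
for (x, y), (c, r) in B.items():
    c_rev, r_rev = B[(y, x)]
    assert sp.simplify(c_rev + c.subs(q, 1/q)) == 0
    assert r_rev == r or c == 0   # same target element (trivial if bracket is 0)
```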
These are completely defined by their highest-weight elements: \begin{equation}\label{hweight} \begin{split} W_{27}&=X_{12}\otimes X_{12},\\ W_{10}&=q^{1/2}X_1\otimes X_{12} - q^{-1/2}X_{12}\otimes X_1,\\ W_{10}^* &= q^{1/2}X_2 \otimes X_{12} - q^{-1/2}X_{12}\otimes X_2,\\ W_8^{\text{s}} &= \left\{q^{5/2}(q+q^{-1})+q^{-5/2}\right\}X_1 \otimes X_2 + \left\{q^{-5/2}(q+q^{-1})+q^{5/2}\right\}X_2 \otimes X_1\\ &\quad -(q^4T_1+q^{-1}T_2)\otimes X_2 -X_2 \otimes (q^{-4}T_1+qT_2),\\ W_8^{\text{a}} &= q^{3/2}X_1\otimes X_2 - q^{-3/2}X_2 \otimes X_1 -q(qT_1-T_2)\otimes X_{12}\\ &\quad + q^{-1}X_{12}\otimes (q^{-1}T_1 - T_2),\\ W_1&=T_1\otimes T_2 +T_2\otimes T_1 -(q+q^{-1})(T_1\otimes T_1 +T_2\otimes T_2)\\ &\quad +(q^2+1+q^{-2})(q^{-1}X_1\otimes X_{-1} +qX_{-1}\otimes X_1\\ &\qquad +q^{-1}X_2\otimes X_{-2} + qX_{-2}\otimes X_2 -q^{-2}X_{12}\otimes X_{-12} - q^2 X_{-12}\otimes X_{12}). \end{split} \end{equation} Let $\Pi _{27}, \Pi _{10}, \Pi _{10}^*, \Pi _8^{\text{s}}, \Pi _8^{\text{a}}$ and $\Pi _1$ denote the projectors onto these simple modules. Then \begin{lemma} The quantum antisymmetriser of $\overline{U}_q (\g)$ is \begin{equation} \gamma = \frac {(q^2+1)\Pi _{10}^* + (q^2+q^{-2})\Pi _8^{\text{a}} +(1+q^{-2})\Pi _{10}} {q^2 - 1 + q^{-2}} \end{equation} \end{lemma} \begin{proof} Straightforward Hopf-algebra manipulations show that $\gamma $ commutes with the diagonal adjoint action of $\mathcal{U}=U_q(\mathfrak{sl} (3))$ on $L \otimes L$, i.e. with $(\operatorname{ad} \otimes \operatorname{ad} )\circ \Delta (u)$ for all $u \in \mathcal{U}$. Thus it is only necessary to calculate the action of $\gamma $ on the highest-weight elements listed above. This is not too laborious for the cases of $W_{10}, W_{10}^*$ and $W_{27}$.
For the octets the relevant highest-weight elements (eigenvectors of $\gamma $) and the eigenvalues can be determined from the Jacobi identity: in the two-dimensional space of highest-weight octet vectors (elements of $L\otimes L$ with weight $H_1 + H_2$ which are annihilated by $(\operatorname{ad}\otimes\operatorname{ad})\circ\Delta (E_1)$ and $(\operatorname{ad}\otimes\operatorname{ad})\circ\Delta (E_2)$), the eigenvectors of $\gamma $ are the relative eigenvectors of $\operatorname{ad}\circ\beta $ and $m\circ(\operatorname{ad}\otimes\operatorname{ad})$. The eigenvalue 0 for the singlet can be verified in a similar way. We find that $\beta (W_1)=0$ but $m\circ (\operatorname{ad}\otimes\operatorname{ad} )(W_1)\ne 0$. Since $\operatorname{ad}\circ\beta (W_1)=m\circ (\operatorname{ad}\otimes\operatorname{ad} )\circ\gamma (W_1)$ and $\gamma (W_1)$ must be a multiple of $W_1$, it follows that $\gamma (W_1)=0$. \end{proof} The quantised enveloping algebra $\mathcal{U}=\overline{U}_q(\ssll (3))$ has a symmetry between $(E_1,F_1,H_1)$ and $(E_2,F_2,H_2)$ corresponding to the outer automorphism of $\mathfrak{sl} (3)$ which comes from the symmetry of the Dynkin diagram $\circ$---$\circ$. The quantum Lie algebra $L=\mathfrak{sl} (3)_q\subset\operatorname{ad}\mathcal{U}(q^{-4\lambda })$ is not invariant under this automorphism of $\mathcal{U}$. Indeed, the weight diagram of the 3-dimensional representation is taken by this symmetry to that of the conjugate representation, with highest weight $\lambda ^* \ne\lambda $. This gives another quantum Lie algebra $L^*\subset\operatorname{ad}\mathcal{U}(q^{-4\lambda ^*})$ with $L^*\ne L$. This is another octet (under the adjoint representation) which also generates the locally finite part of $\mathcal{U}$. It is interesting to express the elements of $L^*$ in terms of those of $L$ and to compare this situation with the classical one.
In the classical enveloping algebra the octets are the Lie algebra itself, one which is quadratic in the generators, and multiples of these by functions of the Casimirs. The outer automorphism acts linearly on the generators. In the locally finite part of the quantised enveloping algebra we have the same general structure \cite{JL}; the quadratic octet has highest-weight element obtained by multiplying the factors of the tensor $W_8^{\text{s}}$ in \eqref{hweight}. The highest-weight element of the octet $L$ is $X_{12}$. Let $X_{12}^*$ be the highest-weight element of $L^*$, and let $Y_{12}=m(W_8^{\text{s}})$. Then a calculation gives \begin{equation} X_{12}^*=\frac{q^{1/2}(q+q^{-1})CX_{12}+q(q-q^{-1})Y_{12}} {(q^{1/2}+q^{-1/2})(q^2+1+q^{-2})}. \end{equation} Thus the elements of $L^*$ are quadratic in the generators $L$, and the quantum counterpart of the outer automorphism is a nonlinear symmetry of $\overline{U}_q(\ssll (3))$. \section{Appendix. Proofs of Lemmas \ref{Clemma} and \ref{adc}} \noindent \textbf {Lemma \ref{Clemma}} The central element in $\overline{L}$ is given by \begin{equation}\tag{\ref{C}} C_\lambda =\sum_{r=0}^{n-1}(-1)^r\frac{[n-r]_q}{[n]_q}K_r \end{equation} where $K_r$ is defined recursively by \begin{equation}\tag{\ref{Kr}} K_r=\operatorname{ad}(F_rE_r)K_{r-1},\qquad K_0=q^{-4\lambda }. \end{equation} \begin{proof} Let $C$ be defined by \eqref{C}. Since this has zero weight for the Cartan algebra generated by $H_1,\ldots ,H_{n-1}$ (i.e. $\operatorname{ad} q^{H_i}(C)=C$), in order to show that it is central it is sufficient to show that $\operatorname{ad} E_i(C)=0$ for $i=1,\ldots ,n-1$. We calculate $\operatorname{ad} E_i(K_j)$. 
Since $\operatorname{ad} \overline{U}_q (\g) (q^{-4\lambda })$ is the direct sum of the $q$-analogues of the scalar and the adjoint modules of $\mathfrak{sl} (n)$, the structure of the roots of $\mathfrak{sl} (n)$ gives \begin{equation}\label{adE} \operatorname{ad} E_j \operatorname{ad} E_i \operatorname{ad} E_{i-1} \cdots \operatorname{ad} E_1(q^{-4\lambda })=0 \quad \text{unless }j=i+1. \end{equation} $$ \operatorname{ad} E_j(K_i)=0 \quad \text{if } j > i+1 \text{ and } i > 0. \leqno{\text{Hence}} $$ This also holds if $i=0$ since $q^{-4\lambda }$ commutes with $E_j$ for $j > 1$. For $i=j$ we have \begin{eqnarray*} \operatorname{ad} E_i(K_i) &=& \operatorname{ad} \( F_iE_i + [2H_i]_q \)\operatorname{ad} E_i(K_{i-1})\\ &=& [\< 2H_i, H_i\>]_q\operatorname{ad} E_i(K_{i-1})\\ &=& [2]_q\operatorname{ad} E_i(K_{i-1}) \end{eqnarray*} since \begin{eqnarray*} (\operatorname{ad} E_i)^2K_{i-1} &=& \operatorname{ad} F_{i-1} \ldots \operatorname{ad} F_1 (\operatorname{ad} E_i)^2 \operatorname{ad} E_{i-1} \ldots \operatorname{ad} E_1(q^{-4\lambda })\\ &=& 0 \qquad \text{by \eqref{adE}}. \end{eqnarray*} For $j = i - 1$ we have \begin{eqnarray} \label{FEEK} \operatorname{ad} E_j(K_{j+1}) &=& \operatorname{ad} F_{j+1}\operatorname{ad} ( F_jE_j + [2H_j])\operatorname{ad} E_{j+1}\operatorname{ad} E_j(K_{j-1})\notag\\ &=& \operatorname{ad} ( [2H_j - 1]_q)\operatorname{ad} F_{j+1}\operatorname{ad} E_{j+1} \operatorname{ad} E_j(K_{j-1})\notag\\ &=& \operatorname{ad} (F_{j+1}E_{j+1}E_j)K_{j-1} \quad\quad\quad\text{since } [2\< H_j, H_j\> - 1]_q =1\notag\\ &=& \operatorname{ad} [(E_{j+1}F_{j+1} - [2H_{j+1}]_q) E_j] K_{j-1}\notag\\ &=& \operatorname{ad} E_j(K_{j-1}) \end{eqnarray} since $ \operatorname{ad} F_{j+1} \operatorname{ad} E_j(K_{j-1}) = 0$ by the structure of the roots of $\mathfrak{sl} (n)$.
\smallskip For $j < i - 1$, \begin{eqnarray*} \operatorname{ad} E_j(K_i) &=& \operatorname{ad} (F_i \ldots F_{j+1}E_j F_j \ldots F_1 E_i \ldots E_1) q^{-4\lambda }\\[3pt] &=& \operatorname{ad} [ F_i\ldots F_{j+1}\( F_jE_j + [2H_j]_q\) F_{j-1}\ldots F_1 E_i\ldots E_1] q^{-4\lambda }\\[3pt] &=& [\< 2H_j, H_{j+1}+\cdots+H_i\> ]_q \operatorname{ad}\left( F_i E_i\ldots F_{j+1}E_{j+1}E_j\right) K_{j-1}\\ &&\quad\quad\quad\quad\qqqquad\quad\quad\quad\quad\qqqquad\qquad\text{ by \eqref{adE}}\\[3pt] &=& - \operatorname{ad} \left( F_iE_i\ldots F_{j+2}E_{j+2}E_j\right) K_{j-1} \quad\quad\quad\quad\text{ by \eqref{FEEK}}\\[3pt] &=& 0 \quad\quad\quad\quad \text{ by \eqref{adE}.} \end{eqnarray*} To summarise, \begin{equation} \begin{split} \operatorname{ad} E_j(K_i) &= 0 \quad\quad\quad \text{if $i < j-1$ or $i > j+1$,}\\ \operatorname{ad} E_j(K_{j+1}) &= \operatorname{ad} E_j (K_{j-1})\\ \text{and }\quad\quad\quad\quad \operatorname{ad} E_j (K_j) &= [2]_q \operatorname{ad} E_j (K_{j - 1}). \end{split} \end{equation} Hence for $j=1,\ldots, n - 2$, \begin{eqnarray*} \operatorname{ad} E_j (C) &=& \frac{(-1)^{j-1}}{[n]_q} \( [n-j-1]_q - [2]_q[n - j]_q + [n-j+1]_q \)\operatorname{ad} E_j (K_{j-1})\\ & =& 0, \end{eqnarray*} while \[ \operatorname{ad} E_{n-1}(C) = \frac{(-1)^{n-1}}{[n]_q}\([2]_q - [2]_q[1]_q \)\operatorname{ad} E_{n-1}(K_{n-2}) = 0. \] Thus $C$ is a highest-weight vector in the module $\overline{L}$. Since its weight is zero, it follows that $C$ is an invariant element ($\operatorname{ad} x(C)=\varepsilon (x)C$ for all $x\in\mathcal{U}$) and therefore central. It is therefore a multiple of $C_\lambda$. But $C=q^{-4\lambda}+w$ with $w\in L$; hence $C=C_{\lambda}$. \end{proof} \noindent \textbf{\lemref{adc}} The adjoint action of the central element on $L$ is \begin{equation} \operatorname{ad} C_\lambda (x) = (q^2-1+q^{-2})x \quad \text{for } x\in L.
\end{equation} \begin{proof} It follows from \lemref{Clemma} by Schur's lemma that $\operatorname{ad} C_\lambda $ acts as a multiple of the identity on the irreducible component $L$. To find the multiple we calculate $\operatorname{ad} C_\lambda (X_1)$ for $X_1=\operatorname{ad} E_1(q^{-4\lambda })$. We use \begin{eqnarray*} \operatorname{ad} E_r(X_1) &=& 0 \quad \text{unless } r=2,\\ \operatorname{ad} F_r(X_1) &=& 0 \quad \text{unless } r=1,\\ \operatorname{ad} F_1(X_1) &=& K_1. \end{eqnarray*} Then we have \[ \operatorname{ad} K_0(X_1) = q^{-\< 4\lambda , H_1\>}X_1 = q^{-2}X_1. \] Now \[ \operatorname{ad} E_1 (q^{-4\lambda }) = (q-q^{-1})q^{-4\lambda + H_1}E_1, \] so \begin{eqnarray}\label{K1} K_1 &=& (q-q^{-1})\operatorname{ad} F_1(q^{-4\lambda +H_1}E_1)\nonumber\\ &=& - (q-q^{-1})q^{-4\lambda +2H_1}(qE_1F_1 - q^{-1}F_1E_1)\nonumber\\ &=& -(q-q^{-1})^2q^{-4\lambda +2H_1}F_1E_1 - q\cdot q^{- 4\lambda }(q^{4H_1} - 1) \end{eqnarray} Therefore \begin{eqnarray}\label{K1X1} \operatorname{ad} K_1 (X_1) &=& - q\( q^{\<-4\lambda +4H_1, H_1\>} - q^{\<-4\lambda , H_1\>}\)X_1 \nonumber \\ &=& -q(q^2 - q^{-2})X_1. \end{eqnarray} To calculate $\operatorname{ad} K_2(X_1)$, note that \[ \operatorname{ad} E_2 (K_1) = E_2 K_1 q^{H_2} - q^{H_2 - 1}K_1E_2 = q^{H_2 - 1}(E_2 K_1 - K_1 E_2). \] Since $\operatorname{ad} F_2 (X_1) = 0$, we have \begin{eqnarray*} \operatorname{ad} K_2 (X_1) &=& \operatorname{ad} \left[ F_2\cdot\operatorname{ad} E_2(K_1)\cdot q^{H_2}\right] X_1\\ &=& \operatorname{ad} \left[ q^{2H_2}F_2(E_2K_1 - K_1E_2)\right] X_1.
\end{eqnarray*} Using \eqref{K1X1} and the fact that $\operatorname{ad} F_2(X_1)=0$, \begin{eqnarray*} \operatorname{ad}\left[ q^{2H_2}F_2E_2K_1\right] X_1 &=& - q(q^2 - q^{-2})\operatorname{ad}\left[ q^{2H_2} \( E_2F_2 - [2H_2]_q\) \right] X_1\\ &=& - (q^2 - q^{-2})X_1; \end{eqnarray*} using \eqref{K1} and the fact that $\operatorname{ad}(E_1E_2)X_1=0$, \begin{eqnarray*} \operatorname{ad}\left[ q^{2H_2}F_2K_1E_2\right] X_1 &=& - q\operatorname{ad}\left[ q^{2H_2}F_2q^{-4\lambda }(q^{4H_1} - 1)E_2\right]X_1\\ &=& - q\operatorname{ad}\left[ q^{2H_2 - 4\lambda }\( q^{4H_1 - 2} - 1\) \( E_2F_2 - [2H_2]_q\) \right]X_1\\ &=& - q^{-2}(q^2 - 1)X_1; \end{eqnarray*} \[ \therefore \qquad \operatorname{ad} K_2(X_1) = - (q^2 - 1)X_1. \] We now have that $K_3$ is a sum of products of $\operatorname{ad} E_3$ and $\operatorname{ad} F_3$, both of which annihilate $X_1$, and operators of which $X_1$ is an eigenvector; it follows that $\operatorname{ad} K_3(X_1)=0$. Similarly $\operatorname{ad} K_r(X_1)=0$ for $r > 3$; hence \begin{eqnarray*} \operatorname{ad} C_\lambda (X_1) &=& \(q^{-2} + q(q-q^{-1})\frac{(q+q^{-1})[n-1]_q - [n-2]_q}{[n]_q} \)X_1\\ &=&(q^2 - 1 + q^{-2})X_1. \end{eqnarray*} \end{proof} \bibliographystyle{plain}
\section{Introduction} In this article we consider a scalar-on-function historical linear regression model where the functional predictor $X_i(t), i=1,\ldots,n,$ is defined on a time interval $[0,T]$ but influences the scalar response $Y_i$ only on $[0,\delta]$ for some unknown cutoff time $\delta\leq T$. Specifically, the model is written as \begin{equation} \label{equ: 1dim_his_fun_lin_mod} Y_i = \mu + \int_{0}^\delta X_i(t)\beta(t) \operatorname{d} t + \varepsilon_i, \end{equation} where, without loss of generality, $X_i(\cdot)$ is assumed to be centered, i.e., $\mathbf{E} X_i(t)\equiv 0$, $\mu$ is then the mean of $Y_i$, $\beta(t)$ is the slope function (or coefficient function), and $\varepsilon_i$ represents the noise that is independent of $X_i(\cdot)$. By defining a new process $x_i(t)=X_i(T-t)$ and a new slope function $b(t)=\beta(T-t)$, the above model can be equivalently expressed as $Y_i=\mu+\int_{ T-\delta}^{T}x_i(t)b(t)\operatorname{d} t+\varepsilon_i$, in which the response $Y_i$ depends only on the recent past of the process $x_i(\cdot)$ up to a time lag $\delta$. The term ``historical'' stems from the resemblance to the function-on-function historical linear model $Y_i(s)=\mu(s)+\int_{s-\delta}^s X_i(t)\beta(s,t)\operatorname{d} t+\varepsilon_i(s)$ considered in \cite{Malfait2003}, where the response is a function instead of a scalar. When $s=T$, that model reduces to model \eqref{equ: 1dim_his_fun_lin_mod} or its equivalent form. An example of scalar-on-function historical linear regression is to determine the effects of the past engine acceleration on the current particulate matter emission. The response variable is the current particulate matter emission and the explanatory function is the smoothed engine acceleration curve for the past $60$ seconds. 
Figure \ref{fig: spec_truck_data_all_4x5}(a) displays $108$ smoothed engine acceleration curves against the backward time, in which $0$ means the current time, while Figure \ref{fig: spec_truck_data_all_4x5}(b) shows the slope function estimated by the smoothing spline method \citep{Cardot2003}. We observe from Figure \ref{fig: spec_truck_data_all_4x5}(b) that the acceleration over the past 20--60 seconds makes no apparent contribution to predicting the current particulate matter emission. Intuitively, the particulate matter emission should depend on the recent acceleration, but not on the distant past. Therefore, if a linear relation between the particulate matter emission and the acceleration curve is assumed, one might naturally use the historical linear model \eqref{equ: 1dim_his_fun_lin_mod} to analyze such data, where the task includes identifying the cutoff time beyond which the engine acceleration has no influence on the current particulate matter emission. \begin{figure}[H] \begin{center} \includegraphics[width=2.5in]{intro_truck_data_all_4x4} \hspace{0.1in} \includegraphics[width=2.5in]{intro_truck_beta_smooth_4x4} \caption{(a) $108$ smoothed engine acceleration curves. (b) Estimated slope function using the smoothing spline approach \citep{Cardot2003}. The arrows indicate the direction of time.} \label{fig: spec_truck_data_all_4x5} \end{center} \end{figure} The degenerate case $\delta=T$ in model \eqref{equ: 1dim_his_fun_lin_mod} corresponds to the classic functional linear regression that has been studied in a vast literature. \cite{Hastie1993} pioneered the smooth estimation of $\beta(t)$ via penalized least squares and/or smooth basis expansion. \cite{Cardot2003} adopted B-spline basis expansion, while \cite{Li2007} utilized the Fourier basis, both with a roughness penalty to control the smoothness of estimated slope functions. 
Data-driven bases such as eigenfunctions of the covariance function of the predictor process $X_i(t)$ were considered in \cite{Cardot2003}, \cite{Cai2006} and \cite{Hall2007}. \cite{Yuan2010} took a reproducing kernel Hilbert space approach to estimate the slope function. The case of sparsely observed functional data was studied by \cite{Yao2005b}. These estimation procedures for classic functional linear regression do not apply to the historical linear model, where the cutoff $\delta$ is unknown and possibly smaller than $T$. For models beyond linear regression and a comprehensive introduction to functional data analysis, readers are referred to the monographs by \cite{Ramsay2005}, \cite{Ferraty2006}, \cite{Hsing2015} and \cite{Kokoszka2017}, as well as the review papers by \cite{Morris15} and \cite{Wang2016} and references therein. Model \eqref{equ: 1dim_his_fun_lin_mod} has been investigated by \cite{Hall2016}, who proposed to estimate $\beta(t)$ and $\delta$ by penalized least squares with a penalty on $\delta^2$. The resulting estimates for $\beta(t)$ are discontinuous at $t=\hat{\delta}$, where $\hat{\delta}$ denotes the estimate of $\delta$. This feature might not be desirable when $\beta(t)$ is \emph{a priori} assumed to be continuous. For example, it is more reasonable to assume that the acceleration function influences particulate matter in a continuous and smooth manner. Moreover, in practice, predictor functions are often not very smooth, and our simulation study suggests that the estimates of \cite{Hall2016} generally do not perform well in this case. Alternatively, we observe that model \eqref{equ: 1dim_his_fun_lin_mod} is equivalent to a classic functional linear model with $\beta(t)=0$ for all $t\in [\delta,T]$. Such a slope function $\beta(t)$ is a special case of locally sparse functions, i.e., functions that are zero on a substantial portion of their domain. 
Locally sparse slope functions have been studied in \cite{Lin2017}, as well as in the pioneering works of \cite{James2009} and \cite{Zhou2013}. For example, in \cite{Lin2017}, a general functional shrinkage regularization technique, called fSCAD, was proposed and shown to encourage local sparseness. Although these endeavors are able to produce a smooth and locally sparse estimate, they do not specifically focus on the tail region $[T-\delta,T]$. Therefore, the estimated slope functions produced by such methods might not be zero in the region that is very close to the endpoint $T$, in particular when the boundary effect is not negligible. In this article, we propose a new nested group bridge approach to estimate the slope function $\beta(t)$ and the cutoff time $\delta$. Compared to the existing methods, the proposed approach has two features. First, it is based on B-spline basis expansion and penalized least squares with a roughness penalty. Therefore, the resulting estimator of $\beta(t)$ is continuous and smooth over the entire domain $[0,T]$, in contrast to the discontinuous estimator of \cite{Hall2016}. Second, it employs a new nested group bridge shrinkage method proposed in Section \ref{sec:methodology} to specifically shrink the estimated function on the tail region $[T-\delta,T]$. The group bridge penalty was proposed in \cite{Huang2009} for variable selection, and utilized by \cite{Wang2015} for locally sparse estimation in the setting of nonparametric regression. In our approach, we organize the coefficients of B-spline basis functions into a sequence of nested groups and apply the group bridge penalty to these groups. With the aid of the B-spline basis expansion, this nested structure enables us to shrink the tail of the estimated slope function, which addresses the boundary problem of the generic locally sparse estimation procedures mentioned above. We structure the rest of the paper as follows. 
In Section \ref{sec:methodology} we present the proposed estimation method for the slope function and the cutoff time, and also provide computational details. In Section \ref{sec:theory} we investigate the asymptotic properties of derived estimators. Simulation studies are discussed in Section \ref{sec:simulation}, and an application to the particulate matter emissions data is given in Section \ref{sec:application}. \section{Methodology}\label{sec:methodology} \subsection{Nested Group Bridge Approach} Our estimation method utilizes B-spline basis functions that are detailed in \cite{deBoor2001}. Let $\bm{B}(t) = (B_1(t), \ldots, B_{M + d}(t))^{{ \mathrm{\scriptscriptstyle T} }}$ be a vector that contains $M + d$ B-spline basis functions defined on $[0, T]$ with degree $d$ and $M+1$ equally spaced knots $0 = t_0 < t_1 < \cdots < t_M = T$. For $m\geq 0$, let $\bm{B}^{(m)}(t) = (B_1^{(m)}(t), \ldots, B_{M + d}^{(m)}(t))^{{ \mathrm{\scriptscriptstyle T} }}$ denote the vector of the $m$-th derivatives of the B-spline basis functions. Each of these basis functions is a piecewise polynomial of degree $d$. B-spline basis functions are well known for their compact support property, i.e., each basis function is positive over at most $d+1$ adjacent subintervals. For illustration, Figure \ref{fig: Bspline_basis_funs} shows thirteen B-spline basis functions defined on $[0, 1]$ with $d = 3$ and $M=10$. Due to this compact support property, if we approximate $\beta(t)$ by a linear combination of B-spline basis functions, then such approximation is locally sparse if the coefficients are sparse in groups. \begin{figure} \begin{center} \includegraphics[width=3.3in]{fig_Bspline_basis_funs} \caption{The thirteen B-spline basis functions defined on $[0, 1]$ with degree three and eleven equally spaced knots. The red vertical dashed lines represent the nine interior knots.} \label{fig: Bspline_basis_funs} \end{center} \end{figure} We shall further introduce some notations. 
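Before introducing further notation, the compact support property can be checked numerically. The following sketch (ours, using \texttt{scipy}; the values of $M$ and $d$ match Figure \ref{fig: Bspline_basis_funs} but are otherwise illustrative) constructs the $M+d$ basis functions and verifies that each is positive on an interval of length at most $(d+1)/M$:

```python
# Sketch (ours, not from the paper): build the M + d B-spline basis functions
# of degree d on M + 1 equally spaced knots and verify compact support.
import numpy as np
from scipy.interpolate import BSpline

d, M = 3, 10                                    # degree and number of subintervals
interior = np.linspace(0.0, 1.0, M + 1)         # the M + 1 equally spaced knots
knots = np.r_[[0.0] * d, interior, [1.0] * d]   # clamped (repeated) boundary knots

def basis(j, t):
    """Evaluate B_j(t) for j = 0, ..., M + d - 1 (zero-based indexing)."""
    coef = np.zeros(M + d)
    coef[j] = 1.0
    return BSpline(knots, coef, d)(t)

grid = np.linspace(0.0, 1.0, 1001)
B = np.column_stack([basis(j, grid) for j in range(M + d)])

# each basis function is positive on at most d + 1 adjacent subintervals,
# i.e., on an interval of length at most (d + 1) / M
support_len = [np.ptp(grid[B[:, j] > 1e-12]) for j in range(M + d)]
assert max(support_len) <= (d + 1) / M + 1e-9
```

The clamped (repeated boundary) knot vector is what yields exactly $M+d$ basis functions of degree $d$.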
Let $I_j = (t_{j-1}, t_M)$ and $\emph{A}_j = \{j, j + 1, \ldots, M + d\}$ for $j = 1, \ldots, M$. Intuitively, each group $A_j$ collects the indices of B-spline basis functions that are nonzero on $I_j$. For a vector $\bm{b} = (b_1, \ldots, b_{M+d})^{{ \mathrm{\scriptscriptstyle T} }}$ of scalars, we denote by $b_{A_j} = \{b_k: k \in A_j\}$ the subvector of elements whose indices are in the $j$-th group $A_j$. We shall use $\| \bm{a} \|_1 = |a_1| + \cdots + |a_q|$ to denote the $L_1$ norm of a generic $q$-dimensional vector $\bm{a}$, and use $\left\| x \right\|$ to denote the $L_2$ norm of a generic function $x(t)$. As our focus is on the estimation of $\beta(t)$ and $\delta$, without loss of generality, we assume that $\mu = 0$ in model \eqref{equ: 1dim_his_fun_lin_mod} in the sequel. For a fixed $0 < \gamma < 1$, the historically sparse and smooth estimators for $\beta$ and $\delta$ are defined as \begin{equation} \label{estimator: hiss_est} \hat{\beta}_n(t) = \hat{\bm{b}}_n^{ \mathrm{\scriptscriptstyle T} }\bm{B}(t) , \quad \hat{\delta}_n = t_{J_0-1}, \end{equation} where $J_0 = \min\{M + 1, \min\{l: \hat{b}_{nk} = 0 \text{ for all } k \geq l \}\}$ and $\hat{\bm{b}}_n = (\hat{b}_{n1}, \ldots, \hat{b}_{nM + d})^{{ \mathrm{\scriptscriptstyle T} }}$ minimizes the penalized least squares criterion \begin{align} \label{obj_fun: sqr_rough_gbr} \dfrac{1}{n}\sum_{i=1}^{n}\left(Y_i- \sum_{k = 1}^{M + d} b_k \int_{0}^{T} X_i(t)B_k(t)\operatorname{d} t\right)^2 + \kappa \left\| \bm{b}^{{ \mathrm{\scriptscriptstyle T} }}\bm{B}^{(m)} \right\|^2 + \lambda\sum_{j=1}^{M}c_j \left \| b_{\emph{A}_j} \right\|_1^\gamma, \end{align} with known weights $c_j$ and nonnegative tuning parameters $\kappa$ and $\lambda$. In the above criterion, the first term is the ordinary least squares error that encourages the fidelity of the model fit, while the second term is a roughness penalty that enforces smoothness of the estimate $\hat{\beta}_n(t)$. 
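For illustration, the mapping from a fitted coefficient vector to $\hat{\delta}_n = t_{J_0-1}$ in \eqref{estimator: hiss_est} can be sketched as follows (our own illustration; the code uses zero-based indexing while the text is one-based):

```python
# Sketch (ours): recover delta_hat = t_{J0 - 1} of (2) from a fitted
# coefficient vector b of length M + d, with M + 1 equally spaced knots
# on [0, T].
import numpy as np

def delta_hat(b, M, T=1.0, tol=1e-10):
    b = np.asarray(b, dtype=float)
    nonzero = np.flatnonzero(np.abs(b) > tol)
    # J0 = min(M + 1, smallest l with b_k = 0 for all k >= l)   (1-based l)
    J0 = min(M + 1, nonzero[-1] + 2 if nonzero.size else 1)
    return (J0 - 1) * T / M          # the knot t_{J0 - 1}

assert delta_hat(np.r_[1.0, 1.0, np.zeros(11)], M=10) == 0.2
assert delta_hat(np.zeros(13), M=10) == 0.0    # all-zero fit gives delta = 0
assert delta_hat(np.ones(13), M=10) == 1.0     # no zero tail gives delta = T
```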
In practice, $m=2$ is a common choice, which corresponds to measuring the roughness of a function by its integrated curvature. The last term in the objective function \eqref{obj_fun: sqr_rough_gbr} is designed to shrink the estimated slope function toward zero specifically on the tail region. It originates from the group bridge penalty that was introduced by \cite{Huang2009} for simultaneous selection of variables at both the group and within-group individual levels. In \eqref{obj_fun: sqr_rough_gbr}, the groups have a special structure: $A_1\supset \cdots \supset A_M$. In other words, the groups form a nested sequence, and hence we call the last term in \eqref{obj_fun: sqr_rough_gbr} a \emph{nested group bridge}. Due to this nested structure, if $k>j$, then one can observe in \eqref{obj_fun: sqr_rough_gbr} that (i) the coefficient $b_k$ appears in all groups where the coefficient $b_j$ appears, and (ii) $b_k$ appears in more groups than $b_j$. As a consequence, $b_k$ is always penalized more heavily than $b_j$. These two features suggest that the nested group bridge penalty shrinks more heavily those coefficients of B-spline basis functions whose support lies closer to $T$. As B-spline basis functions enjoy the aforementioned compact support property and our estimate is represented by a linear combination of such basis functions as in \eqref{estimator: hiss_est}, the progressive shrinkage of the nested group bridge encourages the estimate of $\beta(t)$ to be locally sparse specifically on the tail part of the time domain. Such an estimate is exactly what we seek in the scalar-on-function historical linear model \eqref{equ: 1dim_his_fun_lin_mod}. The weights $c_j$ are introduced to offset the effect of the differing dimensions of ${\emph{A}_j}$. As suggested by \cite{Huang2009}, a simple choice for $c_j$ is $c_j \propto |A_j|^{1-\gamma}$, where $|A_j|$ denotes the cardinality of $A_j$. 
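The progressive shrinkage can be seen directly by evaluating the nested group bridge term for two coefficient vectors carrying the same mass at opposite ends of the domain (a sketch of ours; the values of $M$, $d$, $\gamma$ and $\lambda$ are illustrative):

```python
# Sketch (ours): the nested group bridge term of (3) with A_j = {j, ..., M+d}
# and c_j = |A_j|^(1 - gamma).
import numpy as np

def nested_group_bridge(b, M, gamma=0.5, lam=1.0):
    b = np.asarray(b, dtype=float)
    total = 0.0
    for j in range(M):                          # groups A_1, ..., A_M (0-based j)
        tail = b[j:]                            # b_{A_{j+1}} in the 1-based text
        c_j = tail.size ** (1.0 - gamma)
        total += c_j * np.sum(np.abs(tail)) ** gamma
    return lam * total

# a unit coefficient on the last basis function lies in every group, so it is
# penalized more heavily than the same unit mass on the first basis function
M, d = 10, 3
early = np.zeros(M + d); early[0] = 1.0
late = np.zeros(M + d); late[-1] = 1.0
assert nested_group_bridge(late, M) > nested_group_bridge(early, M)
```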
\subsection{Computational Method} \label{equ_reform} The objective function \eqref{obj_fun: sqr_rough_gbr} is not convex and thus difficult to optimize. \cite{Huang2009} suggested the following formulation that is easier to work with. Based on Proposition 1 of \cite{Huang2009}, for $0 < \gamma <1$, if $\lambda = \tau^{1-\gamma}\gamma^{-\gamma}(1-\gamma)^{\gamma-1}$, then $\hat{\bm{b}}_n$ minimizes \eqref{obj_fun: sqr_rough_gbr} if and only if $(\hat{\bm{b}}_n, \hat{\boldsymbol{\theta}})$ minimizes \begin{align} \label{obj_fun2: sqr_rough_gbr} \dfrac{1}{n}\sum_{i=1}^{n}\left(Y_i- \sum_{k = 1}^{M + d} b_k \int_{0}^{T} X_i(t)B_k(t)\operatorname{d} t\right)^2 + \kappa \left\| \bm{b}^{{ \mathrm{\scriptscriptstyle T} }}\bm{B}^{(m)} \right\|^2 + \sum_{j=1}^{M}\theta_j^{1-1/\gamma}c_j^{1/\gamma}\|b_{A_j}\|_1+\tau\sum_{j=1}^{M}\theta_j, \end{align} subject to $\theta_j \geq 0$ $(j = 1, \ldots, M)$, where $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_M)^{{ \mathrm{\scriptscriptstyle T} }}$ and $\hat{\boldsymbol{\theta}} = (\hat{\theta}_1, \ldots, \hat{\theta}_M)^{{ \mathrm{\scriptscriptstyle T} }}$. Below we develop an algorithm following this idea. Let $\bm{U}$ denote the $n\times(M+d)$ matrix with elements $u_{ij} = \int_{0}^{T} X_i(t)B_j(t)\operatorname{d} t$, and let $\bm{V}$ denote the $(M + d) \times (M + d)$ matrix with elements $v_{ij} = \int_{0}^{T} B_i^{(m)}(t)B_j^{(m)}(t)\operatorname{d} t$. The first term of (\ref{obj_fun2: sqr_rough_gbr}) can be expressed as $ 1/n\left(\bm{Y} - \bm{U}\bm{b}\right)^{ \mathrm{\scriptscriptstyle T} }\left(\bm{Y} - \bm{U}\bm{b}\right)$ and the second term of (\ref{obj_fun2: sqr_rough_gbr}) yields $\kappa \bm{b}^{ \mathrm{\scriptscriptstyle T} }\bm{V}\bm{b}$. Since $\bm{V}$ is a positive semidefinite matrix, we can write $\bm{V} = \bm{W} \bm{W}$, where $\bm{W}$ is the symmetric matrix square root of $\bm{V}$. 
Define $$\bm{U}_* = \begin{pmatrix} \bm{U}\\ \sqrt{n\kappa}\bm{W} \end{pmatrix}\,\,\,\,\text{and}\,\,\,\, \tilde{\bm{Y}} = \begin{pmatrix} \bm{Y}\\ \mathbf{0} \end{pmatrix},$$ where $\mathbf{0}$ is the zero vector of length $M + d$. If we write $g_k = \sum_{j = 1}^{\mathrm{min}\{k, M\}}\theta_j^{1-1/\gamma}c_j^{1/\gamma}$ for $k = 1, \ldots, M + d$, then (\ref{obj_fun2: sqr_rough_gbr}) can be written in the form \begin{align} \label{obj_fun2_matrix1: sqr_rough_gbr} \dfrac{1}{n}\left(\tilde{\bm{Y}} - \bm{U}_*\bm{b}\right)^{ \mathrm{\scriptscriptstyle T} }\left(\tilde{\bm{Y}} - \bm{U}_*\bm{b}\right) + \sum_{k=1}^{M+d}g_k|b_k|+\tau\sum_{j=1}^{M}\theta_j . \end{align} Let $\bm{G}$ be the $(M + d) \times (M + d)$ diagonal matrix with the $i$th diagonal element $(ng_i)^{-1}$. With the notation $\tilde{\bm{U}} = \bm{U}_{*}\bm{G}$ and $\tilde{\bm{b}} = \bm{G}^{-1}\bm{b}$, \eqref{obj_fun2_matrix1: sqr_rough_gbr} can be expressed in the form of a lasso problem \citep{Tibshirani1996}, \begin{align*} \dfrac{1}{n}\left \{\left(\tilde{\bm{Y}} - \tilde{\bm{U}}\tilde{\bm{b}}\right)^{ \mathrm{\scriptscriptstyle T} }\left(\tilde{\bm{Y}} - \tilde{\bm{U}}\tilde{\bm{b}}\right) + \sum_{k=1}^{M+d}|\tilde{b}_{k}|\right \} + \tau\sum_{j=1}^{M}\theta_j , \end{align*} where $\tilde{b}_{k}$ denotes the $k$th element of the vector $\tilde{\bm{b}}$. Now, we take the following iterative approach to compute $\hat{\bm{b}}_n$. \begin{enumerate} \item[] \emph{Step 1.} Obtain an initial estimate $\bm{b}^{(0)}$. 
\item[] \emph{Step 2.} At iteration $s$, $s = 1, 2, \dots$, compute \begin{align*} \theta_j^{(s)} = & c_j \left( \frac{1-\gamma}{\tau\gamma} \right)^\gamma \|b_{A_j}^{(s-1)}\|_1^\gamma, \quad j =1, \dots, M, \\ g_k^{(s)} = & \sum\limits_{j = 1}^{\mathrm{min}\{k, M\}}(\theta_j^{(s)})^{1-1/\gamma}c_j^{1/\gamma}, \quad k =1, \dots, M + d, \end{align*} \begin{equation*} \\ \bm{G}^{(s)} = n^{-1} \mathrm{diag} \left ( 1/g_1^{(s)}, \dots, 1/g_{M + d}^{(s)} \right ),\quad \tilde{\bm{U}}^{(s)} = \bm{U}_{*} \bm{G}^{(s)}. \end{equation*} \item[] \emph{Step 3.} At iteration $s$, compute \begin{align} \label{alg: b} \bm{b}^{(s)} = \bm{G}^{(s)}\underset{\tilde{\bm{b}}}{\mathrm{arg\ min}} \left(\tilde{\bm{Y}} - \tilde{\bm{U}}^{(s)}\tilde{\bm{b}}\right)^{ \mathrm{\scriptscriptstyle T} }\left(\tilde{\bm{Y}} - \tilde{\bm{U}}^{(s)}\tilde{\bm{b}}\right) + \sum_{k=1}^{M+d}|\tilde{b}_{k}|. \end{align} \item[] \emph{Step 4.} Repeat \emph{Step 2} and \emph{Step 3} until convergence is reached. \end{enumerate} A choice for the initial estimate is $\bm{b}^{(0)} = (\bm{U}^{ \mathrm{\scriptscriptstyle T} } \bm{U} + n\kappa\bm{V})^{-1}\bm{U}^{ \mathrm{\scriptscriptstyle T} }\bm{Y}$, which is obtained by the smoothing spline method \citep{Cardot2003}. Once $\hat{\bm{b}}_n$ is produced, the estimates for $\beta$ and $\delta$ are given in \eqref{estimator: hiss_est}. As the nested group bridge penalty is not convex, the above algorithm converges to a local minimizer. It is worth emphasizing that (\ref{alg: b}) is a lasso problem, which can be efficiently solved by the least angle regression algorithm \citep{efron2004least}. In our fitting procedure, there are a few tuning parameters including the smoothing parameter $\kappa$, the shrinkage parameter $\lambda$, and the parameters for constructing the B-spline basis functions such as the degree $d$ of the B-spline basis and the number of knots $M+1$. 
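Before discussing how these are chosen, Steps 1--4 can be made concrete with a minimal sketch (ours, not the authors' implementation): it treats $\bm{U}$, $\bm{V}$ and $\bm{Y}$ as given, uses a Cholesky factor in place of the symmetric square root $\bm{W}$, and solves the lasso subproblem \eqref{alg: b} with scikit-learn rather than least angle regression.

```python
# Sketch (ours) of Steps 1-4: iteratively reweighted lasso for the nested
# group bridge estimate.  U (n x (M+d)), V and Y are assumed given.
import numpy as np
from numpy.linalg import cholesky, solve
from sklearn.linear_model import Lasso

def ngb_fit(U, V, Y, M, kappa, tau, gamma=0.5, n_iter=30, tol=1e-8):
    n, p = U.shape                                  # p = M + d
    c = (p - np.arange(M)) ** (1.0 - gamma)         # c_j = |A_j|^(1 - gamma)
    b = solve(U.T @ U + n * kappa * V, U.T @ Y)     # Step 1: smoothing spline
    L = cholesky(V + 1e-10 * np.eye(p))             # V ~ L L^T (jittered)
    U_star = np.vstack([U, np.sqrt(n * kappa) * L.T])
    Y_tilde = np.concatenate([Y, np.zeros(p)])
    N = n + p                                       # rows of augmented design
    for _ in range(n_iter):
        b_old = b.copy()
        # Step 2: theta_j and the cumulative weights g_k
        tails = np.array([np.abs(b[j:]).sum() for j in range(M)])
        theta = c * ((1 - gamma) / (tau * gamma)) ** gamma * tails ** gamma
        with np.errstate(divide="ignore", invalid="ignore"):
            contrib = np.where(theta > 0,
                               theta ** (1 - 1 / gamma) * c ** (1 / gamma),
                               np.inf)               # dead groups get weight inf
        g = np.array([contrib[: min(k + 1, M)].sum() for k in range(p)])
        # Step 3: lasso in the rescaled variable b~ = G^{-1} b; alpha = 1/(2N)
        # matches the unscaled objective  ||r||^2 + sum_k |b~_k|
        scale = 1.0 / (n * g)                        # diagonal of G (0 if g = inf)
        lasso = Lasso(alpha=1.0 / (2 * N), fit_intercept=False, max_iter=50000)
        lasso.fit(U_star * scale, Y_tilde)
        b = scale * lasso.coef_
        if np.abs(b - b_old).max() < tol:            # Step 4: iterate to converge
            break
    return b
```

Here scikit-learn's \texttt{Lasso} minimizes $(2N)^{-1}\|\bm{y}-\bm{X}\bm{w}\|^2 + \alpha\|\bm{w}\|_1$, so $\alpha = 1/(2N)$ reproduces the unit-weight penalty of \eqref{alg: b} up to a constant factor.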
Following the schemes of \cite{Marx1999}, \cite{Cardot2003} and \cite{Lin2017}, we choose $M$ to be relatively large to capture the local features of $\beta(t)$. In addition, $\delta$ is estimated by the knot $t_{J_0-1}$, and therefore a small $M$ may lead to a large bias of the estimator $\hat{\delta}_n$. The effect of potential overfitting caused by a large number of knots can be offset by the roughness penalty. Compared to $M$, the degree $d$ is of less importance, and therefore we fix it at a reasonable value, namely $d=3$. The smoothing parameter $\kappa$ and shrinkage parameter $\lambda$ can be chosen via the Bayesian information criterion, as follows. Let $\hat{\bm{b}}_n = \hat{\bm{b}}_{n}(\kappa, \lambda)$ be the estimate based on a chosen pair of $\kappa$ and $\lambda$. Let $\bm{U}_{\kappa, \lambda}$ denote the submatrix of $\bm{U}$ with columns corresponding to the nonzero elements of $ \hat{\bm{b}}_{n}(\kappa, \lambda)$, and $\bm{V}_{\kappa, \lambda}$ denote the submatrix of $\bm{V}$ with rows and columns corresponding to the nonzero elements of $ \hat{\bm{b}}_{n}(\kappa, \lambda)$. The approximate degrees of freedom for $\kappa$ and $\lambda$ are \begin{align*} \mathrm{df}(\kappa, \lambda) = \mathrm{trace} \left (\bm{U}_{\kappa, \lambda}(\bm{U}_{\kappa, \lambda}^{ \mathrm{\scriptscriptstyle T} }\bm{U}_{\kappa, \lambda} + n\kappa\bm{V}_{\kappa, \lambda})^{-1}\bm{U}_{\kappa, \lambda}^{ \mathrm{\scriptscriptstyle T} } \right). \end{align*} Then, the Bayesian information criterion (\textsc{bic}) can be approximated by \begin{align*} \textsc{bic}({\kappa, \lambda}) = n\log\big ( \| \bm{Y} - \bm{U}\hat{\bm{b}}_{n}(\kappa, \lambda) \|_2^2/n \big) + \log(n)\mathrm{df}(\kappa, \lambda). \end{align*} The optimal $\kappa$ and $\lambda$ are selected to minimize $\textsc{bic}({\kappa, \lambda})$. \section{Asymptotic Properties} \label{sec:theory} Let $\delta_{0}$ and $\beta_0(t)$ be the true values of the cutoff time $\delta$ and the slope function $\beta(t)$, respectively. 
We assume that the realizations $X_1,\ldots,X_n$ are fully observed, although the analysis can be extended to sufficiently densely observed data. Without loss of generality, we assume $T=1$. If $\delta_0 = 0$, set $J_1=0$, and if $\delta_0 = 1$, let $J_1 = M + d$. Otherwise, let $J_1$ be an integer such that $\delta_0 \in [t_{J_1-1}, t_{J_1})$. According to Theorem XII(6) of \cite{deBoor2001}, there exists some $\beta_{s}(t) = \sum_{j = 1}^{M+d}b_{sj}B_j(t) = \bm{B}^{ \mathrm{\scriptscriptstyle T} }\bm{b}_s$ with $\bm{b}_s = (b_{s1}, \ldots, b_{sM+d})^{ \mathrm{\scriptscriptstyle T} }$, such that $\|\beta_{s} - \beta_0\|_{\infty} \leq C_0M^{-p}$ for some positive constant $C_0$, where $p$ is defined in condition \ref{con: Holder} below. Define $b_{0j} = b_{sj} I_{(j \leq J_1)}$, $j = 1, \ldots, M+d$. For simplicity, we derive the theoretical results based on $c_j = |A_j|^{1-\gamma}$. Define $\Gamma$ as the covariance operator of the random process $X$, and $\Gamma_{n}$ as the empirical version of $\Gamma$, which is defined by \[ (\Gamma_{n}x)(v) = \frac{1}{n}\sum_{i=1}^{n}\int_0^1 X_{i}(v)X_{i}(u)x(u)\operatorname{d} u. \] For two functions $g$ and $f$ defined on $[0, 1]$, we define the inner product in the Hilbert space $L^2$ as $\langle g, f\rangle = \int_0^1 g(t)f(t) \operatorname{d} t$. Let $\bm{H}$ be the $(M+d) \times (M+d)$ matrix with elements $h_{i,j}=\innerprod{\Gamma_nB_i}{B_j}$. In order to establish our asymptotic properties, we assume that the following conditions are satisfied. \begin{enumerate}[label=\emph{C.\arabic*}] \item \label{con: x2bound} $E\|X\|_2^2 < \infty$. \item \label{con: Holder} The $k$th derivative $\beta^{(k)}(t)$ exists and satisfies the H\"{o}lder condition with exponent $\nu$, that is, $|\beta^{(k)}(t^{'}) - \beta^{(k)}(t)| \leq c |t^{'} - t|^{\nu}$ for some constant $c > 0$ and $\nu \in (0, 1]$. Define $p = k + \nu$. Assume $3/2 < p \leq d$. \item \label{con: alpha}$M = o(n^{1/2})$, $M = \omega(n^{\frac{1}{2p}})$ and $\kappa = o(n^{-1/2}M^{1/2-2m})$. 
\item \label{con: innerproduct_H} There are constants $C_{max} > C_{min} > 0$ such that \begin{align*} C_{min}M^{-1} \leq \rho_{min}(\bm{H}) \leq \rho_{max}(\bm{H}) \leq C_{max}M^{-1} \end{align*} with probability tending to one as $n$ goes to infinity, where $\rho_{min}$ and $\rho_{max}$ denote the smallest and largest eigenvalues of a matrix, respectively. \item \label{con: l1use} $\lambda\eta = O(n^{-1/2}M^{-1/2})$, where $\eta = \big(\sum\limits_{j = 1}^{J_1} c_{j}^2 \|b_{0A_j}\|_1^{2\gamma-2}|A_j|\big)^{1/2}$. \item \label{con: l2use} $\dfrac{\lambda}{M^{1-\gamma}n^{\gamma/2}} \to \infty$. \end{enumerate} Condition \ref{con: x2bound} ensures the existence of the covariance function of $X$. Condition \ref{con: Holder} concerns the smoothness of the slope function $\beta$ and has also been used by \cite{Cardot2003} and \cite{Lin2017}. Condition \ref{con: alpha} sets the growth rates of the number of knots and the smoothing parameter $\kappa$. Our analysis is carried out for $m=0$, which is equivalent to the Tikhonov regularization in \cite{Hall2007} and simplifies the arguments. A similar result can be derived for $m>0$. The last two conditions pose constraints on the rates of $\lambda$ and $\eta$ (and hence on $\gamma$). Similar conditions appear in \cite{Wang2015}. Below we state the main results, and relegate their proofs to the supplementary file. Our first result provides the convergence rate of the estimator $\hat{\beta}_n$ defined in \eqref{estimator: hiss_est}. \begin{theorem}[Convergence Rate]\label{thm:convergence-rate} Suppose that conditions \ref{con: x2bound}--\ref{con: l2use} hold. Then, $\|\hat{\beta}_n - \beta_0\|_2 =O_p(Mn^{-1/2}+M^{-p})$. \end{theorem} The convergence rate consists of two competing components, the variance term $Mn^{-1/2}$ and the bias term $M^{-p}$. As $M$ increases, the approximation to $\beta(t)$ by B-spline basis functions improves, at the cost of increased variance. 
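Under \ref{con: alpha}, the two terms in the rate can be compared directly; the following elementary check (ours, not stated in the text) shows that the variance term dominates and that the rate is $o_p(1)$:
\[
M=\omega\big(n^{\frac{1}{2p}}\big)\ \Rightarrow\ M^{-p}=o\big(n^{-1/2}\big)=o\big(Mn^{-1/2}\big),
\qquad
M=o\big(n^{1/2}\big)\ \Rightarrow\ Mn^{-1/2}=o(1).
\]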
The next result shows that the null tail of $\beta(t)$, as well as the cutoff time $\delta$, can be consistently estimated. \begin{theorem}[Consistency]\label{thm:consistency} Suppose that conditions \ref{con: x2bound}--\ref{con: l2use} hold. \begin{itemize} \item[(i)] For any $ \zeta \in (0, 1-\delta_{0})$, $\hat{\beta}_n(t)= 0$ for all $t \in [\delta_{0}+\zeta, 1]$ with probability tending to $1$. \item[(ii)] $\hat{\delta}_n$ converges to $\delta_{0}$ in probability. \end{itemize} \end{theorem} \section{Simulation Studies} \label{sec:simulation} We conduct simulation studies to evaluate the numerical performance of our nested group bridge method, and compare the results with the smoothing spline approach, as well as the two truncation methods proposed by \cite{Hall2016}. The two truncation methods first expand the slope function with an orthonormal basis and then penalize $\delta$ by adding a penalty on $\delta^2$ to the least squares. Two estimation procedures were suggested there: the first (called Method A) estimates $\delta$ and $\beta(t)$ simultaneously, while the second (called Method B) estimates them iteratively. \begin{figure} \captionsetup{justification=centering,margin=2cm} \includegraphics[width=1.9in]{beta1_4x4} \includegraphics[width=1.9in]{beta2_4x4} \includegraphics[width=1.9in]{beta3_4x4} \caption{The slope functions in three scenarios.} \label{fig:true-beta} \end{figure} In our studies, for the purpose of fair comparison, we consider the same scenarios for $\beta(t)$ as in \cite{Hall2016}, namely, Scenario \RomanNumeralCaps{1}. $\beta(t) = \emph{I}_{(0\leq t < 0.5)}$, Scenario \RomanNumeralCaps{2}. $\beta(t) = \mathrm{sin}(2\pi t)\emph{I}_{(0 \leq t < 0.5)}$, Scenario \RomanNumeralCaps{3}. $\beta(t) = \left(\mathrm{cos}(2\pi t)+1\right)\emph{I}_{(0 \leq t < 0.5)}$, \noindent where $I_{(\cdot)}$ denotes the indicator function. 
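This data-generating design can be reproduced as follows (a sketch of ours; the evaluation grid and quadrature are our own choices, and the signal-to-noise ratio is interpreted here as a ratio of standard deviations):

```python
# Sketch (ours) of the simulation design: predictors X_i(t) = sum_j a_ij B_j(t)
# from cubic B-splines on 64 equally spaced knots, a_ij ~ N(0, 1), and noise
# scaled to a signal-to-noise ratio of 2.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)

betas = {                                       # the three scenarios
    "I": np.where(t < 0.5, 1.0, 0.0),
    "II": np.where(t < 0.5, np.sin(2 * np.pi * t), 0.0),
    "III": np.where(t < 0.5, np.cos(2 * np.pi * t) + 1.0, 0.0),
}

deg, K = 3, 64                                  # cubic splines, 64 knots on [0, 1]
knots = np.r_[[0.0] * deg, np.linspace(0.0, 1.0, K), [1.0] * deg]
p = len(knots) - deg - 1                        # number of basis functions

n = 100
A = rng.standard_normal((n, p))                 # coefficients a_ij
X = np.stack([BSpline(knots, a, deg)(t) for a in A])

w = np.full(t.size, t[1] - t[0])                # trapezoid quadrature weights
w[[0, -1]] *= 0.5
signal = (X * betas["II"]) @ w                  # int_0^1 X_i(t) beta(t) dt
sigma = signal.std() / 2.0                      # signal-to-noise ratio of 2
Y = signal + rng.normal(scale=sigma, size=n)
```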
For all cases the slope function satisfies $\beta(t) > 0$ on $(0, 0.5)$ and $\beta(t) = 0$ on $[0.5, 1]$. As illustrated in Figure \ref{fig:true-beta}, the slope function is discontinuous for Scenario \RomanNumeralCaps{1}, and the first and second derivatives of the slope functions are discontinuous for Scenarios \RomanNumeralCaps{2} and \RomanNumeralCaps{3}, respectively. The predictor functions $X_i(t)$ are generated by $X_i(t) = \sum a_{ij}B_j(t)$, where $B_j(t)$ are cubic B-spline basis functions defined on $64$ equally spaced knots over $[0, 1]$ (the number $64$ was selected at random between $50$ and $100$), and the coefficients $a_{ij}$ are generated from the standard normal distribution. The errors $\varepsilon$ are normally distributed and sampled so that the signal-to-noise ratio equals $2$. We consider sample sizes $n = 100$ and $n = 500$. For each of the three scenarios and for each sample size, we replicate the simulation independently $200$ times. For our nested group bridge approach, we choose $c_j = |A_j|^{1-\gamma}/\|b^{(0)}_{A_j}\|_2^\gamma$, where dividing $|A_j|^{1-\gamma}$ by $\|b^{(0)}_{A_j}\|_2^\gamma$ borrows the idea of the adaptive lasso \citep{Zou2006}. We obtain $\bm{b}^{(0)}$ by the smoothing spline method \citep{Cardot2003}. We set $m = 2$ and $\gamma = 0.5$. Table \ref{table: comparison_mean_sd_delta} summarizes the Monte Carlo mean and standard deviation of $\hat{\delta}$. The results suggest that the proposed estimator is more accurate than the truncation methods in Scenario \RomanNumeralCaps{3}, where the second derivative of the slope function is discontinuous. On the other hand, in Scenarios \RomanNumeralCaps{1} and \RomanNumeralCaps{2}, where the slope function and the first derivative of the slope function are discontinuous, respectively, the proposed method is comparable to the truncation method B and better than the truncation method A. 
It is observed here and also discussed in \cite{Hall2016} that the truncation methods tend to underestimate $\delta$ and exhibit a large bias when $\beta(t)$ is smooth. The numbers in Table \ref{table: comparison_mean_sd_delta} seem inconsistent with those reported in \cite{Hall2016}. A possible reason is that the predictor functions $X_i(t)$ in their paper are much smoother than those in our setting. Indeed, an exponential decay in eigenvalues was assumed in the simulation setup of \cite{Hall2016}, corresponding to rather smooth predictor functions. However, such smooth random functions might not be common in practice. The histograms shown in Figure \ref{fig: histofdelta} provide more details of the performance of our method. They indicate that when $\beta(t)$ is not smooth, the proposed estimator is conservative, in the sense that $\hat{\delta}>\delta_0$, which might be better than being aggressive when accurate prediction of the response is the primary goal. \begin{table}[H] \centering \begin{threeparttable} \def~{\hphantom{0}} \caption{The mean of estimators for $\delta$ based on $200$ simulation replications with the corresponding Monte Carlo standard deviation included in parentheses. 
}{ \begin{tabular}{lcccc} \hline & NGR & TR \scriptsize{(Method A)} & TR \scriptsize{(Method B)} & True Value\\ \hline Scenario \RomanNumeralCaps{1} \\ \quad $n=100$ & $0.66$ \scriptsize{($0.06$)} & $0.30$ \scriptsize{($0.13$)} & $0.27$ \scriptsize{($0.08$)} & $0.50$\\ \quad $n=500$ & $0.65$ \scriptsize{($0.05$)} & $0.35$ \scriptsize{($0.13$)} & $0.39$ \scriptsize{($0.12$)} & $0.50$\\ \hline Scenario \RomanNumeralCaps{2} \\ \quad $n=100$ & $0.60$ \scriptsize{($0.07$)} & $0.34$ \scriptsize{($0.14$)} & $0.31$ \scriptsize{($0.09$)} & $0.50$\\ \quad $n=500$ & $0.59$ \scriptsize{($0.03$)} & $0.34$ \scriptsize{($0.09$)} & $0.41$ \scriptsize{($0.07$)} & $0.50$\\ \hline Scenario \RomanNumeralCaps{3} \\ \quad $n=100$ & $0.50$ \scriptsize{($0.09$)} & $0.26$ \scriptsize{($0.10$)} & $0.25$ \scriptsize{($0.05$)} & $0.50$\\ \quad $n=500$ & $0.51$ \scriptsize{($0.04$)} & $0.26$ \scriptsize{($0.10$)} & $0.30$ \scriptsize{($0.05$)} & $0.50$\\ \hline \end{tabular}} \label{table: comparison_mean_sd_delta} \begin{tablenotes} \setlength\labelsep{0pt} \footnotesize \item NGR, our proposed nested group bridge method; TR (Method A), the truncation method that estimates $\delta$ and $\beta(t)$ simultaneously; TR (Method B), the truncation method that estimates $\delta$ and $\beta(t)$ iteratively. \end{tablenotes} \end{threeparttable} \end{table} \begin{figure} \captionsetup{justification=centering,margin=2cm} \includegraphics[width=1.9in]{hist_delta_beta1_100_SNR2_4x4} \includegraphics[width=1.9in]{hist_delta_beta2_100_SNR2_4x4} \includegraphics[width=1.9in]{hist_delta_beta3_100_SNR2_4x4} \caption{Histograms of the estimated $\delta$ based on $200$ Monte Carlo replications with $n = 100$, in the three scenarios. The red vertical lines indicate the true value of $\delta$. 
} \label{fig: histofdelta} \end{figure} To examine the quality of the estimation for $\beta(t)$, we report the mean integrated squared errors of the estimated $\beta(t)$ in Table \ref{table: comparison_ise_beta}. It is observed that in general, the proposed estimator outperforms the smoothing spline method and the two truncation methods. The truncation methods do not regularize the roughness of the estimated slope function, which leads to a less favorable performance when the predictor function is relatively rough, as in our setting and common in practice. The smoothing spline method is comparable to the proposed method in terms of estimation accuracy of $\beta(t)$, but it is unable to provide an estimate for $\delta$. 
\begin{table}[H] \centering \begin{threeparttable} \def~{\hphantom{0}} \caption{Mean integrated squared errors of estimators for $\beta(t)$ based on $200$ simulation replications with the corresponding Monte Carlo standard deviation included in parentheses.}{ \begin{tabular}{lcccc} \hline & NGR & SS & TR \scriptsize{(Method A)} & TR \scriptsize{(Method B)} \\ \hline Scenario \RomanNumeralCaps{1} \\ n = $100$ & $0.0254$ \scriptsize{($0.0093$)}& $0.0457$ \scriptsize{($0.0170$)}& $0.3866$ \scriptsize{($0.0629$)}& $0.3680$ \scriptsize{($0.0800$)}\\ n = $500$ & $0.0142$ \scriptsize{($0.0038$)}& $0.0189$ \scriptsize{($0.0050$)}& $0.3148$ \scriptsize{($0.0750$)}& $0.2160$ \scriptsize{($0.1087$)}\\ \hline Scenario \RomanNumeralCaps{2} \\ n = $100$ & $0.0064$ \scriptsize{($0.0044$)}& $0.0144$ \scriptsize{($0.0070$)}& $0.1771$ \scriptsize{($0.0415$)}& $0.1560$ \scriptsize{($0.0559$)}\\ n = $500$ & $0.0021$ \scriptsize{($0.0011$)}& $0.0024$ \scriptsize{($0.0015$)}& $0.1433$ \scriptsize{($0.0490$)}& $0.0747$ \scriptsize{($0.0456$)}\\ \hline Scenario \RomanNumeralCaps{3}\\ n = $100$ & $0.0136$ \scriptsize{($0.0105$)}& $0.0246$ \scriptsize{($0.0150$)}& $0.4748$ \scriptsize{($0.1504$)}& $0.3958$ \scriptsize{($0.1285$)}\\ n = $500$ & $0.0034$ \scriptsize{($0.0025$)}& $0.0064$ \scriptsize{($0.0044$)}& $0.3495$ \scriptsize{($0.1510$)}& $0.2196$ \scriptsize{($0.0872$)}\\ \hline \end{tabular}} \label{table: comparison_ise_beta} \begin{tablenotes} \footnotesize \item NGR, our proposed nested group bridge method; SS, the smoothing spline method; TR (Method A), the truncation method that estimates $\delta$ and $\beta(t)$ simultaneously; TR (Method B), the truncation method that estimates $\delta$ and $\beta(t)$ iteratively. 
\end{tablenotes} \end{threeparttable} \end{table} \section{Applications: Particulate Matter Emissions Data} \label{sec:application} In this section, we demonstrate the proposed approach on particulate matter emissions data taken from the Coordinating Research Council E55/E59 research project \citep{clark2007heavy}. In this project, trucks were placed on a chassis dynamometer bed to mimic inertia, and particulate matter was measured by an emission analyzer on standard test cycles. The engine acceleration of the diesel trucks was also recorded. We are interested in determining the effects of past engine acceleration on the current particulate matter emission and, in particular, in identifying the cutoff time beyond which past acceleration has no predictive power for the current particulate matter emission. As in \cite{Hall2016}, we take one observation every $10$ seconds after the first $120$ seconds to reduce dependence in the data. Let $Y_i$ be the logarithm of the particulate matter emission measured at the $i$-th such time point, and $X_i(t), t\in[0,60],$ be the corresponding engine acceleration at time $t$ in the past. Both $Y_i$ and $X_i(t)$ are centered such that $\mathbf{E} Y_i\equiv 0$ and $\mathbf{E} X_i(t)\equiv 0$. We estimate the functional linear model (\ref{equ: 1dim_his_fun_lin_mod}), where $\mu=0$, the engine acceleration in the past $60$ seconds $X_i(t)$ is the predictor curve, and $T=60$. In total, we have $108$ such samples. Figure \ref{fig: analysis_truck_data}(a) displays $10$ randomly selected smoothed engine acceleration curves recorded every second for $60$ seconds. Figure \ref{fig: analysis_truck_data}(b) and (c) provide estimates for $\beta(t)$ obtained by the proposed approach and the smoothing spline method, respectively, both of which use cubic B-spline basis functions. The proposed estimate $\hat{\beta}(t)$ is zero over $[20,60]$ and the estimate for $\delta$ is $20$s.
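Once the nested group bridge estimate is available on a grid, the point estimate of $\delta$ is read off as the right end of the support of $\hat{\beta}$. A sketch of this read-off step (the piecewise function below is a hypothetical stand-in for $\hat{\beta}$, chosen to vanish on $[20,60]$ as in the data analysis):

```python
import numpy as np

def cutoff_estimate(beta_hat, grid, tol=1e-8):
    """Estimate delta as the right end of the support of beta_hat:
    the largest grid point at which |beta_hat| exceeds tol."""
    nonzero = np.abs(beta_hat(grid)) > tol
    if not nonzero.any():
        return grid[0]
    return grid[nonzero][-1]

# Hypothetical estimated slope: positive on [0, 20), zero on [20, 60]
beta_hat = lambda t: np.clip(20.0 - t, 0.0, None)
grid = np.linspace(0.0, 60.0, 601)
delta_hat = cutoff_estimate(beta_hat, grid)
```

The resolution of `delta_hat` is limited by the grid spacing; in practice the grid is taken as the knot sequence of the B-spline basis.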
This suggests that the engine acceleration influences particulate matter emission for no longer than 20 seconds. A similar trend can be observed for the smoothing spline method, which, however, does not give a clear cutoff time for the influence of acceleration on particulate matter emission. \cite{Hall2016} suggested that the point estimate for $\delta$ is $13$s using \emph{Method A} and $15$s using \emph{Method B}, both of which are more aggressive than our estimator. We also construct a 95\% pointwise bootstrap pivotal confidence interval for $\beta(t)$, which is depicted in Figure \ref{fig: analysis_truck_data}(b) together with the proposed estimate. The bootstrap confidence interval is derived by resampling the residuals, re-estimating $\beta(t)$, and calculating sample quantiles at each time point $t$. Let $\hat{\beta}^*_{b}(t)$ denote the level-$b$ sample quantile of the re-estimated slope functions at time point $t$. The $1-a$ bootstrap pivotal confidence interval for $\beta(t)$ is $(2\hat{\beta}(t) - \hat{\beta}^*_{1-a/2}(t), 2\hat{\beta}(t) - \hat{\beta}^*_{a/2}(t))$. For further details on pivotal bootstrap confidence intervals, we refer readers to \cite{wasserman2010}, Chapter $8$. The $95$\% pointwise bootstrap confidence interval in Figure \ref{fig: analysis_truck_data}(b) implies that there is little effect of the acceleration on the current particulate matter emission beyond the past $30$ seconds. \begin{figure} \captionsetup{justification=centering,margin=2cm} \includegraphics[width=1.9in]{app_truck_data_4x4} \includegraphics[width=1.9in]{app_truck_beta_4x4} \includegraphics[width=1.9in]{app_truck_beta_smooth_hiss_4x4} \caption{(a) $10$ randomly selected smoothed acceleration curves. (b) Estimated $\beta(t)$ using the proposed approach with dashed lines representing the $95$\% pointwise bootstrap confidence interval.
(c) Estimated $\beta(t)$ using the smoothing spline method (grey dashed line) and the proposed approach (black solid line).} \label{fig: analysis_truck_data} \end{figure} \section{Concluding Remarks} In this paper, we study the relation between a scalar response and a functional predictor in a historical functional linear model. We propose a nested group bridge approach to achieve historical sparsity, which reduces variability and enhances interpretability. Compared with the truncation methods of \cite{Hall2016}, the proposed approach provides a smooth and continuous estimate of the coefficient function and performs much better when the coefficient function tends to zero smoothly. The proposed estimator of the cutoff time is consistent. We demonstrate in simulation studies and a real data application that the proposed approach performs well for predictor functions that are not very smooth. We also show that even when the signal-to-noise ratio is low, our proposed approach still performs well. \section*{Supplementary materials} \label{SM} A supplementary document is available online at {\it Journal of Computational and Graphical Statistics}, which includes proofs of the theoretical results. The R code for our real data analysis and the simulation studies can be downloaded online.
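The pivotal bootstrap construction used in the data analysis applies to any scalar functional of the fit, such as $\hat{\beta}(t)$ at a fixed $t$. A generic sketch of the interval $(2\hat{\theta} - \hat{\theta}^*_{1-a/2},\, 2\hat{\theta} - \hat{\theta}^*_{a/2})$ (mean estimation with residual resampling is a toy stand-in for re-estimating the slope function):

```python
import numpy as np

rng = np.random.default_rng(0)

def pivotal_ci(theta_hat, boot_estimates, alpha=0.05):
    """(1 - alpha) pivotal bootstrap interval:
    (2*theta_hat - q_{1-alpha/2}, 2*theta_hat - q_{alpha/2}),
    where the q's are sample quantiles of the bootstrap re-estimates."""
    hi = np.quantile(boot_estimates, 1.0 - alpha / 2.0)
    lo = np.quantile(boot_estimates, alpha / 2.0)
    return 2.0 * theta_hat - hi, 2.0 * theta_hat - lo

# Toy model: estimate a mean, bootstrap it by resampling residuals
y = rng.normal(loc=1.0, scale=0.5, size=200)
theta_hat = y.mean()
residuals = y - theta_hat
boot = np.array([(theta_hat + rng.choice(residuals, size=y.size)).mean()
                 for _ in range(1000)])
lower, upper = pivotal_ci(theta_hat, boot)
```

Applying `pivotal_ci` separately at each grid point $t$, with the re-estimated slope functions as `boot_estimates`, yields the pointwise band of the figure.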
\section{Introduction} \input{parts/part0-introduction} \section{The iteration framework}\label{sect:framework} \input{parts/part1-framework} \section{Preservation of hyperimmunity}\label{sect:hyperimmunity} \input{parts/part2-hyperimmunity} \section{Basic preservations of hyperimmunity}\label{sect:basic-hyperimmunity} \input{parts/part3-basic-hyperimmunity} \section{The Erd\H{o}s-Moser theorem and preservation of hyperimmunity}\label{sect:em-hyperimmunity} \input{parts/part4-em-hyperimmunity} \section{Thin set theorem and preservation of hyperimmunity}\label{sect:ts-hyperimmunity} \input{parts/part5-ts2-hyperimmunity} \vspace{0.5cm} \noindent \textbf{Acknowledgements}. The author is thankful to Wei Wang for useful comments and discussions. \vspace{0.5cm} \bibliographystyle{plain} \section{Preservation of hyperimmunity} \begin{proof}[Lemma~\ref{lem:hyperimmune-fairness}] Let~$D_0, D_1, \dots$ be a computable list of all finite sets. \begin{itemize} \item Fix some set~$Z$ and some $Z$-hyperimmune set~$B$. For every essential $\Sigma^{0,Z}_1$ formula~$\varphi(U)$, define the $Z$-computable function~$f$ inductively so that $\varphi(D_{f(0)})$ holds and for every~$i$, $D_{f(i+1)} > D_{f(i)}$ and~$\varphi(D_{f(i+1)})$ holds. Because~$\varphi(U)$ is essential, the function~$f$ is total. $\{ D_{f(i)}\}_{i \geq 0}$ is a $Z$-c.e.\ weak array, so by~$Z$-hyperimmunity, $D_{f(i)} \cap B = \emptyset$ for some~$i$, hence $D_{f(i)} \subseteq \overline{B}$ and~$\varphi(D_{f(i)})$ holds. \item Fix some set~$Z$ and some set~$B$ such that the fairness property of Lemma~\ref{lem:hyperimmune-fairness} holds. For every $Z$-c.e.\ weak array $\{ D_{f(i)}\}_{i \geq 0}$, define the~$\Sigma^{0,Z}_1$ formula $\varphi(U) = (\exists i)[U = D_{f(i)}]$. The formula~$\varphi(U)$ is essential, so there exists some finite set~$A \subseteq \overline{B}$ such that~$\varphi(A)$ holds. In particular, there exists some~$i$ such that~$D_{f(i)} \subseteq \overline{B}$. 
\qed \end{itemize} \end{proof} \begin{proof}[Lemma~\ref{lem:hyperimmunity-separation}] Fix a set~$X_0$, a countable collection of~$X_0$-hyperimmune sets~$B_0, B_1, \dots$ and an~$X_0$-computable $\mathsf{Q}$-instance~$J$ such that for every solution~$Y$ to~$J$, one of the~$B$'s is not~$Y \oplus X_0$-hyperimmune. By preservation of hyperimmunity of~$\mathsf{P}$ and carefully choosing a sequence of $\mathsf{P}$-instances~$I_0, I_1, \dots$, we can define an infinite sequence of sets~$X_1, X_2, \dots$ such that for each~$n \in \omega$ \begin{itemize} \item[(a)] $X_{n+1}$ is a solution to the~$\mathsf{P}$-instance~$I_n^{X_0 \oplus \dots \oplus X_n}$ \item[(b)] The~$B$'s are $X_0 \oplus \dots \oplus X_n$-hyperimmune \item[(c)] For every~$X_0 \oplus \dots \oplus X_n$-computable $\mathsf{P}$-instance~$I$, there exists some~$m$ such that~$I = I_m^{X_0 \oplus \dots \oplus X_m}$. \end{itemize} Let~$\mathcal{M}$ be the~$\omega$-structure whose second order part is the Turing ideal \[ \mathcal{I} = \{ Y : (\exists n)[ Y \leq_T X_0 \oplus \dots \oplus X_n] \} \] In particular, the~$\mathsf{Q}$-instance $J$ is in $\mathcal{I}$, but the~$B$'s are $Y$-hyperimmune for every~$Y \in \mathcal{I}$, so $J$ has no solution~$Y \in \mathcal{I}$ and $\mathcal{M} \not \models \mathsf{Q}$. By construction of~$\mathcal{I}$, every $\mathsf{P}$-instance~$I \in \mathcal{I}$ has a solution~$X_n \in \mathcal{I}$, so by Friedman~\cite{Friedman1974Some}, $\mathcal{M} \models \s{RCA}_0 \wedge \mathsf{P}$. \qed \end{proof} \section{Basic preservation proofs} The proofs in this appendix are inspired by Wang's work in~\cite{Wang2014Definability}. \begin{proof}[Item 1 of Theorem~\ref{thm:typicality-hyperimmunity}] It suffices to prove that for every~$\Sigma^{0,Z}_1$ formula~$\varphi(G, U)$ and every~$i \in \omega$, the set of conditions~$\sigma$ forcing~$\varphi(G, U)$ not to be essential or such that $\varphi(\sigma, A)$ holds for some finite set~$A \subset \overline{B_i}$ is dense.
Fix any string~$\sigma \in 2^{<\omega}$. Define \[ \psi(U) = (\exists \tau \succeq \sigma)\varphi(\tau, U) \] The formula~$\psi(U)$ is~$\Sigma^{0,Z}_1$, so by $Z$-hyperimmunity of~$B_i$, either~$\psi(U)$ is not essential, or~$\psi(A)$ holds for some finite set~$A \subseteq \overline{B_i}$. If~$\psi(U)$ is not essential with witness $x \in \omega$, then~$\sigma$ forces~$\varphi(G, U)$ not to be essential with the same witness. If~$\psi(U)$ is essential, then there exists some finite set~$A \subset \overline{B_i}$ such that~$\psi(A)$ holds. Unfolding the definition of~$\psi(A)$, there exists some~$\tau \succeq \sigma$ such that~$\varphi(\tau, A)$ holds. The condition $\tau$ is an extension such that~$\varphi(\tau, A)$ holds for some~$A \subset \overline{B_i}$. \qed \end{proof} \begin{proof}[Item 2 of Theorem~\ref{thm:typicality-hyperimmunity}] It suffices to prove that for every~$\Sigma^{0,Z}_1$ formula~$\varphi(G, U)$ and every~$i \in \omega$, the following class is Lebesgue null: \[ \mathcal{S} = \{X : [\varphi(X,U) \mbox{ is essential }] \wedge (\forall A \subseteq_{\tiny \texttt{fin}} \omega)[\varphi(X,A) \rightarrow A \not \subseteq \overline{B_i}] \} \] Suppose that this is not the case. Then there exists~$\sigma \in 2^{<\omega}$ such that \[ \mu(X \in \mathcal{S} : \sigma \prec X) > 2^{-|\sigma|-1} \] Define \[ \psi(U) = [\mu(X : (\exists \tilde{A} \subseteq U)\varphi(X,\tilde{A})) > 2^{-|\sigma|-1}] \] The formula~$\psi(U)$ is~$\Sigma^{0,Z}_1$ and by compactness, $\psi(U)$ is essential. By~$Z$-hyperimmunity of~$B_i$, there exists some finite set~$A \subseteq \overline{B_i}$ such that~$\psi(A)$ holds. For every set~$A$ such that~$\psi(A)$ holds, there exists some~$X \in \mathcal{S}$ and some~$\tilde{A} \subseteq A$ such that~$\varphi(X,\tilde{A})$ holds. By definition of~$X \in \mathcal{S}$, $\tilde{A} \not \subseteq \overline{B_i}$ and therefore~$A \not \subseteq \overline{B_i}$. Contradiction.
\qed \end{proof} \begin{proof}[Theorem~\ref{thm:wkl-hyperimmunity}] Fix some set~$Z$, some countable collection of~$Z$-hyperimmune sets~$B_0, B_1, \dots$ and some infinite $Z$-computable tree~$T \subseteq 2^{<\omega}$. Our forcing conditions are~$(\sigma, R)$ where~$\sigma$ is a stem of the infinite, $Z$-computable tree~$R \subseteq T$. A condition~$(\tau, S)$ \emph{extends} $(\sigma, R)$ if~$\sigma \preceq \tau$ and~$S \subseteq R$. The result is a direct consequence of the following lemma. \begin{lemma}\label{lem:wkl-preservation-lemma} For every condition~$c = (\sigma, R)$, every~$\Sigma^{0,Z}_1$ formula~$\varphi(G, U)$ and every~$i \in \omega$, there exists an extension~$d = (\tau, S)$ such that $\varphi(P, U)$ is not essential for every path~$P \in [S]$, or $\varphi(\tau, A)$ holds for some~$A \subseteq \overline{B_i}$. \end{lemma} \begin{proof} Define \[ \psi(U) = (\exists s)(\forall \tau \in R \cap 2^s)(\exists \tilde{A} \subseteq_{\tiny \texttt{fin}} U)\varphi(\tau, \tilde{A}) \] The formula~$\psi(U)$ is~$\Sigma^{0,Z}_1$ so we have two cases: \begin{itemize} \item Case 1: $\psi(U)$ is not essential with some witness~$x$. By compactness, the following set is an infinite $Z$-computable subtree of~$R$: \[ S = \{ \tau \in R : (\forall A > x) \neg \varphi(\tau, A) \} \] The condition~$d = (\sigma, S)$ is an extension such that~$\varphi(P, U)$ is not essential for every~$P \in [S]$. \item Case 2: $\psi(U)$ is essential. By~$Z$-hyperimmunity of~$B_i$, there exists some finite set~$A \subseteq \overline{B_i}$ such that~$\psi(A)$ holds. Unfolding the definition of~$\psi(A)$, there exists some~$\tau \in R$ such that~$R^{[\tau]}$ is infinite and~$\varphi(\tau, \tilde{A})$ holds for some~$\tilde{A} \subseteq A \subseteq \overline{B_i}$. The condition $d = (\tau, R^{[\tau]})$ is an extension such that~$\varphi(\tau, \tilde{A})$ holds for some finite set~$\tilde{A} \subseteq \overline{B_i}$.
\end{itemize} \end{proof} Using Lemma~\ref{lem:wkl-preservation-lemma}, define an infinite descending sequence of conditions~$c_0 = (\epsilon, T) \geq c_1 \geq \dots$ such that for each~$s \in \omega$ \begin{itemize} \item[(i)] $|\sigma_s| \geq s$ \item[(ii)] $\varphi(P, U)$ is not essential for every path~$P \in [R_{s+1}]$, or $\varphi(\sigma_{s+1}, A)$ holds for some finite set~$A \subseteq \overline{B_i}$ if~$s = \tuple{\varphi, i}$ \end{itemize} where~$c_s = (\sigma_s, R_s)$. \qed \end{proof} \section{Thin set for pairs and preservation of hyperimmunity} All the proofs in this section are very similar to~\cite{Patey2015weakness}. We reprove everything in the context of preservation of hyperimmunities for the sake of completeness. \begin{definition}[Strong preservation of $n$ hyperimmunities]\ A~$\Pi^1_2$ statement~$\mathsf{P}$ \emph{admits strong preservation of $n$ hyperimmunities} if for each set $Z$, each $Z$-hyperimmune sets $B_0, \dots, B_{n-1}$ and each (arbitrary) $\mathsf{P}$-instance $X$, there exists a solution $Y$ to~$X$ such that the~$B$'s are $Y \oplus Z$-hyperimmune. \end{definition} The following lemma has been proven by the author in its full generality in~\cite{PateyCombinatorial}. We reprove it in the context of preservation of $n$ hyperimmunities. \begin{lemma}\label{lem:ts-strong-to-weak} For every~$k,n \geq 1$ and~$m \geq 2$, if~$\ts^k_m$ admits strong preservation of~$n$ hyperimmunities, then $\ts^{k+1}_m$ admits preservation of~$n$ hyperimmunities. \end{lemma} \begin{proof} Fix any set~$Z$, some~$Z$-hyperimmune sets~$B_0, \dots, B_{n-1}$ and any $Z$-computable coloring $f : [\omega]^{k+1} \to m$. Consider the uniformly~$Z$-computable sequence of sets~$R_{\sigma,i}$ defined for each~$\sigma \in [\omega]^k$ and~$i < m$ by \[ R_{\sigma,i} = \{s \in \omega : f(\sigma,s) = i\} \] As~$\s{COH}$ admits preservation of~$n$ hyperimmunities, there exists some~$\vec{R}$-cohesive set~$G$ such that the~$B$'s are $G \oplus Z$-hyperimmune. 
The cohesive set induces a $(G \oplus Z)'$-computable coloring~$\tilde{f} : [\omega]^k \to m$ defined by: \[ (\forall \sigma \in [\omega]^k) \tilde{f}(\sigma) = \lim_{s \in G} f(\sigma,s) \] As~$\ts^k_m$ admits strong preservation of $n$ hyperimmunities, there exists an infinite $\tilde{f}$-thin set~$H$ such that the~$B$'s are $H \oplus G \oplus Z$-hyperimmune. $H \oplus G \oplus Z$ computes an infinite $f$-thin set. \end{proof} Thanks to Lemma~\ref{lem:ts-strong-to-weak}, it suffices to prove the following theorem to deduce Theorem~\ref{thm:ts2-hyperimmunity-preservation}. \begin{theorem}\label{thm:ts1-strong-preservation} For every~$n \geq 1$, $\ts^1_{n+1}$ admits strong preservation of $n$ hyperimmunities. \end{theorem} The remainder of this section is devoted to the proof of Theorem~\ref{thm:ts1-strong-preservation}. Fix some set~$Z$, some~$Z$-hyperimmune sets~$B_0, \dots, B_{n-1}$ and some $(n+1)$-partition~$A_0 \cup \dots \cup A_n = \omega$. We will construct an infinite set~$G$ such that $G \cap \overline{A_i}$ is infinite for each~$i \leq n$ and the $B$'s are $(G \cap \overline{A_i}) \oplus Z$-hyperimmune for some~$i \leq n$. Our forcing conditions are Mathias conditions~$(F, X)$ such that the $B$'s are $X \oplus Z$-hyperimmune. \subsection{Forcing limitlessness} We want to satisfy the following scheme of requirements to ensure that~$G \cap \overline{A_i}$ is infinite for each~$i \leq n$. \[ \mathcal{Q}_p : (\exists m_0, \dots, m_n > p)[m_0 \in G \cap \overline{A_0} \wedge \dots \wedge m_n \in G \cap \overline{A_n}] \] We say that an~$(n+1)$-partition~$A_0 \cup \dots \cup A_n = \omega$ is \emph{non-trivial} if there exists no infinite set~$H$ thin for the $A$'s such that the $B$'s are $H \oplus Z$-hyperimmune. A condition~$(F, X)$ \emph{forces $\mathcal{Q}_p$} if there exist~$m_0, \dots, m_n > p$ such that~$m_i \in F \cap \overline{A_i}$ for each~$i \leq n$.
Therefore, if~$G$ satisfies~$c$ and~$c$ forces~$\mathcal{Q}_p$, then~$G$ satisfies the requirement~$\mathcal{Q}_p$. We now prove that the set of conditions forcing~$\mathcal{Q}_p$ is dense for each~$p \in \omega$. Therefore, every sufficiently generic filter will induce~$n+1$ infinite sets~$G \cap \overline{A_0}, \dots, G \cap \overline{A_n}$. \begin{lemma}\label{lem:ts2-reduc-force-Q} For every condition~$c$ and every~$p \in \omega$, there is a condition~$d$ extending~$c$ such that~$d$ forces~$\mathcal{Q}_p$. \end{lemma} \begin{proof} Fix some~$p \in \omega$. It is sufficient to show that for a condition~$c = (F, X)$ and some~$i \leq n$, there exists an extension~$d_0 = (H, Y)$ and some integer~$m_i > p$ such that~$m_i \in H \cap \overline{A_i}$. By iterating the process for each~$i \leq n$, we obtain the desired extension~$d$. Suppose for the sake of contradiction that~$X \cap \overline{A_i}$ is finite. Then one can $X$-compute an infinite set~$H$ thin for the~$A$'s with witness $j$ for any~$j \neq i$, contradicting the non-triviality of the partition. Therefore, there exists an~$m_i \in X \cap \overline{A_i}$, $m_i > p$. The condition $d_0 = (F \cup \{m_i\}, X \setminus [0, m_i] )$ is the desired extension. \qed \end{proof} \subsection{Forcing non-preservation} Fix an enumeration~$\varphi_0(G, U), \varphi_1(G, U), \dots$ of all~$\Sigma^{0,Z}_1$ formulas. The second scheme of requirements consists in ensuring that the sets~$B_0, \dots, B_{n-1}$ are all $G \cap \overline{A_i}$-hyperimmune for some~$i \leq n$. The requirements are of the following form for each~$\vec{e}$.
\[ \mathcal{R}_{\vec{e}} : \bigwedge_{j < n} \mathcal{R}^{A_0, B_j}_{e_0} \vee \dots \vee \bigwedge_{j < n} \mathcal{R}^{A_n, B_j}_{e_n} \] where \[ \mathcal{R}^{A_i, B_j}_e : \varphi_e(G \cap \overline{A_i}, U) \mbox{ essential } \Rightarrow (\exists A \subseteq_{\tiny \texttt{fin}} \overline{B_j}) \varphi_e(G \cap \overline{A_i}, A) \] A condition~\emph{forces $\mathcal{R}_{\vec{e}}$} if every set~$G$ satisfying this condition also satisfies the requirement~$\mathcal{R}_{\vec{e}}$. \begin{lemma}\label{lem:ts2-reduc-step} For every condition~$c = (F, X)$, every~$i_0 < i_1 \leq n$, every $j < n$ and every indices~$\vec{e}$, there exists an extension~$d$ such that for some~$i \in \{i_0, i_1\}$, $d$ forces~$\varphi_{e_i}(G \cap \overline{A_i}, U)$ not to be essential or forces~$\varphi_{e_i}(G \cap \overline{A_i}, A)$ for some finite set~$A \subseteq \overline{B_j}$. \end{lemma} \begin{proof} Let~$\psi(U)$ be the formula which holds if for every $2$-partition~$X_{i_0} \cup X_{i_1} = X$, there is some $i \in \{i_0,i_1\}$ and some set~$G_i \subseteq X_i$ such that~$\varphi_{e_i}((F \cap \overline{A_i}) \cup G_i, \tilde{A})$ holds for some~$\tilde{A} \subseteq U$. By compactness, the formula~$\psi(U)$ is~$\Sigma^{0,X \oplus Z}_1$. We have two cases: \begin{itemize} \item Case 1: $\psi(U)$ is essential. As~$B_j$ is $X \oplus Z$-hyperimmune, there exists some finite set~$A \subseteq \overline{B_j}$ such that~$\psi(A)$ holds. In particular, taking~$X_i = X \cap \overline{A_i}$ for each~$i \in \{i_0, i_1\}$, there exists some~$i \in \{i_0, i_1\}$ and some finite set~$G_i \subseteq X_i$ such that~$\varphi_{e_i}((F \cap \overline{A_i}) \cup G_i, \tilde{A})$ holds for some~$\tilde{A} \subseteq A$. The condition~$d = (F \cup G_i, X \setminus [0, \max(G_i)])$ is an extension forcing~$\varphi_{e_i}(G \cap \overline{A_i}, A)$ for some finite set~$A \subseteq \overline{B_j}$. \item Case 2: $\psi(U)$ is not essential, say with witness~$x$.
By compactness, the~$\Pi^{0,X \oplus Z}_1$ class~$\mathcal{C}$ of sets~$X_{i_0} \oplus X_{i_1}$ such that $X_{i_0} \cup X_{i_1} = X$ and for every~$A > x$, every~$i \in \{i_0, i_1\}$ and every set~$G_i \subseteq X_i$, $\varphi_{e_i}((F \cap \overline{A_i}) \cup G_i, A)$ does not hold is not empty. By preservation of hyperimmunity of~$\s{WKL}_0$, there exists some~$X_{i_0} \oplus X_{i_1} \in \mathcal{C}$ such that the~$B$'s are~$X_{i_0} \oplus X_{i_1} \oplus Z$-hyperimmune. Let~$i \in \{i_0, i_1\}$ be such that~$X_i$ is infinite. The condition~$d = (F, X_i)$ is an extension of~$c$ forcing~$\varphi_{e_i}(G \cap \overline{A_i}, U)$ not to be essential. \qed \end{itemize} \end{proof} \begin{lemma}\label{lem:ts2-reduc-force-R} For every condition~$c$, and every indices~$\vec{e}$, there exists an extension~$d$ forcing~$\mathcal{R}_{\vec{e}}$. \end{lemma} \begin{proof} Fix a condition~$c$, and apply iteratively Lemma~\ref{lem:ts2-reduc-step} to obtain an extension~$d$ such that for each~$j < n$, $d$ forces~$\varphi_{e_i}(G \cap \overline{A_i}, U)$ not to be essential or forces $\varphi_{e_i}(G \cap \overline{A_i}, A)$ for some finite set~$A \subseteq \overline{B_j}$ for $n$ different~$i$'s. By the pigeonhole principle, there exists some~$i \leq n$ such that $d$ forces~$\varphi_{e_i}(G \cap \overline{A_i}, U)$ not to be essential or forces $\varphi_{e_i}(G \cap \overline{A_i}, A)$ for some finite set~$A \subseteq \overline{B_j}$ for each~$j < n$. Therefore, $d$ forces~$\mathcal{R}_{\vec{e}}$. \qed \end{proof} \subsection{Construction} Thanks to Lemma~\ref{lem:ts2-reduc-force-R} and Lemma~\ref{lem:ts2-reduc-force-Q}, define an infinite descending sequence of conditions $c_0 = (\emptyset, \omega) \geq c_1 \geq \dots$ such that for each~$s \in \omega$, \begin{itemize} \item[(a)] $c_{s+1}$ forces~$\mathcal{R}_{\vec{e}}$ if~$s = \tuple{\vec{e}}$ \item[(b)] $c_{s+1}$ forces~$\mathcal{Q}_s$ \end{itemize} where~$c_s = (F_s, X_s)$. Let~$G = \bigcup_s F_s$. 
The sets~$G \cap \overline{A_0}, \dots, G \cap \overline{A_n}$ are all infinite and the~$B$'s are $(G \cap \overline{A_i}) \oplus Z$-hyperimmune for some~$i \leq n$. This finishes the proof. \subsection{Notations} \emph{String, sequence}. Fix an integer $k \in \omega$. A \emph{string} (over $k$) is an ordered tuple of integers $a_0, \dots, a_{n-1}$ (such that $a_i < k$ for every $i < n$). The empty string is written~$\varepsilon$. A \emph{sequence} (over $k$) is an infinite listing of integers $a_0, a_1, \dots$ (such that $a_i < k$ for every $i \in \omega$). Given $s \in \omega$, $k^s$ is the set of strings of length $s$ over~$k$. Likewise, $k^{<\omega}$ is the set of finite strings over~$k$. Given a string $\sigma \in k^{<\omega}$, we denote by $|\sigma|$ its length. Given two strings $\sigma, \tau \in k^{<\omega}$, $\sigma$ is a \emph{prefix} of $\tau$ (written $\sigma \preceq \tau$) if there exists a string $\rho \in k^{<\omega}$ such that $\sigma \rho = \tau$. A \emph{binary string} (resp. real) is a \emph{string} (resp. sequence) over $2$. We may equate a real with a set of integers by considering that the real is its characteristic function. \emph{Tree, path}. A tree $T \subseteq \omega^{<\omega}$ is a set downward closed under the prefix relation. The tree~$T$ is \emph{finitely branching} if every node~$\sigma \in T$ has finitely many immediate successors. A \emph{binary} tree is a tree~$T \subseteq 2^{<\omega}$. A set $P \subseteq \omega$ is a \emph{path} through~$T$ if for every $\sigma \prec P$, $\sigma \in T$. A string $\sigma \in k^{<\omega}$ is a \emph{stem} of a tree $T$ if every $\tau \in T$ is comparable with~$\sigma$. Given a tree $T$ and a string $\sigma \in T$, we denote by $T^{[\sigma]}$ the subtree $\{\tau \in T : \tau \preceq \sigma \vee \tau \succeq \sigma\}$. \emph{Sets}.
Given two sets~$X$ and~$Y$, $X \subseteq^{*} Y$ means that~$X$ is almost contained in~$Y$, $X =^{*} Y$ means~$X \subseteq^{*} Y \wedge Y \subseteq^{*} X$ and $X \subseteq_{\tiny \texttt{fin}} Y$ means that~$X$ is a finite subset of~$Y$. Given some~$x \in \omega$, $A > x$ denotes the formula~$(\forall y \in A)[y > x]$.
\section{Introduction} The problem of the stability of spacecraft and underwater vehicles has attracted considerable attention and led to a significant number of theoretical studies. In this paper we consider an idealized dynamics of spacecraft and underwater vehicles, which is a Hamilton-Poisson system. For a spacecraft we consider the dynamics presented in \cite{wang-xu}, and for an underwater vehicle we work with the dynamics considered in \cite{leonard-1996}. The specific Casimir functions of these systems allow a description of the symplectic leaves containing certain equilibrium points in terms of rotation matrices. We construct a coordinate chart on such a symplectic leaf by means of the double covering map between the 3-sphere $S^3$ and the special orthogonal group $SO(3)$. The expression of the Hamiltonian function in this coordinate chart leads us to stability results for particular states of spacecraft and underwater vehicles. To decide stability we use the algebraic method presented in \cite{comanescu}. Alternative methods are Arnold's method (see \cite{arnold}) and equivalent methods such as the Casimir method and the Ortega-Ratiu method (see \cite{birtea-puta}). In the second section we present stability results for some generic equilibrium points of a spacecraft moving around an asteroid. We recover the sufficient conditions for stability from \cite{wang-xu}. Our method reduces to the study of the eigenvalues of a $6\times 6$ matrix, while the paper \cite{wang-xu} works with a $12\times 12$ matrix. In Section 3 we study the stability of some generic equilibrium points of an underwater vehicle. We consider a more general situation than the one studied in the papers \cite{leonard-automatica}, \cite{leonard-1996}, \cite{leonard-1996-1}, \cite{leonard-marsden}: we suppose that the third axis is a principal axis of inertia for the vehicle, but the first and second axes of the vehicle-fixed frame may not be principal axes of inertia for the vehicle.
We prove that the conditions for the stability of an equilibrium point do not depend on the position of the first and second axes of inertia of the vehicle in the plane perpendicular to the third axis of the vehicle-fixed frame. \section{Stability of a spacecraft moving around an asteroid} We consider a rigid spacecraft moving on a stationary orbit around a rigid asteroid. The fixed-body frame of the asteroid $(O,{\bf u},{\bf v},{\bf w})$ has its origin at the mass center and its axes are principal axes of inertia of the asteroid. According to the hypotheses made in the paper \cite{wang-xu}, we assume that the mass center of the asteroid is stationary in an inertial frame, the asteroid has a uniform rotation around its maximum-moment principal axis, the spacecraft is on a stationary orbit, and the orbital motion is not affected by the attitude motion. The fixed-body frame of the spacecraft $(C,{\bf i},{\bf j},{\bf k})$ has its origin $C$ at the mass center and its axes are principal axes of inertia of the spacecraft. "A stationary orbit in the inertial frame corresponds to an equilibrium in the fixed-body frame of the asteroid; there are two kinds of stationary orbits: those that lie on the intermediate moment principal axis of the asteroid, and those that lie on the minimum-moment principal axis of the asteroid" (see \cite{wang-xu}). The attitude of the spacecraft is described by the vectors $\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}$, where $\boldsymbol{\gamma}$ is the versor pointing from the mass center of the spacecraft towards the mass center of the asteroid, $\boldsymbol{\beta}$ is the versor in the opposite direction of the orbital angular momentum, and $\boldsymbol{\alpha}=\boldsymbol{\beta}\times \boldsymbol{\gamma}$. We denote by $\boldsymbol{\Pi}$ the angular momentum of the spacecraft with respect to the inertial frame and ${\bf z}=(\boldsymbol{\Pi},\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma})$.
\begin{figure}[h!] \caption{Spacecraft around an asteroid.} \centering \includegraphics[width=1.0\textwidth]{spacecraft-1} \end{figure} According to the paper \cite{wang-xu}, the equations of motion can be written as \begin{equation}\label{spacecraft-asteroid-eq} \dot{\bf z}=\Lambda({\bf z})\nabla H({\bf z}). \end{equation} The Hamiltonian function is given by \begin{equation} H({\bf z})=\frac{1}{2}<\boldsymbol{\Pi},\mathbb{I}^{-1}\boldsymbol{\Pi}>+\omega_T<\boldsymbol{\Pi},\boldsymbol{\beta}>+V(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma}),\footnote{$<\cdot,\cdot>$ is the scalar product on $\R^3$.} \end{equation} where $\omega_T$ is the angular velocity of the uniform rotation of the asteroid, $\mathbb{I}$ is the inertia tensor of the spacecraft and its matrix in the fixed-body frame is $\text{diag}(I_1,I_2,I_3)$. The function $V$ represents the perturbation due to the gravity gradient torque and it has the expression: $$V(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma})=k_1<\boldsymbol{\alpha},\mathbb{I}\boldsymbol{\alpha}>+k_2<\boldsymbol{\beta},\mathbb{I}\boldsymbol{\beta}>+ k_3<\boldsymbol{\gamma},\mathbb{I}\boldsymbol{\gamma}>,$$ where $k_1,k_2$ and $k_3$ are constants. The Poisson tensor is given by $$ \Lambda({\bf z})=\left( \begin{array}{cccc} \widehat{\boldsymbol{\Pi}} & \widehat{\boldsymbol{\alpha}} & \widehat{\boldsymbol{\beta}} & \widehat{\boldsymbol{\gamma}} \\ \widehat{\boldsymbol{\alpha}} & O_3 & O_3 & O_3 \\ \widehat{\boldsymbol{\beta}} & O_3 & O_3 & O_3 \\ \widehat{\boldsymbol{\gamma}} & O_3 & O_3 & O_3 \end{array} \right).
\footnote{$O_3$ is the null matrix, and for a vector ${\bf v}=(a,b,c)$ we denote by $\widehat{\bf v}:=\left( \begin{array}{ccc} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \end{array} \right).$} $$ The Casimir functions are $C_{11}({\bf z})=||\boldsymbol{\alpha}||^2$, $C_{12}({\bf z})=<\boldsymbol{\alpha},\boldsymbol{\beta}>$, $C_{13}({\bf z})=<\boldsymbol{\alpha},\boldsymbol{\gamma}>$, $C_{22}({\bf z})=||\boldsymbol{\beta}||^2$, $C_{23}({\bf z})=<\boldsymbol{\beta},\boldsymbol{\gamma}>$ and $C_{33}({\bf z})=||\boldsymbol{\gamma}||^2$, that is, $C_{ij}$ for $1\leq i\leq j\leq 3$, and we denote by ${\bf C}:\R^{12}\rightarrow \R^6$ the vectorial Casimir function. We have seven conserved quantities $H,C_{11},C_{12},C_{13},C_{22},C_{23},C_{33}$ of the dynamics generated by \eqref{spacecraft-asteroid-eq}. We observe that we have the following equilibrium point ${\bf z}^e$: \begin{equation}\label{equilibrium-spacecraft} {\boldsymbol{\Pi}}^e=-\omega_TI_2{\bf j},\,\,{\boldsymbol{\alpha}}^e={\bf i},\,\,{\boldsymbol{\beta}}^e={\bf j},\,\,{\boldsymbol{\gamma}}^e={\bf k}. \end{equation} According to \cite{birtea-comanescu}, it is a generic equilibrium point because it is a regular point of ${\bf C}$; i.e. $\text{rank} \,\nabla {\bf C}({\bf z}^e)=6$. For our study we construct a coordinate chart on the symplectic leaf ${\bf C}^{-1}({\bf C}({\bf z}^e))$ around the equilibrium point ${\bf z}^e$. We consider the open subset: $${\bf C}^+_{{\bf z}^e}=\{{\bf z}=(\boldsymbol{\Pi},\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma})\in {\bf C}^{-1}({\bf C}({\bf z}^e))\,|\,\text{sgn}<\boldsymbol{\alpha},\boldsymbol{\beta}\times\boldsymbol{\gamma}>=\text{sgn}<\boldsymbol{\alpha}^e,\boldsymbol{\beta}^e\times\boldsymbol{\gamma}^e>\}\footnote{$\text{sgn}$ is the signum function}$$ and we observe that ${\bf z}^e\in {\bf C}^+_{{\bf z}^e}$.
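The hat map appearing in the Poisson tensor sends a vector ${\bf v}$ to the skew-symmetric matrix $\widehat{\bf v}$, characterized by $\widehat{\bf v}\,{\bf u} = {\bf v} \times {\bf u}$. A quick numerical check of this identity (the test vectors are arbitrary, not tied to the spacecraft model):

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix of v = (a, b, c), so that hat(v) @ u == v x u."""
    a, b, c = v
    return np.array([[0.0,  -c,   b],
                     [  c, 0.0,  -a],
                     [ -b,   a, 0.0]])

# Arbitrary test vectors
v = np.array([1.0, 2.0, 3.0])
u = np.array([-0.5, 4.0, 1.5])
```

The identity $\widehat{\bf v}^T = -\widehat{\bf v}$ is what makes the block $\widehat{\boldsymbol{\Pi}}$ of $\Lambda({\bf z})$ skew-symmetric, as required of a Poisson tensor.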
\begin{lem} The function $\mathcal{F}_{{\bf z}^e}:SE(3)\rightarrow {\bf C}^+_{{\bf z}^e}$ given by $\mathcal{F}_{{\bf z}^e}(\boldsymbol{\Pi},R):=(\boldsymbol{\Pi}, R\boldsymbol{\alpha}^e, R\boldsymbol{\beta}^e, R\boldsymbol{\gamma}^e)$ is a homeomorphism and we have $\mathcal{F}_{{\bf z}^e}(\boldsymbol{\Pi}^e,I_3)={\bf z}^e$. \footnote{We use the special orthogonal group and the special Euclidean group, $$SO(3)=\{X\in \mathcal{M}_3(\R)\,|\,XX^T=X^TX=I_3,\,\det(X)=1\},\,\, SE(3):=\R^3\rtimes SO(3).$$ $\mathcal{M}_3(\R)$ is a normed space by using the Frobenius norm $||X||_F=\sqrt{\text{Trace}(X^TX)}.$} \end{lem} \begin{proof} If $R\in SO(3)$ and ${\bf u},{\bf v}\in \R^3$ then we have $<R{\bf u},R{\bf v}>=<{\bf u},{\bf v}>$ and consequently $(\boldsymbol{\Pi}, R\boldsymbol{\alpha}^e, R\boldsymbol{\beta}^e, R\boldsymbol{\gamma}^e)\in {\bf C}^{-1}({\bf C}({\bf z}^e))$. Also, we have $$<R\boldsymbol{\alpha}^e,R\boldsymbol{\beta}^e\times R\boldsymbol{\gamma}^e>=<R\boldsymbol{\alpha}^e,R(\boldsymbol{\beta}^e\times \boldsymbol{\gamma}^e)>=<\boldsymbol{\alpha}^e,\boldsymbol{\beta}^e\times \boldsymbol{\gamma}^e>,$$ which implies that $(\boldsymbol{\Pi}, R\boldsymbol{\alpha}^e, R\boldsymbol{\beta}^e, R\boldsymbol{\gamma}^e)\in{\bf C}^+_{{\bf z}^e}$. Let $(\boldsymbol{\Pi},\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma})\in {\bf C}^+_{{\bf z}^e}$. The sets $\{\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma}\}$ and $\{\boldsymbol{\alpha}^e,\boldsymbol{\beta}^e,\boldsymbol{\gamma}^e\}$ are orthonormal bases of $\R^3$ with the same orientation and consequently there exists $R\in SO(3)$ such that $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma})=(R\boldsymbol{\alpha}^e,R\boldsymbol{\beta}^e,R\boldsymbol{\gamma}^e)$. This proves surjectivity.
If $\mathcal{F}_{{\bf z}^e}(\boldsymbol{\Pi},R)=\mathcal{F}_{{\bf z}^e}(\tilde{\boldsymbol{\Pi}},\tilde{R})$, then we have $\boldsymbol{\Pi}=\tilde{\boldsymbol{\Pi}}$, $R^T\tilde{R}\boldsymbol{\alpha}^e=\boldsymbol{\alpha}^e$, $R^T\tilde{R}\boldsymbol{\beta}^e=\boldsymbol{\beta}^e$, and $R^T\tilde{R}\boldsymbol{\gamma}^e=\boldsymbol{\gamma}^e$. Because $\{\boldsymbol{\alpha}^e,\boldsymbol{\beta}^e,\boldsymbol{\gamma}^e\}$ is a basis of $\R^3$ we have that $R^T\tilde{R}=I_3$ and consequently $R=\tilde{R}$. This proves injectivity. \end{proof} We construct a double covering map between $\R^3\times S^3$ and $SE(3)$, where $$S^3=\{{\bf q}:=(q_0,q_1,q_2,q_3)\in \R^4\,|\,q_0^2+q_1^2+q_2^2+q_3^2=1\}$$ is the 3-sphere. For ${\bf q}\in S^3$ we consider the rotation matrix: \begin{equation*} \mathbf{R}^{\mathbf{q}}=\left( \begin{array}{ccc} q_0^{2}+q_{1}^{2}-q_{2}^{2}-q_{3}^{2} & 2(q_{1}q_{2}-q_{0}q_{3}) & 2(q_{1}q_{3}+q_{0}q_{2}) \\ 2(q_{1}q_{2}+q_{0}q_{3}) & q_0^{2}-q_{1}^{2}+q_{2}^{2}-q_{3}^{2}& 2(q_{2}q_{3}-q_{0}q_{1}) \\ 2(q_{1}q_{3}-q_{0}q_{2}) & 2(q_{2}q_{3}+q_{0}q_{1}) & q_0^{2}-q_{1}^{2}-q_{2}^{2}+q_{3}^{2} \end{array} \right). \end{equation*} The function $\mathbf{P}:S^{3}\rightarrow SO(3)$, $\mathbf{P}(\mathbf{q})=R^{\mathbf{q}}$, is a smooth double covering map.\footnote{For $R\in SO(3)$ there exist two distinct points ${\bf q}_1,{\bf q}_2\in S^3$ such that ${\bf P}({\bf q}_1)={\bf P}({\bf q}_2)=R$. ${\bf P}$ is a continuous surjective function with the following property: for all $R\in SO(3)$ there exists an open neighborhood $U_R$ of $R$ (evenly-covered neighborhood) such that ${\bf P}^{-1}(U_R)$ is a union of two disjoint open sets in $S^3$ (sheets over $U_R$), each of which is mapped homeomorphically onto $U_R$ by ${\bf P}$.} For ${\bf q}\in S^3$ we have $R^{\mathbf{q}}=R^{\mathbf{-q}}$ and $I_3=R^{(1,\mathbf{0})}=R^{(-1,\mathbf{0})}$.
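The properties of the map ${\bf P}$ stated above can be verified numerically; the sketch below checks, for a randomly drawn unit quaternion, that $R^{\bf q}\in SO(3)$, that $R^{\bf q}=R^{-\bf q}$ (the double-cover identity), and that $R^{(1,{\bf 0})}=I_3$:

```python
import numpy as np

def R_of_q(q):
    # rotation matrix R^q attached to a unit quaternion q = (q0, q1, q2, q3)
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2+q1**2-q2**2-q3**2, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0**2-q1**2+q2**2-q3**2, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0**2-q1**2-q2**2+q3**2]])

rng = np.random.default_rng(1)
q = rng.standard_normal(4)
q /= np.linalg.norm(q)                     # project onto S^3
R = R_of_q(q)
assert np.allclose(R @ R.T, np.eye(3))     # orthogonality
assert np.isclose(np.linalg.det(R), 1.0)   # det = 1, so R is in SO(3)
assert np.allclose(R, R_of_q(-q))          # double cover: R^q = R^{-q}
assert np.allclose(R_of_q([1.0, 0, 0, 0]), np.eye(3))
```

Every entry of $R^{\bf q}$ is quadratic in the components of ${\bf q}$, which is why the sign flip ${\bf q}\mapsto -{\bf q}$ leaves the matrix unchanged.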
We construct the double covering map $\widetilde{\bf P}:\R^3\times S^3\rightarrow SE(3)$ given by $\widetilde{\bf P}(\boldsymbol{\Pi},{\bf q})=(\boldsymbol{\Pi},{\bf P}({\bf q})),$ which has the property $\widetilde{\bf P}(\boldsymbol{\Pi}^e,(1,{\bf 0}))=(\boldsymbol{\Pi}^e,I_3)$. Between $\R^3\times B_3({\bf 0},1)\subset\R^6$ and $\R^3\times S^3_+:=\R^3\times \{{\bf q}=(q_0,q_1,q_2,q_3)\in S^3\,|\,q_0>0\}\subset \R^3\times S^3$ we have the following homeomorphism $\mathcal{G}(\boldsymbol{\Pi},{\bf p})=(\boldsymbol{\Pi},(\sqrt{1-||{\bf p}||^2},{\bf p}))$. It has the property $\mathcal{G}(\boldsymbol{\Pi}^e,{\bf 0})=(\boldsymbol{\Pi}^e,(1,{\bf 0}))$. The above considerations lead to the following result. \begin{prop}\label{chart-spacecraft} There exists an open subset $U\subset {\bf C}^+_{{\bf z}^e}\subset {\bf C}^{-1}({\bf C}({\bf z}^e))$ such that ${\bf z}^e\in U$ and the set $U$ with the function $(\mathcal{F}_{{\bf z}^e}\circ \tilde{\bf P}\circ\mathcal{G})^{-1}:U\rightarrow V\subset \R^6$ is a coordinate chart. \end{prop} The Hamiltonian function written in the chart defined in Proposition \ref{chart-spacecraft} is: $$H_{{\bf z}^e}(\boldsymbol{\Pi},{\bf p})=H(\boldsymbol{\Pi},R^{(\sqrt{1-||{\bf p}||^2},{\bf p})}\boldsymbol{\alpha}^e, R^{(\sqrt{1-||{\bf p}||^2},{\bf p})}\boldsymbol{\beta}^e, R^{(\sqrt{1-||{\bf p}||^2},{\bf p})}\boldsymbol{\gamma}^e).$$ We have the following stability result. \begin{thm}\label{chart-spacecraft-Hess} Let ${\bf z}^e$ be the equilibrium point described by \eqref{equilibrium-spacecraft}. If the point $(\boldsymbol{\Pi}^e,{\bf 0})$ is a strict local extremum of $H_{{\bf z}^e}$, then the equilibrium point ${\bf z}^e$ is Lyapunov stable. \end{thm} \begin{proof} We use the algebraic method, presented in the paper \cite{comanescu} and used in the papers \cite{comanescu-1}, \cite{comanescu-2} and \cite{birtea-casu}. 
If $(\boldsymbol{\Pi}^e,{\bf 0})$ is a strict local extremum of $H_{{\bf z}^e}$, then the algebraic system \begin{equation}\label{algebraic-system} H({\bf z})=H({\bf z}^e),\,C_{ij}({\bf z})=C_{ij}({\bf z}^e),\,\,1\leq i\leq j\leq 3 \end{equation} has no root besides ${\bf z}^e$ in some neighborhood of ${\bf z}^e$. Consequently, the equilibrium point is Lyapunov stable. \end{proof} An immediate consequence is the following result. \begin{thm}\label{conditions-stability-Hessian} Let ${\bf z}^e$ be the equilibrium point given by \eqref{equilibrium-spacecraft}. If $(\boldsymbol{\Pi}^e,{\bf 0})$ is a stationary point for $H_{{\bf z}^e}$ and the Hessian matrix $\text{Hess}\,H_{{\bf z}^e}(\boldsymbol{\Pi}^e,{\bf 0})$ is positive or negative definite, then the equilibrium point ${\bf z}^e$ is Lyapunov stable. \end{thm} We can now state the main result of this section. \begin{thm}\label{stability-spacecraft} A sufficient condition for the Lyapunov stability of the equilibrium point \eqref{equilibrium-spacecraft} is: \begin{align*} & (I_3-I_1)(k_1-k_3)>0; \\ \text{and}\,\, & (I_2-I_1)(\omega_T^2-2(k_2-k_1))>0; \\ \text{and}\,\, & (I_2-I_3)(\omega_T^2-2(k_2-k_3))>0. \end{align*} \end{thm} \begin{proof} For the computations we use the components of the vectors in the body-fixed frame. We have $$\boldsymbol{\alpha}=(1-2p_2^2-2p_3^2, 2p_1p_2+2p_3\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 2p_1p_3-2p_2\sqrt{1-(p_1^2+p_2^2+p_3^2)}),$$ $$\boldsymbol{\beta}=(2p_1p_2-2p_3\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 1-2p_1^2-2p_3^2, 2p_2p_3+2p_1\sqrt{1-(p_1^2+p_2^2+p_3^2)}),$$ $$\boldsymbol{\gamma}=(2p_1p_3+2p_2\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 2p_2p_3-2p_1\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 1-2p_1^2-2p_2^2),$$ and we observe that $(\boldsymbol{\Pi}^e,{\bf 0})$ is a critical point of $H_{{\bf z}^e}$.
The Hessian matrix is: $$\text{Hess}\,H_{{\bf z}^e}(\boldsymbol{\Pi}^e,{\bf 0})= \left(\begin{array}{cccccc} \frac{1}{I_1} & 0 & 0 & 0 & 0 & -2\omega_T \\ 0 & \frac{1}{I_2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{I_3} & 2\omega_T & 0 & 0 \\ 0 & 0 & 2\omega_T & h_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & 8(I_3-I_1)(k_1-k_3) & 0 \\ -2\omega_T & 0 & 0 & 0 & 0 & h_{66} \end{array}\right),$$ where $h_{44}=4(\omega_T^2I_2+2(I_3-I_2)(k_2-k_3))$ and $h_{66}= 4(\omega_T^2I_2+2(I_1-I_2)(k_2-k_1)).$ If the above inequalities hold, then the Hessian matrix $\text{Hess}\,H_{{\bf z}^e}(\boldsymbol{\Pi}^e,{\bf 0})$ is positive definite and we apply Theorem \ref{conditions-stability-Hessian}. \end{proof} \begin{rem} The analysis of the above inequalities leads us to the following sufficient conditions for the stability of the equilibrium point \eqref{equilibrium-spacecraft}. \begin{itemize} \item [(i)] $I_2>I_3>I_1$, and $k_1>k_3$ and $\omega_T^2>2(k_2-k_3)$; \item [(ii)] $I_3>I_2>I_1$, and $k_1>k_3$ and $2(k_2-k_1)<\omega_T^2<2(k_2-k_3)$; \item [(iii)] $I_3>I_1>I_2$, and $k_1>k_3$ and $\omega_T^2<2(k_2-k_1)$; \item [(iv)] $I_2>I_1>I_3$, and $k_1<k_3$ and $\omega_T^2>2(k_2-k_1)$; \item [(v)] $I_1>I_2>I_3$, and $k_1<k_3$ and $2(k_2-k_3)<\omega_T^2<2(k_2-k_1)$; \item [(vi)] $I_1>I_3>I_2$, and $k_1<k_3$ and $\omega_T^2<2(k_2-k_3)$. $\Box$ \end{itemize} \end{rem} According to \cite{wang-xu} the radius of the stationary orbit $R_S$ satisfies the relation \begin{equation}\label{Rs} R_S^5-\frac{GM}{\omega_T^2}\left(R_S^2-\frac{3}{2}a_e^2C_{20}-9a_e^2C_{22}\right)=0, \end{equation} where $M$ is the mass of the asteroid, $G$ is the gravitational constant, $a_e$ is the mean radius of the asteroid, and $C_{20}$, $C_{22}$ are the harmonic coefficients generated by the gravity field of the asteroid. Also, we have \begin{equation} k_1=\frac{3GMa_e^2C_{22}}{R_S^5},\,\,k_2=\frac{3GMa_e^2C_{20}}{2R_S^5},\,\, k_3=\frac{3GM}{2R_S^3}-\frac{3GMa_e^2}{4R_S^5}\left(5C_{20}+34C_{22}\right).
\end{equation} If we use the above expressions of the coefficients $k_1$, $k_2$ and $k_3$, then the conditions (56a), (56b), and (56c) from the paper \cite{wang-xu} coincide with the inequalities from Theorem \ref{stability-spacecraft}. In the paper \cite{wang-xu} a modified energy-Casimir method is used to obtain these conditions; that method works with the eigenvalues of a $12\times 12$ matrix. For a specific asteroid the quantities $M,a_e,\omega_T,C_{20}$ and $C_{22}$ are known. The harmonic coefficients $C_{20}$ and $C_{22}$ are calculated by the formulas: $$C_{20}=-\frac{1}{2Ma_e^2}(2I_w^A-I_u^A-I_v^A),\,\,C_{22}=\frac{1}{4Ma_e^2}(I_v^A-I_u^A),$$ where $I_u^A,I_v^A$ and $I_w^A$ are the principal moments of inertia of the asteroid, computed with respect to its center of mass. The number of positive solutions of equation \eqref{Rs} is determined by the parameters of the asteroid: the equation can have two positive solutions, one positive solution, or no positive solutions. If we fix a radius of the stationary orbit around a specific asteroid, then we obtain fixed values of the coefficients $k_1$, $k_2$ and $k_3$. \medskip {\bf The stability of a spacecraft moving around the asteroid 4769 Castalia.} According to \cite{scheers} the asteroid 4769 Castalia has the following physical data: $M=1.4091\cdot 10^{12}\,kg$, $a_e=543.1\,m$, $\omega_T=4.2882\cdot 10^{-4}\,s^{-1}$, $C_{20}=-7.257\cdot 10^{-2}$, $C_{22}=2.984\cdot 10^{-2}$. The gravitational constant has the value $G=6.67384\cdot 10^{-11}\,m^3\,kg^{-1}\,s^{-2}$. The equation \eqref{Rs} has two positive solutions: $R_{S1}=219.31\,m$ and $R_{S2}=778.39\,m$. Because $R_{S1}<a_e$ we cannot have a stationary orbit of radius $R_{S1}$.
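The two radii quoted above can be reproduced directly from equation \eqref{Rs}; the following sketch (the tolerance of $1\,m$ is our choice) also evaluates $k_1$, $k_2$, $k_3$ at the outer radius and checks the resulting stability conditions:

```python
import numpy as np

# physical data of asteroid 4769 Castalia, as quoted in the text
G   = 6.67384e-11        # m^3 kg^-1 s^-2
M   = 1.4091e12          # kg
a_e = 543.1              # m
wT  = 4.2882e-4          # s^-1
C20, C22 = -7.257e-2, 2.984e-2

# Eq. (Rs) rearranged: R^5 - (GM/wT^2) R^2 + (GM/wT^2) c0 = 0
GMw = G*M/wT**2
c0  = 1.5*a_e**2*C20 + 9*a_e**2*C22
roots = np.roots([1.0, 0.0, 0.0, -GMw, 0.0, GMw*c0])
RS = sorted(r.real for r in roots if abs(r.imag) < 1e-3 and r.real > 0)
print(RS)   # two positive solutions, approximately 219.3 m and 778.4 m

# stability conditions at the outer stationary orbit R_S2
R = RS[1]
k1 = 3*G*M*a_e**2*C22/R**5
k2 = 3*G*M*a_e**2*C20/(2*R**5)
k3 = 3*G*M/(2*R**3) - 3*G*M*a_e**2/(4*R**5)*(5*C20 + 34*C22)
assert k1 < k3 and wT**2 > 2*(k2 - k1)
```

With these values the pair $(k_1<k_3,\ \omega_T^2>2(k_2-k_1))$ singles out case (iv) of the remark above, i.e. the inertia ordering $I_2>I_1>I_3$.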
For a stationary orbit of radius $R_{S2}$ we have $k_1<k_3$ and $\omega_T^2>2(k_2-k_1)$, and consequently a sufficient condition for the stability of \eqref{equilibrium-spacecraft} is that the inertia moments of the spacecraft satisfy the inequalities $I_2>I_1>I_3$. \section{Stability of an underwater vehicle} Following \cite{leonard-1996}, the dynamics for a six degree-of-freedom vehicle modeled as a neutrally buoyant, submerged rigid body in an infinitely large volume of irrotational, incompressible, inviscid fluid that is at rest at infinity is described by the system \begin{equation}\label{underwater-1} \left\{\begin{array}{ll} \dot{\boldsymbol \Pi}={\boldsymbol \Pi}\times {\boldsymbol \Omega}+{\bf Q}\times {\bf v}-mgl{\boldsymbol \Gamma}\times {\bf r} \\ \dot{\bf Q}={\bf Q}\times {\boldsymbol \Omega} \\ \dot{\boldsymbol \Gamma }={\boldsymbol \Gamma}\times {\boldsymbol \Omega}, \end{array}\right. \end{equation} where ${\boldsymbol \Pi}$ is the angular impulse, ${\bf Q}$ is the linear impulse, ${\boldsymbol \Gamma}$ is the direction of gravity, $l{\bf r}$ is the vector from the center of buoyancy to the center of gravity (with $l\geq 0$ and ${\bf r}$ a unit vector), $m$ is the mass of the vehicle, $g$ is the gravitational acceleration, and ${\boldsymbol \Omega}$ and ${\bf v}$ are the angular and translational velocities of the vehicle.
In a body-fixed frame with the origin in the center of buoyancy the relationship between $({\boldsymbol \Pi},{\bf Q})$ and $({\boldsymbol \Omega},{\bf v})$ is given by \begin{equation}\label{relation-between} \left(\begin{array}{c} {\boldsymbol \Pi} \\ {\bf Q} \end{array}\right)=\left(\begin{array}{cc} J & D \\ D^T & M \end{array}\right)\left(\begin{array}{c} {\boldsymbol \Omega} \\ {\bf v} \end{array}\right), \end{equation} where $J$ is the sum of the body inertia matrix and the added inertia matrix associated with the potential flow model of the fluid, $M$ is the sum of the mass matrix of the body and the added mass matrix, and $D$ accounts for the cross terms. According to \cite{leonard-automatica} we have \begin{equation}\label{matrices-underwater} M=m I_3+\Theta_{11}^f,\,\,J=J_b+\Theta_{11}^f,\,\,D=ml\widehat{\bf r}+\Theta_{21}^f. \end{equation} We denote by $I_3$ the identity matrix and by $J_b$ the inertia matrix of the vehicle. The matrix \begin{equation}\label{Theta-f} \Theta^f= \left(\begin{array}{cc} \Theta_{11}^f &(\Theta_{21}^f)^T \\ \Theta_{21}^f & \Theta_{22}^f \end{array}\right) \end{equation} is symmetric and is determined by the configuration of the vehicle and the density of the fluid. The relationship between $({\boldsymbol \Omega},{\bf v})^T$ and $({\boldsymbol \Pi},{\bf Q})^T$ is given by \begin{equation}\label{relation-between-inverse} \left(\begin{array}{c} {\boldsymbol \Omega} \\ {\bf v} \end{array}\right)= \left(\begin{array}{cc} A & B^T \\ B & C \end{array}\right) \left(\begin{array}{c} {\boldsymbol \Pi} \\ {\bf Q} \end{array}\right),\,\,\left(\begin{array}{cc} A & B^T \\ B & C \end{array}\right)=\left(\begin{array}{cc} J & D \\ D^T & M \end{array}\right)^{-1}. \end{equation} The matrix $\left(\begin{array}{cc} A & B^T \\ B & C \end{array}\right)$ is symmetric and positive definite, and consequently the matrices $A$ and $C$ are symmetric and positive definite.
The system \eqref{underwater-1} has the Hamilton-Poisson form (see \cite{leonard-automatica}) \begin{equation}\label{underwater-1-Poisson} \dot{\bf u}=\Lambda({\bf u})\nabla H({\bf u}), \end{equation} where ${\bf u}=({\boldsymbol \Pi},{\bf Q},{\boldsymbol \Gamma})$, $\Lambda({\bf u})=\left(\begin{array}{ccc} \widehat{{\boldsymbol \Pi}} & \widehat{\bf Q} & \widehat{{\boldsymbol \Gamma}} \\ \widehat{\bf Q} & O_3 &O_3 \\ \widehat{{\boldsymbol \Gamma}} & O_3 &O_3 \end{array}\right),$ and the Hamiltonian function is given by $$H({\bf u})=\frac{1}{2}(<{\boldsymbol \Pi},A{\boldsymbol \Pi}>+2<{\boldsymbol \Pi},B^T{\bf Q}>+<{\bf Q},C{\bf Q}>-2mgl<{\boldsymbol \Gamma},{\bf r}>). $$ In this case the Casimir functions are $C_{11}({\bf u})=||{\bf Q}||^2$, $C_{12}({\bf u})=<{\bf Q},{\boldsymbol \Gamma}>$ and $C_{22}({\bf u})=||{\boldsymbol \Gamma}||^2$, and we denote by ${\bf C}:\R^{9}\rightarrow \R^3$ the vectorial Casimir function. We are interested in a generic equilibrium point ${\bf u}^e=({\boldsymbol \Pi}^e,{\bf Q}^e,{\boldsymbol \Gamma}^e)$ which satisfies \begin{equation}\label{condition-equilibria} {\bf Q}^e\neq {\bf 0},\,\,{\boldsymbol \Gamma}^e\neq {\bf 0},\,\,<{\bf Q}^e,{\boldsymbol \Gamma}^e>=0. \end{equation} This equilibrium has the following properties: \noindent (i) The equilibrium point has no spin. Because ${\bf u}^e$ is a generic equilibrium point we have $${\boldsymbol \Omega}^e=A{\boldsymbol \Pi}^e+B^T{\bf Q}^e=\frac{\partial H}{\partial {\boldsymbol \Pi}}({\bf u}^e)={\bf 0}.$$ \noindent (ii) The translational velocity is ${\bf v}^e=B{\boldsymbol \Pi}^e+C{\bf Q}^e$. \noindent (iii) The vector ${\bf r}$ is located in the plane generated by the vectors ${\bf Q}^e$ and ${\boldsymbol \Gamma}^e$ (we have $<{\bf Q}^e\times {\boldsymbol \Gamma}^e, {\bf r}>=0$).
\begin{rem} In the paper \cite{birtea-comanescu} a stability study is presented for nongeneric equilibria of the system \eqref{underwater-1}, which are situated on singular symplectic leaves that cannot be characterized as the preimage of a regular value of the Casimir functions. \end{rem} For an equilibrium point ${\bf u}^e$ of our type we construct a coordinate chart on the regular symplectic leaf ${\bf C}^{-1}({\bf C}({\bf u}^e))$ around the equilibrium point using the same scheme as in the previous section. The open subset ${\bf C}^+_{{\bf u}^e}\subset {\bf C}^{-1}({\bf C}({\bf u}^e))$ defined by $${\bf C}^+_{{\bf u}^e}=\{{\bf u}=({\bf x},{\bf y}_1,{\bf y}_2)\in {\bf C}^{-1}({\bf C}({\bf u}^e))\,|\,{\bf y}_1\times {\bf y}_2 \,\text{and}\,{\bf y}_1^e\times {\bf y}_2^e\,\text{have the same orientation}\}$$ contains the equilibrium point ${\bf u}^e$. Between $SE(3)$ and ${\bf C}^+_{{\bf u}^e}$ we have the homeomorphism $$\mathcal{F}_{{\bf u}^e}(\boldsymbol{\Pi},R):=(\boldsymbol{\Pi}, R{\bf Q}^e, R\boldsymbol{\Gamma}^e)$$ with the property $\mathcal{F}_{{\bf u}^e}(\boldsymbol{\Pi}^e,I_3)={\bf u}^e$. By using the notations of the previous section we have the following result. \begin{prop}\label{chart-reduced} There exists an open subset $U\subset {\bf C}^+_{{\bf u}^e}\subset {\bf C}^{-1}({\bf C}({\bf u}^e))$ such that ${\bf u}^e\in U$ and the set $U$ with the function $(\mathcal{F}_{{\bf u}^e}\circ \tilde{\bf P}\circ\mathcal{G})^{-1}:U\rightarrow V\subset \R^6$ is a coordinate chart. \end{prop} The Hamiltonian function written in the coordinates of the chart defined in the above proposition is: $$H_{{\bf u}^e}(\boldsymbol{\Pi},{\bf p})=H(\boldsymbol{\Pi},R^{(\sqrt{1-||{\bf p}||^2},{\bf p})}{\bf Q}^e, R^{(\sqrt{1-||{\bf p}||^2},{\bf p})}\boldsymbol{\Gamma}^e ).$$ Analogously to the proofs of Theorems \ref{chart-spacecraft-Hess} and \ref{conditions-stability-Hessian} we obtain the following stability results.
\begin{thm} Let ${\bf u}^e$ be a generic equilibrium point satisfying conditions \eqref{condition-equilibria}. If the point $(\boldsymbol{\Pi}^e,{\bf 0})$ is a strict local extremum of $H_{{\bf u}^e}$, then the equilibrium point ${\bf u}^e$ is Lyapunov stable. \end{thm} \begin{thm}\label{conditions-stability-Hessian-reduced} Let ${\bf u}^e$ be a generic equilibrium point satisfying conditions \eqref{condition-equilibria}. If $(\boldsymbol{\Pi}^e,{\bf 0})$ is a stationary point for $H_{{\bf u}^e}$ and the Hessian matrix $\text{Hess}\,H_{{\bf u}^e}(\boldsymbol{\Pi}^e,{\bf 0})$ is positive or negative definite, then the equilibrium point is Lyapunov stable. \end{thm} \subsection{Stability for an ellipsoidal bottom-heavy underwater vehicle} In this section we suppose that the vehicle can be approximated by an ellipsoid. The origin of the body-fixed frame is located at the center of buoyancy and we set the axes to be the principal axes of the displaced fluid. In this case the matrix $\Theta^f$ (see \eqref{Theta-f}) is a diagonal matrix. Suppose that the vector ${\bf r}$ is along the third axis; more precisely, ${\bf r}=(0,0,1)$. In the papers \cite{leonard-automatica}, \cite{leonard-1996}, \cite{leonard-1996-1}, \cite{leonard-marsden} it is assumed that the principal axes of the displaced fluid coincide with the principal axes of the vehicle. In this paper we consider a more general situation: we suppose that the third axis is a principal axis of inertia for the vehicle, but the first and second axes of the vehicle-fixed frame may not be principal axes of inertia for the vehicle. Using our hypotheses and the relation \eqref{matrices-underwater} we deduce: \begin{equation} M=\left(\begin{array}{ccc} m_1 & 0 & 0 \\ 0 & m_2 & 0\\ 0 & 0 & m_{3} \end{array}\right), J=\left(\begin{array}{ccc} I_{11} & I_{12} & 0 \\ I_{12} & I_{22} & 0\\ 0 & 0 & I_{3} \end{array}\right), D=ml\widehat{\bf r}=\left(\begin{array}{ccc} 0 & -ml & 0 \\ ml & 0 & 0\\ 0 & 0 & 0 \end{array}\right).
\end{equation} \begin{figure}[h!] \caption{Underwater vehicle with noncoincident centers of gravity and buoyancy.} \centering \includegraphics[width=0.8\textwidth]{underwater-1} \end{figure} For physical reasons we have the inequalities: \begin{equation}\label{inequality-physics-1} m_1>0,\,\,m_2>0,\,\,m_3>0,\,\,I_{11}>0,\,\,I_{22}>0,\,\,I_{11}I_{22}-I_{12}^2>0,\,\,\,I_3>0. \end{equation} In this case we have $$A=\frac{1}{k}\left(\begin{array}{ccc} m_2(m_1I_{22}-m^2l^2) & -m_1m_2I_{12} & 0 \\ -m_1m_2I_{12} & m_1(m_2I_{11}-m^2l^2) & 0\\ 0 & 0 & \frac{k}{I_3} \end{array}\right),$$ $$B=\frac{1}{k}\left(\begin{array}{ccc} m_2mlI_{12} & -(m_2I_{11}-m^2l^2)ml & 0 \\ (m_1I_{22}-m^2l^2)ml & -m_1mlI_{12} & 0 \\ 0 & 0 & 0 \end{array}\right),$$ $$C=\frac{1}{k}\left(\begin{array}{ccc} m_2(I_{11}I_{22}-I_{12}^2)-m^2l^2I_{22} & m^2l^2I_{12} & 0 \\ m^2l^2I_{12} & m_1(I_{11}I_{22}-I_{12}^2)-m^2l^2I_{11} & 0\\ 0 & 0 & \frac{k}{m_3} \end{array}\right),$$ $$k=m_1m_2(I_{11}I_{22}-I_{12}^2)-m^2l^2(m_1I_{22}+m_2I_{11})+m^4l^4.$$ By direct computation we observe that the matrix $\left(\begin{array}{cc} A & B^T \\ B & C \end{array}\right)$ is positive definite if and only if the following inequalities are satisfied: \begin{equation}\label{inequality-physics-2} m_1I_{22}>m^2l^2,\,\,m_2I_{11}>m^2l^2,\,\,k>0. \end{equation} We study a generic equilibrium point ${\bf u}^e$ with the coordinates: \begin{equation}\label{equilibrium-underwater-particular-10} \Pi_1^e=-\frac{ml}{m_2}Q_2^e,\,\,\Pi_2^e=\Pi_3^e=0, Q_1^e=0,\,\,Q_2^e\in\R^*,\,\,Q_3^e=0, \Gamma_1^e=\Gamma_2^e=0,\,\,\Gamma_3^e=1. \end{equation} This equilibrium point corresponds to a constant translation with no spin along a horizontal direction of a bottom-heavy underwater vehicle.
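The closed-form blocks above can be compared with a direct numerical inversion of the matrix in \eqref{relation-between}; the sketch below (with illustrative parameter values of our choosing that satisfy the physical inequalities) checks the expressions for $A$ and $C$ and the positive-definiteness criterion:

```python
import numpy as np

# illustrative parameter values (our choice)
m, l = 1.0, 1.0
m1, m2, m3 = 2.0, 3.0, 2.5
I11, I22, I12, I3v = 5.0, 5.0, 0.5, 2.0

Mm = np.diag([m1, m2, m3])
J  = np.array([[I11, I12, 0], [I12, I22, 0], [0, 0, I3v]])
D  = m*l*np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])   # D = ml r^ with r = (0,0,1)
big = np.block([[J, D], [D.T, Mm]])
inv = np.linalg.inv(big)
A, C = inv[:3, :3], inv[3:, 3:]

k = m1*m2*(I11*I22-I12**2) - m**2*l**2*(m1*I22+m2*I11) + m**4*l**4
A_closed = np.array([[m2*(m1*I22-m**2*l**2), -m1*m2*I12, 0],
                     [-m1*m2*I12, m1*(m2*I11-m**2*l**2), 0],
                     [0, 0, k/I3v]])/k
C_closed = np.array([[m2*(I11*I22-I12**2)-m**2*l**2*I22, m**2*l**2*I12, 0],
                     [m**2*l**2*I12, m1*(I11*I22-I12**2)-m**2*l**2*I11, 0],
                     [0, 0, k/m3]])/k
assert np.allclose(A, A_closed) and np.allclose(C, C_closed)

# positive definiteness, consistent with the three stated inequalities
assert m1*I22 > m**2*l**2 and m2*I11 > m**2*l**2 and k > 0
assert np.all(np.linalg.eigvalsh(big) > 0)
```

Since the block matrix in \eqref{relation-between} is symmetric, its inverse is positive definite exactly when the matrix itself is, which is what the last assertion probes.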
By using the coordinate chart defined above, we have: \small{$${\bf Q}=((2p_1p_2-2p_3\sqrt{1-(p_1^2+p_2^2+p_3^2)})Q_2^e, (1-2p_1^2-2p_3^2)Q_2^e,(2p_2p_3+2p_1\sqrt{1-(p_1^2+p_2^2+p_3^2)})Q_2^e)$$} $$\boldsymbol{\Gamma}=(2p_1p_3+2p_2\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 2p_2p_3-2p_1\sqrt{1-(p_1^2+p_2^2+p_3^2)}, 1-2p_1^2-2p_2^2).$$ We can present the main result of this section. \begin{thm} If we have \begin{equation} Q_2^e\neq 0,\,\,l>0,\,\,mgl>\left(\frac{1}{m_2}-\frac{1}{m_3}\right)(Q_2^e)^2,\,\,m_2>m_1, \end{equation} then the equilibrium point \eqref{equilibrium-underwater-particular-10} is Lyapunov stable. \end{thm} \begin{proof} By direct computation we observe that the point $(-\frac{ml}{m_2}Q_2^e,0,0,0,0,0)$ is a critical point for the function $H_{{\bf u}^e}$. The Hessian matrix calculated at the equilibrium point has the components: \begin{align*} h_{11} & =\frac{m_2}{k}(m_1I_{22}-m^2l^2),\,h_{12}=-\frac{m_1m_2I_{12}}{k},\,h_{13}=h_{14}=h_{15}=0,\,h_{16}=-\frac{2m_2mlI_{12}}{k}Q_2^e, \\ h_{22} & = \frac{m_1}{k}(m_2I_{11}-m^2l^2),\,h_{23}=h_{24}=h_{25}=0,\,h_{26}=\frac{2ml}{k}(m_2I_{11}-m^2l^2)Q_2^e, \\ h_{33} & = \frac{1}{I_3}, h_{34}=h_{35}=h_{36}=0, \\ h_{44} & =4(mgl+(\frac{1}{m_3}-\frac{1}{m_2})(Q_2^e)^2),\,h_{45}=h_{46}=0, \\ h_{55} & =4mgl,\,h_{56}=0,\\ h_{66} & = -\frac{4(Q_2^e)^2}{km_2}(k-m_2^2(I_{11}I_{22}-I_{12}^2)+m_2m^2l^2I_{22}). \end{align*} The determinant of the Hessian matrix is $$\frac{64mgl}{kI_{3}}(m_2-m_1)\left(mgl+(\frac{1}{m_3}-\frac{1}{m_2})(Q_2^e)^2\right)(Q_2^e)^2.$$ By using the hypotheses of this Theorem and the inequalities \eqref{inequality-physics-1} we observe that the Hessian matrix is positive definite. Applying Theorem \ref{conditions-stability-Hessian-reduced} we obtain the announced result.
\end{proof} We remark that the conditions for the stability of an equilibrium point of type \eqref{equilibrium-underwater-particular-10} do not depend on the position of the first and second axes of inertia of the vehicle in the plane perpendicular to the third axis of the vehicle-fixed frame. The conditions for the stability in the above theorem have been obtained in Theorem 2 of the paper \cite{leonard-automatica} for the particular case when the principal axes of inertia of the displaced fluid and the principal axes of inertia of the underwater vehicle coincide. In the paper \cite{leonard-1996-1} the energy-Casimir method is used to prove the stability of an equilibrium point.
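As a numerical sanity check of the preceding proof, the following sketch (with illustrative parameter values of our choosing that satisfy the hypotheses of the theorem) assembles the Hessian from the components listed above and verifies both its positive definiteness and the stated determinant formula:

```python
import numpy as np

# illustrative data satisfying the hypotheses (our choice)
m, g, l = 1.0, 1.0, 1.0
m1, m2, m3 = 2.0, 3.0, 2.5
I11, I22, I12, I3v = 5.0, 5.0, 0.5, 2.0
Q2 = 0.5
k = m1*m2*(I11*I22-I12**2) - m**2*l**2*(m1*I22+m2*I11) + m**4*l**4

# Hessian of H_{u^e} at the equilibrium, from the components in the proof
Hs = np.zeros((6, 6))
Hs[0, 0] = m2*(m1*I22-m**2*l**2)/k
Hs[0, 1] = Hs[1, 0] = -m1*m2*I12/k
Hs[0, 5] = Hs[5, 0] = -2*m2*m*l*I12*Q2/k
Hs[1, 1] = m1*(m2*I11-m**2*l**2)/k
Hs[1, 5] = Hs[5, 1] = 2*m*l*(m2*I11-m**2*l**2)*Q2/k
Hs[2, 2] = 1/I3v
Hs[3, 3] = 4*(m*g*l + (1/m3 - 1/m2)*Q2**2)
Hs[4, 4] = 4*m*g*l
Hs[5, 5] = -4*Q2**2/(k*m2)*(k - m2**2*(I11*I22-I12**2) + m2*m**2*l**2*I22)

assert np.all(np.linalg.eigvalsh(Hs) > 0)          # positive definite
det_formula = 64*m*g*l/(k*I3v)*(m2-m1)*(m*g*l + (1/m3-1/m2)*Q2**2)*Q2**2
assert np.isclose(np.linalg.det(Hs), det_formula)  # matches the stated determinant
```

Since $\Pi_3$, $p_1$ and $p_2$ decouple, positive definiteness reduces to the positivity of $h_{33}$, $h_{44}$, $h_{55}$ and of the $3\times 3$ block in $(\Pi_1,\Pi_2,p_3)$, which is what the eigenvalue check confirms.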
\section{Introduction} \label{intro} The present paper is a sequel of a previous work\cite{borghiPRE-08} concerning the evaluation of saddle-point integrals of the form \begin{equation} \mathcal{I}(k)=\displaystyle\int_{\mathcal{C}}\, g(s)\,\exp[-k\,f(s)]\,\mathrm{d}s, \label{sd.1} \end{equation} where $\mathcal{C}$ is a suitable integration path in the complex $s$-plane, $g(s)$ and $f(s)$ are functions which, for simplicity, will be assumed to be nonsingular, and $k$ will be intended as a ``large'' (in modulus) complex parameter. As is well known, the numerical evaluation of integrals of the kind in Eq. (\ref{sd.1}) is customarily required for solving several classes of physical problems, occurring in optics, quantum mechanics, statistical physics, fluid mechanics, and so on. In optics, the evaluation of several diffraction integrals is customarily carried out asymptotically by identifying the parameter $k$ as the wavenumber of the radiation\cite{born&wolf}. In quantum mechanics, the same role is played by the inverse of Planck's constant, while in fluid mechanics it is played by the Reynolds number\cite{berryLectures-89}. In the stationary phase treatment of diffraction integrals the values of the associated complex wavefield are asymptotically evaluated by taking the contributions coming from the stationary points of $f(s)$, each of them associated with a ``ray'' in the corresponding geometrical picture. Of particular importance is the birth and the death, as the spatial parameters ruling the ``phase'' function $\mathrm{i} f(s)$ vary, of ``evanescent'' rays across sets of codimension 1, named ``Stokes sets'' \cite{wrightJPA-80,berryNL-90}. The $\delta$-, or Weniger, transformation\cite{wenigerCPR-89,wenigerJMP-04,calicetiArXiv07} (WT for short) is particularly efficient for resumming factorially divergent asymptotic series well away from Stokes sets, as well as from sets where two or more saddles are symmetrically placed in the complex singulant space\cite{endnote31}.
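To make the role of the WT concrete, the following sketch (ours, and merely illustrative) applies the $\delta$ transformation $\delta_k^{(0)}(\beta,s_n)$, with the customary choices $\beta=1$ and remainder estimates $\omega_n=a_{n+1}$, to the factorially divergent Euler series $\sum_{n\geq 0} (-1)^n n!$, whose Borel sum is $e\,E_1(1)\approx 0.59634736$:

```python
from math import comb, factorial

def pochhammer(a, n):
    # (a)_n = a (a+1) ... (a+n-1)
    p = 1.0
    for i in range(n):
        p *= a + i
    return p

def weniger_delta(a, beta=1.0):
    """Weniger delta_k^{(0)} applied to the series with terms a[0], a[1], ...,
    using the remainder estimates w_n = a_{n+1}."""
    s, tot = [], 0.0
    for t in a:
        tot += t
        s.append(tot)                      # partial sums s_n
    k = len(a) - 2                         # highest usable order
    num = den = 0.0
    for j in range(k + 1):
        c = (-1)**j * comb(k, j) \
            * pochhammer(beta + j, k - 1) / pochhammer(beta + k, k - 1)
        num += c * s[j] / a[j + 1]
        den += c / a[j + 1]
    return num / den

# Euler's factorially divergent series sum_n (-1)^n n!
terms = [(-1)**n * factorial(n) for n in range(16)]
val = weniger_delta(terms)
print(val)   # close to 0.59634736, although the partial sums diverge wildly
```

Even though the partial sums of the series grow like $n!$, sixteen terms already reproduce the Borel sum to many digits, which is the ``extreme specialization'' to alternating sequences alluded to below.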
{ Unfortunately, as with other resummation techniques\cite{wenigerCPR-89,brezinski}, the WT fails to perform across Stokes sets. The reason for such a failure lies in the extreme ``specialization'' of the transformation itself, which requires, for a successful resummation, an alternating sign pattern of the sequence of the single terms of the series\cite{jentschuraCPC-99}. Several methods have been conceived for resumming nonalternating, slowly convergent or divergent, sequences \cite{jentschuraPRD-00,calicetiArXiv07}, some of them being based on the serial combination of various resummation techniques\cite{jentschuraCPC-99,aksenovCPC-03}. For the class of saddle-point integrals in Eq. (\ref{sd.1}), the marriage between hyperasymptotics\cite{berryPRSA-90,berryPRSA-91} (H for short) and the WT \cite{borghiPRE-08}, generating the so-called H-WT (which stands for hyperasymptotic-Weniger transformation), allows the WT to successfully operate also across Stokes sets. } Basically, the H-WT consists in the sequential application, to the integral in Eq. (\ref{sd.1}), of a classical hyperasymptotic treatment, as described in Ref. \cite{berryPRSA-91}, followed by the action of the WT on all asymptotic divergent series generated by H. In particular, the results obtained have shown how the 1st-order H-WT, for which only the first stage of H is applied before the WT, is able to provide relative errors several orders of magnitude smaller than those achievable via the use of full hyperasymptotic treatments, and with considerably lighter computational complexity and effort. A key aspect is that, differently from H, the first truncation operated on the starting asymptotic series need not be optimal in the sense of superasymptotics (i.e., truncation at the least term); rather, the corresponding truncation order, say $N$, must be used as a free parameter for the subsequent application of the WT. A question which was not addressed in Ref.
\cite{borghiPRE-08}, but mentioned only in the last sentence, is whether WT and H can be combined to higher orders in H, and if so, how the accuracy improves with the order. The present paper is aimed at giving a first answer to such a question. We shall limit our analysis only to the second stage of H. Further increases of the H-WT order would be achievable along the same guidelines outlined here. On the other hand, it should be noted how, on increasing the order of H, the number of asymptotic series associated with the corresponding remainder that have to be resummed grows exponentially for topologies involving more than two saddles and, at the same time, the number of free parameters (i.e., the truncation orders at each H step) increases linearly. Accordingly, from a mere computational viewpoint, it is mandatory to find a compromise between the H-WT order and the computational effort. Some limits of the 1st-order H-WT have already been emphasized in Ref. \cite{borghiPRE-08}, where asymptotic evaluations of saddle-point integrals for ``small'' values of the asymptotic parameter were considered. In such cases, in fact, increasing the parameter $N$ of the 1st-order H-WT does not necessarily lead to an improvement of the reached accuracy, but often results in the opposite, i.e., a worsening of it. It is precisely this scenario that we are interested in when developing the 2nd-level H-WT. Numerical experiments will be carried out on the class of saddle-point integrals already considered in the numerical sections of the first paper \cite{borghiPRE-08}. Moreover, asymptotic evaluations of the so-called swallowtail diffraction catastrophe \cite{berryPIO-80} will be proposed as a new numerical experiment. The swallowtail function is defined via Eq. (\ref{sd.1}) with $f(s)$ being a 5th-order polynomial with respect to $s$, thus involving a four-saddle network.
We will present a study of the accuracy achievable via H-WT asymptotic evaluations of the swallowtail diffraction catastrophe for points placed at the Stokes set, following the prescriptions by Berry and Howls\cite{berryNL-90}. In doing so, we will find that the corresponding asymptotic expanding coefficients can be expressed in closed-form terms. {For any practical implementation of the H-WT, a key role is played by the numerical evaluation of the corresponding hyperterminants\cite{berryPRSA-90,berryPRSA-91}, which are defined through suitable multiple integrals. For the lowest-order hyperterminant the exact analytical expression is available from the literature\cite{berryPRSA-90}, but unfortunately this is not true for higher-order hyperterminants, including those involved in the 2nd-level H-WT. In the present paper we solve the problem of the exact evaluation of the 2nd-level hyperterminant for a particular, but very important, choice of the hyperterminant parameters, which often occurs in the implementation of H for evaluating a wide class of saddle-point integrals. To the best of our knowledge, this is a new result, which also provides an interesting connection of such hyperterminants to the Meijer-G functions \cite{GR}. Moreover, although the closed-form evaluation of 2nd-level hyperterminants for arbitrary choices of their parameters seems to remain an open problem, in the present paper we find a semi-analytical representation which turns out to be suitable for numerical calculations via standard integration packages. } Similarly to what was done in Ref. \cite{borghiPRE-08}, one of our aims is to keep the paper reasonably self-contained. Accordingly, in the next section a brief review of H, up to the 2nd level, is given. As far as the WT is concerned, we believe that what is contained in Ref. \cite{borghiPRE-08}, together with the extensive bibliography, should be enough also for a nonexpert reader. For this reason, we do not repeat it in the present paper.
\section{Resuming Hyperasymptotics} \label{hyperAsymptotics} \subsection{Preliminaries and notations} \label{notations} For simplicity, we shall refer to the asymptotic evaluation of saddle-point integrals of the type in Eq. (\ref{sd.1}), where the set of saddle points of $f(s)$ will be denoted by $\mathcal{S}$ and the integration path $\mathcal{C}$ will be thought of as the union of a finite number of steepest descent arcs, each of which, say $\mathcal{C}_n$, passes through a contributive saddle point $s_n$, supposed to be simple. Accordingly, the quantity $\mathcal{I}(k)$ can generally be written as \begin{equation} \mathcal{I}(k)=\displaystyle\int_{\mathcal{C}}\,g(s)\,\exp[-k\,f(s)]\,\mathrm{d}s= \displaystyle\sum_{n \in \mathcal{S}'}\,\mathcal{I}^{(n)}(k), \label{sdReview.5} \end{equation} where $\mathcal{S}'$ denotes the subset of $\mathcal{S}$ containing all the contributive saddles, and \begin{equation} \mathcal{I}^{(n)}(k)= \displaystyle\int_{\mathcal{C}_n}\, g(s)\,\exp[-k f(s)]\,\mathrm{d}s. \label{sdReview.5.1} \end{equation} The last integral can be written as\cite{berryPRSA-91} \begin{equation} \mathcal{I}^{(n)}(k)=k^{-1/2}\,\exp(-kf_n)\,T^{(n)}(k), \label{sdReview.3} \end{equation} where $f_n=f(s_n)$, and where $T^{(n)}(k)$ can \emph{formally} be written through the following asymptotic series expansion: \begin{equation} T^{(n)}(k)=\displaystyle\sum_{r=0}^{\infty}\,k^{-r}\,T^{(n)}_r, \label{sdReview.4} \end{equation} the expanding coefficients $T^{(n)}_r$ being expressed via the integral representation \cite{berryPRSA-91} \begin{equation} T^{(n)}_r= \displaystyle\frac{(r-1/2)!}{2\pi\mathrm{i}}\, \oint_{n}\,\displaystyle\frac{g(s)}{[f(s)-f_n]^{r+1/2}}\,\mathrm{d}s, \label{sdReview.5.0} \end{equation} where the subscript $n$ denotes a small positive loop around the saddle $s_n$. \subsection{Development of H up to the second stage} \label{H} H starts by writing Eq.
(\ref{sdReview.4}) in the form \begin{equation} T^{(n)}(k)=\displaystyle\sum_{r=0}^{N-1}\,k^{-r}\,T^{(n)}_r+ R^{(n)}(k,N), \label{sdReview.5bis} \end{equation} where $N$ represents a positive integer and $R^{(n)}(k,N)=\sum_{r=N}^\infty\,k^{-r}\,T^{(n)}_r$ denotes the corresponding remainder which, due to the diverging character of the asymptotic series, turns out to be a \emph{diverging} quantity too. H is based on two fundamental results, found via a nontrivial analysis in Ref. \cite{berryPRSA-91}. The first is that the value of the expanding coefficients $T^{(n)}_r$ at the saddle $s_n$ is closely related to the values of the expanding coefficients $T^{(m)}_r$ at all those saddles, say $\{s_m\}$, which are \emph{adjacent} to $s_n$, via the following formal {resurgence} relation\cite{berryPRSA-91}: \begin{equation} T^{(n)}_r=\displaystyle\frac 1{2\pi\mathrm{i}}\, \displaystyle\sum_{m\in \mathcal{A}_n}\,(-1)^{\gamma_{nm}}\, \displaystyle\sum_{l=0}^\infty\, \displaystyle\frac{(r-l-1)!}{F^{r-l}_{nm}}\,T^{(m)}_l, \label{resurgence.1} \end{equation} where $\mathcal{A}_n$ denotes the set containing the indexes pertinent to all saddles adjacent to $s_n$, the quantities $F_{nm}$, called \emph{singulants}, are defined by \begin{equation} F_{nm}=f_m-f_n, \label{singulant} \end{equation} and the binary quantities $\gamma_{nm} \in \{0,1\}$ are obtained through a topological rule\cite{berryPRSA-91}. The other fundamental tool of H is the following integral representation of the remainder $R^{(n)}(k,N)$\cite{berryPRSA-91}: \begin{equation} \begin{array}{l} R^{(n)}(k,N)=\displaystyle\frac 1{2\pi\mathrm{i}}\, \displaystyle\sum_{m \in \mathcal{A}_n}\,\displaystyle\frac{(-1)^{\gamma_{nm}}}{(kF_{nm})^N}\\ \\ \times \displaystyle\int_0^\infty\, \mathrm{d}v\,\displaystyle\frac{v^{N-1}\,\exp(-v)}{1-\displaystyle\frac v{kF_{nm}}}\, T^{(m)}\left(\displaystyle\frac v{F_{nm}}\right).
\end{array} \label{remainder.2} \end{equation} Equations (\ref{sdReview.4})-(\ref{remainder.2}) allow hyperasymptotic expansions for the saddle integral in Eq. (\ref{sdReview.5.1}) to be built up in principle to any order\cite{berryPRSA-91}. For instance, the direct substitution of Eq. (\ref{sdReview.4}) into Eq. (\ref{remainder.2}) leads to the 1st-stage hyperasymptotic expansion \cite{borghiPRE-08}, \begin{equation} \begin{array}{lcl} T^{(n)}(k) = \displaystyle\sum_{r=0}^{N-1}\,k^{-r}\,T^{(n)}_r\\ \\ + \displaystyle\frac {(-1)^N}{2\pi\mathrm{i}}\, \displaystyle\sum_{m \in \mathcal{A}_n}\,(-1)^{\gamma_{nm}}\\ \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times \displaystyle\sum_{r=0}^{\infty}\, (-1)^r\,k^{-r}\,T^{(m)}_r\,K^{(1)}_{N-r}(-kF_{nm}), \end{array} \label{Istage.complete} \end{equation} where the function $K^{(1)}_{n}(\beta)$, called \emph{hyperterminant} of order 1\cite{berryPRSA-90,berryPRSA-91}, is defined through the integral \begin{equation} \begin{array}{lcl} K^{(1)}_{n}(\beta)= \displaystyle\frac 1{\beta^n}\, \displaystyle\int_0^\infty\, \mathrm{d}v\,\displaystyle\frac{v^{n-1}\,\exp(-v)}{1+\displaystyle\frac v\beta}, \end{array} \label{remainder.3.1} \end{equation} where, in order for it to converge, $n>0$. Moreover, it can be shown that\cite{borghiPRE-08} \begin{equation} K^{(1)}_{n}(\beta)= \exp(\beta)\,\displaystyle\frac{E_n(\beta)}{\beta^{n-1}}\,(n-1)!+(-1)^{n-1}\,\mathrm{i}\pi \epsilon\,\exp(\beta), \label{remainder.3.1.1} \end{equation} where $E_n(\cdot)$ denotes the exponential integral function \cite{GR}, while $\epsilon$ equals 1 if $\beta<0$ and zero otherwise. The presence of the term containing $\epsilon$ has to be ascribed to the evaluation of the integral in Eq. (\ref{remainder.3.1}), when $\beta<0$, in the Cauchy principal-value sense. Equation (\ref{Istage.complete}) represents the first hyperasymptotic stage, at which the divergence of the asymptotic series in Eq. 
(\ref{sdReview.4}) is traced back to the presence of adjacent saddles \cite{berryPRSA-91}. Furthermore, the asymptotic series in Eq. (\ref{Istage.complete}) are only formal, since for $r>N$ the terminant $K^{(1)}_{N-r}$ diverges. In Ref. \cite{borghiPRE-08}, Eq. (\ref{Istage.complete}) was taken as the starting point for introducing the H-WT. In particular, instead of using the WT directly on the single terms of the series in Eq. (\ref{sdReview.4}), it is employed to resum the asymptotic series, associated with all saddles $s_m$, with $m\in\mathcal{A}_n$, which appear in Eq. (\ref{Istage.complete}). Of course, since $r \le N$ in Eq. (\ref{Istage.complete}), it is mandatory that $N$ be left as a free parameter, in order for the WT to be able to decode the above asymptotic series. The 2nd-level H can be derived by truncating each of the asymptotic series in Eq. (\ref{Istage.complete}) at an order, say $M$, and by generating, for \emph{each} adjacent saddle $s_m$, with $m\in \mathcal{A}_n$, a list of asymptotic series associated with all saddles, say $s_h$, such that $h \in \mathcal{A}_m$. In Appendix \ref{IIstage}, for the reader's convenience, the derivation of the 2nd-level hyperasymptotic expansion of the integral in Eq. (\ref{sd.1}) is briefly recalled according to the formalism of Ref.~\cite{berryPRSA-91}.
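As an aside, for $\beta>0$ (where $\epsilon=0$) the closed form in Eq. (\ref{remainder.3.1.1}) is straightforward to validate against the defining integral in Eq. (\ref{remainder.3.1}); a minimal Python sketch of ours (the mpmath library is assumed; $E_n$ is computed via \texttt{mp.expint}):

```python
import mpmath as mp

mp.mp.dps = 30

def K1_integral(n, beta):
    # defining integral of Eq. (remainder.3.1), for beta > 0
    f = lambda v: v**(n - 1) * mp.exp(-v) / (1 + v / beta)
    return mp.quad(f, [0, mp.inf]) / beta**n

def K1_closed(n, beta):
    # closed form of Eq. (remainder.3.1.1); epsilon = 0 since beta > 0
    return (mp.exp(beta) * mp.expint(n, beta)
            * mp.factorial(n - 1) / beta**(n - 1))
```

No such elementary route exists for the 2nd-level hyperterminant, whose evaluation is the subject of the next section; before that, we recall the outcome of the second hyperasymptotic stage.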
In particular, it is found that \begin{equation} \begin{array}{lcl} T^{(n)}(k)=\displaystyle\sum_{r=0}^{N-1}\,k^{-r}\,T^{(n)}_r\\ \\+ \displaystyle\frac {(-1)^N}{2\pi\mathrm{i}}\, \displaystyle\sum_{m \in \mathcal{A}_n}\,(-1)^{\gamma_{nm}}\\ \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times \displaystyle\sum_{r=0}^{M-1}\, (-1)^r\,k^{-r}\,T^{(m)}_r\,K^{(1)}_{N-r}(-kF_{nm})\\ \\ + \displaystyle\frac{(-1)^{N+M}}{(2\pi\mathrm{i})^2}\, \displaystyle\sum_{m \in \mathcal{A}_n} \displaystyle\sum_{h \in \mathcal{A}_m}\,(-1)^{\gamma_{nm}+\gamma_{mh}}\,\\ \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times \displaystyle\sum_{r=0}^\infty\,k^{-r}T^{(h)}_r\, K^{(2)}_{M-r,N-M}\left(-kF_{nm};-\displaystyle\frac{F_{mh}}{F_{nm}}\right), \end{array} \label{IIstage.complete} \end{equation} where $K^{(2)}_{n,m}(\beta;\gamma)$, the hyperterminant of order 2, is now defined through the double integral \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;\gamma) = \displaystyle\frac 1{\beta^{n+m}}\,\\ \\ \times \displaystyle\int_0^\infty\, \displaystyle\int_0^\infty\, \mathrm{d}u\,\mathrm{d}v\, \exp(-u-\gamma\,v)\, \displaystyle\frac{u^{m-1} v^{n-1}}{\left(1+\displaystyle\frac u\beta\right)\,\left(1+\displaystyle\frac v u \right)}. \end{array} \label{terminant2} \end{equation} Similarly to what was done in Ref. \cite{borghiPRE-08}, Eq. (\ref{IIstage.complete}) can be used to give estimates of $T^{(n)}(k)$, as functions of the two free parameters $N$ and $M$, by resumming, via the WT, all the asymptotic series generated at the second stage of H that appear inside the double sum over $h$ and $m$. \section{On the evaluation of $K^{(2)}_{n,m}(\beta;\gamma)$} \label{K2} {The numerical evaluation of the hyperterminants represents a step of fundamental importance for any practical implementation of the H-WT algorithm. Unfortunately, differently from the lowest-order H-WT, for which the corresponding hyperterminants are achievable via the closed-form expression in Eq.
(\ref{remainder.3.1.1}), there are no analytical expressions available for higher-order hyperterminants. In a series of important papers, Olde Daalhuis~\cite{daalhiusJCAM-96,daalhiusJCAM-98} addressed the general problem of the evaluation of hyperterminants, up to arbitrary precision, through the use of convergent series representations based on hypergeometric functions. However, for the particular case of the 2nd-level hyperterminant, it seems that some results which are, to the best of our knowledge, new can be established. Starting from Eq. (\ref{terminant2}), which converges for $n>0$ and $m>0$, the hyperterminant can be written as \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;\gamma) = \displaystyle\frac 1{\beta^{n+m}}\,\\ \\ \times \displaystyle\int_0^\infty\, \displaystyle\int_0^\infty\, \mathrm{d}u\,\mathrm{d}v\, \exp(-u-\gamma\,v)\, \displaystyle\frac{u^{m} v^{n-1}}{\left(1+\displaystyle\frac u\beta\right)\,(u+v)}, \end{array} \label{K2.0.1} \end{equation} which, on formally expanding the factor $1/(1+u/\beta)$ as a geometric series, after some algebra takes the form \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;\gamma) = \displaystyle\frac {(-1)^{m}}{\beta^{n}} \displaystyle\sum_{k=m}^\infty\, \left(-\displaystyle\frac 1\beta\right)^k\,\\ \\ \times \displaystyle\int_0^\infty\, \displaystyle\int_0^\infty\, \mathrm{d}u\,\mathrm{d}v\, \exp(-u-\gamma\,v)\, \displaystyle\frac{u^{k} v^{n-1}}{u+v}=\\ \\ =(-1)^m\, \displaystyle\frac{(n-1)!}{(\beta\gamma)^{n}}\\ \\ \times \displaystyle\sum_{k=m}^\infty\, \left(-\displaystyle\frac 1\beta\right)^k\, \displaystyle\frac{k!}{k+n}\, F\left(n,1;k+n+1;1-\displaystyle\frac{1}{\gamma}\right), \end{array} \label{K2.0.2} \end{equation} where $F(\cdot,\cdot;\cdot;\cdot)$ denotes the hypergeometric function\cite{GR}. Although the series in Eq.
(\ref{K2.0.2}) is divergent, it can be decoded via Borel summation, i.e., by replacing the term $k!$ with its integral representation \begin{equation} k!=\displaystyle\int_0^\infty\,\mathrm{d}t\, \exp(-t)\,t^k, \label{K2.0.3} \end{equation} which, once substituted into Eq.~(\ref{K2.0.2}), leads to \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;\gamma) = (-1)^m\, \displaystyle\frac{(n-1)!}{(\beta\gamma)^{n}}\, \displaystyle\int_{0}^{\infty}\, \mathrm{d}t\, \exp(-t)\\ \\ \times \displaystyle\sum_{k=m}^{\infty}\, \left(-\displaystyle\frac t\beta\right)^k\, \displaystyle\frac{1}{k+n}\, F\left(n,1;k+n+1;1-\displaystyle\frac{1}{\gamma}\right). \end{array} \label{K2.0.3.1} \end{equation} Although, as we shall see in a moment, it is possible to express the series inside the last equation in closed form, it is better to carry out the evaluations for the cases $\gamma=1$ and $\gamma \ne 1$ separately. On setting $\gamma=1$ in Eq.~(\ref{K2.0.3.1}) we have \begin{equation} \begin{array}{l} K^{(2)}_{n,m}(\beta;1) = (-1)^m\,\displaystyle\frac{(n-1)!}{\beta^{n}}\\ \\ \times \displaystyle\int_{0}^{\infty}\, \mathrm{d}t\, \exp(-t)\, \displaystyle\sum_{k=m}^{\infty}\, \left(-\displaystyle\frac t\beta\right)^k\, \displaystyle\frac{1}{k+n}=\\ \\ = \displaystyle\frac{(n-1)!}{(m+n)\beta^{n}}\\ \\ \times \displaystyle\int_0^\infty\, \mathrm{d}t\, \exp(-t)\, \left(\displaystyle\frac t\beta \right)^m\, F\left(m+n,1;m+n+1;-\displaystyle\frac t\beta\right). \end{array} \label{K2.0.3.2} \end{equation} The integral in Eq.~(\ref{K2.0.3.2}) can be evaluated by using the representation of the hypergeometric function given by formula 9.34.7 of Ref. \cite{GR}.
In particular, it turns out that \begin{equation} \begin{array}{l} K^{(2)}_{n,m}(\beta;1)= \displaystyle\frac{(n-1)!}{\beta^{n}}\,\\ \\ \times \displaystyle\int_0^\infty\, \mathrm{d}t\, \exp(- t)\, \left(\displaystyle\frac t\beta \right)^{m+1}\, G^{12}_{22} \left(\displaystyle\frac t\beta\left|\begin{array}{l} -n-m,\,-1\\-1,\,-n-m-1\end{array}\right.\right), \end{array} \label{hyper2.10.3} \end{equation} where $G^{mn}_{pq}(\cdot)$ denotes the Meijer function\cite{GR}. Finally, by using formulas 9.31.5, 7.813.1, and 9.31.2 of \cite{GR}, after some algebra it is found that \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;1)& = &(n-1)!\, G^{31}_{23}\left(\beta\left|\begin{array}{c} 1-n-m,1\\1-n,1-n-m,0\end{array}\right.\right). \end{array} \label{terminant2.1} \end{equation} Equation~(\ref{terminant2.1}) represents one of the main results of the present paper. As we shall see in the numerical section, in applying the 2nd-level H-WT the evaluation of the hyperterminants $K^{(2)}_{n,m}(\beta;\gamma)$ is often required for $\gamma=1$. This happens whenever the contributive saddle $s_{n}$ turns out to be adjacent to itself after two hyperasymptotic stages, i.e., when $h=n$ in Eq. (\ref{IIstage.complete})\cite{berryPRSA-91}. For $\gamma \ne 1$, the series inside the integral in Eq.~(\ref{K2.0.2}) can still be expressed in closed form, although, unfortunately, the subsequent integral cannot. However, a semi-analytical expression, which turns out to be suitable for evaluation via standard numerical integration packages, can be derived.
To this end, Eq.~(\ref{K2.0.2}) is first rewritten as \begin{equation} \begin{array}{lcl} K^{(2)}_{n,m}(\beta;\gamma) = (-1)^{m}\,\displaystyle\frac{(n-1)!}{(\beta\gamma)^{n}}\\ \\ \times \left[ \mathcal{S}- \displaystyle\sum_{k=0}^{m-1}\, \left(-\displaystyle\frac 1\beta\right)^k\, \displaystyle\frac{k!}{k+n}\, F\left(n,1;k+n+1;\displaystyle\frac{\gamma-1}{\gamma}\right) \right], \end{array} \label{gn1.1} \end{equation} where \begin{equation} \begin{array}{lcl} \mathcal{S}= \displaystyle\sum_{k=0}^\infty\, \left(-\displaystyle\frac 1\beta\right)^k\, \displaystyle\frac{k!}{k+n}\, F\left(n,1;k+n+1;\displaystyle\frac{\gamma-1}{\gamma}\right), \end{array} \label{gn1.2} \end{equation} so that the task is to evaluate the series in Eq.~(\ref{gn1.2}) for $\gamma \ne 1$. On substituting from Eq.~(\ref{K2.0.3}) into Eq.~(\ref{gn1.2}) we have \begin{equation} \begin{array}{lcl} \mathcal{S}= \displaystyle\int_{0}^{\infty}\,\mathrm{d}t\, \exp(-t)\,\\ \\ \times \displaystyle\sum_{k=0}^\infty\, \left(-\displaystyle\frac t\beta\right)^k\, \displaystyle\frac{F\left(n,1;k+n+1;1-\displaystyle\frac{1}{\gamma}\right)}{k+n}=\\ \\ =\gamma\,\displaystyle\int_{0}^{\infty}\,\mathrm{d}t\, \exp(-t)\,\\ \\ \times \displaystyle\sum_{k=0}^\infty\, \left(-\displaystyle\frac t\beta\right)^k\, \displaystyle\frac{F(1+k,1;n+1+k;1-\gamma)}{k+n}, \end{array} \label{gn1.3} \end{equation} where use has been made of the relation [see Ref.~\cite{PrudnikovIII}, p.~347] \begin{equation} \begin{array}{lcl} F\left(n,1;k+n+1;1-\displaystyle\frac{1}{\gamma}\right) =\gamma\,F(1+k,1;n+1+k;1-\gamma).
\end{array} \label{gn1.4} \end{equation} Finally, on writing Eq.~(\ref{gn1.3}) as \begin{equation} \begin{array}{lcl} \mathcal{S}=\displaystyle\frac\gamma n\, \displaystyle\int_{0}^{\infty}\,\mathrm{d}t\,\exp(-t)\,\\ \\ \times \displaystyle\sum_{k=0}^\infty\, \displaystyle\frac{(1)_{k}}{k!}\, \displaystyle\frac{(n)_{k}}{(n+1)_{k}}\, \left(-\displaystyle\frac t\beta\right)^k\, F(1+k,1;n+1+k;1-\gamma), \end{array} \label{gn1.5} \end{equation} where $(\cdot)_{k}$ denotes the Pochhammer symbol, formula 6.7.1.8 of Ref.~\cite{PrudnikovIII} gives at once \begin{equation} \begin{array}{lcl} \mathcal{S}=\displaystyle\frac\gamma n\, \displaystyle\int_{0}^{\infty}\,\mathrm{d}t\, \displaystyle\frac{\exp(-t)}{1+\displaystyle\frac t\beta}\, F\left(1,1;n+1;1-\displaystyle\frac\gamma{1+\displaystyle\frac t\beta}\right). \end{array} \label{gn1.6} \end{equation} Notice that, when $\mathrm{Re}[\beta]>0$, the function $\mathcal{S}$ can also be evaluated through the alternative form \begin{equation} \begin{array}{lcl} \mathcal{S}=\displaystyle\frac{\beta\gamma}n\, \displaystyle\int_{0}^{1}\,\mathrm{d}p\, \displaystyle\frac {\exp\left[-\beta\left(\displaystyle\frac 1p-1\right)\right]}{p}\, F(1,1;n+1;1-\gamma p). \end{array} \label{gn1.7} \end{equation} Although it seems that the above expressions cannot be further simplified, the numerical evaluation of the function $\mathcal{S}$ can be carried out with high accuracy by using standard integration packages. Finally, it should be stressed that, for $\beta<0$, the evaluation of the double integral defining $K^{(2)}_{n,m}(\beta;\gamma)$ has to be performed, with respect to the $u$ variable, in the Cauchy principal-value sense, in order to overcome the singularity placed at $u=-\beta$. This, in turn, implies that an extra term must be added to the result.
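For $\gamma=1$ and $\mathrm{Re}[\beta]>0$, where no principal-value treatment is required, the closed form in Eq. (\ref{terminant2.1}) can be cross-checked against a brute-force double quadrature of Eq. (\ref{terminant2}). A Python sketch of ours (mpmath assumed; the tiny parameter perturbation is only a numerical device, sidestepping the logarithmic case of the Meijer function that arises when its parameters differ by integers):

```python
import mpmath as mp

mp.mp.dps = 25

def K2_quad(n, m, beta, gamma=1):
    # brute-force double quadrature of Eq. (terminant2); Re(beta) > 0 is
    # assumed, so that no principal-value treatment is needed
    def outer(u):
        inner = mp.quad(lambda v: mp.exp(-gamma * v) * v**(n - 1) / (1 + v / u),
                        [0, mp.inf])
        return mp.exp(-u) * u**(m - 1) / (1 + u / beta) * inner
    return mp.quad(outer, [0, mp.inf]) / beta**(n + m)

def K2_meijer(n, m, beta, eps=mp.mpf(10)**-8):
    # closed form valid for gamma = 1, Eq. (terminant2.1); n and m are
    # perturbed by eps to avoid the logarithmic Meijer-G case
    nn, mm = n + eps, m + 2 * eps
    return mp.gamma(nn) * mp.meijerg([[1 - nn - mm], [1]],
                                     [[1 - nn, 1 - nn - mm, 0], []], beta)
```

In our tests the two routes agree to well within the accuracy needed in the experiments below; the semi-analytical representation of Eqs. (\ref{gn1.1}) and (\ref{gn1.6}) can be checked in the same fashion for $\gamma\ne 1$, while for $\beta<0$ the quadrature route must instead be implemented in the principal-value sense and supplemented by the extra term just mentioned.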
In Appendix~\ref{appC} such term is analytically evaluated starting from the definition in Eq.~(\ref{terminant2}), and turns out to be \begin{equation} \begin{array}{l} \mathrm{i}\pi\,(-1)^{n+m-1}(n-1)!\displaystyle\frac{\exp[\beta(1-\gamma)]} {(-\beta\gamma)^{n-1}} \,E_{n}(-\beta\gamma). \end{array} \label{extraTerm} \end{equation} \\ All subsequent numerical experiments will be carried out within the \emph{Mathematica} language. } \section{Numerical experiments} \label{numericalExperiments} \subsection{Evaluation of the Airy function across the Stokes line} \label{airy} { Consider the evaluation of the Airy function, defined as \begin{equation} \mathrm{Ai}(x)=\displaystyle\frac1{2\pi}\, \displaystyle\int_{\mathcal{C}}\, \exp\left[\mathrm{i}\left(\displaystyle\frac{s^3}3+xs\right)\right]\, \mathrm{d}s, \label{airy.1} \end{equation} which is of the form given in Eq. (\ref{sd.1}) with $g(s)=1/(2\pi)$, $f(s)=-\mathrm{i} (s^3/3+xs)$, and $k=1$. The detailed analysis of the saddle topology, as well as of the expanding coefficients $T^{(n)}_r$, has been summarized in Ref. \cite{borghiPRE-08}, so it will not be repeated here. We only recall that the two saddles are $s_1= (-x)^{1/2}$ and $s_2=-s_1$, and that $\mathcal{A}_{{1\atop 2}}=\{{2\atop 1}\}$, $\gamma_{12}=0$, $\gamma_{21}=1$. We focus our attention on the evaluation of the Airy function across the Stokes line\cite{boyd-99}, i.e., for $\arg\{x\}=2\pi/3$, in order to compare the performance of the 2nd-level H-WT with that displayed by the 1st-order H-WT in the same situation. More precisely, we write the argument of the Airy function as $x=(3/4\times F)^{2/3}\,\exp(\mathrm{i}2\pi/3)$, where $F$ is a real positive parameter whose value coincides with the singulant $F_{12}$. The study of the asymptotic evaluation of the Airy function across its Stokes line has played a pivotal role in the development of several asymptotic techniques, mainly in light of the relative simplicity of the involved saddle topology.
Such a simplicity could help in grasping, where possible, some conceptual aspects related to the use of the H-WT. Differently from Ref.~\cite{borghiPRE-08}, where the relative error values were displayed via the use of tables, in the present paper we resort to graphical visualizations, due to the presence of the two ``free'' parameters $N$ and $M$. In the first experiment, whose results are shown in Figure~\ref{FigErrorAiryF16}, the Airy function is evaluated for $F=16$. Note that the same experiment was carried out in Ref.~\cite{berryPRSA-90} via the use of H. The values of the relative error, obtained through the 1st-level H-WT, are shown, as black dots, versus the values of $N$, reported on the abscissa axis. For each value of $N$, the values of the relative error obtained via the 2nd-level H-WT, with $M \in [3,N-1]$, are also plotted and, for the sake of clarity, are joined by lines of different colors, each corresponding to a different value of $N$ and departing from $N$ itself. This can be noted from the figure, where it is immediately seen that the higher the $N$, the longer the corresponding coloured ``leg'' is. From a first look at the figure, it appears that the relative error, obtained with both the 1st- and the 2nd-level H-WT, is bounded from below. We shall find that all subsequent numerical experiments display the same feature. As a first remark, it should be noted that the improvement of the estimate accuracy induced by the 2nd-level H-WT, with respect to that obtained via the 1st-level one of the same order, appears, at least for values of $N$ that are not too large, not to adequately repay the unavoidable increase of the computational complexity required by the application of the 2nd-level transformation. For example, it is seen from Fig.
\ref{FigErrorAiryF16} that a relative error of the order of $10^{-18}$, achieved through the 2nd-level H-WT with $N=11$ and $M=8$, would be reached via a 1st-order H-WT by letting $N=13$, but with a considerable saving of computational effort. The above example clearly suggests the use of the 2nd-level H-WT only in those cases where the best accuracy attainable via the 1st-level H-WT turns out to be inadequate. This happens, for instance, when one attempts to evaluate the integral beyond the asymptotic realm. To highlight this aspect, Figure~\ref{FigErrorAiryF14To2} shows the same as in Fig.~\ref{FigErrorAiryF16}, but for {a decreasing sequence of values of $F$, namely} 14 (a), 10 (b), 6 (c), and 2 (d). In particular, in Fig.~\ref{FigErrorAiryF14To2}(d), where the Airy function argument is located at a distance $\simeq 1$ from the origin of the complex plane, the 1st-level H-WT provides a best error of the order of $10^{-4}$, achieved for $N=4$. Higher accuracies are not allowed because the information gained at the first H stage turns out to be no longer sufficient to generate WT-resummable sequences. The 2nd-level H-WT, on the other hand, provides a best error of the order of $10^{-10}$, which is attained for $(N,M)=(15,11)$. Some intuitive insight into the resummation process associated with the H-WT can be gained by noting that a lower-bounded error is an intrinsic imprint of superasymptotic and hyperasymptotic resummations, whereas it is not generally featured by the application of the WT to alternating factorially divergent series\cite{wenigerCPR-89}. Accordingly, one should be inclined to think that such an error behavior could be ascribed to the presence of the ``regularization'' step operated by H on the raw input data.
{ Speaking within a more general context, this should be somewhat related to the possible presence of nonanalytic, nonperturbative correction terms which cannot be captured simply by resummation processes, but rather require the use of ``generalized nonanalytic expansions'' \cite{calicetiArXiv07}. } \\ In a second experiment concerning the Airy function, the asymptotic parameter $F$ is varied within the interval $[2,4]$ and, for each value of $F$, an exhaustive search for the optimal values of the truncations $N$ and $(N,M)$, which minimize the 1st- and 2nd-level relative errors, respectively, is performed. The results are shown in Fig.~\ref{FigOptimalErrorAiry}, where the optimal relative errors obtained via the 1st- (open circles) and the 2nd-level (dots) H-WT are shown as functions of $F$. The values of the optimal truncation $N$ for the 1st-level H-WT are also reported, versus $F$, in Fig.~\ref{FigOptimalNAiry}, while those of $N$ and $M$, for the 2nd-level H-WT, are reported in Fig.~\ref{FigOptimalNMAiry}(a) and (b), respectively. We will come back to the above results later. \\ In concluding the present section, however, we want to provide a table of explicit values obtained through the use of the 2nd-level H-WT. We choose to evaluate the Airy function for $F=2$, for which the optimal setting of the truncation parameters turns out to be $(N,M)=(15,11)$. The preliminary step is the evaluation, through a simple WT, of the contribution to the Airy integral coming from the saddle $s_2$. The result is shown in Table \ref{table.1}. Furthermore, the subsequent action of the 2nd-level H-WT on the saddle $s_1$ is shown in Table \ref{table.2}, where the complete estimates of the Airy function are reported together with the corresponding values of the truncation $M \in [3,14]$, with $N=15$.
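The lower-bounded error featured by all these experiments can be reproduced, in its simplest superasymptotic form, without any transformation at all: it suffices to truncate the plain Poincar\'e series of $\mathrm{Ai}$ at increasing orders and compare with a reference routine. A Python sketch of ours (mpmath assumed; a real positive argument, away from the Stokes line, is used for simplicity):

```python
import mpmath as mp

mp.mp.dps = 30

def ai_series_relerr(x, N):
    # relative error of the N-term Poincare series of Ai(x), for real x > 0
    x = mp.mpf(x)
    xi = mp.mpf(2) / 3 * x**mp.mpf("1.5")   # the singulant
    u, s = mp.mpf(1), mp.mpf(0)
    for k in range(N):
        s += (-1)**k * u / xi**k
        # standard coefficient recursion: u_{k+1}/u_k
        u *= mp.mpf((6*k + 1) * (6*k + 3) * (6*k + 5)) / ((2*k + 1) * 216 * (k + 1))
    approx = mp.exp(-xi) * s / (2 * mp.sqrt(mp.pi) * x**mp.mpf("0.25"))
    return abs(approx / mp.airyai(x) - 1)

errs = [ai_series_relerr(5, N) for N in range(1, 31)]
floor = min(errs)   # of the order of exp(-2 xi): the superasymptotic floor
```

The minimum relative error occurs near $N\simeq 2\xi$ and scales as $\exp(-2\xi)$; it is precisely this superasymptotic floor that H, and a fortiori the H-WT, is designed to break through.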
} \subsection{Instanton integral} \label{instanton} The second numerical experiment concerns the evaluation of the instanton integral \begin{equation} \mathcal{N}(k)=k^{1/2}\,\displaystyle\int_{-\infty}^{+\infty}\, \exp[-k (s^2-1)^2]\,\mathrm{d}s, \label{inst.1} \end{equation} with $k>0$, {already considered in Ref. \cite{berryPW-93} as a simplified prototype for the modeling of instanton tunneling between symmetric double wells. It was shown in Ref. \cite{borghiPRE-08} that the integral in Eq. (\ref{inst.1}) can be written as } \begin{equation} \mathcal{N}(k)=2\,k^{1/2}\, \mathrm{Re}\left\{\mathcal{I}(k)\right\}, \label{inst.2} \end{equation} where, by referring to Eq. (\ref{sd.1}), $g(s)=1$, $f(s)=(s^2-1)^2$, and where $\mathcal{C}$ is the steepest descent path connecting the points $-\mathrm{i}\infty$ and $+\infty$ via the lines $\mathrm{Im}\{s\}\le 0$ and $\mathrm{Re}\{s\}\ge 0$. The complete saddle topology, as well as the expressions of the expanding coefficients associated with all saddles, has been described in Ref.~\cite{borghiPRE-08}. In particular, there are three saddles, $s_1=-1$, $s_2=0$, and $s_3=1$, with $\mathcal{A}_{1}=\{2\}$, $\mathcal{A}_{2}=\{1,3\}$, and $\mathcal{A}_{3}=\{2\}$. {The saddles involved in the evaluation of $\mathcal{I}(k)$ are $s_2$ and $s_3$, but only the latter requires an H-WT treatment, since the associated singulant is $F_{32}=1 >0$, and the corresponding asymptotic series turns out to be nonalternating.} Furthermore, $\gamma_{12}= 1$, $\gamma_{21}= \gamma_{23}= 0$, and $\gamma_{32}=1$, while we recall that the integral in Eq. (\ref{inst.1}) can be expressed in closed form via \begin{equation} \mathcal{N}(k)= \displaystyle\frac{\pi\,\sqrt k}{2}\,\exp(-k/2)\, \left[ I_{-1/4}\left(\displaystyle\frac k2\right) + I_{1/4}\left(\displaystyle\frac k2\right) \right], \label{inst.4} \end{equation} where $I_n(\cdot)$ denotes the $n$th-order modified Bessel function of the first kind.
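Equation (\ref{inst.4}) offers a convenient source of reference values; as a sanity check, it can be compared with a direct numerical quadrature of Eq. (\ref{inst.1}). A Python sketch of ours (mpmath assumed):

```python
import mpmath as mp

mp.mp.dps = 25

def N_quad(k):
    # direct quadrature of Eq. (inst.1); the points s = -1, 0, 1 are passed
    # explicitly since the integrand peaks at the two wells
    f = lambda s: mp.exp(-k * (s**2 - 1)**2)
    return mp.sqrt(k) * mp.quad(f, [-mp.inf, -1, 0, 1, mp.inf])

def N_bessel(k):
    # closed form of Eq. (inst.4)
    return (mp.pi * mp.sqrt(k) / 2 * mp.exp(-k / 2)
            * (mp.besseli(-mp.mpf(1) / 4, k / 2)
               + mp.besseli(mp.mpf(1) / 4, k / 2)))
```

The two routes agree to full working precision, including at the small values of $k$ considered below, where the asymptotic treatment is most delicate.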
The first experiment concerns the evaluation of $\mathcal{N}(1/2)$ via the 1st- and the 2nd-level H-WT. In Fig.~\ref{FigInstantonek1ov2} it is seen that the 2nd-level relative error is bounded, with a minimum value of the order of $10^{-3}$, achieved for $(N, M)=(6,5)$. {On the contrary, the 1st-level H-WT turns out to be completely inadequate to provide a reasonably accurate estimate of the function, due to the very low value of $k$.} The search for optimal values has also been carried out in the present case, but using, as the varying asymptotic parameter, $k \in [1/2,3]$, i.e., where the 1st-order H-WT displays the worst results in terms of accuracy, as shown in Fig.~2a of Ref.~\cite{borghiPRE-08}. The error values are shown in Fig.~\ref{FigOptimalErrorInstanton}, versus $k$, while the optimal settings of $N$ and of $(N,M)$ are plotted, against $k$, in Figs.~\ref{FigOptimalNInstanton} and~\ref{FigOptimalNMInstanton}, respectively. {It is now worth comparing the results pertinent to the Airy and the $\mathcal{N}(k)$ functions. What we are going to show may seem at first sight somewhat surprising, but it gives a possible first hint toward the understanding of the H-WT mechanisms. For simplicity we shall refer to the 1st-level transformation, but the results also apply to higher-order levels. In Fig.~\ref{FigComparisonAiryInst}, the values of the relative error obtained for the Airy function (dots) and for the instanton function (solid curve) are plotted, versus $N$, when the values of the parameter $F$ and of the parameter $k$ are numerically equal. In particular, panel (a) corresponds to $k=F=3$, (b) to 7, (c) to 12, and (d) to 20. It is clearly seen that the behavior of the relative error follows basically the same law.
To give a possible explanation of this, in Fig.~\ref{FigCOmpAiryInst} a pictorial representation of the complete saddle network and of the complex integration path involved in the evaluation of the Airy (a) and of the instanton (b) functions is plotted. In both pictures, the black dot denotes the saddle for which the H-WT is required. Although the two saddle distributions are clearly different, they present some common features that, together with Eq. (\ref{Istage.complete}), are enough to justify what happens in Fig.~\ref{FigComparisonAiryInst}. Each of the ``black'' saddles is adjacent to a single saddle. For the Airy function $s_{1}$ is adjacent to $s_{2}$, while for the instanton function $s_{3}$ is adjacent to $s_{2}$. The values of the corresponding singulants are $F$ and 1, respectively. The use of the resurgence relation in Eq.~(\ref{resurgence.1}) now gives, for the two ``black'' saddles, \begin{equation} \begin{array}{lcr} T^{(1)}_r \propto \displaystyle\frac{(r-1)!}{F^r}, \end{array} \label{airy1st} \end{equation} for the Airy function and \begin{equation} \begin{array}{lcr} T^{(3)}_r \propto \displaystyle\frac{(r-1)!}{k^r}, \end{array} \label{inst1st} \end{equation} for the instanton function. From the above equations it is seen that the behavior of the expanding coefficients follows the same asymptotic law as soon as $F=k$. At the same time, however, the above equality guarantees that the asymptotic laws for the expanding coefficients corresponding to the adjacent saddles are also identical. In fact, for the Airy function the saddle adjacent to $s_{2}$ is $s_{1}$ itself, with a singulant value of $-F$. As far as the instanton function is concerned, the saddles adjacent to $s_{2}$ are $s_{1}$ and $s_{3}$, but for both of them the singulant value is $-1$.
Accordingly, the use of Eq.~(\ref{resurgence.1}), together with the condition $F=k$, provides again an equivalence between the asymptotic laws of $T^{(2)}_{r}$ for the Airy function and $T^{(2)}_{r}$ for the instanton function. Finally, on using Eq. (\ref{Istage.complete}) it is not difficult to convince oneself that the retrieving process is the same for the two functions at the 1st level\cite{endnote30}. Leaving a deeper understanding of this phenomenon to future investigations, it is worthwhile to point out here that an immediate consequence of the above described ``topological equivalence'' could be the restriction of the study of the H-WT retrieving performances to a few classes of prototype test cases. } \subsection{Swallowtail diffraction catastrophe} \label{sw} As a last numerical experiment we consider asymptotic evaluations of the so-called swallowtail diffraction catastrophe \cite{berryPIO-80,connorJPA-84,nye,nyePRSA-07}, which is defined through the following integral: \begin{equation} S(x,y,z)= \displaystyle\int_{\mathcal{C}}\, \exp\left[\mathrm{i}\left( \displaystyle\frac{s^5}5+x\,\displaystyle\frac{s^3}3+y\,\displaystyle\frac{s^2}2+zs\right)\right]\, \mathrm{d}s, \label{sw.1} \end{equation} which is of the form given in Eq. (\ref{sd.1}) with $g(s)=1$, $f(s)=-\mathrm{i} (s^5/5+x s^3/3+y s^2/2+zs)$, and $k=1$. The integration path $\mathcal{C}$ can be thought of as the union of steepest descent paths approaching, for $|s|\gg 1$, the directions $\varphi=(2n+1/2)\pi/5$, with $n=0,1,...,4$. Although a systematic treatment of the swallowtail asymptotics, along the general classical rules recalled in Sec. \ref{notations}, can be derived by paralleling the analysis carried out, for the Pearcey function, in Ref.~\cite{berryPRSA-91}, to the best of our knowledge it is not present in the current literature.
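We note in passing that the whole saddle network is generated by the roots of $f'(s)=-\mathrm{i}\,(s^4+x s^2+y s+z)$, so that it can be explored numerically at negligible cost. A Python sketch of ours (mpmath assumed) for the Stokes-set triplet $(x,y,z)=(0,\kappa^{3/2},\kappa^2\times 0.23012)$ with $\kappa=2$ considered below, where the numerical constant is the truncated value taken from Ref. \cite{berryNL-90}:

```python
import mpmath as mp

mp.mp.dps = 20

kappa = mp.mpf(2)
x, y, z = mp.mpf(0), kappa**mp.mpf("1.5"), mp.mpf("0.23012") * kappa**2

# the saddles are the four roots of f'(s) = -i (s^4 + x s^2 + y s + z)
saddles = mp.polyroots([1, 0, x, y, z])
```

The four roots consist of a complex-conjugate pair and two real saddles, reproducing the saddle values listed below.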
As shown in appendix \ref{swT}, all expanding coefficients $T^{(n)}_r$ are given by \begin{equation} \begin{array}{l} T^{(n)}_r= \displaystyle\frac {(5\mathrm{i})^{r+1/2}\,(r-1/2)!} {(10s^{3}_{n}+5s_{n}x+5y/2)^{5r/3+1/2}}\, B^{(r+1/2)}_{2r}(\alpha,\beta), \end{array} \label{generatingC.2} \end{equation} where \begin{equation} \begin{array}{l} \alpha=\displaystyle\frac{5s_{n}}{(10s^{3}_{n}+5s_{n}x+5y/2)^{1/3}},\\ \\ \beta= \displaystyle\frac {10 s^{2}_{n} + {5x}/{3}}{(10s^{3}_{n}+5s_{n}x+5y/2)^{2/3}}, \end{array} \label{sw.8.1.1.1.1.1.1.1.1} \end{equation} and where the polynomials $B^{(\lambda)}_{n}(u,v)$ are defined via the generating function formula \begin{equation} \begin{array}{l} \displaystyle\sum_{n=0}^\infty\, {t^n}\,B^{(\lambda)}_n(u,v)= \displaystyle\frac 1{(t^3+u t^2+v t+1)^\lambda}. \end{array} \label{generatingC.3} \end{equation} It is not difficult to prove that Eq. (\ref{generatingC.3}) allows the numerical evaluation of the polynomials $B^{(\lambda)}_n(u,v)$ to be efficiently performed via the use of the following recurrence rule, whose derivation is outlined in Appendix \ref{recurrenceRule}: \begin{equation} \begin{array}{l} nB^{(\lambda)}_n=-(n-3+3\lambda)\,B^{(\lambda)}_{n-3}-u\,(n-2+2\lambda)\,\,B^{(\lambda)}_{n-2}\\ \\-v\,(n-1+\lambda)\,B^{(\lambda)}_{n-1}, \end{array} \label{generatingC.4} \end{equation} with the triggering values $B^{(\lambda)}_0(u,v)=1$, $B^{(\lambda)}_1(u,v)=-\lambda v$, and $B^{(\lambda)}_2(u,v)=-u\lambda+v^2\lambda(\lambda+1)/2$. The numerical experiments we are going to illustrate concern asymptotic evaluations of $S(x,y,z)$ at points belonging to the corresponding Stokes set, {which has been extensively studied in Ref. \cite{berryNL-90} (see, in particular, Fig. 3 of this reference)}. Accordingly, the triplets $(x,y,z)$ have been chosen following the prescriptions given in Ref.
\cite{berryNL-90}, in order to investigate points at the intersection between the Stokes surface and the plane $x=0$, along the branch corresponding to $y>0$ and $z>0$. This leads to triplets of the form $(x,y,z)=(0, \kappa^{3/2}, \kappa^2\times\,0.23012\ldots)$, with $\kappa$ being a positive parameter\cite{berryNL-90}. We start by considering the case $\kappa=2$. The saddle topology is constituted by four saddles, which are listed below together with the corresponding list of adjacent ones: \begin{equation} \begin{array}{lcl} s_1=0.8062\ldots- \mathrm{i}\,1.2357\ldots , & & \mathcal{A}_1=\{4\},\\ &&\\ s_2=s^*_1, & & \mathcal{A}_2=\{4\},\\ &&\\ s_3=-1.2828\ldots, & & \mathcal{A}_3=\{4\},\\ &&\\ s_4=-0.3296\ldots, & & \mathcal{A}_4=\{1,2,3\}, \end{array} \label{sw.1.1} \end{equation} with the orientation matrix being \begin{equation} \{\gamma_{nm}\}= \left[ \begin{array}{cccc} \cdot & \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot & 0 \\ \cdot & \cdot & \cdot & 0 \\ 0 & 1 & 1 & \cdot \end{array} \right], \label{sw.1.2} \end{equation} {where the values of $\gamma_{nm}$ which are not relevant for the present experiment have been replaced by dots.} {Three of the four saddles, namely $s_{2}$, $s_{3}$ and $s_{4}$, do contribute to the integral. In particular,} the integration path consists of the union of three steepest-descent arcs, the first connecting the points $\infty\,\exp(\mathrm{i}9\pi/10)$ and $\infty\,\exp(\mathrm{i}13\pi/10)$ passing through $s_3$, the second connecting the point $\infty\,\exp(\mathrm{i}13\pi/10)$ to the saddle $s_2$, passing through $s_4$, and the third connecting the saddle $s_2$ to the point $\infty\,\exp(\mathrm{i}\pi/10)$. Accordingly, the Stokes phenomenon occurs via the so-called \emph{saddle connection} between saddles $s_4$ and $s_2$\cite{berryNL-90}. {A pictorial representation of the topology described above is given, for the reader's convenience, in Fig.
\ref{FigSwPaths}.} {Before showing the numerical results about the performances of the 1st- and the 2nd-level H-WT, it is worth giving some details about the way the saddle topology of the swallowtail integral influences the retrieving capabilities of the WT. We refer, in particular, to the contributions, to the swallowtail integral, of $s_{3}$ and $s_{4}$. As far as the former is concerned, there is only one adjacent saddle, $s_{4}$, and, since the corresponding singulant $F_{34}=\mathrm{i}\,0.602168\ldots$ is purely imaginary, it turns out that $T^{(3)}_{r} \propto (-\mathrm{i})^{r}\,(r-1)!/|F_{3,4}|^{r}$, thus allowing the WT to operate the resummation. Similar considerations can be made for the contributing saddle $s_{2}$. The situation is somewhat different for the saddle $s_{4}$, which is connected to $s_{2}$. In fact, from the topology described above, it turns out that the adjacent saddle $s_{3}$ is dominant, i.e., presents the minimum value of $|F_{4,m}|$, with $m \in \mathcal{A}_{4}$. Accordingly, one would also expect for $T^{(4)}_{r}$ an asymptotic ``factorial divided by power'' law, similar to that corresponding to $T^{(3)}_{r}$, and, since $F_{4,3}=-\mathrm{i}\,0.602168\ldots$, one would expect the WT to be able to resum the corresponding asymptotic series. This, on the contrary, does not happen, because of the presence of the other two, nondominant, saddles $s_{1}$ and $s_{2}$, symmetrically placed in the complex singulant space. This can be explained by writing Eq.~(\ref{resurgence.1}) explicitly as \begin{equation} T^{(4)}_r\approx \displaystyle\frac {(r-1)!}{2\pi\mathrm{i}}\, \left[ -\displaystyle\frac{T^{(3)}_{0}}{F^{r}_{43}}+ \left( \displaystyle\frac{T^{(1)}_{0}}{F^{r}_{41}}- \displaystyle\frac{T^{(2)}_{0}}{F^{r}_{42}} \right) \right], \label{resurgence.1Bis} \end{equation} where $T^{(2)}_{0}=\mathrm{i}[T^{(1)}_{0}]^{*}$.
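The quantities just discussed lend themselves to simple numerical cross-checks. The short pure-Python sketch below (an illustration written for this note, not the code used for the figures; the Durand--Kerner root finder and the truncated constant $0.23012$ are choices made here) evaluates the $B^{(\lambda)}_n$ polynomials through the recurrence in Eq. (\ref{generatingC.4}) and recovers the saddles and the singulant $F_{4,3}$ of the $\kappa=2$ configuration from the phase $\phi(s)=s^5/5+y\,s^2/2+z\,s$, adopting the convention $F_{nm}=f(s_m)-f(s_n)$ with $f=-\mathrm{i}\phi$.

```python
# Illustrative cross-checks (pure Python; Durand-Kerner and the truncated
# constant 0.23012 are choices made here for illustration).

def B_coeffs(lam, u, v, nmax):
    """B^{(lam)}_n(u, v) for n = 0..nmax via the recurrence (generatingC.4),
    started from the triggering values quoted in the text."""
    B = [1.0, -lam * v, -u * lam + v**2 * lam * (lam + 1) / 2.0]
    for n in range(3, nmax + 1):
        B.append((-(n - 3 + 3 * lam) * B[n - 3]
                  - u * (n - 2 + 2 * lam) * B[n - 2]
                  - v * (n - 1 + lam) * B[n - 1]) / n)
    return B[:nmax + 1]

def quartic_roots(y, z, iters=200):
    """All roots of s^4 + y*s + z = 0 (the saddle equation for x = 0),
    via the Durand-Kerner simultaneous iteration."""
    p = lambda s: s**4 + y * s + z
    roots = [(0.4 + 0.9j) ** k for k in range(4)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            d = 1.0 + 0.0j
            for j, q in enumerate(roots):
                if j != i:
                    d *= r - q
            new.append(r - p(r) / d)
        roots = new
    return roots

# kappa = 2 configuration: (x, y, z) = (0, kappa**1.5, kappa**2 * 0.23012...).
kappa = 2.0
y, z = kappa**1.5, kappa**2 * 0.23012
s3, s4 = sorted(r.real for r in quartic_roots(y, z) if abs(r.imag) < 1e-8)

phi = lambda s: s**5 / 5 + y * s**2 / 2 + z * s  # f(s) = -i*phi(s) for x = 0
F43 = -1j * (phi(s3) - phi(s4))                  # convention F_nm = f(s_m) - f(s_n)
```

One finds $s_3\approx-1.2828$, $s_4\approx-0.3296$, the complex pair $0.8062\mp\mathrm{i}\,1.2357$, and $F_{4,3}\approx-\mathrm{i}\,0.6022$, in agreement with the values quoted above (the last digits being limited by the truncation of $0.23012\ldots$).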
This example shows how the divergent asymptotic series generated on Stokes sets do not necessarily display a strictly nonalternating sign pattern as, for example, happened for the Airy function, but rather how the asymptotic behavior of their single terms can display more complex patterns, depending on the whole saddle topology. Figure~\ref{FigSWk2} shows the relative errors obtained through the 1st- and the 2nd-level H-WTs in the case of swallowtail evaluation across the Stokes set defined above, when $\kappa=2$. As for Figs.~\ref{FigErrorAiryF16},~\ref{FigErrorAiryF14To2}, and~\ref{FigInstantonek1ov2}, the errors are plotted versus the parameters $N$ and $M$. In this case their optimal values turn out to be $N=4$ (for the 1st-level) and $(N,M)=(7,6)$ (for the 2nd-level), with corresponding error values of $2\cdot10^{-3}$ and $5\cdot10^{-5}$, respectively, evaluated with respect to the ``exact'' value of $S(0,2.8284\ldots,0.9205\ldots)$, obtained via the method recently proposed in Ref. \cite{borghiJOSAA-08bis}. Finally, an experiment about optimal resummation of swallowtail functions has been carried out by using $\kappa \in [2,4]$ as the parameter representative of the ``asymptoticity'' features. The error values are shown, versus $\kappa$, in Fig.~\ref{FigOptimalErrorsw}, while the optimal settings of $N$ and of $(N,M)$ are plotted, against $\kappa$, in Figs.~\ref{FigOptimalNsw} and~\ref{FigOptimalNMsw}, respectively. } \section{Conclusions} \label{conclusions} {The H-WT was introduced in Ref.~\cite{borghiPRE-08} as a powerful and easily implementable refinement of H aimed at allowing the WT to successfully decode those divergent asymptotic series generated through the application of the steepest descent method to saddle point integrals evaluated across Stokes sets, whose single terms do not display a strictly alternating sign pattern.
} The scheme proposed in \cite{borghiPRE-08} employed the WT only on the asymptotic series generated by the first-stage hyperasymptotic treatment of the corresponding diverging remainder. In the present sequel we reported on the possibility of combining H and the WT to higher orders in H. In particular, the full development of the 2nd-level H-WT has been detailed within the classical framework of H. The results obtained from the application of the 2nd-level H-WT to the different types of saddle-point integrals considered, also in comparison to those obtained via the 1st-level one, showed how the increase in complexity and computational effort required by the new transformation is adequately repaid, in terms of accuracy of the estimate, particularly when the integrals are evaluated for values of their parameters beyond the asymptotic regime, where H turns out to be inapplicable and the 1st-level H-WT unavoidably lacks precision. At the same time, however, it should be noted how, for ``ordinary'' asymptotic evaluations, at least in the cases considered in the present work, the performances of the 1st- and the 2nd-level H-WTs seem to be comparable in terms of the estimate accuracy, against a considerable difference in the computational efforts required by the two transformations. {Although the H-WT has been developed, here and in Ref.~\cite{borghiPRE-08}, with reference to the evaluation of saddle-point integrals of the type in Eq.~(\ref{sd.1}), we believe it could be useful also in dealing with problems of a different nature like, for instance, the hyperasymptotic treatment of a wide class of linear and nonlinear ordinary differential equations, which has been recently considered~\cite{berryJPA-99,daalhuisPRSA-05,daalhuisPRSA-05b}.
The semi-analytical algorithms proposed for the numerical evaluation of the 2nd-level hyperterminants would prove useful in this perspective.} Before concluding, it is worth pointing out, as an important open problem, the need for an \emph{a priori} algorithm for estimating the values of $N$ and $M$ that lead to optimal results. Especially in cases where it is not convenient (or possible) to evaluate the original function, such an algorithm would certainly be of great help to a typical user. Unfortunately, unlike H, for which the optimal settings of the hyperseries truncations are directly extracted from the singulant values\cite{berryPRSA-90,berryPRSA-91}, at present it does not seem possible to provide similar information for the H-WT. The difficulties in giving practical guidelines for the choice of $N$ and $(N,M)$ can also be appreciated from the results presented in Sec. \ref{numericalExperiments} and especially from Figs. \ref{FigOptimalNAiry}, \ref{FigOptimalNMAiry}, \ref{FigOptimalNInstanton}, \ref{FigOptimalNMInstanton}, \ref{FigOptimalNsw}, and \ref{FigOptimalNMsw}, where it seems quite difficult to obtain general rules for their optimal settings. {A possible hint, grasped from a quantitative comparison between the results obtained for the Airy and the instanton functions, seems to be given by the strong connection between the H-WT retrieving performances and the saddle topology associated with the integral under consideration. What we found is that different saddle networks can share a sort of ``topological equivalence'' property, which is related to the set of the saddles adjacent to that under consideration and to the values of the relevant singulants. If two networks turn out to be equivalent at a certain hyperasymptotic level, this would result in the same computational effort, in terms of relative error, as far as the corresponding H-WT retrieved estimates are concerned.
This, in particular, would imply that the study of the H-WT retrieving performances could be, in principle, carried out only on a restricted class of prototype functions. } \acknowledgments I would like to thank all anonymous reviewers for their constructive criticisms and suggestions. I am also grateful to Turi Maria Spinozzi for his invaluable help during the preparation of the present work.
\section{Introduction} Let $f_t$'s ($t\in[0,T)$) be a one-parameter $C^{\infty}$-family of immersions of a manifold $M$ into a Riemannian manifold $N$, where $T$ is a positive constant or $T=\infty$. Define a map $\widetilde f:M\times[0,T)\to N$ by $\widetilde f(x,t)=f_t(x)$ ($(x,t)\in M\times[0,T)$). If, for each $t\in[0,T)$, $\widetilde f_{\ast}((\frac{\partial}{\partial t})_{(x,t)})$ is the mean curvature vector of $f_t:M\hookrightarrow N$, then $f_t$'s ($t\in[0,T)$) is called a mean curvature flow. In particular, if $f_t$'s are embeddings, then we call $M_t:=f_t(M)$'s $(t\in[0,T))$ rather than $f_t$'s $(t\in[0,T))$ a mean curvature flow. Liu-Terng [LT] investigated the mean curvature flow having isoparametric submanifolds (or their focal submanifolds) in a Euclidean space as initial data and obtained the following facts. \vspace{0.5truecm} \noindent {\bf Fact 1([LT]).} {\sl Let $M$ be a compact isoparametric submanifold in a Euclidean space. Then the following statements ${\rm (i)}$ and ${\rm (ii)}$ hold: ${\rm (i)}$ The mean curvature flow $M_t$ having $M$ as initial data collapses to a focal submanifold $F$ of $M$ in finite time. If a focal map of $M$ onto $F$ is spherical, then the mean curvature flow $M_t$ has type I singularity, that is, $\lim\limits_{t\to T-0} {\rm max}_{v\in S^{\perp}M_t}\vert\vert A^t_v\vert\vert^2_{\infty}(T-t)\,<\, \infty$, where $A^t_v$ is the shape operator of $M_t$ for $v$, $\vert\vert A^t_v\vert\vert_{\infty}$ is the sup norm of $A^t_v$ and $S^{\perp}M_t$ is the unit normal bundle of $M_t$. {\rm (ii)} For any focal submanifold $F$ of $M$, there exists a parallel submanifold $M'$ of $M$ such that the mean curvature flow having $M'$ as initial data collapses to $F$ in finite time.} \vspace{0.5truecm} \noindent {\bf Fact 2([LT]).} {\sl Let $M$ be as in Fact 1, $C$ be the Weyl domain of $M$ at $x_0\,(\in M)$ and $\sigma$ be a stratum of dimension greater than zero of $\partial C$.
Then the following statements ${\rm (i)}$ and ${\rm (ii)}$ hold: {\rm (i)} For any focal submanifold $F$ (of $M$) through $\sigma$, the mean curvature flow $F_t$ having $F$ as initial data collapses to a focal submanifold $F'$ (of $M$) through $\partial{\sigma}$ in finite time. If the fibration of $F$ onto $F'$ is spherical, then the mean curvature flow $F_t$ has type I singularity. {\rm (ii)} For any focal submanifold $F$ (of $M$) through $\partial\sigma$, there exists a focal submanifold $F'$ (of $M$) through $\sigma$ such that the mean curvature flow $F'_t$ having $F'$ as initial data collapses to $F$ in finite time.} \vspace{0.5truecm} As a generalized notion of compact isoparametric hypersurfaces in a sphere and a hyperbolic space, and compact isoparametric submanifolds in a Euclidean space, Terng-Thorbergsson [TT] defined the notion of an equifocal submanifold in a symmetric space as a compact submanifold $M$ satisfying the following three conditions: \vspace{0.2truecm} (i) the normal holonomy group of $M$ is trivial, (ii) $M$ has a flat section, that is, for each $x\in M$, $\Sigma_x:=\exp^{\perp}(T^{\perp}_xM)$ is totally geodesic and the induced metric on $\Sigma_x$ is flat, where $T^{\perp}_xM$ is the normal space of $M$ at $x$ and $\exp^{\perp}$ is the normal exponential map of $M$. (iii) for each parallel normal vector field $v$ of $M$, the focal radii of $M$ along the normal geodesic $\gamma_{v_x}$ (with $\gamma'_{v_x}(0)=v_x$) are independent of the choice of $x\in M$, where $\gamma'_{v_x}(0)$ is the velocity vector of $\gamma_{v_x}$ at $0$.
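The type I condition appearing in Fact 1 can be illustrated by the simplest example, the round sphere $S^{n-1}(r_0)$ in a Euclidean space ${\Bbb R}^n$ (a standard computation, included here for the reader's convenience). Under the mean curvature flow the sphere shrinks homothetically, its radius satisfying $\frac{dr}{dt}=-\frac{n-1}{r}$, so that $$r(t)=\sqrt{r_0^2-2(n-1)t},\qquad T=\frac{r_0^2}{2(n-1)},$$ and the flow collapses at $t=T$ to the center, which is the focal submanifold in this case. Since every eigenvalue of the shape operator $A^t_v$ ($v$ a unit normal) is equal to $1/r(t)$, one has $${\rm max}_{v\in S^{\perp}M_t}\vert\vert A^t_v\vert\vert^2_{\infty}\,(T-t)=\frac{T-t}{r(t)^2}=\frac{1}{2(n-1)}\,<\,\infty,$$ so the singularity is of type I.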
\vspace{0.2truecm} \noindent On the other hand, Heintze-Liu-Olmos [HLO] defined the notion of an isoparametric submanifold with flat section in a general Riemannian manifold as a submanifold $M$ satisfying the above condition (i) and the following conditions (ii$'$) and (iii$'$): \vspace{0.2truecm} (ii$'$) for each $x\in M$, there exists a neighborhood $U_x$ of the zero vector (of $T^{\perp}_xM$) in $T^{\perp}_xM$ such that $\Sigma_x:=\exp^{\perp}(U_x)$ is totally geodesic and the induced metric on $\Sigma_x$ is flat, \vspace{0.2truecm} (iii$'$) sufficiently close parallel submanifolds of $M$ are CMC with respect to the radial direction. \vspace{0.2truecm} \noindent In the case where the ambient space is a symmetric space $G/K$ of compact type, they showed that the notion of an isoparametric submanifold with flat section coincides with that of an equifocal submanifold. The proof was performed by investigating the lift of the submanifold to $H^0([0,1],\mathfrak g)$ through the Riemannian submersion $\pi\circ\phi$, where $\pi$ is the natural projection of $G$ onto $G/K$ and $\phi$ is the parallel transport map for $G$ (which is a Riemannian submersion of $H^0([0,1],\mathfrak g)$ onto $G$ ($\mathfrak g:$ the Lie algebra of $G$)). Let $M$ be an equifocal submanifold in $G/K$ and $v$ be a parallel normal vector field of $M$. The end-point map $\eta_v(:M\to G/K)$ for $v$ is defined by $\eta_v(x)=\exp^{\perp}(v_x)$ ($x\in M$). Set $M_v:=\eta_v(M)$. We call $M_v$ a parallel submanifold of $M$ when ${\rm dim}\,M_v={\rm dim}\,M$ and a focal submanifold of $M$ when ${\rm dim}\,M_v<{\rm dim}\,M$. The parallel submanifolds of $M$ are equifocal. Let $f:M\times[0,T)\to G/K$ be the mean curvature flow having $M$ as initial data. Then, it is shown that, for each $t\in[0,T)$, $f_t:M\hookrightarrow G/K$ is a parallel submanifold of $M$ and hence it is equifocal (see Lemma 3.1). Fix $x_0\in M$.
Let $\widetilde C\,(\subset T^{\perp}_{x_0}M)$ be the fundamental domain containing the zero vector (of $T^{\perp}_{x_0}M$) of the Coxeter group (which acts on $T^{\perp}_{x_0}M$) of $M$ at $x_0$ and set $C:=\exp^{\perp}(\widetilde C)$, where we note that $\exp^{\perp}\vert_{\widetilde C}$ is a diffeomorphism onto $C$. Without loss of generality, we may assume that $G$ is simply connected. Set $\widetilde M:=(\pi\circ\phi)^{-1}(M)$, which is an isoparametric submanifold in $H^0([0,1],\mathfrak g)$. Fix $u_0\in(\pi\circ\phi)^{-1}(x_0)$. The normal space $T^{\perp}_{x_0}M$ is identified with the normal space $T^{\perp}_{u_0}\widetilde M$ of $\widetilde M$ at $u_0$ through $(\pi\circ\phi)_{\ast u_0}$. Each parallel submanifold of $M$ intersects $C$ at exactly one point and each focal submanifold of $M$ intersects $\partial C$ at exactly one point, where $\partial C$ is the boundary of $C$. Hence, for the mean curvature flow $f:M\times [0,T)\to G/K$ having $M$ as initial data, each $M_t(:=f_t(M))$ intersects $C$ at exactly one point. Denote by $x(t)$ this intersection point and define $\xi:[0,T)\to\widetilde C\, (\subset T^{\perp}_{x_0}M=T^{\perp}_{u_0}\widetilde M)$ by $\exp^{\perp}(\xi(t))=x(t)$ ($t\in[0,T)$). Set $\widetilde M_t:=(\pi\circ\phi)^{-1}(M_t)$ ($t\in[0,T)$). It is shown that $\widetilde M_t$ ($t\in[0,T)$) is the mean curvature flow having $\widetilde M$ as initial data because the mean curvature vector of $\widetilde M_t$ is the horizontal lift of that of $M_t$ through $\pi\circ\phi$. By investigating $\xi:[0,T)\to T^{\perp}_{u_0}\widetilde M$, we obtain the following fact corresponding to Fact 1. \vspace{0.5truecm} \noindent {\bf Theorem A.} {\sl Let $M$ be an equifocal submanifold in a symmetric space $G/K$ of compact type. Then the following statements ${\rm (i)}$ and ${\rm (ii)}$ hold: ${\rm (i)}$ If $M$ is not minimal, then the mean curvature flow $M_t$ having $M$ as initial data collapses to a focal submanifold $F$ of $M$ in finite time.
Furthermore, if $M$ is irreducible, the codimension of $M$ is greater than one and if the fibration of $M$ onto $F$ is spherical, then the flow $M_t$ has type I singularity. ${\rm (ii)}$ For any focal submanifold $F$ of $M$, there exists a parallel submanifold of $M$ collapsing to $F$ along the mean curvature flow and the set of all parallel submanifolds collapsing to $F$ along the mean curvature flow is a one-parameter $C^{\infty}$-family.} \vspace{0.5truecm} Also, we obtain the following fact corresponding to Fact 2 for the mean curvature flow having a focal submanifold of an equifocal submanifold as initial data. \vspace{0.5truecm} \noindent {\it Remark 1.1.} U. Christ ([Ch]) showed that all irreducible equifocal submanifolds of codimension greater than one in symmetric spaces of compact type are homogeneous and hence they occur as principal orbits of hyperpolar actions. On the other hand, A. Kollross ([Kol]) showed that all hyperpolar actions of cohomogeneity greater than one on irreducible symmetric spaces of compact type are orbit equivalent to Hermann actions on the space. Hence all equifocal submanifolds of codimension greater than one in irreducible symmetric spaces of compact type occur as principal orbits of Hermann actions. Therefore they are classified completely. Hence we can show the statement of Theorem A by using this classification. However, it is very important to prove the statement conceptually, without the use of this classification. In fact, Liu-Terng ([LT]) proved the results in [LT] conceptually, without the use of the classification of isoparametric submanifolds in Euclidean spaces by J. Dadok ([D]). So, in this paper, we prove the statement of Theorem A conceptually. \vspace{0.3truecm} \noindent {\bf Theorem B.} {\sl Let $M$ be as in the statement of Theorem A and $\sigma$ be a stratum of dimension greater than zero of $\partial C$ (which is a stratified space).
Then the following statements ${\rm (i)}$ and ${\rm (ii)}$ hold: ${\rm (i)}$ For any non-minimal focal submanifold $F$ (of $M$) through $\sigma$, the mean curvature flow $F_t$ having $F$ as initial data collapses to a focal submanifold $F'$ (of $M$) through $\partial{\sigma}$ in finite time. Furthermore, if $M$ is irreducible, the codimension of $M$ is greater than one and if the fibration of $F$ onto $F'$ is spherical, then the flow $F_t$ has type I singularity. ${\rm (ii)}$ For any focal submanifold $F$ of $M$ through $\partial\sigma$, there exists a focal submanifold of $M$ through $\sigma$ collapsing to $F$ along the mean curvature flow, and the set of all focal submanifolds of $M$ through $\sigma$ collapsing to $F$ along the mean curvature flow is a one-parameter $C^{\infty}$-family.} \vspace{0.5truecm} Since focal submanifolds of $M$ through $0$-dimensional strata of $\partial C$ are minimal, it follows from these theorems that $M$ collapses to a minimal focal submanifold of $M$ after finitely many collapses along mean curvature flows. \newpage $$\begin{array}{c} \displaystyle{ \begin{array}{llll} \displaystyle{M_t\mathop{\longrightarrow}_{(t\to T_1)}}& \displaystyle{\mathop{F^1}_{\rm non-min.}}&&\\ &\displaystyle{F^1_t\mathop{\longrightarrow}_{(t\to T_2)} \mathop{F^2}_{\rm non-min.}}&&\\ &&\displaystyle{\ddots}&\\ &&&\displaystyle{F^{k-1}_t\mathop{\longrightarrow}_{(t\to T_k)} \mathop{F^k}_{\rm min.}} \end{array} }\\ \displaystyle{ \left( \begin{array}{l} \displaystyle{F^1\,:\,{\rm a}\,\,{\rm focal}\,\,{\rm submanifold}\,\, {\rm of}\,\,M}\\ \displaystyle{F^i\,:\,{\rm a}\,\,{\rm focal}\,\,{\rm submanifold}\,\, {\rm of}\,\,F^{i-1}\,\,(i=2,\cdots,k)} \end{array} \right)} \end{array}$$ \noindent According to the homogeneity theorem for an equifocal submanifold by Christ [Ch], all irreducible equifocal submanifolds of codimension greater than one in symmetric spaces of compact type are homogeneous.
Hence, according to the result by Heintze-Palais-Terng-Thorbergsson [HPTT], they are principal orbits of hyperpolar actions. Furthermore, according to the classification by Kollross [Kol] of hyperpolar actions on irreducible symmetric spaces of compact type, all hyperpolar actions of cohomogeneity greater than one on the symmetric spaces are Hermann actions. Therefore, all equifocal submanifolds of codimension greater than one in irreducible symmetric spaces of compact type are principal orbits of Hermann actions. In the last section, we describe explicitly the mean curvature flows having orbits of Hermann actions of cohomogeneity two on irreducible symmetric spaces of compact type and rank two as initial data. \section{Preliminaries} In this section, we briefly review the quantities associated with an isoparametric submanifold in an (infinite dimensional separable) Hilbert space, which were introduced by Terng [T2]. Let $M$ be an isoparametric submanifold in a Hilbert space $V$. \vspace{0.3truecm} \noindent {\bf 2.1. Principal curvatures, curvature normals and curvature distributions} Let $E_0$ and $E_i$ ($i\in I$) be all the curvature distributions of $M$, where $E_0$ is defined by $(E_0)_x=\displaystyle{\mathop{\cap}_{v\in T^{\perp}_xM} {\rm Ker}\,A_v}\,(x\in M)$. For each $x\in M$, we have $T_xM=\overline{(E_0)_x\oplus \displaystyle{\left(\mathop{\oplus}_{i\in I}(E_i)_x\right)}}$, which is the common eigenspace decomposition of $A_v$'s ($v\in T^{\perp}_xM$). Also, let $\lambda_i$ ($i\in I$) be the principal curvatures of $M$, that is, $\lambda_i$ is the section of the dual bundle $(T^{\perp}M)^{\ast}$ of $T^{\perp}M$ such that $A_v\vert_{(E_i)_x}=(\lambda_i)_x(v){\rm id}$ holds for any $x\in M$ and any $v\in T^{\perp}_xM$, and ${\bf n}_i$ be the curvature normal corresponding to $\lambda_i$, that is, $\lambda_i(\cdot)=\langle{\bf n}_i,\cdot\rangle$. \vspace{0.3truecm} \noindent {\bf 2.2.
The Coxeter group associated with an isoparametric submanifold} Denote by ${\it l}^x_i$ the affine hyperplane $(\lambda_i)_x^{-1}(1)$ in $T^{\perp}_xM$. The focal set of $M$ at $x$ is equal to the union $\displaystyle{\mathop{\cup}_{i\in I}(x+{\it l}^x_i)}$ of the affine hyperplanes $x+{\it l}_i^x$'s ($i\in I$) in the affine subspace $x+T^{\perp}_xM$ of $V$. Each affine hyperplane ${\it l}_i^x$ is called a focal hyperplane of $M$ at $x$. Let $W$ be the group generated by the reflections $R_i^x$'s ($i\in I$) with respect to ${\it l}_i^x$. This group is independent of the choice of $x\in M$ up to group isomorphism. This group is called the Coxeter group associated with $M$. The fundamental domain of the Coxeter group containing the zero vector of $T^{\perp}_xM$ is given by $\{v\in T^{\perp}_xM\,\vert\,\lambda_i(v)<1\,(i\in I)\}$. \vspace{0.3truecm} \noindent {\bf 2.3. Principal curvatures of parallel submanifolds} Let $M_w$ be the parallel submanifold of $M$ for a (non-focal) parallel normal vector field $w$, that is, $M_w=\eta_w(M)$, where $\eta_w$ is the end-point map for $w$. Denote by $A^w$ the shape tensor of $M_w$. This submanifold $M_w$ is also isoparametric and $A^w_v\vert_{\eta_{w\ast}(E_i)_x} =\frac{(\lambda_i)_x(v)}{1-(\lambda_i)_x(w_x)}{\rm id} \,\,(i\in I)$ for any $v\in T^{\perp}_{\eta_w(x)}M_w$, that is, $\frac{\lambda_i}{1-\lambda_i(w)}$'s ($i\in I$) are the principal curvatures of $M_w$ and hence $\frac{{\bf n}_i}{1-\lambda_i(w)}$'s ($i\in I$) are the curvature normals of $M_w$, where we identify $T^{\perp}_{\eta_w(x)}M_w$ with $T^{\perp}_xM$. \vspace{0.5truecm} Let $M$ be a (general) submanifold in a Hilbert space $V$. \vspace{0.3truecm} \noindent {\bf 2.4.
The mean curvature vector of a regularizable submanifold} Assume that $M$ is regularizable in the sense of [HLO], that is, for each normal vector $v$ of $M$, the regularizable trace ${\rm Tr}_r\,A_v$ and ${\rm Tr}\,A_v^2$ exist, where ${\rm Tr}_r\,A_v$ is defined by ${\rm Tr}_r\,A_v:=\sum\limits_{i=1}^{\infty}(\mu^+_i+\mu^-_i)$ ($\mu^-_1\leq\mu^-_2\leq\cdots\leq 0\leq\cdots\leq\mu^+_2\leq\mu^+_1\,:\,$ the spectrum of $A_v$). Then the mean curvature vector $H$ of $M$ is defined by $\langle H,v\rangle={\rm Tr}_r\,A_v\,\,(\forall\,v\in T^{\perp}M)$. \vspace{0.3truecm} \noindent {\bf 2.5. The mean curvature flow for a regularizable submanifold} Let $f_t:M\hookrightarrow V$ ($0\leq t<T$) be a $C^{\infty}$-family of regularizable submanifold immersions into $V$. Denote by $H_t$ the mean curvature vector of $f_t$. Define a map $F:M\times[0,T)\to V$ by $F(x,t):=f_t(x)$ ($(x,t)\in M\times[0,T)$). If $\frac{\partial F}{\partial t}=H_t$ holds, then we call $f_t$ ($0\leq t<T$) the {\it mean curvature flow}. In particular, if $f_t$'s are embeddings, then we set $M_t:=f_t(M)$ and call $M_t$ (rather than $f_t$) the {\it mean curvature flow}. Note that, for a given regularizable submanifold immersion $f:M\hookrightarrow V$, there does not necessarily exist a mean curvature flow having $f$ as initial data, even in short time. Furthermore, we note that, for the restriction $f\vert_U$ of $f$ to a relatively compact domain $U$ of $M$, the short-time existence of the mean curvature flow having $f\vert_U$ as initial data is not assured, owing to the absence of an infinite dimensional vector bundle version of Hamilton's theorem (Theorem 5.1 of [Ha]) on the short-time existence and uniqueness of solutions of a certain kind of evolution equation (which includes the Ricci flow equation and the mean curvature flow equation in the finite dimensional case).
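A toy example may clarify the role of the pairing in the definition of ${\rm Tr}_r$. Take the artificial spectrum $\mu^{\pm}_k=\pm 1/k$ (chosen here purely for illustration, not attached to any particular submanifold): the sum of the $\vert\mu^{\pm}_k\vert$ diverges logarithmically, while both ${\rm Tr}_r\,A_v=\sum_k(\mu^+_k+\mu^-_k)$ and ${\rm Tr}\,A_v^2$ converge, as the following pure-Python sketch shows.

```python
# Toy spectrum mu_k^{+-} = +-1/k (illustrative choice, not the spectrum of a
# specific shape operator).  The regularizable trace sums the eigenvalues in
# the pairs (mu_k^+ + mu_k^-), which here vanish term by term, while
# Tr A_v^2 = sum_k 2/k^2 converges; the unpaired absolute sum diverges.
import math

def regularized_trace(n):
    return sum((1.0 / k) + (-1.0 / k) for k in range(1, n + 1))

def trace_of_square(n):
    return sum(2.0 / k**2 for k in range(1, n + 1))

def absolute_sum(n):
    return sum(2.0 / k for k in range(1, n + 1))
```

Here trace_of_square(n) tends to $\pi^2/3$, while absolute_sum(n) grows like $2\log n$; the pairing is what makes the trace finite.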
\vspace{0.3truecm} Let $M$ be an equifocal submanifold in a symmetric space $G/K$ of compact type and set $\widetilde M:=(\pi\circ\phi)^{-1}(M)$, where $\pi$ is the natural projection of $G$ onto $G/K$ and $\phi:H^0([0,1],\mathfrak g)\to G$ is the parallel transport map for $G$. \vspace{0.3truecm} \noindent {\bf 2.6. The mean curvature vector of the lifted submanifold} Denote by $\widetilde H$ (resp. $H$) the mean curvature vector of $\widetilde M$ (resp. $M$). Then $\widetilde M$ is a regularizable isoparametric submanifold and $\widetilde H$ is equal to the horizontal lift $H^L$ of $H$ (see Lemma 5.2 of [HLO]). \section{Proofs of Theorems A and B} In this section, we prove Theorems A and B. Let $M$ be an equifocal submanifold in a symmetric space $G/K$ of compact type, $\pi:G\to G/K$ be the natural projection and $\phi$ be the parallel transport map for $G$. Set $\widetilde M:=(\pi\circ\phi)^{-1}(M)$. Take $u_0\in\widetilde M$ and set $x_0:=(\pi\circ\phi)(u_0)$. We identify $T^{\perp}_{x_0}M$ with $T^{\perp}_{u_0}\widetilde M$. Let $\widetilde C(\subset T^{\perp}_{u_0}\widetilde M=T^{\perp}_{x_0}M)$ be the fundamental domain of the Coxeter group of $\widetilde M$ at $u_0$ containing the zero vector ${\bf 0}$ of $T^{\perp}_{u_0}\widetilde M(=T^{\perp}_{x_0}M)$ and set $C:=\exp^{\perp}(\widetilde C)$, where $\exp^{\perp}$ is the normal exponential map of $M$. Denote by $H$ (resp. $\widetilde H$) the mean curvature vector of $M$ (resp. $\widetilde M$). The mean curvature vectors $H$ and $\widetilde H$ are parallel normal vector fields of $M$ and $\widetilde M$, respectively. Let $w$ be a parallel normal vector field of $M$ and $w^L$ be the horizontal lift of $w$ to $H^0([0,1],\mathfrak g)$, which is a parallel normal vector field of $\widetilde M$. Denote by $M_w$ (resp. $\widetilde M_{w^L}$) the parallel (or focal) submanifold $\eta_w(M)$ (resp. $\eta_{w^L}(\widetilde M)$) of $M$ (resp. $\widetilde M$), where $\eta_w$ (resp.
$\eta_{w^L}$) is the end-point map for $w$ (resp. $w^L$). Then we have $\widetilde M_{w^L}=(\pi\circ\phi)^{-1}(M_w)$. Denote by $H^w$ (resp. $\widetilde H^{w^L}$) the mean curvature vector of $M_w$ (resp. $\widetilde M_{w^L}$). Define a vector field $X$ on $\widetilde C\,(\subset T^{\perp}_{u_0}\widetilde M=T^{\perp}_{x_0}M)$ by $X_w:=(\widetilde H^{\widetilde w})_{u_0+w}$ ($w\in \widetilde C$), where $\widetilde w$ is the parallel normal vector field of $\widetilde M$ with $\widetilde w_{u_0}=w$. Let $\xi\,:\,(-S,T)\to\widetilde C$ be the maximal integral curve of $X$ with $\xi(0)={\bf 0}$. Note that $S$ and $T$ may be equal to $\infty$. Let $\widetilde{\xi(t)}$ be the parallel normal vector field of $M$ with $\widetilde{\xi(t)}_{x_0}=\xi(t)$. \vspace{0.3truecm} \noindent {\bf Lemma 3.1.} {\sl The family $\widetilde M_{\widetilde{\xi(t)}^L}$ ($0\leq t<T$) is the mean curvature flow having $\widetilde M$ as initial data and $M_{\widetilde{\xi(t)}}$ ($0\leq t<T$) is the mean curvature flow having $M$ as initial data.} \vspace{0.3truecm} \noindent {\it Proof.} Fix $t_0\in[0,T)$. Define a flow $F:\widetilde M\times[0,T)\to H^0([0,1],\mathfrak g)$ by $F(u,t):=\eta_{\widetilde{\xi(t)}^L}(u)$ ($(u,t)\in\widetilde M\times[0,T)$) and $f_t:\widetilde M\to H^0([0,1],\mathfrak g)$ ($0\leq t<T$) by $f_t(u):=F(u,t)$ ($u\in\widetilde M$). Here we note that $f_t(\widetilde M)=\widetilde M_{\widetilde{\xi(t)}^L}$. For simplicity, denote by ${\widetilde H}^{t_0}$ the mean curvature vector of ${\widetilde M}_{\widetilde{\xi(t_0)}^L}$. It is easy to show that $F_{\ast}((\frac{\partial}{\partial t})_{(\cdot,t_0)})$ is a parallel normal vector field of ${\widetilde M}_{\widetilde{\xi(t_0)}^L}$ and that $F_{\ast}((\frac{\partial}{\partial t})_{(u_0,t_0)}) =({\widetilde H}^{t_0})_{f_{t_0}(u_0)}$.
On the other hand, since ${\widetilde M}_{\widetilde{\xi(t_0)}^L}$ is isoparametric, ${\widetilde H}^{t_0}$ is also a parallel normal vector field of ${\widetilde M}_{\widetilde{\xi(t_0)}^L}$. Hence we have $F_{\ast}((\frac{\partial}{\partial t})_{(\cdot,t_0)})=\widetilde H^{t_0}$. Therefore, it follows from the arbitrariness of $t_0$ that $\widetilde M_{\widetilde{\xi(t)}^L}$ ($0\leq t<T$) is the mean curvature flow having $\widetilde M$ as initial data. Define a flow $\overline F:M\times [0,T)\to G/K$ by $\overline F(x,t):= \eta_{\widetilde{\xi(t)}}(x)$ ($(x,t)\in M\times[0,T)$) and $\bar f_t:M\to G/K$ ($0\leq t<T$) by $\bar f_t(x):=\overline F(x,t)$ ($x\in M$). Here we note that $\bar f_t(M)=M_{\widetilde{\xi(t)}}$. For simplicity, denote by $H^t$ the mean curvature vector of $M_{\widetilde{\xi(t)}}$. Fix $t_0\in[0,T)$. Since $\widetilde M_{\widetilde{\xi(t_0)}^L}=(\pi\circ\phi)^{-1} (M_{\widetilde{\xi(t_0)}})$, we have $(H^{t_0})^L=\widetilde H^{t_0}$. On the other hand, we have $\overline F_{\ast}((\frac{\partial}{\partial t})_{(\cdot,t_0)})^L =F_{\ast}((\frac{\partial}{\partial t})_{(\cdot,t_0)})(=\widetilde H^{t_0})$. Hence we have $\overline F_{\ast}((\frac{\partial}{\partial t})_{(\cdot,t_0)}) =H^{t_0}$. Therefore, it follows from the arbitrariness of $t_0$ that $M_{\widetilde{\xi(t)}}$ ($0\leq t<T$) is the mean curvature flow having $M$ as initial data. \hspace{1.5truecm}q.e.d. \vspace{0.3truecm} \noindent {\it Proof of Theorem A.} Clearly it suffices to show the statement of Theorem A in the case where $M$ is full. Hence, in the sequel, we assume that $M$ is full. Denote by $\Lambda$ the set of all principal curvatures of $\widetilde M$. Set $r:={\rm codim}\,M$. It is shown that the set of all focal hyperplanes of $\widetilde M$ is the union of finitely many infinite families of parallel hyperplanes in $T^{\perp}_{u_0}\widetilde M$, each family being arranged at equal intervals.
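For orientation, the following illustration (a hedged special case, with a scalar $\lambda>0$ and a constant $b>0$ standing in for the data introduced below) shows what one such equally spaced family looks like in codimension one:

```latex
% Illustrative special case (codimension one): identifying
% T^\perp_{u_0}\widetilde M with \Bbb R, one parallel family of focal
% hyperplanes degenerates to an arithmetic progression of points
\[
  \{\,s\in{\Bbb R}\ :\ \lambda s=1+bj\,\}
  =\Bigl\{\frac{1+bj}{\lambda}\Bigr\}\qquad(j\in{\Bbb Z}),
\]
% with constant spacing b/\lambda between consecutive focal points; the
% corresponding principal curvatures \lambda/(1+bj) accumulate only at 0.
```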
Let $\{{\it l}_{aj}\,\vert\,j\in{\Bbb Z}\}$ ($1\leq a\leq \bar r$) be these finitely many infinite families of parallel hyperplanes in $T^{\perp}_{u_0}\widetilde M$. Since the ${\it l}_{aj}$'s ($j\in{\Bbb Z}$) are arranged at equal intervals, we can express $\Lambda$ as $\displaystyle{ \Lambda=\mathop{\cup}_{a=1}^{\bar r}\{\frac{\lambda_a}{1+b_aj}\,\vert\, j\in{\Bbb Z}\}}$, where $\lambda_a$'s and $b_a$'s are parallel sections of $(T^{\perp}\widetilde M)^{\ast}$ and positive constants greater than one, respectively, which are defined by $((\lambda_a)_{u_0})^{-1}(1+b_aj)={\it l}_{aj}$. For simplicity, we set $\lambda_{aj}:=\frac{\lambda_a}{1+b_aj}$. Denote by ${\bf n}_{aj}$ and $E_{aj}$ the curvature normal and the curvature distribution corresponding to $\lambda_{aj}$, respectively. The fundamental domain $\widetilde C$ of the Coxeter group of $\widetilde M$ at $u_0$ is given by $$\widetilde C=\{w\in T^{\perp}_{u_0}\widetilde M\,\vert\,\lambda_a(w)<1\,\, (1\leq a\leq\bar r)\}.$$ It is shown that, for each $a$, the $\lambda_{a,2j}$'s ($j\in{\Bbb Z}$) have the same multiplicity, and likewise the $\lambda_{a,2j+1}$'s ($j\in{\Bbb Z}$). Denote by $m_a^e$ and $m_a^o$ the multiplicities of $\lambda_{a,2j}$ and $\lambda_{a,2j+1}$, respectively. Take a parallel normal vector field $w$ of $\widetilde M$ with $w_{u_0}\in\widetilde C$. Denote by ${\widetilde A}^w$ (resp. ${\widetilde H}^w$) the shape tensor (resp. the mean curvature vector) of the parallel submanifold ${\widetilde M}_w$.
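The regularized trace computed below rests on the classical partial-fraction expansion of the cotangent. As a convenience check (a routine computation, not part of the original argument), the even-index part of the series sums as follows:

```latex
% Even-index part: with \lambda_{a,2j} = \lambda_a/(1+2b_a j) and
% \theta := (\pi/b_a)(1-(\lambda_a)_u(w_u)), one has
% 1+2b_a j-(\lambda_a)_u(w_u) = (b_a/\pi)(\theta+2j\pi), so
\[
  \sum_{j\in{\Bbb Z}}\frac{(\lambda_{a,2j})_u(v)}{1-(\lambda_{a,2j})_u(w_u)}
  =\sum_{j\in{\Bbb Z}}\frac{(\lambda_a)_u(v)}{1+2b_a j-(\lambda_a)_u(w_u)}
  =\frac{\pi(\lambda_a)_u(v)}{b_a}\sum_{j\in{\Bbb Z}}\frac{1}{\theta+2j\pi},
\]
% and the expansion \cot(\theta/2)=\sum_{j\in\Bbb Z} 2/(\theta+2j\pi) gives
\[
  \sum_{j\in{\Bbb Z}}\frac{(\lambda_{a,2j})_u(v)}{1-(\lambda_{a,2j})_u(w_u)}
  =\frac{\pi}{2b_a}\,(\lambda_a)_u(v)\,
   \cot\frac{\pi}{2b_a}\bigl(1-(\lambda_a)_u(w_u)\bigr).
\]
% The odd-index part is the same computation with \theta replaced by
% \theta+\pi, and \cot((\theta+\pi)/2)=-\tan(\theta/2) yields the tangent term.
```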
Since ${\widetilde A}^w_v\vert_{\eta_{w\ast}((E_{aj})_u)} =\frac{(\lambda_{aj})_u(v)}{1-(\lambda_{aj})_u(w_u)}\,{\rm id}$ ($v\in T^{\perp}_u\widetilde M$), we have $$\begin{array}{l} \displaystyle{{\rm Tr}_r\widetilde A^w_v=\sum_{a=1}^{\bar r}\left( \sum_{j\in{\Bbb Z}}\frac{m_a^e(\lambda_{a,2j})_u(v)}{1-(\lambda_{a,2j})_u(w_u)}+ \sum_{j\in{\Bbb Z}}\frac{m_a^o(\lambda_{a,2j+1})_u(v)} {1-(\lambda_{a,2j+1})_u(w_u)} \right)}\\ \hspace{1.1truecm}\displaystyle{=\sum_{a=1}^{\bar r}\left( m_a^e\cot\frac{\pi}{2b_a}(1-(\lambda_a)_u(w_u))-m_a^o\tan\frac{\pi}{2b_a} (1-(\lambda_a)_u(w_u))\right)\frac{\pi}{2b_a}(\lambda_a)_u(v),} \end{array}$$ where we use the relation $\cot\,\frac{\theta}{2}=\sum\limits_{j\in{\Bbb Z}} \frac{2}{\theta+2j\pi}$. Therefore we have $$\begin{array}{l} \displaystyle{\widetilde H^w=\sum_{a=1}^{\bar r}\left( m_a^e\cot\frac{\pi}{2b_a}(1-\lambda_a(w)) -m_a^o\tan\frac{\pi}{2b_a}(1-\lambda_a(w))\right)\frac{\pi}{2b_a}{\bf n}_a,} \end{array}\leqno{(3.1)}$$ where ${\bf n}_a$ is the curvature normal corresponding to $\lambda_a$. Denote by $\widetilde{\sigma}_a$ ($1\leq a\leq\bar r$) the maximal dimensional stratum $(\lambda_a)_{u_0}^{-1}(1)$ of $\partial\widetilde C$. Fix $a_0\in\{1,\cdots,\bar r\}$. Take $w_0\in\widetilde{\sigma}_{a_0}$ and $w'_0\in\widetilde C$ near $w_0$ such that $w_0-w_0'$ is normal to $\widetilde{\sigma}_{a_0}$. Set $w_0^{\varepsilon} :=\varepsilon w'_0+(1-\varepsilon)w_0$ for $\varepsilon\in(0,1)$. Then we have $\lim\limits_{\varepsilon\to+0}(\lambda_{a_0})_{u_0} (w^{\varepsilon}_0)=1$ and $\displaystyle{\mathop{\sup}_{0<\varepsilon<1} (\lambda_a)_{u_0}(w^{\varepsilon}_0)<1}$ for each $a\in\{1,\cdots,\bar r\}\setminus\{a_0\}$. Hence we have $ \lim_{\varepsilon\to+0}\cot\frac{\pi}{2b_{a_0}} (1-(\lambda_{a_0})_{u_0}(w^{\varepsilon}_0))=\infty$ and $\mathop{\sup}_{0<\varepsilon<1}\cot\frac{\pi}{2b_a} (1-(\lambda_a)_{u_0}(w^{\varepsilon}_0))\,<\,\infty\,\, (a\in\{1,\cdots,\bar r\}\setminus\{a_0\})$.
Therefore, we see that $\lim\limits_{\varepsilon\to+0} \frac{X_{w_0^{\varepsilon}}}{\vert\vert X_{w_0^{\varepsilon}}\vert\vert}$ is the outward unit normal vector of $\widetilde{\sigma}_{a_0}$. Also we have $\lim\limits_{\varepsilon\to +0}\vert\vert X_{w_0^{\varepsilon}}\vert\vert =\infty$. From these facts, $X$ is as in Fig. 1 on a sufficiently small collar neighborhood of $\widetilde{\sigma}_{a_0}$.

\vspace{0.3truecm}

\centerline{{\bf Fig. 1.} (The vector field $X$ on a collar neighborhood of the stratum $\widetilde{\sigma}_{a_0}$ of $\partial\widetilde C$: $X$ points outward toward $\widetilde{\sigma}_{a_0}$ and its norm blows up there.)}

\vspace{0.5truecm}

\noindent Define a function $\rho$ over $\widetilde C$ by $$\begin{array}{l} \displaystyle{\rho(w):=-\sum_{a=1}^{\bar r}\left(m_a^e \log\sin\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))\right.}\\ \hspace{2.4truecm}\displaystyle{ \left.+m_a^o\log\cos\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))\right) \qquad(w\in\widetilde C).} \end{array}$$ Let $(x_1,\cdots,x_r)$ be the Euclidean coordinate of $T^{\perp}_{u_0}\widetilde M$. For simplicity, set $\partial_i:=\frac{\partial}{\partial x_i}$ ($i=1,\cdots,r$). Then it follows from the definition of $X$ and $(3.1)$ that $(\partial_i\rho)(w)=\langle X_w,\partial_i\rangle$ ($w\in\widetilde C,\,\,i=1,\cdots,r$), that is, ${\rm grad}\,\rho=X$. Also we have $$\begin{array}{l} \displaystyle{(\partial_i\partial_j\rho)(w)= \sum_{a=1}^{\bar r}\left(\frac{m_a^e} {\sin^2\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))} +\frac{m_a^o}{\cos^2\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))}\right)}\\ \hspace{3.4truecm}\displaystyle{\times \frac{\pi^2}{4b_a^2}(\lambda_a)_{u_0}(\partial_i)(\lambda_a)_{u_0} (\partial_j).} \end{array}$$ It follows from this relation that $\rho$ is downward convex. Also it is shown that $\rho(w)\to\infty$ as $w\to\partial\widetilde C$. Hence we see that $\rho$ has a unique minimum point. Denote by $w_0$ this minimum point. It is clear that $X_{w_0}=0$. From these facts and the fact that $X$ is as in Fig. 1 on a sufficiently small collar neighborhood of each maximal dimensional stratum $\widetilde{\sigma}$ of $\partial\widetilde C$, $\rho$ and $X$ are as in Fig. 2.
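The identity ${\rm grad}\,\rho=X$ can also be verified directly; the following routine differentiation (recorded for the reader's convenience, not part of the original argument) checks one summand:

```latex
% With c_a := \pi/(2b_a) and u := c_a(1-(\lambda_a)_{u_0}(w)), so that
% \partial_i u = -c_a(\lambda_a)_{u_0}(\partial_i), one computes
\[
  \partial_i\Bigl(-m_a^e\log\sin c_a\bigl(1-(\lambda_a)_{u_0}(w)\bigr)\Bigr)
  = m_a^e\,c_a\cot c_a\bigl(1-(\lambda_a)_{u_0}(w)\bigr)\,
    (\lambda_a)_{u_0}(\partial_i),
\]
\[
  \partial_i\Bigl(-m_a^o\log\cos c_a\bigl(1-(\lambda_a)_{u_0}(w)\bigr)\Bigr)
  = -m_a^o\,c_a\tan c_a\bigl(1-(\lambda_a)_{u_0}(w)\bigr)\,
    (\lambda_a)_{u_0}(\partial_i).
\]
% Summing over a and using (\lambda_a)_{u_0}(\partial_i)
% = <(n_a)_{u_0},\partial_i>, the right-hand sides add up to
% <X_w,\partial_i> by (3.1), which gives grad rho = X.
```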
\vspace{0.3truecm}

\centerline{{\bf Fig. 2.} (The graph of the convex function $\rho$ over $\widetilde C$, attaining its minimum at $w_0$, and the gradient field $X={\rm grad}\,\rho$ on $\widetilde C$, which vanishes only at $w_0$ and points outward near $\partial\widetilde C$.)}

\vspace{0.5truecm}

\noindent Let $W$ be the Coxeter group of $\widetilde M$ at $u_0$ and $W_0$ the isotropy group of $W$ at ${\bf 0}$. Let $\Gamma$ be the lattice of translations of $W$ and set $\Gamma^{\ast}:=\{\omega\in(T^{\perp}_{u_0}\widetilde M)^{\ast} \,\vert\,\omega(\Gamma)\subset{\Bbb Z}\}$. Also, let $C^{\triangle}(T^{\perp}_{u_0}\widetilde M)^W$ be the space of all finite sums of $W$-invariant eigenfunctions of $\triangle$, where $\triangle$ is the Laplace operator of $T^{\perp}_{u_0}\widetilde M$.
Then we can show $$\begin{array}{l} \displaystyle{C^{\triangle}(T^{\perp}_{u_0}\widetilde M)^W\otimes{\Bbb C}= \{f=\sum_{\omega\in\Gamma^{\ast}}a_{\omega}e^{2\pi\sqrt{-1}\omega}\,\vert\, a_{\omega}\in{\Bbb C}\,\,(a_{\omega}=0\,\,{\rm except}\,\,{\rm for}\,\, {\rm finitely}\,\,{\rm many}\,\,\omega)}\\ \hspace{8.3truecm}\displaystyle{\&\,\,f\,:\,W{\rm -invariant}\}.} \end{array}$$ Hence, according to Chapter VI, $\S$3, Theorem 1 of [B], we have $C^{\triangle}(T^{\perp}_{u_0}\widetilde M)^W\otimes{\Bbb C} ={\Bbb C}[\phi_1,\cdots,\phi_r]$ (polynomial ring) for some $\phi_1,\cdots,\phi_r(\in C^{\triangle}(T^{\perp}_{u_0}\widetilde M)^W)$. Here we note that $r={\rm dim}\,T^{\perp}_{u_0}\widetilde M$ because $G/K$ is irreducible and hence $M$ is full and irreducible. By reordering $\phi_1,\cdots,\phi_r$ suitably, it is shown that $\{{\rm Re}\,\phi_1,\cdots,{\rm Re}\,\phi_{r_1+r_2},\,{\rm Im}\,\phi_{r_1+1}, \cdots,{\rm Im}\,\phi_{r_1+r_2}\}$ is a basis of $C^{\triangle}(T^{\perp}_{u_0}(\widetilde M))^W$ (see the proof of Theorem 7.6 of [HLO]), where ${\rm Re}(\cdot)$ (resp. ${\rm Im}(\cdot)$) is the real part (resp. the imaginary part) of $(\cdot)$, and $r_1$ and $r_2$ are positive integers with $r_1+2r_2=r$. Denote by $\{\psi_1,\cdots,\psi_r\}$ this basis for simplicity. Set $\Psi:=(\psi_1,\cdots,\psi_r)$, which is a $C^{\infty}$-map from $T^{\perp}_{u_0}\widetilde M$ onto ${\Bbb R}^r$. It is shown that $\Psi$ is injective and that $\Psi\vert_{\overline{\widetilde C}}$ is a homeomorphism of $\overline{\widetilde C}$ onto $\Psi(\overline{\widetilde C})$, where $\overline{\widetilde C}$ is the closure of $\widetilde C$. Set $\xi_w(t):=\psi_t(w)$ and $\bar{\xi}_w(t):=\Psi(\psi_t(w))$, where $\psi_t$ denotes the local flow of $X$ and $w\in\widetilde C$. Also, we set $\xi^i_w(t):=x_i(\xi_w(t))$ and $\bar{\xi}^i_w(t):=y_i(\bar{\xi}_w(t))$, where $(y_1,\cdots,y_r)$ is the natural coordinate of ${\Bbb R}^r$.
Then we have $$\begin{array}{l} \hspace{0.6truecm} \displaystyle{(\bar{\xi}^i_w)'(t)=\langle{\rm grad}(y_i\circ\Psi)_{\xi_w(t)}, X_{\xi_w(t)}\rangle}\\ \displaystyle{=\sum_{a=1}^{\bar r}\left(\left( m_a^e\cot(\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(\xi_w(t)))) -m_a^o\tan(\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(\xi_w(t))))\right)\right.}\\ \hspace{1.2truecm} \displaystyle{\left.\times\frac{\pi}{2b_a}(\lambda_a)_{u_0} ({\rm grad}(y_i\circ\Psi)_{\xi_w(t)})\right).} \end{array}$$ Let $f_i$ be the $W$-invariant $C^{\infty}$-function over $T^{\perp}_{u_0}\widetilde M$ such that $$\begin{array}{l} \displaystyle{f_i(v):= \sum_{a=1}^{\bar r}\left(\left( m_a^e\cot(\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(v))) -m_a^o\tan(\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(v)))\right)\right.}\\ \hspace{2truecm} \displaystyle{\left.\times\frac{\pi}{2b_a}(\lambda_a)_{u_0} ({\rm grad}(y_i\circ\Psi)_v)\right)} \end{array}$$ for all $v\in W\cdot\widetilde C$. It is easy to show that such a $W$-invariant $C^{\infty}$-function exists uniquely. Hence, we can describe $f_i$ as $f_i=Y_i\circ\Psi$ in terms of some $C^{\infty}$-function $Y_i$ over ${\Bbb R}^r$. Set $Y:=(Y_1,\cdots,Y_r)$, which is regarded as a $C^{\infty}$-vector field on ${\Bbb R}^r$. Then we have $Y_{\Psi(w)}=\Psi_{\ast}(X_w)$ ($w\in\widetilde C$), that is, $Y\vert_{\Psi(\widetilde C)}=\Psi_{\ast}(X)$. Also we can show that $Y\vert_{\partial\Psi(\widetilde C)}$ has no zero point. From these facts and the fact that $X$ is as in Fig. 2, we see that the flow of $X$ starting at any point of $\widetilde C$ other than $w_0$ ($w_0:$ the unique zero point of $X$) converges to a point of $\partial\widetilde C$ in finite time, and that, furthermore, for each point of $\partial\widetilde C$, there exists a unique flow of $X$ converging to that point. Since $M$ is not minimal by the assumption, we have $X_{\bf 0}\not =0$, that is, ${\bf 0}\not=w_0$.
Hence we have $T<\infty$ and $\lim\limits_{t\to T-0}\xi(t)\in\partial\widetilde C$, where $\xi(t)=\psi_t({\bf 0})$ and $T$ is the supremum of the domain of $\xi$. Set $w_1:=\lim\limits_{t\to T-0}\xi(t)$. Therefore, since $M_t=M_{\widetilde{\xi(t)}}$, the mean curvature flow $M_t$ collapses to the focal submanifold $F:=M_{\widetilde w_1}$ in time $T$, where $\widetilde w_1$ is the parallel normal vector field of $M$ with $(\widetilde w_1)_{x_0}=w_1$. Also, the $M_t$'s ($-S<t<T$) are all of the parallel submanifolds of $M$ collapsing to $F$ along the mean curvature flow, where $-S$ is the infimum of the domain of $\xi$. Thus the first half of statement (i) and statement (ii) are shown. Next we shall show the second half of statement (i). Assume that $M$ is irreducible, that the codimension of $M$ is greater than one and that the fibration of $M$ onto $F$ is spherical. Set $\widetilde M:=(\pi\circ\phi)^{-1}(M)$ and $\widetilde F :=(\pi\circ\phi)^{-1}(F)$. Since the fibration of $M$ onto $F$ is spherical, $\widetilde F$ passes through a highest dimensional stratum $\widetilde{\sigma}$ of $\partial\widetilde C$. Let $a_0$ be the element of $\{1,\cdots,\bar r\}$ with $\widetilde{\sigma}\subset(\lambda_{a_0})_{u_0}^{-1} (1)$. Set $\widetilde M_t:=(\pi\circ\phi)^{-1}(M_t)$ ($t\in[0,T)$), which is the mean curvature flow having $\widetilde M$ as initial data. Denote by $A^t$ (resp. $\widetilde A^t$) the shape tensor of $M_t$ (resp. $\widetilde M_t$). Then, since $\widetilde M_t$ is the parallel submanifold of $\widetilde M$ for ${\widetilde{\xi(t)}}^L$, we have $${\rm Spec}\,\widetilde A^t_v\setminus\{0\}=\{\frac{(\lambda_{aj})_{u_0}(v)} {1-(\lambda_{aj})_{u_0}(\xi(t))}\,\vert\,a=1,\cdots,\bar r, \,\,j\in{\Bbb Z}\}$$ for each $v\in T^{\perp}_{u_0+\xi(t)}\widetilde M_t=T^{\perp}_{u_0} \widetilde M$.
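To see where the rate $(T-t)^{-1}$ comes from, the following heuristic one-variable computation may help (a sketch only, with $s(t):=1-(\lambda_{a_0})_{u_0}(\xi(t))$, $c:=\pi/(2b_{a_0})$, $m:=m_{a_0}^e$ and $\Vert{\bf n}\Vert:=\Vert({\bf n}_{a_0})_{u_0}\Vert$):

```latex
% Near the wall (\lambda_{a_0})_{u_0}^{-1}(1) the a_0-term dominates (3.1),
% and (\lambda_{a_0})_{u_0}({\bf n}_{a_0}) = ||n||^2, so as s -> +0
\[
  s'(t)=-(\lambda_{a_0})_{u_0}(\xi'(t))
  \;\approx\;-m\,c\cot(c\,s)\,\Vert{\bf n}\Vert^2
  \;\approx\;-\frac{m\Vert{\bf n}\Vert^2}{s},
\]
% hence (s^2)'(t) \approx -2m||n||^2, and since s(t) -> 0 as t -> T-0,
\[
  s(t)^2\;\approx\;2m\Vert{\bf n}\Vert^2\,(T-t)\qquad(t\to T-0),
\]
% which is exactly the rate that makes the limit computed below finite.
```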
Since $\lim\limits_{t\to T-0}\xi(t)\in(\lambda_{a_0})_{u_0}^{-1}(1)$ and $\lim\limits_{t\to T-0}\xi(t)\notin(\lambda_a)_{u_0}^{-1}(1)$ ($a\in\{1,\cdots,\bar r\}\setminus\{a_0\}$), we have $\lim\limits_{t\to T-0}(\lambda_{a_0})_{u_0}(\xi(t))=1$ and $\lim\limits_{t\to T-0}(\lambda_a)_{u_0}(\xi(t))<1$ ($a\not=a_0$). From these facts, $\xi'(t)=(\widetilde H^{\widetilde{\xi(t)}^L})_{u_0+\xi(t)}$ and $(3.1)$, we have $$\begin{array}{l} \hspace{0.6truecm} \displaystyle{\lim_{t\to T-0}\vert\vert\widetilde A^t_v\vert\vert_{\infty}^2 (T-t)}\\ \displaystyle{=\lim_{t\to T-0}\frac{(\lambda_{a_0})_{u_0}(v)^2} {(1-(\lambda_{a_0})_{u_0}(\xi(t)))^2}(T-t)}\\ \displaystyle{=\frac12(\lambda_{a_0})_{u_0}(v)^2 \lim_{t\to T-0}\frac{1}{(1-(\lambda_{a_0})_{u_0}(\xi(t)))(\lambda_{a_0})_{u_0} (\xi'(t))}}\\ \displaystyle{=\frac{(\lambda_{a_0})_{u_0}(v)^2} {2m_{a_0}^e\vert\vert({\bf n}_{a_0})_{u_0}\vert\vert^2}.} \end{array}\leqno{(3.2)}$$ Hence we have $$\lim_{t\to T-0}\mathop{\max}_{v\in S^{\perp}_{u_0+\xi(t)}\widetilde M_t} \vert\vert\widetilde A^t_v\vert\vert_{\infty}^2(T-t) =\frac{1}{2m_{a_0}^e}.$$ Thus the mean curvature flow $\widetilde M_t$ has a type I singularity. Set $\bar v_t:=(\pi\circ\phi)_{\ast u_0+\xi(t)}(v)$ and let $\{\lambda^t_1,\cdots,\lambda^t_n\}\,\,(\lambda^t_1\leq\cdots\leq\lambda^t_n)$ (resp. $\{\mu^t_1,\cdots,\mu^t_n\}\,\,(0\leq\mu^t_1\leq\cdots\leq\mu^t_n)$) be all the eigenvalues of $A^t_{\bar v_t}$ (resp. $R(\cdot,\bar v_t)\bar v_t$), where $n:={\rm dim}\,M$. Since $M$ is an irreducible equifocal submanifold of codimension greater than one by the assumption, it is homogeneous by the homogeneity theorem of Christ (see [Ch]) and hence it is a principal orbit of a Hermann action by the result of Heintze-Palais-Terng-Thorbergsson (see [HPTT]) and the classification of hyperpolar actions by Kollross (see [Kol]). Furthermore, $M$ and its parallel submanifolds are curvature-adapted by the result of Goertsches-Thorbergsson (see [GT]).
Therefore, $A^t_{\bar v_t}$ and $R(\cdot,\bar v_t)\bar v_t$ commute and hence we have $$\sum_{i=1}^n\sum_{j=1}^n\left( {\rm Ker}(A^t_{\bar v_t}-\lambda^t_i\,{\rm id})\cap{\rm Ker}(R(\cdot,\bar v_t) \bar v_t-\mu_j^t\,{\rm id})\right)=T_{(\pi\circ\phi)(u_0+\xi(t))}M_t.$$ Set $\bar E_{ij}^t:={\rm Ker}(A^t_{\bar v_t}-\lambda_i^t\,{\rm id})\cap {\rm Ker}(R(\cdot,\bar v_t)\bar v_t-\mu_j^t\,{\rm id})$ ($i,j\in\{1,\cdots,n\}$) and $I_t:=\{(i,j)\in\{1,\cdots,n\}^2\,\vert\,\bar E_{ij}^t\not=\{0\}\}$. For each $(i,j)\in I_t$, we have $${\rm Spec}(\widetilde A^t_v\vert_{(\pi\circ\phi)_{\ast}^{-1}(\bar E_{ij}^t)}) = \left\{ \begin{array}{ll} \displaystyle{\left\{\frac{\sqrt{\mu_j^t}}{\arctan\frac{\sqrt{\mu_j^t}} {\lambda_i^t}+k\pi}\,\vert\,k\in{\Bbb Z}\right\}} & \displaystyle{(\mu_j^t\not=0)}\\ \displaystyle{\{\lambda_i^t\}} & \displaystyle{(\mu_j^t=0)} \end{array}\right.$$ in terms of Proposition 3.2 of [Koi1] and hence $$\vert\vert\widetilde A^t_v\vert\vert_{\infty}=\max\left(\left\{ \frac{\sqrt{\mu_j^t}}{\arctan\frac{\sqrt{\mu_j^t}}{\vert\lambda_i^t\vert}}\, \vert\,(i,j)\in I_t\,\,{\rm s.t.}\,\,\mu_j^t\not=0\right\}\cup \{\vert\lambda_i^t\vert\,\vert\,(i,j)\in I_t\,\,{\rm s.t.}\,\,\mu_j^t=0\} \right).$$ It is clear that $\displaystyle{\mathop{{\rm sup}}_{0\leq t<T}\mu_n^t\,<\,\infty}$. If $\lim\limits_{t\to T-0}\vert\lambda_i^t\vert=\infty$, then we have $\displaystyle{\lim_{t\to T-0}\left(\frac{\sqrt{\mu_j^t}} {\arctan\frac{\sqrt{\mu_j^t}}{\lambda_i^t}}\right)\Big/\lambda_i^t=1}$.
Hence we have $$\begin{array}{l} \hspace{0.6truecm}\displaystyle{\lim\limits_{t\to T-0} \vert\vert\widetilde A^t_v\vert\vert^2_{\infty}(T-t)}\\ \displaystyle{=\max\left\{\lim\limits_{t\to T-0} (\lambda_i^t)^2(T-t)\,\vert\,i=1,\cdots,n\right\}}\\ \displaystyle{=\lim\limits_{t\to T-0}\max\{(\lambda_i^t)^2(T-t)\,\vert\, i=1,\cdots,n\}}\\ \displaystyle{=\lim\limits_{t\to T-0} \vert\vert A^t_{\bar v_t}\vert\vert^2_{\infty}(T-t),} \end{array}$$ which together with $(3.2)$ implies $$\lim\limits_{t\to T-0}\vert\vert A^t_{\bar v_t}\vert\vert^2_{\infty}(T-t)= \frac{(\lambda_{a_0})_{u_0}(v)^2}{2m_{a_0}^e\vert\vert ({\bf n}_{a_0})_{u_0}\vert\vert^2}.$$ Therefore we obtain $$\lim\limits_{t\to T-0}\mathop{\max}_{v\in S^{\perp}_{\exp^{\perp}(\xi(t))} M_t}\vert\vert A^t_v\vert\vert^2_{\infty}(T-t)=\frac{1}{2m_{a_0}^e}<\infty.$$ Thus the mean curvature flow $M_t$ has a type I singularity. \hspace{3.4truecm}q.e.d. \vspace{0.5truecm} Next we prove Theorem B. \vspace{0.5truecm} \noindent {\it Proof of Theorem B.} For simplicity, set $I:=\{1,\cdots,\bar r\}$. Let $\widetilde{\sigma}$ be a stratum of $\partial\widetilde C$ of dimension greater than zero and set $I_{\widetilde{\sigma}}:=\{a\in I\,\vert\,\widetilde{\sigma}\subset (\lambda_a)_{u_0}^{-1}(1)\}$. Let $w_1\in\widetilde{\sigma}$. Denote by $F$ (resp. $\widetilde F$) the focal submanifold of $M$ (resp. $\widetilde M$) for $\widetilde w_1$ (resp. ${\widetilde w_1}^L$). Assume that $F$ is not minimal.
Then, since $\displaystyle{{\rm Ker}\,\eta_{\widetilde w_1\ast} =\mathop{\oplus}_{a\in I_{\widetilde{\sigma}}}(E_{a0})_{u_0}}$, we have $$T_{u_0+w_1}\widetilde F =\left(\mathop{\oplus}_{a\in I\setminus I_{\widetilde{\sigma}}} \mathop{\oplus}_{j\in{\Bbb Z}}\eta_{\widetilde w_1\ast}((E_{aj})_{u_0})\right) \oplus\left(\mathop{\oplus}_{a\in I_{\widetilde{\sigma}}} \mathop{\oplus}_{j\in{\Bbb Z}\setminus\{0\}}\eta_{\widetilde w_1\ast} ((E_{aj})_{u_0}) \right).$$ Also we have $$T^{\perp}_{u_0+w_1}\widetilde F =\left(\mathop{\oplus}_{a\in I_{\widetilde{\sigma}}} (E_{a0})_{u_0}\right)\oplus T^{\perp}_{u_0}\widetilde M,$$ where we identify $T_{u_0+w_1}H^0([0,1],\mathfrak g)$ with $T_{u_0}H^0([0,1],\mathfrak g)$. For $v\in T^{\perp}_{u_0}\widetilde M\,(\subset T^{\perp}_{u_0+w_1}\widetilde F)$, we have $$\widetilde A^{\widetilde w_1^L}_v\vert_{\eta_{\widetilde w_1\ast} ((E_{aj})_{u_0})}=\frac{(\lambda_{aj})_{u_0}(v)}{1-(\lambda_{aj})_{u_0}(w_1)} {\rm id}\,\,\,\, ((a,j)\in((I\setminus I_{\widetilde{\sigma}})\times{\Bbb Z})\cup (I_{\widetilde{\sigma}}\times({\Bbb Z}\setminus\{0\}))).$$ Hence we have $$\begin{array}{l} \hspace{0.6truecm} \displaystyle{{\rm Tr}_r{\widetilde A}^{{\widetilde w}_1^L}_v}\\ \displaystyle{=\sum_{a\in I\setminus I_{\widetilde{\sigma}}}\left( \sum_{j\in{\Bbb Z}}\frac{m_a^e(\lambda_{a,2j})_{u_0}(v)} {1-(\lambda_{a,2j})_{u_0}(w_1)}+\sum_{j\in{\Bbb Z}} \frac{m_a^o(\lambda_{a,2j+1})_{u_0}(v)} {1-(\lambda_{a,2j+1})_{u_0}(w_1)}\right)}\\ \hspace{0.6truecm}\displaystyle{+\sum_{a\in I_{\widetilde{\sigma}}} \left(\sum_{j\in{\Bbb Z}\setminus\{0\}}\frac{m_a^e(\lambda_{a,2j})_{u_0}(v)} {1-(\lambda_{a,2j})_{u_0}(w_1)}+\sum_{j\in{\Bbb Z}} \frac{m_a^o(\lambda_{a,2j+1})_{u_0}(v)} {1-(\lambda_{a,2j+1})_{u_0}(w_1)}\right)}\\ \displaystyle{=\sum_{a\in I\setminus I_{\widetilde{\sigma}}}\left( m_a^e\cot\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1)) -m_a^o\tan\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1))\right)\frac{\pi}{2b_a} (\lambda_a)_{u_0}(v),} \end{array}$$ that is, the
$T_{u_0}^{\perp}\widetilde M$-component $((\widetilde H^{{\widetilde w}_1^L})_{u_0+w_1})_{T^{\perp}_{u_0} \widetilde M}$ of $(\widetilde H^{{\widetilde w}_1^L})_{u_0+w_1}$ is equal to $$\sum_{a\in I\setminus I_{\widetilde{\sigma}}}\left( m_a^e\cot\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1)) -m_a^o\tan\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1))\right)\frac{\pi}{2b_a} ({\bf n}_a)_{u_0}.$$ Denote by $\Phi_{u_0+w_1}$ the normal holonomy group of $\widetilde F$ at $u_0+w_1$ and $L_{u_0}$ be the focal leaf through $u_0$ for $\widetilde w_1$. Since $L_{u_0}=\Phi_{u_0+w_1}\cdot u_0$, there exists $\mu\in\Phi_{u_0+w_1}$ such that $\mu(T^{\perp}_{u_0}\widetilde M)=T^{\perp}_{u_1}\widetilde M$ for any point $u_1$ of $L_{u_0}$. On the other hand, since $\widetilde F$ has constant principal curvatures in the sense of [HOT], $({\widetilde H}^{{\widetilde w}_1^L})_{u_0+w_1}$ is $\Phi_{u_0+w_1}$-invariant. Hence we have $\displaystyle{(\widetilde H^{{\widetilde w}_1^L})_{u_0+w_1} \in\mathop{\cap}_{u\in L_{u_0}}T^{\perp}_u\widetilde M}$, where we note that $\displaystyle{\mathop{\cap}_{u\in L_{u_0}}T^{\perp}_u\widetilde M}$ contains $\widetilde{\sigma}$ as an open subset. Therefore, we obtain $$\begin{array}{l} \displaystyle{ (\widetilde H^{{\widetilde w}_1^L})_{u_0+w_1}= \sum_{a\in I\setminus I_{\widetilde{\sigma}}}\left( m_a^e\cot\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1))\right.}\\ \hspace{2.4truecm}\displaystyle{\left. -m_a^o\tan\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w_1))\right)\frac{\pi}{2b_a} ({\bf n}_a)_{u_0}\,\,\,\,(\in T\widetilde{\sigma}).} \end{array}\leqno{(3.3)}$$ Define a tangent vector field $X^{\widetilde{\sigma}}$ on $\widetilde{\sigma}$ by $X^{\widetilde{\sigma}}_w :=(\widetilde H^{{\widetilde w}^L})_{u_0+w}$ ($w\in\widetilde{\sigma}$). Let $\xi:(-S,T)\to\widetilde{\sigma}$ be the maximal integral curve of $X^{\widetilde{\sigma}}$ with $\xi(0)=w_1$. 
Define a function $\rho_{\widetilde{\sigma}}$ over $\widetilde{\sigma}$ by $$\begin{array}{l} \displaystyle{\rho_{\widetilde{\sigma}}(w) :=-\sum_{a\in I\setminus I_{\widetilde{\sigma}}} \left(m_a^e\log\sin\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))\right.}\\ \hspace{2.4truecm}\displaystyle{ \left.+m_a^o\log\cos\frac{\pi}{2b_a}(1-(\lambda_a)_{u_0}(w))\right) \qquad(w\in\widetilde{\sigma}).} \end{array}$$ It follows from the definition of $X^{\widetilde{\sigma}}$ and $(3.3)$ that ${\rm grad}\,\rho_{\widetilde{\sigma}} =X^{\widetilde{\sigma}}$. Also we can show that $\rho_{\widetilde{\sigma}}$ is downward convex and that $\rho_{\widetilde{\sigma}}(w)\to\infty$ as $w\to\partial\widetilde{\sigma}$. Hence $\rho_{\widetilde{\sigma}}$ has a unique minimum point, which we denote by $w_0$. It is clear that $X^{\widetilde{\sigma}}_{w_0}=0$. Also, by imitating the proof of Theorem A, we can show that the flow of $X^{\widetilde{\sigma}}$ starting from any point of $\widetilde{\sigma}$ other than $w_0$ converges to a point of $\partial\widetilde{\sigma}$ in finite time, and that, furthermore, for each point of $\partial\widetilde{\sigma}$, there exists a unique flow of $X^{\widetilde{\sigma}}$ converging to that point. Since $F$ is not minimal by assumption, we have $X^{\widetilde{\sigma}}_{w_1}\not=0$, that is, $w_1\not=w_0$. Hence we have $T<\infty$ and $\lim\limits_{t\to T-0}\xi(t)\in\partial\widetilde{\sigma}$. Set $w_2:=\lim\limits_{t\to T-0}\xi(t)$. Therefore, since $F_t=M_{\widetilde{\xi(t)}}$, the mean curvature flow $F_t$ collapses to the lower dimensional focal submanifold $F':=M_{\widetilde w_2}$ in time $T$, where $\widetilde w_2$ is the parallel normal vector field of $M$ with $(\widetilde w_2)_{x_0}=w_2$. Also, the $F_t$'s ($-S<t<T$) are all of the focal submanifolds of $M$ through $\widetilde{\sigma}$ that collapse to $F'$ along the mean curvature flow. Thus the first half of statement (i) and statement (ii) are shown.
Also, by imitating the proof of Theorem A, we can show the second half of statement (i). \hspace{10.5truecm}q.e.d. \section{Hermann actions of cohomogeneity two} According to the homogeneity theorem for an equifocal submanifold in a symmetric space of compact type by Christ ([Ch]), equifocal submanifolds of codimension greater than one in an irreducible compact type symmetric space are homogeneous. Hence, according to the result by Heintze-Palais-Terng-Thorbergsson ([HPTT]), they occur as principal orbits of hyperpolar actions on the symmetric space. Furthermore, by using the classification of hyperpolar actions on irreducible compact type symmetric spaces by Kollross ([Kol]), we see that they occur as principal orbits of Hermann actions on the symmetric spaces. To analyze the mean curvature flows having parallel submanifolds of an equifocal submanifold $M$ as initial data, it suffices to analyze the vector field $X$ defined in the previous section. Similarly, to analyze the mean curvature flows having focal submanifolds of $M$ as initial data, it suffices to analyze the vector fields $X^{\widetilde{\sigma}}$ ($\widetilde{\sigma}\,:\,$a simplex of $\partial\widetilde C$) defined in the proof of Theorem B. In this section, we shall explicitly describe the vector field $X$ defined for principal orbits of all Hermann actions of cohomogeneity two on all irreducible symmetric spaces of compact type and rank two (see Table 3). Let $G/K$ be a symmetric space of compact type and $H$ be a symmetric subgroup of $G$. Also, let $\theta$ be an involution of $G$ with $({\rm Fix}\,\theta)_0\subset K\subset{\rm Fix}\,\theta$ and $\tau$ be an involution of $G$ with $({\rm Fix}\,\tau)_0\subset H\subset{\rm Fix}\, \tau$, where ${\rm Fix}\,\theta$ (resp. ${\rm Fix}\,\tau$) is the fixed point group of $\theta$ (resp. $\tau$) and $({\rm Fix}\,\theta)_0$ (resp. $({\rm Fix}\,\tau)_0$) is the identity component of ${\rm Fix}\,\theta$ (resp. ${\rm Fix}\,\tau$).
In the sequel, we assume that $\tau\circ\theta=\theta\circ\tau$. Set $L:={\rm Fix}(\theta\circ\tau)$. Denote by the same symbol $\theta$ (resp. $\tau$) the involution of the Lie algebra $\mathfrak g$ of $G$ induced from $\theta$ (resp. $\tau$). Set $\mathfrak k:={\rm Ker}(\theta-{\rm id}),\,\mathfrak p:={\rm Ker}(\theta+ {\rm id}),\,\mathfrak h:={\rm Ker}(\tau-{\rm id})$ and $\mathfrak q:= {\rm Ker}(\tau+{\rm id})$. The space $\mathfrak p$ is identified with $T_{eK}(G/K)$. From $\theta\circ\tau=\tau\circ\theta$, we have $\mathfrak p=\mathfrak p\cap\mathfrak h+\mathfrak p\cap\mathfrak q$. Take a maximal abelian subspace $\mathfrak b$ of $\mathfrak p\cap\mathfrak q$ and let $\mathfrak p=\mathfrak z_{\mathfrak p}(\mathfrak b) +\sum\limits_{\beta\in\triangle'_+}\mathfrak p_{\beta}$ be the root space decomposition with respect to $\mathfrak b$, where $\mathfrak z_{\mathfrak p}(\mathfrak b)$ is the centralizer of $\mathfrak b$ in $\mathfrak p$, $\triangle'_+$ is the positive root system of $\triangle':=\{\beta\in\mathfrak b^{\ast}\,\vert\,\exists\,X(\not=0)\in \mathfrak p\,\,{\rm s.t.}\,\,{\rm ad}(b)^2(X)=-\beta(b)^2X\,\,(\forall\,b\in \mathfrak b)\}$ under some lexicographic ordering of $\mathfrak b^{\ast}$ and $\mathfrak p_{\beta}:=\{X\in\mathfrak p\,\vert\,{\rm ad}(b)^2(X)=-\beta(b)^2X \,\,(\forall\,b\in\mathfrak b)\}$ ($\beta\in\triangle'_+$). Also, let ${\triangle'}^V_+:=\{\beta\in\triangle'_+\,\vert\,\mathfrak p_{\beta}\cap \mathfrak q\not=\{0\}\}$ and ${\triangle'}^H_+:=\{\beta\in\triangle'_+\,\vert\, \mathfrak p_{\beta}\cap\mathfrak h\not=\{0\}\}$. Then we have $\mathfrak q=\mathfrak b+\sum\limits_{\beta\in{\triangle'}^V_+} (\mathfrak p_{\beta}\cap\mathfrak q)$ and $\mathfrak h= \mathfrak z_{\mathfrak h}(\mathfrak b)+\sum\limits_{\beta\in{\triangle'}^H_+} (\mathfrak p_{\beta}\cap\mathfrak h)$, where $\mathfrak z_{\mathfrak h} (\mathfrak b)$ is the centralizer of $\mathfrak b$ in $\mathfrak h$. 
The orbit $H(eK)$ is a reflective submanifold and it is isometric to the symmetric space $H/H\cap K$ (equipped with a suitably scaled metric). Also, $\exp^{\perp}(T^{\perp}_{eK}(H(eK)))$ is a reflective submanifold and it is isometric to the symmetric space $L/H\cap K$ (equipped with a suitably scaled metric), where $\exp^{\perp}$ is the normal exponential map of $H(eK)$. The system ${\triangle'}^V:={\triangle'}^V_+\cup(-{\triangle'}^V_+)$ is the root system of $L/H\cap K$. Define a subset $\widetilde C$ of $\mathfrak b$ by $$\begin{array}{l} \displaystyle{\widetilde C:=\{b\in\mathfrak b\,\vert\,0<\beta(b)<\pi\,(\forall\, \beta\in{\triangle'}^V_+),\,\,-\frac{\pi}{2}<\beta(b)< \frac{\pi}{2}\,(\forall\,\beta\in{\triangle'}^H_+)\}.} \end{array}$$ Set $C:={\rm Exp}(\widetilde C)$, where ${\rm Exp}$ is the exponential map of $G/K$ at $eK$. Let $P(G,H\times K):=\{g\in H^1([0,1],G)\,\vert\,(g(0),g(1))\in H\times K\}$, where $H^1([0,1],G)$ is the Hilbert Lie group of all $H^1$-paths in $G$. This group acts on $H^0([0,1],\mathfrak g)$ by gauge transformations. The orbits of the $P(G,H\times K)$-action are the inverse images of orbits of the $H$-action under $\pi\circ\phi$. The set $\Sigma:={\rm Exp}(\mathfrak b)$ is a section of the $H$-action and $\mathfrak b$ is a section of the $P(G,H\times K)$-action on $H^0([0,1],\mathfrak g)$, where $\mathfrak b$ is identified with the horizontal lift of $\mathfrak b$ to the zero element $\hat 0$ of $H^0([0,1],\mathfrak g)$ ($\hat0\,:\,$the constant path at the zero element $0$ of $\mathfrak g$). The set $\widetilde C$ is the fundamental domain of the Coxeter group of a principal $P(G,H\times K)$-orbit; each principal $H$-orbit meets $C$ at exactly one point and each singular $H$-orbit meets $\partial C$ at exactly one point.
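Since $\widetilde C$ is cut out by finitely many linear inequalities, membership in the chamber is easy to test numerically. The following is a minimal illustrative sketch (not taken from the text): the $({\mathfrak a}_2)$-type data $\alpha=(2,0)$, $\beta=(-1,\sqrt3)$ with ${\triangle'}^V_+=\{\alpha\}$ and ${\triangle'}^H_+=\{\beta,\alpha+\beta\}$ are borrowed, purely for illustration, from the first row of Table 2.

```python
import math

# Illustrative (a_2)-type positive roots, viewed as linear functionals
# on b identified with R^2 (an assumption for this sketch).
alpha = (2.0, 0.0)
beta = (-1.0, math.sqrt(3.0))
ab = (alpha[0] + beta[0], alpha[1] + beta[1])  # alpha + beta

def ev(root, b):
    """Evaluate the linear functional `root` on the vector b."""
    return root[0] * b[0] + root[1] * b[1]

def in_C_tilde(b, V_roots, H_roots):
    """Membership test for the chamber C~:
       0 < beta(b) < pi        for beta in Delta'^V_+,
      -pi/2 < beta(b) < pi/2   for beta in Delta'^H_+."""
    return (all(0.0 < ev(r, b) < math.pi for r in V_roots)
            and all(-math.pi / 2 < ev(r, b) < math.pi / 2 for r in H_roots))

V_roots = [alpha]       # Delta'^V_+
H_roots = [beta, ab]    # Delta'^H_+
```

A point such as $(0.3,0)$ lies in this chamber, while $(2,0)$ violates $\alpha(b)<\pi$.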
The focal set of the principal orbit $P(G,H\times K)\cdot Z_0$ ($Z_0\in\widetilde C$) consists of the hyperplanes $\beta^{-1}(j\pi)$'s ($\beta\in{\triangle'}^V_+\setminus{\triangle'}^H_+,\,\,j\in{\Bbb Z}$), $\beta^{-1}((j+\frac12)\pi)$'s ($\beta\in{\triangle'}^H_+\setminus{\triangle'}^V_+,\,\,j\in{\Bbb Z}$), $\beta^{-1}(\frac{j\pi}{2})$'s ($\beta\in{\triangle'}^V_+\cap{\triangle'}^H_+,\,\,j\in{\Bbb Z}$) in $\mathfrak b(=T^{\perp}_{Z_0}(P(G,H\times K)\cdot Z_0))$. Denote by $\exp^G$ the exponential map of $G$. Note that $\pi\circ\exp^G\vert_{\mathfrak p}={\rm Exp}$. Let $Y_0\in\widetilde C$ and $M(Y_0):=H({\rm Exp}(Y_0))$. Then we have $T^{\perp}_{{\rm Exp}(Y_0)}M(Y_0) =(\exp^G(Y_0))_{\ast}(\mathfrak b)$. Denote by $A^{Y_0}$ the shape tensor of $M(Y_0)$. Take $v\in T^{\perp}_{{\rm Exp}(Y_0)}M(Y_0)$ and set $\bar v:=(\exp^G(Y_0))_{\ast}^{-1}(v)$. By scaling the metric of $G/K$ by a suitable positive constant, we have $$A^{Y_0}_v\vert_{\exp^G(Y_0)_{\ast}(\mathfrak p_{\beta}\cap\mathfrak q)}= -\frac{\beta(\bar v)}{\tan\beta(Y_0)}{\rm id}\,\,\,\, (\beta\in{\triangle'}^V_+)\leqno{(4.1)}$$ and $$A^{Y_0}_v\vert_{\exp^G(Y_0)_{\ast}(\mathfrak p_{\beta}\cap\mathfrak h)}= \beta(\bar v)\tan\beta(Y_0){\rm id}\,\,\,\,(\beta\in{\triangle'}^H_+). \leqno{(4.2)}$$ Set $m_{\beta}^V:={\rm dim}(\mathfrak p_{\beta}\cap\mathfrak q)$ ($\beta\in{\triangle'}^V_+$) and $m_{\beta}^H:={\rm dim}(\mathfrak p_{\beta}\cap\mathfrak h)$ ($\beta\in{\triangle'}^H_+$). Set $\widetilde M(Y_0):=(\pi\circ\phi)^{-1}(M(Y_0))(=P(G,H\times K)\cdot Y_0)$. We can show $(\pi\circ\phi)(Y_0)={\rm Exp}(Y_0)$. Denote by ${\widetilde A}^{Y_0}$ the shape tensor of $\widetilde M(Y_0)$. 
According to Proposition 3.2 of [Koi1], we have $$\begin{array}{l} \displaystyle{{\rm Spec}({\widetilde A}^{Y_0}_{\bar v} \vert_{(\pi\circ\phi)^{-1}_{\ast Y_0}(\exp^G(Y_0)_{\ast} (\mathfrak p_{\beta}\cap\mathfrak q))})\setminus\{0\}= \{\frac{-\beta(\bar v)}{\beta(Y_0)+j\pi}\,\vert\,j\in{\Bbb Z}\}\,\,\,\, (\beta\in{\triangle'}^V_+),}\\ \displaystyle{{\rm Spec}({\widetilde A}^{Y_0}_{\bar v} \vert_{(\pi\circ\phi)^{-1}_{\ast Y_0}(\exp^G(Y_0)_{\ast} (\mathfrak p_{\beta}\cap\mathfrak h))})\setminus\{0\}= \{\frac{-\beta(\bar v)}{\beta(Y_0)+(j+\frac12)\pi}\,\vert\,j\in{\Bbb Z}\}\,\,\, \,(\beta\in{\triangle'}^H_+),} \end{array}$$ and $${\rm Spec}({\widetilde A}^{Y_0}_{\bar v} \vert_{(\pi\circ\phi)^{-1}_{\ast Y_0}(\exp^G(Y_0)_{\ast} (\mathfrak z_{\mathfrak h}(\mathfrak b)))})=\{0\}.$$ Hence the set ${\cal PC}_{\widetilde M(Y_0)}$ of all principal curvatures of $\widetilde M(Y_0)$ is given by $${\cal PC}_{\widetilde M(Y_0)}=\{\frac{-\widetilde{\beta}}{\beta(Y_0)+j\pi}\, \vert\,\beta\in{\triangle'}^V_+,\,\,j\in\Bbb Z\}\cup \{\frac{-\widetilde{\beta}}{\beta(Y_0)+(j+\frac12)\pi}\,\vert\, \beta\in{\triangle'}^H_+,\,\,j\in\Bbb Z\},$$ where $\widetilde{\beta}$ is the parallel section of $(T^{\perp}\widetilde M (Y_0))^{\ast}$ with $\widetilde{\beta}_{u_0} =\beta\circ\exp^G(Y_0)_{\ast}^{-1}$. Also, we can show that the multiplicity of $\frac{-\widetilde{\beta}} {\beta(Y_0)+j\pi}$ ($\beta\in{\triangle'}^V_+$) is equal to $m_{\beta}^V$ and that of $\frac{-\widetilde{\beta}}{\beta(Y_0)+(j+\frac12)\pi}$ ($\beta\in{\triangle'}^H_+$) is equal to $m_{\beta}^H$. 
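The description of ${\cal PC}_{\widetilde M(Y_0)}$ above is a countable family of explicitly parametrized reals, so finitely many of its members can be enumerated directly. Below is a minimal numerical sketch; the roots, $Y_0$, $\bar v$ and the range of $j$ are illustrative assumptions (none of these numbers come from the text), and roots are treated as linear functionals on $\mathfrak b\cong{\Bbb R}^2$.

```python
import math

def ev(root, b):
    # Evaluate the linear functional `root` on b.
    return root[0] * b[0] + root[1] * b[1]

def principal_curvatures(V_roots, H_roots, Y0, vbar, j_range=range(-2, 3)):
    """Finitely many members of the principal-curvature set of M~(Y0):
       -beta(vbar)/(beta(Y0) + j*pi)           for beta in Delta'^V_+,
       -beta(vbar)/(beta(Y0) + (j+1/2)*pi)     for beta in Delta'^H_+."""
    pcs = []
    for r in V_roots:
        pcs += [-ev(r, vbar) / (ev(r, Y0) + j * math.pi) for j in j_range]
    for r in H_roots:
        pcs += [-ev(r, vbar) / (ev(r, Y0) + (j + 0.5) * math.pi) for j in j_range]
    return pcs
```

For a single root with $\beta(Y_0)=\beta(\bar v)=1$ and $j=0$, the two families give $-1$ and $-1/(1+\pi/2)$ respectively, matching the formulas above.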
Define $\lambda_{\beta}^{Y_0}$ and $b_{\beta}^{Y_0}$ ($\beta\in{\triangle'}_+$) by $$(\lambda^{Y_0}_{\beta},b^{Y_0}_{\beta}):=\left\{ \begin{array}{ll} \displaystyle{(\frac{-\widetilde{\beta}}{\beta(Y_0)},\,\frac{\pi} {\beta(Y_0)})} & \displaystyle{(\beta\in{\triangle'}^V_+\setminus {\triangle'}^H_+)}\\ \displaystyle{(\frac{-\widetilde{\beta}}{\beta(Y_0)+\frac{\pi}{2}},\, \frac{\pi}{\beta(Y_0)+\frac{\pi}{2}})} & \displaystyle{ (\beta\in{\triangle'}^H_+\setminus{\triangle'}^V_+)}\\ \displaystyle{(\frac{-\widetilde{\beta}}{\beta(Y_0)},\,\frac{\pi} {2\beta(Y_0)})} & \displaystyle{(\beta\in{\triangle'}^V_+\cap{\triangle'}^H_+).}\end{array} \right.$$ Then we have $\frac{-\widetilde{\beta}}{\beta(Y_0)+j\pi} =\frac{\lambda_{\beta}^{Y_0}}{1+jb_{\beta}^{Y_0}}$ when $\beta\in{\triangle'}^V_+\setminus{\triangle'}^H_+$, $\frac{-\widetilde{\beta}}{\beta(Y_0)+(j+\frac12)\pi} =\frac{\lambda_{\beta}^{Y_0}}{1+jb_{\beta}^{Y_0}}$ when $\beta\in{\triangle'}^H_+\setminus{\triangle'}^V_+$ and $(\frac{-\widetilde{\beta}}{\beta(Y_0)+j\pi},\frac{-\widetilde{\beta}} {\beta(Y_0)+(j+\frac12)\pi}) =(\frac{\lambda_{\beta}^{Y_0}}{1+2jb_{\beta}^{Y_0}},\, \frac{\lambda_{\beta}^{Y_0}}{1+(2j+1)b_{\beta}^{Y_0}})$ when $\beta\in{\triangle'}^V_+\cap{\triangle'}^H_+$. That is, we have $${\cal PC}_{\widetilde M(Y_0)}=\{\frac{\lambda_{\beta}^{Y_0}} {1+jb_{\beta}^{Y_0}}\,\vert\,\beta\in\triangle'_+,\,j\in{\Bbb Z}\}.$$ Denote by $m_{\beta j}$ the multiplicity of $\frac{\lambda_{\beta}}{1+jb_{\beta}}$. 
Then we have $$ m_{\beta,2j}=\left\{ \begin{array}{ll} \displaystyle{m_{\beta}^V} & \displaystyle{(\beta\in{\triangle'}^V_+\setminus {\triangle'}^H_+)}\\ \displaystyle{m_{\beta}^H} & \displaystyle{(\beta\in{\triangle'}^H_+ \setminus{\triangle'}^V_+)}\\ \displaystyle{m_{\beta}^V} & \displaystyle{(\beta\in{\triangle'}^V_+\cap {\triangle'}^H_+),} \end{array}\right.\qquad m_{\beta,2j+1}=\left\{ \begin{array}{ll} \displaystyle{m_{\beta}^V} & \displaystyle{(\beta\in{\triangle'}^V_+ \setminus{\triangle'}^H_+)}\\ \displaystyle{m_{\beta}^H} & \displaystyle{(\beta\in{\triangle'}^H_+\setminus {\triangle'}^V_+)}\\ \displaystyle{m_{\beta}^H} & \displaystyle{(\beta\in{\triangle'}^V_+\cap {\triangle'}^H_+),} \end{array}\right.$$ where $j\in{\Bbb Z}$. Denote by $\widetilde H^{Y_0}$ the mean curvature vector of $\widetilde M(Y_0)$ and ${\bf n}^{Y_0}_{\beta}$ the curvature normal corresponding to $\lambda^{Y_0}_{\beta}$. Define $\beta^{\sharp}\,(\in\mathfrak b)$ by $\beta(\cdot)=\langle \beta^{\sharp},\cdot\rangle$ and let $\widetilde{\beta^{\sharp}}^{Y_0}$ be the parallel normal vector field of $\widetilde M(Y_0)$ with $(\widetilde{\beta^{\sharp}}^{Y_0})_{Y_0}=\beta^{\sharp}$, where we identify $\mathfrak b$ with $T^{\perp}_{Y_0}\widetilde M(Y_0)$. 
From $(3.1)$ (the case of $w=0$), we have $$\begin{array}{l} \displaystyle{\widetilde H^{Y_0}=\sum_{\beta\in{\triangle'}^V_+} m_{\beta}^V\cot\frac{\pi}{2b^{Y_0}_{\beta}}\cdot\frac{\pi}{2b^{Y_0}_{\beta}} {\bf n}^{Y_0}_{\beta}}\\ \hspace{1.2truecm}\displaystyle{ -\sum_{\beta\in{\triangle'}^H_+}m_{\beta}^H \tan\frac{\pi}{2b^{Y_0}_{\beta}}\cdot\frac{\pi}{2b_{\beta}^{Y_0}} {\bf n}_{\beta}^{Y_0}}\\ \hspace{0.6truecm}\displaystyle{= -\sum_{\beta\in{\triangle'}^V_+}m_{\beta}^V\cot\beta(Y_0) {\widetilde{\beta^{\sharp}}}^{Y_0}+\sum_{\beta\in{\triangle'}^H_+} m_{\beta}^H\tan\beta(Y_0){\widetilde{\beta^{\sharp}}}^{Y_0}.} \end{array} \leqno{(4.3)}$$ Define a tangent vector field $X$ on $\widetilde C$ by assigning $({\widetilde H}^{Y_0})_{Y_0}\,(\in T^{\perp}_{Y_0}\widetilde M(Y_0) =\mathfrak b(\subset V))$ to each $Y_0\in\widetilde C$. From $(4.3)$, we have $$X_{Y_0}=-\sum_{\beta\in{\triangle'}^V_+}m_{\beta}^V\cot\beta(Y_0) \beta^{\sharp} +\sum_{\beta\in{\triangle'}^H_+}m_{\beta}^H\tan\beta(Y_0)\beta^{\sharp}. \leqno{(4.4)}$$ By using this description, we can explicitly describe the vector field $X$ for all Hermann actions of cohomogeneity two on all irreducible symmetric spaces of compact type and rank two. All Hermann actions of cohomogeneity two on all irreducible symmetric spaces of compact type and rank two are given in Table 1. The systems ${\triangle'}^V_+$ and ${\triangle'}^H_+$ for the Hermann actions are given in Table 2 and the explicit descriptions of $X$ for the Hermann actions are given in Table 3. In Table 1, $H^{\ast}\curvearrowright G^{\ast}/K$ denotes the dual action of $H\curvearrowright G/K$ and $L^{\ast}/H\cap K$ is the dual of $L/H\cap K$.
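As a sanity check, the closed formula $(4.4)$ is straightforward to evaluate numerically. The sketch below assumes the standard inner product on $\mathfrak b\cong{\Bbb R}^2$ (so that $\beta^{\sharp}$ is identified with the coordinate vector of $\beta$); the sample roots and unit multiplicities are illustrative, borrowed from the first row of Table 2.

```python
import math

def X_field(Y0, V_data, H_data):
    """Evaluate (4.4):
       X_{Y0} = - sum_{beta in Delta'^V_+} m_beta^V cot(beta(Y0)) beta#
                + sum_{beta in Delta'^H_+} m_beta^H tan(beta(Y0)) beta#,
    where V_data/H_data are lists of (root, multiplicity) pairs and
    beta# is identified with the coordinate vector of beta."""
    ev = lambda r, b: r[0] * b[0] + r[1] * b[1]
    x = [0.0, 0.0]
    for r, m in V_data:
        c = -m / math.tan(ev(r, Y0))       # -m_beta^V cot(beta(Y0))
        x[0] += c * r[0]; x[1] += c * r[1]
    for r, m in H_data:
        c = m * math.tan(ev(r, Y0))        # +m_beta^H tan(beta(Y0))
        x[0] += c * r[0]; x[1] += c * r[1]
    return x

# Illustrative (a_2)-type sample data (an assumption for this sketch):
alpha, beta = (2.0, 0.0), (-1.0, math.sqrt(3.0))
ab = (1.0, math.sqrt(3.0))                  # alpha + beta
V_data = [(alpha, 1)]                       # Delta'^V_+ with m^V
H_data = [(beta, 1), (ab, 1)]               # Delta'^H_+ with m^H
```

Since $X_{Y_0}=({\widetilde H}^{Y_0})_{Y_0}$, zeros of $X$ located this way correspond to the orbits $\widetilde M(Y_0)$ with vanishing mean curvature.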
In Table 2, $\{\alpha,\beta,\alpha+\beta\}$ denotes a positive root system of the root system of (${\mathfrak a}_2$)-type ($\alpha=(2,0),\beta=(-1,\sqrt3)$), $\{\alpha,\beta,\alpha+\beta, 2\alpha+\beta\}$ denotes a positive root system of the root system of (${\mathfrak b}_2$)(=(${\mathfrak c}_2$))-type ($\alpha=(1,0),\beta=(-1,1)$) and $\{\alpha,\beta,\alpha+\beta,\alpha+2\beta,\alpha+3\beta,2\alpha+3\beta\}$ denotes a positive root system of the root system of (${\mathfrak g}_2$)-type ($\alpha=(2\sqrt3,0),\beta=(-\sqrt3,1)$). In Tables 1--3, $\rho_i$ ($i=1,\cdots,16$) denote automorphisms of $G$ and $(\cdot)^2$ denotes the product Lie group $(\cdot)\times(\cdot)$ of a Lie group $(\cdot)$. In Table 2, $\displaystyle{\mathop{\alpha}_{(m)}}$ and so on indicate that the multiplicity of $\alpha$ is equal to $m$. \vspace{0.5truecm} $$\begin{tabular}{|c|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$H^{\ast}\curvearrowright G^{\ast}/K$} & {\scriptsize$L^{\ast}/H\cap K$}\\ \hline {\scriptsize$\rho_1(SO(3))\curvearrowright SU(3)/SO(3)$} & {\scriptsize$SO_0(1,2)\curvearrowright SL(3,{\Bbb R})/SO(3)$} & {\scriptsize$(SL(2,{\Bbb R})/SO(2))\times{\Bbb R}$}\\ \hline {\scriptsize$SO(6)\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$SO^{\ast}(6)\curvearrowright SU^{\ast}(6)/Sp(3)$} & {\scriptsize$SL(3,{\Bbb C})/SU(3)$}\\ \hline {\scriptsize$\rho_2(Sp(3))\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$Sp(1,2)\curvearrowright SU^{\ast}(6)/Sp(3)$} & {\scriptsize$(SU^{\ast}(4)/Sp(2))\times U(1)$}\\ \hline {\scriptsize$SO(q+2)\curvearrowright$} & {\scriptsize$SO_0(2,q)\curvearrowright$} & {\scriptsize$SO_0(2,q)/SO(2)\times SO(q)$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} & {\scriptsize$SU(2,q)/S(U(2)\times U(q))$} & {\scriptsize}\\ \hline {\scriptsize$S(U(j+1)\times U(q-j+1))\curvearrowright$} & {\scriptsize$S(U(1,j)\times U(1,q-j))\curvearrowright$} & {\scriptsize$(SU(1,j)/S(U(1)\times U(j)))\times$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} &
{\scriptsize$SU(2,q)/S(U(2)\times U(q))$} & {\scriptsize$(SU(1,q-j)/S(U(1)\times U(q-j)))$}\\ \hline {\scriptsize$SO(j+1)\times SO(q-j+1)\curvearrowright$} & {\scriptsize$SO(1,j)\times SO(1,q-j)\curvearrowright$} & {\scriptsize$(SO_0(1,j)/SO(j))\times$}\\ {\scriptsize$SO(q+2)/SO(2)\times SO(q)$} & {\scriptsize$SO(2,q)/SO(2)\times SO(q)$} & {\scriptsize$(SO_0(1,q-j)/SO(q-j))$}\\ \hline {\scriptsize$SO(4)\times SO(4)\curvearrowright$} & {\scriptsize$SO^{\ast}(4)\times SO^{\ast}(4)\curvearrowright$} & {\scriptsize$SU(2,2)/S(U(2)\times U(2))$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize$SO^{\ast}(8)/U(4)$} & {\scriptsize}\\ \hline {\scriptsize$\rho_3(SO(4)\times SO(4))\curvearrowright$} & {\scriptsize$SO(4,{\Bbb C})\curvearrowright SO^{\ast}(8)/U(4)$} & {\scriptsize$SO(4,{\Bbb C})/SO(4)$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$\rho_4(U(4))\curvearrowright SO(8)/U(4)$} & {\scriptsize$U(2,2)\curvearrowright SO^{\ast}(8)/U(4)$} & {\scriptsize$(SO^{\ast}(4)/U(2))\times(SO^{\ast}(4)/U(2))$}\\ \hline {\scriptsize$SO(4)\times SO(6)\curvearrowright$} & {\scriptsize$SO^{\ast}(4)\times SO^{\ast}(6)\curvearrowright$} & {\scriptsize$SU(2,3)/S(U(2)\times U(3))$}\\ {\scriptsize$SO(10)/U(5)$} & {\scriptsize$SO^{\ast}(10)/U(5)$} & {\scriptsize}\\ \hline {\scriptsize$SO(5)\times SO(5)\curvearrowright$} & {\scriptsize$SO(5,{\Bbb C})\curvearrowright SO^{\ast}(10)/U(5)$} & {\scriptsize$SO(5,{\Bbb C})/SO(5)$}\\ {\scriptsize$SO(10)/U(5)$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$\rho_5(U(5))\curvearrowright SO(10)/U(5)$} & {\scriptsize$U(2,3)\curvearrowright SO^{\ast}(10)/U(5)$} & {\scriptsize$(SO^{\ast}(4)/U(2))\times(SO^{\ast}(6)/U(3))$}\\ \hline {\scriptsize$SO(2)^2\times SO(3)^2\curvearrowright$} & {\scriptsize$SO(2,{\Bbb C})\times SO(3,{\Bbb C})\curvearrowright$} & {\scriptsize$SO_0(2,3)/SO(2)\times SO(3)$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize$SO(5,{\Bbb C})/SO(5)$} & {\scriptsize}\\ \hline
{\scriptsize$\rho_6(SO(5))\curvearrowright$} & {\scriptsize$SO_0(2,3)\curvearrowright SO(5,{\Bbb C})/SO(5)$} & {\scriptsize$(SO(2,{\Bbb C})/SO(2))$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize} & {\scriptsize$\times(SO(3,{\Bbb C})/SO(3))$}\\ \hline {\scriptsize$\rho_7(U(2))\curvearrowright Sp(2)/U(2)$} & {\scriptsize$U(1,1)\curvearrowright Sp(2,{\Bbb R})/U(2)$} & {\scriptsize$(Sp(1,{\Bbb R})/U(1))$}\\ {\scriptsize} & {\scriptsize} & {\scriptsize$\times(Sp(1,{\Bbb R})/U(1))$}\\ \hline {\scriptsize$SU(q+2)\curvearrowright$} & {\scriptsize$SU(2,q)\curvearrowright$} & {\scriptsize$SU(2,q)/S(U(2)\times U(q))$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$Sp(2,q)/Sp(2)\times Sp(q)$} & {\scriptsize}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 1.}} \newpage $$\begin{tabular}{|c|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$H^{\ast}\curvearrowright G^{\ast}/K$} & {\scriptsize$L^{\ast}/H\cap K$}\\ \hline {\scriptsize$U(4)\curvearrowright$} & {\scriptsize$U^{\ast}(4)\curvearrowright$} & {\scriptsize$Sp(2,{\Bbb C})/Sp(2)$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$Sp(2,2)/Sp(2)\times Sp(2)$} & {\scriptsize}\\ \hline {\scriptsize$Sp(j+1)\times Sp(q-j+1)\curvearrowright$} & {\scriptsize$Sp(1,j)\times Sp(1,q-j)\curvearrowright$} & {\scriptsize$(Sp(1,j)/Sp(1)\times Sp(j))\times$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$Sp(2,q)/Sp(2)\times Sp(q)$} & {\scriptsize$(Sp(1,q-j)/Sp(1)\times Sp(q-j))$}\\ \hline {\scriptsize$SU(2)^2\cdot SO(2)^2\curvearrowright$} & {\scriptsize$SL(2,{\Bbb C})\cdot SO(2,{\Bbb C})\curvearrowright$} & {\scriptsize$Sp(2,{\Bbb R})/U(2)$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$Sp(2,{\Bbb C})/Sp(2)$} & {\scriptsize}\\ \hline {\scriptsize$\rho_8(Sp(2))\curvearrowright$} & {\scriptsize$Sp(2,{\Bbb R})\curvearrowright Sp(2,{\Bbb C})/Sp(2)$} & {\scriptsize$(SL(2,{\Bbb C})/SU(2))$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize} & 
{\scriptsize$\times(SO(2,{\Bbb C})/SO(2))$}\\ \hline {\scriptsize$\rho_9(Sp(2))\curvearrowright$} & {\scriptsize$Sp(1,1)\curvearrowright Sp(2,{\Bbb C})/Sp(2)$} & {\scriptsize$(Sp(1,{\Bbb C})/Sp(1))$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize} & {\scriptsize$\times(Sp(1,{\Bbb C})/Sp(1))$}\\ \hline {\scriptsize$Sp(4)\curvearrowright E_6/Spin(10)\cdot U(1)$} & {\scriptsize$Sp(2,2)\curvearrowright E_6^{-14}/Spin(10)\cdot U(1)$} & {\scriptsize$Sp(2,2)/Sp(2)\times Sp(2)$}\\ \hline {\scriptsize$SU(6)\cdot SU(2)\curvearrowright$} & {\scriptsize$SU(2,4)\cdot SU(2)\curvearrowright$} & {\scriptsize$SU(2,4)/S(U(2)\times U(4))$}\\ {\scriptsize$ E_6/Spin(10)\cdot U(1)$} & {\scriptsize$E_6^{-14}/Spin(10)\cdot U(1)$} & {\scriptsize}\\ \hline {\scriptsize$\rho_{10}(SU(6)\cdot SU(2))\curvearrowright$} & {\scriptsize$SU(1,5)\cdot SL(2,{\Bbb R})\curvearrowright$} & {\scriptsize$SO^{\ast}(10)/U(5)$}\\ {\scriptsize$ E_6/Spin(10)\cdot U(1)$} & {\scriptsize$E_6^{-14}/Spin(10)\cdot U(1)$} & {\scriptsize}\\ \hline {\scriptsize$\rho_{11}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$SO^{\ast}(10)\cdot U(1)\curvearrowright$} & {\scriptsize$(SU(1,5)/S(U(1)\times U(5)))$}\\ {\scriptsize$ E_6/Spin(10)\cdot U(1)$} & {\scriptsize$E_6^{-14}/Spin(10)\cdot U(1)$} & {\scriptsize$\times(SL(2,{\Bbb R})/SO(2))$}\\ \hline {\scriptsize$\rho_{12}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$SO_0(2,8)\cdot U(1)\curvearrowright$} & {\scriptsize$SO_0(2,8)/SO(2)\times SO(8)$}\\ {\scriptsize$ E_6/Spin(10)\cdot U(1)$} & {\scriptsize$E_6^{-14}/Spin(10)\cdot U(1)$} & {\scriptsize}\\ \hline {\scriptsize$Sp(4)\curvearrowright E_6/F_4$} & {\scriptsize$Sp(1,3)\curvearrowright E_6^{-26}/F_4$} & {\scriptsize$SU^{\ast}(6)/Sp(3)$}\\ \hline {\scriptsize$\rho_{13}(F_4)\curvearrowright E_6/F_4$} & {\scriptsize$F_4^{-20}\curvearrowright E_6^{-26}/F_4$} & {\scriptsize$(SO_0(1,9)/SO(9))\times U(1)$}\\ \hline {\scriptsize$\rho_{14}(SO(4))\curvearrowright G_2/SO(4)$} & {\scriptsize$SL(2,{\Bbb 
R})\times SL(2,{\Bbb R}) \curvearrowright G_2^2/SO(4)$} & {\scriptsize$SO(4)/SO(2)\times SO(2)$}\\ \hline {\scriptsize$\rho_{15}(SO(4))\curvearrowright G_2/SO(4)$} & {\scriptsize$\rho^{\ast}_{15}(SO(4))\curvearrowright G_2^2/SO(4)$} & {\scriptsize$(SL(2,{\Bbb R})/SO(2))$}\\ {\scriptsize} & {\scriptsize} & {\scriptsize$\times(SL(2,{\Bbb R})/SO(2))$}\\ \hline {\scriptsize$\rho_{16}(G_2)\curvearrowright(G_2\times G_2)/G_2$} & {\scriptsize$G_2^2\curvearrowright G_2^{\bf C}/G_2$} & {\scriptsize$(SL(2,{\Bbb C})/SU(2))$}\\ {\scriptsize} & {\scriptsize} & {\scriptsize$\times(SL(2,{\Bbb C})/SU(2))$}\\ \hline {\scriptsize$SU(2)^4\curvearrowright(G_2\times G_2)/G_2$} & {\scriptsize$SL(2,{\Bbb C})\times SL(2,{\Bbb C})\curvearrowright G_2^{\bf C}/G_2$} & {\scriptsize$G_2^2/SO(4)$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 1(continued).}} \vspace{0.5truecm} $$\begin{tabular}{|c|c|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$\triangle_+=\triangle'_+$} & {\scriptsize${\triangle'}^V_+$} & {\scriptsize${\triangle'}^H_+$}\\ \hline {\scriptsize$\rho_1(SO(3))\curvearrowright SU(3)/SO(3)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$SO(6)\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$}\\ \hline {\scriptsize$\rho_2(Sp(3))\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}\}}$} & 
{\scriptsize$\displaystyle{\{\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(4)}\}}$}\\ \hline {\scriptsize$SO(q+2)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2q-4)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2q-4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(q-2)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(q-2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(q-2)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(q-2)},}$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)},\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)},\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$}\\ {\scriptsize$(q\,>\,2)$}&&&\\ \hline {\scriptsize$S(U(j+1)\times U(q-j+1))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2q-4)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2q-4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2j-2)}, \mathop{\alpha+\beta}_{(2q-2j-2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2q-2j-6)}, \mathop{\beta}_{(2)},}$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)},\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{\alpha+\beta}_{(2j-2)}, \mathop{2\alpha+\beta}_{(2)}\}}$}\\ {\scriptsize$(q>2)$} & & & \\ \hline {\scriptsize$S(U(2)\times U(2))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},}$}\\ {\scriptsize$SU(4)/S(U(2)\times U(2))$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{\alpha+\beta}_{(1)}, \mathop{2\alpha+\beta}_{(1)}\}}$}\\ {\scriptsize (non-isotropy gr. act.)} & & &\\ \hline {\scriptsize$SO(j+1)\times SO(q-j+1)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(q-2)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(q-2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(j-1)}, \mathop{\alpha+\beta}_{(q-j-1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(q-j-1)}, \mathop{\beta}_{(1)},}$}\\ {\scriptsize$SO(q+2)/SO(2)\times SO(q)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{\alpha+\beta}_{(j-1)}, \mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$SO(4)\times SO(4)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize}\\ \hline {\scriptsize$\rho_3(SO(4)\times SO(4))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\alpha+\beta}_{(2)} \}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(2)},}$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_4(U(4))\curvearrowright SO(8)/U(4)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},
\mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\alpha+\beta}_{(1)} \}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(3)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(3)},}$}\\ {\scriptsize} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$SO(4)\times SO(6)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)},}$}\\ {\scriptsize$SO(10)/U(5)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$}\\ \hline {\scriptsize$SO(5)\times SO(5)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)},}$}\\ {\scriptsize$SO(10)/U(5)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 2.}} \newpage $$\begin{tabular}{|c|c|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} &
{\scriptsize$\triangle_+=\triangle'_+$} & {\scriptsize${\triangle'}^V_+$} & {\scriptsize${\triangle'}^H_+$}\\ \hline {\scriptsize$\rho_5(U(5))\curvearrowright SO(10)/U(5)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4)}, \mathop{2\alpha+\beta}_{(4)}\}}$}\\ {\scriptsize} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$SO(2)^2\times SO(3)^2\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)},}$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize$\displaystyle{\{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_6(SO(5))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)},\mathop{2\alpha+\beta}_{(2)}\}}$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize$\displaystyle{\{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$\rho_7(U(2))\curvearrowright Sp(2)/U(2)$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, 
\mathop{\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(1)}, \mathop{2\alpha+\beta}_{(1)}\}}$}\\ {\scriptsize} & {\scriptsize$\displaystyle{\{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$SU(q+2)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4q-8)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4q-8)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2q-4)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2q-4)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2q-4)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2q-4)}\}}$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(3)},\mathop{2\alpha+2\beta}_{(3)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{2\alpha}_{(2)},\mathop{2\alpha+2\beta}_{(2)}\}}$}\\ {\scriptsize$(q>2)$}&&&\\ \hline {\scriptsize$SU(4)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(3)},\mathop{\alpha+\beta}_{(3)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(1)}\}}$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(3)}\}}$}\\ \hline {\scriptsize$U(4)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(3)},\mathop{\alpha+\beta}_{(3)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(1)}, 
\mathop{\alpha+\beta}_{(1)}\}}$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$}\\ \hline {\scriptsize$Sp(j+1)\times Sp(q-j+1)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4q-8)}, \mathop{\beta}_{(4)},\mathop{\alpha+\beta}_{(4q-8)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2j-4)},\mathop{2\alpha}_{(3)}, \mathop{\alpha+\beta}_{(4q-4j-4)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4q-4j-4)},\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(4j-4)}\}}$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(3)},\mathop{2\alpha+2\beta}_{(3)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+2\beta}_{(3)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}\}}$}\\ {\scriptsize$(q>2)$}&&&\\ \hline {\scriptsize$Sp(2)\times Sp(2)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)}, \mathop{\beta}_{(3)},\mathop{\alpha+\beta}_{(3)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(3)}, \mathop{\alpha+\beta}_{(3)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(3)}, \mathop{2\alpha+\beta}_{(4)}\}}$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$SU(2)^2\cdot SO(2)^2\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)},}$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_8(Sp(2))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(2)}, \mathop{2\alpha+\beta}_{(2)}\}}$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 2(continued).}} \newpage $$\begin{tabular}{|c|c|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$\triangle_+=\triangle'_+$} & {\scriptsize${\triangle'}^V_+$} & {\scriptsize${\triangle'}^H_+$}\\ \hline {\scriptsize$\rho_9(Sp(2))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\alpha+\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(2)}, \mathop{2\alpha+\beta}_{(2)}\}}$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$Sp(4)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(3)}, \mathop{\alpha+\beta}_{(3)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(3)}, \mathop{\alpha+\beta}_{(6)},}$}\\ {\scriptsize$\displaystyle{E_6/Spin(10)\cdot U(1)}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(5)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}\}}$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$SU(6)\cdot SU(2)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(5)},}$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(5)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(3)}\}}$}\\ \hline {\scriptsize$\rho_{10}(SU(6)\cdot SU(2))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(4)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(2)}, \mathop{\alpha+\beta}_{(5)},}$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(5)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_{11}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)},\mathop{2\alpha}_{(1)}, \mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)}, \mathop{2\alpha+\beta}_{(5)}\}}$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(5)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize}\\ \hline {\scriptsize$\rho_{12}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(6)},\mathop{\alpha+\beta}_{(9)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(6)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(6)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)},\mathop{\beta}_{(5)}, \mathop{\alpha+\beta}_{(3)},}$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(5)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(4)}, \mathop{2\alpha}_{(1)},\mathop{2\alpha+2\beta}_{(1)}\}}$}\\ \hline {\scriptsize$Sp(4)\curvearrowright E_6/F_4$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(8)},\mathop{\alpha+\beta}_{(8)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(4)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(4)},\mathop{\beta}_{(4)}, \mathop{\alpha+\beta}_{(4)}\}}$}\\ \hline {\scriptsize$\rho_{13}(F_4)\curvearrowright E_6/F_4$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}, \mathop{\beta}_{(8)},\mathop{\alpha+\beta}_{(8)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(8)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(8)}, \mathop{\alpha+\beta}_{(8)}\}}$}\\ \hline {\scriptsize$\rho_{14}(SO(4))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{3\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)}, \mathop{2\alpha+\beta}_{(1)},}$}\\ {\scriptsize$\displaystyle{G_2/SO(4)}$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}, \mathop{3\alpha+\beta}_{(1)},\mathop{3\alpha+2\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{3\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_{15}(SO(4))\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)}, \mathop{3\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(1)},\mathop{\alpha+\beta}_{(1)}, \mathop{2\alpha+\beta}_{(1)},}$}\\ {\scriptsize$\displaystyle{G_2/SO(4)}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}, \mathop{3\alpha+\beta}_{(1)},\mathop{3\alpha+2\beta}_{(1)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{3\alpha+\beta}_{(1)}\}}$}\\ \hline {\scriptsize$\rho_{16}(G_2)\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{3\alpha+2\beta}_{(2)}\}}$} & {\scriptsize$\displaystyle{\{\mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)}, \mathop{2\alpha+\beta}_{(2)},}$}\\ {\scriptsize$\displaystyle{(G_2\times G_2)/G_2}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{3\alpha+\beta}_{(2)},\mathop{3\alpha+2\beta}_{(2)}\}}$} & {\scriptsize} & {\scriptsize$\displaystyle{\mathop{3\alpha+\beta}_{(2)}\}}$}\\ \hline {\scriptsize$SU(2)^4\curvearrowright$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(2)}, \mathop{\beta}_{(2)},\mathop{\alpha+\beta}_{(2)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)},}$} & {\scriptsize$\displaystyle{\{\mathop{\alpha}_{(1)},\mathop{\beta}_{(1)}, \mathop{\alpha+\beta}_{(1)},}$}\\ {\scriptsize$\displaystyle{(G_2\times G_2)/G_2}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(2)}, \mathop{3\alpha+\beta}_{(2)},\mathop{3\alpha+2\beta}_{(2)}\}}$} & 
{\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}, \mathop{3\alpha+\beta}_{(1)},\mathop{3\alpha+2\beta}_{(1)}\}}$} & {\scriptsize$\displaystyle{\mathop{2\alpha+\beta}_{(1)}, \mathop{3\alpha+\beta}_{(1)},\mathop{3\alpha+2\beta}_{(1)}\}}$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 2(continued${}^2$).}} \newpage \vspace{0.5truecm} $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$\rho_1(SO(3))\curvearrowright SU(3)/SO(3)$} & {\scriptsize$X_{(x_1,x_2)}=(\tan(x_1+\sqrt3x_2)-2\cot2x_1 +\tan(x_1-\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$\sqrt3\tan(x_1+\sqrt3x_2)-\sqrt3\tan(x_1-\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0, x_2>\frac{1}{\sqrt3}x_1-\frac{\pi}{2\sqrt3}, x_2<-\frac{1}{\sqrt3}x_1+\frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$SO(6)\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$X_{(x_1,x_2)}=(-4\cot2x_1-2\cot(x_1-\sqrt3x_2) -2\cot(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+4\tan2x_1+2\tan(x_1-\sqrt3x_2) +2\tan(x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$2\sqrt3\cot(x_1-\sqrt3x_2)-2\sqrt3\cot(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$-2\sqrt3\tan(x_1-\sqrt3x_2)+2\sqrt3\tan(x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0, x_2>\frac{1}{\sqrt3}x_1, x_2<-\frac{1}{\sqrt3}x_1+\frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$\rho_2(Sp(3))\curvearrowright SU(6)/Sp(3)$} & {\scriptsize$X_{(x_1,x_2)}=(-8\cot2x_1+4\tan(x_1-\sqrt3x_2) +4\cot(x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$4\sqrt3\tan(x_1+\sqrt3x_2) -4\sqrt3\tan(x_1-\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0, x_2>\frac{1}{\sqrt3}x_1-\frac{\pi}{2\sqrt3}, x_2<-\frac{1}{\sqrt3}x_1+\frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$SO(q+2)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-(q-2)\cot x_1+\cot(x_1-x_2) -\cot(x_1+x_2)$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} & {\scriptsize$+(q-2)\tan 
x_1+\tan(x_1-x_2)+\tan(x_1+x_2)+2\tan 2x_1,$}\\ {\scriptsize$(q>2)$} & {\scriptsize$\cot(x_1-x_2)-(q-2)\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\tan(x_1-x_2)+(q-2)\tan x_2+\tan(x_1+x_2)+2\tan2x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0, x_2>x_1,x_2<\frac{\pi}{4})$}\\ \hline {\scriptsize$SO(4)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1+\tan x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$SU(4)/S(U(2)\times U(2))$} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$S(U(j+1)\times U(q-j+1))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-2(j-1)\cot x_1-2\cot2x_1 +2(q-j-1)\tan x_1$}\\ {\scriptsize$SU(q+2)/S(U(2)\times U(q))$} & {\scriptsize$+2\tan(x_1-x_2)+2\tan(x_1+x_2),$}\\ {\scriptsize$(q>2)$} & {\scriptsize$-2(q-j-1)\cot x_2-2\cot2x_2-2\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+2(j-1)\tan x_2+2\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$S(U(2)\times U(2))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1+\tan x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$SU(4)/S(U(2)\times U(2))$} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize(non-isotropy gr. 
act.)} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$SO(j+1)\times SO(q-j+1)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-(j-1)\cot x_1+(q-j-1)\tan x_1$}\\ {\scriptsize$SO(q+2)/SO(2)\times SO(q)$} & {\scriptsize$+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$(q>2)$} & {\scriptsize$-(q-j-1)\cot x_2-\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+(j-1)\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3.}} \newpage $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$SO(4)\times SO(4)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot x_1-\cot(x_1-x_2)$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize$-2\cot(x_1+x_2)+2\tan x_1,$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-2\cot x_2$}\\ {\scriptsize} & {\scriptsize$-2\cot(x_1+x_2)+2\tan x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_2<x_1,x_1+x_2<\pi)$}\\ \hline {\scriptsize$\rho_3(SO(4)\times SO(4))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot x_1+2\tan x_1$}\\ {\scriptsize$SO(8)/U(4)$} & {\scriptsize$+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-2\cot x_2-\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+2\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2<0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_4(U(4))\curvearrowright SO(8)/U(4)$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1+3\tan x_1$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+3\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$SO(4)\times SO(6)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=2(-\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ 
{\scriptsize$SO(10)/U(5)$} & {\scriptsize$-\cot2x_1+\tan x_1$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\cot2x_2-\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$SO(5)\times SO(5)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=2(-\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ {\scriptsize$SO(10)/U(5)$} & {\scriptsize$+\tan x_1+\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+x_2)+\tan2x_1,$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\tan(x_1-x_2)+\tan x_2$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+x_2)+\tan2x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_2>x_1,x_2<\frac{\pi}{4})$}\\ \hline {\scriptsize$\rho_5(U(5))\curvearrowright SO(10)/U(5)$} & {\scriptsize$X_{(x_1,x_2)}=2(-2\cot x_1-\cot2x_1$}\\ {\scriptsize} & {\scriptsize$+2\tan(x_1-x_2)+2\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-\cot2 x_2-2\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+2\tan(x_1+x_2)+2\tan x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3(continued).}} \newpage $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$SO(2)^2\times SO(3)^2\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize$+\tan x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\tan(x_1-x_2)+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline 
{\scriptsize$\rho_6(SO(5))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=2(-\cot x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$(SO(5)\times SO(5))/SO(5)$} & {\scriptsize$-\tan(x_1-x_2)+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1-\frac{\pi}{2},x_1+x_2< \frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_7(U(2))\curvearrowright Sp(2)/U(2)$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$U(q+2)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-(2q-4)\cot x_1-2\cot(x_1-x_2) -2\cot(x_1+x_2)$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$-2\cot2x_1+(2q-4)\tan x_1+2\tan(x_1-x_2)$}\\ {\scriptsize$(q>2)$} & {\scriptsize$+2\tan(x_1+x_2)+4\tan 2x_1,$}\\ {\scriptsize} & {\scriptsize$2\cot(x_1-x_2)-(2q-4)\cot x_2-2\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-2\cot2x_2-2\tan(x_1-x_2)+(2q-4)\tan x_2$}\\ {\scriptsize} & {\scriptsize$+2\tan(x_1+x_2)+4\tan2x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_2<\frac{\pi}{4})$}\\ \hline {\scriptsize$SU(4)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$+2\tan x_1+2\tan(x_1-x_2)+3\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-2\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-2\tan(x_1-x_2)+\tan x_2+3\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$U(4)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot x_1-2\cot(x_1-x_2)-2\cot(x_1+x_2)$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$+2\tan x_1+\tan(x_1-x_2)+2\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$2\cot(x_1-x_2)-2\cot x_2-2\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\tan(x_1-x_2)+\tan
x_2+2\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$Sp(j+1)\times Sp(q-j+1)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-4(j-1)\cot x_1-6\cot2x_1$}\\ {\scriptsize$Sp(q+2)/Sp(2)\times Sp(q)$} & {\scriptsize$+4(q-j-1)\tan x_1+4\tan(x_1-x_2)+4\tan(x_1+x_2),$}\\ {\scriptsize$(q>2)$} & {\scriptsize$-4(q-j-1)\cot x_2-6\cot2x_2-4\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+4(j-1)\tan x_2+4\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3(continued${}^2$).}} \newpage $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$Sp(2)\times Sp(2)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-3\cot x_1+\tan x_1$}\\ {\scriptsize$Sp(4)/Sp(2)\times Sp(2)$} & {\scriptsize$+3\tan(x_1-x_2)+4\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-3\cot x_2-3\tan(x_1-x_2)+4\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$SU(2)^2\cdot SO(2)^2\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$+\tan x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-\tan(x_1-x_2)+\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_8(Sp(2))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=2(-\cot x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_9(Sp(2))\curvearrowright$} & 
{\scriptsize$X_{(x_1,x_2)}=2(-\cot x_1+\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize$(Sp(2)\times Sp(2))/Sp(2)$} & {\scriptsize$-\cot x_2-\tan(x_1-x_2)+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$Sp(4)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-4\cot x_1-3\cot(x_1-x_2)-4\cot(x_1+x_2)$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$+4\tan x_1+3\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$3\cot(x_1-x_2)-3\cot x_2-4\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-3\tan(x_1-x_2)+6\tan x_2+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$SU(6)\cdot SU(2)\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-4\cot x_1-2\cot(x_1-x_2)-2\cot(x_1+x_2)$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$-2\cot 2x_1+4\tan x_1$}\\ {\scriptsize} & {\scriptsize$+4\tan(x_1-x_2)+3\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$2\cot(x_1-x_2)-4\cot x_2-2\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-2\cot 2x_2-4\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+5\tan x_2+3\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_{10}(SU(6)\cdot SU(2))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-4\cot x_1-4\cot(x_1-x_2)-4\cot(x_1+x_2)$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$-2\cot 2x_1+4\tan x_1$}\\ {\scriptsize} & {\scriptsize$+2\tan(x_1-x_2)+\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$4\cot(x_1-x_2)-4\cot x_2-4\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-2\cot 2x_2-2\tan(x_1-x_2)+5\tan x_2$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>x_1,x_1+x_2<\frac{\pi}{2})$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3(continued${}^3$).}} \newpage $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & 
{\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$\rho_{11}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-8\cot x_1-2\cot2x_1$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$+6\tan(x_1-x_2)+5\tan(x_1+x_2),$}\\ {\scriptsize} & {\scriptsize$-2\cot2x_2-6\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+9\tan x_2+5\tan(x_1+x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,x_1+x_2<\frac{\pi}{2})$}\\ \hline {\scriptsize$\rho_{12}(Spin(10)\cdot U(1))\curvearrowright$} & {\scriptsize$X_{(x_1,x_2)}=(-6\cot x_1-\cot(x_1-x_2)-\cot(x_1+x_2)$}\\ {\scriptsize$E_6/Spin(10)\cdot U(1)$} & {\scriptsize$+2\tan x_1+5\tan(x_1-x_2)$}\\ {\scriptsize} & {\scriptsize$+4\tan(x_1+x_2)+2\tan2x_1,$}\\ {\scriptsize} & {\scriptsize$\cot(x_1-x_2)-6\cot x_2-\cot(x_1+x_2)$}\\ {\scriptsize} & {\scriptsize$-5\tan(x_1-x_2)+3\tan x_2$}\\ {\scriptsize} & {\scriptsize$+4\tan(x_1+x_2)+2\tan2x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2<\frac{\pi}{4},x_2>x_1)$}\\ \hline {\scriptsize$Sp(4)\curvearrowright E_6/F_4$} & {\scriptsize$X_{(x_1,x_2)}=(-8\cot2x_1-4\cot(x_1-\sqrt3x_2) -4\cot(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+8\tan2x_1+4\tan(x_1-\sqrt3x_2)+4\tan(x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$4\sqrt3\cot(x_1-\sqrt3x_2)-4\sqrt3\cot(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$-4\sqrt3\tan(x_1-\sqrt3x_2)+4\sqrt3\tan(x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,0<x_1<\frac{\pi}{4},\frac{x_1}{\sqrt 3}<x_2< \frac{x_1}{\sqrt 3}+\frac{\pi}{2\sqrt3},-\frac{x_1}{\sqrt 3}<x_2< -\frac{x_1}{\sqrt 3}+\frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$\rho_{13}(F_4)\curvearrowright E_6/F_4$} & {\scriptsize$X_{(x_1,x_2)}=(-16\cot2x_1+8\tan(x_1-\sqrt3x_2) +8\tan(x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$-8\sqrt3\tan(x_1-\sqrt3x_2) +8\sqrt3\tan(x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_1-\sqrt3x_2<\frac{\pi}{2}, x_1+\sqrt3x_2<\frac{\pi}{2})$}\\ \hline
{\scriptsize$\rho_{14}(SO(4))\curvearrowright G_2/SO(4)$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot2x_1+3\tan(3x_1-\sqrt3x_2) +\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+\sqrt3x_2)+3\tan(3x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$-2\sqrt3\cot2\sqrt3x_2-\sqrt3\tan(3x_1-\sqrt3x_2) -\sqrt3\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\sqrt3\tan(x_1+\sqrt3x_2)+\sqrt3\tan(3x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,\sqrt3x_1+x_2< \frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$\rho_{15}(SO(4))\curvearrowright G_2/SO(4)$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot2x_1+3\tan(3x_1-\sqrt3x_2) +\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+\sqrt3x_2)+3\tan(3x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$-2\sqrt3\cot2\sqrt3x_2-\sqrt3\tan(3x_1-\sqrt3x_2) -\sqrt3\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\sqrt3\tan(x_1+\sqrt3x_2)+\sqrt3\tan(3x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,\sqrt3x_1+x_2< \frac{\pi}{2\sqrt3})$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3(continued${}^4$).}} \newpage $$\begin{tabular}{|c|c|} \hline {\scriptsize$H\curvearrowright G/K$} & {\scriptsize$X\qquad(\widetilde C)$}\\ \hline {\scriptsize$\rho_{16}(G_2)\curvearrowright(G_2\times G_2)/G_2$} & {\scriptsize$X_{(x_1,x_2)}=2(-2\cot2x_1+3\tan(3x_1-\sqrt3x_2) +\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+\sqrt3x_2)+3\tan(3x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$-2\sqrt3\cot2\sqrt3x_2-\sqrt3\tan(3x_1-\sqrt3x_2) -\sqrt3\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\sqrt3\tan(x_1+\sqrt3x_2)+\sqrt3\tan(3x_1+\sqrt3x_2))$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2>0,\sqrt3x_1+x_2< \frac{\pi}{2\sqrt3})$}\\ \hline {\scriptsize$SU(2)^4\curvearrowright(G_2\times G_2)/G_2$} & {\scriptsize$X_{(x_1,x_2)}=(-2\cot2x_1-3\cot(3x_1-\sqrt3x_2) -\cot(x_1-\sqrt3x_2)$}\\ {\scriptsize} & 
{\scriptsize$-\cot(x_1+\sqrt3x_2)-3\cot(3x_1+\sqrt3x_2)+2\tan2x_1$}\\ {\scriptsize} & {\scriptsize$+3\tan(3x_1-\sqrt3x_2)+\tan(x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\tan(x_1+\sqrt3x_2)+3\tan(3x_1+\sqrt3x_2),$}\\ {\scriptsize} & {\scriptsize$\sqrt3\cot(3x_1-\sqrt3x_2)+\sqrt3\cot(x_1-\sqrt3x_2) -\sqrt3\cot(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$-\sqrt3\cot(3x_1+\sqrt3x_2)-2\sqrt3\cot2\sqrt3 x_2 -\sqrt3\tan(3x_1-\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$-\sqrt3\tan(x_1-\sqrt3x_2)+\sqrt3\tan(x_1+\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$+\sqrt3\tan(3x_1+\sqrt3x_2)+2\sqrt3\tan2\sqrt3x_2)$}\\ {\scriptsize} & {\scriptsize$(\widetilde C\,:\,x_1>0,x_2<\frac{\pi}{4\sqrt3},x_2>\sqrt3x_1)$}\\ \hline \end{tabular}$$ \vspace{0.3truecm} \centerline{{\bf Table 3(continued${}^5$).}} \vspace{1truecm} \centerline{{\bf References}} \vspace{0.5truecm} {\small \noindent [BV] J. Berndt and L. Vanhecke, Curvature-adapted submanifolds, Nihonkai Math. J. {\bf 3} (1992) 177-185. \noindent [BCO] J. Berndt, S. Console and C. Olmos, Submanifolds and holonomy, Research Notes in Mathematics 434, Chapman $\&$ Hall/CRC Press, Boca Raton, London, New York, Washington, 2003. \noindent [B] N. Bourbaki, Groupes et alg\`ebres de Lie, Chapitres 4, 5 et 6, Hermann, 1968. \noindent [Ch] U. Christ, Homogeneity of equifocal submanifolds, J. Differential Geometry {\bf 62} (2002) 1-15. \noindent [Co] L. Conlon, Remarks on commuting involutions, Proc. Amer. Math. Soc. {\bf 22} (1969) 255-257. \noindent [GT] O. Goertsches and G. Thorbergsson, On the geometry of the orbits of Hermann actions, Geom. Dedicata {\bf 129} (2007) 101-118. \noindent [He] S. Helgason, Differential geometry, Lie groups and symmetric spaces, Academic Press, New York, 1978. \noindent [Ha] R. S. Hamilton, Three-manifolds with positive Ricci curvature, J. Differential Geom. {\bf 17} (1982) 255-306. \noindent [Hu1] G. Huisken, Flow by mean curvature of convex surfaces into spheres, J. Differential Geom.
{\bf 20} (1984) 237-266. \noindent [Hu2] G. Huisken, Contracting convex hypersurfaces in Riemannian manifolds by their mean curvature, Invent. math. {\bf 84} (1986) 463-480. \noindent [HLO] E. Heintze, X. Liu and C. Olmos, Isoparametric submanifolds and a Chevalley-type restriction theorem, Integrable systems, geometry, and topology, 151-190, AMS/IP Stud. Adv. Math. 36, Amer. Math. Soc., Providence, RI, 2006. \noindent [HOT] E. Heintze, C. Olmos and G. Thorbergsson, Submanifolds with constant principal curvatures and normal holonomy groups, Intern. J. Math. {\bf 2} (1991) 167-175. \noindent [HPTT] E. Heintze, R.S. Palais, C.L. Terng and G. Thorbergsson, Hyperpolar actions on symmetric spaces, Geometry, topology and physics for Raoul Bott (ed. S. T. Yau), Conf. Proc. Lecture Notes Geom. Topology {\bf 4}, Internat. Press, Cambridge, MA, 1995 pp. 214-245. \noindent [HTST] D. Hirohashi, H. Tasaki, H. Song and R. Takagi, Minimal orbits of the isotropy groups of symmetric spaces of compact type, Differential Geom. and its Appl. {\bf 13} (2000) 167-177. \noindent [Koi1] N. Koike, On proper Fredholm submanifolds in a Hilbert space arising from submanifolds in a symmetric space, Japan. J. Math. {\bf 28} (2002) 61-80. \noindent [Koi2] N. Koike, Actions of Hermann type and proper complex equifocal submanifolds, Osaka J. Math. {\bf 42} (2005) 599-611. \noindent [Kol] A. Kollross, A Classification of hyperpolar and cohomogeneity one actions, Trans. Amer. Math. Soc. {\bf 354} (2001) 571-612. \noindent [LT] X. Liu and C. L. Terng, The mean curvature flow for isoparametric submanifolds, arXiv:math.DG/0706.3550v1. \noindent [P] R.S. Palais, Morse theory on Hilbert manifolds, Topology {\bf 2} (1963) 299-340. \noindent [PT] R.S. Palais and C.L. Terng, Critical point theory and submanifold geometry, Lecture Notes in Math. {\bf 1353}, Springer, Berlin, 1988. \noindent [S] G. Schwarz, Smooth functions invariant under the action of a compact Lie group, Topology {\bf 14} (1975) 63-68. 
\noindent [T1] C.L. Terng, Isoparametric submanifolds and their Coxeter groups, J. Differential Geometry {\bf 21} (1985) 79-107. \noindent [T2] C.L. Terng, Proper Fredholm submanifolds of Hilbert space, J. Differential Geometry {\bf 29} (1989) 9-47. \noindent [TT] C.L. Terng and G. Thorbergsson, Submanifold geometry in symmetric spaces, J. Differential Geometry {\bf 42} (1995) 665-718. \noindent [Z] X. P. Zhu, Lectures on mean curvature flows, Studies in Advanced Math., AMS/IP, 2002. \vspace{0.5truecm} {\small \rightline{Department of Mathematics, Faculty of Science} \rightline{Tokyo University of Science, 1-3 Kagurazaka} \rightline{Shinjuku-ku, Tokyo 162-8601 Japan} \rightline{(koike@ma.kagu.tus.ac.jp)} } \end{document} 
\section{Introduction} The origin of energetic particles accelerated in solar events is still an open question. While flares and shocks driven by coronal mass ejections (CMEs) are believed to be the two sources of solar energetic particle (SEP) acceleration in impulsive and gradual SEP events, respectively \citep[e.g.][]{Reames1999}, it is not clear what the exact flare-related acceleration mechanism in impulsive SEP events is, or where the CME-driven shocks most efficiently accelerate particles and when the particles are released in gradual SEP events. Electron release has been observed to coincide temporally with type III radio bursts at the Sun, with the electrons traveling along open field lines into interplanetary space \citep[see][]{Lin1985}. Using observations of the Three-dimensional Plasma and Energetic Particles instrument \citep[3DP;][]{Lin1995} on the Wind spacecraft, \citet{Wang2006} studied three electron events and found two distinct injections of electrons: the injection of low-energy electrons at energies $\sim$ 0.4 to 6--9 keV began 9.1 min before the type III radio burst, while that of $\sim$ 13 to 300 keV electrons started 7.6 min after the type III burst. Delays of 10 min up to half an hour between the electron release time at the Sun and solar electromagnetic (EM) emissions have been reported by other works \citep[e.g.][]{Cliver82,Kallenrode91,Krucker1999, Haggerty02}. \citet{Krucker1999} showed evidence that some electron events are not related to type III bursts. They found that the electron events appeared to be related to the passage of large-scale coronal transient waves, also called EIT waves or Extreme Ultraviolet (EUV) waves \citep{Thompson1998, Thompson2000}, over the footpoint of the field line connected to the spacecraft. Although the nature of EUV waves is still largely debated, past studies using low-cadence ($>$12 minutes) ultraviolet images showed that EUV waves are correlated with CMEs rather than flares \citep{Plunkett1998, Cliver1999}. 
Based on more recent three-dimensional stereoscopic analyses, the EUV waves are generally believed to be the imprint of the CME driven shock on solar surface \citep[e.g.][]{Veronig2008, Patsourakos2009}. Proton release is probably more complicated than electron release. \citet{Krucker2000} studied the timing of proton onsets in the energy range from 30 keV to 6 MeV. They found that the release of the protons appears to be energy-dependent. The most energetic protons are possibly released simultaneously with the electrons while lower-energy protons are released $\sim$ 0.5 to 2 hrs later than electrons. They also found that protons with energies between 0.03 and 6 MeV are released high in the corona, around 1--10 Rs above the electrons. Their results are consistent with studies by \citet{Kahler1994} and \citet{Gopalswamy2012} on the CME heights at the time of SEP release. \citet{Kahler1994} analyzed $>$ 10 MeV proton events and found that the peak of the intensity profile for $>$ 10 MeV protons occurs when the associated CME reaches heights of 5--15 Rs. \citet{Gopalswamy2012} examined the onset times and release heights of energetic particles using the ground-level enhancement (GLE) events. They found an earlier release time and a lower release height of CMEs for this highly energetic subset of events. Although both SEP solar particle release (SPR) times and EM onsets have been discussed at length in the past, comparison between electron release times and proton release times has been discussed only in a few papers \citep[e.g.][]{Cliver82,Haggerty09,Koulou15,Kahler2003,Posner07}. In \citet{ Posner07}'s study, the author adopted the prevailing assumption of simultaneous release of electrons and protons, but he also pointed out that ``release of protons before electrons (and vice versa) is possible [E. Roelof, D. Haggerty, personal communication, 2006]''. 
Using a group of 32 historic GLE events, \citet{Cliver82} found delays of 10 minutes between 100 keV and 1 MeV electron SPRs and $\le$ 5 minute delays between 2 GeV proton and 1 MeV electron SPRs. Delays of 10 to 50 minutes in the proton SPRs relative to metric type II onsets for well-connected events were found in the smaller GLE events. \citet{Kahler2003} compared the onsets of relativistic electrons and protons of GLEs from solar cycle 23. They found that in half of the GLE events the relativistic proton injection preceded that of the electrons; however, the low-intensity GLEs tend to have a later proton injection time. Recently, \citet{Koulou15} compared the proton and electron release as inferred from VDA based on Wind/3DP and ERNE data, and found a 7-min average delay of near-relativistic electrons with respect to deka-MeV protons. \citet{Haggerty09} studied 19 electron beam events using EPAM 38--315 keV data, and found that for 11 of the 19 events the arrival of 50--100 MeV protons followed that of the electrons within $\sim$ 3 min. On the other hand, the remaining 8 events show broad 5--25 minute delays of the protons relative to the electron injections. In this paper, we study large SEP events with a peak $>$ 10 MeV proton flux above 10 $\rm cm^{-2}\,sr^{-1}\,s^{-1}$ as observed by GOES from October 2006 (the launch of the Solar and Terrestrial Relations Observatory, STEREO) to March 2014. The proton SPR times at various energies from 10 MeV to 131 MeV are investigated and compared to the release times of 0.25 MeV--10.4 MeV electrons and solar EM onsets. The exact time when energetic particles are first released at the Sun is crucial to understanding the particle acceleration and where it takes place. This is the first systematic study and comparison between electron and proton SPRs for large SEP events in the new STEREO era. 
The paper aims to address the following key issues: 1) Are protons and electrons accelerated by the same source and released simultaneously at the Sun? 2) What is the acceleration time needed for protons and electrons to reach high energies, and are the acceleration times energy dependent? \section{Observations and Data Analysis} \subsection{Event Selection} From October 2006 to March 2014, GOES has observed 35 large SEP events with peak intensity greater than 10 $\rm cm^{-2}\,sr^{-1}\,s^{-1}$ in the $>$ 10 MeV proton channel (\url{http://cdaw.gsfc.nasa.gov/CME_list/sepe/}). In this paper we selected 28 of the 35 SEP events, excluding 6 events where the Energetic and Relativistic Nuclei and Electron instrument (ERNE) measurements have a data gap and 1 event where only a mild flux enhancement ($< 10\%$) was seen above the background level. When an SEP event has multiple increases of flux, we define the earliest rise as its onset time and count it as one event. We divide these 28 events into two groups: SOHO (the Solar and Heliospheric Observatory) SEP events and STEREO SEP events. An SEP event is classified as a SOHO SEP or a STEREO SEP event according to which spacecraft has the smallest connection angle (CA) between the SEP solar source location and its magnetic footpoint. We compute the longitude of the connection footpoint by assuming a Parker spiral field: \begin{equation} \phi_0 = D\Omega/V_{slw} + \phi , \end{equation} where $\phi$ and $\phi_0$ are the spacecraft longitude and its solar connection footpoint longitude, $D$ is the distance to the Sun, $V_{slw}$ is the average in-situ solar wind speed observed by the spacecraft, and $\Omega$ is the solar rotation rate based on a sidereal rotation period of 24.47 days. CA is then given by: \begin{equation} CA = \phi_0 -\phi_{src} , \end{equation} where $\phi_{src}$ is the solar source longitude of SEPs. 
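As a concrete illustration, equations (1) and (2) can be evaluated as follows. This is a minimal sketch (the function and variable names are ours, not from any analysis code of this work), assuming the 24.47-day sidereal rotation period quoted above:

```python
import math

OMEGA = 2.0 * math.pi / (24.47 * 86400.0)  # solar sidereal rotation rate [rad/s]
AU_KM = 1.495978707e8                      # 1 AU [km]

def footpoint_longitude(phi_sc_deg, d_au, v_sw_kms):
    """Eq. (1): longitude of the Parker-spiral connection footpoint [deg]."""
    dphi = OMEGA * (d_au * AU_KM) / v_sw_kms  # solar rotation during wind transit [rad]
    return phi_sc_deg + math.degrees(dphi)

def connection_angle(phi_sc_deg, d_au, v_sw_kms, phi_src_deg):
    """Eq. (2): CA = phi_0 - phi_src [deg]."""
    return footpoint_longitude(phi_sc_deg, d_au, v_sw_kms) - phi_src_deg

# Example: a spacecraft at 1 AU in a 400 km/s wind connects ~64 deg west
# of its own longitude, so a source at W50 gives a CA of ~ +14 deg.
ca = connection_angle(0.0, 1.0, 400.0, 50.0)
```

A negative (positive) CA then corresponds to a source east (west) of the nominal footpoint.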
It should be noted that an average spread of $\sim 30^{\circ}$ between active regions and source surface connection footpoints has been reported in previous statistical studies \citep[e.g.][]{Nitta06,Wiedenbeck2013}, and an uncertainty of CA angles as large as $20^\circ$ was found using various methods in \citet{Lario14}. The solar source locations were identified as the locations of associated flares or eruptive prominences in movies of EUV images by the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} on the \emph{Solar Dynamics Observatory} (SDO) spacecraft and by the EUV Imager \citep[EUVI;][]{Wuelser2004,Howard2008} on the STEREO spacecraft. Additional information on the events has been extracted from the GOES flare list (\url{http://www.lmsal.com/solarsoft/latest_events/}), the CDAW CME catalog (\url{http://cdaw.gsfc.nasa.gov/CME_list}), the type II radio burst lists compiled by the Wind and STEREO data centers (\url{http://ssed.gsfc.nasa.gov/waves/data_products.html}), and the type III radio burst data (\url{http://cdaw.gsfc.nasa.gov/images/wind/waves/}). In-situ observations including the Electron Proton and Helium Instrument \citep[EPHIN;][]{Mellin1995} and ERNE \citep{Torsti1995} on the SOHO spacecraft, and the High Energy Telescope \citep[HET;][]{von2008}, Low Energy Telescope \citep[LET;][]{Mewaldt2008}, and the Solar Electron Proton Telescope \citep[SEPT;][]{Mellin2008} on the STEREO spacecraft are used for the determination of the SEP onsets. SOHO/ERNE covers the proton energy range from 1.58 to 131 MeV using two different sensors. The Low-Energy Detector (LED) operates in the range 1.58 MeV to 12.7 MeV and the High-Energy Detector (HED) from 13.8 MeV to 131 MeV. We determined proton onset times of SOHO SEP events using SOHO/ERNE 1-minute averages in proton energy channels from 13.8 MeV to 131 MeV. 
We used only HED channels since the small geometric factor at the lowest energies yields a relatively high intensity at the 1-count level, which makes it difficult to determine the event onset time accurately \citep{Vainio13}. We used SOHO/EPHIN 1-minute averages in energy channels from 0.25 to 10.4 MeV to determine electron onset times. When EPHIN data were not available, we used instead 1-minute averages of the 230--392 keV electron data from Wind/3DP or the 175--315 keV electron data from the Electron, Proton, and Alpha Monitor \citep[EPAM;][]{Gold1998} on the \emph{Advanced Composition Explorer} (ACE). For STEREO SEP events, we used STEREO/SEPT 1-minute averages (sun-direction) in the electron energy channel 0.255--0.295 MeV, STEREO/HET 1-minute averages in the electron channel 2.8--4 MeV and proton energy channels from 40 to 100 MeV, and LET 1-minute averages in the proton energy channel 10--12 MeV to determine the onset times of electrons and protons. We did not use HET data in the proton channel 13.6--15.1 MeV due to its large data gaps in many events. Instead, LET standard data in the proton energy channel 10--12 MeV were used. To estimate the scattering effect of the first-arriving particles, we compute the anisotropy of the electrons using Wind/3DP data for the SOHO events and SEPT data for the STEREO events. The anisotropy of the protons was not considered in this study due to the lack of anisotropy data from ERNE, EPHIN, and HET; instead, we use the VDA to evaluate their scattering effects. Furthermore, note that SOHO rotates $180^\circ$ every three months, and the pointing direction of ERNE and EPHIN then changes from sunward along the nominal Parker spiral direction to perpendicular to it. Thus it is likely that both ERNE and EPHIN will miss the first-arriving particles when SOHO's roll angle is $180^\circ$. We estimate the uncertainty by comparing the EPHIN and ERNE data with the available Wind/3DP data or ACE/EPAM data. 
Finally, SEP intensity measurements can suffer from contamination. Possible causes of contamination include 1) particle misidentification, e.g. the presence of electrons in the proton channels, and 2) missing particle energy, e.g. high-energy protons (electrons) deposit only a fraction of their energy in the detector and thus are counted as low-energy protons (electrons). \citet{Posner07} analyzed extensively EPHIN electron and proton measurements and found that some contamination (see also \citet{delPeral01}) and instrument dead-time problems exist during the main phase of an SEP event, but that the onset time determination using EPHIN electron and proton measurements can be done reliably. \citet{Haggerty02, Haggerty03} used simulations to examine contamination in the ACE/EPAM electron channels and concluded that while the effect can be significant in the lowest-energy channels (E'1 and E'2), it is negligible in the highest two channels E'3 (102--175 keV) and E'4 (175--312 keV). Another contamination source is X-rays, which increase the COSTEP front detector count rate \citep{Posner07, Klassen05}. In this study, we have excluded contamination that may be caused by X-rays or by particle energy misidentification. So far, no simulations have been conducted to evaluate contamination in ERNE data (Valtonen, personal communication) or reported for STEREO HET, LET and SEPT data. Therefore, we used the SEP intensity data from their instrument websites as provided, with no additional contamination corrections. \subsection{SEP Onset Time Determination} We used an intersection slope method to determine the onset times of the first-arriving particles. We first make linear fits to the background and to the rising logarithmic flux of SEPs, respectively, and then take the intersection of the two lines as the onset time. 
The uncertainty of the intersection slope method was estimated by the earliest and latest possible onset times, which were determined by the intersection with the background level, $\pm 3\sigma/(slope-error_{slope})$, similar to the method used in \citet{Miteva14}. Figure~\ref{Fig_2slp} shows the procedure for (a) the 2011 August 8 SEP event and (b) the 2011 August 4 SEP event. The SEP event in (a) has a rapid increase of flux, allowing for an accurate determination of the onset time, while the SEP intensity in (b) has a slow rise, which causes a larger uncertainty, as indicated by the gray rectangle in the figure. The obtained uncertainties of the intersection slope method range from $\pm$ 2 min to 9 min for the 28 SEP events. A similar uncertainty of ten minutes has been reported in other studies using alternative methods \citep[e.g.][]{Huttunen05, Vainio13}. In general, the uncertainty of the intersection slope method itself (caused by the background flux fluctuation) is relatively small when compared with the high-background errors. It is well known that if the background flux of an SEP event is too high, it will mask the real onset time of SEPs \citep[e.g.][]{Lintunen2004, Laitinen2010}. To illustrate the background effect on the onset time, we over-plotted two elevated onset levels (ratio of background level to peak flux) of $\sim 5\%$ (orange dashed line) and $\sim 8\%$ (blue dashed line) in Figure~\ref{Fig_2slp} (b). They introduce errors of $\sim$ 23 min and $\sim$ 32 min, respectively. The error caused by the high background can be estimated by: \begin{equation} ERR_{bglv} = (Int_{ont1} - Int_{ont0})/slope_{fit} \end{equation} where $Int_{ont1}$ and $Int_{ont0}$ are the logarithms of the SEP intensity at onset level 1 and onset level 0, and $slope_{fit}$ is the slope of the linear fit to the logarithm of the SEP intensity. 
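The intersection slope procedure described above can be sketched as follows on synthetic data (numpy assumed). The 3$\sigma$ selection of rising points and the single fit window are our simplifications of the procedure, not the exact implementation used for Table 1:

```python
import numpy as np

def onset_time(t, flux, bg_end):
    """Intersection slope method: onset = crossing of the rising-flux
    linear fit (in log intensity) with the pre-event background level.

    t      : times [min], 1-min cadence
    flux   : particle intensity
    bg_end : index where the pre-event background window ends
    """
    logf = np.log10(flux)
    bg = logf[:bg_end].mean()            # background level (log intensity)
    sigma = logf[:bg_end].std()          # > 0 for real, noisy backgrounds
    rising = logf > bg + 3.0 * sigma     # points clearly above background
    slope, intercept = np.polyfit(t[rising], logf[rising], 1)
    return (bg - intercept) / slope      # time where the fit meets bg

# Synthetic event: flat background, exponential rise starting at t = 60 min
t = np.arange(0.0, 120.0)
flux = 1.0 + np.where(t > 60.0, 10.0 ** (0.1 * (t - 60.0)), 0.0)
t_onset = onset_time(t, flux, bg_end=50)   # close to the true 60 min
```

Shifting `bg` upward in this sketch reproduces the background-level error of equation (3): a higher effective onset level intersects the rising fit later.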
In this work, we used equation (3) to correct the background effect by choosing a normalized onset level of $1\%$ of the maximum flux in all data sets, where the maximum flux is defined as the SEP prompt peak within 6 hours of the onset. Also, note that the data time averaging sets a lower limit on the onset uncertainty. For example, in Figure~\ref{Fig_2slp} (b) we applied a 3-point running smoothing average to the 1-minute intensity data to make the early rise of the event easier to see, which sets a lower limit of 3 min for this case. The uncertainties listed in Table 1 have the time averaging as a lower limit and the background and contamination errors as an upper limit. Note that the first small flux increase in Figure~\ref{Fig_2slp} (a) was caused by X-ray contamination; we have included an uncertainty of $\sim$ 12 min as a lower limit in Table 1. \newcommand{\txw}{\textwidth} \begin{figure} \begin{center} \includegraphics[width=0.92\txw, height = 0.36\txw ]{Fig1_merge.eps} \end{center} \caption{1-minute averaged intensity of near-relativistic electrons (2.64--10.4 MeV) from SOHO/EPHIN observations for: a) the 2011 August 9 SEP and b) the 2011 August 4 SEP. Horizontal lines give the background level (solid), the average intensity during a pre-event time interval, and the background $\pm$ 3$\sigma$ levels (dashed--dotted). The inclined line is the linear fit to the logarithm of the SEP intensity during the early rise of the event. The onset time is the time of intersection of this line with the background; the gray rectangle indicates the uncertainty. 
The orange and blue dashed lines in (b) are two assumed background levels, illustrating how elevated background levels affect SEP onset times.}\label{Fig_2slp} \end{figure} \subsection{Solar Particle Release Time Determination} To infer the electron and proton release times at the Sun, time-shifting analysis (TSA) and velocity dispersion analysis (VDA) are the two methods commonly used in past works \citep[e.g.][]{Krucker1999,Tylka2003,Malandraki12,Vainio13}. The TSA computes the SPR time by shifting the onset time by the particle travel time along the nominal Parker spiral field line: $t_{SPR} = t_{onset} - l/v$, where $l$ is the nominal path length from the Sun to the spacecraft and $v$ is the particle speed. The nominal path length is computed using the Parker spiral field-line model with the average solar wind speed measured in-situ at the observing spacecraft. The result of TSA represents the latest possible release of SEP particles. It is a good approximation if the SEP particles travel nearly scatter-free at nearly zero pitch angle along the magnetic field line. For particles which experience strong scattering, the TSA method can introduce large errors, especially for protons. The error of the TSA SPRs is given by: \begin{equation} ERR_{tsa} = (l_{sct}-l_{nom})/v \end{equation} where $l_{sct}$ and $l_{nom}$ are the scattering path length and the nominal path length, and $v$ is the particle speed. Note that the TSA method is a good approximation for near-relativistic electrons, and the TSA error for electrons is relatively small due to their extremely high speeds. For example, considering a path length range of 1.25 AU to 2 AU, the uncertainty of the TSA is given by $dt = (2\,{\rm AU} - 1.25\,{\rm AU})/v$. For 1 MeV and 0.25 MeV electrons and 100 MeV, 50 MeV and 10 MeV protons, the corresponding errors are $\sim$ 6 min, 8 min, 13 min, 19 min and 40 min. 
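The TSA shift and the path-length error of equation (4) follow directly from relativistic kinematics. A minimal sketch (helper names are ours), reproducing the electron and proton error estimates quoted above:

```python
import math

M_E, M_P = 0.511, 938.272   # electron / proton rest energies [MeV]
MIN_PER_AU = 8.33           # light travel time over 1 AU [min]

def beta(e_kin_mev, m0_mev):
    """v/c for a particle of kinetic energy e_kin_mev and rest energy m0_mev."""
    gamma = 1.0 + e_kin_mev / m0_mev
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

def tsa_release(t_onset_min, e_kin_mev, m0_mev, path_au):
    """TSA: release time = onset time minus travel time along path_au [min]."""
    return t_onset_min - MIN_PER_AU * path_au / beta(e_kin_mev, m0_mev)

def tsa_error(e_kin_mev, m0_mev, l_sct=2.0, l_nom=1.25):
    """Eq. (4): error from a scattered path l_sct vs the nominal l_nom [min]."""
    return MIN_PER_AU * (l_sct - l_nom) / beta(e_kin_mev, m0_mev)

err_e1 = tsa_error(0.25, M_E)   # ~8.4 min for 0.25 MeV electrons
err_p0 = tsa_error(10.0, M_P)   # ~43 min for 10 MeV protons (cf. ~40 min above)
```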
The velocity dispersion analysis (VDA) is another method commonly used to estimate the release time of SEPs and their travel path length. The VDA method is based on the assumption that particles at all energies are released simultaneously and travel the same path length \citep{Krucker1999, Tylka2003, Vainio13}. The particle arrival time at 1 AU is given by: \begin{equation} t_{{onset}}(E) = t_0 + 8.33 \frac{{min}}{{AU}}L(E)\beta^{-1}(E) \end{equation} where $t_{{onset}}(E)$ is the onset time in minutes observed at energy $E$, $t_0$ is the release time in minutes at the Sun, $L$ is the path length (AU) travelled by the particles, and $\beta^{-1}(E)= c/v(E)$ is the inverse speed of the particles. If energetic particles travel the same path length and are released at the same time, then a linear dispersion relation can be obtained by plotting particle onset times versus $\beta^{-1}$. The slope and intercept of the linear fit yield the path length and the particle release time at the Sun, respectively. \section{Statistics and Analysis Results} \subsection{Event Catalog} Table~\ref{Table1} summarizes the timing of the 28 selected SEP events and associated solar eruptions. The first and second columns of the table list the SEP event number and date. The numbers 1--17 denote the 17 SOHO SEP events and the numbers S1--S11 denote the 11 STEREO SEP events. The third and fourth columns of the table show the release times of SEP electrons with uncertainties in parentheses. e1 and e2 represent 0.25--0.7 MeV and 2.64--10.4 MeV electrons for SOHO SEP events and 0.255--0.295 MeV and 2.8--4.0 MeV electrons for STEREO SEP events. The fifth to seventh columns of the table show the release times of SEP protons. p2, p1 and p0 represent 80.2--101 MeV, 50.8--67.3 MeV and 13.8--16.9 MeV protons for SOHO SEP events and 60--100 MeV, 40--60 MeV, and 10--12 MeV protons for STEREO SEP events. 
The eighth to thirteenth columns give the onset times of the type III burst, the metric type II burst and the decameter-hectometric (DH) type II burst, the CME speed and source location, and the CME heights at the e1 release times. The fourteenth column denotes the observing spacecraft and the fifteenth is the connection angle of the SEP source to the spacecraft. In this work, we used the TSA method to infer particle release times, and 8.33 minutes have been added to the release times in order to compare directly with the electromagnetic emission onsets. Here the inferred SEP release times indicate when the particles are injected onto the field line connecting to the observer. To avoid large background effects, we set the SPR time as null '---:---' when an onset level is greater than 10\%. \subsection{Time Differences between Electron and Proton Release Times} \begin{figure} \noindent\includegraphics[width=0.9\txw]{Fig0_1a.eps} \caption{ Histograms of time differences for 17 SOHO SEP events between: (a) e2 and e1 SPRs, (b) p2 and e1 SPRs, (c) p1 and e1 SPRs and (d) p0 and e1 SPRs.} \label{SOHO-SEPdt} \end{figure} Figure~\ref{SOHO-SEPdt} shows histograms of time differences, $dt$, for the 17 SOHO SEP events between: (a) e2 and e1 SPRs, (b) p2 and e1 SPRs, (c) p1 and e1 SPRs and (d) p0 and e1 SPRs, where $dt = t_{SPR}(e2,p2,p1,p0) - t_{SPR}(e1)$, i.e., $dt$ is positive when the e2 and proton SPR times are delayed from the e1 SPR times. In panel (a), the e2 release times are found to be systematically later than the e1 release times, with an average delay of 6.8 min. 11 of the 12 events ($\sim$91\%) have $dt < $ 10 min and one event (event 6) has a delay of 14 min. In panel (b), the p2 release times are delayed from the e1 release times by an average of 4.7 min. 10 out of 11 events ($\sim$91\%) have $dt < $ 10 min and one event (event 6) has a delay of 19 min. The p1--e1 SPRs in panel (c) show similar delays as the p2--e1 SPRs, ranging from $-$3 min to 25 min, with an average of 5.2 min. 
For the p0 protons, 7 of 12 events ($\sim$58\%) have $dt < $ 10 min and five SEPs (events 3, 4, 6, 8 and 9) have large delays of $\ge$ 10 min. Among these five SEPs, events 3, 6 and 8 are weak, with small flux increases in e2 and p2. Event 4 is associated with a high-latitude source, and events 4 and 9 show a large proton scattering effect (see Section 3.5). \begin{figure} \noindent\includegraphics[width=0.9\txw]{Fig0_1b.eps} \caption{Histograms of time differences for the 11 STEREO SEP events between: (a) e2 and e1 SPRs, (b) p2 and e1 SPRs, (c) p1 and e1 SPRs and (d) p0 and e1 SPRs.} \label{ST-SEPdt} \end{figure} Figure~\ref{ST-SEPdt} shows histograms of time differences, $dt$, for the 11 STEREO SEP events between: (a) e2 and e1 SPRs, (b) p2 and e1 SPRs, (c) p1 and e1 SPRs and (d) p0 and e1 SPRs. Figure~\ref{ST-SEPdt} displays a similar trend as Figure~\ref{SOHO-SEPdt}. Two events (events S5 and S10) in the e2--e1 SPRs and three events (events S5, S6 and S9) in the p2--e1 SPRs show large delays of 12--28 min. Among these three events, S6 is associated with a low CME speed with a CA of 4$^\circ$, and S5 and S10 have no associated metric type II bursts but do have DH type II bursts, indicating a later shock formation time. For the p0 protons, 8 events present broad 10--41 minute delays relative to the e1 release times, and 3 events (events S1, S4 and S8) have $dt < $ 10 min. Among the 8 events with larger delays, 4 events (S2, S3, S9 and S11) have experienced strong scattering effects (S7 has no data available for the VDA) and 3 events (S5, S6 and S10) have delayed proton release times. 
\begin{figure} \noindent\includegraphics[width=0.9\txw]{Fig0_1a_eng.eps} \caption{ Histograms of time differences for SOHO SEP events between: (a) p2 and p1 SPRs, (b) p2 and p0 SPRs, and (c) p1 and p0 SPRs.} \label{SOHO-eng} \end{figure} \begin{figure} \noindent\includegraphics[width=0.9\txw]{Fig0_1b_eng.eps} \caption{Histograms of time differences for STEREO SEP events between: (a) p2 and p1 SPRs, (b) p2 and p0 SPRs, and (c) p1 and p0 SPRs.} \label{ST-eng} \end{figure} We plot histograms of time differences, $dt$, between: (a) p2 and p1 SPRs, (b) p2 and p0 SPRs and (c) p1 and p0 SPRs in Figure~\ref{SOHO-eng} (SOHO events) and Figure~\ref{ST-eng} (STEREO events). Both Figure~\ref{SOHO-eng} and Figure~\ref{ST-eng} show that the p2 protons have release times similar to those of the p1 protons, with average $dt$ values of $\sim$ 2.5 min and $-$0.2 min. For the p0 protons, there are 7 SOHO SEPs and 5 STEREO SEPs with delays of the p2--p0 SPRs within 5 min; two SOHO events (4 and 9) and four STEREO events (S2, S3, S9 and S11) show proton scattering effects, where p0 appears to be released later than p2 and p1, i.e., $dt$ = p2$-$p0 or p1$-$p0 is negative. \subsection{Time Differences between Electron Release Times and Radio Emission Onset Times} \begin{figure} \noindent\includegraphics[width=0.9\txw]{Fig0_3a.eps} \caption{ Time differences between (a) e1 SPR times and type III onset times, (b) e1 SPR times and metric type II onset times, and (c) e1 SPR times and DH type II onsets.} \label{Tii_SEP} \end{figure} Figure~\ref{Tii_SEP} plots histograms of time delays between (a) e1 SPR times and type III onset times, (b) e1 SPR times and metric type II onset times, and (c) e1 SPR times and DH type II onsets. The e1 release times are found to be delayed by 2--42 min from the type III onset times, and similarly by 3--25 min from the metric type II onset times. There are 5 events (1, 7, 11, 17 and S1) with delays $<$ 5 min and 7 events (4, 6, 12, 13, 16, S3 and S7) with delays $>$ 20 min. 
Most events (59\%, 16 of 27) have delays ranging from 6 to 19 min. Among the former 5 events (1, 7, 11, 17 and S1), 4 are associated with metric type II bursts; the exception, S1, has a DH type II burst detected 5 min later than the e1 SPR time. There are in total 7 events (see Figure~\ref{CA_dt}) with no associated metric type II bursts, but all of the 28 SEP events are associated with DH type II bursts. \begin{figure} \noindent\includegraphics[width=0.7\txw]{Fig0_2a.eps} \caption{ Delays between the e1 release times and type III onsets as a function of CA. Red, green and blue colors mark three groups with delays $dt \le 5$ min, $5 < dt < 20$ min, and $dt \ge 20$ min. Circles denote events lacking associated metric type II bursts.} \label{CA_dt} \end{figure} Figure~\ref{CA_dt} presents the correlation between the CAs and the time differences, $dt$, between the e1 SPR times and the type III onsets. Red, green and blue colors mark three groups with delays $dt \le 5$ min, $5 < dt < 20$ min, and $dt \ge 20$ min. From Figure~\ref{CA_dt}, we can see that there is a poor correlation between CA and $dt$, with a correlation coefficient CC = 0.167, and the data points are widely spread across the whole plot. Event S8 has the second largest CA of $ -54^\circ$ but a relatively small delay of 16 min. This SEP event was associated with an M6.5 flare at N09E12. STB/HET, SOHO/EPHIN and ERNE, and GOES all detected a rapid rise of SEP fluxes, despite relatively large CAs to STB ($61^\circ$) and to SOHO ($70^\circ$). This is one of the longitudinally widespread SEP events (Richardson et al. 2014) and will be studied in our future work. On the other hand, event 4 has a small CA of $2^\circ$ but a large delay; it is associated with an M3.7 flare at active region (AR) 11164 at N31W53. The type III onset, metric type II onset and the 0.25--0.7 MeV electron release time are 19:52 UT, 19:54 UT and 20:21 UT, respectively. 
Although this is a well-connected SEP event to SOHO, there is a large delay of 24 min between the e1 release and the metric type II onset. A likely reason is that although this SEP event is well connected to SOHO in longitude, its large source latitude (N31) and the relatively small CME width kept it poorly connected to the ecliptic plane \citep[cf][]{Gopalswamy2014}. \subsection{SEP Electron Anisotropy} The TSA method assumes that the SEP particles have propagated scatter-free at zero pitch angle along the magnetic field line; large errors may be present in the TSA method for events with strong scattering. To estimate the scattering effect of the first-arriving particles, we compute the electron anisotropy using Wind/3DP data for the SOHO events (we used ACE/EPAM electron data for events 1 and 2 when there was a data gap in Wind/3DP) and SEPT data for the STEREO events. The solid state telescope (SST) Foil pitch angle distributions (SFPD) for Wind 3DP electrons (available at \url{ftp://cdaweb.gsfc.nasa.gov/pub/data/wind/3dp/3dp_sfpd/}) provide a velocity distribution function containing 7 energy bins from $\sim$ 27 keV to 520 keV and 8 pitch angle bins roughly covering pitch angles from 0--180$^\circ$. Note that the covered pitch angles can vary from distribution to distribution, since the automated CDF routine tends to remove all the direct sun/anti-sun directions to avoid X-ray and EUV contamination (see \url{http://cdaweb.gsfc.nasa.gov/misc/NotesW.html#WI_SFPD_3DP}). The SEPT instrument provides 45--400 keV electron measurements. It consists of four identical telescopes which cover four viewing directions: SUN (along the nominal Parker spiral to the Sun), ANTI-SUN (away from the Sun), NORTH and SOUTH. In this Section, the anisotropy of the protons is not computed due to the lack of anisotropy data from ERNE and HET. Instead, we use the VDA to evaluate their scattering effects in Section 3.5. 
The anisotropy of a SEP event is defined as \begin{equation} A=\frac{3\int_{-1}^{+1} I(\mu) \cdot \mu \cdot d\mu}{\int_{-1}^{+1} I(\mu) \cdot d\mu}, \end{equation} where $I(\mu)$ is the intensity at a given pitch-angle direction and $\mu$ is the pitch angle cosine. Omnidirectional intensities were calculated by integrating second-order polynomial fits to the pitch-angle distribution of intensities, using 1-minute averages (12-second for Wind/3DP) of the data. To stabilize the fit during periods of poor pitch-angle coverage, an artificial point was added to the pitch-angle distribution to fill the uncovered range \citep[cf][]{Droge14}. \begin{figure} \noindent\includegraphics[width=0.7\txw]{Fig_aniso.eps} \caption{Anisotropy and intensity time profiles of the SEP event on May 17 2012 observed by Wind/3DP.} \label{aniso} \end{figure} Figure~\ref{aniso} shows Wind/3DP SFPD measurements for the May 17 2012 SEP event, which serves as a good example of a strongly anisotropic event. The upper panel shows the time series of the intensity, color coded as a function of pitch angle bin. The middle panel shows the 65 keV electron 8-bin intensity measured by the SST telescope. The third panel shows the anisotropy computed from the pitch angle distribution measurements. The anisotropy reaches a maximum of 2.23 at 01:44 UT during the onset of this event. In column 2 of Table 2 we list the maximum anisotropy for the 27 SEP events (data are not yet available for the STEREO SEP event on 2014 February 25). The obtained anisotropies range from 0.27 to 2.97 (absolute values), similar to those computed from ACE/EPAM data in \citet{Dresing14}. Six of the 27 SEPs have relatively strong anisotropies with A $>$ 2.0, 16 events have 1.0 $\le$ A $\le$ 2.0, and 5 events have relatively weak anisotropies with A $<$ 1.0.
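The first-order anisotropy defined above can be evaluated by fitting a second-order polynomial to the binned pitch-angle distribution, as described in the text; the two integrals then have closed forms in the fit coefficients. A simplified sketch (our own illustration; bin centers and intensities are hypothetical) is:

```python
import numpy as np

def anisotropy(mu, intensity):
    """First-order anisotropy A = 3*Int(I*mu)/Int(I) over mu in [-1, 1],
    using a second-order polynomial fit to the binned pitch-angle
    distribution, analogous to the procedure described in the text."""
    a, b, c = np.polyfit(mu, intensity, 2)   # I(mu) ~ a*mu^2 + b*mu + c
    numerator = 3.0 * (2.0 * b / 3.0)        # 3 * Int(mu * I(mu) dmu) = 2b
    denominator = 2.0 * a / 3.0 + 2.0 * c    # Int(I(mu) dmu)
    return numerator / denominator

# Eight pitch-angle bin centers and a mildly beamed distribution:
mu_bins = np.linspace(-0.875, 0.875, 8)
intensity = 1.0 + 1.2 * mu_bins              # analytic anisotropy: A = 1.2
A = anisotropy(mu_bins, intensity)
```

The artificial point mentioned in the text would simply be appended to `mu` and `intensity` before the fit when part of the $\mu$ range is uncovered.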
The obtained anisotropies suggest that most electrons with finite pitch angles still experienced some degree of scattering, although the first-arriving electrons with pitch angles $\sim 0^\circ$ generally propagated with less scattering. The uncertainties in e2 and e1 introduced by scattering are 6 min and 8 min, respectively, for a path length of 2 AU. \subsection{The Proton VDA Release Time} In this section, we carry out the VDA to estimate the proton scattering effects and compare the proton VDA release times with the electron release times. The VDA was based on 1 min time resolution ERNE and HET (LET) proton data with energy channels between 10 MeV and 100 MeV. The VDA onset times are determined based on a fixed onset level (see Figure 1) for all energy channels, selected as the minimum background level of all analyzed channels. To avoid high-background effects or errors introduced by background variation, we have excluded channels with background levels $>$ 10\% and channels with a slowly rising background (see details in Figures 9 and 10). In addition, to avoid energy-dependent scattering effects, we carried out the VDA using either high ($\sim$ 50--100 MeV) or low ($\sim$ 10--50 MeV) energy channels only, depending on data availability. The energy range used in the analysis is listed in column 8 of Table 2. \begin{figure}[t!] \mbox{ \includegraphics[width = 0.98\txw,height = 0.90\txw ]{Fig9_merge.eps} }\par \caption{SEP intensity on 2014 February 20: (a) 0.027--0.5 MeV electron intensity from Wind/3DP, (b) time-shifted 0.027--0.5 MeV electron intensity from Wind/3DP, (c) 15.4--90.5 MeV normalized and time-shifted intensity from ERNE, and (d) over-plotted electron and proton intensity from Wind/3DP, EPHIN, and ERNE.} \label{Fig0220} \end{figure} Figure~\ref{Fig0220} shows the SEP event of 2014 February 20 as an example. This is a strongly anisotropic event with a maximum anisotropy A = 1.94.
This SEP event was associated with an M3.0 X-ray flare at S15W73 and a halo CME with a speed of 948 km/s. The observed metric type II, DH type II, and type III onsets are 07:45 UT, 08:06 UT, and 07:46 UT, respectively. The event CA is $-24^{\circ}$, and GOES observed a small SEP intensity of 22 pfu. Figure~\ref{Fig0220}(a) plots the 12-second electron intensity in the 27--520 keV energy channels from Wind/3DP SFPD in the solar direction. A clear velocity dispersion in the peak flux is visible. The velocity dispersion at the onset shows an instrumental effect: the intensities in lower energy channels were contaminated by higher-energy channels. When a high-energy electron loses only a fraction of its energy in the detector, a count is recorded at a lower energy, resulting in onset times that are too early. This early-onset effect at low energy can be seen more clearly in Figure~\ref{Fig0220}(b), where the electron time profiles have been shifted by the travel time of SEPs with a 1.25 AU path length. Figure~\ref{Fig0220}(c) plots the 1-minute proton intensity from ERNE, and (d) the superimposed intensity profiles of Wind/3DP electrons, EPHIN electrons, and ERNE protons on 2014 February 20. For easy comparison, in Figure~\ref{Fig0220}(c) the intensity profiles have been normalized to the peak values and the travel times for a 1.25 AU path length have been subtracted. The red vertical solid line in the figure indicates the type III burst onset time. Note that a slow rise of the background flux appears in the 15.4 MeV proton channel before the sharp rising phase; to avoid the onset uncertainty, we have excluded this channel from the analysis based on onset levels less than 10\% of the peak value (see Figure~\ref{Fig0220a}). \begin{figure}[t!] 
\mbox{ \includegraphics[width=0.7\txw, height = 0.7\txw ]{Fig0220_vda_b.eps} }\par \caption{The VDA based on onset times at 0.1\%, 1\%, 2\%, 5\%, and 10\% of the peak value.} \label{Fig0220a} \end{figure} Figure~\ref{Fig0220a} presents the VDA results based on onset times at 0.1\%, 1\%, 2\%, 5\%, and 10\% of the peak value. The results show that the first-arriving protons at the 0.1\% onset level propagated nearly scatter-free, with a path length of 1.2 $\pm$ 0.14 AU. The later-arriving protons at onset levels $>$ 5\% show a larger scattered path length of $\sim$ 1.5 AU. However, although the scattered path lengths increase as the onset levels increase, the VDA proton release times remain roughly the same, within 7 min of uncertainty. The TSA release time for 180 keV electrons from Wind/3DP is 07:52 UT, and those for the e1 and e2 electrons from EPHIN are 07:50 UT and 07:54 UT; thus no significant differences between the proton and electron SPR times are found for this SEP event. \begin{figure}[t!] \mbox{ \includegraphics[width=0.7\txw, height = 0.7\txw ]{Fig0127_vda_b.eps} }\par \caption{The VDA based on the 57.4--90.5 MeV (blue), 15.4--57.4 MeV (green), and 15.4--90.5 MeV energy channels.} \label{Fig0127a} \end{figure} Caution has to be taken with the VDA, due to high-background effects and energy-dependent scattering. The energy-dependent scattering effect becomes more important when there is a large amount of scattering. In such cases, the VDA using the energy range from 10 to 100 MeV may yield a release time that is earlier than expected \citep{Diaz11}. Figure~\ref{Fig0127a} shows a good example: the SEP event of 2012 January 27, which has the weakest anisotropy, A = 0.27. As shown in Figure~\ref{Fig0127a}, the velocity dispersion onset times do not lie on a straight line but curve from high to low energy, indicating an increasing scattered path length.
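The VDA amounts to a linear fit of onset time against inverse velocity, $t_{onset}(E) = t_{release} + L/v(E)$, whose intercept is the release time and whose slope is the apparent path length. A minimal sketch with synthetic onset times (illustrative values, not the measured data) is:

```python
import numpy as np

C_AU_MIN = 299792.458 * 60.0 / 1.495978707e8  # speed of light [AU/min]
M_P = 938.272                                  # proton rest energy [MeV]

def inv_beta(kinetic_mev, rest_mev=M_P):
    """Inverse relativistic speed c/v for a given kinetic energy."""
    gamma = 1.0 + np.asarray(kinetic_mev, dtype=float) / rest_mev
    return 1.0 / np.sqrt(1.0 - 1.0 / gamma**2)

def vda_fit(energies_mev, onset_times_min):
    """Linear VDA fit t_onset = t_release + L * (1/v);
    returns (release time [min], apparent path length [AU])."""
    x = inv_beta(energies_mev) / C_AU_MIN          # minutes per AU of path
    path_au, t_release = np.polyfit(x, onset_times_min, 1)
    return t_release, path_au

# Synthetic onsets generated for L = 1.2 AU and release at t = 470 min:
energies = np.array([15.4, 25.0, 40.0, 57.4, 90.5])
onsets = 470.0 + 1.2 * inv_beta(energies) / C_AU_MIN
t_release, path_au = vda_fit(energies, onsets)
```

When scattering increases with decreasing energy, as in the 2012 January 27 event, the points curve away from this straight line and a single fit over the full 10--100 MeV range overestimates $L$ and pulls $t_{release}$ too early.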
The VDA based on the 15.4--36.4 MeV energy range (green) yielded a path length of 2.51 AU, which is $\sim$ 0.76 AU larger than that in the 57.4--90.5 MeV range (blue); the VDA using the 15.4--90.5 MeV range gave an excessively large path length of 3.17 AU and an unreasonable release time earlier than the type III onset. In Table 2, we list the VDA results along with the electron anisotropy, e1 SPR time, and e2 SPR time for the 28 SEP events. Entries with '---' (5 of 28) are cases where the path length values were outside the range of 1--3 AU due to a high background level (HBG) or ion contamination (IC), or cases where data are not available (NA). The obtained proton path lengths range from 1.18 to 2.51 AU. In general, the derived proton path lengths tend to be larger for weakly anisotropic events than for strongly anisotropic events. However, there are cases with strong electron anisotropies that show large apparent proton path lengths due to relatively high background levels (especially for the STEREO HET data). 15 of 23 events have path lengths $l \le$ 1.5 AU, and 21 of 23 events have $l \le$ 1.65 AU, the exceptions being events 9 and S3, where event 9 (the January 27 2012 SEP event) has the weakest anisotropy, A = 0.27. By comparing the TSA release times with the release times derived from the VDA, we obtain maximum errors for the p2, p1, and p0 protons of $\sim$ 6 min, 10 min, and 32 min, respectively. The p2 TSA release times have the smallest error, as expected. The proton release times from the VDA, $t_{SPR}(p_{vda})$, are found to be delayed from the e1 SPRs by $-1$--30 min, and from the e2 SPRs by $-9$--18 min. $\sim$ 70\% (16 of 23) of the events have $dt_1 = t_{SPR}(p_{vda}) - t_{SPR}(e1)$ $<$ 8 min, and seven events (3, 6, 8, S5, S6, S9, and S10) have $dt_1 \ge$ 8 min. 13 (of 19) events have $dt_2 = t_{SPR}(p_{vda}) - t_{SPR}(e2)$ within 6 min, and 3 events (S6, S9, and S10) have $dt_2 >$ 9 min.
In addition, there are 3 events (1, 7, and S2) with a negative $dt_2$; events 1 and 7 suffered from X-ray contamination, resulting in a large uncertainty of $\sim$ 10 min in the e2 SPR times. \section{Summary and Discussion} \subsection{Summary} By choosing the smallest CA among the three spacecraft, we derive and compare the high-energy electron and proton SPR times using SOHO/EPHIN electron fluxes in the 0.25--10.4 MeV channels and SOHO/ERNE proton fluxes in the 13.8--101 MeV channels, or the similar energy channels of the SEPT and HET (LET) detectors on STEREO. Our main results are listed below. \begin{itemize} \item The e2 release times are found to be systematically later than the e1 release times, by an average of 6.8 min and 7.3 min for the 12 SOHO SEPs and 10 STEREO SEPs, respectively. Among these 22 events, three events (6, S5, and S10) have a large 10--28 min delay. \item The p2 protons are shown to have SPR times similar to the p1 protons. The average delays between the p2 and p1 SPRs are $\sim$ 2.5 min and $-0.2$ min for the 12 SOHO SEPs and 9 STEREO SEPs, respectively. For the p0 protons, 12 SEP events show small delays between the p2 and p0 SPRs, within 5 min, and five events (9, S2, S3, S9, and S11) show a large 10--32 min delay due to proton scattering effects. \item The proton VDA results show that protons are released simultaneously with the e1 electrons within 8 min for $\sim$ 70\% (16 of 23) of the SEP events, and with the e2 electrons within 6 min for 13 of 19 events. $\sim$ 30\% (7 of 23) of the SEP events show a delayed proton release time of $\sim$ 8--31 min. Among these seven events, three (6, S5, and S10) also have a large e2--e1 SPR delay. \item $\sim$ 65\% (15 of 23) of the proton events show a small scattered path length ($<$ 1.5 AU); 8 of 23 proton events have a large apparent path length ($>$ 1.5 AU), partly due to the higher background levels in the STEREO HET data.
\item The delays between the e1 SPRs and the type III onsets range from 2 min to 42 min. The CME heights at the e1 release times range from 2.1 to 9.1 Rs. From the CME heights, it is likely that the e1 electrons are accelerated by the CME-driven and/or flare shock waves rather than by flare reconnection. \end{itemize} \subsection{Discussion} \subsubsection{Association between Electrons and Protons} Our results are consistent with the study of \citet{Haggerty09}, who suggested that near-relativistic electrons and energetic protons are accelerated and released by essentially the same mechanism(s). \citet{Haggerty09} studied the injection times of near-relativistic electrons and non-relativistic protons for 19 electron beam events using ERNE 50--100 MeV proton and EPAM 38--315 keV electron data, and found that 11 of the 19 events (60\%) are statistically consistent with zero delay between the proton and electron injections, within an uncertainty of $\sim$ 3 min. The remaining 8 events show broad 5--25 minute delays of the protons relative to the electron injections. They also compared the peak intensity of 175--315 keV electrons with that of 1.8--4.7 MeV protons from ACE/EPAM and found a good correlation between the peak intensities of electrons and protons. \begin{figure} \noindent\includegraphics[width=0.7\txw]{int_corr.eps} \caption{Logarithmic peak intensity correlation between the e1 electrons and the p0 protons.} \label{int_corr} \end{figure} \begin{figure}[t!] \mbox{ \includegraphics[width=0.98\txw, height = 0.45\txw ]{Fig13_merge.eps} }\par \caption{(Left) Over-plotted electron and proton intensity from ACE/EPAM, EPHIN, and ERNE on December 13 2006. (Right) Over-plotted electron and proton intensity from Wind/3DP electrons, EPHIN, and Wind/EPACT protons on November 26 2011. 
The intensity has been normalized to the background flux level for easy comparison.} \label{figall} \end{figure} Among the 28 SEP events under study, we found a similar correlation between the peak intensities of the e1 electrons and the p0 protons, as shown in Figure~\ref{int_corr}. Furthermore, the profiles of the different species are found to be very similar to each other, although not identical, as shown in Figure~\ref{figall}. Our results support the conclusion that near-relativistic electron and high-energy proton acceleration are closely related to each other. On the other hand, how the intensity profiles evolve with time, resulting from transport-modulated SEP particle acceleration at an evolving CME-driven shock, is not well understood. For example, at the SEP rise phase, it is not well understood why the e2 electrons are the last to reach their peak value for event 1 (left panel of Figure~\ref{figall}), while in the second example (event 8, right panel of Figure~\ref{figall}) the e2 electrons reach their plateau before the protons. \subsubsection{Direct Shock Acceleration vs Transverse Transport} Besides the simultaneously released electron and proton events, there are seven events showing large delays of 8--31 min between the proton release times $t_{SPR}(p_{vda})$ from the VDA and the e1 SPRs. These are SEP events with small e2 and p2 intensities. Three possible reasons may account for these large delays: 1) shocks formed late, at high altitudes, around the DH type II onset times; 2) longer times were needed for the evolving shocks to become intense enough to produce high-energy SEPs after the DH type II onsets; 3) for SEP events with large CAs, time was needed for the shocks to reach the magnetic footpoint connecting to the observer. Among the above seven events, events 8 and S6 have small CAs of 6$^\circ$ and 3$^\circ$, events 3 and S5 have large CAs of 30$^\circ$ and 32$^\circ$, and the other three events (6, S9, and S10) have intermediate CAs of 13--21$^\circ$.
Six of these events have e1 SPRs within 5 min of the DH type II onsets (and a large 13--26 min delay between $t_{SPR}(p_{vda})$ and the metric type II onsets), the exception being event S10. The obtained timing comparisons are consistent with one (or two) of the above three hypotheses. Rouillard et al. (2012) investigated the 2011 March 21 SEP event using STEREO and SOHO observations. By tracking the lateral expansion of the CME shock, they demonstrated that the delayed solar particle release times are consistent with the time required for the shock to propagate to the magnetic footpoint connecting to the observer. On the other hand, for large-CA and/or high-latitude SEPs, an alternative (or contributing) explanation is that the delay between the SEP release and the electromagnetic emissions is caused by the time needed for the SEP particles to be transported across the field lines to the connection footpoint of the observer \citep[e.g.][]{Dresing12, Qin13, Laiti15}. It is possible that both direct shock acceleration and cross-field propagation of SEPs play roles in the formation of the SEP intensity time profile. At an evolving CME-driven shock near the Sun, many factors, such as the shock obliquity, the compression ratio, and the transport parameters, may affect the SEP intensity; further investigations are needed. \subsection{Conclusion} Our results suggest that near-relativistic electron and high-energy proton acceleration are closely related to each other. There exists a good association between the high-energy electron and proton release times, intensity peak values, and time profiles. For small-intensity SEP events, it takes longer for the e2 and p2 fluxes to reach detectable levels. However, whether this delay is due to the time needed for the evolving shock to be strengthened or to particle transport effects is not resolved. \begin{acknowledgments} The authors would like to thank the support of the STEREO, SOHO, Wind, and ACE teams. 
The STEREO SECCHI data are produced by a consortium of RAL (UK), NRL (USA), LMSAL (USA), GSFC (USA), MPS (Germany), CSL (Belgium), IOTA (France), and IAS (France). The SOHO LASCO data are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut f\"{u}r Aeronomie (Germany), Laboratoire d'Astronomie (France), and the University of Birmingham (UK). SOHO Electron Proton and Helium Instrument (EPHIN) data were obtained from \url{http://www2.physik.uni-kiel.de/SOHO/phpeph/EPHIN.htm}; SOHO Energetic and Relativistic Nuclei and Electron instrument (ERNE) data were obtained from \url{http://www.srl.utu.fi/erne_data/datafinder/df.shtml}; STEREO High Energy Telescope (HET) data were obtained from \url{http://www.srl.caltech.edu/STEREO/Public/HET_public.html}; STEREO Low Energy Telescope (LET) data were obtained from \url{http://www.srl.caltech.edu/STEREO/Public/LET_public.html}; STEREO Solar Electron Proton Telescope (SEPT) data were obtained from \url{http://www2.physik.uni-kiel.de/STEREO/index.php?doc=data}; and Wind/3DP and ACE/EPAM proton and electron data were obtained from \url{http://cdaweb.gsfc.nasa.gov/istp_public/}. This work was supported by NASA LWS TR\&T program NNX15AB70G. PM was partially supported by NASA grant NNX15AB77G and NSF grant AGS-1358274. \end{acknowledgments}
\section{Appendix}\label{sec:appendix} \begin{figure}[H] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaska1.pdf} \caption{Macro-average F1-scores for sub-task A based on 10-fold cross validation. Asterisks indicate a statistically significant difference, where ** denotes 1e-04 < p <= 1e-03, * corresponds to 1e-02 < p <= 5e-02, and ns indicates results where p > 5e-02.} \label{fig:subtaskacv} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb1.pdf} \caption{Performance with \emph{Setup B} for sub-task A. The notation is defined in Figure~\ref{fig:subtaskacv}.} \label{fig:sigA} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb2.pdf} \caption{Performance with \emph{Setup B} for sub-task B. The notation is defined in Figure~\ref{fig:subtaskacv}.} \label{fig:sigB} \end{figure} \begin{figure*}[H] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb1.pdf} \caption{Results for sub-task A.} \label{sigA} \end{subfigure} \hspace{1.8cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb2.pdf} \caption{Results for sub-task B.} \label{sigB} \end{subfigure} \caption{Performance for \emph{Setup B}. The notation is defined in Figure~\ref{fig:subtaskacv}.} \end{figure*} \section{Background}\label{background} The MAMI dataset contains 10,000 memes as the training set and 1,000 memes as the test set; all of these are given together with the text transcription obtained through optical character recognition (OCR). The reference labels are obtained by manual annotation via a crowdsourcing platform. The challenge is composed of two sub-tasks: sub-task A is a binary classification task focused on the identification of misogynous memes, so each meme should be classified as not misogynous (noMis) or misogynous (Mis). 
Sub-task B, in contrast, presents a multi-label classification task, where the misogynous memes are to be grouped further into four potentially overlapping categories. The dataset class distribution is shown in Table~\ref{tab:datadis}. \vspace{-0.2cm} \renewcommand{\arraystretch}{1.15} \begin{table}[!htb] \caption{MAMI-22 dataset class distribution. \textbf{Mis}: misogynous; \textbf{Shm}: shaming; \textbf{Ste}: stereotype; \textbf{Obj}: objectification; \textbf{Vio}: violence.} \setlength{\tabcolsep}{5.5pt} \begin{tabular}{|c | c c c c c|} \hline \textbf{Sets}& \textbf{Mis} & \textbf{Shm} & \textbf{Ste} & \textbf{Obj} & \textbf{Vio} \\ \hline\hline training set & 5000 & 1274 & 2810 & 2202 & 953 \\ test set & 500 & 146 & 350 & 348 & 153 \\ \hline \end{tabular} \label{tab:datadis} \end{table} Since the provided dataset contains two modalities (namely, images and texts), an automated approach requires integrating the information from the images with the textual information. However, the OCR-based transcriptions are quite error-prone, while the images are often hard to recognize for automatic systems, due, among other reasons, to overlaid text and to the popularity of further modifications, such as the composition of multiple sub-images. Consequently, it is challenging to identify the pertinent information of the respective modalities in order to merge it into a joint classification decision. Some researchers have already worked on meme datasets. For example, \cite{sabat2019hate} created a hateful memes database, using the BERT model to extract a contextual text representation and the VGG-16 convolutional neural network~\cite{simonyan2014very} for image features. The text and image representations are then concatenated to obtain a multi-modal representation. Facebook also organized a challenge for the identification of hateful memes in 2020~\cite{kiela2020hateful}. 
The winner of this challenge adopted an ensemble system with four different visual-linguistic transformer architectures~\cite{zhu2020enhance}. The Transformer model has shown excellent performance in many tasks, and it also shows promising results in the above studies, based on its use of the attention mechanism to extract the contextual information within a text. However, its ability to capture global information about the vocabulary of a language remains limited~\cite{lu2020vgcn}, and we hypothesize that this is even more of an issue in the task at hand, due to the very short texts in the given challenge. For this reason, we combine a Transformer model with a graph convolutional network (GCN)~\cite{yao2019graph}, which may help to address this issue. GCNs can be understood as a generalization of CNNs, where the data has graph structure and locality is defined by the connectivity of the graph. As input, a GCN receives features attached to a set of nodes. From layer to layer, the features of a node are updated as weighted combinations of its neighbors\textquotesingle \ features. In our case, the graph is defined as follows: there is a node for every word in the vocabulary and for every document. The collection of nodes is $V = \left \{ D_1, D_2\cdots D_{n_D},W_1, W_2, \cdots W_{n_W}\right \}$, where $D_i$ and $W_i$ denote the document and word nodes, respectively, $n_D$ is the number of documents, and $n_W$ is the number of unique words in the corpus. The edges between word nodes are weighted with the word co-occurrence; the edges between document-word pairs are weighted with the term frequency-inverse document frequency (TF-IDF). A fixed-size sliding window with step size 1 is used to gather the word co-occurrence information over the entire dataset. 
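The sliding-window counting just described, combined with the standard PMI definition, can be sketched as follows (an illustrative reimplementation with a toy corpus and a hypothetical window size; the actual system relies on the Text-GCN code cited in this section):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_pairs(documents, window_size=3):
    """Slide a fixed-size window (step 1) over each tokenized document,
    count windows N, N(i), N(i, j), and return
    PMI(i, j) = log(p(i, j) / (p(i) * p(j))) for every co-occurring pair.
    Keys are alphabetically sorted word tuples."""
    n_windows = 0
    word_counts = Counter()   # N(i): windows containing word i
    pair_counts = Counter()   # N(i, j): windows containing both i and j
    for doc in documents:
        tokens = doc.split()
        n_spans = max(1, len(tokens) - window_size + 1)
        for k in range(n_spans):
            window = set(tokens[k:k + window_size])
            n_windows += 1
            word_counts.update(window)
            pair_counts.update(combinations(sorted(window), 2))
    return {
        (i, j): math.log(n_ij * n_windows / (word_counts[i] * word_counts[j]))
        for (i, j), n_ij in pair_counts.items()
    }

pmi = pmi_pairs(["the cat sat", "the cat ran", "dogs ran fast"])
```

Only the pairs with positive PMI would then be kept as word-word edge weights in the adjacency matrix.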
The point-wise mutual information (PMI) is employed to measure the relationship between words \textit{i} and \textit{j} as follows: \begin{equation} \label{PMI} \renewcommand\arraystretch{1.5} \begin{matrix} \textrm{PMI} (i, j) = \textrm{log}\frac{p(i, j)}{p(i)p(j)},\\ p(i, j) = \frac{N(i, j)}{N},\\ p(i) = \frac{N(i)}{N}, \end{matrix} \end{equation} where $N(i)$ counts the sliding windows in the training set that contain word $i$, $N(i, j)$ is the number of sliding windows that contain both words $i$ and $j$, and $N$ is the total number of sliding windows in the corpus. As described in~\cite{yao2019graph}, a positive PMI value indicates a high semantic correlation of words in the corpus, and vice versa. The adjacency matrix $\textbf{\textit{A}}$ of the graph is then computed elementwise, as follows: \begin{equation} \label{adj} \small A_{i, j}=\left\{\begin{matrix} \textrm{PMI}(i, j) &i, j \textrm{ are word nodes, } \textrm{PMI}(i, j)>0; \\ & n_D< i,j \leqslant n_D + n_W \\ \textrm{TF-IDF}_{i,j} & \textrm{document node } i \textrm{ and word node } j; \\ & i\leqslant n_D; n_D < j \leqslant n_D +n_W\\ 1 & i=j \\ 0 & \textrm{otherwise} \end{matrix}\right. \end{equation} Since the graph is undirected, the adjacency matrix is symmetric. Finally, the adjacency matrix is normalized as $\tilde{\textbf{\textit{A}}} = \textbf{\textit{D}}^{-\frac{1}{2}}\textbf{\textit{A}}\textbf{\textit{D}}^{-\frac{1}{2}}$, where $\textbf{\textit{D}}$ is the degree matrix of $\textbf{\textit{A}}$. The normalized adjacency matrix $\tilde{\textbf{\textit{A}}}$ is used to weight the graph node features, cf.~Section~\ref{smm}. A PyTorch implementation based on Text-GCN~\cite{yao2019graph}, as provided on GitHub\footnote{\url{https://github.com/codeKgu/Text-GCN}}, was used for our system. \section{Conclusion}\label{conclusion} This paper presents our ensemble-based approach to two sub-tasks of the SemEval-2022 MAMI competition. 
The challenge aims to identify misogynous memes and classify them into potentially overlapping categories. We train several text models and an image model, and combine these into a number of different bi-modal models via our proposed fusion network. Among the uni-modal systems, all text models show a far better performance than the image model. As expected, our proposed graph convolutional attention network (GCAN), which also considers the graph structure of the input data while using pre-trained RoBERTa word embeddings as node features, consistently outperforms the pre-trained RoBERTa model. The proposed fusion network further improves the performance by combining the ideas of stream weighting and representation fusion. We additionally adopt 10-fold cross-validation and use a dataset-level soft voting ensemble to obtain better and more robust results. Finally, our model-level hard voting ensemble integrates the soft voting ensemble predictions of our best uni- and bi-modal models. Our experiments indicate that this layered ensemble approach can significantly improve the model accuracy. Ultimately, our submitted system achieves an F1-score of 0.755 for sub-task A and 0.709 for sub-task B. Overall, we believe that the identification of misogyny in memes is best addressed through bi-modal recognition, considering both textual and image information. Concerning the text-based classification, we found a graph convolutional attention network to be beneficial as an integrative model for Transformer embeddings. This helps in text classification when the documents are short, as in the given meme classification task. To cope with the bi-modality of the task at hand, we have implemented a range of systems for integrating the information from both streams. An idea that proved to be effective here was that of bringing together the strengths of early fusion and decision fusion in a joint framework. 
This allowed us to dynamically adjust the contributions of the two modalities through dynamic stream weighting, while still being able to combine information at the feature level across the streams, thanks to the representation fusion branch of our bi-modal systems. \subsection{Graph convolutional network}\label{GCNapp} \subsection{Example combining the OCR and image caption text}\label{textcomb} \subsection{Force teaching weighted binary cross-entropy loss}\label{weightloss} In the weighted binary cross-entropy loss, the weights are obtained by: \begin{equation} \label{lossweight} w_{\textrm{loss}_{i}} = \frac{\frac{\textrm{NoS}}{\textrm{NoS}(i)}}{\sum_{j}^{}\frac{\textrm{NoS}}{\textrm{NoS}(j)}}, i \in [\textrm{Shm}, \textrm{Ste}, \textrm{Obj}, \textrm{Vio}] \end{equation} where $\textrm{NoS}$ is the total number of samples in the training set and $\textrm{NoS}(i)$ is the number of true instances for class $i$. Only four categories are considered in sub-task B: shaming, stereotype, objectification, and violence. The multi-label loss value is then computed as the weighted combination of the binary cross-entropy terms: \begin{equation} \label{loss} \textrm{loss}1 = \sum_{i}^{}w_{\textrm{loss}_{i}}\cdot \textrm{BCE}(p_i, \textrm{target}_i), \end{equation} where $p_i$ is the probability of class $i$ and $\textrm{target}_i$ is the ground truth. Due to the relationship between sub-tasks A and B, we adopted force teaching in the loss function for better performance: a sample must first be identified as misogynous before being classified into the four categories. We therefore add a force-teaching term: \begin{equation} \label{mse} \textrm{loss}2 = \textrm{MSE}(\textrm{pseudo}_{_\textrm{Mis}}, \textrm{target}_{_\textrm{Mis}}), \end{equation} where \begin{equation} \label{mse1} \textrm{pseudo}_{_\textrm{Mis}} = \textrm{max}(p_{_\textrm{Shm}}, p_{_\textrm{Ste}}, p_{_\textrm{Obj}}, p_{_\textrm{Vio}}). 
\end{equation} The final loss is computed as \begin{equation} \label{lossend} \textrm{loss} = 0.7\times \textrm{loss}1 + 0.3 \times \textrm{loss}2. \end{equation} \subsection{\emph{Setup-A} results for sub-task A}\label{sub-taska} Sub-task A focuses on misogynous meme identification. Three uni-modal models are first trained. Based on these uni-modal models, the bi-modal models BERTC-ViT, GCAN-ViT, BERTC-GCAN, and BERTC-GCAN-ViT are optimized through our proposed fusion network (Figure~\ref{fig:mmms}). Figure~\ref{fig:subtaskacv} uses the Mann-Whitney U test~\cite{mann1947test} to indicate statistically significant differences in the F1-score between the BERTC model and the other models for sub-task A. \begin{figure}[h] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaska1.pdf} \caption{Sub-task A macro-average F1-score for 10-fold cross validation. Asterisks indicate a statistically significant difference, where *** denotes 1e-04 < p <= 1e-03, * denotes 1e-02 < p <= 5e-02, and ns denotes p > 5e-02.} \label{fig:subtaskacv} \end{figure} \begin{table}[H] \setlength{\tabcolsep}{3pt} \begin{tabular}{|c|c|c|c|} \hline model & ensemble & model & ensemble \\ \hline BERTC & 0.663 & GCAN-ViT & \textbf{0.707} \\ \hline GCAN & 0.674 & BERTC-GCAN & 0.677 \\ \hline ViT & 0.619 & \begin{tabular}[c]{@{}c@{}}BERTC-\\ GCAN-ViT\end{tabular} & 0.689 \\ \hline BERTC-ViT & 0.697 & - & - \\ \hline \end{tabular} \caption{Soft voting ensemble performance with \emph{Setup-A} for sub-task A.} \label{subtaskacv} \end{table} Table~\ref{subtaskacv} lists the soft voting ensemble F1-scores under 10-fold cross-validation. The GCAN-ViT model still outperforms the other models after the soft voting ensemble. Finally, the model-level hard voting ensemble is adopted and yields an F1-score of 0.698. 
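Under our reading of the loss equations above, the full objective can be sketched in a few lines (a NumPy illustration with hypothetical batch values; the actual training code uses PyTorch):

```python
import numpy as np

EPS = 1e-7  # numerical clamp for the log terms

def class_weights(counts):
    """Inverse-frequency weights of the weighted BCE (NoS cancels in the ratio)."""
    inv = 1.0 / np.asarray(counts, dtype=float)
    return inv / inv.sum()

def bce(p, t):
    """Binary cross-entropy between predicted probabilities and labels."""
    p = np.clip(p, EPS, 1.0 - EPS)
    return -np.mean(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))

def mami_loss(probs, targets, target_mis, weights):
    """probs, targets: (batch, 4) arrays for [Shm, Ste, Obj, Vio];
    target_mis: (batch,) binary sub-task A labels."""
    loss1 = sum(w * bce(probs[:, i], targets[:, i]) for i, w in enumerate(weights))
    pseudo_mis = probs.max(axis=1)                    # force-teaching pseudo label
    loss2 = np.mean((pseudo_mis - target_mis) ** 2)   # MSE term
    return 0.7 * loss1 + 0.3 * loss2

w = class_weights([1274, 2810, 2202, 953])   # per-category training-set counts
probs = np.array([[0.9, 0.2, 0.1, 0.05]])    # hypothetical model outputs
targets = np.array([[1.0, 0.0, 0.0, 0.0]])
total = mami_loss(probs, targets, np.array([1.0]), w)
```

The inverse-frequency weighting makes the rare violence class contribute most strongly, while the MSE term penalizes category predictions that contradict the sub-task A label.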
\subsection{Significance comparison for \emph{setup-B} results}\label{sub-taskb} \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb1.pdf} \caption{Sub-task A model performance significance comparison.} \label{sigA} \end{figure} Figure~\ref{sigA} shows the sub-task A results computed with \emph{setup-B}, and Figure~\ref{sigB} shows the results for sub-task B. Again, the bi-modal model GCAN-ViT outperforms the other models. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb2.pdf} \caption{Sub-task B model performance significance comparison.} \label{sigB} \end{figure} \section{Introduction}\label{intro} Hate speech against women remains rampant despite many efforts at education, prevention, and blocking. Misogyny takes place both online and offline. Especially on social media platforms, misogyny appears in different forms and has serious implications~\cite{chetty2018hate}. Currently, automated detection and filtering seem to be the most effective ways to prevent hate speech online. Over the past few years, however, the rising popularity of memes has brought misogyny into a new multi-modal form, which may be more likely to go viral due to the often surprising combinations of text and image that may strike viewers as funny and, hence, as eminently shareable. The multi-modality of memes also makes automatic detection more challenging. Some memes express their hatred implicitly or through juxtaposition, so they may even appear harmless when considering the text or the image in isolation. SemEval-2022 Task 5, Multimedia Automatic Misogyny Identification (MAMI)~\cite{task5}, aims to identify and classify English misogynous memes. In recent years, the Transformer model \cite{vaswani2017attention} has been widely used in natural language processing (NLP) and image processing. 
Transfer learning~\cite{torrey2010transfer} with a pre-trained Transformer model can save training resources and increase efficiency with less training data~\cite{wang2020heck}. Therefore, in this work, we consider transfer learning to customize three uni-modal models based on the Transformer model: \romannumeral 1) fine-tuning a pre-trained RoBERTa model for classification (BERTC) \cite{liu2019roberta}; \romannumeral 2) training a graph convolutional attention network (GCAN) using the pre-trained RoBERTa model for word embedding; \romannumeral 3) fine-tuning a pre-trained image model, the Vision Transformer (ViT) \cite{dosovitskiy2020image}. Based on these three uni-modal models, four bi-modal models are trained through our proposed model fusion network, namely BERTC-ViT, GCAN-ViT, BERTC-GCAN, and BERTC-GCAN-ViT. All models are evaluated with 10-fold cross-validation. The macro-average and weighted-average F1-scores are employed as the metrics for the sub-tasks. Ultimately, the ensemble strategy is applied on both the dataset- and the model-level (detailed in Section~\ref{ensemble}) for better performance. The remainder of the paper is structured as follows: Section~\ref{background} introduces the MAMI challenge and related solutions to the task. Our ensemble model is described in Section~\ref{Overview}, followed by the experimental setup in Section~\ref{setup}. Finally, our results are shown and conclusions are drawn in Sections~\ref{results} and~\ref{conclusion}. \section{Results}\label{results} In summary, we investigated two configurations, displayed in Table~\ref{summary}. \emph{Setup A} represents the binary classification for sub-task A, resulting in an output dimension $n=1$. \emph{Setup B} additionally deals with the multi-label classification of sub-task B, returning an output of dimension $n=4$. All results are evaluated on the official test set. 
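The macro- and weighted-average F1-scores mentioned above differ only in how the per-class F1 values are combined. As an illustration, a minimal pure-Python sketch (the experiments themselves rely on standard library implementations; the tie-breaking convention for classes without any true or predicted instances is a simplification here):

```python
def f1_per_class(y_true, y_pred, cls):
    """F1-score of one class from paired label lists."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    # Convention: return 0.0 when the class has no true positives.
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of the per-class F1-scores."""
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

def weighted_f1(y_true, y_pred, classes):
    """Per-class F1-scores weighted by class support."""
    n = len(y_true)
    return sum(y_true.count(c) / n * f1_per_class(y_true, y_pred, c)
               for c in classes)
```

For balanced classes the two metrics coincide; under class imbalance the weighted average favors the majority classes.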
\begin{table}[!htb] \setlength{\tabcolsep}{2.7pt} \begin{tabular}{|c|c|c|c|} \hline \textbf{Setup} & \textbf{Task} & \textbf{Dimension} & \textbf{Loss} \\ \hline\hline \emph{Setup A} & sub-task A & $n=1$ & BCE \\ \hline \emph{Setup B} & \small{sub-tasks A/B} & $n=4$ & {\begin{tabular}[c]{@{}c@{}}\small{Weighted BCE} \& \\\small{Teacher Forcing} \end{tabular}} \\ \hline \end{tabular} \caption{Summary of the considered configurations.} \label{summary} \end{table} \subsection{Results for \emph{Setup A} (Sub-task A)} \label{sub-taska} In the first stage, we trained three different uni-modal models (i.e., BERTC, GCAN, and ViT). In the second stage, we optimized the bi-modal models (i.e., BERTC-ViT, GCAN-ViT, BERTC-GCAN, and BERTC-GCAN-ViT). The evaluation results in terms of the macro-average F1-score are displayed in Figure~\ref{fig:subtaskacv} and Table~\ref{subtaskacv}, showing the performance in identifying misogynous memes. To assess the statistical significance of performance differences, we applied a 10-fold cross validation and computed the Mann-Whitney-U test~\cite{mann1947test}. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{./Sections/Image/subtaska1.pdf} \caption{Macro-average F1-scores for sub-task A based on 10-fold cross validation. Asterisks indicate a statistically significant difference, where ** denotes $10^{-4} < p \leq 10^{-3}$, * corresponds to $10^{-2} < p \leq 5\cdot10^{-2}$, and ns indicates results where $p > 5\cdot10^{-2}$.} \label{fig:subtaskacv} \end{figure} As we can see, the text-only models (BERTC and GCAN) generally show a superior performance compared to the image-only model (ViT). The results in Figure~\ref{fig:subtaskacv} clearly indicate that our bi-modal models are both more accurate and more robust. Overall, the GCAN-ViT model yields the best results w.r.t.\ the reported median F1-score.
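The U statistic underlying the Mann-Whitney test compares the fold-wise F1-scores of two models pairwise; a minimal sketch without tie correction (the p-values reported in the figures come from a standard implementation):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two samples, e.g. the ten fold-wise
    F1-scores of two models.  Counts, over all pairs, how often a value
    from `a` exceeds a value from `b` (ties count one half)."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)
```

Extreme values of U relative to `len(a) * len(b) / 2` indicate a systematic performance difference between the two score samples.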
\begin{table}[!htb] \setlength{\tabcolsep}{1.5pt} \begin{tabular}{|c|c||c|c|} \hline \textbf{Model} & \textbf{Ensemble} & \textbf{Model} & \textbf{Ensemble} \\ \hline\hline BERTC & 0.663 & GCAN-ViT & \textbf{0.707} \\ \hline GCAN & 0.674 & \small{BERTC-GCAN} & 0.677 \\ \hline ViT & 0.619 & \begin{tabular}[c]{@{}c@{}}\small{BERTC-}\\ \small{GCAN-ViT}\end{tabular} & 0.689 \\ \hline BERTC-ViT & 0.697 & - & - \\ \hline \end{tabular} \caption{Macro-average F1-scores of soft voting ensembles for sub-task A.} \label{subtaskacv} \end{table} Table~\ref{subtaskacv} lists the averaged F1-scores for soft voting ensembles, obtained by combining all learned models from the 10-fold cross-validations. The results show that our GCAN-ViT model outperforms all other models, achieving an F1-score of 0.707. \subsection{Results for \emph{Setup B} (Sub-tasks A/B)}\label{sub-taskb} Next, we addressed sub-task B, i.e.~to classify the misogynous memes into four, potentially overlapping, categories. As in \emph{Setup A}, we trained the same uni- and bi-modal models, but with a different loss (see Table~\ref{summary}). For sub-task B, the weighted-average F1-score is used as the evaluation metric. The results are presented in Figure~\ref{fig:sigs}. \begin{figure*}[!htb] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb1.pdf} \caption{Results for sub-task A.} \label{fig:sigA} \end{subfigure} \hspace{1.8cm} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[scale=0.5]{./Sections/Image/subtaskb2.pdf} \caption{Results for sub-task B.} \label{fig:sigB} \end{subfigure} \caption{Performance for \emph{Setup B}. The notation is defined in Figure~\ref{fig:subtaskacv}.} \label{fig:sigs} \end{figure*} Interestingly, the models optimized for sub-task B also perform better for sub-task A.
In this case, we set the estimated label ``misogynous'' to 1 if at least one of the labels for ``shaming'', ``stereotype'', ``objectification'', or ``violence'' is 1. Figure~\ref{fig:sigA} depicts the sub-task A results, while Figure~\ref{fig:sigB} shows the corresponding performance for sub-task B. Again, we see that the bi-modal model GCAN-ViT outperforms all other models. In addition, Tables~\ref{subtaskabcv} and~\ref{subtaskall} show the results for soft and hard voting ensembles. By comparing Table~\ref{subtaskabcv} with Table~\ref{subtaskacv} (both tables represent soft voting results), we observe significantly improved F1-scores for \emph{Setup B}. \begin{table}[!htb] \setlength{\tabcolsep}{5pt} \begin{tabular}{|c|c|c|} \hline \textbf{Model} & \textbf{Sub-task A} & \textbf{Sub-task B} \\ \hline\hline BERTC & 0.714 & 0.684 \\ \hline GCAN & 0.725 & 0.695 \\ \hline ViT & 0.666 & 0.641 \\ \hline BERTC-ViT & 0.746 &0.692 \\ \hline GCAN-ViT & \textbf{0.758} & \textbf{0.704} \\ \hline BERTC-GCAN & 0.724 & 0.696 \\ \hline BERTC-GCAN-ViT & 0.755 & 0.704 \\ \hline \end{tabular} \caption{F1-scores of soft voting ensembles for \emph{Setup B} (sub-tasks A and B).} \label{subtaskabcv} \end{table} \begin{table}[!htb] \setlength{\tabcolsep}{4.5pt} \begin{tabular}{|c|c|c|} \hline \textbf{Combination} & \textbf{Sub-task A} & \textbf{Sub-task B} \\ \hline\hline \small{Three uni-modal models} & 0.728 & 0.698 \\ \hline \small{Four bi-modal models} & 0.752 & \textbf{0.709} \\ \hline \small{All seven models} & \textbf{0.755} & 0.706 \\ \hline \small{Oracle model combination} & 0.762 & 0.716 \\ \hline \end{tabular} \caption{Model-level hard voting ensemble performance with \emph{Setup B} for sub-tasks A and B.} \label{subtaskall} \end{table} As a last experiment, we applied hard voting on the ensembles. Again, the sub-task A results are derived from sub-task B. Table~\ref{subtaskall} shows the results of different combinations.
Generally, the combination of the four bi-modal models in the 2nd row outperforms the combination of the three uni-modal models in the 1st row. If we combine all uni- and bi-modal models (3rd row), the F1-score is 0.755 for sub-task A and 0.706 for sub-task B. The results in bold print represent our submitted approaches for both sub-tasks, showing an F1-score of 0.755 for sub-task A and 0.709 for sub-task B. After the challenge ended, we again evaluated all possible subset combinations of the seven candidate models. The following combinations give the best achievable results when the official test set reference labels are known: the ensemble of ViT, BERTC-GCAN-ViT, BERTC-ViT, and GCAN-ViT achieves an F1-score of 0.762 for sub-task A, while an ensemble consisting of BERTC-ViT and BERTC-GCAN-ViT yields an F1-score of 0.716 on sub-task B. These results are shown for comparison in the final row of Table~\ref{subtaskall} as oracle results. \section{Experimental Setup}\label{setup} In this section, we describe our data processing and training pipeline in more detail. \subsection{Data pre-processing}\label{datapreprocessing} The challenge dataset provides a transcription text stream that was obtained via OCR. Via image captioning, we derive a second text stream that contains a description of the image in a few words. For the OCR text, we first use the Python \texttt{ftfy} package\footnote{\url{https://github.com/rspeer/python-ftfy}} to fix the garbled sequences that result from unexpected encodings (the \emph{mojibake}), like ``à¶´à¶§à·''. Next, all ``@'' and ``\#'' symbols and website addresses are removed from the text. The emojis are converted to text form by the Python \texttt{emoji} package\footnote{\url{https://github.com/carpedm20/emoji}}. Finally, we remove non-English characters and convert the text to lowercase. For image captioning, we utilize a pre-trained encoder-decoder attention model~\cite{xu2016show}\footnote{\url{https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning}}.
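The regex-based part of the OCR cleanup can be sketched in a few lines; the \texttt{ftfy} (encoding repair) and \texttt{emoji} (emoji-to-text) steps named above are deliberately omitted here, since they are handled by those packages:

```python
import re

def clean_ocr_text(text: str) -> str:
    """Sketch of the OCR text cleanup: drop URLs, "@"/"#" symbols and
    non-English characters, then lowercase.  The ftfy and emoji steps
    from the full pipeline are not reproduced here."""
    # Remove website addresses.
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)
    # Remove the "@" and "#" symbols (the attached words are kept).
    text = text.replace("@", " ").replace("#", " ")
    # Remove non-English (non-ASCII) characters.
    text = re.sub(r"[^\x00-\x7F]+", " ", text)
    # Collapse whitespace and lowercase.
    return re.sub(r"\s+", " ", text).strip().lower()
```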
Although the translation from image to text is not very accurate, most likely owing to issues like the overlaid meme text, it was nonetheless beneficial for our classification task. We found that the description becomes more precise when we split the memes into their constituent sub-images where applicable. In that case, the image caption is extracted over every sub-image as well as the entire meme. Finally, the image captions are combined with the word ``and'' and then concatenated with the OCR text, separated by ``. ''. With this rule, the final text of the meme in Figure~\ref{fig:meme} is: ``\textit{mgo ci aindo make make me sandwich!!. a couple of baseball players standing next to each other and a woman holding a sign in front of a sign and a woman standing next to a group of people.}'' \begin{figure}[H] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[scale=0.15]{./Sections/Image/datasetimage/0_0.jpg} \caption{a woman holding a sign in front of a sign} \label{first} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[scale=0.15]{./Sections/Image/datasetimage/1_0.jpg} \caption{a couple of baseball players standing next to each other} \label{second} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[scale=0.1]{./Sections/Image/datasetimage/org.jpg} \caption{a woman standing next to a group of people} \label{org} \end{subfigure} \caption{In (a) and (b), we see ``sub-images'' and corresponding captions. (c) shows the meme and its caption (when not considering the sub-image structure).} \label{fig:meme} \end{figure} We use the entire meme as the image input for ViT. All memes are first resized to 256$\times$256 and center-cropped to 224$\times$224 dimensions. The ViT model uses all 3 RGB channels, so we retain the RGB structure; thus, the input image dimension is 3$\times$224$\times$224.
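The rescaling to $[0,1]$ and the per-image standardization applied afterwards can be sketched with numpy (the resize and center-crop steps, done with standard image tooling, are omitted here):

```python
import numpy as np

def standardize_image(img: np.ndarray) -> np.ndarray:
    """Scale an HxWx3 uint8 image to [0, 1], then standardize this
    individual image to zero mean and unit variance."""
    x = img.astype(np.float32) / 255.0           # pixel range [0, 1]
    return (x - x.mean()) / (x.std() + 1e-8)     # zero mean, unit variance
```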
We first rescale all images to the range $[0, 1]$ and then normalize each individual image to zero mean and unit variance. \subsection{Loss function}\label{weightloss} We decided to use the binary cross-entropy (BCE) loss for both sub-tasks. Due to the imbalance in the class distributions (see Table~\ref{tab:datadis}), in sub-task B we weighted the class-specific loss terms by their support as follows: \begin{equation} \label{lossweight} w_{c} = \frac{\frac{\textrm{NoS}}{\textrm{NoS}(c)}}{\sum_{c'}^{}\frac{\textrm{NoS}}{\textrm{NoS}(c')}}, \quad c \in \{\textrm{Shm}, \textrm{Ste}, \textrm{Obj}, \textrm{Vio}\}, \end{equation} where $\textrm{NoS}$ is the total number of samples in the training set and $\textrm{NoS}(c)$ represents the number of true instances for class $c$. The loss is then computed through the weighted combination of the single BCE terms: \begin{equation} \label{loss} \mathcal{L}_1 = \sum_{c}^{}w_{c}\cdot \textrm{BCE}(\textbf{\textit{p}}_c^B, \textbf{\textit{y}}_c^B). \end{equation} Here, $\textbf{\textit{p}}_c^B$ represents the system's output probability of class $c$ and $\textbf{\textit{y}}_c^B$ is the binary ground truth for sub-task B. Additionally, we employ a teacher forcing loss to connect both sub-tasks. The idea is that an instance should be identified as misogynous and possibly grouped into sub-categories simultaneously. The teacher forcing loss is defined as: \begin{equation} \label{mse} \mathcal{L}_2 = \lVert \textbf{\textit{p}}^A - \textbf{\textit{y}}^A \rVert, \end{equation} where the system's output probability for sub-task A is determined as: \begin{equation} \label{mse1} \textbf{\textit{p}}^A = \textrm{max}\big(\textbf{\textit{p}}_{_\textrm{Shm}}^B, \textbf{\textit{p}}_{_\textrm{Ste}}^B, \textbf{\textit{p}}_{_\textrm{Obj}}^B, \textbf{\textit{p}}_{_\textrm{Vio}}^B \big). \end{equation} The final loss is computed by \begin{equation} \label{lossend} \mathcal{L} = 0.7\cdot \mathcal{L}_1 + 0.3 \cdot \mathcal{L}_2.
\end{equation} \subsection{Model training}\label{others} All models are trained using the PyTorch library~\cite{paszke2019pytorch} for 50 epochs. The AdamW optimizer~\cite{loshchilov2017decoupled} is used for backpropagation, together with a linear learning rate scheduler that warms up the learning rate during the first four epochs of the training stage. The dropout rate is 0.5. The RoBERTa model parameters in the BERTC and the GCAN model are optimized separately. In our GCAN model, the adjacency matrix is computed with a sliding window of length 10. An 8-head self-attention is applied over 3 GCAN layers with an attention dimension of 1024. For all uni-modal models, the batch size is 16 and the initial learning rate is $2\cdot10^{-5}$. The RoBERTa and ViT block parameters in Figure~\ref{fig:smms} are also fine-tuned. The bi-modal models are trained based on the pre-trained uni-modal models. Here, the batch size is 32 and the initial learning rate is \mbox{$5\cdot10^{-6}$}. As the RoBERTa and ViT block parameters in Figure~\ref{fig:smms} are already updated during the uni-modal training stage, we freeze these parameters during bi-modal re-training. To avoid overfitting, we adopt early stopping and exit the training process when the computed F1-score on the inner test set does not increase over 4 epochs. Inspired by~\cite{DBLP:conf/iclr/HuangLP0HW17}, we finally average the model parameters of the two epochs with the highest validation F1-scores during the training stage. The models have the same structure for sub-tasks A and B. The only differences are that in sub-task A, the classifier output dimension $n$ is $1$ and the BCE is used as the loss function (\emph{Setup A}), whereas in sub-task B, the classifier output dimension $n$ equals $4$ and training uses the weighted BCE with teacher forcing (Equation~\ref{lossend}) as the loss function (\emph{Setup B}).
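For illustration, the combined \emph{Setup B} loss of Equations~(\ref{lossweight})--(\ref{lossend}) can be sketched with numpy; the batch-averaged absolute difference used for the teacher forcing term below is one possible reading of Eq.~(\ref{mse}):

```python
import numpy as np

def setup_b_loss(p_b, y_b, y_a, w, eps=1e-7):
    """Sketch of the Setup-B loss: class-weighted BCE over the four
    sub-task B outputs plus a teacher forcing term that ties the derived
    sub-task A probability (max over the four classes) to its label.

    p_b: (batch, 4) predicted probabilities, y_b: (batch, 4) labels,
    y_a: (batch,) sub-task A labels, w: (4,) class weights.
    """
    p_b = np.clip(p_b, eps, 1.0 - eps)
    # Per-class BCE, averaged over the batch.
    bce = -(y_b * np.log(p_b) + (1 - y_b) * np.log(1 - p_b)).mean(axis=0)
    l1 = float((w * bce).sum())            # weighted BCE, Eq. (loss)
    p_a = p_b.max(axis=1)                  # derived sub-task A probability
    l2 = float(np.abs(p_a - y_a).mean())   # teacher forcing term
    return 0.7 * l1 + 0.3 * l2             # final combination
```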
All models are trained using NVIDIA's Volta-based DGX-1 multi-GPU system, using 3 Tesla V100 GPUs with 32~GB memory each. \section{System Overview}\label{Overview} In this section, we specify our uni- and bi-modal models. \begin{figure}[!htb] \centering \hspace{-0.3cm} \begin{subfigure}[b]{0.123\textwidth} \centering \includegraphics[scale=0.515]{./Sections/Image/BERTC.pdf} \caption{BERTC} \label{BERTC} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[b]{0.123\textwidth} \centering \includegraphics[scale=0.515]{./Sections/Image/GCAN.pdf} \caption{GCAN} \label{GCAN} \end{subfigure} \hspace{0.65cm} \begin{subfigure}[b]{0.123\textwidth} \centering \includegraphics[scale=0.515]{./Sections/Image/Image.pdf} \caption{ViT} \label{Image} \end{subfigure} \caption{Uni-modal models, where $\textrm{L}_S$ is the sequence length, which depends on the RoBERTa tokenizer.} \label{fig:smms} \end{figure} Figure~\ref{fig:smms} depicts the three uni-modal models BERTC (\ref{BERTC}), GCAN (\ref{GCAN}), and ViT (\ref{Image}), which form the basis of our further experiments. The bi-modal models are constructed based on trained uni-modal models and our proposed model fusion network, which is further detailed in Section~\ref{mmm}. Finally, we apply soft and hard voting ensembles on the trained candidate models. \subsection{Uni-modal models}\label{smm} As illustrated in Figure~\ref{fig:smms}, every uni-modal model has two outputs: the classification probabilities $\textbf{\textit{p}}_{\textrm{\tiny{i}}}$ and the classification features $\textrm{\textbf{f}}_{\textrm{\tiny{i}}}$. All classifier blocks in our models have the following, identical structure: a fully connected layer reduces the feature dimension to half the input dimension, followed by a ReLU activation and a dropout layer.
Ultimately, an output layer projects the features to the output dimension $n$, and a sigmoid function squashes the range of the output vector components to $(0,1)$, allowing for an interpretation as a vector of label probabilities, with possible overlap in categories. \noindent \textbf{BERTC}: We fine-tune a pre-trained large RoBERTa language model (\texttt{roberta-large}) for classification. The text input is encoded by the RoBERTa model with embedding dimension 1024. The Pooler layer returns the embedding $\textrm{\textbf{f}}_{\textrm{\tiny{BERTC}}}$ of the first classification token \texttt{<s>} and feeds it into the classifier to obtain the probabilities $\textbf{\textit{p}}_{\textrm{\tiny{BERTC}}}$. \noindent \textbf{GCAN}: Again, a pre-trained RoBERTa model extracts contextual text information. Each token is considered as a word node and each meme is a document node. Thus, the word node representation is given by the corresponding RoBERTa word embedding vector. We denote the input embedding sequence of document $k$ as $\textbf{\textit{x}}^k=[ \textbf{\textit{x}}_1^k, \textbf{\textit{x}}_2^k, \cdots \textbf{\textit{x}}_{\textrm{L}_S}^k]$, where $\textbf{\textit{x}}_i^k$, $i\in \{1, \ldots, \textrm{L}_S\}$ is a 1024-dimensional embedding vector of the $i$-th token. As depicted in Figure~\ref{GCAN}, $\textbf{\textit{x}}^k$ is an $\textrm{L}_S\times$1024 matrix. The embedding of the first classification token \texttt{<s>} represents the document classification information. Thus, we use the document-word co-occurrence information TF-IDF as the edge weights for the \texttt{<s>} embedding. All other token embeddings are weighted with the word co-occurrence information PMI. For each document $k$, we extract its specific adjacency matrix ${\tildeb{\textbf{\textit{A}}}}_k$ from the complete adjacency matrix ${\tildeb{\textbf{\textit{A}}}}$ by reducing it to the rows and columns of all the document and word nodes ($i$ and $j$ in Equation~\ref{adj}) that are present in this document.
The extracted document adjacency matrix ${\tildeb{\textbf{\textit{A}}}}_k$ is an $\textrm{L}_S\times\textrm{L}_S$ matrix. The GCAN block in Figure~\ref{GCAN} adopts the multi-head self-attention mechanism in 3 successive GCAN layers to embed the node representations. The queries \textbf{Q}, keys \textbf{K} and values \textbf{V} are identical and set to the respective layer input. The first layer input is given by the RoBERTa word embeddings $\textbf{\textit{x}}^k$ of the input text. The attention of head $j$ is obtained by \vspace{-0.2cm} \begin{equation} \label{dotatttransform} \small \bm{\alpha}_j = \textrm{softmax}\left(\frac{\left(\textbf{W}_j^Q\textbf{Q}^T\right)^T\left(\textbf{W}_j^K\textbf{K}^T\right)}{\sqrt{d_k}}\right)\left(\textbf{W}_j^V\textbf{V}^T\right)^T \end{equation} where $\textbf{W}_j^\ast$ are learned parameters for the queries $\textbf{Q}$, keys $\textbf{K}$, and values $\textbf{V}$, respectively. A superscript $T$ denotes the transpose; $d_k=\frac{d_{att}}{h}$, where $d_{att}$ is the attention dimension and $h$ is the number of attention heads. Having computed the multi-head self-attention, each attention head output is multiplied by the document adjacency matrix ${\tildeb{\textbf{\textit{A}}}}_k$: \begin{equation} \label{dotatt} \tildea{\bm{{\alpha}}}_j = {\tildeb{\textbf{\textit{A}}}}_k\bm{\alpha}_j. \end{equation} Equation~\ref{layerprojection} describes the output $\bm{\alpha}$ of a GCAN layer: the weighted outcomes of all heads are concatenated (concat), and a fully connected layer (FC) projects the representation to the attention dimension. Inspired by~\cite{velivckovic2017graph}, instead of concatenating the weighted attention head outputs, we employ averaging (avg) to fuse these weighted outputs in the last GCAN layer. A fully connected layer again projects the final representation to the attention dimension. Thus, after the GCAN block, the text representation is still an $\textrm{L}_S\times$1024 matrix.
The document classification feature vector $\textrm{\textbf{f}}_{\textrm{\tiny{GCAN}}}$ is obtained by summing all node representations. \begin{equation} \label{layerprojection} \small \bm{\alpha}=\left\{\begin{matrix} \textrm{FC}\left ( \textrm{concat}\left ( \tildea{\bm{{\alpha}}}_1, \cdots \tildea{\bm{{\alpha}}}_h \right ) \right ) & \textrm{not in last layer} \\ \textrm{FC}\left ( \textrm{avg}\left ( \tildea{\bm{{\alpha}}}_1, \cdots \tildea{\bm{{\alpha}}}_h \right ) \right ) & \textrm{in last layer} \\ \end{matrix}\right. \end{equation} \noindent \textbf{ViT}: To extract the visual contextual information, we utilize the pre-trained ViT model \texttt{vit-large-patch16-224} to encode the input image. For this purpose, the input image is split into fixed-size patches, and a linear projection of the flattened patches is used to obtain the patch embedding vectors. The Transformer encoder transforms the embedding vectors. Finally, the embedding $\textrm{\textbf{f}}_{\textrm{\tiny{ViT}}}$ of the first classification token, \texttt{[CLS]}, is fed to the classifier to obtain the prediction probabilities $\textbf{\textit{p}}_{\textrm{\tiny{ViT}}}$. \subsection{Bi-modal models}\label{mmm} Figure~\ref{fig:mmms} shows our fusion model structure. \begin{figure}[!htb] \centering \includegraphics[scale=0.50]{./Sections/Image/Fusion.pdf} \caption{Fusion model structure} \label{fig:mmms} \end{figure} Each model $\textrm{M}_i$ has two outputs: the vector of its classification probabilities $\textbf{\textit{p}}_{i}$ and the classification features $\textrm{\textbf{f}}_{i}$. We concatenate the model classification probabilities and features into a multi-modal representation to make the final decision. Two fusion strategies---stream-weighting-based decision fusion and representation fusion---are considered. The weight predictor and the classifier in Figure~\ref{fig:mmms} both have the same structure as the classifier block in Figure~\ref{fig:smms}.
The weight predictor output dimension is the number $m$ of candidate models for fusion. The stream weighting probability $\textbf{\textit{p}}_{sw}$ is obtained through a weighted combination of the class probability vectors of the fused models, i.e.~ \begin{equation}\textbf{\textit{p}}_{sw}=\sum_{i}^{}\textbf{\textit{p}}_i\cdot w_i. \end{equation} The classifier output dimension is the same as the number of classes $n$. A sigmoid function computes the representation fusion probabilities $\textbf{\textit{p}}_{rf}$ from the combined multi-modal representation. Finally, we average the stream weighting and the representation fusion probabilities. The following model combinations are considered, where $\textrm{M}_i$, $i\in{\{1, 2, 3\}}$, is the $i$-th pre-trained uni-modal model. \renewcommand{\arraystretch}{1.15} \begin{table}[H] \setlength{\tabcolsep}{6pt} \begin{tabular}{|c|c c c|} \hline \textbf{Bi-modal model} & $\textrm{M}_1$ & $\textrm{M}_2$ & $\textrm{M}_3$ \\ \hline\hline BERTC-ViT & BERTC & ViT & - \\ GCAN-ViT & GCAN & ViT & - \\ BERTC-GCAN & BERTC & GCAN & - \\ BERTC-GCAN-ViT & BERTC & GCAN & ViT \\ \hline \end{tabular} \end{table} \subsection{Ensemble learning}\label{ensemble} Having established a number of possible uni-modal and bi-modal models, we now combine these trained models into ensembles. It has been reported in many studies that ensemble learning can enhance performance in comparison to single learners~\cite{onan2016ensemble, zhu2020enhance, gomes2017survey}. Therefore, we consider soft and hard voting ensemble approaches. We use the Python \texttt{sklearn} package\footnote{\url{https://github.com/scikit-learn/scikit-learn}} for 10-fold cross-validation. Thus, each model structure is trained ten times with different inner test sets. Finally, these ten models are used to evaluate the official test set and deliver ten predictions for every sample.
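The 10-fold split can be sketched with a minimal stdlib routine; note that this interleaved index assignment is only one possibility (\texttt{sklearn}'s \texttt{KFold}, used in our pipeline, assigns contiguous blocks):

```python
def kfold_indices(n_samples: int, k: int = 10):
    """Yield (train, test) index lists for k-fold cross-validation;
    each fold serves once as the inner test set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for j in range(k):
        test = folds[j]
        train = [i for f in folds[:j] + folds[j + 1:] for i in f]
        yield train, test
```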
The soft voting ensemble method is implemented as follows: $\textbf{\textit{p}}_{\small \textrm{M}_i}$, the ensemble probabilities that are used in the overall class decisions, are computed via \begin{equation} \label{softvote} \textbf{\textit{p}}_{\small \textrm{M}_i}=\sum_{j=0}^{9}w_{\small \textrm{M}_i}^j\cdot \textbf{\textit{p}}_{\small \textrm{M}_i}^j. \end{equation} Here, $\textbf{\textit{p}}_{\small \textrm{M}_i}^j$ denotes the probabilities of model $\textrm{M}_i$ in the $j$-th fold. The weights $w_{\small \textrm{M}_i}^j$ are computed by \begin{equation} \label{weight} w_{\small \textrm{M}_i}^j = \frac{\textrm{F1}_{\small \textrm{M}_i}^j}{\sum_{f}^{}\textrm{F1}_{\small \textrm{M}_i}^f}. \end{equation} $\textrm{F1}_{\small \textrm{M}_i}^j$ corresponds to the best F1-score of model $\textrm{M}_i$ over all epochs, computed on the inner test set in fold $j$. This soft voting ensemble, which uses the same model structure but the multiple outcomes from 10-fold cross-validation, is referred to as a \emph{dataset-level ensemble} in the following. The second type of ensemble---the \emph{model-level ensemble}---is constructed from the \emph{dataset-level ensemble} results of each model. We use a hard voting strategy with seven candidate models (BERTC, GCAN, ViT, BERTC-ViT, GCAN-ViT, BERTC-GCAN, and BERTC-GCAN-ViT). In this approach, we set the final prediction for a data point to one if at least half of the considered models vote one, making it a simple majority-voting strategy. \section*{Acknowledgements} This work was supported by the PhD school ``SecHuman - Security for Humans in Cyberspace'' of the federal state of NRW.
\section{Extended Mott-Hagedorn resonance gas} \subsection{No quarks and gluons; hadronic spectral function with state-independent ansatz} We introduce the width $\Gamma$ of a resonance in the statistical model through the spectral function \begin{equation} \label{one} A(M,m)=N_M \frac{\Gamma \cdot m}{(M^2-m^2)^2+\Gamma^2 \cdot m^2}~, \end{equation} a Breit-Wigner distribution of virtual masses with a maximum at $M = m$ and the normalization factor \begin{equation} \label{two} N_M = \left[\, \int\limits_{m_0^2}^\infty {d(M^2)} \frac{ \Gamma \cdot m }{ ( M^2 - m^2)^2 + \Gamma^2 \cdot m^2 } \right]^{-1} =\frac{1}{ \frac{\pi}{2} + \arctan \left( \frac{m^2 - m^2_0 }{ \Gamma \cdot m} \right) }\,. \end{equation} The logarithm of the grand canonical partition function for hadrons and resonances can be written as \begin{eqnarray} \label{partfn} \log Z(T,V,\mu_B,\mu_S) &=& V\sum_{i:~ m_i < m_0}g_i\delta_i\!\int \frac{d^3 k}{ (2 \pi)^3 }\log\left(1+\delta_i e^{-(\!\sqrt{k^2 +m_i^2} - \mu_i)/T }\right)\\ \nonumber & + &V\,\sum_{i:~ m_i \geq m_0} g_i\delta_i\!\int\limits_{m_0^2}^\infty {d(M^2)}~A(M,m_i)\int \frac{d^3 k}{ (2 \pi)^3 }\log\left(1+\delta_i e^{-(\!\sqrt{k^2 +M^2} - \mu_i)/T }\right)\,, \end{eqnarray} with the degeneracy $g_i$ and the chemical potential $\mu_i = B_i \cdot \mu_B + S_i \cdot \mu_S$ of hadron $i$, where $\mu_B$ and $\mu_{S}$ are the chemical potentials for baryon number and strangeness, respectively. For mesons $\delta_{i} = -1$ and for baryons $\delta_{i} = +1$. The model ansatz for the resonance width $\Gamma$ is given by \cite{Blaschke:2003ut} \begin{equation} \label{three} \Gamma (T) = C_{\Gamma}~ \left( \frac{ m}{T_H} \right)^{N_m} \left( \frac{ T}{T_H} \right)^{N_T} \exp \left( \frac{ m}{T_H } \right)~, \end{equation} where $C_{\Gamma} = 10^{-4}$ MeV, $N_m = 2.5$, $N_T = 6.5$ and the Hagedorn temperature is $T_H = 165$ MeV.
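For orientation, the ansatz (\ref{three}) is easily evaluated numerically; a minimal sketch with the parameter values quoted above (all energies in MeV):

```python
import math

# Parameters of the width ansatz, Eq. (3): C_Gamma = 1e-4 MeV,
# N_m = 2.5, N_T = 6.5, Hagedorn temperature T_H = 165 MeV.
C_GAMMA, N_M, N_T, T_H = 1e-4, 2.5, 6.5, 165.0

def width(m_mev: float, t_mev: float) -> float:
    """Resonance width Gamma(T) in MeV for a resonance of mass m."""
    return (C_GAMMA * (m_mev / T_H) ** N_M
            * (t_mev / T_H) ** N_T
            * math.exp(m_mev / T_H))
```

The exponential mass factor makes the heavy resonances melt first, while the strong power of $T$ suppresses all widths well below $T_H$.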
The internal energy density of this model with zero resonance proper volume for given temperature $T$ and chemical potentials $\mu_B$, $\mu_S$ can be cast in the form \begin{equation} \label{four} \varepsilon(T,\mu_B,\mu_S) = \sum_{i:~ m_i < m_0} g_i ~\varepsilon_i (T,\mu_i;m_i) + \sum_{i:~ m_i \geq m_0} g_i ~\int\limits_{m_0^2}^\infty {d(M^2)} ~A(M,m_i)~\varepsilon_i (T,\mu_i;M), \end{equation} where $m_0 = 1$ GeV and the internal energy density per degree of freedom with a mass $M$ is \begin{equation} \label{five} \varepsilon_i (T,\mu_i;M) = \int \frac{d^3 k}{ (2 \pi)^3 } \frac{\sqrt{k^2+M^2}}{\exp \left(\frac{\sqrt{k^2 +M^2} - \mu_i}{T} \right) + \delta_i } \,. \end{equation} According to Eq. (\ref{four}), the energy density of hadrons consists of the contribution of light hadrons with $m_i < m_0$ and the contribution of heavier hadrons with $m_i \geq m_0$, smeared with the spectral function. For simplicity, we assume $n_{S} = 0$ for the strangeness number density and $n_{B}= 0$ for the baryon number density. Then $\mu_{B} = 0$ and $\mu_{S} = 0$ always, so the temperature is the only significant statistical parameter here. The pressure $P$ is obtained from the thermodynamic relation \begin{equation}\label{pressure} P=\frac{\partial(T\log Z)}{\partial V}\,, \end{equation} which becomes in our case \[ P=\frac{T}{V}\log Z\,.\] The squared speed of sound for vanishing chemical potentials is given by \begin{equation} \label{ten} c_s^2 = \frac{\partial P}{\partial \varepsilon }~. \end{equation} In Fig.~\ref{Fig.1} we show the results for the thermodynamic quantities (pressure, energy density and squared speed of sound) of the MHRG model at this stage. The nice correspondence with results from lattice QCD in the temperature region $T\sim T_c \sim 200$ MeV is not accidental: it has been shown in \cite{Borsanyi:2010cj} that a hadron resonance gas perfectly describes the lattice QCD data there.
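Equation (\ref{five}) can be checked numerically: for a single massless bosonic degree of freedom at $\mu_i=0$ the momentum integral reproduces the Stefan-Boltzmann value $\pi^2T^4/30$. A rough sketch in natural units (the simple rectangle-rule integration is for illustration only):

```python
import math

def energy_density(t, m=0.0, mu=0.0, delta=-1, n=4000):
    """Energy density per degree of freedom, Eq. (5), by rectangle-rule
    integration over momentum k (natural units; delta = -1 for bosons,
    +1 for fermions)."""
    kmax = 30.0 * t + 10.0 * m        # integrand is negligible beyond this
    h = kmax / n
    total = 0.0
    for i in range(1, n + 1):
        k = i * h
        e = math.sqrt(k * k + m * m)
        total += k * k * e / (math.exp((e - mu) / t) + delta)
    return total * h / (2.0 * math.pi ** 2)
```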
For $T>T_c$ the broadening of the spectral function (\ref{one}), which at this stage of the model affects only the hadronic resonances with $m>m_0$, leads to the vanishing of their contribution at about $2T_c$, while the light hadrons with masses $m<m_0$ are not affected and gradually reach the Stefan-Boltzmann (SB) limit determined by their number of degrees of freedom. As has been noted in \cite{Brown:1991dj}, this number ($\sum_{i=\pi,K,\eta,f_0,\rho,\omega,K^*,\eta',f_0,a_0}g_i= 3+4+1+1+9+3+6+1+1+3=32$) accidentally (or by duality arguments) coincides with the effective number of quark degrees of freedom ($\sum_{i=q,\bar{q}}g_i=\frac{7}{8}\cdot 2\cdot 2\cdot N_c\cdot N_f=31.5$ for $N_c=N_f=3$, counting two spin states for quarks and antiquarks). Therefore, imposing that all mesons lighter than $m_0=1$ GeV are stable provides us with a SB limit at high temperatures which mimics that of deconfined quark matter in the case of three flavors. Although providing us with an excellent fit of the lattice data, the high-temperature phase of this model is unphysical since it ignores the Mott effect for light hadrons. Due to the chiral phase transition at $T_c$, the quarks lose their mass and therefore the threshold of the continuum of quark-antiquark scattering states is lowered. At the same time the light meson masses, however, remain almost unaffected by the increase in the temperature of the system. Consequently, they merge with the continuum and become unbound: their spectral function changes from a delta function (on-shell bound states) to a Breit-Wigner type (off-shell, resonant scattering states). This phenomenon is the hadronic analogue \cite{Zablocki:2010zz,Blaschke:2013zaa} of the Mott-Anderson transition for electrons in solid state physics (insulator-conductor transition). \begin{figure}[!th] \includegraphics[width=0.7\textwidth]{mhrg_latt_all} \caption{\label{Fig.1} (Color online) Thermodynamic quantities for the old Mott-Hagedorn Resonance Gas model \cite{Blaschke:2003ut}.
Different line styles correspond to different values of the parameter $N_m$ in the range from $N_m=2.5$ (dashed line) to $N_m=3.0$ (solid line). Lattice QCD data are from Ref.~\cite{Borsanyi:2010cj}. } \end{figure} This analogy was first introduced for the hadronic-to-quark-matter transition in \cite{Blaschke:1984yj}. Later, within the NJL model, a microscopic approach to the thermodynamics of the Mott dissociation of mesons in quark matter was given in the form of a generalized Beth-Uhlenbeck equation of state \cite{Hufner:1994ma}, see also \cite{Radzhabov:2010dd}. Recently, a detailed treatment of the Mott dissociation of two-quark correlations \cite{Blaschke:2013zaa}, and in particular of pions \cite{Wergieluk:2012gd,Yamazaki:2012ux,Dubinin:2013yga}, was given within the PNJL model as well as its nonlocal generalization \cite{Benic:2013tga}. \subsection{Hadronic spectral function with state-dependent ansatz} As a microscopic treatment of the Mott effect for all resonances is presently out of reach, we introduce an ansatz for a state-dependent hadron resonance width $\Gamma_i(T)$, given by the inverse collision time scale recently suggested within an approach to the chemical freeze-out and chiral condensate in a resonance gas \cite{Blaschke:2011ry}, \begin{equation} \label{Gamma} \Gamma_i (T) = \tau_{\rm coll,i}^{-1}(T) = \sum_{j}\lambda\,\langle r_i^2\rangle_T \langle r_j^2\rangle_T~n_j(T)~, \end{equation} which is based on a binary collision approximation and relaxation time ansatz, using the geometrical Povh-H\"ufner law \cite{Povh:1990ad} for the in-medium hadron-hadron cross sections. In Eq.~(\ref{Gamma}) the coefficient $\lambda$ is a free parameter, $n_j(T)$ is the partial density of the hadron species $j$, and the mean squared radii of hadrons $\langle r_i^2 \rangle_T$ acquire in the medium a temperature dependence which is governed by the (partial) restoration of chiral symmetry.
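To get a feeling for the magnitudes entering Eq.~(\ref{Gamma}), here is a minimal numerical sketch (ours; the value $\lambda=1$ GeV$^2$ is purely illustrative, and the pion gas is treated as massless) of the $\pi\pi$ contribution to the pion width, using the vacuum pion radius $r_\pi=0.59$ fm:

```python
import math

HBARC = 0.1973  # GeV*fm, for converting radii to natural units

def pion_density(T, n_terms=200):
    """Massless ideal Bose gas approximation for the pion density,
    n = g * zeta(3) * T^3 / pi^2 with g = 3; T in GeV, n in GeV^3."""
    zeta3 = sum(1.0 / n**3 for n in range(1, n_terms + 1))
    return 3 * zeta3 * T**3 / math.pi**2

def pion_collision_width(T, lam=1.0, r_pi_fm=0.59):
    """Pi-pi term of Eq. (Gamma): Gamma = lam * <r^2>^2 * n(T);
    lam is the free parameter (here in GeV^2, value illustrative only)."""
    r2 = (r_pi_fm / HBARC)**2              # <r^2> in GeV^-2
    return lam * r2**2 * pion_density(T)   # width in GeV

for T in (0.10, 0.15, 0.20):
    print(T, pion_collision_width(T))
```

In this sketch the width grows as $T^3$ through the density factor alone; near the Mott transition the additional growth of $\langle r^2\rangle_T$, dictated by the dropping chiral condensate, makes $\Gamma_i$ diverge.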
For the pion this was quantitatively studied within the NJL model \cite{Hippe:1995hu}, and it was shown that close to the Mott transition the pion radius is well approximated by \begin{equation} r_\pi^2(T)=\frac{3}{4\pi^2} f_\pi^{-2}(T) =\frac{3M_\pi^2}{4\pi^2m_q} |\langle \bar{q} q \rangle_{T}|^{-1}~. \end{equation} Here the Gell-Mann--Oakes--Renner relation has been used, and the pion mass is assumed to be chirally protected and thus temperature independent. For the nucleon, we assume the radius to consist of two components, a medium-independent hard-core radius $r_0$ and a pion cloud contribution, $r_N^2(T)=r_0^2+r_\pi^2(T)~,$ where from the vacuum values $r_\pi=0.59$ fm and $r_N=0.74$ fm one gets $r_0=0.45$ fm. A key point of our approach is that the temperature-dependent hadronic radii diverge when hadron dissociation (Mott effect) sets in, driven essentially by the restoration of chiral symmetry. As a consequence, in the vicinity of the chiral restoration temperature all meson radii behave like that of the pion and all baryon radii like that of the nucleon. The resulting energy density behaviour is shown in Fig.~\ref{Fig.1a}. \begin{figure}[!th] \includegraphics[width=0.7\textwidth]{MHRG-spectral-m0.eps} \caption{\label{Fig.1a} (Color online) Energy density (red lines and symbols) and pressure (black lines and symbols) for the state-dependent width model of Eq.~(\ref{Gamma}) and three values of the mass threshold $m_0$: 1 GeV (solid lines), 980 MeV (dashed lines), 0 (dash-dotted lines). Lattice QCD data are from Ref.~\cite{Borsanyi:2010cj}.} \end{figure} This part of the model we call the Mott-Hagedorn Resonance Gas (MHRG). When all hadrons are gone at $T\sim 250$ MeV, we are clearly missing degrees of freedom!
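The hard-core radius quoted above follows directly from the two vacuum radii; a one-line check (ours):

```python
import math

# r_N^2 = r_0^2 + r_pi^2  =>  r_0 = sqrt(r_N^2 - r_pi^2), all radii in fm
r_pi, r_N = 0.59, 0.74
r0 = math.sqrt(r_N**2 - r_pi**2)
print(round(r0, 2))  # 0.45
```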
\section{Quarks, gluons and hadron resonances above $T_c$} We improve the PNJL model over its standard versions \cite{Fukushima:2003fw,Ratti:2005jh} by adding perturbative corrections in $\mathcal{O}(\alpha_s)$ for the high-momentum region above the three-momentum cutoff $\Lambda$. In a second step, the MHRG part is replaced by its final form, using the state-dependent spectral function for the description of the Mott dissociation of all hadron resonances above the chiral transition. The total pressure takes the form \begin{equation} P(T)=P_{\rm MHRG}(T)+P_{\rm PNJL,MF}(T)+P_2(T) ~, \end{equation} where $P_{\rm MHRG}(T)$ stands for the pressure of the MHRG model, accounting for the dissociation of hadrons in hot, dense matter. The $\mathcal{O}(\alpha_s)$ corrections can be split into quark and gluon contributions, \begin{equation} \label{P2} P_2(T)=P_2^{{\rm quark}}(T) + P_2^{{\rm gluon}}(T)~, \end{equation} where $P_2^{{\rm quark}}$ stands for the quark contribution and $P_2^{{\rm gluon}}$ contains the ghost and gluon contributions. The total perturbative QCD correction to $\mathcal{O}(\alpha_s)$ is \begin{equation} P_2=-\frac{8}{\pi}\alpha_s T^4\left(I_{\Lambda}^+ +\frac{3}{\pi^2}\left((I_{\Lambda}^+)^2+(I_{\Lambda}^-)^2\right)\right), \end{equation} where $I^{\pm}_{\Lambda}=\int\limits_{\Lambda/T}^{\infty}\frac{{\rm d}x~x}{{\rm e}^x\pm 1}$. The corresponding contribution to the energy density follows in the standard way from the thermodynamic relation \begin{equation*} \label{six} \varepsilon + P = T \cdot s = T \cdot \frac{\partial P}{\partial T }~. \end{equation*} We now include an effective description of the dissociation of hadrons due to the Mott effect in the hadron resonance gas model by inserting the state-dependent hadron resonance width (\ref{Gamma}) into the definition of the HRG pressure \begin{equation} P_{\rm MHRG}(T)=\sum_{i}\delta_id_i\!\int\!\frac{d^3p}{(2\pi)^3}\!\int\! dM\, A_i(M)\, T \ln\left(1+\delta_i{\rm e}^{-\sqrt{p^2+M^2}/T} \right)\,.
\end{equation} From the pressure as a thermodynamic potential all relevant thermodynamical functions can be obtained. Combining the $\alpha_s$-corrected mean-field PNJL model for the quark-gluon subsystem with the MHRG description of the hadronic resonances, we obtain the results shown in Fig.~\ref{Fig.3}, where the resulting partial contributions are compared with lattice QCD data from Ref.~\cite{Borsanyi:2010cj}. We see that the lattice QCD thermodynamics is in full accordance with a hadron resonance gas up to a temperature of $\sim 170$ MeV, which corresponds to the pseudocritical temperature of the chiral phase transition. The lattice data saturate below the Stefan-Boltzmann limit of an ideal quark-gluon gas at high temperatures. The PNJL model, however, attains this limit by construction. The deviation is to good accuracy described by perturbative corrections to $\mathcal{O}(\alpha_s)$ which vanish at low temperatures due to an infrared cutoff procedure. The transition region $170\le T[{\rm MeV}]\le 250$ is described by the MHRG model, resulting in a decreasing HRG pressure which vanishes at $T \sim 250$ MeV. \begin{figure}[!th] \includegraphics[width=0.9\textwidth]{parton-fraction} \caption{\label{Fig.3} (Color online) Thermodynamic quantities (pressure - left panels; energy density - right panels) for the new Mott-Hagedorn Resonance Gas where quark-gluon plasma contributions are described within the PNJL model including $\alpha_s$ corrections (dashed lines). The total quantities are shown by full lines and compared to lattice QCD data \cite{Borsanyi:2010cj} in the upper panels. In the lower panels we show the fraction of partonic pressure (lower left panel) and the fraction of partonic energy density (lower right panel) by the solid blue lines, respectively. Also shown in these lower panels is the fraction of partonic degrees of freedom $\lambda$, as introduced in Ref.~\cite{Nahrgang:2013xaa}.
} \end{figure} \section{Conclusion and outlook} After having presented the MHRG-PNJL model with the state-dependent spectral function approach, we have summarized the thermodynamic quantities in Fig.~\ref{Fig.3}. We have presented two stages of an effective model description of QCD thermodynamics at finite temperatures which properly accounts for the fact that the QCD transition region is dominated by a tower of hadronic resonances. In the first of the two stages of the developments presented here, we have used the fact that the number of low-lying mesonic degrees of freedom with masses below $\sim 1$ GeV approximately equals that of the thermodynamic degrees of freedom of a gas of quarks and gluons. In the second one we have further developed a generalization of the Hagedorn resonance gas thermodynamics which includes the finite lifetime of heavy resonances in a hot and dense medium by a model ansatz for a temperature- and mass-dependent spectral function, motivated by a model for the collision time successfully applied in the kinetic description of chemical freeze-out from a hadron resonance gas. The presented formalism allows for the analysis of the relative contributions originating from partonic and hadronic degrees of freedom. The latter, still present even at temperatures above the critical one \cite{Ratti:2011au}, lead to clear phenomenological effects.
In particular, they significantly influence the behavior of heavy quark observables in the hot QCD phase above $T_c$. The recent analysis of this effect \cite{Nahrgang:2013xaa} was based on rather arbitrary assumptions concerning hadronic degrees of freedom at higher temperatures. By comparing the improved PNJL and MHRG contributions to the pressure and energy density, as presented in Fig.~\ref{Fig.3}, we were able to extract the fraction of pressure or energy density carried by partonic degrees of freedom, without going beyond our model assumptions. This opens interesting new applications of the approach presented here to the phenomenology of ultra-relativistic heavy-ion collisions. \subsection*{Acknowledgment} This work was supported in part by the Polish National Science Center (NCN) under contract No. N~N202 0523 40 and the ``Maestro'' programme UMO-2011/02/A/ST2/00306 (D.B.).
\section{Introduction}\label{intro} Let $\field$ be $\real$ or $\complex$. Throughout this paper, unless otherwise stated, all mappings are smooth (that is, of class $C^{\infty}$ if $\field$ = $\real$ or holomorphic if $\field$ = $\complex$). Let $S$ be a subset consisting of $r$ distinct points in $\field^n$. A map-germ $f:(\field^n, S)\rightarrow (\field^p, 0)$ is called a \textit{multigerm}. A multigerm $f:(\field^n, S)\rightarrow (\field^p, 0)$ can be identified with $\{ f_k:(\field^n, 0)\rightarrow (\field^p, 0) |\, 1\leq k\leq r \}$. Each $f_k$ is called a \textit{branch} of $f$. If $r=1$, $f$ is called a \textit{monogerm}. Let $C_{n,S}$ (resp., $C_{p,0}$) be the set of function-germs $(\field^n, S)\rightarrow \field$ (resp., $(\field^p, 0)\rightarrow \field$), and let $m_{n,S}$ (resp., $m_{p,0}$) be the subset of $C_{n,S}$ (resp., $C_{p,0}$) consisting of function-germs $(\field^n, S)\rightarrow (\field, 0)$ (resp., $(\field^p, 0)\rightarrow (\field, 0)$). For a non-negative integer $i$, let $m_{n,S}^i$ be the subset of $C_{n,S}$ consisting of function-germs $(\field^n, S)\rightarrow \field$ whose Taylor series terms up to order $i-1$ vanish. The sets $C_{n,S}$ and $C_{p,0}$ have natural $\field$-algebra structures. For a multigerm $f:(\field^n, S)\rightarrow (\field^p, 0)$, let $f^*:C_{p,0}\rightarrow C_{n,S}$ be the $\field$-algebra homomorphism defined by $f^*(\psi)=\psi \circ f$. Set $Q(f)=C_{n,S}/f^{*}m_{p,0}C_{n,S}$ and $\delta(f)= \dim_{\field} Q(f)$. For a map-germ $f:(\field^n, S)\rightarrow \field^p$, let $\theta_{n,S}(f)$ be the set of germs of vector fields along $f$. The set $\theta_{n,S}(f)$ has a natural $C_{n,S}$-module structure and is identified with the direct sum of $p$ copies of $C_{n,S}$.
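For a concrete illustration of $Q(f)$ and $\delta(f)$ (a standard example, not taken from this paper), consider the cusp monogerm $f:(\field,0)\rightarrow(\field^2,0)$, $f(x)=(x^2,x^3)$:

```latex
% The ideal f^* m_{2,0} C_{1,\{0\}} is generated by x^2 and x^3, hence equals (x^2), so
Q(f) \;=\; C_{1,\{0\}}/(x^2) \;\cong\; \langle 1,\, x \rangle_{\field},
\qquad \delta(f) \;=\; \dim_{\field} Q(f) \;=\; 2 \;<\; \infty .
```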
Put $\theta_{S}(n)=\theta_{n,S}(\mathrm{id}_{(\field^n, S)})$ and $\theta_0(p)=\theta_{p, \{0\}}(\mathrm{id}_{(\field^p, 0)})$, where $\mathrm{id}_{(\field^n, S)}$ (resp., $\mathrm{id}_{(\field^p, 0)}$) is the germ of the identity mapping of $(\field^n, S)$ (resp., $(\field^p, 0)$). For a multigerm $f:(\field^n, S)\rightarrow (\field^p, 0)$, following Mather \cite{ma}, define $tf$ : $\theta_{S}(n)\rightarrow \theta_{n,S}(f)$ (resp., $\omega f$ : $\theta_0(p)\rightarrow \theta_{n,S}(f)$) as $tf(\xi)=df\circ \xi$ (resp., $\omega f(\eta)=\eta\circ f$), where $df$ is the differential of $f$. Following Wall \cite{wa}, set $T{\cal R}_e(f)=tf(\theta_{S}(n))$ and $T{\cal L}_e(f)= \omega f(\theta_0(p))$. For a multigerm $f:(\field^n, S)\rightarrow (\field^p, 0)$, a vector field $\xi\in\theta_S(n)$ is said to be \textit{lowerable} if $df\circ\xi$ $\in T{\cal R}_e(f)\cap T{\cal L}_e(f)$. The set of lowerable vector fields is denoted by $\Lower(f)$. The set $\Lower(f)$ has a $C_{p,0}$-module structure via $f$. The notion of lowerable vector field, which was introduced by Arnol'd \cite{ar} for studying bifurcations of wave front singularities, is significant in Singularity Theory (for instance, see \cite{is}). In this paper, we investigate the following problem. \begin{problem}\label{prob1} Let $f:(\field^n, S)\rightarrow (\field^p, 0)$ be a multigerm satisfying $\delta(f)<\infty$. Then, is $\Lower(f)$ finitely generated? In the case that $\Lower(f)$ is finitely generated, prove it in a constructive way. \end{problem} Our first result is the following Proposition \ref{prop1}, which reduces Problem \ref{prob1} to the problem on $T{\cal R}_e(f)\cap T{\cal L}_e(f)$. \begin{prop}\label{prop1} Let $f:(\field^n, S)\rightarrow (\field^p, 0)$ be a multigerm satisfying $\delta(f)<\infty$. Then, $tf$ is injective. 
\end{prop} \noindent In the complex analytic case, since $C_{p,0}$ is Noetherian and $T{\cal R}_e(f)\cap T{\cal L}_e(f)$ is a $C_{p,0}$-submodule of the finitely generated module $\theta_{n,S}(f)$, it follows that $T{\cal R}_e(f)\cap T{\cal L}_e(f)$ is finitely generated. However, the algebraic argument gives no constructive proof. Moreover, in the real $C^{\infty}$ case, even finite generation of $T{\cal R}_e(f)\cap T{\cal L}_e(f)$ seems to be open. The main purpose of this paper is to give a constructive proof of the following theorem, which works well in both the real $C^{\infty}$ case and the complex analytic case. \begin{thm}\label{thm1} Let $f:(\field^n, S)\rightarrow (\field^p, 0)$ be a finitely ${\cal L}$-determined multigerm. Then, $T{\cal R}_e(f)\cap T{\cal L}_e(f)$ is finitely generated as a $C_{p,0}$-module via $f$. \end{thm} \noindent Here, a multigerm $f:(\field^n, S)\rightarrow (\field^p, 0)$ is said to be \textit{finitely ${\cal L}$-determined} if there exists a positive integer $\ell$ such that $m_{n,S}^\ell\theta_{n,S}(f) \subset T{\cal L}_e(f)$ holds. It is easily seen that if $f$ is finitely ${\cal L}$-determined, then $\delta(f)$ is finite. Thus, by combining Proposition \ref{prop1} and Theorem \ref{thm1}, we have the following partial affirmative answer to Problem \ref{prob1}. \begin{cor}\label{cor1} Let $f:(\field^n, S)\rightarrow (\field^p, 0)$ be a finitely ${\cal L}$-determined multigerm. Then, $\Lower(f)$ is finitely generated as a $C_{p,0}$-module via $f$. \end{cor} \bigskip In Section \ref{proof1} (resp., Section \ref{proof2}), Proposition \ref{prop1} (resp., Theorem \ref{thm1}) is proved. \section*{Acknowledgement} This work is partially supported by JSPS and CAPES under the Japan-Brazil research cooperative program. Y.M. is supported by Grant-in-Aid for JSPS Fellows Grant Number 251998. 
\section{Proof of Proposition \ref{prop1}}\label{proof1} It is sufficient to show that if $f:(\field^n, 0)\rightarrow (\field^p, 0)$ is a monogerm satisfying $\delta(f)<\infty$, then $tf$ is injective. The property $\delta(f)<\infty$ implies that $f$ is finite to one (for instance, see \cite{GG}). Suppose that $tf (\xi) = 0$. Then it follows that $f$ is constant along any integral curve of $\xi$. Since $f$ is finite to one, it follows that any integral curve of $\xi$ must consist of only one point. This means that $\xi = 0$. Thus, $tf$ must be injective. This completes the proof. \qed \section{Proof of Theorem \ref{thm1}}\label{proof2} Since $f$ is finitely ${\cal L}$-determined, there exists a positive integer $\ell$ such that \begin{equation}\label{eqn1} m_{n,S}^\ell\theta_{n,S}(f) \subset T{\cal L}_e(f) \end{equation} and $\delta(f)<\infty$. Thus, we have that $Q(f_k)$ is a finite dimensional vector space for any $k$ ($1\leq k \leq r$). Set $\delta(f_k) = m_k$ and $Q(f_k)=\{[\phi_{k,1}],\ldots,[\phi_{k,m_k}]\}_\field$, where $\phi_{k,j}\in C_{n,\{0\}}$ and $[\phi_{k,j}]=\phi_{k,j}+f_{k}^{*}m_{p, 0}C_{n,\{0\}}$ for any $j$ ($1\leq j \leq m_k$). Now, we find a finite set of generators for $T{\cal R}_e(f)\cap T{\cal L}_e(f)$. Let $(x_1,\ldots, x_n)$ (resp., $(X_1, \ldots, X_p)$) be the standard local coordinates of $\field^n$ (resp., $\field^p$) at the origin. Then, by the preparation theorem, $T{\cal R}_e(f_k)$ may be expressed as follows: \[ \left. 
\left\{ \left(\begin{array}{ccc} \displaystyle{\frac{\partial(f_{k,1})}{\partial x_1}}&\cdots&\displaystyle{\frac{\partial(f_{k,1})}{\partial x_n}} \\ \vdots&&\vdots \\ \displaystyle{\frac{\partial(f_{k,p})}{\partial x_1}}&\cdots&\displaystyle{\frac{\partial(f_{k,p})}{\partial x_n}} \end{array}\right) \left(\begin{array}{c} \displaystyle{\sum_{1\leq j\leq m_k}(\psi_{k,1,j}\circ f_k)\phi_{k,j}} \\ \vdots \\ \displaystyle{\sum_{1\leq j\leq m_k}(\psi_{k,n,j}\circ f_k)\phi_{k,j}} \end{array}\right) \right|\, \psi_{k,i,j}\in C_{p,0} \right\} , \] where $f_k=(f_{k,1},\ldots,f_{k,p})$. It can be simplified as follows: \[ \left. \left\{ \sum_{1\leq i\leq n, 1\leq j\leq m_k }^{}(\psi_{k,i,j}\circ f_k)\xi_{k,i,j} \right|\, \psi_{k,i,j}\in C_{p,0} \right\}, \] where $\xi_{k,i,j}$ is the transposed matrix of \[ \left(\frac{\partial (f_{k,1})}{\partial x_i}\phi_{k,j},\ldots,\frac{\partial (f_{k,p})}{\partial x_i}\phi_{k,j} \right). \] Note that $\xi_{k,i,j} \in T{\cal R}_e(f_k)$. For simplicity, we abbreviate $\displaystyle{\sum_{1\leq i\leq n, 1\leq j\leq m_k }}$ to $\displaystyle{\sum_{i, j}}$. Let $\bar{\eta}=(\bar{\eta_1},\ldots,\bar{\eta_r})$ be an element of $T{\cal R}_e(f)\cap T{\cal L}_e(f)$. Then, $\bar{\eta}$ may be expressed as follows: \[ \bar{\eta}=(\bar{\eta}_1,\ldots,\bar{\eta}_r)=\left(\sum_{i,j}(\psi_{1,i,j}\circ f_1)\xi_{1,i,j},\ldots,\sum_{i,j}(\psi_{r,i,j}\circ f_r)\xi_{r,i,j}\right). \] For a $p$-tuple of non-negative integers $\alpha=(\alpha_1,\ldots,\alpha_p)$, set $|\alpha|=\alpha_1+\cdots+\alpha_p$, $f_k^{\alpha}=f_{k,1}^{\alpha_1}\cdots f_{k,p}^{\alpha_p}$ and $X^{\alpha}=X_1^{\alpha_1}X_2^{\alpha_2}\cdots X_p^{\alpha_p}$. 
The function-germ $\psi_{k,i,j} \in C_{p,0}$ can be written as the sum of a polynomial of total degree less than or equal to $\ell-1$ and an element in $m^\ell_{p,0}$ as follows: \[ \psi_{k,i,j}(X_1,\ldots,X_p)=\sum_{0 \leq |\alpha| \leq \ell-1}c_{k,i,j,\alpha}X^{\alpha} +\sum_{|\alpha|=\ell}\tilde{\psi}_{k,i,j,\alpha}X^{\alpha}, \] where $c_{k,i,j,\alpha} \in \field$ and $\tilde{\psi}_{k,i,j,\alpha} \in C_{p,0}$. Note that $\ell$ is the positive integer given in $(\ref{eqn1})$. Then, we have the following: \[ \bar{\eta}_k=\sum_{i,j} \, \sum_{0 \leq |\alpha| \leq \ell-1}c_{k,i,j,\alpha}(f_k^{\alpha}\xi_{k,i,j}) +\sum_{i,j} \, \sum_{|\alpha|=\ell}(\tilde{\psi}_{k,i,j,\alpha}\circ f_k)(f_k^{\alpha}\xi_{k,i,j}). \] Set $\bar{\xi}_{k,i,j,\alpha}= (0,\dots,0,\underbrace{f_k^{\alpha}\xi_{k,i,j}}_{k\mbox{-th component}},0,\ldots,0)$. Note that $\bar{\xi}_{k,i,j,\alpha}\in T{\cal R}_e(f)$. Then, we have the following: \[ \bar{\eta} = \sum_{1\leq k\leq r}\sum_{i,j}\sum_{0 \leq |\alpha| \leq \ell-1}c_{k,i,j,\alpha}\bar{\xi}_{k,i,j,\alpha} +\sum_{1\leq k\leq r}\sum_{i,j}\sum_{|\alpha| = \ell}(\tilde{\psi}_{k,i,j,\alpha} \circ f)\bar{\xi}_{k,i,j,\alpha}. \] We define finite sets $L$ and $H$ as follows: \[ L=\left\{\bar{\xi}_{k,i,j,\alpha} \left|\, 0 \leq |\alpha| \leq \ell-1, 1\leq k \leq r, 1\leq i \leq n, 1\leq j \leq m_k\right\}\right., \] \[ H=\left\{\bar{\xi}_{k,i,j,\alpha} \left|\, |\alpha|=\ell, 1\leq k \leq r, 1\leq i \leq n, 1\leq j \leq m_k\right\}\right.. \] Then, $H \subset T{\cal R}_e(f)\cap T{\cal L}_e(f)$ by (\ref{eqn1}). Therefore, \[ \sum_{1\leq k\leq r}\sum_{i,j}\sum_{0 \leq |\alpha| \leq \ell-1}c_{k,i,j,\alpha}\bar{\xi}_{k,i,j,\alpha} \] belongs to $V=T{\cal R}_e(f)\cap T{\cal L}_e(f) \cap L_{\field}$. The set $V$ is a finite dimensional vector space. Set $\dim_{\field} V = m$. Then, there exist $\underline{\xi}_1,\ldots,\underline{\xi}_m \in T{\cal R}_e(f)\cap T{\cal L}_e(f)$ such that $V=\langle\underline{\xi}_1,\ldots,\underline{\xi}_m\rangle_{\field}$.
It is clear that $V \subset \langle\underline{\xi}_1,\ldots,\underline{\xi}_m\rangle_{f^*C_{p,0}}$. Therefore, \[ \bar{\eta} \in \langle\underline{\xi}_1,\ldots,\underline{\xi}_m\rangle_{f^*C_{p,0}}+H_{f^*C_{p,0}}. \] Hence, \[ T{\cal R}_e(f)\cap T{\cal L}_e(f) \subset \langle\underline{\xi}_1,\ldots,\underline{\xi}_m\rangle_{f^*C_{p,0}}+H_{f^*C_{p,0}}. \] Since $\{\underline{\xi}_1,\ldots,\underline{\xi}_m\} \cup H$ is contained in $T{\cal R}_e(f)\cap T{\cal L}_e(f)$, the converse inclusion also holds. Therefore, $T{\cal R}_e(f)\cap T{\cal L}_e(f)$ is finitely generated as a $C_{p,0}$-module via $f$. This completes the proof. \qed
\section{Introduction} Classically, there exist two main approaches to the theory of elliptic functions: the Jacobi approach based on the theory of theta-functions, and the Weierstrass approach. Each of these two pictures has its advantages and disadvantages when tackling a concrete problem, but essentially they are completely equivalent to each other. Strangely enough, Jacobi's picture admits a very natural and well-developed generalization to the higher genus case, while the Weierstrass picture remains essentially undeveloped. Generalization of the Weierstrass sigma-function to higher genus started with the works of F.~Klein, who treated the hyperelliptic case in \cite{Klein1}, and Baker \cite{Baker}. In \textsection 27 of the subsequent work \cite{Klein2}, Klein constructed a sigma-function for an arbitrary curve of genus 3, making the first step beyond the hyperelliptic case. More recently the subject attracted the attention of many researchers (Buchstaber, Leykin, Enolskii, Nakayashiki and others \cite{BLE,BLE1,Naka2008,EEG}), who developed the theory of the higher genus sigma-function for the so-called $(n,s)$-curves. It turns out that sigma-functions are a convenient tool in the description of algebro-geometric solutions of integrable systems of KP type, as well as in the description of Jacobi and Kummer algebraic varieties. In this paper we introduce a notion of a higher genus sigma-function for an arbitrary Riemann surface of genus $g$. We construct a straightforward and natural analogue of the elliptic sigma-functions, following the classical formalism of the theory of elliptic functions as closely as possible. Our definition of the higher genus sigma-function resembles the genus three sigma-function of Klein from \cite{Klein2}, although we also use some ingredients of the recent works \cite{BLE,Naka2008}.
The main role in our approach is played by the matrix of logarithmic derivatives of the product of all non-vanishing theta-constants with respect to the entries of the matrix of $b$-periods. The even sigma-functions defined via our scheme are modular invariant; as far as the odd sigma-functions are concerned, we can only claim modular invariance of a certain power of them. Our generalization of the notion of a sigma-function to an arbitrary Riemann surface is based on the following expression of the sigma-function in terms of the Jacobi theta-function $\theta_1$, which is the elliptic theta-function with the odd characteristic $[1/2,1/2]$ (\cite{WW}, Section 21.43): \begin{equation} \sigma(u)= \frac{2\omega_1}{\theta_1'}{\rm exp}\left\{\frac{\eta_1}{2\omega_1} u^2\right\} \theta_1\left(\frac{u}{2\omega_1}{\Big |}\frac{\omega_2}{\omega_1}\right)\;, \label{siggen1int} \end{equation} where the parameters $\eta_1$ and $\omega_1$ are related via the equation \begin{equation} \omega_1 \eta_1=-\frac{1}{12}\frac{\theta_1'''}{\theta_1'}\;. \label{omeetaint} \end{equation} Besides the odd sigma-function (\ref{siggen1int}), in the classical theory there exist three even sigma-functions \cite{WW}. To define the higher genus sigma-functions we start from the following data: a Riemann surface ${\cal L}$ of genus $g$ with a chosen canonical basis of cycles $\{a_i,b_i\}_{i=1}^g$, a marked point $x_0\in {\cal L}$ with some local parameter $\zeta$, and an odd non-singular spin line bundle $\chi$. Using the local parameter $\zeta$ we build a distinguished basis of holomorphic differentials $v_j^0$ which is independent of the choice of a canonical basis of cycles on ${\cal L}$ (the differentials $v_j^0$ are defined by their local behaviour near $x_0$ as follows: $v_j^0(x)=(\zeta^{k_j-1}+ O(\zeta^{k_j}))d\zeta$, where $(k_1,\dots,k_g)$ is the Weierstrass gap sequence at $x_0$). The matrices of $a$- and $b$-periods of these differentials we denote by $2\omega_1$ and $2\omega_2$, respectively.
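Before passing to higher genus, the genus-one definition (\ref{siggen1int}) can be sanity-checked numerically. The following sketch (ours) builds $\theta_1$ as the series with characteristic $[1/2,1/2]$ in the unit-periodicity convention, takes $\eta_1$ from (\ref{omeetaint}), and verifies the oddness of $\sigma$ together with the classical quasi-periodicity $\sigma(u+2\omega_1)=-e^{2\eta_1(u+\omega_1)}\sigma(u)$; note that the latter identity holds for any value of $\eta_1$, so it tests the structure of (\ref{siggen1int}) rather than relation (\ref{omeetaint}) itself:

```python
import cmath, math

def theta_half(v, tau, deriv=0, M=20):
    """theta[1/2,1/2](v|tau) and its v-derivatives by term-wise differentiation."""
    s = 0
    for m in range(-M, M + 1):
        c = 2j * math.pi * (m + 0.5)
        s += c**deriv * cmath.exp(1j * math.pi * (m + 0.5)**2 * tau + c * (v + 0.5))
    return s

tau = 1j          # period ratio omega_2/omega_1
w1 = 1.0          # omega_1 (so omega_2 = w1*tau)
t1p = theta_half(0, tau, 1)
t1ppp = theta_half(0, tau, 3)
eta1 = -t1ppp / (12 * t1p * w1)   # relation omega_1*eta_1 = -theta1'''/(12*theta1')

def sigma(u):
    return (2 * w1 / t1p) * cmath.exp(eta1 / (2 * w1) * u**2) \
           * theta_half(u / (2 * w1), tau)

u = 0.31 + 0.17j
lhs = sigma(u + 2 * w1)
rhs = -cmath.exp(2 * eta1 * (u + w1)) * sigma(u)
print(abs(lhs - rhs), abs(sigma(-u) + sigma(u)))  # both should be numerically zero
```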
We have $\omega_2=\omega_1\mathbb{B}$, where $\mathbb{B}$ is the matrix of $b$-periods of ${\cal L}$. Let us denote by ${\cal S}$ the set of even characteristics $\beta^\mu$ such that $\theta[\beta^\mu](0|\mathbb{B})\neq 0$ (or, equivalently, the set of non-vanishing theta-constants on ${\cal L}$); by $N$ we denote the number of these characteristics. For a generic curve, according to Theorem 7 of \cite{farkas}, $N=2^{2g-1}+2^{g-1}$; for a hyperelliptic curve $N=\frac{1}{2}\left(^{2g+2}_{g+1}\right)$. The key ingredient of our construction is the product of all non-vanishing theta-constants; we denote this product by ${\cal F}$: \begin{equation*} {\cal F}=\prod_{\beta^\mu\in {\cal S}}\theta[\beta^\mu](0|\mathbb{B})\;. \end{equation*} The following symmetric matrix $\Lambda$, which is proportional to the matrix of derivatives of $\log{\cal F}$ with respect to the matrix entries of $\mathbb{B}$, plays the main role in the sequel: \begin{equation*} \Lambda_{ij}= - \frac{\pi {\rm i}}{N}\frac{\partial \log {\cal F}}{\partial \mathbb{B}_{ij}}= -\frac{1}{4N}\sum_{\beta^\mu\in {\cal S}}\frac{1}{\theta[\beta^\mu](0|\mathbb{B})}\frac{\partial^2\theta[\beta^\mu](z|\mathbb{B})}{\partial z_i\partial z_j}\Big|_{z=0}\;. \end{equation*} This matrix was used in \cite{Klein2} (see \textsection 25), where it played the same role as here in the construction of sigma-functions in genus $3$. Now we define two other matrices, $\eta_1$ and $\eta_2$, as follows: \begin{equation} \eta_1 = (\omega_1^t)^{-1} \Lambda\;,\hskip0.8cm \eta_2={\omega_1^{t}}^{-1}\Lambda\mathbb{B}-\frac{\pi {\rm i}}{2}{\omega_1^{t}}^{-1}\;.
\label{eta12intr} \end{equation} The matrices $\omega_{1,2}$ and $\eta_{1,2}$ defined in this way satisfy the equations $\omega_2 \omega_1^t = \omega_1 \omega_2^t$, $\eta_2 \eta_1^t = \eta_1 \eta_2^t$ and the higher genus analogue of the Legendre relation, $\eta_1 \omega_2^t =\eta_2 \omega_1^t+\frac{\pi {\rm i}}{2} I_g$, where $I_g$ is the $g\times g$ unit matrix. Note that the first relation in (\ref{eta12intr}) gives a natural generalization of (\ref{omeetaint}). Consider the Abel map ${\cal U}(x)$ with components ${\cal U}_i(x)=\int_{x_0}^x v_i$ on ${\cal L}$. Here $v_i$ are holomorphic 1-forms on ${\cal L}$ normalized via the relations $\int_{a_i}v_j=\delta_{ij}$. Our construction does not depend on the choice of the initial point for the Abel map. We choose this point to coincide with $x_0$, the point of normalisation for the holomorphic differentials $v^0_i$. Let us denote by $D$ the positive divisor of degree $g-1$ corresponding to the odd spin line bundle $\chi$. Since the divisor $2D$ lies in the canonical class, the vector ${\cal U}(D)+K^{x_0}$ (here $K^{x_0}$ is the vector of Riemann constants corresponding to the point $x_0$) equals $\mathbb{B} \beta_1+\beta_2$, where $\beta_{1,2}\in \frac{1}{2}{\mathbb Z}^g$. Denote by $\beta^\chi$ the odd theta-characteristic defined by the vectors $[\beta_1,\beta_2]$. In straightforward analogy with (\ref{siggen1int}), we define an odd higher genus sigma-function corresponding to an arbitrary non-singular odd spin structure $\chi$ as follows: \begin{equation} \sigma_{\chi}(u)= {\cal F}^{-3/N} {\rm det}(2\omega_1)\, {\rm exp}\left(\frac{1}{2} u^t (\eta_1\omega_1^{-1}) u\right) \theta[\beta^{\chi}]((2\omega_1)^{-1}u|\mathbb{B})\;, \label{sigint} \end{equation} where $\theta[\beta^{\chi}](z|\mathbb{B})$ is the Riemann theta-function with the odd characteristic $\beta^\chi$.
The function $\sigma_\chi$ is modular invariant in the following sense: under a change of the canonical basis of cycles, $\sigma_\chi^{8 N}$ remains invariant, i.e., $\sigma_\chi$ itself can only get multiplied by a root of unity of degree $8N$. Thus the modular properties of $\sigma_\chi$ are determined by a homomorphism of the modular group to the cyclic group ${\mathbb Z}_{8N}$. The matrices $\omega_{1,2}$ and $\eta_{1,2}$ depend on the choice of the base point $x_0$ and the local parameter $\zeta$. However, the sigma-functions corresponding to different choices of the marked point turn out to be equivalent: they coincide up to a linear change of the variable $u$. In addition to the odd sigma-function $\sigma(u)$, Weierstrass introduced three even sigma-functions $\sigma_r$, $r=1,2,3$, which are proportional to the even Jacobi theta-functions. The even sigma-functions are expressed in terms of $\sigma$ as follows: \begin{equation} \sigma_r (u) =\frac{e^{-\eta_r u} \sigma(u+\omega_r)}{\sigma(\omega_r)}\;, \label{sirint0} \end{equation} where $\omega_3=-\omega_1-\omega_2$ and $\eta_3=-\eta_1-\eta_2$. Applying a higher genus analogue of formula (\ref{sirint0}) to any of the odd sigma-functions (\ref{sigint}), we define a higher genus even sigma-function corresponding to an arbitrary even spin line bundle $\mu$: \begin{equation} \sigma_{\mu}(u)= {\rm exp} \left(\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u\right) \frac{\theta[\beta^\mu]((2\omega_1)^{-1}u|\mathbb{B})}{\theta[\beta^\mu](0|\mathbb{B})} \;. \label{sievenint} \end{equation} The functions $\sigma_{\mu}$ themselves (not their $8N$th power, as in the case of $\sigma_\chi$) are invariant under any change of the canonical basis of cycles. The function $\sigma_\mu$ is well-defined for all Riemann surfaces for which $\theta[\beta^\mu](0|\mathbb{B})\neq0$.
We find our present definition of the higher genus sigma-function more natural than the definitions of this object given in \cite{BLE,BLE1,EEG,Naka2008} and other previous works due to the following reasons. First, in contrast to these works, we do not make use of any concrete realization of a Riemann surface in the form of an algebraic equation. Second, in \cite{BLE,BLE1,EEG,Naka2008} a higher genus sigma-function was defined only for a class of the so-called $(n,s)$-curves. On such a curve there exists a holomorphic $1$-form with the only zero of multiplicity $2g-2$ (see \cite{Naka2008}); therefore these curves form only a tiny subset (a subspace of codimension $g-2$) in the moduli space of Riemann surfaces. Third, the genus one relation (\ref{omeetaint}) was previously not carried over to higher genus (except, perhaps, the hyperelliptic case where some analog of (\ref{omeetaint}) was derived in \cite{EHKKLS}; however, the relation from \cite{EHKKLS} differs from ours). The formulas relating odd and even sigma-functions in genus one were also not generalized to higher genus. Fourth, in previous works (except the hyperelliptic case) the moduli-dependent factor which provides the modular invariance of (a power of) an odd sigma-function was not introduced. Finally, our definition of the higher genus sigma-function naturally generalizes the approach of F. Klein \cite{Klein2} to sigma-functions of non-hyperelliptic genus three Riemann surfaces. The paper is organized as follows. In section \ref{theta} we collect necessary facts about theta-functions. In section \ref{Wsigma} we review definitions and properties of elliptic sigma-functions. In section \ref{auxobj} we introduce a few auxiliary objects and study their transformation properties under the change of canonical basis of cycles. In section \ref{sigmafun} we define the odd sigma-functions for a generic Riemann surface of arbitrary genus and analyse their periodicity and modular properties. 
In section \ref{sigmaeven} we introduce even sigma-functions in arbitrary genus and study their properties. In section \ref{invar} we show that the sigma-functions corresponding to different choices of the base point and the local parameter are equivalent, i.e., can be obtained from each other by a linear change of variables. In Section \ref{sect_onRS} we replace the argument $u$ of the sigma-function by the Abel map of a point on the Riemann surface and thus consider the sigma-function as a function on the surface. In Section \ref{sechyper} we use Thomae's formulas to treat the hyperelliptic case in more detail.
\section{Theta-function: summary}
\label{theta}
Let us recall the definition and properties of the Riemann theta-function. The genus $g$ theta-function with characteristics $\left[^\alpha_\beta\right]$ (where $\alpha,\beta\in{\mathbb C}^g$), with the matrix of periods $\mathbb{B}$ and an argument $z\in {\mathbb C}^g$, is defined as follows:
\begin{equation*} \theta\left[^\alpha_\beta\right](z|\mathbb{B})=\sum_{m\in {\mathbb Z}^g} {\rm exp} \{\pi {\rm i}\, (m+\alpha)^t\mathbb{B} (m+\alpha) + 2\pi {\rm i}\, (m+\alpha)^t (z+\beta)\}\;. \end{equation*}
The theta-function possesses the following quasi-periodicity properties with respect to shifts of its argument by the period vectors:
\begin{equation} \theta\left[^\alpha_\beta\right](z+ {\bf k}_1|\mathbb{B})= e^{2\pi {\rm i}\, \alpha^t {\bf k}_1}\theta\left[^\alpha_\beta\right](z|\mathbb{B}) \label{pertheta1} \end{equation}
and
\begin{equation} \theta\left[^\alpha_\beta\right](z+\mathbb{B} {\bf k}_2|\mathbb{B})= e^{-2\pi {\rm i}\, \beta^t {\bf k}_2} e^{-\pi {\rm i}\, {\bf k}_2^t\mathbb{B} {\bf k}_2 -2\pi {\rm i}\, {\bf k}_2^t z } \theta\left[^\alpha_\beta\right](z|\mathbb{B}) , \label{pertheta2} \end{equation}
where ${\bf k}_{1,2}\in {\mathbb Z}^g$ are arbitrary.
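As a quick numerical sanity check, the quasi-periodicity relations (\ref{pertheta1}) and (\ref{pertheta2}) can be verified for $g=1$ by truncating the defining series; in the following minimal Python sketch, the period $\tau$, the test point, and the truncation order are arbitrary illustrative choices.

```python
import cmath

def theta_char(alpha, beta, z, tau, M=30):
    """Genus-1 theta-function with characteristics [alpha; beta],
    series truncated to |m| <= M (converges rapidly for Im(tau) > 0)."""
    return sum(cmath.exp(1j * cmath.pi * (m + alpha)**2 * tau
                         + 2j * cmath.pi * (m + alpha) * (z + beta))
               for m in range(-M, M + 1))

tau = 1.2j               # a point of the Siegel upper half-space, g = 1
alpha, beta = 0.5, 0.5   # the odd characteristic [1/2; 1/2]
z = 0.31 + 0.17j

# shift by the a-period 1: theta(z+1) = exp(2 pi i alpha) theta(z)
lhs = theta_char(alpha, beta, z + 1, tau)
rhs = cmath.exp(2j * cmath.pi * alpha) * theta_char(alpha, beta, z, tau)
assert abs(lhs - rhs) < 1e-9

# shift by the b-period tau:
# theta(z+tau) = exp(-2 pi i beta - pi i tau - 2 pi i z) theta(z)
lhs = theta_char(alpha, beta, z + tau, tau)
rhs = (cmath.exp(-2j * cmath.pi * beta - 1j * cmath.pi * tau - 2j * cmath.pi * z)
       * theta_char(alpha, beta, z, tau))
assert abs(lhs - rhs) < 1e-9
```

For the odd characteristic $[1/2,1/2]$ the $a$-shift factor $e^{2\pi{\rm i}\alpha}$ is simply $-1$, reproducing the familiar sign of $\theta_1$.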
To describe the modular properties of the theta-function, consider a symplectic transformation of the basis of cycles on the surface ${\cal L}$
\begin{equation} \left(\begin{array}{c} {{\bf b}}^\gamma \\ {{\bf a}}^\gamma \end{array}\right)= \gamma \left(\begin{array}{c} {\bf b} \\ {\bf a}\end{array}\right) \label{gamma1} \end{equation}
defined by the matrix
\begin{equation} \gamma=\left(\begin{array}{cc} A & B \\ C & D \end{array}\right)\;\;\in\;\; Sp(2g,{\mathbb Z})\;. \label{gamma} \end{equation}
Here ${\bf a} = (a_1,\dots,a_g)^t$ and ${\bf b} = (b_1,\dots,b_g)^t$ are vectors composed of basis cycles. The corresponding modular transformation of the theta-function looks as follows (see for example \cite{Fay73}, page 7):
\begin{equation} \theta\left[^\alpha_\beta\right]^\gamma(((C\mathbb{B} +D)^t)^{-1} z| \mathbb{B}^\gamma)= \xi(\gamma,\alpha,\beta)\;\{{\rm det} (C\mathbb{B}+D)\}^{1/2} e^{\pi {\rm i}\left\{ z^t(C\mathbb{B} +D)^{-1} Cz\right\}} \; \theta\left[^\alpha_\beta\right] (z|\mathbb{B}), \label{thgam} \end{equation}
where $\xi(\gamma,\alpha,\beta)=\rho(\gamma)\kappa(\gamma,\alpha,\beta)$; $\rho(\gamma)$ is a root of unity of degree 8;
\begin{equation} \kappa(\gamma, \alpha, \beta) = {\rm exp} \{ \pi {\rm i} [({\alpha}^t D^t - {\beta}^t C^t)(- B{\alpha}+A{\beta}+(AB^t)_0) - {\alpha}^t{\beta}]\}\;, \end{equation}
and
\begin{equation} \left[\begin{array}{c} \alpha \\ \beta \end{array}\right]^\gamma =\left(\begin{array}{cc} D & -C \\ -B & A \end{array} \right) \left[\begin{array}{c} \alpha \\ \beta \end{array}\right] + \frac{1}{2} \left[\begin{array}{c} (C D^t)_0 \\ (A B^t)_0 \end{array}\right]\;. \label{transcar1} \end{equation}
For an arbitrary matrix $M$, the notation $M_0$ is used for the column vector of diagonal entries of $M$.
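The symplectic condition on $\gamma$ and the action (\ref{transcar1}) on characteristics are easy to test in concrete examples. The following Python sketch (the matrices and the characteristic are arbitrary illustrative choices) checks that products of standard generators of $Sp(4,{\mathbb Z})$ preserve the intersection form and that (\ref{transcar1}) maps a half-integer characteristic to a half-integer characteristic.

```python
import numpy as np

def is_symplectic(gamma, g):
    """Check gamma J gamma^t = J, i.e. gamma preserves the intersection form."""
    J = np.block([[np.zeros((g, g)), np.eye(g)],
                  [-np.eye(g), np.zeros((g, g))]])
    return np.array_equal(gamma @ J @ gamma.T, J)

g = 2
I, O = np.eye(g, dtype=int), np.zeros((g, g), dtype=int)
S = np.array([[1, 2], [2, 0]])                 # arbitrary symmetric integer matrix
g1 = np.block([[I, S], [O, I]])                # A = I, B = S, C = 0, D = I
g2 = np.block([[O, I], [-I, O]])               # A = 0, B = I, C = -I, D = 0
gam = g1 @ g2
assert is_symplectic(g1, g) and is_symplectic(g2, g) and is_symplectic(gam, g)

# transformation (transcar1) of a half-integer characteristic [alpha; beta]
A, B = gam[:g, :g], gam[:g, g:]
C, D = gam[g:, :g], gam[g:, g:]
alpha = np.array([0.5, 0.0])
beta = np.array([0.5, 0.5])
new = (np.block([[D, -C], [-B, A]]) @ np.concatenate([alpha, beta])
       + 0.5 * np.concatenate([np.diag(C @ D.T), np.diag(A @ B.T)]))
assert np.allclose(2 * new, np.round(2 * new))  # still a half-integer characteristic
```

The block conditions $AB^t=BA^t$, $CD^t=DC^t$, $AD^t-BC^t=I_g$ are equivalent to the asserted identity $\gamma J\gamma^t=J$.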
The transformation of the Riemann matrix $\mathbb{B}$ of $b$-periods is as follows \cite{Fay92}:
\begin{equation} \mathbb{B}^\gamma=(A\mathbb{B}+B)(C\mathbb{B}+D)^{-1}. \label{transper} \end{equation}
\section{Weierstrass sigma-function}
\label{Wsigma}
Let us first briefly discuss the classical Weierstrass sigma-function from a convenient perspective (see, for example, chapter 20 of \cite{WW}). Let ${\cal L}$ be a Riemann surface of genus $1$ with an arbitrary (not necessarily normalized) holomorphic differential $v^0$ on ${\cal L}$ and a canonical basis $\{a,b\}$ of $H_1({\cal L},{\mathbb Z})$. Introduce the $a$- and $b$-periods of $v^0$: $2\omega_1:=\oint_a v^0$ and $2\omega_2:=\oint_b v^0$. Choosing $x_0\in {\cal L}$ as the initial point, we can map the surface ${\cal L}$ to the fundamental parallelogram $J({\cal L})$ with periods $2\omega_1$ and $2\omega_2$ and identified opposite sides. This map is given by $u(x)=\int_{x_0}^x v^0$; then $u$ can be used as a local coordinate on ${\cal L}$.
Now introduce the Weierstrass $\wp$-function, which is a doubly periodic function of $u$ with the periods $2\omega_1$ and $2\omega_2$ and a second order pole at $0$ such that $\wp(u)= u^{-2}+o(1)$ as $u\to 0$. Then the $\zeta$-function is defined via $-\zeta'(u)=\wp(u)$ and $\zeta(u)=u^{-1}+o(1)$ as $u\to 0$, and, finally, the sigma-function is defined by the equation $\zeta(u)=\sigma'(u)/\sigma(u)$ and $\sigma(u)= u+ o(u)$ as $u\to 0$.
The sigma-function defined in this way has the following properties which characterize it uniquely:
\begin{enumerate}[A.]
\item $\sigma(u)$ is holomorphic in the fundamental parallelogram with the sides $2\omega_1$ and $2\omega_2$ and has a simple zero at $u=0$.
\item $\sigma(u)$ satisfies the following periodicity relations:
\begin{equation*} \sigma(u+2\omega_1) = -e^{2 \eta_1 (\omega_1 + u) } \sigma(u)\;,\hskip0.7cm \sigma(u+2\omega_2) = -e^{2 \eta_2 (\omega_2 + u) } \sigma(u), \end{equation*}
where $\eta_1:=\zeta(\omega_1)$ and $\eta_2:=\zeta(\omega_2)$; these constants are related to the periods $\omega_1$ and $\omega_2$ via the Legendre relation $\eta_1\omega_2-\eta_2\omega_1=\pi {\rm i}/2$ and
\begin{equation*} \omega_1\eta_1=-\frac{1}{12}\frac{\theta_1'''}{ \theta_1'}\;. \end{equation*}
\item Consider an arbitrary matrix $\gamma\in SL(2,{\mathbb Z})$ of the form (\ref{gamma}) acting on the periods $\omega_{1,2}$ as follows: $\omega_1^\gamma= C \omega_2+ D\omega_1$, $\omega_2^\gamma= A\omega_2+ B\omega_1$. Then the sigma-functions corresponding to the two sets of periods coincide:
\begin{equation*} \sigma(u; \omega_1,\omega_2)=\sigma(u;\omega^\gamma_1,\omega^\gamma_2)\;. \end{equation*}
\item The sigma-function is locally holomorphic as a function of the periods $\omega_1$ and $\omega_2$, which play the role of moduli parameters. Moreover, $\sigma(u;\omega_1,\omega_2)$ neither vanishes nor diverges identically in $u$ for any values of $\omega_1$ and $\omega_2$ as long as $\Im(\omega_2/\omega_1)$ remains positive.
\item Normalization: $\sigma'(0)=1$.
\end{enumerate}
These properties determine the way to generalize the classical sigma-function to an arbitrary genus. However, in higher genus it turns out to be impossible to satisfy all of the properties A-E simultaneously. Therefore, we shall keep only the (appropriately reformulated) properties A-C as the main principles. The property D, as it stands, cannot be fulfilled in arbitrary genus. Namely, the higher genus sigma-function is well-defined and holomorphic with respect to moduli on each stratum of the moduli space where the given set of theta-constants remains non-vanishing.
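For a concrete lattice, the properties above can be tested numerically. The Python sketch below (the half-periods are an arbitrary choice) builds $\sigma$ from the odd theta-function, computes $\eta_1$ from the relation $\omega_1\eta_1=-\theta_1'''/(12\theta_1')$ and $\eta_2$ from the Legendre relation, and checks the periodicity relations of property B together with the normalization $\sigma(u)=u+O(u^5)$.

```python
import cmath
PI = cmath.pi

def th(alpha, beta, z, tau, M=30):
    """Genus-1 theta with characteristics [alpha; beta], truncated series."""
    return sum(cmath.exp(1j*PI*(m + alpha)**2*tau + 2j*PI*(m + alpha)*(z + beta))
               for m in range(-M, M + 1))

def th_d(alpha, beta, tau, order, M=30):
    """order-th derivative of th in z at z = 0."""
    return sum((2j*PI*(m + alpha))**order
               * cmath.exp(1j*PI*(m + alpha)**2*tau + 2j*PI*(m + alpha)*beta)
               for m in range(-M, M + 1))

w1, w2 = 0.8, 0.6j          # half-periods with Im(w2/w1) > 0 (arbitrary choice)
tau = w2 / w1
t1p, t1ppp = th_d(0.5, 0.5, tau, 1), th_d(0.5, 0.5, tau, 3)
eta1 = -t1ppp / (12 * w1 * t1p)        # w1*eta1 = -theta1'''/(12*theta1')
eta2 = (eta1 * w2 - 1j * PI / 2) / w1  # Legendre: eta1*w2 - eta2*w1 = pi i/2

def sigma(u):
    # theta1 = -theta[1/2; 1/2]; the two minus signs cancel in the ratio
    return 2*w1 * cmath.exp(eta1 * u*u / (2*w1)) * th(0.5, 0.5, u/(2*w1), tau) / t1p

u = 0.3 + 0.2j
# property B: sigma(u + 2w) = -exp(2 eta (w + u)) sigma(u), (w, eta) = (w1, eta1), (w2, eta2)
assert abs(sigma(u + 2*w1) + cmath.exp(2*eta1*(w1 + u)) * sigma(u)) < 1e-9
assert abs(sigma(u + 2*w2) + cmath.exp(2*eta2*(w2 + u)) * sigma(u)) < 1e-9
# property E and absence of the u^3 term: sigma(u) = u + O(u^5)
assert abs(sigma(0.01) - 0.01) < 1e-8
```

The vanishing of the $u^3$ coefficient is exactly what fixes the value $\omega_1\eta_1=-\theta_1'''/(12\theta_1')$, so the last assertion tests that relation.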
On the subspace of the moduli space where some of these theta-constants vanish, the sigma-function becomes singular as a function of moduli and has to be redefined using the new set of non-vanishing theta-constants. The property E does not have an obvious natural analog in the case of a function of several variables; therefore, we are not going to carry it over to the higher genus case.
The odd Weierstrass sigma-function is expressed in terms of the Jacobi theta-function $\theta_1$ and the periods $\omega_1$, $\omega_2$ and $\eta_1$ (\cite{WW}, Section 21.43) by (\ref{siggen1int}). This formula is the starting point of our construction of higher genus odd sigma-functions. To construct even $\sigma$-functions in any genus we shall generalize formula (\ref{sirint0}).
\section{Some auxiliary objects}
\label{auxobj}
\subsection{Definitions}
Fix a Riemann surface ${\cal L}$ with a marked point $x_0$ and a chosen local parameter $\zeta$ in a neighbourhood of $x_0$. Let $1=k_1< k_2<\dots < k_g\leq 2g-1$ be the Weierstrass gap sequence at $x_0$ (if $x_0$ is a non-Weierstrass point, the gap sequence is $(1,2,\dots,g-1,g)$).
\begin{definition}\label{defv} The basis of holomorphic differentials $v^0_1,\dots,v^0_g$ is called ``distinguished'' if in a neighbourhood of $x_0$ the holomorphic differentials $v^0_i$ have the following expansion in the distinguished local parameter $\zeta$:
\begin{equation} v^0_j(x)=(\zeta^{k_j-1}+ O(\zeta^{k_g})) d\zeta\;. \label{defvjw} \end{equation}
\end{definition}
The existence of holomorphic differentials with zeros of order (exactly) $k_j-1$ at the Weierstrass point $x_0$ is an immediate corollary of the Riemann-Roch theorem. The required structure of higher order terms in the Taylor series can always be achieved by taking linear combinations of such differentials.
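Definition \ref{defv} can be made concrete in a hyperelliptic example (an illustrative choice, treated symbolically below): on the genus two curve $y^2=x(x-1)(x-2)(x-3)(x-4)$ the branch point over $x=0$ is a Weierstrass point with gap sequence $(1,3)$, and a distinguished basis can be assembled from $dx/y$ and $x\,dx/y$.

```python
import sympy as sp

# genus-2 hyperelliptic curve y^2 = x(x-1)(x-2)(x-3)(x-4) (illustrative choice);
# x0: the branch point x = 0, distinguished local parameter zeta with x = zeta^2
zeta = sp.symbols('zeta')
x = zeta**2
y = zeta * sp.sqrt((x - 1)*(x - 2)*(x - 3)*(x - 4))   # branch of y near x0

# holomorphic differentials dx/y and x dx/y as f(zeta) d zeta (dx = 2 zeta d zeta)
s1 = sp.series(2*zeta / y, zeta, 0, 4).removeO()
s2 = sp.series(2*zeta*x / y, zeta, 0, 4).removeO()

# normalize and combine to reach the expansions (defvjw) with gaps (k1, k2) = (1, 3):
# v1 = (1 + O(zeta^3)) d zeta,  v2 = (zeta^2 + O(zeta^3)) d zeta
v2n = sp.expand(s2 / s2.coeff(zeta, 2))
v1n = sp.expand(s1 / s1.coeff(zeta, 0))
v1n = sp.expand(v1n - v1n.coeff(zeta, 2) * v2n)

assert v1n.coeff(zeta, 0) == 1
assert all(v1n.coeff(zeta, k) == 0 for k in (1, 2, 3))
assert v2n.coeff(zeta, 2) == 1 and v2n.coeff(zeta, 3) == 0
```

The final linear combination is exactly the adjustment of higher order terms mentioned after Definition \ref{defv}.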
Now let us choose some symplectic basis $\{a_i,b_i\}_{i=1}^g$ in $H_1({\cal L},{\mathbb Z})$, and introduce the matrices of $a$- and $b$-periods of $v_i^0$:
\begin{equation*} 2(\omega_1)_{ij}=\int_{a_j} v_i^0 \;,\;\;\;\; 2(\omega_2)_{ij}=\int_{b_j} v_i^0\;. \end{equation*}
Then the matrix of $b$-periods of the surface ${\cal L}$ is given by
\begin{equation} \mathbb{B} = \omega_1^{-1}\omega_2 \;. \label{Bper} \end{equation}
Denote the set of all non-singular even theta-characteristics $\beta^\mu$ on ${\cal L}$ by ${\cal S}$, and their number by $N$; for a generic surface $N=2^{2g-1}+2^{g-1}$. Consider the product of all non-vanishing theta-constants:
\begin{equation} {\cal F}=\prod_{\beta^\mu\in {\cal S}}\theta[\beta^\mu](0|\mathbb{B})\;. \label{Fcaldef} \end{equation}
Let us introduce the following $g\times g$ symmetric matrix $\Lambda$:
\begin{equation} \Lambda_{ij}= - \frac{\pi {\rm i}}{N}\frac{\partial \log {\cal F}}{\partial \mathbb{B}_{ij}} \;, \label{defM1} \end{equation}
which, according to the heat equation for the theta-function, can be written as
\begin{equation} \Lambda_{ij}=-\frac{1}{4N}\sum_{\beta^\mu\in {\cal S}} \frac{1}{\theta[\beta^\mu](0|\mathbb{B}) }\frac{\partial^2\theta[\beta^\mu](z|\mathbb{B})}{\partial z_i\partial z_j}\Big|_{z=0}\;. \label{defM2} \end{equation}
Let us now define matrices $\eta_1$ and $\eta_2$ as follows:
\begin{equation} \eta_1 = (\omega_1^t)^{-1} \Lambda\;,\hskip0.8cm \eta_2=(\omega_1^{t})^{-1}(\Lambda\mathbb{B}-\frac{\pi {\rm i}}{2}I_g)\;. \label{eta12} \end{equation}
Definition (\ref{eta12}) together with the symmetry of the matrices $\mathbb{B}=\omega_1^{-1}\omega_2$ and $\Lambda$ immediately implies the following relations:
\begin{equation*} -\omega_2 \omega_1^t + \omega_1 \omega_2^t =0\;, \end{equation*}
\begin{equation*} -\eta_2 \eta_1^t + \eta_1 \eta_2^t =0\;, \end{equation*}
\begin{equation} -\eta_2 \omega_1^t + \eta_1 \omega_2^t =\frac{\pi {\rm i}}{2} I_g\;.
\label{Leg3} \end{equation}
Relation (\ref{Leg3}) is a straightforward higher genus analogue of the Legendre relation. Definition (\ref{eta12}) also implies the following relation between $\omega_{1,2}$ and $\eta_{1,2}$:
\begin{equation} \label{useless} \eta_1^t\omega_2 -\omega_1^t\eta_2 = \frac{\pi {\rm i}}{2}I_g. \end{equation}
In the sequel we shall make use of the matrix
\begin{equation} \eta_1\omega_1^{-1} = (\omega_1^t)^{-1} \Lambda \omega_1^{-1}, \label{eta1ome1} \end{equation}
which is obviously symmetric as a corollary of the symmetry of $\Lambda$.
\subsection{Transformation properties}
Let us now see how all the matrices defined in Section \ref{auxobj} transform under a symplectic transformation (\ref{gamma1}), (\ref{gamma}) of the canonical basis of cycles on ${\cal L}$.
\begin{lemma}\label{chaangeM} Under a change of the canonical homology basis (\ref{gamma1}) the matrix $\Lambda$ transforms as follows:
\begin{equation} \label{Lambda_transform} \Lambda^\gamma = (C\mathbb{B}+D)\Lambda (C\mathbb{B}+D)^t -\frac{\pi {\rm i}}{2} C(C\mathbb{B}+D)^t. \end{equation}
\end{lemma}
{\it Proof.} Let us use the formula (\ref{defM1}) for $\Lambda$. Due to (\ref{thgam}) we have
\begin{equation} {\cal F}^\gamma=\epsilon \{{\rm det}(C\mathbb{B} +D)\}^{N/2}{\cal F} , \label{transF} \end{equation}
where $\epsilon$ is a root of unity of degree $8$, i.e.,
\begin{equation} \log{\cal F}^\gamma=\log{\cal F}+ \frac{N}{2}\log \{{\rm det}(C\mathbb{B} +D)\} +\log\epsilon \;. \label{transFcal} \end{equation}
Using definition (\ref{defM1}) of the matrix $\Lambda$, we get
\begin{equation} (\Lambda^\gamma)_{ij} =-\frac{\pi {\rm i}}{N}\frac{\partial \log{\cal F}^\gamma}{\partial\mathbb{B}^\gamma_{ij}}\;. \label{Mgam00} \end{equation}
The Riemann matrix $\mathbb{B}$ transforms as in (\ref{transper}) under the change of a symplectic basis.
Substituting (\ref{transper}) and (\ref{transFcal}) into (\ref{Mgam00}) and taking into account that
$$\partial_{\mathbb{B}_{ij}}\log{\rm det}(C\mathbb{B}+D)={\rm Tr}\{[\partial_{\mathbb{B}_{ij}}(C\mathbb{B}+D)](C\mathbb{B}+D)^{-1}\}\;,$$
we see that the matrix $\Lambda^\gamma$ is given by (\ref{Lambda_transform}). Alternatively, one can prove the lemma by a straightforward differentiation of (\ref{thgam}) with respect to $z_i$ and $z_j$, setting $z=0$ afterwards.
$\Box$
Now we are in a position to prove transformation formulas for the matrices $\omega_{1,2}$ and $\eta_{1,2}$ under the change of a canonical basis of cycles:
\begin{lemma} Under a symplectic transformation (\ref{gamma1}) the matrices $\omega_{1,2}$ and $\eta_{1,2}$ transform as follows:
\begin{equation} \omega_1^{\gamma}= \omega_2 C^t + \omega_1 D^t\;,\;\;\;\;\omega_2^{\gamma}= \omega_2 A^t + \omega_1 B^t\;, \label{trans0} \end{equation}
\begin{equation*} \eta_1^{\gamma}= \eta_2 C^t + \eta_1 D^t\;,\;\;\;\;\eta_2^{\gamma}= \eta_2 A^t + \eta_1 B^t\;. \end{equation*}
\end{lemma}
{\it Proof.} The transformation (\ref{trans0}) of the matrices of periods $\omega_1$ and $\omega_2$ follows from the fact that the choice of the holomorphic differentials $v_i^0$ depends only on the marked point $x_0$ and the local parameter $\zeta$, and does not depend on the canonical basis of cycles. The transformation of $\eta_1$ and $\eta_2$ is immediately implied by their definition (\ref{eta12}) and the transformation laws for $\omega_1$ (given by (\ref{trans0})), $\mathbb{B}$ (given by (\ref{transper})), $\Lambda$ (given by (\ref{Lambda_transform})) and the following relations for the blocks of a symplectic matrix $\gamma$ (\ref{gamma}): $C^tA=A^tC$, $D^tB=B^tD$, $D^tA - B^tC=I_g$.
$\Box$
\section{Odd sigma-function in higher genus}
\label{sigmafun}
To generalize $\sigma(u)$ to any genus, consider a Riemann surface ${\cal L}$ of genus $g$ and an odd non-degenerate spin structure $\chi$ on ${\cal L}$. Fix a point $x_0\in {\cal L}$ and a local parameter $\zeta(x)$ in a neighbourhood of $x_0$. Consider the corresponding distinguished basis of holomorphic differentials $v_1^0,\dots,v_g^0$ and their matrices of periods $2\omega_{1}$ and $2\omega_{2}$. Introduce the matrices $\eta_{1,2}$ by (\ref{eta12}). Formula (\ref{siggen1int}) relating the genus one odd sigma-function with the theta-function $\theta_1$ has four main ingredients, which will be generalized to any $g>1$ as follows:
\begin{enumerate}\rm
\item The theta-function with an odd half integer characteristic (which is unique in genus 1) $\theta_1({u}/{2\omega_1})$ (recall that $\theta_1=-\theta[1/2,1/2]$). For genus $g$ Riemann surfaces we choose some odd non-singular spin structure $\chi$ and (ignoring the minus sign relating $\theta_1$ and $\theta[1/2,1/2]$, since $\theta_1$ enters the definition of $\sigma$ twice) replace $\theta_1({u}/{2\omega_1})$ by $\theta[\beta^\chi]\left((2\omega_1)^{-1} u\right)$, with $u\in {\mathbb C}^g$, where $[\beta^\chi]=\left[^{\beta_1}_{\beta_2}\right]$ and the vectors $\beta_{1,2}\in \frac{1}{2}{\mathbb Z}^g$ are defined by
\begin{equation} \mathbb{B}\beta_1+\beta_2=K^{x_0}+{\cal U}_{x_0}(D)\;. \label{defbetachi} \end{equation}
Here $D$ is the divisor corresponding to the spin structure $\chi$, $\;{\cal U}_{x_0}$ is the Abel map and $K^{x_0}$ is the vector of Riemann constants with the base point $x_0.$
\item The exponent of the expression $({\eta_1}/{2\omega_1}) u^2$. For any $g>1$ this term is replaced by the exponent of the bilinear form $\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u$.
\item The factor $2{\omega_1}$, which does not depend on the argument $u$. For any $g>1$ this factor is replaced by ${\rm det} (2\omega_1)$.
\item The factor $\theta_1'$, which equals $\pi \theta_2\theta_3\theta_4$ according to Jacobi's formula. This factor does not depend on the argument $u$, but depends on $\mathbb{B}=\omega_1^{-1}\omega_2$. For any $g>1$ we shall replace this factor by ${\cal F}^{3/N}$, where ${\cal F}$, given by (\ref{Fcaldef}), is the product of all non-vanishing theta-constants on ${\cal L}$.
\end{enumerate}
\begin{definition} The sigma-function corresponding to an odd non-singular spin structure $\chi$ is defined by the formula
\begin{equation} \sigma_{\chi}(u)= {\cal F}^{-3/N}\, {\rm det}(2\omega_1)\, {\rm exp} \left(\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u\right)\, \theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B}). \label{conjsig} \end{equation}
\end{definition}
Obviously, $\sigma_{\chi}(u)$ is an odd function since $\theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B})$ is odd. Now we are going to study other properties of the odd sigma-function.
\subsection{Periodicity of odd sigma-functions}
The periodicity properties of $ \sigma_{\chi}(u)$ (\ref{conjsig}) are given by the following proposition:
\begin{proposition} The function $\sigma_{\chi}(u)$ has the following transformation properties with respect to shifts of the argument $u$ by the lattice vectors $2\omega_1{\bf k}_1$ and $2\omega_2{\bf k}_2$ for any ${\bf k}_{1,2}\in {\mathbb Z}^g$:
\begin{equation} \sigma_\chi(u+2\omega_1 {\bf k}_1) = e^{2\pi {\rm i}\, \beta_1^t {\bf k}_1} e^{2( \omega_1{\bf k}_1 + u)^t \eta_1{\bf k}_1} \sigma_\chi(u), \label{perisig1} \end{equation}
\begin{equation} \sigma_\chi(u+2\omega_2 {\bf k}_2) =e^{2\pi {\rm i}\, \beta_2^t {\bf k}_2} e^{2( \omega_2{\bf k}_2 + u)^t \eta_2{\bf k}_2}\sigma_\chi(u), \label{perisig2} \end{equation}
where $\beta_1$ and $\beta_2$ are the vectors forming the non-singular odd characteristic $\beta^\chi$.
\end{proposition} {\it Proof.} To prove (\ref{perisig1}) we first use the corresponding periodicity property of the theta-function (\ref{pertheta1}), which produces the first exponential factor in (\ref{perisig1}). The multiplier coming from the exponential term in (\ref{conjsig}) equals $$ {\rm exp}\frac{1}{2}\{(u^t+ 2 {\bf k}_1^t \omega_1^t)\eta_1\omega_1^{-1}(u+2\omega_1 {\bf k}_1) - u^t \eta_1\omega_1^{-1} u\}, $$ which can be brought into the form of the second exponential factor in (\ref{perisig1}) by a simple computation using the symmetry of the matrix $\eta_1^t\omega_1=\Lambda$. The proof of (\ref{perisig2}) is slightly more complicated; besides (\ref{pertheta2}) it requires also the use of relation (\ref{Leg3}). Namely, consider the expression \begin{equation} {\rm log}\frac{\sigma_\chi(u+2\omega_2 {\bf k}_2)}{\sigma_\chi(u)} . \label{comtr}\end{equation} The contribution from the exponential term in (\ref{conjsig}) to this expression is $$ \frac{1}{2}\{(u^t+ 2 {\bf k}_2^t \omega_2^t)\eta_1\omega_1^{-1}(u+2\omega_2 {\bf k}_2) - u^t \eta_1\omega_1^{-1} u\}, $$ which equals \begin{equation} 2{\bf k}_2^t \omega_2^t \eta_1 \omega_1^{-1} \omega_2 {\bf k}_2+2 u^t \eta_1\omega_1^{-1}\omega_2 {\bf k}_2 \;, \label{comper1} \end{equation} where the symmetry of the matrix $\eta_1\omega_1^{-1}$ (\ref{eta1ome1}) was used. The contribution from the second exponential term in the transformation of the theta-function (\ref{pertheta2}) to (\ref{comtr}) equals \begin{equation} -\pi {\rm i} {\bf k}_2^t \omega_1^{-1}\omega_2 {\bf k}_2 - \pi {\rm i} {\bf k}_2^t \omega_1^{-1} u\;. 
\label{contrtheta} \end{equation}
Now the sum of the first terms in (\ref{comper1}) and (\ref{contrtheta}) gives
\begin{equation} \label{temp1_periodicity} 2 {\bf k}_2^t\left\{\omega_2^t \eta_1 - \frac{\pi{\rm i}}{2}I_g\right\} \omega_1^{-1}\omega_2{\bf k}_2 = 2 {\bf k}_2^t \eta_2^t \omega_2 {\bf k}_2 = 2 {\bf k}_2^t \omega_2^t\eta_2 {\bf k}_2\;, \end{equation}
where in the first equality we used the relation (\ref{useless}). In the second equality we used the symmetry of $\omega_2^t\eta_2=\mathbb{B}\Lambda\mathbb{B} -\frac{\pi {\rm i}}{2}\mathbb{B}$.
Now consider the sum of the second terms in (\ref{comper1}) and (\ref{contrtheta}). This sum is equal to
\begin{equation} \label{temp2_periodicity} 2u^t\left( \eta_1\omega_2^t - \frac{\pi{\rm i}}{2}I_g\right)(\omega_1^t)^{-1} {\bf k}_2 = 2u^t\eta_2 {\bf k}_2, \end{equation}
where we used the symmetry of the matrix $\mathbb{B}=\omega_1^{-1}\omega_2$ and (\ref{Leg3}). The sum of the expressions (\ref{temp1_periodicity}) and (\ref{temp2_periodicity}) gives the second exponent in (\ref{perisig2}).
$\Box$
\subsection{Transformation of $\sigma_\chi$ under change of canonical basis of cycles}
To deduce the modular properties of the sigma-function, we need the following lemma.
\begin{lemma} For an arbitrary $\gamma\in Sp(2g,{\mathbb Z})$ of the form (\ref{gamma}) acting on a symplectic homology basis on ${\cal L}$ according to (\ref{gamma1}), the characteristics $(\beta^\chi)^\gamma$ and $\beta^\chi$ are related by (\ref{transcar1}).
\end{lemma}
{\it Proof.} According to Lemma 1.5 on p.11 of \cite{Fay92}, the vectors of Riemann constants $K^\gamma_{x_0}$ and $K_{x_0}$ are related by (modulo the lattice of periods):
\begin{equation*} K^\gamma_{x_0}\equiv [(C\mathbb{B}+D)^t]^{-1} K_{x_0}+ \frac{1}{2}\mathbb{B} (C D^t)_0 + \frac{1}{2} (A B^t)_0, \end{equation*}
which immediately shows that $(\beta^\chi)^\gamma$ and $\beta^\chi$ are related by (\ref{transcar1}).
$\Box$
The modular properties of the odd sigma-function are described in the next theorem.
\begin{theorem} The function $\sigma_\chi(u)$ (\ref{conjsig}) is invariant with respect to symplectic transformations up to a possible multiplication by a root of unity of degree $8N$.
\end{theorem}
{\it Proof.} Take an arbitrary $\gamma\in Sp(2g,{\mathbb Z})$ of the form (\ref{gamma}) acting on a symplectic homology basis on ${\cal L}$ according to (\ref{gamma1}) and consider the ratio of the sigma-functions:
\begin{equation} \frac{\sigma_\chi^\gamma(u)}{\sigma_\chi(u)}= e^{\frac{1}{2} u^t (\eta^{\gamma}_1 (\omega_1^\gamma)^{-1}- \eta_1 \omega_1^{-1}) u}\left(\frac{{\rm det} \omega_1^\gamma}{{\rm det} \omega_1}\right) \left(\frac{{\cal F}^{\gamma}}{{\cal F}}\right)^{-3/N} \frac{\theta[{\beta^\chi}^\gamma]((2\omega^\gamma_1)^{-1}u|\mathbb{B}^\gamma)}{\theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B})}. \label{sisig} \end{equation}
According to (\ref{thgam}), we have
\begin{equation} \frac{\theta[{\beta^\chi}^\gamma]((2\omega^\gamma_1)^{-1}u|\mathbb{B}^\gamma)}{\theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B})}= \xi(\gamma,\alpha,\beta)\{{\rm det} (C\mathbb{B}+D)\}^{1/2}e^{\pi {\rm i}\left\{ z^t(C\mathbb{B} +D)^{-1} Cz\right\}}, \label{thgam1} \end{equation}
where $z=(2\omega_1)^{-1}u$.
Consider first the exponential factor in the ratio (\ref{sisig}), composed of the first multiplier in (\ref{sisig}) and the last term in the transformation of the theta-function (\ref{thgam1}). It is convenient to use the variable $z=(2\omega_1)^{-1}u$. The term in the exponent arising from (\ref{thgam1}) equals
\begin{equation} \pi {\rm i}\, z^t (C \mathbb{B} +D)^{-1}C z = \pi {\rm i}\, z^t C^t (\mathbb{B} C^t + D^t) ^{-1} z.
\label{expth} \end{equation}
On the other hand, the first term in (\ref{sisig}) can be transformed as follows:
\begin{multline*} 2 z^t\omega_1^t\left\{ (\eta_2 C^t +\eta_1 D^t)(\omega_2C^t +\omega_1 D^t)^{-1} -\eta_1 \omega_1^{-1}\right\}\omega_1 z \\ = 2 z^t\omega_1^t\left\{ (\eta_2 C^t +\eta_1 D^t) (\omega_1^{-1}\omega_2 C^t +D^t)^{-1} -\eta_1\right\} z. \end{multline*}
Recall now that $\mathbb{B}=\omega_1^{-1}\omega_2=\omega_2^t(\omega_1^t)^{-1}$, so the above expression can be rewritten as
\begin{equation} 2 z^t\omega_1^t\left\{\eta_2 C^t +\eta_1 D^t -\eta_1(\omega_2^t(\omega_1^t)^{-1} C^t +D^t)\right\} (\mathbb{B} C^t + D^t) ^{-1} z. \label{prom1} \end{equation}
Using the relation $\eta_1 \omega_2^t =\eta_2 \omega_1^t+\frac{\pi {\rm i}}{2} I_g$ from (\ref{Leg3}), we rewrite (\ref{prom1}) as
$$ -\pi {\rm i}\, z^tC^t(\mathbb{B} C^t + D^t) ^{-1} z, $$
which cancels against (\ref{expth}). Therefore, the exponential term in (\ref{sisig}) is equal to 1.
Consider the term involving $\{{\rm det} (C\mathbb{B}+D)\}^{1/2}$ in (\ref{sisig}). This term appears from the transformation law (\ref{thgam}) of the theta-function, as well as from the transformation law (\ref{transF}) of the function ${\cal F}$ and the transformation law of ${\rm det} \omega_1$; it cancels out similarly to the genus 1 case. What remains is a root of unity of degree $8$ from the transformation of the theta-function, as well as a root of unity of degree $8N$ from the transformation of ${\cal F}^{3/N}$, which altogether give a root of unity of degree $8N$ in the transformation of $\sigma_\chi$.
$\Box$
\section{Even sigma-functions in higher genus}
\label{sigmaeven}
The even elliptic sigma-functions are given by
\begin{equation} \sigma_r (u) =\frac{e^{-\eta_r u} \sigma(u+\omega_r)}{\sigma(\omega_r)} \label{sir} \end{equation}
for $r=1,2,3$, where $\omega_3=-\omega_1-\omega_2$ and $\eta_3=-\eta_1-\eta_2.$ The enumeration of the sigma-functions differs from that of the theta-functions: $\sigma$ is proportional to $\theta_1$, $\sigma_3$ is proportional to $\theta_3$ (i.e., the usual theta-function with zero characteristic), $\sigma_1$ is proportional to $\theta_2$, and $\sigma_2$ to $\theta_4$:
\begin{equation} \label{sigmas_thetas} \sigma_1(u) = {\rm exp} \left( \frac{\eta_1u^2}{2\omega_1} \right) \frac{\theta_2(\frac{u}{2\omega_1})}{\theta_2}, \qquad \sigma_2(u) = {\rm exp} \left( \frac{\eta_1u^2}{2\omega_1} \right) \frac{\theta_4(\frac{u}{2\omega_1})}{\theta_4}, \qquad \sigma_3(u) = {\rm exp} \left( \frac{\eta_1u^2}{2\omega_1} \right) \frac{\theta_3(\frac{u}{2\omega_1})}{\theta_3}, \end{equation}
where $\theta_k=\theta_k(0)$.
Let us now describe a generalization of the even sigma-functions to the higher genus case. Denote by $\mu$ an even spin line bundle on the surface ${\cal L}$ (generically such a line bundle does not admit holomorphic sections). A definition of an even sigma-function corresponding to $\mu$ inspired by (\ref{sigmas_thetas}) is less ambiguous than in the case of the odd sigma-function if we insist on carrying the normalization $\sigma_\mu(0)=1$ over to the higher genus case. Denote by $\beta^\mu$ the non-singular even characteristic corresponding (analogously to (\ref{defbetachi})) to the spin structure $\mu$ under some choice of a canonical basis of cycles $\{a_i,b_i\}_{i=1}^g$.
\begin{definition} The sigma-function corresponding to an even non-singular spin structure $\mu$ is defined by the formula
\begin{equation} \sigma_{\mu}(u)= {\rm exp} \left(\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u\right) \frac{\theta[\beta^\mu]((2\omega_1)^{-1}u|\mathbb{B})}{\theta[\beta^\mu](0|\mathbb{B})}. \label{conjsigeven} \end{equation}
\end{definition}
The periodicity properties of the even sigma-functions (\ref{conjsigeven}) coincide with the periodicity properties (\ref{perisig1}), (\ref{perisig2}) of the odd sigma-functions, where the vectors $\beta_1$ and $\beta_2$ should be replaced by the characteristic vectors corresponding to the even spin bundle $\mu$. Moreover, the even sigma-functions $\sigma_{\mu}(u)$ are invariant under any change of the canonical basis of cycles. This property is in contrast to the case of odd sigma-functions, where only $\sigma_{\chi}^{8N}(u|\mathbb{B})$ can be claimed to be invariant under any change of the canonical basis of cycles. Finally, we have
\begin{equation*} \sigma_{\mu}(0)=1 \end{equation*}
similarly to the genus 1 case.
The even sigma-function (\ref{conjsigeven}) is well-defined if
\begin{equation} \theta[\beta^\mu](0|\mathbb{B})\neq 0\;. \label{singeven} \end{equation}
Therefore, for a generic Riemann surface, when none of the theta-constants vanishes, all even sigma-functions, as well as all odd sigma-functions, are well-defined. For Riemann surfaces for which relation (\ref{singeven}) does not hold, the definition of $\sigma_\mu$ should be modified, but we do not consider this case here.
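In genus one, definitions (\ref{defM2}), (\ref{eta12}) and (\ref{conjsigeven}) can be checked numerically against the classical picture. The following Python sketch (the half-periods are an arbitrary choice) computes $\Lambda$ from the three even theta-constants, verifies that it reproduces $-\theta_1'''/(12\theta_1')$, i.e., $\omega_1\eta_1$, via Jacobi's identity, and tests $\sigma_\mu(0)=1$ together with the periodicity just stated for each even characteristic.

```python
import cmath
PI = cmath.pi

def th(alpha, beta, z, tau, M=30):
    """Genus-1 theta with characteristics [alpha; beta], truncated series."""
    return sum(cmath.exp(1j*PI*(m + alpha)**2*tau + 2j*PI*(m + alpha)*(z + beta))
               for m in range(-M, M + 1))

def th_d(alpha, beta, tau, order, M=30):
    """order-th derivative of th in z at z = 0."""
    return sum((2j*PI*(m + alpha))**order
               * cmath.exp(1j*PI*(m + alpha)**2*tau + 2j*PI*(m + alpha)*beta)
               for m in range(-M, M + 1))

w1, w2 = 0.8, 0.6j
tau = w2 / w1
EVEN = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0)]   # the N = 3 even characteristics, g = 1

# Lambda from (defM2); by Jacobi's identity it equals -theta1'''/(12 theta1') = w1*eta1
Lam = -sum(th_d(a, b, tau, 2) / th(a, b, 0, tau) for a, b in EVEN) / (4 * 3)
assert abs(Lam + th_d(0.5, 0.5, tau, 3) / (12 * th_d(0.5, 0.5, tau, 1))) < 1e-9

eta1 = Lam / w1                          # (eta12) with g = 1
eta2 = (Lam * tau - 1j * PI / 2) / w1

def sigma_mu(u, a, b):
    """Genus-1 specialization of (conjsigeven) for the even characteristic [a; b]."""
    return (cmath.exp(0.5 * u * (eta1 / w1) * u)
            * th(a, b, u / (2*w1), tau) / th(a, b, 0, tau))

u = 0.3 + 0.2j
for a, b in EVEN:
    assert abs(sigma_mu(0, a, b) - 1) < 1e-12
    # periodicity with beta_1 = a, beta_2 = b (prefactors e^{2 pi i a}, e^{2 pi i b})
    assert abs(sigma_mu(u + 2*w1, a, b)
               - cmath.exp(2j*PI*a) * cmath.exp(2*eta1*(w1 + u)) * sigma_mu(u, a, b)) < 1e-8
    assert abs(sigma_mu(u + 2*w2, a, b)
               - cmath.exp(2j*PI*b) * cmath.exp(2*eta2*(w2 + u)) * sigma_mu(u, a, b)) < 1e-8
```

The $b$-period check is sensitive to the definition of $\eta_2$ in (\ref{eta12}): replacing the $\pi{\rm i}/2$ term breaks the assertion, which is a numerical shadow of the Legendre relation (\ref{Leg3}).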
\begin{proposition} Formula (\ref{sir}) relating odd and even sigma-functions in the elliptic case admits the following generalization to higher genus:
\begin{equation} \label{even-odd} \sigma_\mu(u) = e^{ -u^t(\eta_1 n_1 + \eta_2 n_2)} \frac{\sigma_\chi(u+\omega_1 n_1 + \omega_2 n_2)}{\sigma_\chi(\omega_1 n_1 + \omega_2 n_2)}, \end{equation}
where $n_1$ and $n_2$ are integer vectors, $n_1,n_2\in{\mathbb Z}^g$, such that
\begin{equation*} \frac{1}{2} \left(\begin{array}{c}n_2 \\n_1\end{array}\right) = \beta^\mu-\beta^\chi. \end{equation*}
\end{proposition}
{\it Proof.} The proposition can be proved by the following straightforward calculation. Using definition (\ref{conjsig}) of $\sigma_\chi$, we rewrite (\ref{even-odd}) in the form:
\begin{multline*} \sigma_\mu(u) = {\rm exp} \left\{ -u^t(\eta_1n_1 +\eta_2 n_2) + \frac{1}{2} ( u^t + n_1^t \omega_1^t + n_2^t \omega_2^t ) (\eta_1 \omega_1^{-1}) (u+\omega_1n_1 + \omega_2 n_2) \right. \\ \left. - \frac{1}{2} (n_1^t \omega_1^t + n_2^t \omega_2^t ) (\eta_1 \omega_1^{-1}) (\omega_1n_1 + \omega_2 n_2) \right\} \frac{\theta[\beta^\chi] \left( (2\omega_1)^{-1} u + \frac{1}{2} n_1 + \frac{1}{2}\mathbb{B} n_2 \right) }{\theta[\beta^\chi] \left( \frac{1}{2} n_1 + \frac{1}{2}\mathbb{B} n_2 \right)}.
\end{multline*}
Simplifying the expression in the exponent and taking into account that
\begin{equation*} \theta[\beta] (z|\mathbb{B}) = {\rm exp} \{ \pi {\rm i}\, \beta_1^t\mathbb{B} \beta_1 + 2 \pi {\rm i}\, (z+\beta_2)^t\beta_1 \}\, \theta(z+\beta_2 + \mathbb{B} \beta_1|\mathbb{B}) \qquad \mbox{with}\qquad \beta=\left[\begin{array}{c}\beta_1 \\\beta_2\end{array}\right], \end{equation*}
we get
\begin{multline*} \sigma_\mu(u) = {\rm exp} \left\{ -u^t(\eta_1n_1 +\eta_2 n_2) + \frac{1}{2} u^t(\eta_1 \omega_1^{-1})u + u^t(\eta_1 \omega_1^{-1}) (\omega_1n_1 + \omega_2 n_2) - \pi {\rm i}\, u^t(2\omega^t_1)^{-1}n_2\right\} \\ \times\frac{\theta[\beta^\mu] \left( (2\omega_1)^{-1} u \right)} {\theta[\beta^\mu] ( 0)}. \qquad \qquad \qquad \end{multline*}
Further simplification of the exponential factor leads to
\begin{equation*} \sigma_\mu(u) = {\rm exp} \left\{ \frac{1}{2} u^t(\eta_1 \omega_1^{-1})u\right\} {\rm exp}\left\{ u^t\left( -\eta_2\omega_1^t + \eta_1\omega_2^t - \frac{1}{2}\pi{\rm i}\, I_g\right) (\omega^t_1)^{-1}n_2 \right\} \frac{\theta[\beta^\mu] \left( (2\omega_1)^{-1} u \right)} {\theta[\beta^\mu] ( 0)}, \end{equation*}
which due to relation (\ref{Leg3}) for the period matrices $\omega_i$ and $\eta_i$ coincides with (\ref{conjsigeven}).
$\Box$
\section{Dependence on the choice of a marked point and a local parameter}
\label{invar}
The period matrix $\omega_1$ transforms under a change of the marked point $x_0$ and of the local parameter $\zeta$ which define the distinguished basis of holomorphic differentials. This transformation also implies a transformation of the symmetric matrix (\ref{eta1ome1}) $\eta_1\omega_1^{-1} = (\omega_1^t)^{-1} \Lambda \omega_1^{-1}$ of the bilinear form which enters the exponential term in the definition of both the odd (\ref{conjsig}) and the even (\ref{conjsigeven}) sigma-functions.
However, the matrix $\Lambda$ depends only on the Riemann surface ${\cal L}$ and on the choice of a canonical basis of cycles on ${\cal L}$. This allows us to establish the coincidence of the sigma-functions corresponding to the same spin structure on ${\cal L}$ but to different bases of distinguished differentials. Namely, consider two points $x_0$ and $\tilde{x}_0$ on the surface with local parameters $\zeta$ and $\tilde{\zeta}$ in their neighbourhoods, respectively. Let $2\omega_1$ and $2\tilde{\omega}_1$ be the matrices of $a$-periods of two sets of distinguished holomorphic differentials $\{v_i^0\}$ and $\{\tilde{v}_i^0\}$ such that the differentials $\{v_i^0\}$ have expansions (\ref{defvjw}) at the point $x_0$ in $\zeta$ and the differentials $\{\tilde{v}_i^0\}$ have expansions (\ref{defvjw}) at the point $\tilde{x}_0$ in $\tilde{\zeta}$. Both sets of differentials form bases of the space of holomorphic differentials on ${\cal L}$ and, therefore, are related by a linear transformation: $ (\tilde{v}_1^0, \dots, \tilde{v}_g^0)^t = Q(v_1^0, \dots, v_g^0)^t$ with some matrix $Q$, i.e., \begin{equation*} \tilde{\omega}_1=Q \omega_1\;. \end{equation*} Then the corresponding odd sigma-functions $\sigma_\chi(u)$ and $\tilde{\sigma}_\chi(\tilde{u})$ (\ref{conjsig}) read as follows: $$ \sigma_\chi(u)= {\cal F}^{-3/N}\, {\rm det}(2\omega_1)\, {\rm exp} \left(\frac{1}{2} u^t (\omega_1^t)^{-1} \Lambda \omega_1^{-1} u\right)\, \theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B}) $$ and $$ \tilde{\sigma}_\chi(\tilde{u})= {\cal F}^{-3/N}\, {\rm det}(2\tilde{\omega}_1)\, {\rm exp} \left(\frac{1}{2} \tilde{u}^t (\tilde{\omega}_1^t)^{-1} \Lambda \tilde{\omega}_1^{-1} \tilde{u}\right)\, \theta[\beta^\chi]((2\tilde{\omega}_1)^{-1}\tilde{u}|\mathbb{B})\;.
$$ Therefore, \begin{equation*} \tilde{\sigma}_\chi(\tilde{u})= \{{\rm det}\, Q\}\,\sigma_\chi(Q^{-1} \tilde{u}), \end{equation*} or \begin{equation*} \tilde{\sigma}_\chi(\tilde{u})= \{{\rm det}\, Q\}\,\sigma_\chi(u), \end{equation*} if we put $\tilde{u}=Qu$. The even sigma-functions are related by an even simpler transformation (with $\tilde{u}=Qu$): \begin{equation*} \tilde{\sigma}_\mu(\tilde{u})= \sigma_\mu(u). \end{equation*} Thus the sigma-functions corresponding to two different sets of distinguished holomorphic differentials are essentially equivalent. We note, however, that the matrix $Q$ depends on the moduli of the Riemann surface, on the choice of the base points $x_0$ and $\tilde{x}_0$, and on the choice of the local parameters near $x_0$ and $\tilde{x}_0$. \section{Sigma-function as a function on a Riemann surface} \label{sect_onRS} One can consider the sigma-function as a function of a point on a Riemann surface, just like the Riemann theta-function. We recall that the Riemann theta-function, $\theta(z)$, for $z={\cal U}(x)-{\cal U}(D_g)-K$, where $D_g$ is a positive non-special divisor of degree $g$, has $g$ zeros on ${\cal L}$, and these zeros lie at the points of the divisor $D_g$. (Here we put ${\cal U}={\cal U}_{x_0}$ and $K=K^{x_0}$.) What is a natural definition of the sigma-function on a Riemann surface? Naturally, we should take the sigma-function $\sigma_\chi(u)$ constructed above and set $(2\omega_1)^{-1}u+\mathbb{B}\beta_1+\beta_2={\cal U}(x)-{\cal U}(D_g)-K$ for some choice of a divisor $D_g$. A natural set of $g$ points in our construction consists of the base point $x_0$ and the divisor $D$; therefore, we choose $D_g=x_0+D$. Then, according to the definition (\ref{defbetachi}) of the characteristic $\beta^\chi$, we have $u=2\omega_1({\cal U}(x)-{\cal U}(x_0))=2\omega_1{\cal U}(x)$. Equivalently, we can introduce the ``modified'' Abel map \begin{equation*} U(x)=2\omega_1{\cal U}(x)\;.
\end{equation*} The components of $U(x)$ are given by integrals of the ``distinguished'' differentials $v^0_j$: \begin{equation*} U_j(x)=\int_{x_0}^x v^0_j. \end{equation*} Then the odd sigma-function on the Riemann surface is given by: \begin{equation*} \sigma_\chi(x):=\sigma_\chi(U(x))\;. \end{equation*} Alternatively, using the representation (\ref{conjsig}) of $\sigma_\chi$ in terms of the theta-function, we have: \begin{equation*} \sigma_\chi(x)= {\cal F}^{-3/N}{\rm det}(2\omega_1)\, e^{2 \,{\cal U}^t(x) \, \Lambda \, {\cal U}(x)} \theta[\beta^\chi]({\cal U}(x)|\mathbb{B}). \end{equation*} The function $\sigma_\chi(x)$ has zeros at the points $x_0$ and $P_1,\dots,P_{g-1}$ (where $D=P_1+\dots+ P_{g-1}$). If we divide it by the spinor $h_\chi(x)$, we get a non-single-valued $-1/2$-form \begin{equation*} \hat{\sigma}_\chi(x)=\frac{\sigma_\chi(x)}{h_\chi(x)}, \end{equation*} which has only one zero, at $x=x_0$. This object is similar to the usual prime-form \begin{equation*} E(x,x_0)=\frac{\theta[\beta^\chi]({\cal U}(x)-{\cal U}(x_0))}{h_\chi(x)\,h_\chi(x_0)}\;. \end{equation*} The difference is that $ \hat{\sigma}_\chi(x)$ is a $-1/2$-form with respect to $x$, while $x_0$ plays the role of a parameter. In this sense $ \hat{\sigma}_\chi(x)$ is similar to the $-1/2$-form \begin{equation*} e(x):=E(x,x_0)h_\chi(x_0)=\frac{\theta[\beta^\chi]({\cal U}(x)-{\cal U}(x_0))}{h_\chi(x)}\;. \end{equation*} In contrast to $e(x)$, which transforms non-trivially under modular transformations, our $-1/2$-form, raised to the power $8N$, is modular-invariant. Note also that the second derivative of the logarithm of $\sigma_{\chi}(x-y)$ for two points $x,y\in{\cal L}$ is the following symmetric bidifferential (a differential on ${\cal L}\times{\cal L}$) holomorphic everywhere except for a pole of order $2$ on the diagonal $x=y$: \begin{equation} \label{bidiff} d_x d_y {\rm log}\,\sigma_\chi(x-y) = W(x,y) -4{\bf v}^t(x)\,\Lambda\, {\bf v}(y)\;.
\end{equation} Here ${\bf v} = (v_1,\dots,v_g)^t$ is the vector of holomorphic 1-forms on ${\cal L}$ normalized via the relations $\int_{a_i}v_j=\delta_{ij}$, and $W(x,y):=d_xd_y\log E(x,y)$ is the symmetric bidifferential with a double pole along $x=y$ and holomorphic everywhere else, normalized via $\int_{a_i}W(x,y)=0$ for any $i=1,\dots,g$, the integration being taken with respect to either of the arguments. The bidifferential \begin{equation} W_{Klein}(x,y)=W(x,y) -4{\bf v}^t(x)\,\Lambda\, {\bf v}(y) \label{WKlein} \end{equation} from the right-hand side of (\ref{bidiff}) was introduced by F.~Klein in \cite{Klein2} (see also the discussion in J.~Fay's book \cite{Fay73}, p.~22). The bidifferential $W_{Klein}$ is independent of the choice of homology basis defining the bidifferential $W(x,y)$, the holomorphic differentials $v_i$ and the Riemann matrix $\mathbb{B}$. This independence can be derived from the transformation (\ref{Lambda_transform}) of the matrix $\Lambda$ and the following transformation of the bidifferential $W$ under a change (\ref{gamma1}) of the canonical homology basis (see \cite{Fay92}, p.~10): \begin{equation*} W^\gamma(x,y) = W(x,y) - 2\pi {\rm i}\, {\bf v}^t(x) (C\mathbb{B} +D)^{-1} C{\bf v}(y). \end{equation*} \section{Hyperelliptic sigma-functions} \label{sechyper} Here we specialize the general construction presented above to the subspace of hyperelliptic Riemann surfaces, i.e., Riemann surfaces possessing a meromorphic function of degree $2$. Any such Riemann surface ${\cal L}$ is biholomorphically equivalent to an algebraic curve of the form \begin{equation*} \nu^2=\prod_{j=1}^{2g+2} (\lambda-\lambda_j). \end{equation*} For definiteness, we choose the canonical basis of cycles $\{a_i,b_i\}_{i=1}^g$ on ${\cal L}$ in the same way as in Chapter III, \textsection 8 of \cite{Tata}.
The point $x_0$ entering our construction can be chosen to coincide with $\infty^{(1)}$ (a point corresponding to $\lambda=\infty$ where $\nu\sim\lambda^{g+1}$) and the local parameter $\zeta = 1/\lambda$. The basis of holomorphic differentials $v_i^0$ can in this case be chosen as follows: \begin{equation} v_i^0=-\frac{\lambda^{g-i}d\lambda}{\nu}, \qquad i=1,\dots, g. \end{equation} The corresponding matrix of periods is $(2\omega_1)_{ij}=\int_{a_i} v_j^0$, and the other matrices $\omega_2$ and $\eta_{1,2}$ are given by (\ref{Bper}), (\ref{eta12}). The non-vanishing theta-constants in the hyperelliptic case correspond to even characteristics $\beta^\mu=[\beta_1^\mu,\beta_2^\mu]$ which can be constructed via partitions of the set $S=\{1,\dots,2g+2\}$ into two subsets: $T=\{{i_1},\dots,{i_{g+1}}\}$ and $S\setminus T=\{{j_1},\dots,{j_{g+1}}\}$ (see \cite{Tata}); therefore, $N=\frac{1}{2}\binom{2g+2}{g+1}$. Then the corresponding theta-constant is given by Thomae's formula \begin{equation} \theta[\beta^\mu]^4(0|\mathbb{B})=\pm {\rm det}^2\,(2\omega_1)\prod_{i,j\in T, \;i<j}(\lambda_i-\lambda_j)\,\prod_{i,j\not\in T,\; i<j}(\lambda_i-\lambda_j). \label{Thomae} \end{equation} Therefore \begin{equation*} {\cal F}=\prod_{\beta^\mu }\theta[\beta^\mu](0|\mathbb{B})=\epsilon\, [{\rm det}\,(2\omega_1)]^{N/2} \prod_{i,j=1,\,i< j}^{2g+2}(\lambda_i-\lambda_j)^{\frac{Ng}{4(2g+1)}}, \end{equation*} where $\epsilon^8=1$. Thus we can transform the definition of the odd sigma-function (\ref{conjsig}) to the form \begin{equation*} \sigma_\chi(u)= ({\rm det}\,(2\omega_1))^{-1/2}\prod_{i<j} (\lambda_i-\lambda_j)^{-3/8}\,{\rm exp} \left(\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u\right) \theta[\beta^\chi]((2\omega_1)^{-1}u|\mathbb{B}).
\end{equation*} The even hyperelliptic sigma-function $\sigma_\mu$ (\ref{conjsigeven}) can be rewritten as follows using (\ref{Thomae}): \begin{equation*} \sigma_{\mu}(u)= \epsilon\; {\rm exp} \left(\frac{1}{2} u^t (\eta_1 \omega_1^{-1}) u\right) \frac{\{{\rm det}\,(2\omega_1)\}^{-1/2}\theta[\beta^\mu]((2\omega_1)^{-1}u|\mathbb{B})}{\prod_{i<j; i,j\in T}(\lambda_i-\lambda_j)^{1/4}\,\prod_{i<j; i,j \not\in T}(\lambda_i-\lambda_j)^{1/4}}, \end{equation*} where $\epsilon$ is a root of unity of degree $8$ which has to be chosen to provide the normalization $\sigma_{\mu}(0)=1$. We see that on the space of hyperelliptic curves the sigma-functions with even characteristics are always well-defined as long as the curve remains non-degenerate. \section{Open questions} There are several interesting problems related to the results of this work. First, the role of sigma-functions in the theory of integrable systems should be further clarified; results of previous works \cite{BLE1} suggest that, due to their modular invariance, the sigma-functions might have various advantages over theta-functions in describing algebro-geometric solutions of integrable systems. The issue of modular invariance should also be resolved completely. Namely, according to our present results, the sigma-function is invariant under modular transformations up to multiplication by certain roots of unity. In particular, our present framework allows the genus one sigma-function to be multiplied by a root of unity of degree 24 under modular transformations, while it is well known that this root of unity is in fact absent. Therefore, one may wonder whether it is possible to better control this root of unity in the higher genus case. This issue might be quite subtle, since even the explicit computation of the transformation of the Dedekind eta-function under the action of the modular group is rather non-trivial (see \cite{Atiyah} for a review of this topic).
In the case of Riemann surfaces admitting a holomorphic differential with a zero of the highest order ($2g-2$), and in particular for the $(n,s)$ curves studied in previous works, it is desirable to establish an explicit relationship between our construction and the previous ones. In particular, one could expect that the Taylor expansion of the sigma-functions near zero should involve an appropriate Schur polynomial, similarly to \cite{BLE2,HE}. {\bf Acknowledgements.} We thank V. Enolskii and A. Kokotov for important discussions. We are grateful to S. Grushevsky for attracting our attention to \cite{farkas}. We are grateful to the anonymous referee for useful comments. DK thanks the Max-Planck-Institut f\"ur Mathematik in Bonn, where the main part of this work was completed, for support, hospitality and excellent working conditions. The work of DK was partially supported by the Concordia Research Chair grant, NSERC, NATEQ and the Max-Planck Society. The work of VS was supported by FQRNT, NSERC and the University of Sherbrooke.
\section{Additional ablation results} In this section, we present more comparison results from the ablation experiments, showing the impact of $L_{point}$ and $L_{line}$, respectively. In \refFig{fig:appendix_point_pascal} to \refFig{fig:appendix_point_city}, we visualize the category-level potential fields of the ground truth and of the predictions with and without our $L_{point}$ (second to fourth rows). One can observe that the proposed point loss significantly improves the prediction consistency in each class channel and thus benefits practical semantic segmentation. In \refFig{fig:appendix_line_pascal} to \refFig{fig:appendix_line_city}, we similarly visualize the category-level equipotential line regions of the ground truth and of the predictions with and without our $L_{line}$; we see that the equipotential line loss $L_{line}$ enforces category-level contour learning, helping the network refine its predictions in semantic boundary areas. \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_point_pascal.pdf} \caption{Example results of the point loss on Pascal VOC 2012. We respectively visualize the potential field of the ``person", ``bird", ``cow" category of the input image.} \label{fig:appendix_point_pascal} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_point_city.pdf} \caption{Example results of the point loss on Cityscapes. We respectively visualize the potential field of the ``human", ``car", ``bus" category of the input image.} \label{fig:appendix_point_city} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_line_pascal.pdf} \caption{Example results of the equipotential line loss on Pascal VOC 2012.
We respectively visualize the equipotential line region of the ``cow", ``chair", ``bird" category of the input image.} \label{fig:appendix_line_pascal} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_line_city.pdf} \caption{Example results of the equipotential line loss on Cityscapes. We respectively visualize the equipotential line region of the ``car", ``rider", ``bus" category of the inputs.} \label{fig:appendix_line_city} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_pascal_psa.pdf} \caption{Qualitative comparison of segmentation results on the Pascal VOC 2012 validation set.} \label{fig:appendix_pascal_psa} \vspace{-10pt} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_pascal_psp.pdf} \caption{Qualitative comparison of segmentation results on the Pascal VOC 2012 validation set.} \label{fig:appendix_pascal_psp} \vspace{-10pt} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_pascal_deep.pdf} \caption{Qualitative comparison of segmentation results on the Pascal VOC 2012 validation set.} \label{fig:appendix_pascal_deep} \vspace{-10pt} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_city_psanet.pdf} \caption{Qualitative comparison of segmentation results on the Cityscapes validation set.} \label{fig:appendix_city_psa} \vspace{-10pt} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_city_pspnet.pdf} \caption{Qualitative comparison of segmentation results on the Cityscapes validation set.} \label{fig:appendix_city_psp} \vspace{-10pt} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=8.5cm, height=4cm]{./Figures/appendix_city_deeplab.pdf} \caption{Qualitative comparison of segmentation results on the Cityscapes validation set.} \label{fig:appendix_city_deep} \vspace{-10pt} \end{figure} \section{Additional semantic segmentation results} In this part, we present more visual results of the semantic segmentation experiments. We specifically choose images with challenging scenes, such as inter-/intra-class cases and images with misleading backgrounds; the segmentation performance is demonstrated in \refFig{fig:appendix_pascal_psa} to \refFig{fig:appendix_city_deep}. \end{document} \section{Introduction} \label{sec:introduction} \IEEEPARstart{E}{xisting} deep semantic segmentation approaches \cite{FCN,unet,segnet,wsss,channel_cvpr21} are usually trained with the cross-entropy (CE) loss for multi-class classification. In the training phase, this loss measures the mismatch between the areas determined by the probability estimation from the neural network and the areas defined by the ground-truth semantic label. However, inherent drawbacks of the existing benchmarks may prevent CE from achieving better performance, especially in semantic boundary areas. The first disadvantage relates to label noise. Studies \cite{noise_2,noise_3} show that ground-truth annotations misaligned with the real object edges \cite{noise_1,noise_2,noise_3} mislead CE and result in low segmentation accuracy in boundary regions. At the same time, the inherent inductive bias \cite{bias1, bias2} of convolutional neural networks (CNNs) is also a major barrier preventing CE from learning clear semantic boundaries. In this work, we propose a novel method to help CE handle the boundary segmentation problem. We identify two types of ``semantic boundary areas" (shown in \refFig{fig:boundaryexample}): inter-class and intra-class.
The inter-class case refers to transition areas between different categories; these categories (first row) either have similar visual characteristics (patterns/textures) or have strong semantic relationships. Intra-class cases (second row) often arise when segmenting multiple instances of the same category in a small area, particularly for objects with complicated contours. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Example_figure1.pdf} \end{center} \caption{Demonstration of the semantic boundary problem. (c) shows the prediction from DeepLabv3+ \cite{Deeplabv3plus}, and (d) shows the output when combined with the EPL module. The two rows respectively demonstrate the segmentation difficulties in inter-class and intra-class boundary areas.} \label{fig:boundaryexample} \end{figure} To this end, we present a novel framework to address the semantic boundary segmentation problem. Our central idea comes from two observations: \begin{itemize} \item People get a good visual understanding of objects in real life by changing the relative observation distance or varying the viewing direction/perspective (\refFig{fig:motivations} (a)). We conclude that better visual understanding combines observations from various positions and perspectives. To this end, we propose a novel operator (anisotropic convolution) to expand the semantic labels and a loss function to refine the segmentation estimation in different directions. \item Objects (e.g., the animals in \refFig{fig:boundaryexample}) with similar geometric appearances are usually classified into the same categories. In real life, people keep those characteristics in mind and use them as empirical evidence to recognize new species (\refFig{fig:motivations} (b)). In this work, we suggest learning category-level contours to achieve this effect.
\end{itemize} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Mind_line.pdf} \end{center} \caption{(a) This work takes inspiration from daily visual observation: changing the relative distance $D$ and the perspective $S$ contributes to better object-contour recognition. (b) Category-level contour information is an important cue for image classification. For example, people would categorize tigers and leopards as cat species by their external contours, even though they have different textures.} \label{fig:motivations} \end{figure} Specifically, we propose the equipotential learning (EPL) module, which refines the segmentation predictions in a new domain, termed the potential domain. Following the deep segmentation tradition \cite{FCN,segnet,unet}, we define the semantic segmentation task as a pixel-wise classification problem in the conventional probability domain and then propose to convert the probability field into a potential field with anisotropic convolution to obtain visual observations from multiple perspectives. We implement two loss functions for refining the segmentation prediction in the potential field, the point loss and the equipotential line loss, respectively performing anisotropic regression and category-level contour learning. In summary, our contributions are as follows: \begin{itemize} \item We design the anisotropic convolution, a novel operator that converts the deep semantic segmentation problem to a self-defined potential domain, aiming to optimize neural network predictions from different directions. To the best of our knowledge, this is the first time this idea has been used to address the boundary segmentation problem. \item In the potential domain, we build a point loss, requiring the segmentation predictions to fit the real image content anisotropically, which enforces the consistency between predicted pixels and their nearby decision boundaries.
\item We present an equipotential line loss to learn each category's contour. This loss specifically targets the edge regions and optimizes the corresponding predictions for better boundary segmentation performance. \item Experimental results on Pascal VOC 2012 and Cityscapes demonstrate that our method helps current segmentation solutions \cite{pspnet,psanet,ccnet,gscnn} refine their predictions in semantic boundary areas. \end{itemize} \section{Related Work} \label{sec:reletedwork} \noindent\textbf{FCN methods for semantic segmentation.} FCN \cite{FCN} refers to networks adopting convolutional layers throughout the architecture. With more similar methods \cite{unet,segnet,DeepLabv2} proposed, FCN has become the standard practice in the image segmentation community. Also, methods like conditional random fields (CRFs) \cite{DeepLabv2} and point-based sampling \cite{efficient,pointrend,hardpixel} have been proposed to strengthen FCN models' segmentation ability on decision boundaries. However, building specialized operators brings extra computational costs, and these studies perform relatively weakly at learning category-level characteristics. In this work, the EPL module maps each category to an independent channel, learning category-level characteristics without increasing the parameter size.\\ \noindent\textbf{Distance field regression.} The distance field (DF) is well known in the computer vision and graphics community. The value of a point in the field is defined as the distance to the nearest boundary, enabling high-quality feature representation. Audebert \textit{et al.} \cite{audebert} design a multi-task model for FCN, which requires feature maps to fit a DF in addition to estimating probabilities. Recently, Xue \textit{et al.} \cite{xue} suggested that networks fit the signed distance field (SDF) directly and then use a smoothed Heaviside function to turn the distance predictions into probabilistic predictions.
By incorporating information from the DF, one can effectively regulate the layout of segmentation results. However, the primary issue of field-based studies~\cite{regression1,regression2,xue,IABL} is that the DF itself does not carry any category identity information, which may mislead the neural network when learning multiple categories jointly. To address this issue, we borrow the concept of the ``distance field" but map category-level image contents to independent spaces and then learn the contour information in the potential domain. \noindent\textbf{Boundary supervision}: Many loss functions are specifically designed for boundary supervision. For instance, the Discriminative Feature Network (DFN) \cite{yu} employs an edge detection step for feature maps and tries to match the edge map \cite{focalloss} with a sigmoid loss. In the medical image segmentation community, the Dice loss \cite{diceloss} is widely used to solve the class-imbalance problem in the boundary region. Another loss, proposed in \cite{bound_loss}, re-calculates the distance field's metric in an integral, regional way and achieves great success on the binary organ segmentation task. Our work uses the equipotential lines in the potential domain as boundary supervision and learns all categories' contours with the proposed line loss. \section{Methodology} \label{sec:method} This section first introduces the anisotropic convolution, an operator that converts the semantic segmentation problem from the probability domain to the potential domain (Probability $\rightarrow$ Potential). Secondly, we elaborate on fitting anisotropic observations and performing category-level contour learning in the potential domain. Finally, we plug EPL into FCN models to improve their boundary segmentation performance.
\subsection{Preliminaries} Before going into the details of the anisotropic convolution, we clarify two important concepts used throughout the paper. \begin{itemize} \item \textbf{Field.} We use ``field" to represent the basic unit for domain conversion. The predicted probability fields refer to the probability estimations from the FCN, and the ground-truth probability fields are the one-hot encodings of the label annotations. Similarly, we call the conversion results in the potential domain the predicted/ground-truth potential fields. \item \textbf{Potential energy.} We define the potential energy as a pixel's numerical value in a potential field. By performing domain conversion in the training phase, the anisotropic convolution converts the pixel-level probability estimations to potential energies in different directions. \end{itemize} \subsection{Anisotropic Convolution for Domain Conversion} \label{section:ac} To implement the domain conversion, we introduce the anisotropic convolution (AC), a differentiable convolutional operator that proceeds in specific directions. Using probability fields as input, AC extends the image content to obtain its anisotropic semantic extensions. A general AC operator consists of a filtering kernel $W$ and an anisotropic splitter $S$, corresponding to the variables of ``relative distance" and ``perspective" in the visual observation process. We let $X$ and $Y$ ($Y\in [0,1]$) stand for the input images and their ground-truth probability fields. In the supervised $K$-class semantic segmentation task, we train a network $f$ with parameters $\theta$.
$\hat{Y}=\{{\hat{y_{1}},\hat{y_{2}},...,\hat{y_{K}}}\}$ denotes the category-level probability estimation, where \begin{equation} \hat{Y}=f_{\theta}(X). \label{eq:base_equation} \end{equation} For any point $\hat{y}^{p}\in \hat{Y}$ (where $p\in P$ and $P$ is the spatial coordinate set of $Y$ and $\hat{Y}$), we get its energy $E(\hat{y}^{p})$ in the potential domain by performing the conversion on its $w \times w$ neighborhood space $V^{\hat{p}}$. Formally, we express this process as \begin{equation} E(\hat{y}^{p})= V^{\hat{p}}*(W \circ S), \label{eq:potential_energy} \end{equation} where $V^{\hat{p}}$ has the same size as the kernel $W$, and $*$ and $\circ$ denote the convolution and the Hadamard product, respectively. We also apply the same conversion to all $y^{p}\in Y$ to get the ground truth $E(y^{p})$. In real life, we usually adjust the perspective to get different views of an object, since anisotropic observations help us better understand the object. In our case, the changeable perspective is realized by letting the splitter contain different direction vectors. For instance, the splitter $S$ in \refFig{fig:example_of_ac} consists of four elements ($S=\{s_{1},s_{2},s_{3},s_{4}\}$), denoting the directions up, down, left, and right, respectively. In the experiment section, we test three splitters $A$, $B$, and $C$ (shown at the bottom of \refFig{fig:example_of_ac}), containing 4, 4, and 8 directions, respectively, to explore the effect of $S$ on semantic segmentation. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Example_figure3.pdf} \end{center} \caption{Example of the domain conversion with a $5\times5$ anisotropic convolution (AC). Here, AC includes four directions (referring to splitter $A$) and expands the content contained in the probability field (\textbf{Prob. Field}) in the specified directions. The potential energy of the resulting potential fields (\textbf{Pot.
Fields}) ranges from 0 to 3, and we label their distribution with different colors. In our experiments, we test all three splitters ($A$, $B$, and $C$) in ablation studies to assess their effectiveness.} \label{fig:example_of_ac} \end{figure} To mitigate the training difficulty and reduce the computational overhead, we give the filtering kernel $W$ a box pattern \cite{box_filter} and keep the weights of both $W$ and $S$ fixed in the training phase. Hence, AC does not increase the parameter budget of $\theta$ in the full conversion process. \refFig{fig:example_of_ac} presents a domain conversion example, where a $7 \times 7$ probability field is converted to four potential fields in different directions using a $5 \times 5$ AC operator. We think of each potential field as an observation of the input semantic part (shown in orange) in one direction. Generally, domain conversion has two benefits: \begin{itemize} \item \textbf{Visual:} In \refFig{fig:example_of_ac}, we observe that the category-level semantics are extended in the preset directions. We think of each potential field as a view in a specific direction. In the training phase, after converting the ground-truth probability fields to potential fields, one can obtain anisotropic observations of the real image content by integrating the context information in all potential fields. \item \textbf{Physical:} In standard deep segmentation practice, the neural network can only get supervision information from the semantic labels. AC spreads the image context anisotropically and maps the sparse contour label to a dense annotation in the potential fields. $E(Y)$ provides more comprehensive and stronger supervision for the input $X$ than $Y$, especially in the boundary areas. Besides, domain conversion enables the network $f$ to explore a broader solution space, since we can simultaneously optimize $f$ in the probability and potential domains.
\end{itemize} Besides, the AC operator enables users to flexibly change the ``observation distance" and ``perspective" variables by adjusting $W$ and $S$, based on the properties of the object instances (refer to Sec. \ref{sec:evluation} for details). \subsection{Loss functions} This section presents the point loss and the equipotential line loss that enforce the optimization in the potential domain. The first term implements the anisotropic field regression throughout the potential domain, while the second loss aims to precisely learn each category's contour. \subsubsection*{\bf{Point Loss for anisotropic field regression}} We propose a point loss that fits the ground-truth potential fields $E(Y)$ in all directions to learn the contextual information in the potential domain. This loss regresses the fields in the potential domain globally; therefore, the information learned from the potential domain corrects prediction errors in the probability domain. Also, the potential energy integrates information from the neighborhood space (of the same size as $W$) for better optimization when the loss function regresses points on the semantic boundary. In other words, the gradient computed in the potential domain at $y^{p}$ pushes the main segmentation network, through backpropagation, to refine the neighborhood predictions involving $y^{p}$ in the probability domain, enhancing the prediction consistency among pixels. Formally, we compute the point loss $L_{point}$ by averaging the error in each direction $s\in S$: \begin{equation} L_{point}=\frac{1}{|S|}\displaystyle\sum_{s\in S}\sum_{p\in P}||E_{s}(y^{p})-E_{s}(\hat{y}^{p})||. \label{eq:field_loss} \end{equation} In the experiment section, we instantiate the point loss with the $L_1$ and $L_2$ norms and test their effectiveness.
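As a concrete illustration, the domain conversion and the point loss above can be sketched in NumPy. The directional kernels below are one plausible reading of $W \circ S$ (a box filter masked by the four directions of splitter $A$, implemented as one-sided shifts of up to $\lfloor w/2 \rfloor$ pixels); the function names are ours, not from the paper's released code.

```python
import numpy as np

def shift(a, dy, dx):
    """Shift a 2-D array by (dy, dx), filling vacated cells with zeros."""
    H, W = a.shape
    out = np.zeros_like(a)
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        a[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def potential_fields(prob, w, dirs=((-1, 0), (1, 0), (0, -1), (0, 1))):
    """Convert one probability field (H x W) into |S| potential fields.

    Interpretation of E = V * (W o S): in direction s, the energy at a
    pixel accumulates the probabilities of the floor(w/2) pixels reached
    by stepping along s, i.e. a box kernel masked to one direction."""
    r = w // 2
    return [sum(shift(prob, k * dy, k * dx) for k in range(1, r + 1))
            for dy, dx in dirs]

def point_loss(E_gt, E_pred):
    """L_point with the L1 norm: mean error over directions and pixels."""
    return float(np.mean([np.abs(g - p).mean()
                          for g, p in zip(E_gt, E_pred)]))
```

For a binary ground-truth channel containing a filled region, each field forms a ramp of energies $\lfloor w/2 \rfloor, \dots, 1$ just outside the region's edge in one direction, which is the dense boundary supervision the text describes.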
In conclusion, the potential fields denote anisotropic observations of the image content; when $L_{point}$ regresses the prediction to fit the ground-truth potential fields, the segmentation network seeks an anisotropically-stable state and therefore achieves a global semantic balance. \subsubsection*{\bf{Equipotential Line Loss for category-level contour learning}} Although $L_{point}$ helps improve the prediction consistency, it also brings a risk of blurred boundaries because of predicted in-between values, i.e., close to 0.5, in the potential fields (see the blurred part in \refFig{fig:Abl_Point_pascal}). These in-between values lead to an intra-class indistinction problem in the category channels. Besides, the point loss, designed for global regression, lacks an explicit contour-learning effect and thus fails to fully utilize the semantic boundary information in the potential domain. To solve this issue and enable contour learning, we present an equipotential line loss to strengthen the optimization in the object boundary-related region. In the ground-truth potential fields converted by the $w\times w$ AC, we define an equipotential line as a set of points having an equal energy value in the range of $[1,\lfloor \frac{w}{2}\rfloor]$. After domain conversion, the resulting ground-truth $E(Y)$ always follows a discrete distribution in $[0,\lfloor \frac{w}{2}\rfloor]$. By contrast, the predicted $E(\hat{Y})$'s distribution has a continuous shape since it takes the probability estimation $\hat{y}$ as input. Once matched with the ground-truth equipotential lines, the network predicts concrete category-level contours. Therefore, we encourage $E(\hat{Y})$ to specifically learn the equipotential lines, which carry affluent contour information. We provide an example in \refFig{fig:Line_loss_match_bin}, showing how to learn a single category's contour with $L_{line}$.
With the $7\times7$ AC operating in direction $s$, we get three equipotential lines (marked with different colors). As observed in \refFig{fig:Line_loss_match_bin}, all equipotential lines are closely located around the dog's real edge and can be used to depict its contour. We generalize this example to an arbitrary $w\times w$ AC, quantify the line loss between the predicted and ground-truth equipotential lines in all directions, and learn the category-level contours. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/line_demonstration.pdf} \end{center} \caption{Example of the equipotential line loss for category-level contour learning with $7\times 7$ AC. We take the ``dog'' category as the example and visualize its ground-truth and predicted equipotential lines (in the middle and right figures) in direction $s$, termed $l_{s}$ and $\hat{l}_{s}$, and mark the energy distribution with the colors in the bar. To learn the dog's contour, $L_{line}$ optimizes the mismatch region between $l_{s}$ and $\hat{l}_{s}$, in any $s \in S$, in the range [1,$\lfloor \frac{w}{2} \rfloor$] ($\lfloor \frac{w}{2} \rfloor=3$ in this example).} \label{fig:Line_loss_match_bin} \end{figure} \textbf{Loss formulation:} Before elaborating on $L_{line}$'s formulation, we need to instantiate the ground-truth/predicted equipotential line regions (denoted $L$ and $\hat{L}$) in $E(Y)$ and $E(\hat{Y})$. Formally, we index $E(Y)$ in ascending order and compose $L$ by iteratively assigning $E(Y)$'s points to equipotential lines according to their energy values. When generalizing to the category-level case, for each category $i\in K$, its specified equipotential line region $l_{i,s}$ in direction $s\in S$ is represented as $l_{i,s}=\{l_{i,s}^{1},...,l_{i,s}^{\lfloor\frac{w}{2}\rfloor}\}$; the superscript denotes the energy.
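The composition of the line regions described above can be sketched as follows (a toy NumPy version assuming the ground-truth energies are integers in $[0,\lfloor w/2\rfloor]$; `line_regions` is an illustrative name):

```python
import numpy as np

def line_regions(E, w):
    """Compose the equipotential line regions of one category/direction pair:
    group the points of a ground-truth potential field E by their integer
    energy value in [1, floor(w/2)] (illustrative sketch)."""
    return {tau: np.argwhere(E == tau) for tau in range(1, w // 2 + 1)}
```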
In order to fit the continuous prediction with the discrete ground truth, we assume that each line in $\hat{l}_{i,s}$ is composed of the same number of points as its ground truth in $l_{i,s}$. Therefore, we reorder and index $E(\hat{Y})$ to let $\hat{l}_{i,s}$ be an equal-count counterpart of $l_{i,s}$. It means that for any integer $\mu \in [1,\lfloor \frac{w}{2} \rfloor]$, we have: \begin{equation} |l_{i,s}^{\mu}|=|\hat{l}_{i,s}^{\mu}|, \label{eq:equ_count} \end{equation} where $|\cdot|$ denotes the number of points in the individual line. To formulate the loss, we draw inspiration from the dice loss \cite{diceloss} and optimize the equipotential line loss $L_{line}$ in all directions by enlarging the intersection between each $\langle l_{i,s}$, $\hat{l}_{i,s} \rangle$ pair. \textbf{Algorithm~\ref{alg:line_Loss}} presents the implementation details of $L_{line}$. To compute the intersection part, we apply an exponential activation with factor $\mu$ to punish large mismatches and then measure the overlapped area between $l_{i,s}$ and $\hat{l}_{i,s}$. Additionally, we use a constant value $C$ to normalize the equipotential dice coefficient (EDC) within $[0,1]$, as in the dice loss. It is clear that $L_{line}$ decreases when the corresponding lines in $l_{i,s}$ and $\hat{l}_{i,s}$ match with each other.
\begin{algorithm}[t] \setstretch{1.1} \caption{Equipotential Line Loss} \label{alg:line_Loss} \hspace*{\algorithmicindent} \textbf{Input:} the ground-truth/predicted line regions $L,\hat{L}$, and the exponential activation factor $\mu$.\\ \hspace*{\algorithmicindent} \textbf{Output:} $L_{line}$ \begin{algorithmic}[1] \State Initialize $L_{line}$ = 0; \For {each category $i$ in K} \For {each direction $s$ in S} \State{$l_{i,s},\hat{l}_{i,s}\leftarrow$ $L[i][s]$ , $\hat{L}[i][s]$;}\algorithmiccomment{Category-level unit} \For {$\tau \leftarrow 1$ to $\lfloor \frac{w}{2} \rfloor$} \State {$d_{i,s} \leftarrow e^{-(l_{i,s}-\tau)^{\mu}}$}; \algorithmiccomment{Exponential activation} \State {$\hat{d}_{i,s} \leftarrow e^{-(\hat{l}_{i,s}-\tau)^{\mu}}$}; \State {Represent the intersection area:} \Statex\qquad\qquad\qquad{$IoU_{i,s}\leftarrow ||d_{i,s}\cdot\hat{d}_{i,s}||_{1} $;} \State {Compute the normalization constant:} \Statex \qquad \qquad\qquad{$C\leftarrow\frac{||d_{i,s}||_{1}}{||d_{i,s}\cdot d_{i,s}||_{1}}$;} \Statex \qquad \qquad \qquad {$EDC_{i,s}^{\tau}\leftarrow \frac{2\cdot C\cdot IoU_{i,s}}{||d_{i,s}||_{1}+||\hat{d}_{i,s}||_{1}}$}\algorithmiccomment{Coefficient} \Statex \qquad \qquad \qquad{$L_{line}\leftarrow L_{line}+(1-EDC_{i,s}^{\tau})$} \EndFor \EndFor \EndFor \State \Return {$L_{line}/|S|$} \end{algorithmic} \end{algorithm} Intuitively, the equipotential line loss achieves the goal of contour learning by enforcing a strong geometric constraint on each category's edge area. $L_{line}$ punishes the predicted in-between values in $E(\hat{Y})$ and optimizes the predictions in semantic boundary areas from a distribution perspective. Compared with $L_{point}$, $L_{line}$ concentrates on optimizing the edge part determined by $W$, giving the segmentation network a better contour-representation ability.
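As a minimal sketch of Algorithm~\ref{alg:line_Loss} for a single category/direction pair (assuming `l_true` and `l_pred` are the equal-count 1-D arrays of point energies; the name and the per-pair scope are illustrative, and the full algorithm additionally loops over all categories and directions and divides by $|S|$):

```python
import numpy as np

def equipotential_line_loss(l_true, l_pred, w, mu=10):
    """Equipotential dice coefficient (EDC) loss for one category/direction.
    l_true / l_pred: equal-count 1-D arrays of ground-truth / predicted
    point energies; mu: even exponential-activation factor."""
    loss = 0.0
    for tau in range(1, w // 2 + 1):
        d = np.exp(-(l_true - tau) ** mu)       # ~1 exactly on the tau-line
        d_hat = np.exp(-(l_pred - tau) ** mu)
        inter = (d * d_hat).sum()               # intersection area
        c = d.sum() / (d * d).sum()             # normalization constant C
        edc = 2.0 * c * inter / (d.sum() + d_hat.sum())
        loss += 1.0 - edc                       # 0 when the lines match
    return loss
```

Because $\mu$ is even, the activation $e^{-(l-\tau)^{\mu}}$ peaks sharply at the $\tau$-line, so any in-between energy shrinks the intersection and raises the loss.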
Also, $L_{line}$ integrates the line-level misalignment information of $\langle L,\hat{L} \rangle$ in different directions and can therefore localize a mismatched boundary more accurately, even when it occurs in a small region. These two characteristics effectively help address the intra-class problem. Besides, our $L_{line}$ is more concise than other semantic boundary refinement studies \cite{xue,bound_loss} and introduces no additional parameters in the training or inference phase. \begin{figure*}[t] \begin{center} \includegraphics[width=17cm]{./Figures/FCN_PIPELINE.pdf} \end{center} \caption{Assembling an FCN model with EPL for $K$-class semantic segmentation. After the anisotropic convolution, along any direction in $S$, the point loss $L_{point}$ and the equipotential line loss $L_{line}$ respectively enable anisotropic field regression and category-level contour learning. } \label{fig:example_on_fcn} \end{figure*} \subsection{Applications of EPL in FCN} The full EPL module can be assembled into most FCN models (as illustrated in \refFig{fig:example_on_fcn}) to achieve better segmentation results. In the training phase, we follow the common practice of computing the cross-entropy loss (termed $L_{ce}$) in the probability domain. Next, we use AC to convert $Y$ and $\hat{Y}$ to the potential domain, where $L_{point}$ and $L_{line}$ are then employed for further optimization. Empirically, we use $\lambda_{1}$ and $\lambda_{2}$ as balance weights and express the final loss as: \begin{equation} Loss=L_{ce}+\lambda_{1}L_{point}+\lambda_{2}L_{line}. \label{eq:lossterms} \end{equation} In the inference stage, EPL is discarded and thus incurs no extra computational overhead. \section{Evaluation} \label{sec:evluation} In this section, we perform two sets of experiments. In the ablation study, we add the EPL module to PSPNet \cite{pspnet} to reveal the effectiveness of $L_{point}$ and $L_{line}$.
Besides, we discuss the impact of anisotropic convolution (AC) when adopting different splitters. Moreover, we compare the two loss functions with related studies to evaluate their effectiveness. Finally, we report the overall performance gains achieved with EPL on other baseline models. \subsection{Setup} \begin{itemize} \item \textbf{Datasets:} All experiments are conducted on two segmentation benchmarks: Pascal Voc 2012 and Cityscapes. The former includes 21 classes, with 10,582 and 1,449 images for training and validation, while the latter contains 19 categories, with 2,975 fine-annotated training images and 500 validation images. \item \textbf{Baseline Models:} We deploy the EPL module on five FCN models: PSPNet \cite{pspnet}, PSANet \cite{psanet}, DeepLab v3+ \cite{Deeplabv3plus}, CCNet \cite{ccnet} and GSCNN \cite{gscnn}. We adopt the reliable PyTorch implementations \footnote{https://github.com/hszhao/semseg}\footnote{https://github.com/NVIDIA/semantic-segmentation}\footnote{https://github.com/jfzhang95/pytorch-deeplab-xception} to reproduce the baselines and achieve strong performances. The ablation experiments are conducted on PSPNet with ResNet-50 \cite{ResNet}, while the other baselines' performances are reported in the main results. \item \textbf{Data augmentation and experimental details:} We strictly follow the experiment settings in the adopted implementations without changing any hyperparameters except the loss weights $\lambda_{1}, \lambda_{2}$, and $\mu$. The data augmentation operations include random scaling (in $[0.5, 2.0]$), random rotation (degrees within $[-10, 10]$), Gaussian blur, horizontal flipping, and random cropping. In the ablation part, we conduct all experiments on small-size images with crop sizes of 256$\times$256 and 256$\times$512 on Pascal Voc 2012 \cite{pascal} and Cityscapes \cite{cityscapes} (batch=$12$).
As for the practical experiments, we train all models with a batch size of 16 and the image sizes reported in Tables \refTab{tb:main_city} and \refTab{tb:main_pascal}. Note that all experiments are conducted on 4$\times$NVIDIA RTX Titan GPUs. \item \textbf{Evaluation protocol:} All segmentation performances are evaluated on the validation sets of the benchmarks. We present qualitative evaluations of each component of EPL and reveal its effect in multi-scale inference results, with scales in [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]. In addition to the mean Intersection-over-Union (mIoU), we employ the \textbf{F-Measure} \cite{efficient, F_measure} and \textbf{Trimap IoU} \cite{DeepLabv2,boundary_iou} to quantify the models' segmentation performance in semantic boundary areas. \end{itemize} \subsection{Ablation Study} \label{section:abl_exp} We use \textbf{Point} and \textbf{Line} to denote the point loss and the equipotential line loss, respectively. In this part, we mainly apply splitter $A$ (shown in \refFig{fig:example_of_ac}) and set it with different kernel sizes to evaluate the effectiveness of both loss functions. To make a fair comparison, we empirically set $\mu$ (Algorithm \ref{alg:line_Loss}), $\lambda_{1}$, and $\lambda_{2}$ (Eq. \ref{eq:lossterms}) to 10, 0.1, and 0.01, respectively. Discussions of the other splitters ($B$ and $C$) and choices of hyperparameters are reported in Appendix \ref{section: append_comparison}. \textbf{Ablation for the Point loss.} We add $L_{point}$ to the baseline network to regress the potential fields. To verify its effect, we set $L_{point}$ (Eq. \ref{eq:field_loss}) with the $L1$ and $L2$ norms (marked with superscripts) and report the results in Table \refTab{tab:abl_point}. One can see that we achieve considerable improvements on both datasets. We do not observe obvious performance differences between the two norms and therefore use the $L2$ norm for $L_{point}$ in later experiments.
In \refFig{fig:Abl_Point_pascal}, we see that the prediction of the dog is considerably refined by $L_{point}$. However, after visualizing one potential field (the second row), we still observe blurs between the paws, indicating in-between predictions. \begin{table}[t] \small \centering \begin{tabu}{c|c<{\centering}c<{\centering}c<{\centering}} \textbf{Dataset}&\textbf{Method}&\textbf{Kernel}&\textbf{mIoU(\%)}\\ \tabucline[0.8pt]{-} \multirow{7}{*}{Pascal Voc}&Baseline&-&73.52\\ ~&\multirow{3}{*}{+Point$^1$}&7&74.07\\ ~&~&9&73.85\\ ~&~&11&\textbf{74.08\tiny{(+0.56)}}\\ \cline{2-4} ~&\multirow{3}{*}{+Point$^2$}&7&74.09\\ ~&~&9&\textbf{74.42\tiny{(+0.90)}}\\ ~&~&11&74.27\\ \hline \multirow{7}{*}{Cityscapes}&Baseline&-&71.66\\ ~&\multirow{3}{*}{+Point$^1$}&9&70.44\\ ~&~&11&70.96\\ ~&~&13&\textbf{72.71\tiny{(+1.05)}}\\ \cline{2-4} ~&\multirow{3}{*}{+Point$^2$}&9&\textbf{72.39\tiny{(+0.73)}}\\ ~&~&11&70.93\\ ~&~&13&71.05\\ \hline \end{tabu} \caption{\textbf{Point loss} performance on Pascal Voc 2012 and Cityscapes validation sets. The superscript of ``+Point'' denotes the norm type.} \label{tab:abl_point} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Point_abl.pdf} \end{center} \caption{Example of the point loss on Pascal Voc 2012. In the second row, we visualize the potential field. } \label{fig:Abl_Point_pascal} \end{figure} \textbf{Ablation for the equipotential line loss.} Table \refTab{tab:abl_line} reports the performance of $L_{line}$ on both datasets. After learning the category-specific contours, we observe that PSPNet achieves up to 0.63\%/1.60\% mIoU improvements on Pascal Voc 2012/Cityscapes. Similarly, we visualize the contour learning effects in \refFig{fig:Abl_line_pascal} and observe that $L_{line}$ learns the subtle and complicated shape features of the bicycle category.
\textbf{Boundary segmentation evaluation:} To further verify the learning effect of both losses, we introduce two boundary-quality measures, F-measure \cite{F_measure} and Trimap \cite{efficient, DeepLabv2}, to evaluate segmentation results in the semantic boundary area. Both metrics measure the matching level between the prediction and the ground truth in a narrow band around the ground-truth semantic boundary, given a pixel width. We evaluate the segmentation results multiple times with different pixel widths and present the comparison with/without employing the proposed losses in \refFig{fig:boundary_evaluation}; one can see that both $L_{point}$ and $L_{line}$ improve the segmentation capability in the boundary areas. Besides, we see that $L_{line}$ achieves larger performance enhancements than $L_{point}$ in the areas near the edge (pixel width $<10$), indicating $L_{line}$'s effectiveness on contour learning. \begin{table}[t] \small \centering \begin{tabu}{c|c<{\centering}c<{\centering}c<{\centering}} \textbf{Dataset}&\textbf{Method}&\textbf{Kernel}&\textbf{mIoU(\%)}\\ \tabucline[0.8pt]{-} \multirow{4}{*}{Pascal Voc}&Baseline&-&73.52\\ ~&\multirow{3}{*}{+Line}&7&\textbf{74.15\tiny{(+0.63)}}\\ ~&~&9&73.47\\ ~&~&11&74.05\\ \hline \multirow{4}{*}{Cityscapes}&Baseline&-&71.66\\ ~&\multirow{3}{*}{+Line}&9&71.78\\ ~&~&11&\textbf{73.26\tiny{(+1.60)}}\\ ~&~&13&72.11\\ \hline \end{tabu} \caption{\textbf{The equipotential line loss} performance on Pascal Voc 2012 and Cityscapes validation sets.} \label{tab:abl_line} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/boundary_evaluation.pdf} \end{center} \caption{(a)(b) plot how F-measure and Trimap mIoU change with the pixel width of the ground-truth boundary region on Pascal Voc 2012 (\textbf{Pascal}) and Cityscapes (\textbf{City}) before and after employing $L_{line}$ (\textbf{w/L.}) and $L_{point}$ (\textbf{w/P.}).
} \label{fig:boundary_evaluation} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Line_abl.pdf} \end{center} \caption{ Example results of the equipotential line loss on Cityscapes. In the second row, we compare the visualized potential fields of the red cropped bicycle area. } \label{fig:Abl_line_pascal} \end{figure} \textbf{Effects of anisotropic convolution:} In this part, we replace the AC operator with standard convolution (\textbf{SC}) to verify AC's effectiveness. Empirically, SC destroys the ground-truth image content and distorts its probability distributions. In Table \refTab{tb:comparison_sc}, we replace AC with SC and optimize the potential fields with $L_{point}$ and $L_{line}$ as before. We observe that SC degrades the baseline performance on both datasets, confirming the advantages of AC and the feasibility of our ``anisotropic observation'' assumption. \textbf{Effects of directional splitters:} The choice of splitter has a crucial influence on EPL. We test the other two splitters, $B$ and $C$ (reported in Appendix, Tables \refTab{tab:appendix_point_abl} \& \refTab{tab:appendix_line_abl}), and observe that splitter $A$ performs the best among the three candidates and achieves consistent improvements on both datasets, indicating that the best semantic observation comes from integrating the up, down, right, and left directions. \textbf{Effects of kernel size:} In Tables \refTab{tab:abl_point} \& \refTab{tab:abl_line}, we see that the performance of both losses does not necessarily increase with AC's kernel size. We regard $W$'s size as the range of the semantic area that EPL aims to optimize. In the ablation experiments (Tables \refTab{tab:abl_point}, \refTab{tab:abl_line}, \refTab{tab:appendix_point_abl} and \refTab{tab:appendix_line_abl}), we test multiple kernels and find that large ones do not work well with small object instances because the decision boundary area becomes negligible in these cases.
Therefore, we use the optimal range of kernel sizes found in the ablation part, then scale AC in proportion to fit other network output resolutions in later experiments. \textbf{Effects of the exponential activation:} To implement $L_{line}$, we apply an exponential activation factor $\mu$ to measure the overlapped part between the ground-truth $L$ and the predicted $\hat{L}$ (see lines 6--7 in Algorithm \ref{alg:line_Loss}), where $\mu$ must be even. Here, we test the effect of $\mu$ with five even values $\{2,4,10,16,20\}$ and evaluate $L_{line}$ ($\lambda_{1}=0, \lambda_{2}=0.2$) with PSPNet on the two datasets. Table \refTab{tab:hyperparameter_activation} presents PSPNet's performance when setting $\mu$ to different values. We observe that the best segmentation performances on Pascal Voc 2012 and Cityscapes are achieved when $\mu=10$ and $\mu=2$, respectively. Therefore, we apply these values in later experiments. \begin{table} \centering \small \begin{tabu}[t]{m{3cm}|m{1.5cm}<\centering|m{1.5cm}<\centering} \textbf{Method} & \textbf{mIoU(\textbf{P.\%)}} & \textbf{mIoU(\textbf{C.\%)}} \\ \tabucline[0.8pt]{-} PSPNet(baseline) & 73.52&71.66 \\ PSPNet+SC+$L_{point}$& 71.08\tiny{(-2.44)}&68.37\tiny{(-3.29)}\\ PSPNet+SC+$L_{line}$& 72.17\tiny{(-1.35)}&70.93\tiny{(-0.73)}\\ PSPNet+AC+$L_{point}$& \textbf{74.84\tiny{(+1.32)}}&\textbf{72.39\tiny{(+0.73)}}\\ PSPNet+AC+$L_{line}$& \textbf{74.71\tiny{(+1.19)}}&\textbf{73.26\tiny{(+1.60)}}\\ \hline \end{tabu} \caption{Comparison of AC with standard convolution (\textbf{SC}) on Pascal Voc 2012 (\textbf{P.}) and Cityscapes (\textbf{C.}).} \label{tb:comparison_sc} \end{table} \begin{table}[t] \small \centering \begin{tabu}{ccc} $\mu$&\textbf{mIoU(P.\%)}&\textbf{mIoU(C.\%)} \\ \tabucline[0.8pt]{-} Baseline&73.52&71.66\\ 2&73.12&\textbf{72.37}\\ 4&73.73&71.04\\ 10&\textbf{74.71}&71.47\\ 16&73.85&65.51\\ 20&73.60&66.51\\ \hline \end{tabu} \caption{Effects of $\mu$ on segmentation results on Pascal Voc 2012 (\textbf{P.}) and Cityscapes (\textbf{C.}).}
\label{tab:hyperparameter_activation} \end{table} \begin{table*}[t] \begin{center} \scriptsize \begin{tabu}{m{0.8cm}<{\centering}|m{1.3cm}<{\centering}|m{0.5cm}<{\centering}|m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}} \textbf{Method} &\textbf{Backbone}& \textbf{mIoU}&\textbf{road}&\textbf{swalk}&\textbf{build}&\textbf{wall}&\textbf{fence}& \textbf{pole}&\textbf{tligh.}&\textbf{tsign}&\textbf{veg}&\textbf{terr.}&\textbf{sky}&\textbf{pers.}& \textbf{rider}&\textbf{car}&\textbf{truck}&\textbf{bus}&\textbf{train}&\textbf{mcyc}&\textbf{bcyc}\\ \tabucline[0.8pt]{-} PSANet&ResNet-101& 78.01&98.1&85.1&92.5&54.1&60.9& 64.7&70.8&78.6&92.6&65.5&94.6&82.9&63.2&95.0&74.9&88.0&77.6&65.1&78.0\\ +EPL&ResNet-101& \textbf{79.46}&96.8&83.7&92.9&\textbf{56.7}&\textbf{62.7}&\textbf{68.8}&\textbf{75.5}&\textbf{81.6}&93.3&66.0&94.7&\textbf{85.2}&\textbf{66.9}&95.6&70.3&\textbf{90.3}&76.7&\textbf{71.5}&\textbf{80.5}\\ \hline PSPNet&ResNet-101& 78.35&98.4&86.3&92.9&55.3&63.1& 65.9&73.1&79.8&93.0&66.0&94.9&83.8&64.8&95.1&74.0&85.8&75.8&71.1&79.3\\ +EPL&ResNet-101&\textbf{79.44}&96.7&83.5&\textbf{94.1}&\textbf{56.3}&62.4&\textbf{69.4}&73.7& \textbf{82.1}&92.7&\textbf{67.4}&94.2&\textbf{86.4}&\textbf{66.3}&95.7&72.3&\textbf{88.5}&\textbf{78.1}&\textbf{72.3}&80.2\\ \hline DeepLab&WResNet-38& 79.38&98.2&85.7&92.7&60.1&62.9& 66.6&70.4&79.2&92.7&66.6&94.6&82.7&64.2&95.2&80.5&89.6&80.9&67.1&78.3\\ +EPL&WResNet-38& \textbf{80.34}&97.5&84.3&92.6&\textbf{61.2}&\textbf{64.1}&66.5&70.7& 80.1&90.5&\textbf{68.1}&94.7&\textbf{84.5}&65.1&94.9&\textbf{81.6}&\textbf{90.7}&\textbf{82.1}&67.4&\textbf{79.5}\\ \hline \end{tabu} \end{center} 
\caption{Category-level mIoU Comparison on Cityscapes validation set. } \label{tb:perclassL_city_PSPNET} \begin{center} \scriptsize \begin{tabu}{m{0.8cm}<{\centering}|m{1.3cm}<{\centering}|m{0.5cm}<{\centering}|m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}m{0.3cm}<{\centering}} \textbf{Method} & \textbf{Backbone}&\textbf{mIoU}&\textbf{bgr}&\textbf{aero}&\textbf{bicy}&\textbf{bird}&\textbf{boat}& \textbf{bottle}&\textbf{bus}&\textbf{car}&\textbf{cat}&\textbf{chair}&\textbf{cow}&\textbf{dt.}&\textbf{dog}&\textbf{horse}&\textbf{motor}& \textbf{pers.}&\textbf{pott}&\textbf{sheep}&\textbf{sofa}&\textbf{train}&\textbf{tv.}\\ \tabucline[0.8pt]{-} PSANet&ResNet-101& 79.25&94.8&91.2&43.7&89.8&76.0& 80.6&94.4&88.4&93.3&44.5&88.6&56.0&89.4&87.5&85.8&88.2&65.0&92.2&50.7&86.7&77.4\\ +EPL&ResNet-101&\textbf{80.33}&95.1&92.0&44.4&90.5&\textbf{77.9}&\textbf{82.1}&94.5& \textbf{90.1}&\textbf{94.4}&43.8&89.0&\textbf{59.2}&90.1&88.1&\textbf{88.0}&88.9&65.0&91.9&\textbf{53.9}&\textbf{89.3}&\textbf{78.9}\\ \hline PSPNet&ResNet-101& 79.50&94.9&90.8&44.2&90.0&74.0& 81.0&95.3&90.2&94.2&42.8&87.9&57.0&89.5&87.2&89.8&88.3&65.7&91.3&47.3&88.7&79.3\\ +EPL&ResNet-101&\textbf{80.46}&\textbf{95.1}&92.4&44.7&88.9&\textbf{75.7}&81.7&95.8& \textbf{91.5}&94.6&43.5&\textbf{89.1}&\textbf{59.4}&\textbf{90.5}&\textbf{88.3}&90.4&88.4&64.7&\textbf{93.4}&\textbf{49.0}&\textbf{91.5}&76.9\\ \hline DeepLab&ResNet-101&79.15&93.9&89.7&42.3&90.4&69.0& 81.1&93.0&91.0&93.5&41.8&90.2&62.1&90.8&88.6&86.4&86.8&67.0&87.5&50.3&87.5&79.2\\ +EPL&ResNet-101&\textbf{80.77}&94.2&89.8&\textbf{43.5}&90.6&\textbf{72.9}&81.8&93.4& 
90.3&93.9&\textbf{47.2}&\textbf{92.3}&\textbf{64.7}&\textbf{91.8}&89.2&\textbf{88.7}&87.7&\textbf{70.4}&\textbf{89.9}&\textbf{59.8}&87.8&77.3\\ \hline \end{tabu} \end{center} \caption{Category-level mIoU comparison on Pascal Voc 2012 validation set.} \label{tb:perclass_pascal_PSPNET} \end{table*} \subsection{Comparison with related works} \label{section:c_w_r} This section compares $L_{point}$ and $L_{line}$ with related works. Note that none of the compared losses had been adapted to the multi-class segmentation problem in previous studies. \textbf{Point loss vs. boundary loss:} A principal property of the potential fields is that a point's potential energy increases with its spatial distance to the semantic boundary, conforming to the character of distance fields. We therefore consider the boundary loss \cite{bound_loss} related to our method and assemble it into PSPNet (see Appendix \ref{section: append_comparison}). To make a fair comparison, we test the boundary loss $L_{bd}$ and $L_{point}$ with five loss weights $\{0.05,0.10,0.20,0.25, 0.50\}$ and report both losses' best segmentation performances on the two datasets. In Table \refTab{tb:comparison_related}, we see that $L_{point}$ outperforms $L_{bd}$ because the boundary loss is not applicable to the multi-class task and is less informative than $L_{point}$, which conducts field regression in all directions (see Table \refTab{tab:comparison_bd} in the Appendix). \textbf{Equipotential line loss vs. dice loss:} This part compares $L_{line}$ with the dice loss ($L_{dice}$), which has a similar optimization principle. We use $L_{dice}$ as an auxiliary loss for image segmentation and adopt the same evaluation protocol as in the experiments with $L_{point}$ (see Appendix \ref{section: append_comparison} and Table \refTab{tab:comparison_dice} for implementation details and more comparisons).
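For reference, the standard soft dice loss \cite{diceloss} used as the auxiliary baseline in this comparison can be sketched as follows (binary single-class form; the name `dice_loss` is illustrative):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-8):
    """Standard soft dice loss on a binary mask / probability map:
    1 - 2|A∩B| / (|A| + |B|), with eps guarding an empty denominator."""
    inter = (y_true * y_pred).sum()
    return 1.0 - 2.0 * inter / (y_true.sum() + y_pred.sum() + eps)
```

Unlike $L_{line}$, this loss measures overlap isotropically in the probability domain and carries no directional or energy-level information.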
We report the comparison results in Table \refTab{tb:comparison_related} and observe that $L_{line}$ performs better than $L_{dice}$ \cite{diceloss}. This result demonstrates the effectiveness of anisotropic convolution and indicates that contour learning can benefit semantic segmentation. \begin{table} \centering \small \begin{tabu}[t]{m{1.5cm}<\centering|m{4cm}<\centering|m{1.2cm}<\centering} \textbf{Dataset} & \textbf{Method} & \textbf{mIoU(\%)} \\ \tabucline[0.8pt]{-} \multirow{5}{*}{Pascal Voc} & $+L_{ce}$ & 73.52 \\ ~&$+L_{ce}+L_{bd}$ & 74.55\\ ~&\textbf{$+L_{ce}+L_{point}$} &\textbf{74.84\tiny{(+1.32)}}\\ ~& $+L_{ce}+L_{dice}$ & 74.45\\ ~&\textbf{ $+L_{ce}+L_{line}$} & \textbf{74.71\tiny{(+1.19)}}\\ \hline \multirow{5}{*}{Cityscapes} & $+L_{ce}$& 71.66 \\ ~&$+L_{ce}+L_{bd}$& 71.81\\ ~&\textbf{ $+L_{ce}+L_{point}$}&\textbf{72.40\tiny{(+0.74)}}\\ ~&$+L_{ce}+L_{dice}$ & 70.30\\ ~&\textbf{$+L_{ce}+L_{line}$} &\textbf{73.26\tiny{(+1.60)}}\\ \hline \end{tabu} \caption{Comparisons with the boundary loss \cite{bound_loss} ($L_{bd}$) and the dice loss \cite{diceloss} ($L_{dice}$) on the two datasets' validation sets.} \label{tb:comparison_related} \end{table} \subsection{Main Results} \label{main_results} After loss balancing, we apply the full EPL to five FCN baselines: PSPNet\cite{pspnet}, PSANet\cite{psanet}, DeepLab V3+\cite{Deeplabv3plus}, CCNet\cite{ccnet} and GSCNN\cite{gscnn}. Besides, we conduct intensive experiments with various backbones: ResNet-50/101 \cite{ResNet}, MobileNet \cite{mobilenets}, DRN \cite{DRN} and WRNet38 \cite{WIDERESNET}. Note that all models are trained with a batch size of 16 and the image sizes reported in Tables \refTab{tb:main_pascal} \& \refTab{tb:main_city}. \textbf{Performance on Pascal Voc 2012:} In Table \refTab{tb:main_pascal}, we compare the baseline models' performances with and without EPL. Models with EPL consistently outperform their corresponding baselines with considerable mIoU improvements.
In the best case, we improve the mIoU of DeepLab v3+ (with ResNet-101) by up to 1.62\%. \refFig{fig:Result_comparison_voc} visualizes the segmentation results, and we see that EPL can greatly help existing FCN-based segmentation models address the challenges in inter-class (the first and second rows) and intra-class regions (the third row). \textbf{Performance on Cityscapes:} We report the results on Cityscapes in Table \refTab{tb:main_city}. Once again, we achieve substantial performance gains over all FCN baseline models when employing EPL, regardless of the backbone type. Similar visual comparisons are exhibited in \refFig{fig:Result_comparison_city}. \textbf{Category-level evaluation.} We show the category-level mIoU comparison on both datasets before and after adopting EPL in Tables \ref{tb:perclassL_city_PSPNET} \& \ref{tb:perclass_pascal_PSPNET}. On Cityscapes, we observe that EPL significantly improves the baselines' ability to distinguish category pairs that have similar semantics or strong semantic relationships, such as \{``person'', ``rider''\}, \{``rider'', ``bicycle''\}, and \{``traffic sign'', ``traffic light''\}. On Pascal Voc 2012, EPL enhances the baseline models' ability to segment categories with complicated shapes, such as the cow, dog, and dining table. \begin{table}[h!]
\small \begin{center} \begin{tabu}{m{1cm}<{\centering}|m{1.2cm}<{\centering}|m{1.5cm}<{\centering}|m{1.2cm}<{\centering}|m{1.2cm}<{\centering}} \textbf{Model}&\textbf{Size}&\textbf{Backbone}&\textbf{Method}&\textbf{mIoU(\%)}\\ \tabucline[0.8pt]{-} \multirow{4}{*}{PSANet}&\multirow{4}{*}{465$\times$465}&\multirow{2}{*}{ResNet-50}&Baseline&77.85\\ ~&~&~&+EPL&\textbf{79.08\tiny{(+1.23)}}\\ ~&~&\multirow{2}{*}{ResNet-101}&Baseline&79.25\\ ~&~&~&+EPL&\textbf{80.33\tiny{(+1.08)}}\\ \hline \multirow{4}{*}{PSPNet}&\multirow{4}{*}{473$\times$473}&\multirow{2}{*}{ResNet-50}&Baseline&78.02\\ ~&~&~&+EPL&\textbf{78.84\tiny{(+0.82)}}\\ ~&~&\multirow{2}{*}{ResNet-101}&Baseline&79.50\\ ~&~&~&+EPL&\textbf{80.46\tiny{(+0.96)}}\\ \hline \multirow{6}{*}{DeepLab}&\multirow{6}{*}{513$\times$513}&\multirow{2}{*}{MobileNet}&Baseline&71.49\\ ~&~&~&+EPL&\textbf{72.65\tiny{(+1.16)}}\\ ~&~&\multirow{2}{*}{DRN-54}&Baseline&79.58\\ ~&~&~&+EPL&\textbf{80.63\tiny{(+1.05)}}\\ ~&~&\multirow{2}{*}{ResNet-101}&Baseline&79.15\\ ~&~&~&+EPL&\textbf{80.77\tiny{(+1.62)}}\\ \hline \end{tabu} \end{center} \caption{Overall results on Pascal Voc 2012 validation set.} \label{tb:main_pascal} \end{table} \begin{table}[h!]
\small \begin{center} \begin{tabu}{m{1cm}<{\centering}|m{1.2cm}<{\centering}|m{1.5cm}<{\centering}|m{1.2cm}<{\centering}|m{1.2cm}<{\centering}} \textbf{Model}&\textbf{Size}&\textbf{Backbone}&\textbf{Method}&\textbf{mIoU(\%)}\\ \tabucline[0.8pt]{-} \multirow{4}{*}{PSANet}&\multirow{4}{*}{560$\times$560}&\multirow{2}{*}{ResNet-50}&Baseline&76.69\\ ~&~&~&+EPL&\textbf{77.61\tiny{(+0.92)}}\\ ~&~&\multirow{2}{*}{ResNet-101}&Baseline&78.01\\ ~&~&~&+EPL&\textbf{79.46\tiny{(+1.45)}}\\ \hline \multirow{4}{*}{PSPNet}&\multirow{4}{*}{560$\times$560}&\multirow{2}{*}{ResNet-50}&Baseline&77.34\\ ~&~&~&+EPL&\textbf{78.49\tiny{(+1.15)}}\\ ~&~&\multirow{2}{*}{ResNet-101}&Baseline&78.35\\ ~&~&~&+EPL&\textbf{79.44\tiny{(+1.09)}}\\ \hline \multirow{2}{*}{DeepLab}&\multirow{2}{*}{560$\times$560}&\multirow{2}{*}{WRNet-38}&Baseline&79.38\\ ~&~&~&+EPL&\textbf{80.34\tiny{(+0.96)}}\\ \hline \multirow{2}{*}{CCNet}&\multirow{2}{*}{560$\times$560}&\multirow{2}{*}{WRNet-38}&Baseline&77.73\\ ~&~&~&+EPL&\textbf{78.81\tiny{(+1.08)}}\\ \hline \multirow{2}{*}{GSCNN}&\multirow{2}{*}{560$\times$560}&\multirow{2}{*}{WRNet-38}&Baseline&80.67\\ ~&~&~&+EPL&\textbf{81.78\tiny{(+1.11)}}\\ \hline \end{tabu} \end{center} \caption{Overall results on Cityscapes validation set.} \label{tb:main_city} \end{table} \begin{figure}[h!] \begin{center} \includegraphics[width=8.5cm]{./Figures/Main_pascal.pdf} \end{center} \caption{Qualitative comparison for segmentation results on Pascal Voc 2012 validation set. } \label{fig:Result_comparison_voc} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm]{./Figures/Main_city.pdf} \end{center} \caption{ Qualitative comparison for segmentation results on Cityscapes validation set. } \label{fig:Result_comparison_city} \end{figure} \subsection{Comparison with state-of-the-art methods} In this section, we compare the proposed EPL module with the existing boundary segmentation approaches \cite{pointrend,segfix,DSN} in Cityscapes. 
In Tables \refTab{tb:main_pascal} and \refTab{tb:main_city}, we observe that EPL enhances all segmentation baselines by roughly 1\% or more, regardless of the backbone model. Compared (in Table \refTab{tb:comparison_sota}) with other boundary segmentation studies \cite{pointrend,DSN,segfix,inf}, EPL is slightly worse than InverseForm (\textbf{InF}) yet remains competitive with the other studies. In addition, we compare the properties of each method in Table \refTab{tb:property_comparison} and observe that EPL is the only approach that does not utilize edge maps for network training and adds no parameters or inference cost. \begin{table}[h!] \small \begin{center} \begin{tabu}{m{3.5cm}|m{1.6cm}<{\centering}|m{1.3cm}<{\centering}} \textbf{Method}&\textbf{Backbone}&\textbf{mIoU(\%)}\\ \tabucline[0.8pt]{-} PSPNet&ResNet-50&77.34\\ PSPNet w/ EPL&ResNet-50&\textbf{78.49\tiny{(+1.15)}}\\ \hline PSANet&ResNet-101&78.01\\ PSANet w/ EPL&ResNet-101&\textbf{79.46\tiny{(+1.45)}}\\ \hline DeepLabv3 &ResNet-101&77.80\\ DeepLabv3 w/ PR\cite{pointrend} & ResNet-101&78.40\tiny{(+1.20)}\\ \hline DeepLabv3 &ResNet-50&79.18\\ DeepLabv3 w/ IABL\cite{IABL}&ResNet-50&79.94\tiny{(+0.76)}\\ \hline DeepLabv3+&WRNet-38&79.50\\ DeepLabv3+ w/ SFix\cite{segfix}&WRNet-38&80.30\tiny{(+0.80)}\\ \hline DeepLabv3+&ASPP\cite{Deeplabv3plus}&81.30\\ DeepLabv3+ w/ DSN\cite{DSN}&ASPP\cite{Deeplabv3plus}&82.40\tiny{(+1.10)}\\ \hline DeepLabv3+ (our Impl.)&WRNet-38&79.38\\ DeepLabv3+ w/ EPL&WRNet-38&\textbf{80.34\tiny{(+0.96)}}\\ \hline GSCNN &WRNet-38&81.0\\GSCNN w/ InF \cite{inf} & WRNet-38&82.60\tiny{(+1.50)}\\ \hline GSCNN (our Impl.)&WRNet-38&80.67\\ GSCNN w/ EPL&WRNet-38&\textbf{81.78\tiny{(+1.11)}}\\ \hline \end{tabu} \end{center} \caption{Comparisons with SOTA studies on Cityscapes. Note that our results (\textbf{our Impl.}) use $560\times 560$ images, which accounts for the small mIoU gap. 
} \label{tb:comparison_sota} \end{table} \begin{table} \centering \small \begin{tabu}[t]{m{2.0cm}|m{1.7cm}<\centering|m{1.7cm}<\centering|m{1.7cm}<\centering} \textbf{Method} & \textbf{No param. overhead} & \textbf{No edge super.}& \textbf{No infer. overhead}\\ \tabucline[0.8pt]{-} SFix \cite{segfix} &{-}&{-}&{-}\\ DSN \cite{DSN}&{-}&{-}&{-}\\ PR\cite{pointrend} &{-}&{\checkmark}&{-}\\ InF\cite{inf}&{-} &{-}&{\checkmark}\\ IABL\cite{IABL}&{-}&{\checkmark}&{\checkmark}\\ \hline EPL&{\checkmark}&{\checkmark}&{\checkmark}\\ \hline \end{tabu} \caption{Comparisons of the SOTA boundary segmentation methods. We evaluate these studies from three perspectives: whether they use edge maps for boundary supervision (\textbf{super.}), and whether they increase parameter (\textbf{param.}) or inference (\textbf{infer.}) overhead.} \label{tb:property_comparison} \end{table} \section{Conclusion} \label{sec:conclusion} This paper addresses the semantic boundary segmentation problem with anisotropic field regression and category-level contour learning. With the proposed EPL (equipotential learning) module, we transfer the original probability estimation problem to the self-defined potential domain with the anisotropic convolution. In addition, we introduce the point loss, which fits the image content along different directions at variable distances, and the equipotential line loss, which enforces category-level contour learning. Experiments on Pascal Voc 2012 and Cityscapes show that the designed EPL, serving as an add-on method, can significantly enhance existing FCN models' performance on the semantic boundary regions. Compared with other studies \cite{pointrend,efficient,segfix,DSN,inf}, our approach does not introduce additional supervision information and adds no parameters or inference cost. We believe that EPL can be generalized to benefit other segmentation tasks, like point cloud \cite{pcsg1,pcsg2}, instance, and panoptic segmentation \cite{ps1}. {
\section{Surface states} \label{app:SS} \end{document}
\section{Introduction} The early inflationary period of our universe is a phase of very rapid, nearly exponential accelerated expansion. This phase was proposed in order to resolve three puzzles of the standard Big Bang cosmology -- the horizon problem, the flatness problem and the problem of the hitherto unobserved magnetic monopoles~\cite{Linde, Weinberg:2008zzc} (also references therein). The inflationary paradigm also satisfactorily explains how the primordial perturbations, which were initially quantum field theoretic in nature, grew, became large and classical, and eventually developed into the large scale cosmic structures we observe today. After the end of inflation, our universe entered successively the phases of radiation and matter domination. These phases were also expanding, but not with acceleration. Interestingly, the astrophysical data from high redshift supernovae, galaxy clusters and the cosmic microwave background suggest that our current universe is also undergoing a phase of accelerated expansion (see \cite{Riess:2006fw} and references therein). In order to drive such accelerated expansion, the universe should be endowed with some exotic matter with negative isotropic pressure, called dark energy. The simplest and phenomenologically one of the most successful models of dark energy is simply a positive cosmological constant, and the corresponding solution of the Einstein equation is known as the de Sitter spacetime, having an exponential scale factor and hence a constant Hubble rate. Quantum field theory in the de Sitter background might thus make interesting physical predictions whose imprint can be found, for example, in the cosmic microwave background. A very important topic in this area is the non-perturbative infrared or secular effect at late times, which is likely connected to the cosmological constant and the cosmic coincidence problem, e.g.~\cite{Woodard:2014jba} and references therein. 
In this paper, we shall however be interested in field theoretic effects in the de Sitter universe endowed with topological defects. Such defects might have been created via some symmetry breaking phase transitions in the early universe after the Big Bang. They may take the form of cosmic strings, global monopoles, domain walls and textures~\cite{Vile82, Vilenkin:2000jqa, Kibble:1976sj, Kibble}. Our interest in this paper is the global monopole, first proposed in~\cite{Starobinsky} (see also~\cite{Barriola:1989hx, Ola}). It is a spherically symmetric, topologically stable gravitational defect with a deficit in the solid angle $4\pi$. It may result from the breaking of a global symmetry ($O(3) \to U(1)$) of a self-interacting scalar triplet. Such defects might have a singular structure at the centre (like the pointlike global monopole). However, this singularity can be relaxed by taking a finite core and allowing the core to inflate~\cite{Cho}. The topological defect that has perhaps received more attention than the others is the cosmic string~\cite{Vilenkin:2000jqa}, and references therein. It is a cylindrically symmetric spacetime with a $\delta$-function like line singularity along the symmetry axis. Note that although such defects introduce inhomogeneity, their role in the primordial structure formation is highly suppressed~\cite{Perivolaropoulos:2005wa}. Nevertheless, such defects may also create other physical effects like the emission of gravitational waves and lensing~\cite{Damour:2000wa, Sazhin:2006kf}. In particular, for a realistic situation like a network of cosmic strings, characteristic caustics and cusps in the lensing phenomena are largely expected. Besides the structure formation, the topological defects may be relevant to the baryon asymmetry of the universe~\cite{Davis:1996zg}. 
The superconducting cosmic strings \cite{Witten:1984eb}, or the stable loops of current-carrying string called vortons~\cite{Carter:1999an}, may be responsible for the production of high energy cosmic rays \cite{Bonazzola:1997tk} or even gamma ray bursts \cite{Berezinsky:2001cp}. Vacuum polarisation effects in the presence of a cosmic string are a much studied topic, both in flat and (anti-)de Sitter backgrounds. The vacuum expectation values of the field squared, $\langle{0}|\phi^2(x)|{0}\rangle$, of a scalar field and of the fermionic condensate, $\langle{0}|\overline{\psi}(x){\psi}(x)|{0}\rangle$, the vacuum expectation values of their energy-momentum tensors, as well as of the conserved currents and various anomalies, have been investigated extensively in~\cite{Bellucci:2020qsr}-\cite{Spinelly:2003zd}. Apart from the usual Minkowski and de Sitter/anti-de Sitter backgrounds, these studies also include spacetimes with compactified dimensions, and even the presence of a magnetic flux. Several such studies have been made in higher dimensions as well. In this paper we wish to compute the vacuum polarisation effects of the Dirac fermions induced by a pointlike global monopole located in the cosmological de Sitter spacetime. For example, the renormalised Euclidean Green function and the vacuum expectation value of the energy-momentum tensor for a fermionic field in a global monopole background in flat spacetime were computed in~\cite{BezerradeMello:2007dxm} in the context of the braneworld scenario. The vacuum expectation values of the energy-momentum tensor for massless as well as massive fermions were computed in~\cite{BezerradeMello:1999ge, Saharian:2005br}. Similar investigations were carried out by considering a nontrivial core structure instead of a pointlike global monopole in~\cite{BezerradeMello:2007vz}. 
The ground state energy of a massive scalar field in this background using the $\zeta$-function regularisation can be seen in~\cite{BezerradeMello:1999mk}. Finite temperature effects on the vacuum expectation values of the energy-momentum tensor of the massless spin-1/2 fermions can be seen in~\cite{Carvalho:2001gi, BezerradeMello:2002rj}. We further refer our reader to various quantum field theoretic computations in such backgrounds, including the higher dimensions and the Kaluza-Klein theory, in~\cite{BezerradeMello:2001pg, Saharian:2003sp, CavalcantideOliveira:2003sb, BezerradeMello:2002rp, Spinelly:2002mt, BezerradeMello:2012nq, Barbosa:2008fk, BezerradeMello:2006kka, BezerradeMello:2006tjm, Spinelly:2005pd, CavalcantideOliveira:2003ie}. While all the above computations are done in flat backgrounds, the computation of the renormalised vacuum expectation value of the two point function for a scalar field, as well as of the energy-momentum tensor, in a de Sitter global monopole background can be seen in~\cite{BezerradeMello:2009ju}. The rest of the paper is organised as follows. In the next Section and \ref{A}, we present the derivation of the Dirac modes in the de Sitter global monopole background in a closed form. Using these mode functions, we compute the fermionic vacuum condensate, $\langle 0| \overline{\Psi}\Psi | 0\rangle$, and the vacuum expectation value of the energy-momentum tensor for the fermionic field respectively in \ref{s3}, \ref{s4} and \ref{s5}. \ref{B} and \ref{C} contain some details for these Sections. Finally we conclude in \ref{s6}. We shall work in $3+1$ dimensions with mostly negative signature of the metric, and will set $c=1=\hbar$ throughout. We shall chiefly follow the method involving the Abel-Plana summation formula, as described in e.g.~\cite{Braganca:2019ord, BezerradeMello:2010ci} in the context of the de Sitter cosmic string background. 
\section{The Dirac modes}\label{s2} We shall present below the derivation of an orthonormal, complete set of Dirac modes in the de Sitter background with a pointlike global monopole defect. The metric in the cosmological time reads, \begin{eqnarray} \label{metric1} ds^2=dt^2-e^{2t/\alpha}\left[dr^2+\beta^2 r^2 d\Omega^2\right], \end{eqnarray} where $\alpha^{-2}=\Lambda/3$, $\Lambda$ being the cosmological constant. The parameter $\beta$ is conventionally expressed as $\beta^2=(1-8\pi G\xi^2)$~\cite{Barriola:1989hx}, where $\xi$ stands for the symmetry breaking scale, which lies well below the Planck energy, $G^{-1/2}$. Thus $0< \beta^2\leq 1$, and we have a deficit in the solid angle, $4\pi(1-\beta^2)$ in \ref{metric1}. In terms of the conformal time, $\eta=-\alpha e^{-t/\alpha}$ ($-\infty <\eta < 0^-$), the above metric reads, % \begin{eqnarray} \label{metric2} ds^2=\frac{\alpha^{2}}{\eta^2} \left(d\eta^2-dr^2-\beta^2 r^2d\Omega^2\right) \end{eqnarray} The pointlike global monopole breaks the maximal symmetry of the de Sitter spacetime and introduces a curvature singularity at $r=0$~\cite{BezerradeMello:2009ju}, $$R=\frac{12}{\alpha^2}+\frac{2\left(1-\beta^2\right)\eta^2}{\alpha^2\beta^2 r^2}$$ % Setting $\eta^2/\alpha^2 \to 1$ in \ref{metric2} recovers the metric of the flat spacetime with a global monopole, which may interestingly also describe an effective metric produced in the superfluid $^3{\rm He}$-A, but with a negative deficit in the solid angle~\cite{BezerradeMello:2007vz}. 
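As a cross-check of the curvature scalar quoted above (this sketch is not part of the original derivation), $R$ for the metric \ref{metric2} can be computed symbolically with sympy. Since the overall sign of $R$ depends on the curvature convention, the comparison below is made up to sign:

```python
import sympy as sp

# Coordinates (eta, r, theta, phi) and parameters of the metric (2)
eta, r, th, alpha, beta = sp.symbols('eta r theta alpha beta', positive=True)
coords = [eta, r, th, sp.Symbol('phi')]
# ds^2 = (alpha/eta)^2 (d eta^2 - dr^2 - beta^2 r^2 dOmega^2), signature (+,-,-,-)
g = sp.diag(alpha**2/eta**2,
            -alpha**2/eta**2,
            -alpha**2*beta**2*r**2/eta**2,
            -alpha**2*beta**2*r**2*sp.sin(th)**2/eta**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                     + sp.diff(g[d, c], coords[b])
                                     - sp.diff(g[b, c], coords[d]))/2
                         for d in range(n)))
         for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(s, v):
    # R_{sv} = d_m Gam^m_{vs} - d_v Gam^m_{ms}
    #        + Gam^m_{ml} Gam^l_{vs} - Gam^m_{vl} Gam^l_{ms}
    e = sp.Integer(0)
    for m in range(n):
        e += sp.diff(Gam[m][v][s], coords[m]) - sp.diff(Gam[m][m][s], coords[v])
        for l in range(n):
            e += Gam[m][m][l]*Gam[l][v][s] - Gam[m][v][l]*Gam[l][m][s]
    return e

R = sp.simplify(sum(ginv[s, v]*ricci(s, v) for s in range(n) for v in range(n)))
R_text = 12/alpha**2 + 2*(1 - beta**2)*eta**2/(alpha**2*beta**2*r**2)
# compare up to the convention-dependent overall sign
ok = any(sp.simplify(R - s*R_text) == 0 or (R - s*R_text).equals(0)
         for s in (1, -1))
assert ok
```

Away from the singular point $r=0$ (where an additional distributional contribution resides), the symbolic result reproduces the quoted expression.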
\\ The four orthonormal Dirac modes in the above background are given by (see \ref{A} for detail), \begin{eqnarray} \label{psi7p} \Psi^{(+)}_{\sigma j l m}(\eta,r,\theta,\phi)&=&\frac{\lambda {\sqrt \pi} e^{\pi m\alpha/2}}{2\beta \alpha^{3/2} \sqrt{r}}\left( {\begin{array}{cc} \eta^2 H^{(1)}_{1/2-im\alpha}(\lambda|\eta|){J_{\nu_\sigma}}(\lambda r)\Omega_{jl_\sigma m}\\ -i(\hat{r}\cdot{\vec{\sigma}}) \eta^2 H^{(1)}_{-1/2-im\alpha}(\lambda|\eta|){J_{\nu_\sigma+(-1)^\sigma}}(\lambda r)\Omega_{jl_\sigma m} \\ \end{array} } \right) \nonumber\\ \Psi^{(-)}_{\sigma j l m}(\eta,r,\theta,\phi)&=&\frac{\lambda {\sqrt \pi} e^{-\pi m\alpha/2}}{2\beta \alpha^{3/2}\sqrt{r}}\left( {\begin{array}{cc} i(-1)^\sigma(\hat{r}\cdot{\vec{\sigma}})\eta^2 H^{(2)}_{-1/2+im\alpha}(\lambda|\eta|){J_{\nu_\sigma+(-1)^\sigma}}(\lambda r)\Omega_{jl_\sigma m}\\ \eta^2H^{(2)}_{1/2+im\alpha}(\lambda|\eta|){J_{\nu_\sigma}}(\lambda r)\Omega_{jl_\sigma m} \\ \end{array} } \right) \end{eqnarray} where $\sigma =0,1$ correspond to two different solutions and the $(\pm)$-sign in the superscripts represent respectively positive and negative frequency solutions. The ${\vec \sigma} $'s are the Pauli matrices. The $\Omega$'s are the spin-1/2 spherical harmonics and $l_\sigma$, $\nu_\sigma$ are respectively given in \ref{sigma7} and below \ref{rad_sol1}. $\lambda$ is a real, positive parameter. It is easy to check that the above modes satisfy \begin{eqnarray} \left(\Psi^{(+)}_{\sigma j l m},\Psi^{(+)}_{\sigma' j' l' m'}\right)=\left(\Psi^{(-)}_{\sigma j l m},\Psi^{(-)}_{\sigma' j' l' m'}\right)=\delta_{jj'}\delta_{ll'}\delta_{mm'}{\delta_{\sigma\sigma'}} \end{eqnarray} with rest of the inner products vanishing. Thus they form a complete and orthonormal set. \section{The fermionic condensate, $\langle 0| \overline{\Psi} \Psi | 0\rangle$}\label{s3} Using the complete, orthonormal mode functions of \ref{psi7p}, we can quantise the fermionic field in the de Sitter-global monopole background. 
Using this field quantisation, we wish to compute below the fermionic condensate, i.e. the vacuum expectation value of the operator $ \overline{\Psi}\Psi$. By the mode-sum formula (e.g.~\cite{BezerradeMello:2009ng} and references therein), we have \begin{eqnarray} \label{conds} \langle{0}|\overline{\Psi}\Psi |0 \rangle=\sum_{\sigma=0,1}\sum_{l,j,m}\int_{0}^{\infty} d\lambda\, {\overline{\Psi}^{(-)}_{\sigma jlm}\left(x\right)\Psi^{(-)}_{\sigma jlm}(x)} \end{eqnarray} where $\overline{\Psi}^{(\pm)}$=$\Psi^{\dagger(\pm)}\gamma^{(0)}$ is the adjoint spinor. Using the second of \ref{psi7p} and the completeness relation for the spherical harmonics, \ref{conds} can be expanded as \begin{eqnarray} \label{conds1} \langle{0}|\overline{\Psi}\Psi |0 \rangle&=&\frac{\eta^4 e^{-m\pi\alpha}}{16\beta^2 \alpha^{3}r}\sum_{j=1/2}^{\infty}(2j+1)\int_{0}^{\infty} d\lambda \lambda^2\left(J_{\nu_1}^2(\lambda r)+J_{\nu_0}^2(\lambda r)\right)\left(|H^{(2)}_{1/2-im\alpha}\left(\lambda|\eta|\right)|^2-|H^{(2)}_{-1/2-im\alpha}\left(\lambda|\eta|\right)|^2\right)\nonumber\\ \end{eqnarray} where $\nu_{\sigma}\,\,(\sigma =0,1)$ is given below \ref{rad_sol1}. In order to evaluate the above integral, we use the following identities e.g.~\cite{BezerradeMello:2010ci}, \begin{eqnarray} \label{relations} |H^{(2)}_{\pm1/2-im\alpha}(\lambda|\eta|)|^2&=&\frac{4}{\pi^2}e^{m\pi\alpha}|K_{1/2\mp im\alpha}(i\lambda|\eta|)|^2,\;\;K_{1/2-im\alpha}(x)=K_{-1/2+im\alpha}(x)\nonumber\\ |K_{1/2-im\alpha}(i\lambda|\eta|)|^2-|K_{1/2+im\alpha}(i\lambda|\eta|)|^2&=&-\frac{i}{\lambda}\left(\partial_{|\eta|}+\frac{1-2im\alpha}{|\eta|}\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|) \qquad \end{eqnarray} where $K$ is the modified Bessel function of the second kind. 
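The first of the identities \ref{relations} can be checked numerically; the sketch below (with arbitrarily chosen test values for $m\alpha$ and $\lambda|\eta|$, both our own choices) uses mpmath, which supports Bessel and Hankel functions of complex order:

```python
from mpmath import mp, mpf, mpc, pi, exp, hankel2, besselk

mp.dps = 30
I = mpc(0, 1)
malpha = mpf('0.7')   # test value for the dimensionless mass m*alpha
x = mpf('1.3')        # test value for lambda*|eta|

for sgn in (1, -1):
    nu_H = sgn*mpf('0.5') - I*malpha   # order +-1/2 - i m alpha of H^(2)
    nu_K = mpf('0.5') - sgn*I*malpha   # order 1/2 -+ i m alpha of K
    lhs = abs(hankel2(nu_H, x))**2
    rhs = 4/pi**2 * exp(malpha*pi) * abs(besselk(nu_K, I*x))**2
    assert abs(lhs - rhs) < mpf('1e-18')
ok = True
```

The identity itself follows from the standard connection formulas between $K_\nu$ and the Hankel functions together with $H^{(2)}_{-\nu}=e^{-i\pi\nu}H^{(2)}_{\nu}$.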
Then \ref{conds1} takes the form \begin{eqnarray} \label{conds2} \langle{0}|\overline{\Psi}\Psi |0 \rangle =\frac{-i\eta^3}{4\pi^2\beta^2\alpha^{3} r}\sum_{j=1/2}^{\infty}(2j+1)\int d\lambda \lambda^2\left(J_{\nu_1}^2(\lambda r)+J_{\nu_0}^2(\lambda r)\right) \left(\eta\partial_{\eta}+{1-2im\alpha}\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|) \end{eqnarray} In order to evaluate the above integral we need to simplify the product $K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)$. We write it as an integral representation \cite{Prud, Grad, Wats44, Abra64} \begin{eqnarray} \label{integral_1} K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|) &=& \int_0^\infty \frac{du}{u}\int_{0}^\infty dy'\;\cosh 2\mu y' \,e^{-2u \lambda^2\eta^2\sinh^2y'-1/(2u)},\;\mu=1/2-im\alpha \qquad \end{eqnarray} Substituting \ref{integral_1} into \ref{conds2}, we have \begin{eqnarray} \label{conds31} \langle{0}|\overline{\Psi}\Psi |0 \rangle&=&\frac{-i\eta^3}{4\pi^2\beta^2\alpha^{3} r}\left(\eta\partial_{\eta}+{1-2im\alpha}\right)\sum_{j=1/2}^{\infty}(2j+1)\int_0^\infty \frac{du}{u}e^{-1/(2u)}\int_{0}^\infty dy'\;\cosh2\mu y'\nonumber\\ &&\times \int d\lambda \lambda\left(J_{\nu_1}^2(\lambda r)+J_{\nu_0}^2(\lambda r)\right)e^{-2\lambda^2u\eta^2\sinh^2y'}\nonumber\\ &=&\frac{-i\eta^3}{4\pi^2\beta^2\alpha^{3} r}\left(\eta\partial_{\eta}+{1-2im\alpha}\right)\sum_{j=1/2}^{\infty}(2j+1)\int_{0}^\infty dy'\,\cosh2\mu y'\;\int_0^\infty \frac{du}{u}\frac{e^{-1/(2u)}}{4u\eta^2\sinh^2{y'}}e^{-r^2/(4u\eta^2\sinh^2{y'})}\nonumber\\ &&\times \left[I_{\nu_1}(r^2/(4u\eta^2\sinh^2{y'}))+I_{\nu_0}(r^2/(4u\eta^2\sinh^2{y'}))\right], \end{eqnarray} where $I_{\nu}$ is the modified Bessel function of the first kind. 
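The $\lambda$-integral performed in the last step of \ref{conds31} is the standard result $\int_0^\infty \lambda\, e^{-p\lambda^2} J_\nu^2(\lambda r)\, d\lambda = (1/2p)\, e^{-r^2/2p}\, I_\nu(r^2/2p)$ with $p=2u\eta^2\sinh^2 y'$. A numerical sanity check with arbitrary test values (our own choices) may be sketched as:

```python
from mpmath import mp, mpf, quad, besselj, besseli, exp, inf

mp.dps = 25
nu = mpf('1.7')   # test order
r  = mpf('1.0')   # radial coordinate (test value)
p  = mpf('0.5')   # stands for 2*u*eta^2*sinh(y')^2 (test value)

# left side: the Gaussian-damped squared Bessel integral
lhs = quad(lambda t: t*exp(-p*t**2)*besselj(nu, r*t)**2, [0, inf])
# right side: closed form in terms of the modified Bessel function I_nu
rhs = exp(-r**2/(2*p))*besseli(nu, r**2/(2*p))/(2*p)
assert abs(lhs - rhs) < mpf('1e-12')
ok = True
```

The prefactor $1/(4u\eta^2\sinh^2 y')$ and the exponential in \ref{conds31} are exactly $1/2p$ and $e^{-r^2/2p}$ for this choice of $p$.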
We introduce a new variable $x=r^2/(4u\eta^2\sinh^2{y'})$ and perform the $y'$-integral in \ref{conds31} to have \begin{eqnarray} \label{conds3} \langle{0}|\overline{\Psi}\Psi |0 \rangle&=&\frac{-i\eta^3}{8\pi^2\beta^2\alpha^{3} r^3}\left(\eta\partial_\eta+{1-2im\alpha}\right)\sum_{j=1/2}^{\infty}(2j+1)\int_0^\infty {dx}\;e^{-x(1-\eta^2/r^2)}{\left(I_{\nu_1}(x)+I_{\nu_0}(x)\right)}K_{1/2-im\alpha}(x\eta^2/r^2) \nonumber\\ \end{eqnarray} We next use~\cite{BezerradeMello:2010ci} $$\left(\eta\partial_{\eta}+{1-2im\alpha}\right)e^{x\eta^2/r^2}K_{1/2-im\alpha}(x\eta^2/r^2)=\frac{2 x\eta^2}{r^2}e^{x\eta^2/r^2}\left(K_{1/2-im\alpha}(x\eta^2/r^2)-K_{-1/2-im\alpha}(x\eta^2/r^2)\right),$$ in order to further simplify \ref{conds3} as \begin{eqnarray} \label{conds4'} \langle{0}|\overline{\Psi} \Psi |0 \rangle&=&\frac{\eta }{2\pi^2\beta^2 \alpha^{3}r}\sum_{j=1/2}^{\infty}(2j+1)\int_0^\infty {dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Im}[K_{1/2-im\alpha}(y)]\left(I_{\nu_1}(yr^2/\eta^2)+I_{\nu_0}(yr^2/\eta^2)\right) \qquad \end{eqnarray} where, following~\cite{BezerradeMello:2010ci}, we have also defined a new variable, $y= x \eta^2/r^2$. We now wish to split $\langle{0}|\overline{\Psi}\Psi|{0}\rangle$ into two parts -- one corresponding to the pure de Sitter spacetime without any monopole ($\beta=1$) and the other corresponding to the global monopole defect, so that we {\it define} \begin{eqnarray} \label{conds4} \langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}:= \langle{0}|\overline{\Psi} \Psi|{0}\rangle-\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{dS}} \end{eqnarray} A similar decomposition can be seen in e.g.~\cite{BezerradeMello:2010ci} and references therein, in the context of cosmic strings. 
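The operator identity quoted above can be verified symbolically: writing $z=x\eta^2/r^2$ and $\mu=1/2-im\alpha$, so that $\eta\partial_\eta=2z\partial_z$ and $1-2im\alpha=2\mu$, it reduces to a Bessel recurrence. A sympy sketch (in the abstract variables $z$, $\mu$):

```python
import sympy as sp

z, mu = sp.symbols('z mu')
K = sp.besselk

# LHS: (2 z d/dz + 2 mu) acting on e^z K_mu(z)
lhs = sp.expand(2*z*sp.diff(sp.exp(z)*K(mu, z), z) + 2*mu*sp.exp(z)*K(mu, z))
# RHS as quoted in the text: 2 z e^z (K_mu - K_{mu-1})
rhs = 2*z*sp.exp(z)*(K(mu, z) - K(mu - 1, z))
d = sp.expand(lhs - rhs)
# apply the recurrence K_{mu+1}(z) = K_{mu-1}(z) + (2 mu / z) K_mu(z)
d = sp.expand(d.subs(K(mu + 1, z), K(mu - 1, z) + 2*mu/z*K(mu, z)))
ok = sp.simplify(d) == 0
assert ok
```

Here sympy's derivative rule $K_\mu'(z)=-\tfrac12(K_{\mu-1}+K_{\mu+1})$ plus the recurrence above make the difference cancel identically.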
For the pure de Sitter part, we have the renormalised expression, \begin{eqnarray} \label{final_2} \langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{dS,~Ren.}}=\frac{m }{2\pi^2\alpha^{2}}\left(1+m^2\alpha^2\right)\left[\ln\left(m\alpha\right)-\text{Re}\, \psi(im\alpha)+\frac{1}{12}\right], \end{eqnarray} where $\psi$ is the digamma function~\cite{Abra64}. Although the above expression has been found earlier in various references (e.g.~\cite{BezerradeMello:2010ci} in the context of de Sitter cosmic strings), we have outlined its derivation in \ref{B}, as a check of consistency of our modes, \ref{psi7p}. A couple of comments regarding the definition of \ref{conds4} are in order here. It will turn out below that the quantity $\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}$ is ultraviolet finite. In other words, the curvature induced by the defect $\beta<1$ in \ref{metric1} does not induce any divergence. The divergence only comes from the pure de Sitter part and can be renormalised via a cosmological constant counterterm. This has some qualitative similarity with (though is not the same as) the Hadamard subtraction cum regularisation of the Feynman propagator in a general curved spacetime using the Riemann normal coordinates, e.g.~\cite{Parker:2009uva}. One may also try regularisation techniques different from the present one. Apart from the left hand side being regular, the decomposition of \ref{conds4} has the obvious advantage regarding the application of the Abel-Plana summation formula, which eventually helps us do many computations analytically, as we will see below. Despite this advantage, however, this regularisation scheme might have some caveats as well, as we shall point out in \ref{s6}. 
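For orientation, the renormalised de Sitter condensate \ref{final_2} is straightforward to evaluate numerically as a function of the dimensionless mass $m\alpha$; in particular, it vanishes in the massless limit, since the overall factor of $m$ beats the logarithm. A sketch using mpmath (the sample values of $m\alpha$ are our own choices; the result is quoted in units of $\alpha^{-3}$):

```python
from mpmath import mp, mpf, mpc, pi, log, digamma, re, fabs

mp.dps = 25

def cond_dS(x):
    # Renormalised de Sitter condensate of (final_2), in units of 1/alpha^3,
    # as a function of the dimensionless mass x = m*alpha
    return x*(1 + x**2)/(2*pi**2)*(log(x) - re(digamma(mpc(0, 1)*x)) + mpf(1)/12)

# the overall factor of m beats the logarithm: the condensate vanishes as m -> 0
assert fabs(cond_dS(mpf('1e-8'))) < mpf('1e-6')
vals = [cond_dS(mpf(s)) for s in ('0.5', '1.0', '2.0')]
ok = all(fabs(v) < 10 for v in vals)
assert ok
```

For large $m\alpha$, one can also check that the square bracket tends to $1/12$, since $\text{Re}\,\psi(im\alpha)\approx\ln(m\alpha)+1/(12\,m^2\alpha^2)$ asymptotically.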
\\ After subtracting the integral expression of $\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{dS}}$, \ref{dsgms-1}, from \ref{conds4'}, it turns out from \ref{conds4} that we need to evaluate the expression, % $$\sum_{j}(j+1/2)\left[\left(\frac{1}{\beta^2}I_{\frac{\left(j+1/2\right)}{\beta}+ 1/2}-I_{j+1/2+1/2}\right)+\left(\frac{1}{\beta^2}I_{\frac{\left(j+1/2\right)}{\beta}- 1/2}-I_{j+1/2-1/2}\right)\right],$$ % which can be done conveniently by using the Abel-Plana summation formula for half-integer sums~\cite{Saharian:2007ph}, \begin{eqnarray} \label{abel-planna1} \sum_{j=1/2,3/2,\ldots} f(j)=\int_{0}^{\infty}dv\, f(v)-i\int_{0}^{\infty}du\,\frac{f(iu)-f(-iu)}{e^{2\pi u}+1} \end{eqnarray} Thus it follows from the above equation that \begin{eqnarray} \label{abel-planna2} \sum_{j}\left[f(j/\beta)/\beta-f(j)\right]=-i\int_{0}^{\infty}du{\left(f(iu)-f(-iu)\right)}\left(\frac{1}{e^{2\pi\beta u}+1}-\frac{1}{e^{2\pi u}+1}\right) \end{eqnarray} When we apply this formula in our case we have the relevant integral to be \begin{eqnarray} \label{abel-planna3} &&\sum_{j=1/2}^{\infty}(j+1/2)\Bigg[\frac{1}{\beta^2}I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)-I_{{j+1/2}+1/2}(yr^2/\eta^2)+\frac{1}{\beta^2}I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)-I_{{j+1/2}-1/2}(yr^2/\eta^2)\Bigg]\nonumber\\ &=&-\frac{4}{\pi}\int du\, u \,g(\beta,u)\text{Im}\left[K_{1/2+iu}(yr^2/\eta^2)\right] \end{eqnarray} where we have defined, \begin{equation} g(\beta,u)=\cosh\pi u\left(\frac{1}{e^{2\pi\beta u}+1}-\frac{1}{e^{2\pi u}+1}\right) \label{g} \end{equation} Using this, we have after some algebra \begin{eqnarray} \label{gmconds1} \langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}&&= \langle{0}|\overline{\Psi}\Psi|{0}\rangle-\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{dS}}\nonumber\\ &&= -\frac{4\eta }{\pi^3 \alpha^{3} r}\int_0^\infty\;du\;u\;g(\beta,u)\int_0^\infty {dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Im}[K_{1/2-im\alpha}(y)]\text{Im}[K_{1/2+iu}(yr^2/\eta^2)] \qquad \end{eqnarray} We 
shall evaluate the above integral numerically. However, two special cases are of interest, for which analytic expressions can be found. The first is when the proper distance from the monopole $(r=0)$ is small, $r\alpha/|\eta| \to 0$. Recalling $y= x \eta^2 /r^2$ (cf., the discussion below \ref{conds4'}), we have for large $y$-values~\cite{Abra64} $$\text{Im}[K_{1/2-im\alpha}(y)] \approx -\frac{m\sqrt{\pi}\alpha}{(2y)^{3/2}}e^{-y},$$ % so that \ref{gmconds1} becomes % \begin{eqnarray} \label{gmconds2} \langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}&=&\frac{2^{1/2}\eta m}{\pi^{5/2} \alpha^{2}r}\int_0^\infty\;du\;u\;g(\beta,u)\int_0^\infty {dy}\;y^{-1/2}\;e^{-yr^2/\eta^2}\text{Im}[K_{1/2+iu}(yr^2/\eta^2)] \end{eqnarray} Using now the integral~\cite{Grad} \begin{eqnarray} \label{identity1} \int_0^\infty {d\zeta } \zeta^{s-1} e^{-\zeta} K_{\nu}(\zeta)=\frac{\sqrt{\pi}\Gamma(s+\nu)\Gamma(s-\nu)}{2^s\Gamma(s+1/2)} \end{eqnarray} \ref{gmconds2} becomes \begin{eqnarray} \label{gmconds4} {\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}}\Big\vert_{r/|\eta| \to 0}&=&\frac{m \eta^2}{\pi^{2} \alpha^2 r^2}\text{Im} \int_0^\infty\;du\;u\;g(\beta,u){\Gamma(1+iu)\Gamma(-iu)}\nonumber\\ &=&\frac{m }{24\pi\left(r \alpha/\eta\right)^2}\left(\frac{1}{\beta^2}-1\right)+\frac{m}{2\pi^3\left(r \alpha/\eta\right)^2}\sum_{n=0}^\infty (-1)^n \left({\zeta(2,1+(n+1)\beta)}-{\zeta(2,1+(n+1))}\right)\nonumber\\ \end{eqnarray} where we have used $\Gamma(1-iu)\Gamma(iu)=i\pi/\sinh{\pi u}$~\cite{Abra64} and the expansion, % \begin{eqnarray} \label{express1} \frac{g(\beta,u)}{\sinh{\pi u}} =\left(\frac{1}{e^{2\pi\beta u}+1}-\frac{1}{e^{2\pi u}+1}\right)+\frac{2}{e^{2\pi u}-1}\sum_{n=0}^{\infty}(-1)^{n}\left(e^{-2\pi(n+1)\beta u}-e^{-2\pi(n+1)u}\right) \end{eqnarray} In order to obtain \ref{gmconds4}, we have also used the formula given in \cite{Grad} involving the Hurwitz $\zeta$-function 
$$\zeta(2,1+(n+1)\beta)=\sum_{k=0}^\infty\frac{1}{\left(k+1+(n+1)\beta\right)^{2}}$$ The above series is obviously convergent, as $\beta>0$. For example, for $\beta=0.5$ and $n=0$, its numerical value is close to $0.9$, as can be easily checked using e.g., Mathematica. Thus, the fermionic condensate \ref{gmconds4} diverges as the inverse square of the proper radial distance $r\alpha/|\eta|$ as we move towards the monopole, as expected due to the curvature singularity at $r=0$. Note also that the condensate vanishes for $m=0$. \\ As the second special case, let us consider the opposite scenario, i.e. large proper distance from the monopole, $r\alpha/|\eta| \gg 1$. Since $y= x \eta^2/r^2$ (cf., the discussion below \ref{conds4'}), we make an expansion for small $y$-values~\cite{Grad} % $$K_{1/2-i m\alpha}(y)\approx \frac{1}{2}\Gamma\left(\frac12-i m\alpha\right)\left(\frac{y}{2}\right)^{i m\alpha-1/2}$$ Substituting the above into \ref{gmconds1} and using \ref{identity1}, we find \begin{eqnarray} \label{gmconds5} \langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}\Big\vert_{r/|\eta| \to \infty}&\approx&-\frac{4\eta }{\pi^3 \alpha^{3}r}\int_0^\infty du\, u\ g(\beta,u)\int_0^\infty {dy} y e^{-yr^2/\eta^2}\text{Im}\Bigg[\frac{1}{2}\Gamma(1/2-i m\alpha)\left(\frac{y}{2}\right)^{i m\alpha-1/2}\text{Im}[K_{1/2+iu}(yr^2/\eta^2)]\Bigg]\nonumber\\ &=&\frac{1}{2^{1/2}\pi^{7/2}\alpha^3\left(r/\eta\right)^{4}}\int_0^\infty\;du\;u^2\;g(\beta,u)\Bigg[\frac{\Gamma(1/2-i m\alpha)}{\Gamma(2+i m\alpha)}\left(\frac{\eta}{2r}\right)^{2 mi\alpha}\Gamma(1+i m\alpha+i u)\Gamma(1+i m\alpha-i u)\Bigg]\nonumber\\ &=&\frac{2^{1/2}\alpha f(\beta,m\alpha)}{\pi^{7/2}\left(r\alpha/\eta\right)^{4}}\sin\left(2m\alpha\,\text{ln}\left(2r/|\eta|\right)-\varphi_0\right) \end{eqnarray} showing a quartic fall off along with oscillatory behaviour. 
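Since the Abel-Plana summation over half-integers is the workhorse of these manipulations, a quick numerical sanity check of it with the test function $f(x)=e^{-x}$ (our choice; any function analytic in the right half plane with suitable decay works) may be reassuring:

```python
import math

def f(x):
    return math.exp(-x)   # test function, analytic for Re x >= 0

# LHS: sum over half-integers j = 1/2, 3/2, ... (geometric tail is negligible)
lhs = sum(f(n + 0.5) for n in range(200))

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a)/n
    s = g(a) + g(b) \
        + 4*sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1)) \
        + 2*sum(g(a + 2*k*h) for k in range(1, n//2))
    return s*h/3

# RHS: integral_0^inf f(v) dv - i integral_0^inf (f(iu)-f(-iu))/(e^{2 pi u}+1) du;
# for f(x)=e^{-x}, f(iu) - f(-iu) = -2 i sin u, so the boundary term is real
integral_f = simpson(f, 0.0, 50.0, 20000)
boundary = simpson(lambda u: math.sin(u)/(math.exp(2*math.pi*u) + 1),
                   0.0, 10.0, 20000)
rhs = integral_f - 2.0*boundary

assert abs(lhs - rhs) < 1e-6
ok = True
```

Here the half-integer sum equals $e^{-1/2}/(1-e^{-1})\approx 0.9595$, and the boundary integral supplies the small correction to $\int_0^\infty e^{-v}dv=1$.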
The phase $\varphi_0$ and the function $f(\beta,m\alpha)$ above are determined by the complex relation \begin{eqnarray} \label{B-function} f(\beta,m\alpha)e^{i\varphi_0}=\frac{\Gamma(1/2-i m\alpha)}{\Gamma(2+i m\alpha)}\int_0^\infty du u^2 g(\beta,u) \Gamma(1+i m\alpha+i u)\Gamma(1+i m\alpha-i u) \end{eqnarray} Note that if we set $m=0$ above, the integral becomes real and hence $\varphi_0$ vanishes. This makes \ref{gmconds5} vanish too in the massless limit. We also note that the radial dependence of the divergence in \ref{gmconds4}, or of the fall off in \ref{gmconds5}, is qualitatively similar to that of the de Sitter cosmic string~\cite{BezerradeMello:2010ci}. Finally, we have plotted the variation of the condensate $\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}$, \ref{gmconds1}, with respect to the dimensionless mass parameter $m\alpha$ and the dimensionless proper distance squared $r^2/\eta^2$, in \ref{f1}, for two different values of the defect parameter $\beta$. For sufficiently high values of either of these variables, the curves tend to merge. The first plot shows that the condensate vanishes for $m=0$, in agreement with our previous results found for the special cases. \begin{figure}[h] \includegraphics[scale=0.70]{Monopole_1.eps}\hspace{0.2cm} \includegraphics[scale=0.7]{Monopole_2.eps} \caption{Global monopole induced part of the vacuum expectation value, $\langle{0}|\overline{\Psi}\Psi|{0}\rangle_{\text{gm}}$, \ref{gmconds1}, as a function of the dimensionless mass parameter $m\alpha$ (left) and as a function of the dimensionless proper radial distance squared, $r^2/\eta^2$ (right). See main text for discussion.} \label{f1} \end{figure} \section{Vacuum expectation value of the energy-momentum tensor}\label{s4} We next wish to compute the vacuum expectation value of the fermionic energy-momentum tensor in the background \ref{metric2}. 
We have for the Dirac field, \begin{eqnarray} \label{emt1} T_{\mu\nu}=\frac{i}{2}\left[{\overline{\Psi}}\gamma_{(\mu}\nabla_{\nu)}{\Psi}-(\nabla_{(\mu}{\overline{\Psi}})\gamma_{\nu)}{\Psi}\right] \end{eqnarray} Expanding the spinor and its adjoint in terms of mode functions, and taking the expectation value with respect to the vacuum, we have \begin{eqnarray} \langle 0|T_{\mu\nu}|0\rangle=\frac{i}{2}\int_{0}^{\infty} d\lambda\sum_{\sigma,j,l,m}\left[{\overline{\Psi}^{(-)}_{\sigma j l m}}(x)\gamma_{(\mu}\nabla_{\nu)}{\Psi^{(-)}_{\sigma j l m}(x)-(\nabla_{(\mu}\overline{\Psi}^{(-)}_{\sigma j l m}})\gamma_{\nu)}{\Psi^{(-)}_{\sigma j l m}}(x)\right] \label{T1} \end{eqnarray} In the above expression we encounter anti-commutators between the $\gamma$'s and the spin connection matrices, $[\gamma_{\mu}, \Gamma_{\nu}]_+$. However, using \ref{basis} and \ref{gamma2'}, it is easy to see that such anti-commutators vanish for all $\mu$, $\nu$, leaving us only with the partial derivatives to deal with, \begin{eqnarray} \langle 0|T_{\mu\nu}|0\rangle=\frac{i}{2} \int_{0}^{\infty} d\lambda\sum_{\sigma,j,l,m}\left[{\overline{\Psi}^{(-)}_{\sigma j l m}}(x)\gamma_{(\mu}\partial_{\nu)}{\Psi^{(-)}_{\sigma j l m}(x)-(\partial_{(\mu}\overline{\Psi}^{(-)}_{\sigma j l m}})\gamma_{\nu)}{\Psi^{(-)}_{\sigma j l m}}(x)\right] \label{T2} \end{eqnarray} Using now the second of \ref{psi7p}, we shall explicitly compute below the above expression component-wise. 
\subsection{Energy density} Using \ref{basis} in order to express the curved space $\gamma$-matrices in terms of the flat space ones, we have from \ref{T2} \begin{eqnarray} \label{emt_2} \langle{0}|T^0_{0}|0 \rangle&=&\frac{i\eta}{2 \alpha}\int_{0}^{\infty} d\lambda\sum_{\sigma j l m}\left[{{\Psi^\dagger}^{(-)}_{\sigma j l m}}(x)\partial_{0}{\Psi^{(-)}_{\sigma j l m}}(x)-(\partial_{0}{\Psi^\dagger}^{(-)}_{\sigma j l m}(x)){\Psi^{(-)}_{\sigma j l m} }(x)\right] \end{eqnarray} Using now the second of \ref{psi7p} and the formula for the spherical harmonics, \begin{eqnarray} \label{spherical_harmonics} \sum_{m_j=-j}^{m_j=+j}|\Omega_{jlm}|^2=\frac{2j+1}{4\pi}, \end{eqnarray} \ref{emt_2} takes the form after some algebra, \begin{eqnarray} \label{emt_3} \langle 0|T^{0}_{0}|0\rangle&=&\frac{\eta^5 }{4\pi^2\beta^2 \alpha^{4} r}\sum_{j=1/2}^{\infty}(2j+1)\int d\lambda \lambda \left(J_{\frac{j+1/2}{\beta}+1/2}^2(\lambda r)+J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\right)\nonumber\\ &&\Bigg[\Bigg(\partial_{\eta}^2+\frac{2}{\eta}\partial_{\eta}+4\left(\lambda^2+\frac{i m\alpha(1/2-im\alpha)}{\eta^2}\right)\Bigg)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)\Bigg] \end{eqnarray} Rearranging the above expression a little bit, we have \begin{eqnarray} \label{vev_00comp} \langle{0}|T^0_0|{0}\rangle &=&\frac{\eta^5 }{4\pi^2\beta^2 \alpha^{4} r}\sum_{j=1/2}^{\infty}(2j+1)\Bigg[\left(\partial_{\eta}^2+\frac{2}{\eta}\partial_{\eta}+\frac{4i m\alpha(1/2-im\alpha)}{\eta^2}\right)\nonumber\\ &&\times\int d\lambda \lambda \left(J_{\frac{j+1/2}{\beta}+1/2}^2(\lambda r)+J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)\nonumber\\ &&+4\int d\lambda \lambda^3 \left(J_{\frac{j+1/2}{\beta}+1/2}^2(\lambda r)+J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)\Bigg] \end{eqnarray} Using the integral representation for the product 
$K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)$, \ref{integral_1}, we reduce the first integral in \ref{vev_00comp} to \begin{eqnarray} \label{first_int} &&\int d\lambda \lambda \left(J_{\frac{j+1/2}{\beta}+1/2}^2(\lambda r)+J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)\nonumber\\ &=&\frac{1}{2r^2}\int_0^\infty{dx}e^{-x}\left(I_{\frac{j+1/2}{\beta}+1/2}(x)+I_{\frac{j+1/2}{\beta}-1/2}(x)\right)e^{x\eta^2/r^2}K_{1/2-im\alpha}(x\eta^2/r^2), \end{eqnarray} where for the variable $x$ we used the previous definition $x=r^2/(4u\eta^2\sinh^2{y^\prime})$ (cf., the discussion below \ref{conds31}) and have performed the integration over $y'$. \\ Similarly, for the second integral in \ref{vev_00comp}, we have \begin{eqnarray} \label{second_int} &&\int d\lambda \lambda^3 \left(J_{\frac{j+1/2}{\beta}+1/2}^2(\lambda r)+J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\right)K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)\nonumber\\ &=&-\frac{2}{r^4}\frac{\partial}{\partial{\eta^2}}\eta^2\int_0^\infty\;dx\;x\;{e^{-x}}\left(I_{\frac{j+1/2}{\beta}+1/2}(x)+I_{\frac{j+1/2}{\beta}-1/2}(x)\right)e^{x\eta^2/r^2}K_{1/2-im\alpha}(x\eta^2/r^2) \end{eqnarray} As earlier, by defining $x=yr^2/\eta^2$, we rewrite the differential operator appearing in \ref{vev_00comp} as, \begin{eqnarray} \label{operator3} \partial_{\eta}^2+\frac{2 \partial_{\eta}} {\eta}+\frac{4im\alpha\left(1/2-im\alpha\right)}{\eta^2}\equiv \frac{4x}{r^2}\left(x\partial_x^2+\frac{3 \partial_x}{2}+ \frac{im\alpha\left(1/2-im\alpha\right)}{x}\right) \end{eqnarray} Substituting now \ref{first_int}, \ref{second_int} and \ref{operator3} into \ref{vev_00comp}, we have the vacuum expectation value of the energy density, \begin{eqnarray} \label{energy_density} \langle{0}|T^0_0|{0}\rangle&=&\frac{\eta^5}{2\pi^3 \beta^2 \alpha^{4} 
{r^5}}\sum_{j=1/2}^{\infty}(2j+1)\int_0^\infty{dx}\;x\;e^{-x}\left(I_{\frac{j+1/2}{\beta}+1/2}(x)+I_{\frac{j+1/2}{\beta}-1/2}(x)\right)\nonumber\\ &&\times \left(x\partial_x^2+\left(\frac32-2x\right)\partial_x-2+\frac{im\alpha\left(1/2-im\alpha\right)}{x}\right)e^{x\eta^2/r^2}K_{1/2-im\alpha}(x\eta^2/r^2) \end{eqnarray} Following now~\cite{BezerradeMello:2010ci}, we use the properties of $K_{1/2-im\alpha}$ to simplify the operator part of the above equation as \begin{eqnarray} && \left(x\partial_x^2+\left(\frac32-2x\right)\partial_x-2+\frac{im\alpha\left(1/2-im\alpha\right)}{x}\right)e^{x\eta^2/r^2}K_{1/2-im\alpha}(x\eta^2/r^2)\nonumber\\ &=&-\frac{1}{2}e^{x\eta^2/r^2}\left(K_{1/2-im\alpha}(x\eta^2/r^2)+K_{-1/2+im\alpha}(x\eta^2/r^2)\right) \end{eqnarray} Using the above expression and also once again converting the variable to $y=x\eta^2/r^2$ for our convenience, the integral of \ref{energy_density} takes the form, \begin{eqnarray} \label{energy-density3} \langle{0}|T^0_0|{0}\rangle&=&\frac{\eta }{4\pi^2\beta^2 \alpha^{4} {r}}\sum_{j=1/2}^{\infty}(2j+1)\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\left(I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)+I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)\right)\text{Re}[K_{1/2-im\alpha}(y)],\nonumber\\ \end{eqnarray} \subsection{Radial pressure} We have \begin{eqnarray} \label{rr-comp} \langle{0}|T_{rr}|0\rangle=\frac{i}{2}\int d\lambda \sum_{\sigma j l m}\left[{\Psi^\dagger}^{(-)}_{\sigma j l m}\gamma^{(0)}\gamma_{r}\partial_{r}\Psi^{(-)}_{\sigma j l m}-(\partial_{r}{\Psi^\dagger}^{(-)}_{\sigma j l m})\gamma^{(0)}\gamma_{r}\Psi^{(-)}_{\sigma j l m}\right] \end{eqnarray} which, after using \ref{psi7p} and \ref{basis} becomes \begin{eqnarray} \label{radial_1} \langle{0}|T^{r}_{r}|{0}\rangle&=&\frac{\eta^5 }{4\pi^2\beta^2 \alpha^{4} r}\sum_{j=1/2}^{\infty}(2j+1)\int d\lambda \lambda^3 \Bigg(J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)\partial_r{J_{\frac{j+1/2}{\beta}+1/2}}(\lambda r)-J_{\frac{j+1/2}{\beta}+1/2}(\lambda 
r)\partial_r{J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)}\Bigg)\nonumber\\ &&\times \left(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\right) \end{eqnarray} In order to evaluate the integral in \ref{radial_1}, we further use~\cite{BezerradeMello:2010ci} \begin{eqnarray} \label{formula_1} &&J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)\partial_r{J_{\frac{j+1/2}{\beta}+1/2}}(\lambda r)-J_{\frac{j+1/2}{\beta}+1/2}(\lambda r)\partial_r{J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)}\nonumber\\ &=&\frac{1}{\lambda^2}\left(\frac{1}{2}\partial_r^2+\frac{1}{r}\partial_r+\frac{j+1/2}{\beta}\left(\frac{1/2-\frac{j+1/2}{\beta}}{r^2}\right)\right)J_{{\frac{j+1/2}{\beta}-1/2}}^2(\lambda r)+2J_{{\frac{j+1/2}{\beta}-1/2}}^2(\lambda r) \end{eqnarray} Substituting this into \ref{radial_1}, we have \begin{eqnarray} \label{radial_2} \langle{0}|T^{r}_{r}|{0}\rangle&=&-\frac{\eta^5 }{4\pi^2\beta^2 \alpha^{4} r}\sum_{j=1/2}^{\infty}(2j+1)\int d\lambda \Bigg(\lambda\left(\frac{1}{2}\partial_r^2+\frac{1}{r}\partial_r+\frac{j+1/2}{\beta}\left(\frac{1/2-\frac{j+1/2}{\beta}}{r^2}\right)\right)J_{{\frac{j+1/2}{\beta}-1/2}}^2(\lambda r)\nonumber\\ &&+2\lambda^3 J_{{\frac{j+1/2}{\beta}-1/2}}^2(\lambda r)\Bigg)\Bigg(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\Bigg) \end{eqnarray} Now the first term in the above integral can be rewritten, after using the integral representation \ref{integral_1}, as \begin{eqnarray} \label{int4} &&\int d\lambda \lambda J_{{\frac{j+1/2}{\beta}-1/2}}^2(\lambda r)\Bigg(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\Bigg)\nonumber\\ &=&\frac{1}{2r^2}\int_0^\infty{dx}e^{-x}I_{\frac{j+1/2}{\beta}-1/2}(x)e^{x\eta^2/r^2}\left( K_{1/2-im\alpha}(x\eta^2/r^2)+{\rm c.c.}\right) \end{eqnarray} Likewise, the second integral of \ref{radial_2} can be recast as \begin{eqnarray} \label{int5} &&\int d\lambda \lambda^3 J^2_{{\frac{j+1/2}{\beta}-1/2}}(\lambda
r)\Bigg(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\Bigg)\nonumber\\ &=&\frac{2}{r^4}\int_0^\infty{dx}\left(\left(x\partial_x+1\right) e^{-x}I_{\frac{j+1/2}{\beta}-1/2}(x)\right)e^{x\eta^2/r^2}\left( K_{1/2-im\alpha}(x\eta^2/r^2)+{\rm c.c.}\right) \end{eqnarray} where in both the above cases we have defined the variable $x=yr^2/\eta^2$, as earlier. Using now \ref{int4}, \ref{int5} and once again \ref{integral_1} in \ref{radial_2}, and converting the variables to $y=x\eta^2/r^2$, we find after a little algebra \begin{eqnarray} \label{rr-emt1} \langle{0}|T^r_r|{0}\rangle&=&\frac{\eta }{4\pi^2\beta^2 \alpha^{4} r}\sum_{j=1/2}^\infty\left(2j+1\right)\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\;\text{Re}[K_{1/2-im\alpha}(y)]\nonumber\\ &&\times \left(I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)+I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)\right)=\langle{0}|T^0_0|{0}\rangle \end{eqnarray} where the last equality follows from comparison with \ref{energy-density3}.
\subsection{The angular stresses} We have \begin{eqnarray} \label{theta_1} \langle{0}|T^{\theta}_{\theta}|{0}\rangle&=&\frac{\eta^5 }{4\pi^2\beta^3 \alpha^{4} r^2}\sum_{j=1/2}^{\infty}(2j+1)^2\int d\lambda \lambda^2 J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)J_{\frac{j+1/2}{\beta}+1/2}(\lambda r)\nonumber\\ &&\times\left(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\right) \end{eqnarray} For the product of the two Bessel functions appearing above, we use the relation~\cite{Braganca:2019ord} $$ J_{\frac{j+1/2}{\beta}-1/2}(\lambda r)J_{\frac{j+1/2}{\beta}+1/2}(\lambda r)=\frac{1}{\lambda}\left(\frac{(j+1/2)/\beta-1/2}{r}-\frac{1}{2}\partial_r\right)J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)$$ We then have, after using \ref{integral_1}, \begin{eqnarray} \label{11-theta-theta-1} &&\sum_{j=1/2}^{\infty}(2j+1)^2\int d\lambda \lambda \Bigg(\left(\frac{(j+1/2)/\beta-1/2}{r}-\frac{1}{2}\partial_r\right)J_{\frac{j+1/2}{\beta}-1/2}^2(\lambda r)\Bigg)\nonumber\\ &&\times \Bigg(K_{1/2-im\alpha}(i\lambda|\eta|)K_{1/2-im\alpha}(-i\lambda|\eta|)+{\rm c.c.}\Bigg)\nonumber\\ &=&\frac{r}{\eta^4}\sum_{j=1/2}^{\infty}(2j+1)^2\int_0^\infty{dy}\;y\;e^{y}\text{Re}\left[K_{1/2-im\alpha}(y)\right]e^{-yr^2/\eta^2} \left(I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)-I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)\right)\nonumber\\ \end{eqnarray} Substituting now \ref{11-theta-theta-1} into \ref{theta_1}, we obtain after some algebra \begin{eqnarray} \label{11-theta-theta-3'} \langle{0}|{T^\theta_\theta}|{0}\rangle&=&{\frac{\eta }{4\pi^2\beta^3 \alpha^{4} r}\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Re}\left[K_{1/2-im\alpha}(y)\right]} \sum_{j=1/2}^{\infty}(2j+1)^2\left(I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)-I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)\right)\nonumber\\ \end{eqnarray} Note that the angular symmetry of our background \ref{metric2} trivially guarantees that $\langle{0}|{T^\phi_\phi}|{0}\rangle=\langle{0}|{T^\theta_\theta}|{0}\rangle$.\\ Finally,
using \ref{psi7p} and \ref{basis} in \ref{T2}, it can be easily seen that $\langle 0 |T_{\mu}{}^{\nu}|0\rangle =0$, for all $\mu \neq \nu$, leaving us only with the diagonal components for the vacuum expectation value. \section{Pure global monopole contribution to $\langle 0| T_{\mu}{}^{\nu}|0\rangle$} \label{s5} As in \ref{s3}, we now wish to extract the pure global monopole contribution from the expressions \ref{energy-density3}, \ref{rr-emt1}, \ref{11-theta-theta-3'}. We write, in analogy with \ref{conds4}, \begin{eqnarray} \label{pure-gm-1} \langle{0}|{T^\nu_\mu}|{0}\rangle_{\rm gm}&=&\langle{0}|{T^\nu_\mu}|{0}\rangle-\langle{0}|{T^\nu_\mu}|{0}\rangle_{\rm dS}. \end{eqnarray} The derivation of the pure de Sitter contribution, $\langle{0}|{T_\mu{}^\nu}|{0}\rangle_{\rm dS}$ (corresponding to $\beta=1$ in \ref{energy-density3}, \ref{rr-emt1}, \ref{11-theta-theta-3'}) is very briefly sketched in \ref{C}, once again as a consistency check of our mode functions. Note that $\langle{0}|{T^\nu_\mu}|{0}\rangle_{\rm dS}$ has been computed in numerous places earlier, e.g. in~\cite{BezerradeMello:2010ci} in the context of de Sitter cosmic strings. Owing to the maximal symmetry of the de Sitter spacetime, each component of this vacuum expectation value is the same constant, \ref{emt-ds-ren-pure}.
Subtracting now the first equation of \ref{ds-emt-tt} respectively from \ref{energy-density3}, \ref{rr-emt1}, \ref{11-theta-theta-3'}, we have the corresponding pure global monopole contributions, \begin{eqnarray} \langle{0}|T_0{}^0|{0}\rangle_{\rm gm}&&=\frac{\eta }{2\pi^2\alpha^{4}{r}}\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Re}[K_{1/2-im\alpha}(y)]\sum_{j=1/2}^{\infty}\frac{(j+1/2)}{\beta^2}\nonumber\\ &&\times\Bigg[\left(I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)-I_{{j+1/2}+1/2}(yr^2/\eta^2)\right)+\left(I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)-I_{{j+1/2}-1/2}(yr^2/\eta^2)\right)\Bigg],\nonumber\\ \langle{0}|{T_r{}^r}|{0}\rangle_{\rm gm}&&=\langle{0}|T_0{}^0|{0}\rangle_{\rm gm},\nonumber\\ \langle{0}|{T_\theta{}^\theta}|{0}\rangle_{\rm gm}&&=\langle{0}|{T_{\phi}{}^\phi}|{0}\rangle_{\rm gm}=\frac{\eta }{2\pi^2\alpha^{4}{r}}\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Re}[K_{1/2-im\alpha}(y)]\sum_{j=1/2}^{\infty}\frac{(j+1/2)^2}{\beta^3}\nonumber\\ &&\times\left[\left(I_{\frac{j+1/2}{\beta}-1/2}(yr^2/\eta^2)-I_{{j+1/2}-1/2}(yr^2/\eta^2)\right)-\left(I_{\frac{j+1/2}{\beta}+1/2}(yr^2/\eta^2)-I_{{j+1/2}+1/2}(yr^2/\eta^2)\right)\right]\nonumber\\ \end{eqnarray} Applying now the Abel-Plana summation formula~\ref{abel-planna1} to the above equations, we get \begin{eqnarray} \label{final-gm} \langle{0}|T_0{}^0|{0}\rangle_{\rm gm}&=&\langle{0}|{T_r{}^r}|{0}\rangle_{\rm gm}=\frac{2\eta }{\pi^3\alpha^{4}{r}}\int_{0}^\infty{du u g(\beta,u)}\int_0^\infty{dy} y e^{y\left(1-r^2/\eta^2\right)}\text{Re}[K_{1/2-im\alpha}(y)]\text{Im}\left[K_{1/2-iu}\left(yr^2/\eta^2\right)\right],\nonumber\\ \langle{0}|{T_\theta{}^\theta}|{0}\rangle_{\rm gm}&=&\langle{0}|{T_\phi{}^\phi}|{0}\rangle_{\rm gm}=\frac{2\eta}{\pi^3\alpha^{4}{r}}\int_{0}^\infty{du u^2 g(\beta,u)}\int_0^\infty{dy}\;y\;e^{y\left(1-r^2/\eta^2\right)}\text{Re}[K_{1/2-im\alpha}(y)]\text{Re}\left[K_{1/2-iu}\left(yr^2/\eta^2\right)\right]\nonumber\\ \end{eqnarray} where the function $g(\beta,u)$
is defined in \ref{g}. Also, in deriving $\langle{0}|{T_\theta{}^\theta}|{0}\rangle_{\rm gm}$ (or $\langle{0}|{T_\phi{}^\phi}|{0}\rangle_{\rm gm}$), we have used $$\sum_{j=1/2}\frac{1}{\beta}\left(\frac{j+1/2}{\beta}\right)^2\left(\left(I_{\frac{j+1/2}{\beta}-1/2}-I_{j+1/2-1/2}\right)-\left(I_{\frac{j+1/2}{\beta}+1/2}-I_{j+1/2+1/2}\right)\right)=\frac{4}{\pi}\int_{0}^{\infty}du\;u^2 g(\beta,u)\text{Re}[K_{1/2-iu}(yr^2/\eta^2)]$$ Let us now take the Minkowski limit of the above expressions. This should correspond to $t/\alpha \to 0$ in the metric, where $t$ is the cosmological time. This means the conformal time $\eta \approx -\alpha$ in this limit. Thus in \ref{final-gm}, defining a variable $z=y r^2/ \alpha^{2}$, we have for a fixed $z$-value and large $\alpha$~\cite{Abra64}, \begin{eqnarray} \label{mink-lim1} \text{Re}[K_{1/2-im\alpha}(z\alpha^2)]\approx \sqrt{\frac{{\pi}}{2z \alpha^2}}e^{-\frac{m^2}{2z}-z\alpha^2} \end{eqnarray} Substituting the above into \ref{final-gm}, we have \begin{eqnarray} \label{Minkgm} \langle{0}|T_0{}^0|{0}\rangle_{\rm gm}^{\rm (M)}&=&\langle{0}|T_r{}^r|{0}\rangle_{\rm gm}^{\rm (M)}=\frac{2^{1/2}}{\pi^{5/2}{r^4}}\int_{0}^\infty{du\;u\;g(\beta,u)}\int_0^\infty{dz}\;z^{1/2}e^{-\frac{m^2r^2}{2z}-z}\text{Im}\left[K_{1/2-iu}\left(z\right)\right]\nonumber\\ \langle{0}|T_{\theta}{}^\theta |{0}\rangle_{\rm gm}^{\rm (M)}&=&\langle{0}|T_{\phi}{}^\phi |{0}\rangle_{\rm gm}^{\rm (M)}=\frac{2^{1/2}}{\pi^{5/2}{r^4}}\int_{0}^\infty{du\;u^2\;g(\beta,u)}\int_0^\infty{dz}\;z^{1/2}e^{-\frac{m^2r^2}{2z}-z}\text{Re}\left[K_{1/2-iu}\left(z\right)\right] \end{eqnarray} Setting $m^2=0$ above, let us compute the trace of $\langle{0}|T_\mu{}^\nu|{0}\rangle_{\rm gm}^{\rm (M)}$.
We have \begin{eqnarray} \label{renorm0} \langle{0}|T_\mu{}^\mu|{0}\rangle_{\rm gm}^{(M)}\big\vert_{m^2=0}&=&\frac{2^{3/2}}{\pi^{5/2}{r^4}}\int_{0}^\infty{du\;u\;g(\beta,u)}\int_0^\infty{dz}\;z^{1/2}e^{-z}\left(\text{Im}\left[K_{1/2-iu}\left(z\right)\right]+{u}\text{Re}\left[K_{1/2-iu}\left(z\right)\right]\right) \qquad \end{eqnarray} We now separate the right hand side of \ref{identity1} into real and imaginary parts and substitute into the above integral. It is easy to see that both the $z$-integrals vanish. Thus the trace anomaly induced by the global monopole in the flat spacetime for the fermionic field is vanishing.\\ Likewise, we can show that the trace anomaly induced by the global monopole in the de Sitter spacetime is vanishing, too. Setting $m=0$ in \ref{final-gm}, we have \begin{eqnarray} \label{trace-anomaly} \langle{0}|T_\mu{}^\mu|{0}\rangle_{\rm gm}\big\vert_{m=0}&=&\frac{2^{3/2}}{\pi^{5/2}\left(\alpha{r}/\eta\right)^4}\int_{0}^\infty{du\;u\;g(\beta,u)}\int_0^\infty{dx}\;x^{1/2}\;e^{-x}\left(\text{Im}\left[K_{1/2-iu}\left(x\right)\right]+{u}\text{Re}\left[K_{1/2-iu}\left(x\right)\right]\right) \qquad \end{eqnarray} where we have used $K_{1/2}(y)=\sqrt{\pi/2y}\,e^{-y}$~\cite{BezerradeMello:2010ci}, and the variable transformation $x=yr^2/\eta^2$ to arrive at the above expression. The above integral is formally similar to \ref{renorm0} and hence is vanishing. The vanishing of the above trace anomalies can also be seen from the vanishing of the condensate $\langle 0| \overline{\Psi} \Psi | 0\rangle_{\rm gm} $ for $m=0$, as discussed in \ref{s3}. From \ref{emt1} and the Dirac equation, the trace anomaly for the fermionic field is given by \begin{equation} \langle 0|T_{\mu}{}^{\mu}| 0\rangle\big\vert_{m=0}= \lim_{m\to 0}m \langle 0| \overline{\Psi} \Psi | 0\rangle \label{add} \end{equation} Thus it is clear that the trace anomaly induced by the global monopole in the de Sitter spacetime (and hence in the Minkowski spacetime) should be vanishing.
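The trace computation above relies on the elementary closed form of the order-$1/2$ Macdonald function, $K_{1/2}(y)=\sqrt{\pi/2y}\,e^{-y}$, which is a standard identity. A minimal numerical check (the sample points are arbitrary):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

y = np.array([0.1, 0.5, 1.0, 3.0, 10.0])
closed_form = np.sqrt(np.pi / (2 * y)) * np.exp(-y)
# K_{1/2}(y) agrees with the closed form to machine precision
assert np.allclose(kv(0.5, y), closed_form, rtol=1e-12)
```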
This also shows that the decompositions of \ref{conds4} and \ref{pure-gm-1} are consistent with each other. It can also be easily checked that the global monopole contribution to the energy-momentum tensor, \ref{final-gm}, obeys the conservation equation, $\nabla_{\mu}{\langle{T^\mu{}_\nu}\rangle_{\rm gm}}=0$. (The pure de Sitter part, \ref{emt-ds-ren-pure}, satisfies it trivially.) In the background \ref{metric2}, we have two independent components of $\nabla_{\mu}{\langle{T^\mu{}_\nu}\rangle_{\rm gm}}$, \begin{eqnarray} \label{cov1} &&\partial_\eta{\langle{T^0_0}\rangle_{\rm gm}}+\frac{1}{\eta}\left({\langle{T^r_r}\rangle_{\rm gm}}+2{\langle{T^\theta_\theta}\rangle_{\rm gm}}-3{\langle{T^0_0}\rangle_{\rm gm}}\right),\quad {\rm and} \quad \partial_r{\langle{T^r_r}\rangle_{\rm gm}}+\frac{2}{r}\left({\langle{T^r_r}\rangle_{\rm gm}}-{\langle{T^\theta_\theta}\rangle_{\rm gm}}\right) \end{eqnarray} Substituting the components from \ref{final-gm}, it is easy to check that both the above expressions vanish.\\ \begin{figure}[ht] \begin{center} \includegraphics[scale=0.55]{T00_malpha.eps} \includegraphics[scale=0.55]{T00_r_eta.eps} \end{center} \caption{Variation of the vacuum expectation value of the energy density or the radial pressure of the global monopole induced part (the first of \ref{final-gm}) as a function of the dimensionless mass parameter $m\alpha$ (left) and the dimensionless radial distance (right). The $m\to0$ limit does not diverge, but touches the $y$-axis at a large negative value. On the other hand, the $r/|\eta| \to 0$ limit is divergent, owing to the curvature singularity located there due to the global monopole.
See main text for details.} \label{energy} \end{figure} \begin{figure} \includegraphics[scale=0.65]{Tthetatheta.eps}\hspace{0.2mm} \includegraphics[scale=0.65]{Tthetatheta_r_eta.eps} \caption{The variation of the vacuum expectation values of the polar or azimuthal stresses induced by the global monopole, \ref{final-gm}, as a function of the dimensionless mass parameter $m\alpha$ (left) and the radial distance $r/|\eta|$ (right). See main text for details.} \label{angular} \end{figure} As in \ref{s3}, let us now look into the two special cases of \ref{final-gm}, i.e., small and large proper distances from the monopole. For small argument, the real part of the modified Bessel function behaves as~\cite{Abra64} $$\text{Re}\left[{K_{1/2-im\alpha}}(y)\right]\approx{\sqrt{\frac{\pi}{2y}}}e^{-y}\left(1-\frac{m^2\alpha^2}{2y}\right)$$ Substituting the above into the first of \ref{final-gm}, we have \begin{eqnarray} \label{tt-gm} \langle{0}|T^0_0|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to 0}&&=\frac{2^{1/2}\eta }{\pi^{5/2}\alpha^{4}{r}}\int_{0}^\infty{du\;u\;g(\beta,u)}\int_0^\infty{dy}\;y^{1/2}\;e^{-yr^2/\eta^2}\left(1-\frac{m^2\alpha^2}{2y}\right)\text{Im}\left[K_{1/2-iu}\left(yr^2/\eta^2\right)\right]\nonumber\\ &&=\langle{0}|T^r_r|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to 0} \end{eqnarray} which can be evaluated exactly using \ref{identity1}, \ref{express1} \begin{eqnarray} \label{tt1-gm} \langle{0}|T^0_0|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to 0}&&=-\frac{1}{72\pi}\bigg(\frac{7}{40\left( r\alpha/\eta\right)^{4}}\left(\frac{1}{\beta^4}-1\right)+\frac{3}{4\pi^3 \left(\alpha r/\eta\right)^4}\sum_{n=0}^\infty (-1)^{n} \big({\zeta(4,1+(n+1)\beta)}-{\zeta(4,1+(n+1))}\big)\nonumber\\ &&-\frac{\pi m^2}{3\left(r\alpha/\eta\right)^2}\left(\frac{1}{\beta^2}-1\right)+\frac{9m^2}{2\pi \left(\alpha r/\eta\right)^2}\sum_{n=0}^\infty (-1)^{n} \left[{\zeta(2,1+(n+1)\beta)}-{\zeta(2,1+(n+1))}\right]\bigg)\nonumber\\&&=\langle{0}|T^r_r|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to 0}
\end{eqnarray} where the Hurwitz $\zeta$ function is defined below \ref{express1}. Likewise, we have \begin{eqnarray} \label{radial_1-gm} \langle{0}|T^\theta_\theta|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to 0}&=&\langle{0}|T^\phi_\phi|{0}\rangle_{\rm gm}=\frac{1}{72\pi}\bigg(\frac{7}{40\left( r\alpha/\eta\right)^{4}}\left(\frac{1}{\beta^4}-1\right)+\frac{3}{4\pi^3\left(\alpha r/\eta\right)^4}\sum_{n=0}^\infty (-1)^{n} \big({\zeta(4,1+(n+1)\beta)}\nonumber\\ &-&{\zeta(4,1+(n+1))}\big)-\frac{9m^2\zeta(3)}{\pi^2\left(r\alpha/\eta\right)^2}\left(\frac{1}{\beta^3}-1\right)+\frac{18m^2}{\pi^2 \left(\alpha r/\eta\right)^2}\sum_{n=0}^\infty (-1)^{n} \big({\zeta(3,1+(n+1)\beta)}\nonumber\\ &-&{\zeta(3,1+(n+1))}\big)\bigg) \end{eqnarray} Note that all the components diverge at the location of the monopole $(r=0)$, due to the curvature singularity present there. As earlier, the next special case is to take small values of $y$ in \ref{final-gm}, corresponding to large proper radial distances from the global monopole. Following a procedure similar to that described at the end of \ref{s3}, we find \begin{eqnarray} \label{final-gm0} \langle{0}|T^0_0|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to \infty}&=&\frac{f(\beta,m\alpha)}{\sqrt{2}\pi^{5/2}\left(r\alpha/|\eta|\right)^{4}}\cos\left(2m\alpha\text{ln}\left(2r/|\eta|\right)-\varphi_0\right)=\langle{0}|{T^r_r}|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to \infty}\nonumber\\ \langle{0}|{T^\theta_\theta}|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to \infty}&=&\langle{0}|{T^\phi_\phi}|{0}\rangle_{\rm gm}\big\vert_{r/|\eta| \to \infty}=\frac{f(\beta,m\alpha)}{\sqrt{2}\pi^{5/2}\left(r\alpha/|\eta|\right)^{4}}(1+m^2\alpha^2)^{1/2}\,\sin\left(2m\alpha\text{ln}\left(2r/|\eta|\right)-\varphi_0+\phi_1\right)\qquad \end{eqnarray} where $\phi_1=\cot^{-1}m\alpha$, and the function $f(\beta,m\alpha)$ and the phase factor $\varphi_0$ are defined in \ref{B-function}.
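The alternating sums of Hurwitz $\zeta$ functions appearing in the short-distance expressions above converge very rapidly, since the $n$-th bracket decays like $n^{-s}$. A small numerical sketch for the $s=4$ sum, with $\beta=0.9$ chosen as an arbitrary illustrative value of the defect parameter (not a value taken from the text):

```python
from mpmath import mp, zeta  # mpmath's zeta(s, a) is the Hurwitz zeta function

mp.dps = 25

def alt_sum(s, beta, nmax=200):
    """Truncated alternating Hurwitz-zeta sum of the short-distance expansion."""
    return sum((-1)**n * (zeta(s, 1 + (n + 1) * beta) - zeta(s, 2 + n))
               for n in range(nmax))

S4 = alt_sum(4, 0.9)
first = zeta(4, 1.9) - zeta(4, 2)  # first term of the alternating series
# for beta < 1 each bracket is positive, so the alternating sum is
# positive and bounded by its first term
assert 0 < S4 < first
```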
Note that both the short distance divergence (\ref{tt1-gm}, \ref{radial_1-gm}) and the rapid fall-off with oscillation (\ref{final-gm0}) are qualitatively similar to those of the vacuum condensate discussed in \ref{s3}. This is also similar to the case of the de Sitter cosmic string~\cite{BezerradeMello:2010ci}. However, this is not exactly the case for a scalar field in a de Sitter global monopole background with a finite core~\cite{BezerradeMello:2009ju}. In that case there are off-diagonal components of the energy-momentum tensor as well as a milder divergence at short distances. \\ Finally, we have plotted the variations of \ref{final-gm} in \ref{energy} and \ref{angular}, with respect to the dimensionless mass parameter $m\alpha$, as well as the dimensionless radial distance $r/|\eta|$, for different values of the defect parameters. Note that for heavily massive cases, $m\alpha \gg 1$, the expectation values are highly suppressed. \section{Conclusions}\label{s6} In this work we have computed the vacuum expectation values of $\overline {\Psi}\Psi$ and the energy-momentum tensor of the Dirac fermions in the de Sitter spacetime endowed with a pointlike global monopole defect. We have extracted the pure global monopole contributions for various quantities, \ref{gmconds1}, \ref{final-gm}, by subtracting the pure de Sitter part from the full contribution. These results contain radial dependences as a consequence of the breaking of the translational symmetry of the de Sitter spacetime due to the presence of the global monopole located at $r=0$. The general results \ref{gmconds1}, \ref{final-gm}, could not be simplified further analytically and hence were plotted in~\ref{f1}, \ref{energy}, \ref{angular}. However, the special scenarios in which we are sufficiently close to, or far away from, the monopole have also been analysed analytically in \ref{s3} and \ref{s5}.
Note that all the results we have found diverge at the location of the monopole due to the curvature singularity introduced by it, and fall off sufficiently rapidly for large proper radial distances. We also note that the vanishing of the fermionic condensate, \ref{s3}, for the global monopole indicates the vanishing of the trace anomaly induced by it as well, which has been confirmed in \ref{s5}. This also shows that the decompositions of \ref{conds4} and \ref{pure-gm-1} are consistent with each other. However, we must also note here an essential caveat regarding what we mean by the `global monopole induced' part in all the above results. Precisely, the expressions for the anomaly are universal, given by curvature invariants, e.g.~\cite{Parker:2009uva}. The curvature of \ref{metric1} should also get a contribution from the defect $\beta <1$, even for $\alpha\to \infty$. Indeed, by this method we directly have the anomaly $$\langle T_{\mu}{}^{\mu}\rangle= \frac{4b(1-\beta^2)^2}{3\beta^4(\alpha r/\eta)^4}+\frac{8b'\left(r^4(9-8(1-\beta^2))-(\eta^2-r^2)^2(1-\beta^2) \right)}{3\eta^4 \beta^2 (\alpha r/\eta)^4}+\frac{4b''(1-\beta^2)(\eta^2+r^2)}{\eta^2\beta^2(\alpha r/\eta)^4}$$ where $b=1/(320\pi^2)$ and $b'=-11/(5760\pi^2)$. The coefficient $b''$ is not fixed, and can be changed by adding a term proportional to $R^2$ in the gravitational action. Assuming $\beta$ to be close to unity, we may identify the terms dependent on $(1-\beta^2)$ or its powers in the above expression as the part induced by the global monopole. It is then clear that no choice of $b''$ can make the anomaly vanish, in contradiction with what we have found above using the Dirac modes, \ref{psi7p}, even though \ref{energy}, \ref{angular} show that an individual massless $\langle T_{\mu}{}^{\nu}\rangle$ is indeed dependent on the defect, $\beta$. It seems a similar mismatch is present for a de Sitter cosmic string as well~\cite{BezerradeMello:2010ci}.
Keeping in mind that the anomaly is essentially an ultraviolet phenomenon, this ambiguity could probably be related to the particular regularisation scheme we have adopted here, i.e. subtracting the de Sitter contribution from the full expression, \ref{conds4}, \ref{pure-gm-1}. The standard regularisation techniques in curved spacetime, on the other hand, apart from the dimensional regularisation, either use the so-called adiabatic subtraction, or subtract the flat spacetime contribution from the Hadamard expansion of the Green function, obtained via the Riemann normal coordinates~\cite{Parker:2009uva}. Thus possibly we are losing some essential curvature contribution, e.g. via equations like \ref{abel-planna3}, \ref{g}. It is well known that the results of quantum field theory in curved spacetimes can be regularisation dependent. Perhaps one should compute the fermion Green function using the modes of \ref{psi7p} and use some other regularisation technique to see if it reproduces the above mentioned non-vanishing expression of the anomaly due to the global monopole. We reserve this for a future work. There are further directions in which our present study can be extended. For example, the orthonormal Dirac modes found here may be used to compute cosmological correlation functions. In particular, as we stated above, we may use them to find the Green or the Wightman functions, which can be used in perturbative calculations in the presence of some interactions. It will further be interesting to investigate the vacuum expectation value of the conserved current and in particular the chiral anomaly in this background. We hope to return to these issues in our future works. \bigskip \section*{Acknowledgement} The research of M. S. A. was supported by the ISIRD grant 9-252/2016/IITRPR/708. The research of SB was partially supported by the ISIRD grant 9-289/2017/IITRPR/704.
We would like to thank the anonymous referee for useful critical comments on an earlier version of this manuscript. The research of M. S. A. is also supported by the National Postdoctoral Fellowship of the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India, file No. PDF/2021/003491. \bigskip
\section{Introduction} Organometallic complexes containing magnetic centers are currently under intense investigation for their potential uses as building blocks for nanotechnologies. Low-dimensional magnetic nanostructures have a high potential for applications in spintronics, magnetic recording and sensing devices \cite{boga,bader}. These systems also offer a unique platform to study exotic phases of matter. In particular, Minamitani {\it et al.} \cite{mina} have shown that the Kondo effect observed in isolated iron(II) phthalocyanine (FePc) molecules deposited on top of clean Au(111) [FePc/Au(111)] (in the most usual on-top configuration) is a new realization of the SU(4) Kondo model, in which not only the spin degeneracy but also the orbital degeneracy between $3d_{xz}$ and $3d_{yz}$ orbitals of Fe plays a role ($z$ is the direction normal to the surface). The SU(4) Kondo effect manifests itself as a dip at the Fermi energy in the differential conductance observed in scanning tunneling spectroscopy (STS). The half width at half maximum of the dip is $T_K \sim 0.4$ meV, with $T_K$ the Kondo temperature. In another set of experiments, a self-organized square lattice of FePc/Au(111) as well as small clusters were studied by STS \cite{tsuka}. It was found that, as for the isolated molecule, a single dip in the differential conductance remains for the molecules that lie at the corners of the clusters. However, with increasing coordination, the dip tends to split and for the lattice, a clear splitting of approximately 2 meV becomes apparent. Romero and the present authors were able to explain these experimental results, using a natural generalization of the single-impurity SU(4) Anderson model to the lattice, including hoppings between effective orbitals of nearest-neighbor molecules respecting the symmetry of these orbitals \cite{mole}.
The relevant effective orbitals (i.e., those closest to the Fermi level) are essentially the $3d_{xz}$ and $3d_{yz}$ of Fe, with some admixture of other orbitals of the molecule. The splitting of the Kondo dip is a consequence of the anisotropy in the hopping for a given effective orbital. Preliminary results suggest that the square lattice of FePc/Au(111) is not far from a magnetic instability \cite{mole}. Ferromagnetic order was observed in a two-dimensional layer of organic molecules adsorbed on graphene \cite{gar} and in metal-organic networks on Au surfaces \cite{umba,abdu}. In addition, magnetic and orbital ordering are intertwined \cite{kk,gusm}. In this work, starting from the orbitally degenerate Hubbard-Anderson model that describes the observed STS in different arrays of FePc molecules \cite{mole}, we perform a Schrieffer-Wolff transformation to obtain an effective model which includes spin and orbital interactions. Solving this model in a slave-boson mean-field approximation (SBMFA), we obtain the critical values of the interactions leading to different symmetry-breaking magnetic and orbital instabilities. The dominant one turns out to be a spin ferromagnetic and orbital antiferromagnetic order. We show that the effect of the Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions is small and can be neglected. \section{The Hubbard-Anderson model} Our starting model was derived in Ref. \cite{mole} and discussed in detail in the supplemental material of that paper. The Hamiltonian $H=H_{1}+H_{2}$ can be separated into one-body ($H_{1}$) and two-body ($H_{2}$) parts.
The first one can be written as \begin{eqnarray} H_{1}&=&\sum_{ij}^{N} [ E_{h}n_{\mathbf{r}_{ij}}-\sum_{\sigma \nu }\left( t_{2}h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }h_{\mathbf{r}_{ij}\pm \mathbf{a}_{\nu },\sigma }^{\nu }+t_{1}h_{\mathbf{r}_{ij},\sigma }^{\bar{\nu}\dagger }h_{\mathbf{r}_{ij}\pm \mathbf{a}_{\nu },\sigma }^{\bar{\nu}}\right) +\sum_{\xi \sigma \nu }\epsilon _{\xi }c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu \dagger }c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu } \nonumber \\ &+&V\sum_{\xi \sigma \nu }\left( h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu }+\rm{H.c.}\right) ] . \label{h1} \end{eqnarray} Here the operators $h_{\mathbf{r}_{ij},\sigma }^{\nu }$ annihilate a hole (create an electron) in the state $\vert \nu _{\mathbf{r}_{ij},\sigma }\rangle$, where $\nu =\left( x,y\right) $ denotes one of the two orbitally degenerate molecular states with spin $\sigma $ at the site with position $\mathbf{r}_{ij}$ of the square lattice. $n_{\mathbf{r}_{ij}}=\sum_{\sigma \nu }n_{\mathbf{r}_{ij},\sigma }^{\nu }$, with $n_{\mathbf{r}_{ij},\sigma }^{\nu }=h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }h_{\mathbf{r}_{ij},\sigma }^{\nu }$, is the total number of holes at the molecule lying at site $ij$. The lattice vectors $\mathbf{a}_{\nu }$ connect nearest-neighbor sites, $\bar{x}=y$, and $\bar{y}=x$. The operator $c_{\mathbf{r}_{ij},\xi ,\sigma }$ annihilates a conduction hole with spin $\sigma $ and quantum number $\xi $ at position $\mathbf{r}_{ij}$. The first and second terms of $H_{1}$ describe the molecular states and the hopping between them. The hopping $t_{2}$ between $x$ ($y$) orbitals in the $x$ ($y$) direction is larger than the hopping $t_{1}$ between $x$ ($y$) orbitals in the $y$ ($x$) direction. The third term of $H_{1}$ corresponds to a band of bulk and surface conduction electrons of the substrate and the last term is the hybridization between molecular and conduction states.
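Ignoring the interaction and the hybridization with the substrate, the molecular hopping part of $H_1$ is diagonal in momentum space and yields two hole bands related by the reflection $k_x \leftrightarrow k_y$. The sketch below encodes the anisotropy $t_2 > t_1$ discussed above; the numerical values of $E_h$, $t_1$, $t_2$ are illustrative placeholders, not fitted parameters:

```python
import numpy as np

def bands(kx, ky, Eh=0.0, t1=0.5, t2=1.0):
    """Hole dispersions of the two degenerate molecular orbitals
    (lattice constant set to 1; Eh, t1, t2 are sample values)."""
    ex = Eh - 2 * t2 * np.cos(kx) - 2 * t1 * np.cos(ky)  # x orbital: t2 along x, t1 along y
    ey = Eh - 2 * t1 * np.cos(kx) - 2 * t2 * np.cos(ky)  # y orbital: reflection partner
    return ex, ey

# C4v consistency: swapping the two orbitals together with kx <-> ky
# maps the pair of bands onto itself
ex, ey = bands(0.3, 1.1)
ex_r, ey_r = bands(1.1, 0.3)
assert np.isclose(ex, ey_r) and np.isclose(ey, ex_r)
```

For $t_1 \neq t_2$ the two bands are nondegenerate away from the Brillouin-zone diagonal, which is the source of the dip splitting discussed in the introduction.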
The interaction term can be written as \cite{gusm,kroll} \begin{eqnarray} H_{2}&=&\sum_{ij}^{N} \left[ U\sum_{\nu }n_{\mathbf{r}_{ij},\uparrow }^{\nu }n_{\mathbf{r}_{ij},\downarrow }^{\nu }+\sum_{\sigma \sigma ^{\prime }}[(U-2J_{H})n_{\mathbf{r}_{ij},\sigma }^{x}n_{\mathbf{r}_{ij},\sigma ^{\prime }}^{y}+J_{H}h_{\mathbf{r}_{ij},\sigma }^{x\dagger }h_{\mathbf{r}_{ij},\sigma ^{\prime }}^{y\dagger }h_{\mathbf{r}_{ij},\sigma ^{\prime }}^{x}h_{\mathbf{r}_{ij},\sigma }^{y}] \right. \nonumber \\ &+&\left. J_{H}\sum_{\nu }h_{\mathbf{r}_{ij},\uparrow }^{\nu \dagger }h_{\mathbf{r}_{ij},\downarrow }^{\nu \dagger }h_{\mathbf{r}_{ij},\downarrow }^{\bar{\nu}}h_{\mathbf{r}_{ij},\uparrow }^{\bar{\nu}} \right] . \label{h2} \end{eqnarray} For the system we are considering, the occupation of the molecular states is nearly one hole per site ($\left\langle n_{\mathbf{r}_{ij}}\right\rangle \sim 1$) and the main effect of $H_{2}$ is to inhibit double occupancy. Therefore, as a first approximation, one can take $U\longrightarrow +\infty $ and neglect the Hund-rules exchange $J_{H}$ in comparison with $U$. In this case, the interaction takes the simpler form $H_{2}^{s}=\frac{U}{2}\sum_{ij}^{N}n_{\mathbf{r}_{ij}}\left( n_{\mathbf{r}_{ij}}-1\right)$. \section{Symmetry of $H_{i}$} For the case of one molecule only, the resulting impurity Anderson Hamiltonian has SU(4) symmetry, which in simple terms means that permutations of the four states $\left\vert \nu _{\sigma }\right\rangle $ leave the Hamiltonian invariant. The fifteen generators of the SU(4) symmetry are three trivial diagonal matrices, six permutations of two states, and six other permutations with a change of phases for the permuted states \cite{sba}. The twelve non-trivial generators can also be written as a generalization of the raising and lowering operators for SU(2) \cite{li}.
Specifically, for the impurity Anderson model they are $S_{\nu \sigma }^{\nu ^{\prime }\sigma ^{\prime }}=\left( h_{\sigma }^{\nu \dagger }h_{\sigma ^{\prime }}^{\nu ^{\prime }} + \sum_\xi c_{\xi ,\sigma }^{\nu \dagger }c_{\xi ,\sigma ^{\prime }}^{\nu ^{\prime }}\right)$ for $\nu \sigma \neq \nu ^{\prime }\sigma ^{\prime }$ (note that the conduction-electron degrees of freedom must be taken into account to keep the SU(4) symmetry of the total system). For the lattice, the simplest generalization of these generators leads to $S_{\nu \sigma }^{\nu ^{\prime }\sigma ^{\prime }}=\sum_{ij}^{N}\left( h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }h_{\mathbf{r}_{ij},\sigma ^{\prime }}^{\nu ^{\prime }}+ \sum_\xi c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu \dagger }c_{\mathbf{r}_{ij},\xi ,\sigma ^{\prime }}^{\nu ^{\prime }}\right)$. All these generators commute with $H_{2}^{s}$, but those with $\nu \neq \nu ^{\prime }$ commute with $H_{1}$ only in the particular case $t_{2}=t_{1}$, which seems incompatible with the observed STS in the square lattice of FePc/Au(111) \cite{mole}. In the general case $t_{2}\neq t_{1}$, $H_{1}$ has, however, SU(4) symmetry with the following non-trivial generators: $S_{\nu \sigma }^{\nu \bar{\sigma}}=\sum_{ij}^{N}\left(h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }h_{\mathbf{r}_{ij},\bar{\sigma}}^{\nu }+ \sum_\xi c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu \dagger }c_{\mathbf{r}_{ij},\xi ,\bar{\sigma}}^{\nu }\right)$ and $S_{\nu \sigma }^{\bar{\nu}\sigma ^{\prime }}=\sum_{ij}^{N}\left(h_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }h_{R\mathbf{r}_{ij},\sigma ^{\prime }}^{\bar{\nu}}+ \sum_\xi c_{\mathbf{r}_{ij},\xi ,\sigma }^{\nu \dagger }c_{R\mathbf{r}_{ij},\xi ,\sigma ^{\prime }}^{\bar{\nu}}\right)$, where $R$ is the reflection that permutes $x$ and $y$ (it is an element of the point group $C_{4v}$ of the system). It can be verified easily that these generators commute with $H_{1}$. However, inclusion of $H_{2}$ reduces the symmetry to spin SU(2) times orbital Z$_2\times$U(1).
Only the term of $H_{2}^{s}$ with $\mathbf{r}_{ij}=0$ commutes with the $S_{\nu \sigma }^{\bar{\nu}\sigma ^{\prime }}$ generators which contain $R$. Nevertheless, in a Fermi liquid the interaction becomes irrelevant at the Fermi energy, and we expect that SU(4) is an emergent symmetry at low energies \cite{bat} if there is no symmetry breaking (a magnetic or orbital instability). In fact, in the SBMFA, where the action is reduced to an effective non-interacting one near the Fermi energy \cite{mole}, or in a dynamical mean-field approximation in which the interaction is treated exactly at one site in an effective medium, the effective model has SU(4) symmetry if the symmetric form $H_{2}^{s}$ is taken. \section{Effective generalized Heisenberg interactions} \vspace{0.5cm}When two nearest-neighbor sites are singly occupied and if $U\gg t_{i}$ (as seems to be the case for FePc/Au(111) \cite{mole}), the hopping terms connecting these sites can be eliminated by means of a canonical transformation, in a similar fashion as the $t$-$J$ model is derived from the Hubbard model. This leads to an effective exchange model for spins and orbitals, as in the Kugel-Khomskii model \cite{kk,gusm}. For simplicity we write first the result using the SU(4) symmetric form of the interaction $H_{2}^{s}$.
After a lengthy but straightforward algebra we obtain \begin{eqnarray} H_{H}&=&\sum_{i}\sum_{\nu }\left[ \frac{4t_{2}^{2}}{U}\mathbf{S}_{i}^{\nu }\cdot \mathbf{S}_{i+\nu }^{\nu }+\frac{4t_{1}^{2}}{U}\mathbf{S}_{i}^{\bar{\nu}}\cdot \mathbf{S}_{i+\nu }^{\bar{\nu}}\right] +\sum_{i}\sum_{\nu }\left[ \frac{t_{2}^{2}+t_{1}^{2}}{4U}\left( 4T_{i}^{z}T_{i+\nu }^{z}-3\right) \right] \nonumber \\ &+&\sum_{i}\sum_{\nu }\frac{2t_{1}t_{2}}{U}\left( T_{i}^{-}T_{i+\nu }^{+}+T_{i}^{+}T_{i+\nu }^{-}\right) \left( \frac{1}{2}+2\mathbf{S}_{i}\cdot \mathbf{S}_{i+\nu }\right) , \label{hh} \end{eqnarray} where $\mathbf{S}_{i}^{\nu }=\sum_{\alpha \beta }h_{i\alpha }^{\nu \dagger }\hat{\mathbf{\sigma }}_{\alpha \beta }h_{i\beta }^{\nu }/2$ is the spin of the orbital $\nu $ at site $i$, and $\mathbf{S}_{i}=\sum_{\nu }\mathbf{S}_{i}^{\nu }$. The operator $\mathbf{T}_{i}$ denotes the orbital SU(2) pseudospin (with the identification of $x$ for pseudospin up). The subscript $i+\nu $ denotes the nearest neighbor of $i$ in the $+\nu $ direction. Note that in the case $t_{1}=t_{2}=t$, this Hamiltonian reduces to \begin{equation} H_{H}^{SU(4)}=\frac{2t^{2}}{U}\sum_{i}\sum_{\nu }\left( 2\mathbf{S}_{i}\cdot \mathbf{S}_{i+\nu }+\frac{1}{2}\right) \left( 2\mathbf{T}_{i}\cdot \mathbf{T}_{i+\nu }+\frac{1}{2}\right) +\mathrm{const}, \label{hhsu4} \end{equation} where we have used that $\mathbf{S}_{i}^{\nu }\cdot \mathbf{S}_{i+\nu }^{\nu }+\mathbf{S}_{i}^{\bar{\nu}}\cdot \mathbf{S}_{i+\nu }^{\bar{\nu}}=\mathbf{S}_{i}\cdot \mathbf{S}_{i+\nu }\left( 2T_{i}^{z}T_{i+\nu }^{z}+1/2\right) $. This Hamiltonian is a sum of products of a spin SU(2) invariant form (first factor) times a pseudospin SU(2) invariant one (last factor). Thus, it is explicitly SU(2)$\times $SU(2) invariant. However, it has been shown \cite{li} that the symmetry of $H_{H}^{SU(4)}$ is actually SU(4), which is larger than SU(2)$\times $SU(2).
When $t_{1}$ and $t_{2}$ are very different, as in the realistic case for FePc/Au(111) \cite{mole}, the first two terms of $H_{H}$, Eq.~(\ref{hh}), are the most important ones. The first one is optimized for orbital ferromagnetic and spin antiferromagnetic order, while the second one favors orbital antiferromagnetic order. In a classical picture, the energy of both orders would be the same, $-(t_{1}^{2}+t_{2}^{2})/U$ per site. However, when Hund rules are included [the form $H_{2}$, Eq. (\ref{h2}), is used for the interaction] the spin ferromagnetic order is favored. Projecting over intermediate doubly occupied triplet states, the dominant term of $H_{H}$ takes the form \begin{equation} H_{H}^{d}=\frac{J}{2}\sum_{\mathbf{r}_{ij},\mathbf{a}}\left( -1 +4T_{\mathbf{r}_{ij}}^{z}T_{\mathbf{r}_{ij}+\mathbf{a}}^{z}\right) \left( \frac{3}{4}+ \mathbf{S}_{\mathbf{r}_{ij}}\cdot \mathbf{S}_{\mathbf{r}_{ij}+\mathbf{a}}\right), \hspace{1cm}J=\frac{t_{1}^{2}+t_{2}^{2}}{2(U-3J_{H})}. \label{hd} \end{equation} \section{Instabilities due to $H_{H}$} The simplest SBMFA of $H$ \cite{mole} is not enough to treat the magnetic and orbital instabilities. As is usually done in mean-field treatments of the Kondo lattice, in which the RKKY interaction should be included explicitly \cite{coq}, in this section we consider $H+H_{H}^{d}$ within the SBMFA to study the instabilities induced by $H_{H}^{d}$. In this approximation, the limit $U\rightarrow +\infty $ is taken with a constraint of forbidden double occupancy. However, the effects of a finite $U$ are considered explicitly in $H_{H}$. As before \cite{mole}, the hole operators are written in terms of auxiliary particles as $h_{\mathbf{r}_{ij},\sigma }^{\nu }=b_{\mathbf{r}_{ij}}^{\dagger }f_{\mathbf{r}_{ij},\sigma }^{\nu }$.
The spin and pseudospin operators take the form \begin{equation} \mathbf{S}_{\mathbf{r}_{ij}}\equiv -\frac{1}{2}\sum_{\sigma \sigma ^{\prime },\nu }f_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }\hat{\mathbf{\sigma }}_{\sigma \sigma ^{\prime }}f_{\mathbf{r}_{ij},\sigma ^{\prime }}^{\nu } ,\hspace{0.5cm} \mathbf{T}_{\mathbf{r}_{ij}}\equiv -\frac{1}{2}\sum_{\sigma ,\nu \nu ^{\prime }}f_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }\hat{\tau}_{\nu \nu ^{\prime }}f_{\mathbf{r}_{ij},\sigma }^{\nu ^{\prime }}. \label{st} \end{equation} We also define the mixed operator $U_{\mathbf{r}_{ij}}^{z}\equiv S_{\mathbf{r}_{ij}}^{z}T_{\mathbf{r}_{ij}}^{z}=\frac{1}{4}\sum_{\sigma ,\nu }\eta \left( \sigma \right) \eta \left( \nu \right) f_{\mathbf{r}_{ij},\sigma }^{\nu \dagger }f_{\mathbf{r}_{ij},\sigma }^{\nu }$, where $\eta\left(\uparrow\right)=\eta\left(x\right)=+1$ and $\eta\left(\downarrow\right)=\eta\left(y\right)=-1$, and where the constraint of no double occupation has been used. Without loss of generality, we can assume that the ferromagnetic spin order occurs along the $\hat{z}$ axis.
Then $H_{H}^{d}$ can be written as \begin{equation} H_{H}^{d}=\frac{J}{2}\sum_{\mathbf{r}_{ij},\mathbf{a}}\left[ -S_{\mathbf{r}_{ij}}^{z}S_{\mathbf{r}_{ij}+\mathbf{a}}^{z}+3T_{\mathbf{r}_{ij}}^{z}T_{\mathbf{r}_{ij}+\mathbf{a}}^{z}+4U_{\mathbf{r}_{ij}}^{z}U_{\mathbf{r}_{ij}+\mathbf{a}}^{z}\right] +\mathrm{const}. \end{equation} We now introduce the following Hubbard-Stratonovich (HS) decouplings \begin{eqnarray} -\frac{J}{2}\sum_{\mathbf{r}_{ij},\mathbf{a}}S_{\mathbf{r}_{ij}}^{z}S_{\mathbf{r}_{ij}+\mathbf{a}}^{z}& \rightarrow &\sum_{\mathbf{r}_{ij},\mathbf{a}}\left[ \frac{1}{2J}m_{\mathbf{r}_{ij}}m_{\mathbf{r}_{ij}+\mathbf{a}}+\frac{1}{2}m_{\mathbf{r}_{ij}+\mathbf{a}}S_{\mathbf{r}_{ij}}^{z}+\frac{1}{2}m_{\mathbf{r}_{ij}}S_{\mathbf{r}_{ij}+\mathbf{a}}^{z}\right] , \label{eq:HS_SS} \\ \frac{3J}{2}\sum_{\mathbf{r}_{ij},\mathbf{a}}T_{\mathbf{r}_{ij}}^{z}T_{\mathbf{r}_{ij}+\mathbf{a}}^{z}& \rightarrow &\sum_{\mathbf{r}_{ij},\mathbf{a}}\left[ -\frac{1}{6J}n_{\mathbf{r}_{ij}}n_{\mathbf{r}_{ij}+\mathbf{a}}+\frac{1}{2}n_{\mathbf{r}_{ij}+\mathbf{a}}T_{\mathbf{r}_{ij}}^{z}+\frac{1}{2}n_{\mathbf{r}_{ij}}T_{\mathbf{r}_{ij}+\mathbf{a}}^{z}\right] , \label{eq:HS_TT} \\ 2J\sum_{\mathbf{r}_{ij},\mathbf{a}}U_{\mathbf{r}_{ij}}^{z}U_{\mathbf{r}_{ij}+\mathbf{a}}^{z}& \rightarrow &\sum_{\mathbf{r}_{ij},\mathbf{a}}\left[ -\frac{1}{J}p_{\mathbf{r}_{ij}}p_{\mathbf{r}_{ij}+\mathbf{a}}+4p_{\mathbf{r}_{ij}}U_{\mathbf{r}_{ij}+\mathbf{a}}^{z}+4p_{\mathbf{r}_{ij}+\mathbf{a}}U_{\mathbf{r}_{ij}}^{z}\right] , \label{eq:HS_UU} \end{eqnarray} where $m_{\mathbf{r}_{ij}}$ is related to the usual magnetization order parameter, $n_{\mathbf{r}_{ij}}$ is the orbital order parameter, and $p_{\mathbf{r}_{ij}}$ is a magneto-orbital order parameter. In order for Eqs. (\ref{eq:HS_TT}) and (\ref{eq:HS_UU}) to be well-defined HS decouplings, the fields $n_{\mathbf{r}_{ij}}$ and $p_{\mathbf{r}_{ij}}$ should have antiferromagnetic order.
Introducing the Fourier decompositions \begin{equation} m_{\mathbf{r}_{ij}}=\sum_{\mathbf{q}}\frac{e^{i\left( \mathbf{Q}_{m}+\mathbf{q}\right) \mathbf{r}_{ij}}}{\sqrt{N}}m_{\mathbf{q}},\ \ n_{\mathbf{r}_{ij}}=\sum_{\mathbf{q}}\frac{e^{i\left( \mathbf{Q}_{n}+\mathbf{q}\right) \mathbf{r}_{ij}}}{\sqrt{N}}n_{\mathbf{q}},\ \ p_{\mathbf{r}_{ij}}=\sum_{\mathbf{q}}\frac{e^{i\left(\mathbf{Q}_{p}+\mathbf{q}\right) \mathbf{r}_{ij}}}{\sqrt{N}}p_{\mathbf{q}}, \label{fourier} \end{equation} where $\mathbf{Q}_{m}=\left( 0,0\right) $ and $\mathbf{Q}_{n}=\mathbf{Q}_{p}=\mathbf{Q}=\left( \frac{\pi }{a_{0}},\frac{\pi }{a_{0}}\right) $, $H_{H}^{d}$ becomes \begin{eqnarray} H_{H}^{d}&=&\sum_{\mathbf{q},\mathbf{a}}e^{-i\mathbf{q}\mathbf{a}}\left[ \frac{\beta }{2J}m_{\mathbf{q}}m_{-\mathbf{q}}-m_{\mathbf{q}}\sum_{\nu }\frac{1}{\sqrt{N}}\sum_{\mathbf{k},\omega _{n}}\frac{1}{2}\left( \bar{f}_{\mathbf{k},\uparrow }^{\nu }f_{\mathbf{k}-\mathbf{q},\uparrow }^{\nu }-\bar{f}_{\mathbf{k},\downarrow }^{\nu }f_{\mathbf{k}-\mathbf{q},\downarrow }^{\nu }\right) _{\omega _{n}}\right] \nonumber \\ &+&\sum_{\mathbf{q},\mathbf{a}}e^{-i\mathbf{q}\mathbf{a}}\left[ \frac{\beta }{6J}n_{\mathbf{q}}n_{-\mathbf{q}}-n_{\mathbf{q}}\sum_{\sigma }\frac{1}{\sqrt{N}}\sum_{\mathbf{k},\omega _{n}}\frac{1}{2}\left( \bar{f}_{\mathbf{k},\sigma }^{x}f_{\mathbf{k}-\left( \mathbf{Q}+\mathbf{q}\right) ,\sigma }^{x}-\bar{f}_{\mathbf{k},\sigma }^{y}f_{\mathbf{k}-\left( \mathbf{Q}+\mathbf{q}\right) ,\sigma }^{y}\right) _{\omega _{n}}\right] \nonumber \\ &+&\sum_{\mathbf{q},\mathbf{a}}e^{-i\mathbf{q}\mathbf{a}}\left[ \frac{\beta }{J}p_{\mathbf{q}}p_{-\mathbf{q}}-p_{\mathbf{q}}\sum_{\sigma \nu }\frac{1}{\sqrt{N}}\sum_{\mathbf{k},\omega _{n}}2\eta \left( \sigma \right) \eta \left( \nu \right) \left( \bar{f}_{\mathbf{k},\sigma }^{\nu }f_{\mathbf{k}-\left( \mathbf{Q}+\mathbf{q}\right) ,\sigma }^{\nu }\right) _{\omega _{n}}\right] . \label{s1} \end{eqnarray} The minimum of the free energy with respect to the order parameters defines the
saddle-point equations \begin{eqnarray} m_{\mathbf{q}}&=&J\sum_{\nu }\frac{1}{\beta N}\sum_{\mathbf{k},\omega _{n}}\frac{1}{2}\left\langle \bar{f}_{\mathbf{k},\uparrow }^{\nu }\left( i\omega _{n}\right) f_{\mathbf{k}+\mathbf{q},\uparrow }^{\nu }\left( i\omega _{n}\right) -\bar{f}_{\mathbf{k},\downarrow }^{\nu }\left( i\omega _{n}\right) f_{\mathbf{k}+\mathbf{q},\downarrow }^{\nu }\left( i\omega _{n}\right) \right\rangle , \label{eq:mq_sp} \\ n_{\mathbf{q}}&=&3J\sum_{\sigma }\frac{1}{\beta N}\sum_{\mathbf{k},\omega _{n}}\frac{1}{2}\left\langle \bar{f}_{\mathbf{k},\sigma }^{x}\left( i\omega _{n}\right) f_{\mathbf{k}-\left( \mathbf{Q}-\mathbf{q}\right) ,\sigma }^{x}\left( i\omega _{n}\right) -\bar{f}_{\mathbf{k},\sigma }^{y}\left( i\omega _{n}\right) f_{\mathbf{k}-\left( \mathbf{Q}-\mathbf{q}\right) ,\sigma }^{y}\left( i\omega _{n}\right) \right\rangle , \label{eq:nq_sp} \\ p_{\mathbf{q}}&=&\frac{J}{4}\sum_{\sigma \nu }\frac{1}{\beta N}\sum_{\mathbf{k},\omega _{n}}2\eta \left( \sigma \right) \eta \left( \nu \right) \left\langle \bar{f}_{\mathbf{k},\sigma }^{\nu }\left( i\omega _{n}\right) f_{\mathbf{k}-\left( \mathbf{Q}-\mathbf{q}\right) ,\sigma }^{\nu }\left( i\omega _{n}\right) \right\rangle . \label{eq:pq_sp} \end{eqnarray} In order to determine which of these order parameters is the first to develop, we compute the change of sign in the second derivative of the free energy in the symmetric phase.
This leads to the following conditions for the critical value of $J$ for each instability (similar to Stoner criteria) \begin{equation} \frac{1}{J_{c,m}}=\chi \left( \mathbf{0},0\right) ,\ \ \frac{1}{J_{c,n}}=3\chi \left( \mathbf{Q},0\right) ,\ \ \frac{1}{J_{c,p}}=4\chi \left( \mathbf{Q},0\right) , \label{jc} \end{equation} where we have defined the static susceptibility of the electronic system, \begin{equation} \chi \left( \mathbf{q},0\right) \equiv \chi \left( \mathbf{q},\omega =0\right) =-\frac{4}{\beta N}\sum_{\mathbf{k},\omega _{n}}\frac{G_{\nu \sigma }^{ff}\left( \mathbf{k},i\omega _{n}\right) -G_{\nu \sigma }^{ff}\left( \mathbf{k}+\mathbf{q},i\omega _{n}\right) }{\epsilon _{\mathbf{k}}-\epsilon _{\mathbf{k}+\mathbf{q}}}, \label{eq:susceptibility} \end{equation} and the pseudofermion Green function is given in Ref. \cite{mole}. The problem now reduces to computing Eq. (\ref{eq:susceptibility}) for a generic $\mathbf{q}$. Performing the Matsubara sum at $T=0$ we obtain \begin{equation} \chi \left( \mathbf{q},0\right) =\frac{4}{N}\sum_{\mathbf{k}}\frac{1}{\pi }\frac{\arctan \left( \frac{E_{h}+\lambda +\epsilon _{\mathbf{k}}}{z^{2}\Gamma }\right) -\arctan \left( \frac{E_{h}+\lambda +\epsilon _{\mathbf{k}+\mathbf{q}}}{z^{2}\Gamma }\right) }{\epsilon _{\mathbf{k}}-\epsilon _{\mathbf{k}+\mathbf{q}}}, \end{equation} where $z=\left\langle b_{\mathbf{r}_{ij}}^{\dagger }\right\rangle $, $\lambda $ is a Lagrange multiplier used to impose the constraint $z^{2}+\sum_{\sigma ,\nu =x,y}f_{\mathbf{r}_{ij},\sigma }^{\nu \dagger } f_{\mathbf{r}_{ij},\sigma }^{\nu }=1$, $W$ is half the band width, and $\Gamma $ is the resonant-level width of the isolated impurity. This function is represented in Fig. \ref{chiq}. \begin{figure}[t] \begin{center} \includegraphics[width=14pc]{chiq} \caption{\label{chiq}Susceptibility $\protect\chi \left( \mathbf{q},0\right) $ vs $\mathbf{q}$ in the (1,1) direction.
} \end{center} \end{figure} For $\mathbf{q}\rightarrow 0$ we recover the usual expression $\chi \left( \mathbf{0},0\right) =\rho _{f}^{t}\left( 0\right) $, where $\rho _{f}^{t}\left( 0\right) $ is the total spectral density (summing over spin and orbital) at the Fermi level. For the parameters $\lambda ,z$ that minimize the SBMFA action, we obtain $\chi \left( \mathbf{0},0\right) =5.05/t_{1}$ and $\chi \left( \mathbf{Q},0\right) =3.45/t_{1}$, which, when inserted in Eq. (\ref{jc}), lead to the following critical values ($t_{1}=0.007$ eV) \begin{equation} J_{c,m}\approx 16.1~{\rm K}, \ \ J_{c,n}\approx 7.9~{\rm K}, \ \ J_{c,p}\approx 5.9~{\rm K}. \label{eq:Jcm} \end{equation} We conclude that the first instability occurs in the magneto-orbital channel. For FePc/Au(111) one can estimate $t_{1}=0.007$ eV, $t_{2}=3t_{1}$, and $U=1.6$ eV \cite{mole}. The exchange constant $J_{H}$ is difficult to estimate for effective molecular orbitals. For pure Fe orbitals it is of the order of 0.7 eV. From these estimates and the second relation in Eq. (\ref{hd}), one expects $J \geq 1.8$ K. Thus, one might infer that the system is not too far from an instability against spin ferro- and orbital antiferromagnetic order. \section{RKKY interactions} Another possible source of magnetic instabilities is the RKKY interaction, which consists of the indirect interaction $I$ between spins mediated by conduction electrons, generated at second order in $J_{K}$ by the effective Kondo coupling $J_{K}\mathbf{S}_{i}\cdot \mathbf{s}_{i}$ between spins and conduction electrons, where $\mathbf{s}_{i}$ is the spin of the conduction electrons at site $i$. An advantage of Au and its (111) surface is that both the bulk states and the surface Shockley states near the Fermi energy can be described as free electrons, and therefore the calculations in Ref. \cite{kittel} for three dimensions (3D) and Ref. \cite{beal} for the 2D case are valid.
Following these works, one can write for dimensionality ND (2D or 3D), for two spins $\mathbf{S}_{1}$ and $\mathbf{S}_{2}$ at a distance $R$ (we will consider nearest neighbors only), \begin{equation} H_{RKKY}\left( R\right) =I_{\rm{ND}}\left( R\right) \mathbf{S}_{1}\cdot \mathbf{S}_{2}, \ \ \ \ I_{\rm{ND}}\left( R\right) =-\frac{1}{4}\tilde{J}_{\rm{ND}}^{2}\chi _{\rm{ND}}\left( R\right) , \label{hrkky} \end{equation} where $\tilde{J}_{\rm{3D}}=J_{K}V_{\rm{at}}$, $\tilde{J}_{\rm{2D}}=J_{K}S_{\rm{at}}$, $V_{\rm{at}}$ ($S_{\rm{at}}$) is the volume (surface) per Au atom in the bulk (surface), and $\chi _{\rm{ND}}\left( R\right) $ is the spin susceptibility, given by Eqs. (14) and (15) of Ref. \cite{beal}: \begin{equation} \chi _{\rm{3D}}\left( R\right) =-\rho _{\rm{3D}}\left( \epsilon _{F}\right) \frac{4k_{F}^{3}}{\pi }F_{\rm{3D}}\left( 2k_{F}R\right), \ \ \chi _{\rm{2D}}\left( R\right) =-\rho _{\rm{2D}}\left( \epsilon _{F}\right) k_{F}^{2}F_{\rm{2D}}\left( k_{F}R\right) , \label{sus} \end{equation} where $\rho _{\rm{3D}}\left( \epsilon _{F}\right) =mk_{F}/2\pi ^{2}\hbar ^{2}$ [$\rho _{\rm{2D}}\left( \epsilon _{F}\right) =m^{\ast }/2\pi \hbar ^{2}$] is the density of states per spin and per unit volume (surface), and \begin{equation} F_{\rm 3D}\left( x\right) \equiv \frac{x\cos x-\sin x}{x^{4}},\ \ F_{\rm 2D}\left( x\right) \equiv J_{0}\left( x\right) Y_{0}\left( x\right) +J_{1}\left( x\right) Y_{1}\left( x\right) \xrightarrow[x \rightarrow\infty]{} -\frac{\sin \left( 2x\right) }{\pi x^{2}}, \label{fn} \end{equation} with $J_{\nu }\left( x\right) $ [$Y_{\nu }\left( x\right) $] the Bessel function of the first (second) kind. For the more realistic 3D case, using the value $k_{F}=1.21$ \AA$^{-1}$ for Au \cite{kevan}, one obtains $\rho _{\rm{3D}}\left( \epsilon _{F}\right) =0.00805$/(eV \AA$^{3}$). The density per atom and spin projection is $\rho =\rho _{\rm{3D}}\left( \epsilon _{F}\right) V_{\rm{at}}=0.137$/eV, where we have used $V_{\rm{at}}=17.0$ \AA$^{3}$ (the lattice parameter of f.c.c.
Au is $a=4.08$ \AA\ and $V_{\rm{at}}=a^{3}/4$). Keeping the product $\tilde{J}_{\rm{3D}}\rho _{\rm{3D}}\left( \epsilon _{F}\right) =J_{K}\rho $ that leads to the observed $T_{K}\simeq W\exp [-1/(2J_{K}\rho )]=4.5$ K, with $W=1/(2\rho )$ for the SU(4) impurity Kondo model, one obtains $J_{K}\simeq 0.4$ eV. Using the above equations with $|F_{\rm{3D}}\left( x\right) |\leq 1/x^{3}$ for large $x$ and $R=14.7$ \AA\ for the intermolecular distance \cite{tsuka}, we obtain \begin{equation} |I_{\rm{3D}}|\leq \left( J_{K}V_{\rm{at}}\right) ^{2}\rho _{\rm{3D}}\left( \epsilon _{F}\right) \frac{1}{\pi (2R)^{3}}=0.05~{\rm K}. \label{i3} \end{equation} For 2D, the effective mass of the surface Shockley states is $m^{\ast }=0.28m$ \cite{kevan}. This leads to $\rho _{\rm{2D}}\left( \epsilon _{F}\right) =0.00589/\left( \rm{eV}\,\mathring{A}^{2}\right) $. Using $S_{\rm{at}}=\sqrt{3}a^{2}/4=7.21$ \AA$^{2}$ one obtains $\rho =\rho _{\rm{2D}}\left( \epsilon _{F}\right) S_{\rm{at}}=0.0425/\rm{eV}$. Knorr {\it et al.} have shown that the bulk states dominate the hybridization with the impurity \cite{knorr}. Assuming (as an overestimation) that half of the contribution to $J_{K}\rho $ is due to surface states leads to $J_{K}\simeq 0.65$ eV. From Eq. (\ref{fn}) for large $x$, $|F_{\rm{2D}}\left( x\right) |\leq 1/\left( \pi x^{2}\right) $, and using Eqs. (\ref{hrkky}) and (\ref{sus}) for $R=14.7$ \AA\ we obtain \begin{equation} |I_{\rm{2D}}|\leq \left( J_{K}S_{\rm{at}}\right) ^{2}\rho _{\rm{2D}}\left( \epsilon _{F}\right) \frac{1}{4\pi R^{2}}=0.54~{\rm K}. \label{i2} \end{equation} A calculation that follows the same steps as in the previous section shows that to have a ferromagnetic instability in the system one needs $I_{\rm{ND}}<0$, with $|I_{\rm{ND}}|>I_{c}=J_{c,m}\approx 16.1$ K. Therefore, magnetic instabilities driven by the RKKY interaction are unlikely for FePc/Au(111).
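The two bounds above follow from elementary arithmetic with the quoted parameters. A minimal numerical sketch (using the large-$x$ envelopes $|F_{\rm 3D}(x)|\leq 1/x^{3}$ and $|F_{\rm 2D}(x)|\leq 1/(\pi x^{2})$ and the values of $k_F$, $V_{\rm at}$, $S_{\rm at}$, $J_K$, and $R$ given in the text) is:

```python
import math

EV_TO_K = 11604.5   # 1 eV expressed in kelvin

# Quoted input values (from the text above)
R = 14.7            # intermolecular distance, Angstrom
rho_3d = 0.00805    # bulk DOS per spin, 1/(eV A^3)
V_at = 17.0         # volume per Au atom, A^3
Jk_3d = 0.4         # Kondo coupling inferred from T_K (3D estimate), eV
rho_2d = 0.00589    # surface-state DOS per spin, 1/(eV A^2)
S_at = math.sqrt(3) * 4.08**2 / 4   # surface per Au atom, ~7.21 A^2
Jk_2d = 0.65        # overestimate: half of J_K*rho attributed to surface states

# Envelope bounds of Eqs. (i3) and (i2), converted from eV to K
I_3d = (Jk_3d * V_at)**2 * rho_3d / (math.pi * (2 * R)**3) * EV_TO_K
I_2d = (Jk_2d * S_at)**2 * rho_2d / (4 * math.pi * R**2) * EV_TO_K

print(f"|I_3D| <= {I_3d:.2f} K, |I_2D| <= {I_2d:.2f} K")  # ~0.05 K and ~0.55 K
```

Both bounds fall two to three orders of magnitude below the critical value $J_{c,m}\approx 16$ K, which is the quantitative content of the conclusion that RKKY-driven instabilities are unlikely here.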
\section{Summary and discussion} We have studied the magnetic and orbital instabilities of a model used before to explain the scanning tunneling spectroscopy (STS) of a system of FePc molecules on Au(111). The model generalizes the SU(4) Anderson model to the lattice and is expected to have emergent SU(4) symmetry at low energies. We find that, due to effective generalized exchange interactions originating from the hopping terms, the system is close to a combined instability of spin ferromagnetic and orbital antiferromagnetic character. Due to this combined character, it is possible that the application of a magnetic field induces not only a finite magnetization but also a checkerboard orbital ordering, which might be observed by STS if the tip is not radially symmetric. \section*{Acknowledgments} AML acknowledges support from JQI-NSF-PFC. AAA is partially supported by CONICET, Argentina. This work was sponsored by PICT 2010-1060 and 2013-1045 of the ANPCyT-Argentina and PIP 112-201101-00832 of CONICET. \section*{References}
\section{Introduction} Dust in the universe can absorb and re-emit up to 30$\%$ of starlight in the infrared \citep{Hauser2001}. Hence, the formation of dust has become one of the most important topics in galaxy evolution. It is well known that there are two types of stellar sources that produce dust. Asymptotic giant branch (AGB) stars, which are evolved stars with initial masses of 0.8--8.0 M$_\odot$, create dust in their cooling dense ejecta. This process is associated with an intense phase of mass loss due to stellar winds, up to 10$^{-4}$~M$_\odot$~yr$^{-1}$ \citep{Bowen1991}. They can dominate dust production as long as a burst of star formation took place at least 400\,Myr prior to observations \citep{Dwek2007,Valiante2009,Dwek2011}. One AGB star is able to produce 10$^{-5}$~--~10$^{-2}$ M$_\odot$ of dust \citep{Morgan2003,Ferrarotti2006,Ventura2012,Nanni2013,Nanni2014,Schneider2014}. Supernovae (SNe) are another stellar source of dust. Dust formation takes place in the expanding ejecta a few hundred or thousand days after the explosion, and the stellar progenitors have initial masses of 8--40 M$_\odot$. Observations of SN1987A in the Large Magellanic Cloud have revealed that during such an event, up to 0.7 M$_\odot$ of dust could be created \citep{Matsuura2011}. Similarly, a large amount of dust, $\sim$0.5~M$_\odot$, has been reported for several supernovae \citep{Gall2014, Owen2015, Bevan2017, DeLooze2017, Temim2017, Rho2018, Chawner2019}. This means that in the early galaxies a large amount of dust can be formed by SNe \citep[e.g.][]{Gall2018}. However, it is possible that SNe destroy most of the dust they form by reverse shock waves \citep{Temim2015,Bianchi2007,Cherchneff2010,Gall2011,Lakicevic2015}, but it is debated how much of the new and pre-existing dust is destroyed by a supernova, as SN dust grains may be large and distributed in clumps \citep{Gall2014,Lau2015,Wesson2015,Bevan2016,Micelotta2016,Gall2018,Matsuura2019}.
Dust grains formed by AGB stars and SNe can act as seeds that grow in the ISM, and this process can lead to a significant increase in the total dust mass \citep{Draine1979,Dwek1980,Dwek2007, Draine2009}. However, it is not clear if this process is efficient and quick enough, especially at high redshift. \citet{Ferrara2016} show that it is too slow in the diffuse ISM, and probably prohibited in molecular clouds because of icy mantles forming on the grain surfaces. In this work we investigate a sample of galaxies at $6<z<8.4$ (900 -- 600 Myr after the Big Bang) with the latest observational constraints on dust masses. Our aim is to test whether AGB stars or SNe are able to explain the observed amounts of dust in these galaxies, or whether dust accumulation must have also happened by a different (non-stellar) mechanism, for example grain growth in the ISM. We use a cosmological model with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$~=~0.7, and $\Omega_m$~=~0.3. \section{Sample} We have selected all galaxies at $z>6$ for which observations of dust continuum have been attempted, except those for which a similar method has already been applied: quasars J1048+4637, J1148+5251, and J2054-0005 \citep{Michalowski2010}, and galaxies analysed in \citet{Michalowski2015}. We describe below the measurements used to estimate the dust and stellar masses needed for our analysis. \begin{figure*}[h] \centering \includegraphics[width=18cm]{DUST.png} \caption{Dust yield per SN (top) and AGB star (bottom) required to explain the dust masses in galaxies in our sample. Three forms of the initial mass function are assumed: top-heavy (red), Kroupa (green), and Salpeter (blue). The stellar mass was determined in various ways: $M_{dyn}$-$M_{gas}$ (cross), $M_{dyn}$ (open circle), and SED modelling (filled circle). In addition, galaxies with upper limits of dust masses are indicated by down-pointing triangles.
Multiple points with the same shape and colour for a galaxy are for different dust or stellar mass estimates. Galaxies considered elsewhere \citep{Michalowski2010, Michalowski2015} are shown as grey symbols. For SNe two regions have been defined: below the maximum theoretical dust yield without dust destruction of 1.3$M_\odot$ (yellow) and below the limit of 0.1$M_\odot$ including $\sim90$\% dust destruction (vertical orange stripes). For AGB stars the theoretically allowed dust yields are indicated (light blue).} \label{Fig1} \end{figure*} HATLAS\,J090045.4+004125 \citep[hereafter HATLAS;][]{HATLAS} is one of a few submillimetre galaxies above $z = 6$ (together with that reported by \citealt{Riechers2013}). HATLAS was selected using the {\it Herschel} Space Observatory within the {\it Herschel} Astrophysical Terahertz Large Area Survey \citep{Eales2010,Bourne2016,Valiante2016}. Emission lines for $^{12}$CO(6-5) and $^{12}$CO(5-4) and the 1\,mm continuum were detected by the Large Millimeter Telescope \citep{HATLAS}. The mass of molecular gas was determined from the CO lines, whereas the dust mass was calculated in two ways: based on fitting photometric measurements to a modified black-body function and via modelling using the MAGPHYS SED code (Table \ref{Tab1}; \citealt{HATLAS}). The dynamical mass was calculated using the isotropic virial estimator (equation 5 in \citealt{HATLAS}). Discovered with the Subaru telescope at redshift $z\sim5.95$, HIMIKO was at that moment the most luminous Ly$\alpha$ emitter \citep{Ouchi2009}. Recent observations of this object using ALMA in band 6 revealed the presence of three clumps. From the [CII] detections the size of HIMIKO and the velocity linewidth were measured \citep{HIMIKOmod}. The upper limit on the continuum emission of $S_{158\mu{\rm m}}<27~\mu$Jy was also reported. This is deeper than the previous ALMA data for this galaxy reported in \cite{Ouchi2013}.
Cosmos Redshift 7 (CR7) located at z = 6.6 is the most luminous Ly$\alpha$ emitter, discovered using the Subaru Telescope \citep{Matthee,Sobral2015}. From the ALMA [CII] line detection and dust continuum upper limit, \cite{CR7} estimated the dynamical masses and dust masses found in two regions of this galaxy, Clump A and Clump C-2 (Table \ref{Tab1}). SPT-S J031132-5823.4 (SPT0311–58) was discovered using the South Pole Telescope \citep{Mocanu2013}, and confirmed at a redshift of 6.9 \citep{Strandet2017}. With high-resolution ALMA observations, \cite{SPT0311} detected continuum, [CII] 158 $\mu$m and [OIII] 88 $\mu$m lines of two components of SPT0311–58, named East and West, and determined their gas and dust masses. Only for SPT0311–58 East was the stellar mass determined because no stellar emission was detected for the West component. SXDF-N1006-2 was discovered during a survey for Ly$\alpha$ Emitters (LAEs) by the Subaru Telescope \citep{Shibuya} and is located at z = 7.2. \cite{SXDF} used ALMA to detect the [OIII] 88 $\mu$m emission line and, assuming that the galaxy is a circular disk, determined its dynamical mass (Table \ref{Tab1}). The upper limit on the continuum flux at band 6 (1.33 mm) of $<0.042$~mJy was also obtained. J1342+0928 is a quasar located at z = 7.54, first detected by \citet{Banados2018}. The detection of the [CII] 158 $\mu$m line emission allowed \citet{J1342} to determine the dynamical mass of this galaxy based on the virial theorem, and the assumption that the [CII] emission comes from a rotating disk. From the detection of the 223.5 GHz continuum, the mass of dust in this galaxy was determined (Table \ref{Tab1}). The limit on the molecular gas mass was derived from the observations of the CO(3-2) line (Table \ref{Tab1}). MACS0416$\_$Y1 is one of the brightest Lyman Break Galaxies (LBGs) at $z\sim8$ \citep{Infante2015,Laporte2015}. 
Using the detection of the [OIII] 88 $\mu$m line and the dust continuum, \citet{MACS} confirmed its redshift as 8.3118, and measured its dust mass (assuming a dust temperature of 40 and 50~K). From the optical emission the stellar mass was determined (Table \ref{Tab1}). In our analysis, the lower error bar for the stellar mass was modified disregarding model solutions with ages lower than one million years (Y.~Tamura, private communication). \begin{table*} \caption[]{\label{Tab1}List of physical properties of the galaxies in our sample.} \centering \large \begin{tabular}{lcccccc} \hline & $z$ & M$_{dyn}$ & M$_{dust}$ & M$_{gas}$& M$_{stellar}$ & Ref\\ & & (10$^{10}$ M$_\odot$) & (10$^{7}$ M$_\odot$) & (10$^{11}$ M$_\odot$) & (10$^{9}$ M$_\odot$) & \\ \hline \hline HATLAS & 6.027 & 2.6 & 19$\pm$4 \quad 42$\pm$7 & 0.16$\pm$0.06 & --- & 1\\ \hline HIMIKO & 6.595 & 1.168 $^{\dagger}$ & $<$0.16 $^{\dagger}$ & --- & 35$_{-26}^{+15}$ & 2, 3\\ \hline CR7 & 6.604 & --- & --- & --- & 20 & 4\\ CR7 Clump A & 6.601 & 3.9$\pm$1.7 & $<$0.81 & --- & --- & 5\\ CR7 Clump C-2 & 6.598 & 2.4$\pm$1.9 & $<$0.81 & --- & --- & 5\\ \hline SPT0311-58E & 6.9 & 7.7 $^{\dagger}$ & 40$\pm$20 & 0.4$\pm$0.2 & 35 $\pm$15 & 6\\ SPT0311-58W & 6.9 & 54.222 $^{\dagger}$ & 250$\pm$160 & 2.7$\pm$1.7 & --- & 6\\ \hline SXDF & 7.2 & 5 & $<$0.29 $^{\dagger}$ & --- & 0.347$_{-0.166}^{+0.616}$ & 7\\ \hline J1342+0928 & 7.54 & $<$15 \quad $<$3.2 & 24.5$\pm$18.5 & $<$0.12 & --- & 8 \\ \hline MACS0416\_Y1 & 8.3118 & --- & 0.36$\pm$0.07 \quad 0.82$\pm$0.16 & --- & 4.8$_{-1.8}^{+6.8}$ \quad 5.1$_{-4.9}^{+7.1}$ & 9\\ \hline A2744\_YD4 & 8.38 & --- & 0.55$_{-0.17}^{+1.96}$ & --- & 1.97$_{-0.66}^{+1.45}$ & 10 \\ \hline \end{tabular} \tablebib{$\dagger$ indicates the value determined in this work. 
(1)~\citet{HATLAS}; (2) \citet{HIMIKOclump}; (3) \citet{Ouchi2009}; (4) \citet{Sobral2015}; (5) \citet{CR7}; (6) \citet{SPT0311}; (7) \citet{SXDF}; (8) \citet{J1342}; (9) \citet{MACS}; (10) \citet{A2744}.} \end{table*} A2744$\_$YD4 at z = 8.38 is an LBG lensed by Pandora's Cluster. This object was observed as part of the Hubble Frontier Fields (HFF) programme by \cite{Abell2744}. The ALMA detection of the [OIII] 88 $\mu$m line allowed the confirmation of its redshift \citep{A2744}. Based on the dust continuum detection and the optical emission, \cite{A2744} estimated the mass of dust and stars in this galaxy (Table \ref{Tab1}). \section{Method}\label{Method} We calculated dynamical masses for the galaxies with emission line detections for which these values were not reported; this was the case for HIMIKO and SPT0311-58. Based on the detection of the [CII] emission line \citep{HIMIKOclump,SPT0311}, the sizes of these galaxies were measured as 3.9$\pm$1.1~$\times$~1.7$\pm$1.1~kpc for HIMIKO, 2.2~kpc for SPT0311-58E, and 7.5~$\times$~2.0~kpc for SPT0311-58W. The [CII] linewidths were measured as 180$\pm$50~km~s$^{-1}$ for HIMIKO, 500~km~s$^{-1}$ for SPT0311-58E, and 1000~km~s$^{-1}$ for SPT0311-58W. In order to calculate the dynamical masses we used eq.~5 in \citet{HATLAS}, based on the isotropic virial estimator. The results of these calculations are flagged in Table \ref{Tab1} with dagger symbols ($\dagger$). Dust masses were not reported for HIMIKO and SXDF. We used the reported dust continuum upper limits to estimate the upper limits on the dust masses of these galaxies, assuming a dust temperature of 40~K and using eq.~5 in \citet{Michalowski2009}, based on \citet{Taylor2005} and \citet{Hildebrand1983}. A value of the emissivity index of $\beta=2$ was assumed. This gives conservatively low dust masses (see Fig.~3 in \citealt{Michalowski2010c}).
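Both kinds of daggered mass estimates can be reproduced with a short script. This is a minimal sketch rather than the exact published pipeline: the virial coefficient $2.8\times10^{5}$ (for $\Delta v$ in km~s$^{-1}$ and $R$ in kpc), the choice of $R$ as half the geometric mean of the measured sizes, the adopted luminosity distance, and the \citet{Hildebrand1983}-style opacity normalization are our assumptions here, so the dust limit should be read as an order-of-magnitude check.

```python
# Sketch of the mass estimates flagged with daggers in Table 1.
# Assumptions (ours, for illustration): isotropic virial estimator
# M_dyn = 2.8e5 * (dv_FWHM)^2 * R  [M_sun, with dv in km/s, R in kpc],
# R = half the geometric mean of the measured sizes, and a modified
# blackbody with a Hildebrand-style opacity kappa(125 um) = 18.75 cm^2 g^-1.
import math

M_SUN = 1.989e30          # kg
C = 2.9979e8              # m s^-1
H = 6.626e-34             # J s
K_B = 1.381e-23           # J K^-1
MPC = 3.0857e22           # m

def m_dyn(dv_kms, a_kpc, b_kpc):
    """Isotropic virial mass in M_sun from a [CII] linewidth and source size."""
    r_kpc = 0.5 * math.sqrt(a_kpc * b_kpc)
    return 2.8e5 * dv_kms**2 * r_kpc

def planck(nu, t):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (K_B * t)) - 1.0)

def m_dust(f_mjy, z, lam_obs_mm, d_l_mpc, t_dust=40.0, beta=2.0):
    """Dust mass (M_sun) from a continuum flux: M = F D_L^2 / ((1+z) kappa B_nu)."""
    nu_rest = (1.0 + z) * C / (lam_obs_mm * 1e-3)
    kappa = 1.875 * (nu_rest / (C / 125e-6))**beta   # m^2 kg^-1, Hildebrand-style
    f_si = f_mjy * 1e-29                             # mJy -> W m^-2 Hz^-1
    d_l = d_l_mpc * MPC
    return f_si * d_l**2 / ((1.0 + z) * kappa * planck(nu_rest, t_dust)) / M_SUN

print(m_dyn(180, 3.9, 1.7))    # HIMIKO: ~1.17e10 M_sun (Table 1: 1.168e10)
print(m_dyn(500, 2.2, 2.2))    # SPT0311-58E: 7.7e10 M_sun
print(m_dyn(1000, 7.5, 2.0))   # SPT0311-58W: ~5.42e11 M_sun
print(m_dust(0.042, 7.2, 1.33, 74.4e3))  # SXDF: ~1e7 M_sun (order of magnitude)
```

With these assumptions the three daggered dynamical masses in Table \ref{Tab1} are recovered, while the SXDF dust limit comes out within a factor of a few of the tabulated value (the opacity normalization and cosmology adopted in the cited works may differ from ours).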
In particular, if we adopted $\beta=1.5$, then we would obtain dust masses 2.7 times higher, which would make the stellar dust producers even less likely. If we used this method for all galaxies, not only for those whose dust masses have not been calculated elsewhere, we would obtain masses differing by a factor of $0.84\pm0.33$. This would not change any of our conclusions. The results of our adopted dust mass calculations are flagged in Table \ref{Tab1} with dagger symbols ($\dagger$). Using the same methodology as presented by \citet{Michalowski2010b} and \citet{Michalowski2015}, we determined the amount of dust that one star would have to produce in order to explain the observed amount of dust in every galaxy. The number of dust-producing stars was estimated from the stellar masses in the studied galaxies. The stellar masses were estimated in three ways: (1) as equal to $M_{dyn}$ to obtain the maximum possible value, (2) as equal to $M_{dyn} - M_{gas}$, and (3) from SED modelling. The number of stars with masses between $M_0$ and $M_1$ can be calculated by the integration of the stellar initial mass function (IMF), according to the formula $N(M_0 - M_1) = M_{stellar} \int_{M_0}^{M_1} \xi(M) \mathrm{d}M / \int_{M_{min}}^{M_{max}} \xi(M) M \mathrm{d}M$, where $\xi(M)$ is an IMF parametrised as $M^{-\alpha}$. We assumed $M_{min} = 0.15~M_\odot$, $M_{max} = 120~M_\odot$, and three types of IMFs: the \citet{Salpeter1955} IMF with $\alpha$ = 2.35, the \citet{Kroupa2001} IMF with $\alpha$ = 1.3 in the mass range 0.15--0.5~$M_\odot$ and $\alpha$ = 2.3 in the mass range 0.5--120~$M_\odot$, and a top-heavy IMF with $\alpha$ = 1.5. The dust yield per star required to explain the dust observed in a galaxy is then $M_{dust}/N(M_{0} - M_{1})$. At the redshifts of the studied galaxies the time since the Big Bang was short, such that low-mass stars had not had time to leave the main sequence and start producing dust during the AGB phase.
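The IMF integrals above have closed forms for a power-law $\xi(M)$, so the required yields can be checked directly. The sketch below is illustrative only: it uses the single-slope \citet{Salpeter1955} IMF and, as an example, the maximal assumption $M_{stellar} = M_{dyn}$ for HATLAS; the numbers quoted in the following sections combine all three IMFs and stellar-mass estimates.

```python
# Number of stars with masses in [m0, m1] per unit stellar mass formed,
# for a single power-law IMF xi(M) ~ M^-alpha (Salpeter: alpha = 2.35),
# normalised over [0.15, 120] M_sun as in the text.
def pow_int(a, b, p):
    """Integral of M^p dM from a to b (p != -1)."""
    return (b**(p + 1) - a**(p + 1)) / (p + 1)

def n_stars_per_msun(m0, m1, alpha=2.35, m_min=0.15, m_max=120.0):
    return pow_int(m0, m1, -alpha) / pow_int(m_min, m_max, 1.0 - alpha)

n_sn = n_stars_per_msun(8.0, 40.0)   # SN progenitors per M_sun of stars formed
print(n_sn)                          # ~0.0079

# HATLAS, maximal assumption M_stellar = M_dyn = 2.6e10 M_sun,
# M_dust = 1.9e8 M_sun (the 40 K value in Table 1):
yield_per_sn = 1.9e8 / (n_sn * 2.6e10)
print(yield_per_sn)                  # ~0.9 M_sun per SN

# Main-sequence lifetime cut for AGB progenitors: t(M) = 1e10 (M/M_sun)^-2.5 yr
print(1e10 * 3.0**-2.5)              # ~6.4e8 yr: marginal at z ~ 6.6
print(1e10 * 3.5**-2.5)              # ~4.4e8 yr: the adopted cut at z >= 7
```

The resulting $\sim$0.008 SN progenitors per solar mass formed, combined with the HATLAS masses from Table \ref{Tab1}, give a required yield of $\sim$0.9~$M_\odot$ per SN, consistent with the figure of close to 1~$M_\odot$ quoted in the results section.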
Based on a main-sequence lifetime of 10$^{10}$~$\times$~[M/M$_\odot$]$^{-2.5}$~yr \citep{Kippenhahn1990}, we assumed that at $z < 7.0$ only stars with masses between 3 and 8~M$_\odot$ would have had time to enter the AGB phase, whereas at $z \ge 7.0$ the range of 3.5--8~M$_\odot$ was assumed. For SNe we assumed that their progenitors had masses in the range of 8--40~M$_\odot$. \section{Results and discussion} Figure \ref{Fig1} shows the dust yield per star required to explain the dust mass for a given galaxy, using all possible combinations of dust and stellar masses. Galaxies considered elsewhere \citep{Michalowski2010, Michalowski2015} are shown as grey symbols. The top and bottom panels assume that the dust is produced by SNe and AGB stars, respectively. Several reference regions are highlighted. The top panel shows the theoretical dust yield per SN without dust destruction ($<1.3\,M_\odot$; \citealt{Todini2001,Nozawa2003}) and with $\sim90$\% dust destruction ($<0.1\,M_\odot$; \citealt{Bianchi2007,Cherchneff2010,Gall2011,Lakicevic2015}). We note that it is difficult to constrain the fraction of newly formed dust destroyed by a reverse shock \citep{Micelotta2016}, so a weaker dust destruction is also possible. The bottom panel shows the maximum theoretical dust yield per AGB star ($0.04\,M_\odot$; \citealt{Morgan2003,Ferrarotti2006,Ventura2012,Nanni2013,Nanni2014,Schneider2014}). Depending on the data available, the stellar mass was obtained in various ways: as $M_{dyn}-M_{gas}$, as $M_{dyn}$, and from SED modelling. Only for one object, SPT0311-58E, was it possible to determine the required dust yield per star with all three different assumptions on the stellar mass, and the resulting values do not differ from each other by more than their uncertainties. For this galaxy the required dust yield per SN is about 0.1--1\,$M_\odot$. This is close to the maximum dust yield predicted by simulations and to the highest observed values.
Therefore, it is in principle possible that the dust in this galaxy was formed by SNe, but that would require weak dust destruction and very efficient production. The same cannot be said for AGB stars: one AGB star would need to create between 0.1 and 1\,$M_\odot$ of dust to explain the dust mass in SPT0311-58E, which is significantly higher than the allowed values. The dust yield per SN required to explain the dust in the HATLAS galaxy is close to 1\,$M_\odot$, so if SNe are responsible for the bulk of the dust production, then they would need to be maximally efficient and not destroy any dust. AGB stars cannot be responsible for the dust production in this galaxy because the required dust yield per star is more than 10 times higher than the theoretical value. In the case of the galaxies SPT0311-58W, J1342+0928, and A2744$\_$YD4 it is possible that SNe are responsible for the observed dust. Some dust destruction by SNe, though not a significant amount, would be allowed in these cases, as the derived dust yields are above 0.1\,$M_\odot$. This is in line with the result of \citet{Gall2018} that dust in distant galaxies (including A2744$\_$YD4) was formed by SNe, which requires very little dust destruction. Again, the required dust yields per AGB star for these three galaxies are significantly above the theoretical limit, so AGB stars have not contributed substantially to the dust production in these galaxies. For the remaining four galaxies in our sample the data are of insufficient quality to constrain the dust production mechanism. HIMIKO, CR7, and SXDF only have upper limits for the mass of dust, so we cannot rule out either SNe or AGB stars as dust producers. We can only conclude that one SN in these galaxies could not produce more than 0.01\,$M_\odot$ of dust.
This indicates that SNe in the early Universe are much less efficient dust producers than the maximum theoretical values would suggest, and casts doubt on efficient SN dust production in the remaining galaxies in our sample \citep[see also][]{Hirashita2014}. Our limit is two times deeper than that obtained by \citet{Hirashita2014} because we used stellar masses that are two times higher. For the last galaxy, MACS0416$\_$Y1, we also cannot constrain the dust production mechanism, because the large uncertainty on the measurement of its stellar mass results in a derived dust yield consistent with zero. In summary, AGB stars were not able to produce the majority of the dust in these $z>6$ galaxies. Our results are conservative (leading to low required dust yields) because we include all stars that in principle could contribute to dust production. Stars with masses close to our AGB lower limit ($3$ or $3.5\,M_\odot$) could have reached the AGB phase, but only if they were all born at the beginning of the galaxy's evolution. Supernovae would need to be maximally efficient and not destroy the dust they formed \citep[as in][]{Hjorth2014,Gall2018}. Recent observations of SN 1987A indicate that dust can re-form and re-grow in the post-shock region after being destroyed by the shock (\citealt{Matsuura2019}, but see \citealt{Biscaro2014}). This is consistent with a high dust production efficiency of SNe. Similarly, the detection of SN dust in a 10\,000-year-old SN remnant \citep{Lau2015,Chawner2019} indicates that dust is not efficiently destroyed by SNe. It is unclear, however, whether \textit{all} SNe can produce close to $1\,M_\odot$ of dust. If this is not the case, then some non-stellar mechanism is required, for example grain growth in the ISM. \citet{Asano2013} found that dust mass accumulation is dominated by grain growth in the ISM if the metallicity is higher than a threshold value of $0.3$ solar metallicity (or less if the star formation timescale is longer).
This is likely the case for the very dusty galaxies in our sample (HATLAS, SPT0311-58, J1342+0928), but more normal galaxies (MACS0416\_Y1, A2744\_YD4) may have lower metallicities. However, for A2744\_YD4 we derived very high required dust yields per star, so either its metallicity is above this threshold or grain growth remains efficient even below it. We consider AGB stars and SNe separately, but in reality both contribute to dust production at the same time. However, this does not affect our conclusions, because for the detected galaxies the required dust yields for AGB stars are approximately ten times higher than the allowed value. This means that AGB stars could produce at most 10\% of the dust in these galaxies, and thus considering AGB stars and SNe at the same time would lead to revising down the required dust yield per SN by only 10\%. The source of the heavy elements building the dust grains in these galaxies remains an open question. Theoretical work has shown that each SN can produce around $1\,M_\odot$ of heavy elements \citep{Todini2001,Nozawa2003,Bianchi2007,Cherchneff2009}. This is close to the required dust yields per SN in our sample, so, like \citet{Michalowski2010}, we conclude that SNe are efficient enough to produce the heavy elements needed to build dust grains in these galaxies, even if they do not directly form most of the dust. \section{Conclusions} We determined the dust yield per AGB star and SN required to explain the observed amount of dust in galaxies at redshift $6<z<8.4$. We obtained very high required dust yields per AGB star, so these stars were not able to produce the majority of the dust in these galaxies. In most cases the data are consistent with the hypothesis that the dust was formed by SNe, but the SNe would need to be maximally efficient and not destroy much dust.
This suggests either that supernovae were efficient in producing dust in these galaxies or that a non-stellar mechanism was responsible for a significant fraction of dust mass accumulation, for example grain growth in the ISM. \begin{acknowledgements} We thank the referee for helpful comments and Yoichi Tamura for the clarification on the stellar mass measurements in \citet{MACS}. M.J.M.~acknowledges the support of the National Science Centre, Poland, through the POLONEZ grant 2015/19/P/ST9/04010; this project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sk{\l}odowska-Curie grant agreement No. 665778. A.L. acknowledges the support from Adam Mickiewicz University in Poznań, Faculty of Physics, through grant POWR.03.01.00-00-S157/17; this project has received funding from The National Centre for Research and Development. This research has made use of The Edward Wright Cosmology Calculator http://www.astro.ucla.edu/\textasciitilde wright/CosmoCalc.html \citep{Wright2006} and of NASA's Astrophysics Data System. \end{acknowledgements} \bibliographystyle{aa} \input{output.bbl} \end{document}
\section{Introduction} Let $S$ be a closed oriented surface of genus at least $2$ and let $\mathcal{T}(S)$ indicate the Teichm\"uller space of marked hyperbolic structures on $S$. Let $\gamma$ be the free homotopy class of an immersed curve on $S$, and consider $\ell(\gamma):\mathcal{T}(S)\to \mathbb{R}$, the hyperbolic length function of $\gamma$. We are interested in the quantity $L(\gamma)= \inf \ell(\gamma)$, where the infimum is taken over $\mathcal{T}(S)$, and, when it exists, the minimizing point $X(\gamma)\in \mathcal{T}(S)$. The aim of this paper is to study the quantities $L(\gamma)$ and $X(\gamma)$, and their relationship to other invariants of curves, such as the self-intersection $\iota(\gamma,\gamma)$ or the simple lifting degree $\deg(\gamma)$ (defined to be the minimum degree of a finite cover $Y \rightarrow S$ so that $\gamma$ admits a simple elevation), when $\gamma$ is chosen ``at random''. One version of the notion of a ``random curve'', well-studied in the literature, can be constructed by first choosing a marked hyperbolic metric $\sigma$, viewed as a point in $\mathcal{T}(S)$. Because the unit tangent bundle of a finite-volume hyperbolic surface is itself finite-volume, one can choose a unit tangent vector on $S$ uniformly at random. Following this basepoint with the geodesic flow for a long time, one builds ``hyperbolically random'', or $\sigma$-\textit{random}, curves $\gamma_T$ on $S$, roughly of $\sigma$-length $T$. In fact, an appropriate weighting of the resulting curves limits to the Liouville current of the hyperbolic surface. This makes the quantities $L(\gamma_T)$, $X(\gamma_T)$, and the self-intersection $\iota(\gamma_T,\gamma_T)$ all fairly transparent. Since the Liouville current is length-minimized at $\sigma$, the length-minimizing metric of $\gamma_T$ converges to $\sigma$: evidently the `random' story has been biased by the choice at the outset of a hyperbolic metric.
The aim of this paper is to study the situation when $\gamma$ is instead chosen in a combinatorially random fashion. Let $\mathcal{A}=\{w_1,\ldots,w_{2g}\}$ be a standard generating set for $\pi_1S$, let $\Cay_\mathcal{A}(S)$ indicate the Cayley graph of $\pi_1S$ with respect to $\mathcal{A}$, and let $\mu$ be a non-elementary probability distribution supported on $\mathcal{A}$. Let $\gamma_n$ be the (unoriented) conjugacy class of the result of an $n$-step random walk on $\Cay_\mathcal{A}(S)$ driven by $\mu$. By $\mathbb{P}_{w}$ ($w$ for \textit{walk}) we will mean the probability operator induced by a simple random walk on $\Cay_\mathcal{A}(S)$, and by $\mathbb{P}^{(n)}_{g}$ ($g$ for \textit{generic}) we will mean the probability operator associated to the uniform distribution on the ball of radius $n$ about the identity in $\Cay_\mathcal{A}(S)$. In analogy with $\sigma$-random curves, we show that the location of the metric in $\mathcal{T}(S)$ minimizing the length of a curve is biased by the choice of generating set, in that with $\mathbb{P}_{w}$ (resp. $\mathbb{P}_{g}$) probability going to $1$ as the length of the walk (resp. the radius of the ball) goes to infinity, it lies near the hyperbolic metric minimizing the length of the corresponding rose: \begin{theorem} \label{thm: compact} Assume $S$ is closed. Then there is $K>0$ so that \[ \mathbb{P}_{w}\left[X(\gamma_{n}) \hspace{1 mm} \mbox{exists and} \hspace{1 mm} d_{\mathcal{T}(S)}(X(\gamma_{n}), X_{\mathcal{A}}) \leq K\right] \xrightarrow[\substack{n\to\infty}]{} 1, \] where $d_{\mathcal{T}(S)}$ denotes distance in the Teichm{\"u}ller metric and where $X_{\mathcal{A}}$ is the point minimizing the hyperbolic length of the rose associated to $\mathcal{A}$.
Similarly, letting $B_{n}$ denote the ball of radius $n$ about the identity in $\Cay_\mathcal{A}(S)$ and given $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g}\left[X(\gamma) \hspace{1 mm} \mbox{exists and} \hspace{1 mm} d_{\mathcal{T}(S)}(X(\gamma), X_{\mathcal{A}}) \leq K\right] \xrightarrow[\substack{n\to\infty}]{} 1. \] \end{theorem} Using Theorem \ref{thm: compact}, we prove that the self-intersection number of a combinatorially random curve grows quadratically in the length of a random walk, or when $S$ is closed, in the radius of a ball in which a curve is chosen uniformly at random: \begin{theorem} \label{thm: intersection} If $S$ is any finite-type surface, then there is $L \ge 1$ so that \[\mathbb{P}_{w} \left[ \frac{n^{2}}{L} \leq i(\gamma_{n}, \gamma_{n}) \leq L \cdot n^{2} \right] \xrightarrow[\substack{n\to\infty}]{} 1. \] Moreover, if $S$ is closed, then for $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g} \left[ \frac{n^{2}}{L} \leq i(\gamma, \gamma) \leq L \cdot n^{2} \right] \xrightarrow[\substack{n\to\infty}]{} 1.\] If $S$ is not closed, then for $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g} \left[ \frac{n}{L \cdot (\log(n))} \leq i(\gamma, \gamma) \leq L \cdot n^{2} \right] \xrightarrow[\substack{n\to\infty}]{} 1.\] \end{theorem} We point out that, in simplest terms, the random walk model counts \textit{paths} in the Cayley graph, whereas choosing uniformly at random in larger and larger balls counts \textit{elements}. Depending on the group, relationships may exist between the two models, but there is in general no straightforward way of converting from one to the other. \subsection{Simple lifting degree} Given $\gamma \in \pi_{1}(S)$ and a finite degree cover $p:\Sigma \rightarrow S$, an \textit{elevation} of $\gamma$ to $\Sigma$ is a lift $\tilde{\gamma} \in \pi_{1}(\Sigma)$ of some power of $\gamma$ (see preliminaries for a more formal definition).
The \textit{simple lifting degree} of $\gamma$, denoted $\deg(\gamma)$, is the minimum $n$ such that there exists a degree $n$ cover $p: \Sigma \rightarrow S$ and an elevation $\tilde{\gamma}$ to $\Sigma$ such that $\tilde{\gamma}$ is a simple closed curve. A classical and celebrated result of Scott (\cite{Scott78}, \cite{Scott782}) implies that $\deg(\gamma)$ is always finite and thus well-defined. Simple lifting degree has been studied extensively, see for instance \cite{Patel14}, \cite{AGPS17}, \cite{Gaster16}, \cite{AC19}. For example, Patel shows that $\deg(\gamma)$ can be bounded above by a linear function of the length of a geodesic representative for $\gamma$ on a particular hyperbolic metric \cite{Patel14}. Aougab-Gaster-Patel-Sapir show that $\deg(\gamma)$ is bounded above by some linear function of the self-intersection number \cite{AGPS17} (with constants depending on the complexity of the underlying surface), and Arenas--Coto-Neumann improve this to the uniform estimate $\deg(\gamma)\le 5i(\gamma,\gamma)+5$ \cite[Thm.~1.1]{AC19}. In the other direction, Gaster gives examples of curves $\gamma$ such that $\deg(\gamma)$ is essentially $i(\gamma, \gamma)$ \cite{Gaster16}. However, it remains quite difficult to bound $\deg(\gamma)$ from below, especially in the generic setting. Gupta-Kapovich show that in the presence of punctures, with $\mathbb{P}_{w}$ probability converging to $1$, $\deg(\gamma_{n})$ grows faster than $\log(n)^{(1/3)}$ \cite{GuKa19}. We improve this lower bound to one that is almost linear in the length of the walk, and as above, our results hold for both random walks and generically chosen curves. \begin{theorem} \label{thm: simple lifting degree for punctures} If $S$ is punctured, there is $\mathcal{M}>0$ so that \[ \mathbb{P}_{w} \left[ \frac{1}{\mathcal{M}} \cdot \frac{n}{\log(n)} \leq \deg(\gamma_{n}) \leq \mathcal{M} \cdot n \right] \xrightarrow[\substack{n\to\infty}]{} 1. 
\] Similarly, for $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g} \left[ \frac{1}{\mathcal{M}} \cdot \frac{n}{\log(n)} \leq \deg(\gamma) \leq \mathcal{M} \cdot n \right] \xrightarrow[\substack{n\to\infty}]{} 1.\] \end{theorem} \begin{remark} \label{any gen set} The techniques used to prove Theorem \ref{thm: simple lifting degree for punctures} require more restrictive assumptions on $\mathbb{P}_{w}$; specifically, we will assume that $\mu$ is supported on a basis for $\pi_{1}(S)$ and that it assigns equal weight to each generator. \end{remark} When $S$ is closed, we prove a genericity result for lower bounds on lifting degree that pertains to conjugacy classes: \begin{theorem} \label{thm: simple lifting degree for closed surfaces} Let $S$ be a closed surface, and let $C_{n}$ denote the set of all conjugacy classes admitting a representative of word length at most $n$. Then there exists some $\mathcal{Q} >0$ such that \[ \frac{\#( [c] \in C_{n} : \deg([c]) \geq \mathcal{Q} \cdot n/\log(n) )}{\#(C_{n})} \xrightarrow[\substack{n\to\infty}]{} 1. \] \begin{comment} Finally, if $\rho: \pi_{1}(S) \rightarrow PSL_{2}(\mathbb{R})$ is discrete and faithful, $o \in \mathbb{H}^{2}$ is an arbitrary choice of basepoint, then \[ \lim_{L \rightarrow \infty} \frac{\#(\gamma \in \pi_{1}(S): d_{\mathbb{H}^{2}}(o, \rho(\gamma) \cdot o) \leq L \hspace{1 mm} \mbox{and} \hspace{1 mm} \deg(\gamma) \geq \frac{1}{\mathcal{Q}} \cdot n/\log(n))}{\#(\gamma \in \pi_{1}(S): d_{\mathbb{H}^{2}}(o, \rho(\gamma) \cdot o) \leq L)} = 1.
\] \end{comment} \end{theorem} As a corollary of the lower bounds on self-intersection in Theorem \ref{thm: intersection}, we prove that for ``random'' curves on a closed surface, the linear upper bounds on simple lifting degree from Arenas--Coto-Neumann \cite{AC19} or from Aougab-Gaster-Patel-Sapir \cite{AGPS17} in terms of intersection number can be improved to a square-root upper bound: \begin{corollary} \label{better degree versus intersection} When $S$ is closed, there is some $J \geq 1$ so that \[ \mathbb{P}_{w} \left[ \deg(\gamma_{n}) < J \cdot \sqrt{i(\gamma_{n}, \gamma_{n})} \right] \xrightarrow[\substack{n\to\infty}]{} 1, \] and similarly for $\gamma \in B_{n}$, \[\mathbb{P}^{(n)}_{g}\left[ \deg(\gamma) < J \cdot \sqrt{i(\gamma, \gamma)} \right] \xrightarrow[\substack{n\to\infty}]{} 1. \] \end{corollary} \begin{proof} It is easy to see that any conjugacy class represented by a word of length at most $n$ has simple lifting degree at most on the order of $n$ (see the beginning of Section \ref{section: lifting} for the details). On the other hand, with $\mathbb{P}_{w}$ (resp. $\mathbb{P}^{(n)}_{g}$) probability converging to $1$ as $n \rightarrow \infty$, $\gamma_{n}$ (resp. $\gamma \in B_{n}$) has self-intersection number at least on the order of $n^{2}$. The result follows. \end{proof} \begin{remark} \label{closed versus punctures} The strategy for proving the lower bounds on simple lifting degree in Theorem \ref{thm: simple lifting degree for punctures} is to first prove the sort of genericity result for conjugacy classes stated in Theorem \ref{thm: simple lifting degree for closed surfaces} (see Section \ref{strategy} for a brief description of the idea); indeed, when proving this genericity statement for conjugacy classes, we handle both closed and punctured surfaces simultaneously.
The results in Theorem \ref{thm: simple lifting degree for punctures} then follow by giving an upper bound which has a strictly smaller exponential growth rate than the group itself for the intersection of any conjugacy class with a ball of radius $n$ in the Cayley graph (see Lemma \ref{lem: bounding conjugacy}). Let $\mathfrak{k}_{n}$ denote this quantity, i.e., the maximum --taken over all conjugacy classes $\mathfrak{c}$-- of the size of the intersection $\mathfrak{c} \cap B_{n}$. We conjecture that similar bounds on $\mathfrak{k}_{n}$ hold in the setting of a surface group with a standard generating set; indeed, it is not too hard to see that in any hyperbolic group, the exponential growth rate of a fixed conjugacy class is always half the growth of the parent group (see for instance \cite{PP15} for a proof). Proofs of this fact rely on standard properties of hyperbolicity, for example the fact that the centralizer of an element is quasi-convex. Crucially, these arguments apply in the regime when a conjugacy class is fixed once and for all. In general, the quality of the quasi-convexity of a centralizer $C_{\gamma}$ depends on the word length of $\gamma$, so obtaining bounds on the \textit{maximum} possible size-- over all conjugacy classes-- of $\mathfrak{c} \cap B_{n}$ seems like a potentially more delicate problem in general. This motivates the following two conjectures; when combined with the proofs in Section \ref{section: lifting}, verifying them would imply a version of Theorem \ref{thm: simple lifting degree for punctures} for closed surfaces: \begin{conjecture} Let $\Gamma_{g}$ denote the Cayley graph of a closed surface group with respect to a standard generating set. Then there is some polynomial $p$ so that \[ \max_{\mathfrak{c}} \#(\mathfrak{c} \cap B_{n}) \leq p(||\mathfrak{c}||) \cdot \#(B_{(n- ||\mathfrak{c}||)/2}), \] where $||\mathfrak{c}||$ denotes the minimum word length of a representative of the conjugacy class $\mathfrak{c}$. 
\end{conjecture} \begin{conjecture} \label{maximizing over conjugacy} Let $\Gamma_{g}$ be as above. Then there is some constant $\rho < 1$ so that \[ \max_{\mathfrak{c}}\mathbb{P}_{w}(w_{n} \in \mathfrak{c}) \leq \rho^{n}. \] \end{conjecture} We think of both of these conjectures as an attempt to apply statements already known for individual conjugacy classes to \textit{all} conjugacy classes at once and in a uniform way. Indeed, as mentioned above, the exponential growth rate of any individual conjugacy class is understood. Analogously, using work of Maher-Tiozzo outlined below in Section \ref{subsec: R}, one can show that the $\mathbb{P}_{w}$ probability that $w_{n}$ lies in some \textit{fixed} conjugacy class decays exponentially fast with $n$. In fact, we hypothesize that Conjecture \ref{maximizing over conjugacy} holds in any hyperbolic group when the distribution $\mu$ is supported on some finite generating set. \end{remark} It is in general quite difficult to bound the simple lifting degree from below in terms of explicit combinatorial properties of a given curve $\gamma$. The aforementioned work of the second author \cite{Gaster16} represents one of the only results in this spirit. Loosely speaking, it says that if a curve $\gamma$ enters some annulus $A$ on the surface, spirals $k$ times around its core, and then exits $A$ over the same boundary component it crossed to enter, then $\deg(\gamma) \ge k$ (see Lemma \ref{lem:degree} in Section \ref{section: lifting} for a formal statement and proof). If $\gamma$ exhibits such behavior, we will call $A$ a $k$-\textit{spiraling annulus} for $\gamma$. One can use work of Sisto-Taylor \cite{ST19} to show that with $\mathbb{P}_{w}$ probability converging to $1$, a given annulus $A$ can be a $k$-spiraling annulus for $\gamma_{n}$ only if $k$ is at most on the order of $\log(n)$ (again, see Section \ref{section: lifting} for details).
It therefore follows by Theorems \ref{thm: simple lifting degree for punctures} and \ref{thm: simple lifting degree for closed surfaces} that there are many curves whose lifting degree is large \textit{for some other reason besides the presence of spiraling annuli}. This motivates the search for a deeper understanding of how to translate between lifting degree and combinatorics: \begin{question} Are there simple combinatorial properties exhibited by a generic curve $\gamma$ which force $\deg(\gamma)$ to be large? \end{question} \subsection{Random point-pushing maps} In the setting where $S= (S,p)$ has a preferred marked point or puncture $p$, one has the celebrated Birman exact sequence of mapping class groups associated to forgetting the puncture: \[ 1 \rightarrow \pi_{1}(S,p) \rightarrow \mathcal{MCG}(S,p) \xrightarrow[]{\pi} \mathcal{MCG}(S) \rightarrow 1. \] The kernel of $\pi$ is isomorphic to $\pi_{1}(S,p)$ and is called the \textit{point-pushing subgroup} of $\mathcal{MCG}(S,p)$ (see the preliminaries section for more details). We are motivated by the following general question: \begin{question} \label{comb point push} Let $\gamma \in \pi_{1}(S,p)$ and suppose the corresponding point-pushing mapping class $f_{\gamma}$ is pseudo-Anosov. How can one relate dynamical properties of $f_{\gamma}$ (such as its translation length in Teichm{\"u}ller space or in the curve complex) to combinatorial or topological properties of the underlying curve $\gamma$? \end{question} Question \ref{comb point push} seems quite difficult in general. As of now, the deepest result addressing it is due to Dowdall \cite{Dowdall11} who relates the dilatation $\mbox{dil}(f_{\gamma})$ of $f_{\gamma}$ to the self-intersection number of $\gamma$ as follows: \[ (i(\gamma, \gamma)+ 1)^{(1/5)} \leq \mbox{dil}(f_{\gamma}) \leq 9^{i(\gamma, \gamma)}. 
\] Using Theorems \ref{thm: compact} and \ref{thm: intersection}, we can drastically improve these bounds for random or generic point-pushing maps: \begin{theorem} \label{thm: point push} There is $R>0$ so that \[ \mathbb{P}_{w} \left[ e^{\sqrt{\frac{i(\gamma_{n}, \gamma_{n})}{R}}} \leq \mbox{dil}(f_{\gamma_{n}}) \leq e^{\sqrt{R \cdot i(\gamma_{n}, \gamma_{n})}} \right] \xrightarrow[\substack{n\to\infty}]{} 1. \] Similarly, for $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g}\left[ e^{\sqrt{\frac{i(\gamma, \gamma)}{R}}} \leq \mbox{dil}(f_{\gamma}) \leq e^{\sqrt{R \cdot i(\gamma, \gamma)}} \right] \xrightarrow[\substack{n\to\infty}]{} 1. \] \end{theorem} \subsection{Outline of strategy and tools} \label{strategy} To prove Theorems \ref{thm: compact} and \ref{thm: intersection}, we combine Teichm{\"u}ller-theoretic arguments with probabilistic and counting results in the setting of negative curvature, due to Gekhtman-Taylor-Tiozzo \cite{GTT18} and Maher-Tiozzo \cite{MT18}. To get a feel for the strategy, we briefly outline the proof of Theorem \ref{thm: compact} for random walks. We first use Maher-Tiozzo \cite{MT18} to show that with high probability, $\gamma_{n}$ intersects any given simple closed curve at least linearly many times in $n$. It follows by a basic collar lemma argument that with high probability, $X(\gamma_{n})$ is uniformly thick, independent of $n$. From there (and again using Maher-Tiozzo in a different way) we argue that with high probability, any spelling of $\gamma_{n}$ in the generating set $\mathcal{A}$ uses linearly many occurrences of each generator. On the other hand, as one moves further and further away from $X_{\mathcal{A}}$ while remaining in the thick part of $\mathcal{T}(S)$, classical theorems of Kerckhoff \cite{Ker83} imply that for at least one generator $w_{i}$, the geodesic length of $w_{i}$ blows up.
It follows that since $w_{i}$ must be traversed linearly often to spell $\gamma_{n}$, the geodesic length of $\gamma_{n}$ on a sequence of thick metrics which get further and further away from $X_{\mathcal{A}}$ must grow super-linearly in $n$. However, $\gamma_{n}$ admits a representative on $X_{\mathcal{A}}$ which has length bounded above by a linear function of $n$. These arguments occur in Section \ref{section: minimizer}. With Theorem \ref{thm: compact} in hand, we approach Theorem \ref{thm: intersection} as follows. Let $\ell_{n}$ denote the geodesic length of $\gamma_{n}$ on its length-minimizing surface $X(\gamma_{n})$, and consider the weighted curve $\gamma_{n}/\ell_{n}$, viewed as a geodesic current on $X(\gamma_{n})$. A theorem of Bonahon \cite{Bon88} implies the existence of an accumulation point $c$ of the sequence $( \gamma_{n} / \ell_{n} )$. Since, by Theorem \ref{thm: compact}, the points $( X(\gamma_{n}) )$ all lie in a compact subset of $\mathcal{T}(S)$, the geodesic current $c$ must be length-minimized somewhere in the interior of $\mathcal{T}(S)$. On the other hand, if $i(\gamma_{n}, \gamma_{n})$ grows sub-quadratically, one can show that the self-intersection number of $\gamma_{n}/\ell_{n}$ goes to $0$ as $n \rightarrow \infty$. It would then follow that $c$ is a geodesic lamination, and no geodesic lamination is length-minimized at an interior point of $\mathcal{T}(S)$. These arguments can be found in Section \ref{section: intersection}. To prove Theorems \ref{thm: simple lifting degree for punctures} and \ref{thm: simple lifting degree for closed surfaces}, we use arguments that rely on counting simple closed geodesics on a hyperbolic surface up to a given length, in the spirit of Mirzakhani \cite{M08}; we quickly summarize the idea for proving the genericity result for conjugacy classes.
Let $d =\deg(\gamma)$, let $\Sigma$ be a degree $d$ cover of $S$ such that $\gamma$ admits a simple elevation to $\Sigma$, and let $\tilde{\gamma}$ denote this elevation. Equip $S$ with a complete hyperbolic metric and let $\ell(\gamma)$ denote the geodesic length of $\gamma$ in this metric. Since $\gamma$ represents a word that lives in $B_{n}$, the Milnor-Svarc lemma implies that $\ell(\gamma)$ is $O(n)$. The chosen metric pulls back to $\Sigma$, and by basic covering space theory, $\tilde{\gamma}$ admits a representative of length at most $d \cdot \ell(\gamma)$. Thus, the number of conjugacy classes $[\gamma]$ in $C_{n}$ admitting an elevation to a cover of degree at most $d$ is bounded above by the number of conjugacy classes of degree $d$ covers of $S$, multiplied by the number of simple closed geodesics on any such cover with length at most $d \cdot \ell(\gamma)$. Using Dehn-Thurston coordinates in a careful way, we estimate the number of all such curves from above, and show that they grow sub-exponentially in $n$ under the assumption that $d$ grows slower than $n/\log(n)$. The details can be found in Section \ref{section: lifting}. Since the main results of that section are stated only for standard generating sets, we conclude it with a way to obtain lower bounds on $\deg(w_{n})$ that hold with high $\mathbb{P}_{w}$ probability for any finite generating set $\mathcal{S}$, and with any distribution whose support is $\mathcal{S}$. However, these more general lower bounds are only on the order of $\log(n)$. Finally, we conclude the paper with a short proof of Theorem \ref{thm: point push} in Section \ref{section: point push}. The idea is simple: one uses either Maher-Tiozzo \cite{MT18} (in the setting of $\mathbb{P}_{w}$) or Gekhtman-Taylor-Tiozzo \cite{GTT18} (in the setting of $\mathbb{P}_{g}$) to argue that $\mbox{dil}(f_{\gamma_{n}})$ (or respectively $\mbox{dil}(f_{\gamma})$ for some $\gamma \in B_{n}$) grows exponentially in $n$ with high probability.
On the other hand, Theorem \ref{thm: intersection} implies that $i(\gamma_{n}, \gamma_{n})$ grows quadratically in $n$, and so the theorem follows by bounding $\mbox{dil}(f_{\gamma_{n}})$ above and below by functions of $n$, and then expressing $n$ as a function of $i(\gamma_{n}, \gamma_{n})$. \subsection{Acknowledgements.} The authors thank Ilya Gekhtman and Mark Hagen for guidance regarding the growth of conjugacy classes (and the growth of a single conjugacy class) relevant in Section \ref{section: lifting}. The authors also thank Samuel Taylor for many extremely helpful conversations. The first author was partially supported by NSF grant DMS 1939936. \section{Preliminaries} \label{section: prelim} \subsection{Curves and surfaces} \label{subsec: CS} In what follows, $S$ will be an orientable surface of finite type. By $S_{g,p,b}$ we will mean the surface of genus $g$ with $p \geq 0$ punctures and $b \geq 0$ boundary components. A \textit{closed curve} is a continuous function $\gamma: S^{1} \rightarrow S$. It is \textit{essential} if it is not homotopic to a constant map or into an arbitrarily small neighborhood of a puncture. We will use the notation $\sim$ for homotopy. We will sometimes conflate a curve with its image, or its entire homotopy class, when convenient. A curve is called \textit{simple} if it is an embedding. We will also sometimes refer to an entire homotopy class as simple when it admits a simple representative. Given a closed curve $\gamma: S^{1} \rightarrow S$ and a finite degree covering $p: \Sigma \rightarrow S$, an \textit{elevation} of $\gamma$ to $\Sigma$ is a closed curve $\tilde{\gamma}: S^{1} \rightarrow \Sigma$ such that there exists a covering map $\rho : S^{1} \rightarrow S^{1}$ with $p \circ \tilde{\gamma} = \gamma \circ \rho$. 
In other words, if we view $\gamma \in \pi_{1}(S, x)$ as an element of the fundamental group for some $x \in \mbox{Im}(\gamma)$, then $\tilde{\gamma} \in \pi_{1}(\Sigma, \tilde{x})$ is a path lift of some power of $\gamma$. Given curves $\alpha, \beta$, their \textit{geometric intersection number}, denoted $i(\alpha, \beta)$, is the minimum set-theoretic intersection taken over all representatives of the two homotopy classes: \[ i(\alpha, \beta) = \min_{\alpha' \sim \alpha, \beta' \sim \beta}|\alpha' \cap \beta'|. \] Curves $\alpha, \beta$ are said to be in \textit{minimal position} if they achieve the geometric intersection number for their respective homotopy classes. Similarly, a single curve $\alpha$ is in \textit{minimal position} if it achieves the geometric self-intersection number $i(\alpha, \alpha)$. Given a collection of curves $\Gamma = \left\{\gamma_{1},..., \gamma_{n} \right\}$ in pairwise minimal position, $\Gamma$ is said to \textit{fill} $S$ if $S \setminus \Gamma$ is a union of disks or once-punctured disks. A \textit{pants decomposition} of $S$ is a collection $\mathcal{P}$ of essential and pairwise disjoint simple closed curves so that $S \setminus \mathcal{P}$ is a disjoint union of pairs of pants (meaning a topological sphere whose numbers of boundary components and punctures sum to $3$). A standard Euler characteristic argument yields that the size of any pants decomposition is determined completely by the values of $g,p,b$. A \textit{multi-curve} is a finite collection of pairwise non-homotopic closed curves $\left\{\gamma_{1},..., \gamma_{n} \right\}$. A multi-curve is \textit{simple} if each component of it is simple and disjoint from all other components.
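To make the Euler characteristic count explicit: each pair of pants has Euler characteristic $-1$, so any pants decomposition cuts $S_{g,p,b}$ into $|\chi(S)| = 2g-2+p+b$ pairs of pants. Counting pants boundaries in two ways (each curve of $\mathcal{P}$ lies on two pairs of pants, while each puncture and boundary component of $S$ lies on one), we find \[ 3(2g-2+p+b) = 2 \cdot \# \mathcal{P} + p + b, \hspace{2 mm} \mbox{so that} \hspace{2 mm} \# \mathcal{P} = 3g-3+p+b. \]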
The proof of Theorem \ref{thm: compact} will require the use of a certain type of generating set which we now characterize: Choose a basepoint $p$ of $S$; then a \textit{standard generating set} of $\pi_{1}(S,p)$ consists of $2g$ elements $\left\{\alpha_{1}, \beta_{1},..., \alpha_{g}, \beta_{g} \right\}$ each represented by a simple closed curve, satisfying the single relation \[ [\alpha_{1}, \beta_{1}]...[\alpha_{g}, \beta_{g}]= 1\] and such that \[ i(\alpha_{j}, \beta_{k}) = \delta^{j}_{k}. \] When $S$ has punctures or boundary components, its fundamental group is the free group $F_{r}$ of some rank $r$. Assume first that $b+p=1$, and let $\pi$ denote the canonical projection from $F_{r}$ to a closed surface group associated to modding out by the normal closure of a generator for the cyclic subgroup associated to the puncture or boundary component. Then a \textit{standard generating set} of $\pi_{1}(S)$ is a generating set $\mathcal{S}$ of minimum cardinality so that $\pi(\mathcal{S})$ is a standard generating set of the closed surface group. When $b+p>1$, a generating set $\mathcal{S}$ is \textit{standard} if there is some puncture $x$ (resp. boundary component $c$) so that the projection associated to forgetting all other punctures and boundary components sends $\mathcal{S}$ to a standard generating set of the surface with the single puncture $x$ (resp. boundary component $c$). See Figure \ref{fig:standard} for an example. \begin{figure} \centering \includegraphics[scale=0.5]{Fig1Random.png} \caption{An example of a standard generating set of $S_{3,4,0}$.} \label{fig:standard} \end{figure} \subsection{Dehn-Thurston coordinates} \label{subsec: DT} Fix a pants decomposition $\mathcal{P}= \left\{\gamma_{1},..., \gamma_{3g-3} \right\}$ of $S$ and let $\mathcal{S}(S)$ denote the set of homotopy classes of simple multi-curves on $S$.
The \textit{Dehn-Thurston coordinates} are a parameterization of $\mathcal{S}(S)$ by coordinates in $\mathbb{Z}^{6g-6}$, described as follows (we follow the treatment of D. Thurston \cite{Thurston08} and Penner \cite{Penner}). The $i^{th}$ \textit{intersection number} $m_{i}$ of a given simple multi-curve $\alpha$ is simply the number of intersections between $\alpha$ and $\gamma_{i}$ when $\alpha$ is placed in minimal position with each $\gamma_{i}$. The non-negative numbers $m_{1},..., m_{3g-3}$ determine the intersection of $\alpha$ with each pair of pants in the complement of $\mathcal{P}$ up to isotopy. All that is left to do to recover $\alpha$ completely is to determine how neighboring pairs of pants glue together. For this, we need the \textit{twist numbers}, described informally as follows. Fix a ``doubling'' of each $\gamma_{i}$-- a choice of a parallel copy $\overline{\gamma_{i}}$ of $\gamma_{i}$. Each pair $\left\{\gamma_{i}, \overline{\gamma_{i}} \right\}$ bounds an annulus $\mathcal{A}_{i}$. Letting $P$ be one of the pairs of pants in the complement of $\mathcal{P}$, let $P'$ be the complement in $P$ of $\bigcup_{i} \mathcal{A}_{i}$. A basic lemma attributed to Dehn and Thurston (see for instance \cite{Penner}) implies that there is a way to isotope a given simple multi-curve $\alpha$ so that it intersects each $P'$ in a ``standard form'' that is determined by the isotopy class of the arc system $\alpha \cap P$. By choosing carefully a reference arc in each $\mathcal{A}_{i}$, one can then define the $i^{th}$ twisting number to be the signed intersection number between this reference arc and $\alpha$. Intuitively, it measures the number of times (and the direction in which) $\alpha$ twists about each pants curve when passing over it.
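In summary, a simple multi-curve $\alpha$ is recorded by one intersection number and one twisting number per pants curve: \[ \alpha \longmapsto \left( m_{1}(\alpha),..., m_{3g-3}(\alpha), t_{1}(\alpha),..., t_{3g-3}(\alpha) \right) \in \mathbb{Z}_{\geq 0}^{3g-3} \times \mathbb{Z}^{3g-3}, \] and, subject to the standard conventions for components with $m_{i}=0$, this assignment is the parameterization of $\mathcal{S}(S)$ by $\mathbb{Z}^{6g-6}$ referred to above.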
\begin{comment} \subsection{Intersecting an immersed curve with an annulus} \label{subsec:beta} In \cite{AGPS17}, the authors together with Patel and Sapir (and following ideas from \cite{Sapir16}) outline a combinatorial encoding of a non-simple closed curve on $S$ in terms of how it interacts with a given pants decomposition $\mathcal{P}$. We refer the reader to either \cite{AGPS17} or \cite{Sapir16} for details, but we describe the essentials here very briefly. Given a closed curve $\alpha$ and a pants decomposition $\mathcal{P} =\left\{\gamma_{1},..., \gamma_{3g-3} \right\}$, one first fixes a choice of annulus $\mathcal{A}_{i}$ with core curve $\gamma_{i}$. Then $\alpha$ intersects $\mathcal{A}_{i}$ in potentially many arcs, each having one of two types: \begin{enumerate} \item $\beta$-\textit{arcs}, which have one endpoint on each boundary component of $\mathcal{A}_{i}$; and \item $\tau$-\textit{arcs}, which have both endpoints on one boundary component of $\mathcal{A}_{i}$. \end{enumerate} Each arc of $\alpha$ in $\mathcal{A}_{i}$ -- whether it is a $\beta$ or a $\tau$ arc-- has a \textit{twisting number} which one can formalize in a way that is analogous to the twisting numbers of Dehn-Thurston coordinates. Intuitively, it measures the number of times the arc winds around $\gamma_{i}$ before leaving $\mathcal{A}_{i}$. Note that $\beta$-arcs are properly embedded, but $\tau$-arcs self-cross before leaving $\mathcal{A}_{i}$; moreover, only $\beta$-arcs intersect $\gamma_{i}$ essentially. \end{comment} \subsection{Hyperbolic geometry} \label{subsec: HG} A \textit{hyperbolic metric} on $S$ is a complete, finite area Riemannian metric of constant sectional curvature $-1$. The surface $S$ equipped with such a metric will be called a \textit{hyperbolic surface}. It is a standard consequence of negative curvature that given a hyperbolic surface $S$, in every homotopy class of essential closed curves, there exists a unique length-minimizing representative. 
This representative is called the \textit{geodesic} in that homotopy class. Another standard consequence of negative curvature is that geodesics always realize the geometric intersection number. Given $\epsilon>0$, a hyperbolic surface is said to be $\epsilon$-\textit{thick} if every closed geodesic has length at least $\epsilon$. Given an essential curve $\gamma$, we can define its \textit{length function} \[ \ell_{\gamma}: \mathcal{T}(S) \rightarrow \mathbb{R}, \] which records the geodesic length of $\gamma$ at a point $X \in \mathcal{T}(S)$. We write $L(\gamma)=\inf_{X \in \mathcal{T}(S)} \ell_\gamma(X)$ for the infimal geodesic length of $\gamma$ taken over all $X \in \mathcal{T}(S)$. The \textit{collar lemma} states that any simple closed geodesic $\alpha$ on a hyperbolic surface has an embedded collar neighborhood whose width is inversely proportional to the length of $\alpha$ (see for instance \cite{Buser} for more details). \begin{lemma} \label{lem: collar} Let $\ell(\alpha)$ denote the geodesic length of a simple closed curve $\alpha$ on a hyperbolic surface $S$. Then there is an embedded collar neighborhood of $\alpha$ with width at least \[ \sinh^{-1}\left( \frac{1}{\sinh\left( \frac{\ell(\alpha)}{2} \right)} \right)~. \] \end{lemma} The collar lemma implies the existence of some universal constant $C>0$ such that if $\alpha, \beta$ are two closed geodesics on a hyperbolic surface $S$ with geodesic lengths less than $C$, they must be disjoint. In Section \ref{section: lifting}, we will need to consider hyperbolic surfaces with geodesic boundary and perhaps also with finitely many punctures. When a surface of the form $S_{g,p,0}$ is equipped with an arbitrary complete hyperbolic metric, a classical theorem of Bers asserts the existence of a pants decomposition $\left\{\gamma_{1},..., \gamma_{3g-3+p} \right\}$ satisfying \[ \ell(\gamma_{k}) \leq 4k \cdot \log \left( \frac{4\pi (2g-2+p)}{k} \right)~.\] See \cite{Buser} for more details.
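For instance, taking $k=1$ shows that the shortest curve in such a pants decomposition satisfies \[ \ell(\gamma_{1}) \leq 4 \log \left( 4\pi (2g-2+p) \right), \] a bound depending only on the topology of $S$; summing over $k$ bounds the total length of the pants decomposition purely in terms of $g$ and $p$.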
This was generalized to surfaces equipped with arbitrary complete Riemannian metrics with area normalized to be $4\pi \left( g + \frac{p}{2} - 1\right)$ on closed surfaces (where $p$ now denotes the number of marked points) by Balacheff-Parlier-Sabourau \cite{BPS12}. They show that on such a surface, there is a pants decomposition $\mathcal{P}$ such that the total length $\ell(P)$ satisfies \[ \ell(P) \leq C_{g} \cdot p \log(p+1), \] where $C_{g}$ depends only on genus. In fact, they prove a more general result which applies to surfaces $S_{g,p,b}$ with potentially multiple boundary components and with arbitrary finite area. Following through the proof of their Theorem $6.10$, one finds the existence of a polynomial $\mathcal{F}(x,y)$ which is of degree $4$ in $x$ and degree $3$ in $y$ such that if $S$ is a hyperbolic surface with totally geodesic boundary all of whose boundary components have length at most $L$, there is a pants decomposition of total length at most $\mathcal{F}(|\chi(S)|, L)$. \begin{comment} following explicit bound on the length of any curve in the pants decomposition they construct as a function of $g,p,b, \text{Area}(S)$, and $L$, the maximum length of any boundary component (below, $C$ is a universal constant not depending on $S$): \[ 2\cdot g^{5/2} \cdot C^{2} \cdot \left( \text{Area}(S) + \frac{b}{2\pi}L^{2} + \frac{2g \cdot \left(C^{2} \cdot g \cdot \text{Area}(S)+ \frac{b}{2\pi}L^{2} \right)}{2\pi} \right) \cdot \sqrt{\text{Area}(S)+ \frac{b}{2\pi}L^{2}} + b \cdot L \] In the event that $S$ is hyperbolic, $\mbox{Area}(S)$ is of course a linear function of $g,p,$ and $b$. Thus the key take-away from the above is that there is an explicit polynomial $\mathcal{F}(x,y)$ which is of degree $4$ in $x$ and degree $3$ in $y$ such that if $S$ is a hyperbolic surface with totally geodesic boundary all of whose boundary components have length at most $L$, there is a pants decomposition of total length at most $\mathcal{F}(|\chi(S)|, L)$. 
\end{comment} \begin{remark} \label{easier but worse} We remark that there is a more elementary way to construct a bounded length pants decomposition $\mathcal{P}$ on a hyperbolic surface $S$ with totally geodesic boundary, although the bounds one gets are not as strong as in \cite{BPS12}. We sketch this briefly: First, add to $\mathcal{P}$ every closed geodesic whose length is less than $C$. Cutting along all such curves yields a (possibly disconnected) hyperbolic surface $S'$ with boundary and which is $C$-thick. The original area of $S$ is bounded (in terms only of the topology of $S$) by the Gauss-Bonnet theorem, and so each component of $S'$ has uniformly bounded area. Thus the $C$-thick part of each component has diameter at most some $D$, bounded only in terms of the original topology of $S$. It follows that on each component $\Sigma$ of $S'$, there is a simple closed geodesic $\sigma$ of length at most $2(D+L)$, where $L$ is the maximum of $C$, the length of any original boundary component of $S$, and the length of the boundary of a collar neighborhood of a geodesic of length $C$. One simply starts at a boundary component of $\Sigma$, picks the shortest essential arc with at least one endpoint there, and completes this arc to a simple closed curve by concatenating it with arcs that run around pieces of the boundary. We then cut along $\sigma$, and repeat the argument. At each stage, the maximum length of a boundary component can (roughly) double, and the number of stages is bounded above linearly in terms of $|\chi(S)|$. As a result, some curves in $\mathcal{P}$ might have length exponentially large as a function of both $|\chi(S)|$ and $L$. For our purposes in Section \ref{section: lifting}, it will turn out that either this bound or the one from \cite{BPS12} will suffice.
\end{remark} \subsection{Teichm{\"u}ller space and the mapping class group} \label{subsec: TS} The \textit{Teichm{\"u}ller space} of $S$, denoted $\mathcal{T}(S)$, is a space of (equivalence classes of) pairs $(\phi, \sigma)$ where $\sigma$ is a surface homeomorphic to $S$ equipped with a hyperbolic metric, and $\phi: S \rightarrow \sigma$ is a homeomorphism. Two pairs $(\phi, \sigma)$ and $(\phi', \sigma')$ are equivalent when there is an isometry $j: \sigma \rightarrow \sigma'$ such that $j \circ \phi = \phi'$ up to homotopy. The first coordinate $\phi$ is called the \textit{marking} of the point $X = (\phi, \sigma)$. The \textit{mapping class group}, denoted $\mathcal{MCG}(S)$, is the group of homotopy classes of orientation preserving homeomorphisms of $S$. Given a finite set of marked points or punctures $\mathfrak{p} \subset S$, we denote by $\mathcal{MCG}(S, \mathfrak{p})$ the group of orientation preserving homeomorphisms sending $\mathfrak{p}$ to itself, up to homotopy. The mapping class group acts on $\mathcal{T}(S)$ by composition with the marking. When $S$ has boundary, mapping classes correspond to homotopy classes of orientation preserving homeomorphisms that pointwise fix each boundary component. Mapping classes fit into a dynamical trichotomy known as the \textit{Nielsen-Thurston classification} (see \cite{FM12} for details): $f \in \mathcal{MCG}(S)$ is either finite order (known as elliptic); infinite order but preserving some simple multi-curve (reducible); or there exists a pair of transverse measured filling (singular) foliations and a real number $\lambda>1$ such that both foliations are preserved by $f$ and such that one of the transverse measures is multiplied by $\lambda$ and the other by $1/\lambda$ (known as pseudo-Anosov). The \textit{dilatation} of a pseudo-Anosov homeomorphism, denoted $\mbox{dil}(f)$, is the number $\lambda$ (when $f \in \mathcal{MCG}(S)$ is not pseudo-Anosov, we will define $\mbox{dil}(f)$ to be zero by convention).
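To build intuition for dilatation, it can help to consider the torus, which falls outside the pseudo-Anosov setting but where $\mathcal{MCG}(T^{2})$ is identified with $\mathrm{SL}(2,\mathbb{Z})$ and the role of pseudo-Anosov maps is played by Anosov ones. For example, the mapping class \[ f = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \] preserves the two eigendirections of the matrix, stretching one by $\lambda = \frac{3+\sqrt{5}}{2}$-- the larger root of $\lambda^{2} - 3\lambda + 1 = 0$-- and contracting the other by $1/\lambda$, so that $\mbox{dil}(f) = \frac{3+\sqrt{5}}{2}$.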
When a closed surface $S$ comes equipped with a preferred marked point $p$, the group $\mathcal{MCG}(S, p)$ fits into a short exact sequence known as the Birman exact sequence coming from the map $\pi: (S,p) \rightarrow S$ corresponding to forgetting the significance of $p$: \[ 1 \rightarrow \ker \pi \rightarrow \mathcal{MCG}(S,p) \xrightarrow[]{\pi} \mathcal{MCG}(S) \rightarrow 1. \] The kernel of $\pi$ is naturally identified with the surface group $\pi_{1}(S,p)$ and is called the \textit{point-pushing subgroup} of $\mathcal{MCG}(S,p)$. An element $\gamma \in \pi_{1}(S,p)$ gives rise to a point-pushing homeomorphism $f_{\gamma}$ by ``pushing'' the marked point $p$ around the curve $\gamma$ and dragging the surface along with it (see \cite{FM12} for formal details). A classical result of Kra \cite{Kra81} states that $f_{\gamma}$ is pseudo-Anosov exactly when $\gamma$ fills $S$. The Teichm{\"u}ller space can be topologized in several equivalent ways. One such way is to use so-called \textit{Fenchel-Nielsen coordinates} which in some sense mirror the Dehn-Thurston coordinates mentioned above. Fixing a pants decomposition $\mathcal{P}= \left\{\gamma_{1},..., \gamma_{n}\right\}$, the Fenchel-Nielsen coordinates of a point $X \in \mathcal{T}(S)$ are given by \[ \left( \ell_{\gamma_{1}}(X),..., \ell_{\gamma_{n}}(X), \tau_{1}(X),..., \tau_{n}(X) \right), \] where $\ell_{\gamma_{i}}(X)$ is the geodesic length of $\gamma_{i}$ on $X$, and $\tau_{i}(X)$ is a \textit{twisting parameter} which measures the extent to which a reference arc crossing from one pair of pants to another twists around the bounding curve when the surface is glued together. See for instance \cite{Buser} for the formal details. One can then topologize $\mathcal{T}(S)$ such that the Fenchel-Nielsen coordinates give a homeomorphism to $\mathbb{R}^{2n}$. With respect to this topology, the mapping class group acts properly discontinuously and by homeomorphisms.
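For example, when $S$ is closed of genus $g \geq 2$, any pants decomposition consists of $n = 3g-3$ curves, and so the Fenchel-Nielsen coordinates identify \[ \mathcal{T}(S) \cong \mathbb{R}_{>0}^{3g-3} \times \mathbb{R}^{3g-3} \cong \mathbb{R}^{6g-6}; \] in particular, $\mathcal{T}(S_{2,0,0}) \cong \mathbb{R}^{6}$.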
The quotient is naturally identified with the moduli space $\mathfrak{M}_{S}$ of hyperbolic surfaces homeomorphic to $S$. Given $\epsilon >0$, the $\epsilon$-\textit{thick} part $\mathcal{T}_{\epsilon}(S)$ of $\mathcal{T}(S)$ is the set of all $\epsilon$-thick hyperbolic surfaces. The projection of $\mathcal{T}_{\epsilon}(S)$ to $\mathfrak{M}_{S}$ is compact. The Teichm{\"u}ller space admits a natural metric called the \textit{Teichm{\"u}ller metric}, which we will denote by $d_{\mathcal{T}}$. The formal details can be found for instance in \cite{FM12}; the essential idea is that $d_{\mathcal{T}}(X,Y)$ is the $\log$ of the infimal dilatation of a quasi-conformal homeomorphism from $X$ to $Y$, isotopic to the identity (defined relative to the markings of $X$ and $Y$). The $\mathcal{T}(S)$-\textit{translation length} of a mapping class $f$, denoted $\tau_{\mathcal{T}}(f)$, is defined to be \[ \min_{X \in \mathcal{T}(S)} d_{\mathcal{T}}(X, f(X)). \] In the event that $f$ is pseudo-Anosov, one has the relation \[ \log(\mbox{dil}(f)) = \tau_{\mathcal{T}}(f). \] In the context of his celebrated proof of the Nielsen Realization Theorem, Kerckhoff \cite{Ker83} shows that if $\Gamma$ is a filling collection of curves, there exists a unique point $X_{\Gamma}$ in $\mathcal{T}(S)$ minimizing the combined geodesic length of the curves in $\Gamma$. Roughly speaking, this argument has two steps: First, because $\Gamma$ is filling, the length function $\ell_\Gamma$ is proper, so a minimum exists. Second, one computes that the first variation of the length function $\ell_\Gamma$ along the twist flow of a simple closed curve $\alpha$ is a sum of cosines of the angles at the intersection points $\Gamma\cap \alpha$ (which are nonempty because $\Gamma$ fills $S$), and that this quantity strictly increases under twisting. Because simple closed curves are dense in the space of measured laminations, one deduces that the length function $\ell_\Gamma$ is strictly convex along any earthquake path.
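The computation in the second step can be recorded concretely: writing $\mathrm{tw}^{t}_{\alpha}X$ for the time-$t$ twist deformation of $X$ along a simple closed curve $\alpha$, a formula of Wolpert (see also \cite{Ker83}) gives \[ \frac{d}{dt}\Big|_{t=0} \ell_{\gamma}\left( \mathrm{tw}^{t}_{\alpha}X \right) = \sum_{p \in \gamma \cap \alpha} \cos \theta_{p}, \] where $\theta_{p}$ denotes the angle at which the geodesic representatives of $\gamma$ and $\alpha$ cross at $p$.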
Because any pair of points in $\mathcal{T}(S)$ is joined by an earthquake path, the minimum of $\ell_\Gamma$ is unique. Similarly, if $G \hookrightarrow S$ is a graph on $S$ that fills $S$, in the sense that each complementary region is a disk or once-punctured disk, then Kerckhoff's argument again implies that there is a unique point $X_{G} \in \mathcal{T}(S)$ minimizing the length of $G$. \subsection{The curve complex} \label{subsec: CC} The \textit{curve complex} of $S$, denoted $\mathcal{C}(S)$, is the simplicial complex whose vertices correspond to homotopy classes of essential simple closed curves on $S$ and whose $k$-simplices correspond to collections of $k+1$ (homotopy classes of) simple closed curves that can be realized pairwise disjointly on $S$. For our purposes, it will suffice to consider the $1$-skeleton of $\mathcal{C}(S)$, known as the \textit{curve graph} (and which, slightly abusing notation, we will also denote by $\mathcal{C}(S)$). The curve graph is made into a metric space by identifying each edge with a unit length segment. A celebrated result of Masur-Minsky states that the associated path metric $d_{\mathcal{C}}$ on $\mathcal{C}(S)$ is $\delta$-hyperbolic \cite{MM99}, meaning that there is $\delta>0$ so that the $\delta$-neighborhood of the union of any two sides of a geodesic triangle contains the third side. The mapping class group $\mathcal{MCG}(S)$ acts by simplicial automorphisms on $\mathcal{C}(S)$ by acting on the homotopy classes of vertices and then extending to the higher dimensional simplices. There is a coarsely well-defined projection map \[ \mbox{sys}: \mathcal{T}(S) \rightarrow \mathcal{C}(S)\] from the Teichm{\"u}ller space to the curve complex: given $X \in \mathcal{T}(S)$, $\mbox{sys}(X)$ is the simple closed curve representing the \textit{systole} of $X$, the shortest closed geodesic. The map $\mbox{sys}$ is technically not well-defined since $X$ could have multiple systoles.
However, no two systoles can intersect more than once and therefore the set of all systoles constitutes a subset of $\mathcal{C}(S)$ of diameter at most $2$. We can then define $\mbox{sys}$ as a set map, taking values in the power set of the vertices of $\mathcal{C}(S)$. Masur-Minsky show that $\mbox{sys}$ is \textit{coarsely Lipschitz}: there is some $K>0$ depending only on the topology of $S$ so that \[ d_{\mathcal{C}}(\mbox{sys}(X), \mbox{sys}(Y)) \leq K \cdot d_{\mathcal{T}}(X, Y) + K. \] Furthermore, $\mbox{sys}$ is $\mathcal{MCG}(S)$-equivariant, in the sense that $\mbox{sys}(f(X))$ coincides with $f(\mbox{sys}(X))$ for any $f \in \mathcal{MCG}(S)$ (keeping in mind that technically both are subsets of $\mathcal{C}(S)$). \subsection{Geodesic currents} \label{subsec: GC} In this subsection, we assume that $S$ is closed in order to avoid technical difficulties in the theory that won't arise in the course of our arguments. Fix a point $X \in \mathcal{T}(S)$; it determines an action of $\pi_{1}(S)$ on the universal cover of $X$ which is identified with $\mathbb{H}^{2}$. This action extends to the Gromov boundary $\partial_{\infty}\mathbb{H}^{2} \cong S^{1}$. The \textit{space of unoriented geodesics} of $\mathbb{H}^{2}$, denoted $\mathcal{G}(\mathbb{H}^{2})$, is identified with \[ ((S^{1} \times S^{1}) \setminus \Delta)/\mathbb{Z}_{2}, \] where $\Delta$ is the diagonal and the $\mathbb{Z}_{2}$ action comes from the two choices of orientation for a given bi-infinite geodesic. The action of $\pi_{1}(S)$ on $\partial_{\infty}(\mathbb{H}^{2})$ coming from $X$ induces a natural action of $\pi_{1}(S)$ on $\mathcal{G}(\mathbb{H}^{2})$. A \textit{geodesic current} on $X$ is then a locally finite $\pi_{1}(S)$-invariant Radon measure on $\mathcal{G}(\mathbb{H}^{2})$. A key example of a geodesic current comes from a choice of a (weighted) closed curve $\lambda \cdot \gamma$, for $\lambda\in\mathbb{R}^+$.
Given such a weighted curve, there is the counting measure supported on the set of all lifts of $\gamma$ to $\mathbb{H}^{2}$, where we assign a subset of geodesics a measure of $\lambda \cdot n$ exactly when it contains $n$ lifts. Measured laminations are also geodesic currents. Let $C(X)$ denote the set of all geodesic currents on $X$. Celebrated work of Bonahon \cite{Bon88} shows how to naturally topologize $C(X)$ in a way that doesn't depend on the underlying choice of metric or marking, and thus one can unambiguously speak of $C(S)$, the space of geodesic currents on $S$. Moreover, the geometric intersection number of closed curves extends to a bilinear, continuous intersection form $i:C(X)\times C(X)\to \mathbb{R}^+$, invariant under the natural diagonal mapping class group action by homeomorphisms. Bonahon also shows that $\mathcal{T}(S)$ embeds in $C(S)$ in such a way that its closure reproduces Thurston's compactification by projective measured laminations. This embedding can be defined using so-called \textit{Liouville currents}: for each $X \in \mathcal{T}(S)$, there is a unique geodesic current $\mathcal{L}_{X}$ such that for any closed curve $\gamma$ on $X$, \[ \ell_{X}(\gamma) = i(\gamma, \mathcal{L}_{X}). \] The embedding mentioned above sends $X$ to $\mathcal{L}_{X}$. A geodesic current $\mu\in C(S)$ is \textit{filling} provided $i(\mu,\alpha)>0$ for every simple closed curve $\alpha$. Of key importance for us is the following compactness criterion of Bonahon \cite{Bon88}. \begin{theorem} \label{compact currents} Let $c \in C(S)$ be a filling current and fix $R >0$. Then \[ N(c,R) = \left\{ \gamma \in C(S) : i(\gamma, c) \leq R \right\} \] is compact in $C(S)$. \end{theorem} \subsection{Counting and random walks in hyperbolic spaces} \label{subsec: R} Let $G$ be a countable group and $\mu$ a probability distribution on $G$.
We can then consider the \textit{random walk driven by} $\mu$, obtained by taking products of the form \[ w_{n} = g_{1}g_{2}...g_{n} \] where $g_{i} \in G$ are independent and identically distributed with distribution $\mu$. Let $(X,x_{0})$ be a pointed separable $\delta$-hyperbolic space, and suppose $G$ acts by isometries on $X$. The orbit map $g \mapsto g \cdot x_{0}$ gives us a way of pushing forward a random walk on $G$ to a sequence $(w_{1}x_{0}, w_{2}x_{0},...)$ in $X$ called a \textit{sample path}. See for instance \cite{MT18} for more details. We let $\mathbb{P}^{(n)}_{w}$ denote the probability induced by the $n$-fold convolution $\mu^{\ast n}$ on the space of sample paths of size $n$. For convenience and since the argument of $\mathbb{P}^{(n)}_{w}$ will always be in terms of some variable that is indexed by $n$, we will often omit the superscript and will simply write $\mathbb{P}_{w}$. The distribution $\mu$ is called \textit{non-elementary} if the subgroup of $G$ generated by its support contains a pair of loxodromic isometries with disjoint fixed points on the Gromov boundary $\partial_{\infty}X$. Maher-Tiozzo \cite{MT18} describe very detailed structural properties of sample paths in this context: \begin{theorem} \label{thm:MT} Let $G$ be a countable group of isometries of a separable $\delta$-hyperbolic space $(X, x_{0})$ and let $\mu$ be a non-elementary probability distribution on $G$. Then there is $L>0$ so that \[ \mathbb{P}_{w}\left[ d_{X}(x_{0}, w_{n}x_{0}) \leq L \cdot n \right] \xrightarrow[\substack{n\to\infty}]{} 0. \] Moreover, letting $\tau_{X}(w_{n})$ denote the translation length of $w_{n}$ on $X$, \[ \mathbb{P}_{w}\left[ \tau_{X}(w_{n}) \leq L \cdot n \right] \xrightarrow[\substack{n\to\infty}]{} 0. 
\] \end{theorem} In fact, in this context, the progress made from the identity has an almost surely defined limiting behavior (see for instance Theorem $8.14$ of \cite{Woess02}): \begin{theorem} \label{thm: limit of progress exists} In the context of Theorem \ref{thm:MT}, there is $m>0$ so that \[ \lim_{n \rightarrow \infty} \frac{d_{X}(x_{0}, w_{n}x_{0})}{n} = m, \hspace{2 mm} \mathbb{P}_{w} - \mbox{almost surely}. \] \end{theorem} \begin{comment} \begin{remark} \label{drift versus translation} The arguments in Maher-Tiozzo \cite{MT18} imply that if $L>0$ is the constant so that \[ \mathbb{P}_{w}\left[ \tau_{X}(w_{n}) \leq L \cdot n \right] \xrightarrow[\substack{n\to\infty}]{} 0, \] then one can choose $L = m/2$. \end{remark} Given a random walk driven by $\mu$ on a countable group $G$, the \textit{spectral radius} $\rho$ of the walk is defined to be \[ \rho = \lim_{n \rightarrow \infty} \mathbb{P}_{w}(w_{2n}= 1)^{1/2n}. \] Zuk \cite{Zuk97} showed that if $\rho_{g}$ denotes the spectral radius of the symmetric random walk on the Cayley graph of a surface group with respect to a standard generating set, then \begin{equation} \label{Zuk} \rho_{g} \leq \frac{1}{\sqrt{g}} \end{equation} In section \ref{section: lifting}, we will need the following estimate often called the Carne-Varapoulos theorem (\cite{Carne85}, \cite{Var85}) which bounds the probability of landing on any one vertex $x$ as a function of the distance to the identity $d(1,x)$ \begin{equation} \label{CV theorem} \mathbb{P}_{w}[w_{n} = x] \leq \rho^{n} \cdot e^{-2d(1,x)^{2}/n}. \end{equation} \end{comment} Assume next that $G$ itself is Gromov hyperbolic and fix a finite generating set $\mathcal{S}$ of $G$. Let $B_{n}$ denote the ball of radius $n$ about the identity in the Cayley graph $\Gamma_{\mathcal{S}}$ associated to $\mathcal{S}$. We can then consider the probability measure $\mathbb{P}^{(n)}_{g}$ coming from the uniform distribution on $B_{n}$.
Again, we will sometimes drop the superscript $n$ and simply write $\mathbb{P}_{g}$, though we will make this abbreviation less often than with $\mathbb{P}_{w}$, since the argument of $\mathbb{P}^{(n)}_{g}$ need not involve a term indexed by $n$. In this setting, Gekhtman-Taylor-Tiozzo \cite{GTT18} show the analog of Theorem \ref{thm:MT}: \begin{theorem} \label{thm: GTT} Let $G$ be a hyperbolic group with a non-elementary action on a separable pointed hyperbolic space $(X, x_{0})$. Then the proportion of elements in $B_{n}$ that act loxodromically on $X$ goes to $1$: \[ \frac{\#\left\{h \in B_{n}: h \hspace{2 mm} \mbox{acts loxodromically on} \hspace{2 mm} X \right\}}{\#B_{n}} = \mathbb{P}^{(n)}_{g}\left[h \in G: h \hspace{2 mm} \mbox{acts loxodromically on} \hspace{2 mm} X\right] \xrightarrow[\substack{n\to\infty}]{} 1. \] Furthermore, a generically chosen element $h$ in $B_{n}$ moves $x_{0}$ linearly far as a function of its word length $|h|$, and in fact its stable translation length is also bounded below by a linear function of word length: there is some $L> 0$ so that \[ \mathbb{P}^{(n)}_{g}\left[ h: d_{X}(x_{0}, h \cdot x_{0}) \geq L \cdot |h| \right] \xrightarrow[\substack{n\to\infty}]{} 1; \] \[ \mathbb{P}^{(n)}_{g}\left[h : \tau_{X}(h) \geq L \cdot |h|\right] \xrightarrow[\substack{n\to\infty}]{} 1. \] \end{theorem} \begin{remark} \label{stable} Gekhtman-Taylor-Tiozzo state their results in terms of \textit{stable} translation length, which is defined as \[ \liminf_{n \rightarrow \infty} \frac{d_{X}(x_{0}, g^n \cdot x_{0})}{n}. \] However, stable translation length is bounded above by translation length, so the lower bounds stated above follow from theirs. \end{remark} \begin{remark} \label{rem: large length} In a surface group with a standard generating set, it is easy to see that the proportion of elements in $B_{n}$ whose word length is at least $n/2$ goes to $1$ as $n \rightarrow \infty$.
It follows that when $G$ is a surface group equipped with a standard generating set, one gets a linear lower bound, in terms of $n$, on both progress away from $x_{0}$ and $X$-translation length for generic elements in $B_{n}$. \end{remark} By the phrase ``with high probability'', we will mean ``with either $\mathbb{P}_{w}$ or $\mathbb{P}^{(n)}_{g}$ (depending on context) going to $1$ as $n \rightarrow \infty$''. \section{Locating length minimizers} \label{section: minimizer} For concision, we will state arguments in a way that applies to both $\mathbb{P}_{g}$ and $\mathbb{P}_{w}$ whenever possible. We assume $S$ is closed for this entire section. First, we verify that a unique length minimizer for $\gamma_{n}$ exists with high probability: \begin{lemma} \label{lem:minimizer exists} We have \[ \mathbb{P}_{w} \big[ \exists ! X\in \mathcal{T}(S) \text{ such that } \ell_{X}(\gamma_{n}) = L(\gamma_n) \big] \xrightarrow[\substack{n\to\infty}]{} 1, \] and similarly, \[ \mathbb{P}^{(n)}_{g} \big[ \gamma \in B_{n}: \exists ! X \in \mathcal{T}(S) \text{ such that } \ell_{X}(\gamma) = L(\gamma) \big] \xrightarrow[\substack{n\to\infty}]{} 1. \] \end{lemma} \begin{proof} First, choose a basepoint $p \in S$. Then for $\mathbb{P}_{w}$, let $w_{n} \in \pi_{1}(S, p)$ denote the location of the walk after $n$ steps, and consider the point-pushing map $f_{w_{n}}$. Since the translation length of an element $f \in \mathcal{MCG}(S,p)$ acting on the curve complex $\mathcal{C}(S)$ is larger than $2$ only if $f$ is pseudo-Anosov, it follows by Theorem \ref{thm:MT} that with high probability, $f_{w_{n}}$ is pseudo-Anosov. By Kra's theorem mentioned in Section \ref{subsec: TS}, $\gamma_{n}$ fills $S$ with high $\mathbb{P}_{w}$ probability. Finally, by Kerckhoff's work mentioned in Section \ref{subsec: HG}, we are done because a filling curve has its length minimized at a unique point in $\mathcal{T}(S)$.
For $\mathbb{P}_{g}$, the proof is the same, except instead of Theorem \ref{thm:MT}, we use Theorem \ref{thm: GTT} to argue that with high $\mathbb{P}_{g}$ probability, the point-pushing map with defining curve $\gamma \in B_{n}$ is pseudo-Anosov, and therefore the homotopy class associated to the conjugacy class of $\gamma$ fills $S$. \end{proof} Henceforth, we denote by $X_\gamma \in \mathcal{T} (S)$ the point which uniquely minimizes the length of $\gamma$, if it exists. Let $\alpha$ be an arbitrary essential simple closed curve on $S$. The goal of the next lemma is to quantify the intersection between $\alpha$ and $\gamma_n$, or between $\alpha$ and a generically chosen $\gamma \in B_{n}$. \begin{lemma} \label{linear intersection} There exists $C > 0$ so that for any essential simple closed curve $\alpha$, \[ \mathbb{P}_{w} \left[ \frac{i(\gamma_n , \alpha)}{n} > C \right] \xrightarrow[\substack{n\to\infty}]{} 1~. \] Similarly, \[ \mathbb{P}^{(n)}_{g} \left[ \gamma \in B_{n}: \frac{i(\gamma, \alpha)}{n}> C \right] \xrightarrow[\substack{n\to\infty}]{} 1~. \] \end{lemma} \begin{proof} For $\mathbb{P}_{w}$, just as in the proof of Lemma \ref{lem:minimizer exists}, we argue by reinterpreting the random walk as a walk in the point-pushing subgroup of the mapping class group of the surface obtained by puncturing $S$. Then Theorem \ref{thm:MT} implies the existence of some $L>0$ so that \[ \mathbb{P}_{w} \left[\frac{\tau_{\mathcal{C}}(f_{w_{n}})}{n} > L \right] \xrightarrow[\substack{n\to\infty}]{} 1~,\] where $\tau_{\mathcal{C}}(f_{w_n})$ denotes the translation length of $f_{w_{n}}$ in the curve complex of $S$. One has the analogous statement for $\mathbb{P}_{g}$ by using Theorem \ref{thm: GTT}.
Thus, the lemma follows (for instance by setting $C=L$) by establishing the following general inequality, relating translation lengths of point-pushing maps to intersection numbers between the defining closed curve and any simple closed curve: \[ \tau_{\mathcal{C}}(f_{\gamma}) \leq \min_{\alpha} i([\gamma], \alpha)~. \] We show this using induction on intersection number. Abusing notation slightly, we will use $\gamma$ to refer both to the element in $\pi_{1}(S,p)$ and to its conjugacy class. If there exists a simple closed curve $\alpha$ disjoint from $\gamma$, then $\alpha$ is fixed by $f_\gamma$ and the desired inequality follows. If there exists $\alpha$ intersecting $\gamma$ exactly once, $\alpha$ will be disjoint from $f_{\gamma}(\alpha)$ and the inequality follows again. In general, let $\alpha$ be a simple closed curve so that $i(\gamma, \alpha)= n$. Base $\pi_1 (S)$ at one of the intersection points $x$ between $\alpha$ and $\gamma$. Then $\gamma$ can be expressed as a concatenation of $\gamma_1 , \gamma_2 \in \pi_1 (S,x)$ as follows: starting at $x$, traverse along $\gamma$ until arriving at another intersection point $x'$ between $\gamma$ and $\alpha$; then make a choice of one of the two sub-arcs of $\alpha$ bounded by $x, x'$ to travel along to arrive back at $x$. The resulting closed curve is defined to be $\gamma_1$. Then $\gamma_2$ begins at $x$ with the same sub-arc of $\alpha$ chosen to close up $\gamma_1$ but traversed in the opposite direction. Once arriving at $x'$, traverse along the portion of $\gamma$ that $\gamma_1$ missed. It is clear that $\gamma = \gamma_1 \ast \gamma_2$ in $\pi_1(S, x)$. Moreover, \[i(\alpha, \gamma_1) \leq 1, \hspace{2 mm} \mbox{and} \hspace{2 mm} i(\alpha, \gamma_2) < i(\alpha,\gamma).
\] Thus, by the base cases demonstrated above and the induction hypothesis, we have that \[ d_{\mathcal{C}}(\alpha, f_{\gamma_{1}}(\alpha))\leq 1, \hspace{2 mm} d_{\mathcal{C}}(\alpha, f_{\gamma_{2}}(\alpha)) \leq n-1, \] where $d_{\mathcal{C}}$ denotes distance in the curve graph. Since the identification of the point-pushing subgroup with $\pi_1 (S, x)$ is induced by an isomorphism, we have that \[f_{\gamma}= f_{\gamma_1 \ast \gamma_2} = f_{\gamma_2} f_{\gamma_1}. \] We then have \[d_{\mathcal{C}}(\alpha, f_{\gamma}(\alpha)) = d_{\mathcal{C}}(\alpha, f_{\gamma_1 \ast \gamma_{2}}(\alpha))= d_{\mathcal{C}}(\alpha, f_{\gamma_{2}}f_{\gamma_{1}}(\alpha)) \] \[ \leq d_{\mathcal{C}}(\alpha, f_{\gamma_{2}}(\alpha))+d_{\mathcal{C}}(f_{\gamma_{2}}(\alpha), f_{\gamma}(\alpha))\] \[ = d_{\mathcal{C}}(\alpha, f_{\gamma_{2}}(\alpha)) + d_{\mathcal{C}}(\alpha, f_{\gamma_{1}}(\alpha)) \leq (n-1) +1 = n.\qedhere\] \end{proof} Given $X \in \mathcal{T} (S)$, a \textit{rose} for $\mathcal{A}$ centered at $x \in S$, or an $\mathcal{A}$-rose, is the union of minimum length representatives of the generators in $\mathcal{A}$, taken over all representatives starting and ending at $x$. The point $x$ is called the basepoint of the rose. Let $\ell_{\mathcal{A}}(X)$ denote the infimal length of a rose on $X$, taken over all possible choices of basepoint. As discussed in Section \ref{subsec: HG}, Kerckhoff's argument for length functions of filling curve systems implies that there is a unique point in $\mathcal{T}(S)$ minimizing the function $\ell_{\mathcal{A}}: \mathcal{T} (S) \rightarrow \mathbb{R}$. Denote this point by $X_{\mathcal{A}}$.
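For later reference, we record the elementary linear upper bound furnished by a rose; this is simply the Milnor-Svarc comparison made explicit in this setting, and the constant $D$ below is introduced only for this estimate. Fix $X \in \mathcal{T}(S)$ and let $D$ denote the length of the longest petal of a minimum length $\mathcal{A}$-rose on $X$. A word of length at most $n$ in the generators traces a loop on this rose traversing at most $n$ petals, so that \[ \ell_{X}(\gamma) \leq D \cdot n \hspace{2 mm} \mbox{for every} \hspace{2 mm} \gamma \in B_{n}. \]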
We are now ready to prove Theorem \ref{thm: compact}, stated here again for convenience: \textit{ There is $K>0$ so that} \[ \mathbb{P}_{w}(d_{\mathcal{T}(S)}(X_{\gamma_{n}}, X_{\mathcal{A}}) \leq K) \xrightarrow[\substack{n\to\infty}]{} 1. \] \textit{Similarly, for} $\gamma \in B_{n}$, \[ \mathbb{P}^{(n)}_{g}(d_{\mathcal{T}(S)}(X_{\gamma}, X_{\mathcal{A}}) \leq K) \xrightarrow[\substack{n\to\infty}]{} 1. \] We remark that in the statement of Theorem \ref{thm: compact}, the event that $X_{\gamma}$ does not exist is by convention interpreted as $d_{\mathcal{T}(S)}(X_{\gamma}, X_{\mathcal{A}}) > K$. \begin{proof} We first consider $\mathbb{P}_{w}$; for brevity, write $X_{n} := X_{\gamma_{n}}$. Suppose, seeking a contradiction, that there is an exhaustion of $\mathcal{T}(S)$ (in the Teichm{\"u}ller metric) by nested compact subsets $K_0 \subset K_1 \subset K_2 \subset... $ with $X_{\mathcal{A}} \in \bigcap_{i} K_{i}$, and some positive $\mathbb{P}_{w}$ probability, bounded away from $0$ in $n$, that $X_{n} \notin K_n$. Assume furthermore that there exists a sequence of positive real numbers $(r_{n})$ converging to $0$ so that, with some positive probability, $X_{n}$ admits a simple closed geodesic $\alpha_{n}$ of length at most $r_{n}$. Then by Lemma \ref{linear intersection}, the probability that $i(\gamma_n, \alpha_{n})> Cn$ converges to $1$. It follows that there is a positive probability that the length of $\gamma_{n}$ on $X_{n}$ grows super-linearly in $n$. Indeed, the length of $\gamma_{n}$ on $X_{n}$ is at least $Cn \cdot f(r_{n})$, where $f : \mathbb{R}_+ \rightarrow \mathbb{R}$ is the function from Lemma \ref{lem: collar} giving the minimum width of a collar neighborhood about a geodesic of a given length, and $\lim_{r \rightarrow 0}f(r) = \infty$.
However, $\gamma_{n}$ admits a representative on $X_{\mathcal{A}}$ of length at most $Dn$, where $D$ is the maximum length of any of the representatives of the generators based at the point minimizing the length of the $\mathcal{A}$-rose. Since $X_{n}$, whenever it exists, minimizes the length of $\gamma_{n}$ over all of $\mathcal{T}(S)$, and since it exists with probability converging to $1$, this is a contradiction. Therefore, we can assume there exists some $\epsilon$ so that with probability converging to 1, every simple closed geodesic on $X_{n}$ has length at least $\epsilon$. It follows by an application of the Milnor-Svarc lemma that with high probability, the geodesic length of $\gamma_{n}$ on $X_{n}$ is \textit{uniformly} comparable to the length of a minimum length representative of $\gamma_n$ lying on a minimum length $\mathcal{A}$-rose in $X_n$. Before proceeding with the proof for $\mathbb{P}_{w}$, we establish the analogous statement for $\mathbb{P}_{g}$: there is $\epsilon>0$ so that the proportion of curves $\gamma \in B_{n}$ with the property that $X_{\gamma}$ is $\epsilon$-thick goes to $1$. The proof is essentially the same: Lemma \ref{linear intersection} implies that with high $\mathbb{P}_{g}$ probability, $i(\gamma, \alpha)> Cn$ for $\gamma \in B_{n}$ and $\alpha$ arbitrary. Now, assume there is some $\delta >0$ and some function $r: \mathbb{N} \rightarrow \mathbb{R}$ going to $0$ so that for each $n$, \[ \mathbb{P}^{(n)}_{g} \big[ \gamma \in B_{n}: X_{\gamma} \text{ is not } r(n) \text{-thick } \big] > \delta. \] But then there is positive $\mathbb{P}^{(n)}_{g}$ probability that $\gamma \in B_{n}$ has geodesic length at least $f(r(n)) \cdot Cn$ on $X_{\gamma}$. This is again a contradiction: $\gamma$ has word length at most $n$, and therefore admits a representative whose geodesic length on the fixed metric $X_{\mathcal{A}}$ is at most linear in $n$; since $X_{\gamma}$ minimizes the length of $\gamma$, the same linear bound holds at $X_{\gamma}$. We now continue with the proof, first in the setting of $\mathbb{P}_{w}$.
Recall that we are assuming that $\mathcal{A} = \left\{\alpha_{1}, \beta_{1},..., \alpha_{g}, \beta_{g} \right\}$ is a standard generating set, so that \begin{equation} \label{eq:std gen set} i(\alpha_{i} , \beta_{j})= \delta_{ij}. \end{equation} As each $\alpha_i$ and $\beta_{i}$ is simple, there is a tree $T_{\alpha_{i}} \subset \mathbb{H}^{2}$ (resp. $T_{\beta_{i}}$) dual to the lifts of $\alpha_i$ (resp. $\beta_{i}$). The tree $T_{\alpha_{i}}$ admits a $\pi_{1}(S)$-action, induced by the action of $\pi_{1}(S)$ on the universal cover of $S$. In this way, we can push forward the sample path $(w_{1}, w_{2},...)$ on the $\mathcal{A}$-Cayley graph of $\pi_{1}(S)$ to a walk $(w_{1} \cdot x_{0}, w_{2} \cdot x_{0},...)$ on $T_{\alpha_{i}}$ once we choose a basepoint $x_{0}$ for the tree. Since trees are hyperbolic, Theorem \ref{thm:MT} applies and we find some $D_i >0$ so that \[\mathbb{P}_{w} \big[ d_{T_{\alpha_{i}}}(w_{n} x_{0}, x_{0}) > D_i \cdot n \big] \xrightarrow[\substack{n\to\infty}]{} 1. \] Similarly, Theorem \ref{thm: GTT} gives us the analogous statement for $\mathbb{P}_{g}$: there is some $D'_{i}>0$ so that \[ \mathbb{P}_{g} \big[d_{T_{\alpha_{i}}}(\gamma \cdot x_{0}, x_{0})> D'_{i}\cdot n \big] \xrightarrow[\substack{n\to\infty}]{} 1. \] It follows that, with high $\mathbb{P}_{w}$ probability, if $s$ is any string of generators representing $w_n$, there are at least $K_{i} \cdot n$ occurrences of $\beta_i^{\pm 1}$ in $s$, where $K_{i} := \min(D_i, D'_{i})/2$. Indeed, distance from the basepoint in $T_{\alpha_{i}}$ measures the number of times a word crosses $\alpha_i$, and $\beta_i$ is the only generator crossing $\alpha_i$. Similarly, Theorem \ref{thm: GTT} together with Remark \ref{rem: large length} gives that $\gamma \in B_{n}$ has at least $K_{i} \cdot (n/2)$ occurrences of $\beta_{i}^{\pm 1}$.
Let $g_n$ denote a minimum length representative of $\gamma_n$ on $X_n$, taken over all representatives lying on a minimum length $\mathcal{A}$-rose in $X_n$. Then with high $\mathbb{P}_{w}$-probability, the above paragraph implies that $g_n$ traverses the $\beta_i$ petal at least $K_{i} \cdot n$ times. Similarly, a minimum length representative of $\gamma \in B_{n}$ lying on the minimum length $\mathcal{A}$-rose traverses the $\beta_{i}$ petal at least $K_{i} \cdot (n/2)$ times. Since any $\mathcal{A}$-rose fills the surface, the length function $\ell_{\mathcal{A}}$ is proper, and $X_n \notin K_n$ implies that the $X_n$-minimum length of $\mathcal{A}$ goes to infinity with $n$. This in turn (after passing to a subsequence if necessary) implies that for some $i$, the length of the $\beta_i$-petal on a minimum length rose on $X_n$ goes to infinity with $n$. Since $g_n$ traverses this petal at least $K_{i} \cdot n$ times and the $X_n$ length of $g_n$ is comparable to the geodesic length of $\gamma_n$ on $X_n$ (where the discrepancy depends only on the thickness of $X_{n}$), it follows that the geodesic length of $\gamma_n$ on $X_n$ grows super-linearly in $n$. On the other hand, $\gamma_{n}$ has geodesic length on the order of $n$ on $X_{\mathcal{A}}$, a contradiction when $X_{n}$ exists. Since $X_{n}$ exists with high probability, we are done. The argument for $\mathbb{P}_{g}$ from this point is totally analogous. \end{proof} \begin{remark} \label{generating sets} We remark that the proof of Theorem \ref{thm: compact} is the only place we use something specific about standard generating sets -- namely, the fact that each generator is simple and there is a decomposition into pairs that intersect once and such that any two curves in different pairs are disjoint. These assumptions are actually overkill, and so the conclusion of Theorem \ref{thm: compact} applies to a larger class of generating sets besides standard ones.
Indeed, if the generators are not simple, one can replace the dual tree in the universal cover with the dual cube complex, which is still hyperbolic. The key property we used is that there is a constant $k$ so that the generating set is of the form $\left\{\alpha_{1}, \beta_{1},..., \alpha_{m}, \beta_{m} \right\}$, and so that $i(\alpha_{i}, \beta_{j}) \leq k \cdot \delta_{ij}$. This property was only required for arguing that with high probability, any spelling of a word representing $\gamma_{n}$ (or $\gamma \in B_{n}$ in the case of $\mathbb{P}_{g}$) uses at least linearly many (in $n$) occurrences of each generator. It would be interesting to characterize those finite generating sets of $\pi_{1}(S)$ for which this property holds with high probability. \end{remark} \section{Self-intersection number} \label{section: intersection} We are now ready to prove Theorem \ref{thm: intersection}. For the majority of this section, we will assume that $S$ is closed; we will then derive the $\mathbb{P}_{w}$ statement of Theorem \ref{thm: intersection} for punctured surfaces from the corresponding statement in the closed setting. Note that a quadratic upper bound on self-intersection for $w_n$, and in fact for any $\gamma \in B_{n}$, holds non-probabilistically. Indeed, for all $\gamma\in B_n$ one has \[i(\gamma, \gamma) \leq \frac{n(n-1)}{2}~. \] This follows by choosing a representative for $\gamma$ on an $\mathcal{A}$-rose (which we will also denote by $\gamma$); $\gamma$ returns to the basepoint at most $n$ times, and we can select a small neighborhood $N$ of the basepoint so that all self-intersection occurs within $N$. Then $N \cap \gamma$ is a collection of at most $n$ simple arcs with endpoints on $\partial N$, and $\gamma$ can be homotoped within $N$ so that any two of these arcs cross at most once. Thus, in Theorem \ref{thm: intersection}, for both $\mathbb{P}_{w}$ and $\mathbb{P}_{g}$, it suffices to prove the lower bound.
\begin{proof} We will phrase the proof in terms of $\mathbb{P}_{w}$; the $\mathbb{P}_{g}$ argument will be essentially identical, replacing occurrences of Theorem \ref{thm:MT} with Theorem \ref{thm: GTT}. With high $\mathbb{P}_{w}$ probability, $X_{\gamma_{n}}$ lies in a fixed compact subset $K \subset \mathcal{T}(S)$ by Theorem \ref{thm: compact}. The word length of $w_{n}$ is at most $n$; on the other hand, Theorem \ref{thm:MT} implies that it is at least $L \cdot n$ for some $L>0$, with high $\mathbb{P}_{w}$ probability. The Milnor-Svarc lemma then implies that the $X_{\gamma_{n}}$-length of $\gamma_{n}$ is on the order of $n$ with high $\mathbb{P}_{w}$ probability: \[ \mathbb{P}_{w} \left[ \ell_{X_{n}}(\gamma_{n}) \geq \frac{n}{Q} \right] \xrightarrow[\substack{ n \to \infty}]{} 1. \] Here, $Q$ is some constant depending only on the topology of $S$ and the compact set $K$ (which in turn depends only on the chosen distribution $\mu$). Now, suppose that the lower bound in Theorem \ref{thm: intersection} does not hold with high probability. Thus, with positive probability, \[ i(\gamma_{n}, \gamma_{n}) = o(n^{2}). \] Consider, as a geodesic current, the weighted curve $\gamma_{n}/\ell_{X_{n}}(\gamma_{n})$. Choose a point $p \in K$ and let $\mathcal{L}_{p}$ denote the corresponding Liouville current. By compactness of $K$, the $p$-length of $\gamma_{n}$ is uniformly comparable to its $X_{n}$-length. Furthermore, the $p$-length of $\gamma_{n}$ is equal to the intersection number $i(\gamma_n,\mathcal L_p)$. Thus, using linearity of the intersection form, with high probability one has \[ i \left(\gamma_{n}/\ell_{X_{n}}(\gamma_{n}), \mathcal{L}_{p} \right) = O(1). \] Since $\mathcal{L}_{p}$ is filling, Theorem \ref{compact currents} implies that, with high $\mathbb{P}_{w}$ probability, $\left\{ \gamma_{n}/ \ell_{X_{n}}(\gamma_{n}) \right\}$ lives in a compact subset of the space of geodesic currents. 
We can therefore pass to a convergent subsequence (and re-index the sequence if necessary) to obtain a limiting current, $c$. By the assumption that $i(\gamma_{n}, \gamma_{n}) = o(n^{2})$ with positive probability, and since $\ell_{X_{n}}(\gamma_{n}) \geq n/Q$, it follows that there is a positive probability that \[ i(c, c) = \lim_{n \rightarrow \infty} \frac{i(\gamma_{n}, \gamma_{n})}{\ell_{X_{n}}(\gamma_{n})^{2}} = 0. \] Thus, there is a positive probability that $c$ is a geodesic lamination. Let $C_{fill}(S)$ denote the subset of $C(S)$ consisting of filling geodesic currents and consider the function $\pi: C_{fill}(S) \rightarrow \mathcal{T}(S)$ sending a filling current $c$ to the unique point $X(c) \in \mathcal{T}(S)$ minimizing its length. This is a continuous projection (see for instance \cite{HS21}). One can extend $\pi$ to a continuous function $\overline{\pi}$ with domain $C_{fill}(S) \cup \mathcal{ML}(S)$ and codomain $\mathcal{T}(S) \cup \mathcal{PML}(S)$ as follows: if $\lambda$ is a geodesic lamination, then $\overline{\pi}(\lambda) = [\lambda]$, the point in Thurston's compactification associated to the projective class of $\lambda$. Continuity of $\overline{\pi}$ implies that \[ \lim_{n \rightarrow \infty} \overline{\pi}(\gamma_{n}/\ell_{X_{n}}(\gamma_{n})) = \overline{\pi}(\lim_{n \rightarrow \infty}\gamma_{n}/\ell_{X_{n}}(\gamma_{n})),\] whenever both sides exist. Note that $\overline{\pi}(\gamma_{n}/\ell_{X_{n}}(\gamma_{n})) = X_{n}$, since scaling a current does not change its length minimizer. Thus, if $c$ is a geodesic lamination, the right hand side is a point in $\mathcal{PML}(S)$, whereas the left hand side is a limit of points in the compact set $K$ and therefore a point in the interior of $\mathcal{T}(S)$. This is a contradiction, and therefore there cannot be a positive probability that $c$ is a geodesic lamination, and hence there cannot be a positive probability that $i(\gamma_{n}, \gamma_{n})$ grows sub-quadratically. This completes the proof. \end{proof} The proof of Theorem \ref{thm: intersection} uses the assumption that $S$ is closed in fairly essential ways. For example, when $S$ has punctures, it is easy to see that Theorem \ref{compact currents} is false.
In that setting, the complement of a filling curve $\gamma$ will contain punctured disks; one can then choose a sequence $(\gamma_{i})$ of closed curves each intersecting $\gamma$ some bounded number of times, but which self intersect more and more by winding more times about a cusp in the complement of $\gamma$. Continuity of self-intersection implies that $N(c,R)$ is not compact: the sequence $(\gamma_i)$ cannot have a convergent subsequence. However, one can derive the $\mathbb{P}_{w}$ portion of Theorem \ref{thm: intersection} for punctured surfaces $(S,p)$ once one has it for closed surfaces by using the forgetful map $\pi: (S,p) \rightarrow S$ that fills in the puncture. Self-intersection number can only decrease under $\pi$, and so starting with a random walk $(\gamma_{n})$ on a Cayley graph for $\pi_{1}(S-p)$ associated to a standard generating set, one can push it forward via $\pi$ to a Cayley graph for $\pi_{1}(S)$ and apply Theorem \ref{thm: intersection} for closed surfaces. A similar argument doesn't quite work for $\mathbb{P}_{g}$ in the setting of punctured surfaces. One can still use $\pi$ to pass between the ball of radius $n$ in the punctured and closed surface groups, but a priori the fibers can have very different sizes, and perhaps a very large fiber sits over one of the very few curves in the closed surface with small self intersection. For this reason, we derive the $\mathbb{P}_{g}$ statement of Theorem \ref{thm: intersection} from Theorem \ref{thm: simple lifting degree for punctures}, using the work of Arenas--Coto-Neumann \cite{AC19} as an intermediary (alternatively, one could also use the main theorem of \cite{AGPS17}). In more detail, in Section \ref{section: lifting} below we will prove that when $S$ is punctured, with high $\mathbb{P}_{g}$ probability the simple lifting degree of a curve $\gamma \in B_{n}$ is at least \[ \mathcal{M} \cdot n/\log(n) \] for some constant $\mathcal{M}>0$. 
Arenas--Coto-Neumann show that if $\gamma$ is an essential closed curve on any orientable surface, one has \[ \deg(\gamma) < 5(i(\gamma, \gamma) +1). \] Thus, if $i(\gamma, \gamma) = o(n/\log(n))$, we contradict Theorem \ref{thm: generic lifting}. We conclude this section with a brief sketch of an alternative argument for Theorem \ref{thm: intersection} that is a bit less self-contained and relies on more recent work. We frame it in terms of $\mathbb{P}_{g}$, but just as with the proof presented above, the argument for $\mathbb{P}_{w}$ is totally analogous. Aougab-Gaster-Patel-Sapir \cite{AGPS17} prove that if $\gamma$ is any essential closed curve on either a closed or punctured surface $S$, there is some constant $C= C(S)$ and a complete hyperbolic metric $\sigma$ on $S$ so that \[ \ell_{\sigma}(\gamma) \leq C \cdot \sqrt{i(\gamma, \gamma)}. \] In Section \ref{section: minimizer}, we showed that when $n$ is large, an overwhelming proportion of the elements in $B_{n}$ have the property that the infimal length over $\mathcal{T}(S)$ is on the order of $n$. Therefore, an overwhelming proportion of the elements in $B_{n}$ must have at least on the order of $n^{2}$ self-intersections, for otherwise, some definite proportion of the elements in $B_{n}$ would achieve a length sub-linear in $n$ on some hyperbolic metric. \section{Lifting degree} \label{section: lifting} In this section, we prove Theorems \ref{thm: simple lifting degree for punctures} and \ref{thm: simple lifting degree for closed surfaces}. The upper bound on simple lifting degree in both settings is straightforward using work of Patel \cite{Patel14} and the fact that given a fixed hyperbolic surface $X \in \mathcal{T}(S)$, any $\gamma \in B_{n}$ admits a representative with $X$-length at most $K \cdot n$, where $K$ depends only on $X$.
It follows that $\gamma \in B_{n}$ has length at most $K \cdot n$ on the surface used by Patel to bound simple lifting degrees from above, and thus $\deg(\gamma) \leq K \cdot 17 \cdot n$. We next tackle the lower bound on simple lifting degree for $\mathbb{P}_{g}$ in the case of punctures or boundary components, and for a generic conjugacy class in the setting of closed surfaces. When $S$ is closed, we assume the use of a standard generating set; when $S$ has boundary or punctures, we choose any basis for $\pi_{1}(S)$ as our generating set. The argument will be simpler to organize by replacing each puncture with a boundary component; this has no effect on the relevant counts. By $S_{g,b}$, we will mean the surface of genus $g$ with $b \geq 0$ boundary components. \begin{theorem} \label{thm: generic lifting} When $\pi_{1}(S)$ is free, there is some $\mathcal{M}>0$ so that with high $\mathbb{P}_{g}$ probability, the simple lifting degree is at least $\mathcal{M} \cdot n/\log(n)$. Moreover, when $S$ is closed, \[ \lim_{n \rightarrow \infty} \frac{ \#\left\{[c] \in C_{n}: \deg([c]) \geq \mathcal{M} \cdot n/\log(n)\right\}}{\#C_{n}} = 1. \] \end{theorem} \begin{proof} Let $d$ denote the simple lifting degree $\deg(\gamma)$ of some curve $\gamma \in B_{n}$. A theorem of Hall \cite{Hall49} yields the following recursive count for $N_{d}(r)$, the number of index $d$ subgroups of $F_{r}$, the free group of rank $r$ (in particular, this bounds the number of conjugacy classes of such subgroups from above): \[ N_{d}(r) = d \cdot (d!)^{r-1} - \sum_{k=1}^{d-1}((d-k)!)^{r-1} \cdot N_{k}(r) \leq (d+1)!^{r}. \] For surface groups, we get the following count for (what we will call) $N_{d}(2g)$ due to Mednykh (see for instance \cite[Ch.~14]{LS03}): \[ N_{d}(2g)= \frac{h_{d}}{(d-1)!} - \sum_{k=1}^{d-1}\frac{h_{d-k}\cdot N_{k}(2g)}{(d-k)!}~, \] where $h_{d}$ is the quantity \[ h_{d}= (d!)^{(2g-1)}\cdot \sum_{\chi \in \text{Irr}(\Sigma_{d})} \chi(1)^{2-2g}~.
\] In the above sum, $\chi$ ranges over all irreducible characters of $\Sigma_{d}$, the symmetric group on $d$ symbols. Irreducible representations of $\Sigma_{d}$ are in correspondence with conjugacy classes of $\Sigma_{d}$, and thus the number of irreducible representations is equal to the number of partitions of the integer $d$. Therefore, using the classical Hardy-Ramanujan asymptotic $p(d) \sim e^{\pi \sqrt{2d/3}}/(4 \sqrt{3} \cdot d)$ for the number of such partitions, and the fact that the degree of an irreducible representation must divide the order of the group, we get that \[ \frac{N_{d}(2g)}{(d!)^{2g+1}\cdot e^{\pi \sqrt{2d/3}}} = O(1), \] whence it follows that for $d$ sufficiently large, \[ N_{d}(2g) \leq M \cdot (d!)^{2g+2} \] for some universal constant $M>0$. Fix $c>0$ and consider the unique hyperbolic surface $\mathfrak{p}$ homeomorphic to a pair of pants with all boundary components having geodesic length $c$. Then, fix once and for all a choice of hyperbolic metric on $S$ with totally geodesic boundary that comes from gluing together the appropriate number of copies of $\mathfrak{p}$ in some configuration that produces a surface homeomorphic to $S$. By construction, this hyperbolic surface (which, abusing notation slightly, we will also call $S$) admits a pants decomposition $\mathcal{P}= \left\{\gamma_{1},..., \gamma_{m} \right\}$ where each curve has geodesic length $c$ (and where $m= m(S)$ depends only on the topology of $S$). It follows by the Milnor-Svarc lemma that $\gamma$ has geodesic length at most $\tau \cdot n$ for some constant $\tau$ depending only on $S$. Let $\rho: Y \rightarrow S$ be a covering space of degree at most $d$. The choice of metric on $S$ pulls back to $Y$; again by a slight abuse of notation, we refer to the resulting hyperbolic surface as $Y$. Let $\mathcal{P}_{Y}$ denote the full pre-image of $\mathcal{P}$ under $\rho$.
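Stepping back for a moment: Hall's recursion above is simple to evaluate in practice. The following short script (an illustrative aside, not part of the argument; the function name is ours) computes $N_{d}(2)$ for small $d$, recovering the classical counts $1, 3, 13, 71$ of index $d$ subgroups of $F_{2}$.

```python
# Hall's recursion for N_d(r), the number of index-d subgroups of the
# free group F_r:
#   N_d = d * (d!)^(r-1) - sum_{k=1}^{d-1} ((d-k)!)^(r-1) * N_k.
from math import factorial

def hall_subgroup_count(d, r):
    """Compute N_d(r) by building up N_1, ..., N_d via the recursion."""
    N = {}
    for m in range(1, d + 1):
        N[m] = m * factorial(m) ** (r - 1) - sum(
            factorial(m - k) ** (r - 1) * N[k] for k in range(1, m)
        )
    return N[d]

print([hall_subgroup_count(d, 2) for d in range(1, 5)])  # [1, 3, 13, 71]
```

Note, for instance, that $N_{2}(r) = 2^{r} - 1$ matches the count of surjections $F_{r} \rightarrow \mathbb{Z}/2$, since index $2$ subgroups are normal.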
We will first specify a pants decomposition on $Y$, equipped with this metric, such that each curve has length bounded above polynomially in terms only of $c,d$. \begin{lemma} \label{lem: good pants on Y} There is an explicit polynomial $\mathcal{F}(x,y)$ of degree $3$ in $x$ and degree $4$ in $y$ satisfying the following. There exists a pants decomposition $P_{Y}$ of $Y$ containing $\mathcal{P}_{Y}$ as a sub-multi-curve such that each curve in $P_{Y}$ has geodesic length at most $\mathcal{F}(d\cdot |\chi(S)|, d \cdot c)$. \end{lemma} \begin{proof} Each curve in $\mathcal{P}_{Y}$ has geodesic length at most $d \cdot c$. Note that $\mathcal{P}_{Y}$ is a simple multi-curve, but it need not be a pants decomposition on $Y$. In the event that $\mathcal{P}_{Y}$ is not a pants decomposition, there exist complementary components of $Y \setminus \mathcal{P}_{Y}$ which are not pairs of pants; let $Z \subset Y$ be such a subsurface. Then $Z$ is bounded by curves in $\mathcal{P}_{Y}$. There is then a $1$-Lipschitz map from a hyperbolic surface (with boundary) $W$ to $Z$, obtained by perhaps identifying distinct boundary components of $W$. The surface $W$ has area at most $\text{Area}(Y)= d \cdot \text{Area}(S)$, and each boundary component of $W$ has length at most $d \cdot c$. Using the polynomial $\mathcal{F}(x,y)$ mentioned at the end of Section \ref{subsec: HG}, we see that $W$ admits a pants decomposition of total length at most $\mathcal{F}(d \cdot |\chi(S)|, d \cdot c)$. We can push this pants decomposition forward to $Z$, and apply this to each non-pants subsurface in the complement of $\mathcal{P}_{Y}$ to obtain a complete pants decomposition of $Y$ such that each curve has length at most $\mathcal{F}(d \cdot |\chi(S)|, d \cdot c)$. \qedhere \end{proof} \begin{remark} \label{not too short} Note that there is a constant $\kappa$, depending only on $S$, such that each curve in $P_{Y}$ has geodesic length at least $\kappa$.
Indeed, one can set $\kappa$ equal to the length of the systole of $S$, since closed geodesics on $Y$ project to closed geodesics on $S$ of the same length. \end{remark} Now, assume $\gamma \subset S$ is a closed curve that admits a simple elevation $\tilde{\gamma}$ to $Y$; note that $\tilde{\gamma}$ has geodesic length at most $d \cdot \tau \cdot n$, since $\gamma$ has geodesic length at most $\tau \cdot n$ and $\rho$ has degree at most $d$. Our goal will be to bound the Dehn-Thurston coordinates of $\tilde{\gamma}$ with respect to $P_{Y}$. To begin, by Remark \ref{not too short}, each twisting number is bounded above by $(2/\kappa) \cdot d \cdot \tau \cdot n$, for if $\tilde{\gamma}$ twists more than this number of times about any curve in $P_{Y}$ (each of which has geodesic length at least $\kappa$), its geodesic length would exceed $d \cdot \tau \cdot n$. It remains to bound the intersection number $i(\tilde{\gamma}, w)$ for each $w \in P_{Y}$. There are two possible cases, depending on whether or not $w$ is in the pre-image of $\mathcal{P}$. \begin{lemma} \label{lem: w comes from S} There is a constant $\eta>0$, depending only on $\mathfrak{p}$ and $\tau$, such that if $w \in \mathcal{P}_{Y}$, then \[ i(\tilde{\gamma}, w) \leq d \cdot \eta \cdot n. \] \end{lemma} \begin{proof} Recall that the geodesic length of $\gamma$ on $S$ is at most $\tau \cdot n$. We can decompose $\gamma$ into sub-arcs bounded by intersection points with $\mathcal{P}$ and containing no such intersection points in their interiors. Thus each sub-arc lies entirely inside of one pair of pants, each of which is a copy of $\mathfrak{p}$, and has length at least the minimum distance between any two boundary components of $\mathfrak{p}$. If we then set $\eta$ to be $\tau$ divided by this minimum distance, we see that \[ i(\gamma, \rho(w)) \leq \eta \cdot n. \] Since $\rho$ has degree at most $d$, the lemma follows. \end{proof} If, on the other hand, $w$ is not in the pre-image of $\mathcal{P}$, then $w$ comes from the construction outlined in Lemma \ref{lem: good pants on Y}, and $\rho(w)$ lies entirely inside of one copy of $\mathfrak{p}$ on $S$. It follows that \[ \ell_{S}(\rho(w)) \leq \mathcal{F}(d \cdot |\chi(S)|, d \cdot c). \] \begin{comment} For simplicity of notation we will abbreviate the right hand side above by $\mathcal{F}$.
It follows from work of Basmajian \cite{Bas13} that \[ i(\rho(w), \rho(w)) \leq H \cdot \mathcal{F}^{2}, \] where $H$ is a universal constant depending only on $\mathfrak{p}$. \end{comment} We wish to control the number of intersections between $\rho(w)$ and $\gamma$; we will then get a bound on the intersection number between $w$ and $\tilde{\gamma}$ by multiplying by $d$. Decompose $\mathfrak{p}$ into two right-angled hyperbolic hexagons $h_{1}, h_{2}$ by cutting along geodesic arcs orthogonal to the boundary components. Note that each sub-arc of $\rho(w) \cap h_{i}$ ($i=1,2$) is an embedded arc with endpoints on distinct edges of $\partial h_{i}$ not arising from $\partial \mathfrak{p}$. It follows that the number of such sub-arcs is bounded above by $(c/2) \cdot \mathcal{F}$. Indeed, $\rho(w)$ has length at most $\mathcal{F}$, and each edge of $h_{i}$ coming from $\partial \mathfrak{p}$ has length $c/2$ and it constitutes the shortest path between the two edges on either side of it. Since $\gamma$ has geodesic length at most $\tau \cdot n$, by the same argument, the number of components of $\gamma \cap h_{1}$ is at most $(c/2) \cdot \tau \cdot n$. Since a given sub-arc of $\gamma \cap h_{i}$ can intersect one of $\rho(w) \cap h_{i}$ ($i= 1,2$) at most once, we get that \[ i(\rho(w), \gamma) \leq \frac{c^{2} \cdot \tau \cdot n \cdot \mathcal{F}}{4} \Rightarrow i(w, \tilde{\gamma}) \leq \frac{d \cdot c^{2} \cdot \tau \cdot n \cdot \mathcal{F}}{4}. \] Hence, each Dehn-Thurston coordinate of $\tilde{\gamma}$ with respect to $P_{Y}$ is bounded above by \[ 2 \kappa \cdot d \cdot \tau \cdot n + d \cdot \eta \cdot n + \frac{d \cdot c^{2} \cdot \tau \cdot n \cdot \mathcal{F}}{4} =: Q(d),\] which we note depends polynomially on $d$ of degree at most $5$. 
The number of simple closed (multi-)curves on $Y$ with all Dehn-Thurston coordinates bounded above by $Q(d)$ is at most the number of integer points in a cube in $\mathbb{R}^{\dim(\mathcal{T}(Y))}$ with side lengths bounded by $Q(d)$, which is on the order of \[ Q(d)^{d \cdot \dim(\mathcal{T}(S))}. \] It follows that the number of homotopy classes of curves on $S$ with simple lifting degree at most $d$ is bounded above by \[ M \cdot (d+1)!^{2g+p+2} \cdot Q(d)^{d \cdot \dim(\mathcal{T}(S))} \] \[ \sim M \cdot \left( \frac{(d+1)}{e} \right)^{(d+1)(2g+p+2)} \cdot Q(d)^{d \cdot \dim(\mathcal{T}(S))}. \] Taking the logarithm of the above produces \[ (d+1)(2g+p+2) \cdot \log \left( M \cdot (d+1) \right) + d \cdot \dim(\mathcal{T}(S)) \cdot \log(Q(d))-(d+1)(2g+p+2). \] Now, if $d = o(n/\log(n))$, the above grows sub-linearly in $n$. At this stage, we have shown the existence of some $\mathcal{M}$ such that the number of distinct conjugacy classes $[c]$ with a representative $c \in B_{n}$ having lifting degree less than $\mathcal{M} \cdot n/\log(n)$ grows sub-exponentially in $n$. This completes the proof of the lower bound for generic conjugacy classes for closed surfaces, and thus for Theorem \ref{thm: simple lifting degree for closed surfaces}; indeed, $|C_{n}|$ grows exponentially with the same growth rate as the Cayley graph itself (see for instance \cite{CK02}). It therefore remains to prove the lower bound for a generically chosen \textit{element} in the free group setting. Theorem \ref{thm: generic lifting} will be implied by the following lemma, which bounds the size of the intersection of any conjugacy class with $B_{n}$ above by a function that grows at most exponentially, with a strictly smaller growth rate than $|B_{n}|$ itself. \begin{lemma} \label{lem: bounding conjugacy} Let $B_{n}$ denote the ball of radius $n$ in the Cayley graph of a free group $F_{r}$ with respect to a free basis.
Then \[ \#(B_{n} \cap [c]) \leq n \cdot \#(B_{(n-||c||)/2}), \] where $[c]$ is any conjugacy class and $||c||$ is the minimum word length for any representative of $[c]$. \end{lemma} \begin{remark} After constructing the following argument, the authors found a very similar and more concise proof of Lemma \ref{lem: bounding conjugacy} in Parkkonen-Paulin \cite{PP15}. We include our original version here to keep the paper self-contained. \end{remark} \begin{proof} Let $c \in F_{r}$ be a cyclically reduced representative of the conjugacy class $[c]$; $c$ is also a minimal length representative. Let \[ c= s_{1}s_{2}...s_{p} \] be a minimal length spelling of $c$. By an \textit{initial subword} of $c$, we will mean any word of the form $s_{1}...s_{k}$ for $k \leq p$. Similarly, a \textit{tail subword} of $c$ will be any word of the form $s_{k}...s_{p}$ for $k \geq 1$. Say that a word $w \in F_{r}$ satisfies the \textit{no-cancellation condition} with respect to $c$ if $w$ satisfies both of the following: \begin{itemize} \item No tail subword of $w$ coincides with the inverse of an initial subword of $c$; \item No tail subword of $w$ coincides with a tail subword of $c$. \end{itemize} If $w$ satisfies the no-cancellation condition with respect to $c$, then the word length of $wcw^{-1}$ is precisely $2\mbox{length}(w)+ \mbox{length}(c)$. It follows that the number of words $w$ that satisfy the no-cancellation condition and that have the property that $wcw^{-1}$ has word length at most $n$ is at most the number of words of word length at most $(n-||c||)/2$, i.e., $\#(B_{(n-||c||)/2})$. On the other hand, if $x \in F_{r}$ does not satisfy the no-cancellation condition, there is some initial subword $x'$ of $x$ that does satisfy the no-cancellation condition, and which is obtained from $x$ by deleting the tail subword that experiences cancellation with $c$. We will refer to $x'$ as the \textit{no-cancellation reduction} of $x$.
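The combinatorics of the no-cancellation condition are concrete enough to check by machine. The following sketch is our own illustration, not part of the argument (all helper names are ours): it implements free reduction in $F_{2} = \langle a, b \rangle$ and verifies, over a ball of conjugators $w$, that $wcw^{-1}$ has word length exactly $2\mbox{length}(w) + \mbox{length}(c)$ precisely when $w$ satisfies the no-cancellation condition with respect to a cyclically reduced $c$.

```python
# Illustration only (not part of the proof): free reduction in F_2 = <a, b>,
# where 'A' and 'B' denote the inverses of 'a' and 'b'.

def reduce_word(w):
    """Freely reduce a word by repeatedly cancelling adjacent inverse pairs."""
    out = []
    for s in w:
        if out and out[-1] == s.swapcase():
            out.pop()
        else:
            out.append(s)
    return ''.join(out)

def inverse(w):
    """The inverse word: reverse the letters and invert each one."""
    return ''.join(s.swapcase() for s in reversed(w))

def no_cancellation(w, c):
    """The no-cancellation condition of the lemma: no tail subword of w is
    the inverse of an initial subword of c, and none is a tail subword of c."""
    inv_initials = {inverse(c[:k]) for k in range(1, len(c) + 1)}
    tails_c = {c[i:] for i in range(len(c))}
    tails_w = (w[i:] for i in range(len(w)))
    return all(t not in inv_initials and t not in tails_c for t in tails_w)

def reduced_words(radius):
    """All freely reduced words of length at most radius (the ball B_radius)."""
    words, frontier = [''], ['']
    for _ in range(radius):
        frontier = [w + s for w in frontier for s in 'aAbB'
                    if not w or w[-1] != s.swapcase()]
        words += frontier
    return words

c = 'abAB'  # cyclically reduced, hence of minimal length in its class
for w in reduced_words(4):
    conj_len = len(reduce_word(w + c + inverse(w)))
    if no_cancellation(w, c):
        assert conj_len == 2 * len(w) + len(c)  # no cancellation at all
    else:
        assert conj_len < 2 * len(w) + len(c)   # some junction cancels
```

In particular, the no-cancellation conjugators producing conjugates of length at most $n$ have length at most $(n - ||c||)/2$, which is how the factor $\#(B_{(n-||c||)/2})$ in the statement arises.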
Up to multiplication by a power of $c$, to each word $x'$ satisfying the no-cancellation condition, there are at most $\mbox{length}(c)$ words whose no-cancellation reduction is $x'$: these correspond to multiplying $x'$ on the right by all possible tail subwords of $c$. Since $wc^{k}$ and $w$ conjugate $c$ to the same element, it follows that for every conjugator $w$ satisfying the no-cancellation condition, there are at most $\mbox{length}(c) \leq n$ many conjugates coming from conjugators that do not satisfy the no-cancellation condition and whose no-cancellation reduction is $w$. Thus the total number of conjugates of $c$ with length at most $n$ is at most $n \cdot \#(B_{(n-||c||)/2})$, as desired. \end{proof} \begin{comment} We next turn to surface groups (and for ease of notation, we will denote $\pi_{1}(S_{g})$ by $G$), again letting $B_{n}$ denote the ball of radius $n$ in the Cayley graph associated to a standard generating set and fix some $c \in B_{n}$ which is a minimal length representative of the conjugacy class $[c]$. We are inspired by the proof of Proposition $5$ in \cite{PP15}, in which it is shown that if $[c]$ is any infinite conjugacy class in a finitely generated $\delta$-hyperbolic group, then the exponential growth rate of $\#([c] \cap B_{n})$ is exactly one half of the exponential growth rate of the group itself. We briefly summarize the argument there, modifying it to suit our slightly different needs; for example, we are only interested in an upper bound but we also need to keep track of how the bound is impacted by $||c||$: For each conjugate $w= gcg^{-1}$ of $c$, consider a geodesic axis $A_{w}$. For the moment, consider only the conjugates $w$ so that $A_{w}$ and $A_{c}$ do not cross. Formally, the endpoints of $A_{w}$ and $A_{c}$ do not link along $\partial_{\infty}G= S^{1}$. There are then at most on the order of $\#(B_{n})$ such elements so that $d_{G}(A_{c}, A_{w})$ is at most $n$.
Indeed, to each $A_{w}$ one can associate a right coset $g \cdot \langle c \rangle$ of the centralizer of $c$; distance between $A_{c}$ and $A_{w}$ can be identified with the minimal word length of some $h \in g \cdot \langle c \rangle$. Note that the distance $d_{G}(g \langle c \rangle , A_{c})$ is, up to a uniformly bounded additive error, equal to $d_{G}(g, A_{c})$. We next claim that the distance between $gcg^{-1}$ and $e$ (that is, the word length of $gcg^{-1}$) is, up to a uniformly bounded additive error, equal to twice the distance between $g$ and $A_{c}$, plus $||c||$. Indeed, the word length of $gcg^{-1}$ is equal to $d_{G}(c^{-1}g^{-1}, g^{-1})$. Consider the geodesic quadrilateral with vertices \[ \left\{g^{-1}, \pi_{c}(g^{-1}), c^{-1}\cdot \pi_{c}(g^{-1}) \sim \pi_{c}(c^{-1}g^{-1}), c^{-1}g^{-1} \right\}, \] where $\pi_{c}$ denotes the nearest point projection to $A_{c}$ (coarsely well-defined because $A_{c}$ is geodesic and $G$ is hyperbolic) and we use $\sim$ to denote the relationship of being within a uniformly bounded neighborhood (of size determined by the hyperbolicity constant) of one another. By hyperbolicity, the edge $[g^{-1}, c^{-1}g^{-1}]$ is contained in a uniformly bounded neighborhood of the union of the other three sides. The claim then follows from the fact that the sides $[g^{-1}, \pi_{c}(g^{-1})]$ and $[c^{-1}g^{-1}, \pi_{c}(c^{-1}g^{-1})]$ remain at least a distance of $||c||$ apart. Since the number of such axes within distance $n$ of $A_{c}$ is at most on the order of $\#(B_{n})$, it follows that the number of corresponding conjugates of $c$ with word length at most $n$ is at most some $T \cdot \#(B_{(n-||c||)/2})$. The multiplicative constant $T$ accounts for the various additive errors mentioned above and which depend only on the hyperbolicity constant.
The argument is finished by associating to each conjugate $dcd^{-1}$ one of boundedly many elements $\left\{q_{i} \right\}_{i=1}^{k}$ (one of which is $c$ itself) of $[c]$ so that $A_{q}$ is disjoint from $A_{dcd^{-1}}$. Indeed, the images of $A_{c}$ that cross $A_{c}$ correspond to conjugates of the form $c^{n} \cdot q_{i}$ for some $i$ and some $n \in \mathbb{Z}$; moreover, $A_{c^{n}q_{i}cq_{i}^{-1}c^{-n}}$ is disjoint from $A_{q_{i}cq_{i}^{-1}}$, assuming the latter crosses $A_{c}$. One can bound $k$ above by $||c||^{2}$; indeed, fix some hyperbolic metric on $S$ and let $\gamma$ denote a geodesic representative for $[c]$. Then the images of $A_{c}$ crossing a given fundamental lift of $\gamma$ correspond one-to-one to self-intersections of $\gamma$, and as pointed out at the beginning of Section \ref{section: intersection}, $\gamma$ possesses at most $||c||^{2}$ many self-intersections. \end{comment} We now complete the proof of Theorem \ref{thm: generic lifting}: letting $J(n)$ denote the number of conjugacy classes with minimal word length at most $n$ and with the property that the simple lifting degree is less than $\mathcal{M} \cdot n/\log(n)$, Lemma \ref{lem: bounding conjugacy} implies that the total number of elements in $B_{n}$ with lifting degree less than $\mathcal{M} \cdot n/\log(n)$ is at most \[ J(n) \cdot n \cdot \#(B_{n/2}). \] Since $J(n)$ grows sub-exponentially, we have that \[ \lim_{n \rightarrow \infty} \frac{J(n) \cdot n \cdot \#(B_{n/2})}{\#(B_{n})} = 0, \] as desired. \end{proof} We next turn our attention to $\mathbb{P}_{w}$ in the setting of punctures. We assume that $\mu$ is supported on a minimal rank symmetric generating set and that it assigns equal weight to each generator. In this setting, the $\mathbb{P}_{w}^{(n)}$ probability of a given event is simply the proportion of length $n$ sample paths terminating at an element described by that event.
When the Cayley graph is a tree, there is a one-to-one correspondence between elements and non-backtracking paths originating at the identity; we will use this to derive the desired result for $\mathbb{P}_{w}$ from the result above regarding $\mathbb{P}_{g}$ in the presence of punctures or boundary components. \begin{remark} \label{PS versus hitting} It would be interesting to study $\mathbb{P}_{w}$ in the setting of more general distributions $\mu$. Our strategy is to understand $\mathbb{P}_{w}$ from Theorem \ref{thm: generic lifting} and the properties of $\mathbb{P}_{g}$. The challenge of extending this strategy so that it applies to more general choices of $\mu$ relates to the difficulty of comparing the hitting measure for a random walk with the Patterson-Sullivan measure on the boundary of a hyperbolic group $G$-- see for instance Proposition 1.7 of \cite{GTT18}. \end{remark} \begin{proposition} \label{prop: lifting degree for walks with punctures} With high $\mathbb{P}_{w}$ probability, $w_{n}$ has simple lifting degree at least (on the order of) $n/\log(n)$ when $S$ has a puncture. \end{proposition} \begin{proof} Under the assumptions made on $\mu$, the $\mathbb{P}^{(n)}_{w}$ probability that $w_{n}$ equals some given $x \in \pi_{1}(S)$ is equal to the proportion of length $n$ sample paths terminating at $x$. Let $T^{(n)}(x)$ denote the number of length $n$ sample paths terminating at $x$. Since the Cayley graph is a tree, there is a unique non-backtracking path between the identity and any such $x$-- denote this path by $p(x)$. We claim that $T^{(n)}(x)$ is a function only of the distance $d(x)$ between $x$ and the identity. For example, if $d(x)= n$, $T^{(n)}(x)= 1$. If $d(x) = n- C$ for some $C>0$, any sample path terminating at $x$ must consist of a family of loops with total length $C$ appended to $p(x)$.
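The claim that $T^{(n)}(x)$ depends only on $d(x)$ is also easy to confirm by brute force. The following sketch is our own illustration (assuming, for concreteness, the rank-two free group with $\mu$ uniform on $\{a^{\pm 1}, b^{\pm 1}\}$): it enumerates all $4^{n}$ sample paths of length $n = 6$ in the $4$-valent tree and groups the endpoint counts by distance.

```python
# Brute-force check (illustration only, not from the paper) that in the
# Cayley tree of F_2 = <a, b>, the number T^{(n)}(x) of length-n sample
# paths from the identity to x depends only on d(x).
from collections import defaultdict
from itertools import product

GENS = 'aAbB'  # 'A' = a^{-1}, 'B' = b^{-1}

def reduce_word(w):
    """Freely reduce a word; the result labels a vertex of the tree."""
    out = []
    for s in w:
        if out and out[-1] == s.swapcase():
            out.pop()
        else:
            out.append(s)
    return ''.join(out)

n = 6
counts = defaultdict(int)  # vertex -> number of length-n paths ending there
for path in product(GENS, repeat=n):
    counts[reduce_word(path)] += 1

# Group the values T^{(n)}(x) by the distance d(x) = len(reduced word):
by_dist = defaultdict(set)
for x, t in counts.items():
    by_dist[len(x)].add(t)

# T^{(n)}(x) is constant on each sphere, and the unique non-backtracking
# path is the only length-n path reaching the sphere of radius n:
assert all(len(ts) == 1 for ts in by_dist.values())
assert by_dist[n] == {1}
```

The counts are constant on each sphere because a root-fixing automorphism of the regular tree carries any vertex at distance $k$ to any other, while permuting the length-$n$ walks from the root.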
Given any loop family (defined to be a finite collection of loops based at potentially different vertices) in the Cayley graph, by its \textit{type} we will mean the result of translating each loop in the family to the identity. There are only finitely many types of loop families with total length $C$; an arbitrary sample path contributing to $T^{(n)}(x)$ is given by choosing one such family, and for each loop $\ell$ in that family, a choice of vertex along $p(x)$ at which it is based. Let then $T^{k}$ denote the common value of $T^{(n)}(x)$ for any $x \in \mathcal{S}_{k}$, the sphere of radius $k$ about the identity in the Cayley graph. From the proof of Theorem \ref{thm: generic lifting}, if $J(n)$ denotes the number of elements in $B_{n}$ with lifting degree less than $\mathcal{M} \cdot n/\log(n)$ then \[ \lim_{n \rightarrow \infty} \frac{J(n)}{\#(B_{\lambda \cdot n/2})} = 0, \] for all $\lambda> 1$. On the other hand, if $H(n)$ denotes the number of elements in $\mathcal{S}_{n}$ with lifting degree \textit{at least} $\mathcal{M} \cdot n/\log(n)$, $H(n)$ grows on the order of $\#(B_{n})$ (indeed, $\#(\mathcal{S}_{n})$ grows exponentially with the same growth rate as $\#(B_{n})$). It follows that the number of length $n$ sample paths terminating at an element in $\mathcal{S}_{k}$ with lifting degree less than $\mathcal{M} \cdot n/\log(n)$ is at most \[ J(k) \cdot T^{k}. \] \begin{comment} Let $\mathcal{A}_{n}$ denote the annulus \[ \mathcal{A}_{n} = \left\{ x \in \pi_{1}(S): L \cdot n \leq d(x) \leq n \right\}. \] We can then decompose $\mathcal{A}_{n}$ into spheres $\mathcal{S}_{L \cdot n},..., \mathcal{S}_{n}$, where \[ \mathcal{S}_{k} = \left\{x \in \pi_{1}(S): d(x) = k \right\}, \] and by the previous paragraph, $T^{(n)}$ is constant on each sphere. Moreover, the number of elements in each such sphere grows exponentially in $n$, where the coefficient of $n$ is bounded below in terms only of $L$.
Thus, by the argument used in the proof of Theorem \ref{thm: generic lifting}, as $n \rightarrow \infty$, the proportion of elements in each sphere with lifting degree at least $\frac{1}{\mathcal{M}} \cdot n/\log(n)$ goes to $1$. \end{comment} Now, fix some small $\epsilon>0$. By Theorem \ref{thm: limit of progress exists}, there is some function $f(n)$ satisfying $\lim_{n \rightarrow \infty} f(n)/n = 0$ so that for large enough $n$, a fraction of at least $1- \epsilon$ of all length $n$ sample paths terminate on some sphere $\mathcal{S}_{k}$ for $m \cdot n - f(n) \leq k \leq m \cdot n + f(n)$. It follows that, if $\mathcal{P}_{n}$ denotes the subset of length $n$ sample paths terminating on such a sphere, then $\mathcal{P}_{n}$ accounts for at least a proportion of $1-\epsilon$ of all length $n$ sample paths, and the number of length $n$ sample paths in $\mathcal{P}_{n}$ terminating at an element with lifting degree less than $\mathcal{M} \cdot n/\log(n)$ is at most \[ J(m \cdot n - f(n)) \cdot T^{m \cdot n- f(n)} +...+ J(m \cdot n +f(n)) \cdot T^{m \cdot n + f(n)} =: T(n). \] On the other hand, the number of length $n$ sample paths in $\mathcal{P}_{n}$ terminating at a point with lifting degree at least $\mathcal{M} \cdot n / \log(n)$ is \[ H(m \cdot n- f(n)) \cdot T^{m \cdot n- f(n)} +... + H(m \cdot n + f(n)) \cdot T^{m \cdot n + f(n)} =: G(n). \] We claim that \[ \lim_{n \rightarrow \infty}\frac{T(n)}{G(n)} = 0. \] If this is so, then it would follow that for $n$ sufficiently large, the above ratio is at most $\epsilon$, and therefore that the proportion of length $n$ sample paths terminating at an element with lifting degree less than $\mathcal{M} \cdot n/\log(n)$ is at most $\epsilon + \epsilon \cdot (1-\epsilon)$. Letting $\epsilon \rightarrow 0$, we are done. It therefore remains to prove that $G(n)$ overwhelms $T(n)$ as $n \rightarrow \infty$. Fixing $n$, choose $k$ with $m \cdot n - f(n) \leq k \leq m \cdot n + f(n)$ maximizing the function $T^{k}$.
Then \[ T(n) \leq 2f(n) \cdot J(m \cdot n + f(n)) \cdot T^{k}, \hspace{2 mm} \mbox{and} \hspace{2 mm} G(n) \geq H(k) \cdot T^{k}.\] Therefore, \[ \frac{T(n)}{G(n)} \leq \frac{2f(n) \cdot J( m \cdot n +f(n)) \cdot T^{k}}{H(k) \cdot T^{k}} = \frac{2f(n) \cdot J(m \cdot n +f(n))}{H(k)}. \] Since $k \geq m \cdot n- f(n)$, the denominator grows exponentially in $n$ with growth rate at least that of $\#(B_{m \cdot n})$. On the other hand, the growth rate of the numerator is at most that of $\#(B_{m \cdot n/2})$. This completes the proof. \end{proof} We conclude this section with a weaker version of Theorem \ref{thm: simple lifting degree for punctures} that holds for a much wider class of generating sets. Theorem $1.1$ of Sisto-Taylor \cite{ST19} states that if $Q$ is an infinite index quasiconvex subgroup of a hyperbolic group $G$ and $\mathcal{S}$ is a fixed finite generating set containing a generating set for $Q$, then if $\mathbb{P}_{w}$ denotes the probability operator induced by a random walk on the Cayley graph $\Gamma_{\mathcal{S}}$ driven by some $\mu$ with finite support generating $G$, there is some $C>0$ so that \[ \mathbb{P}_{w}( \max_{xQ \in G/Q} d_{xQ}(\pi_{xQ}1, \pi_{xQ}w_{n}) \in [C^{-1}\log(n), C \log(n)]) \rightarrow 1. \] Here $\pi_{xQ}$ denotes the nearest point projection to the left coset $xQ$ (well-defined because $Q$-- and therefore $xQ$-- is quasiconvex). In our context, $G$ will be $\pi_{1}(S)$ for $S$ a surface with non-empty boundary and $Q$ will be the infinite cyclic subgroup associated to a boundary component. Choose a generating set for $\pi_{1}(S)$ containing a generator for the infinite cyclic subgroup associated to the boundary component. We will identify $\mbox{Cay}_{S}(G)$ with its image under its orbit map $\mathcal{O}$ into the universal cover $\mathcal{U}$ which, after choosing a complete hyperbolic metric on $S$ with totally geodesic boundary, we can associate with a subset of $\mathbb{H}^{2}$. 
With this interpretation, the work of Sisto-Taylor \cite{ST19} implies that the geodesic segment from $\mathcal{O}(1)$ to $\mathcal{O}(w_{n})$ projects to a path of length on the order of $\log(n)$ on the geodesic in $\mathbb{H}^{2}$ projecting to $\partial S$. This implies that there is an annular cover $A$ of $S$ and a lift of $w_n:S^1\to S$ to $\widetilde w_n:\mathbb{R}\to A$ so that both ends of $\widetilde w_n$ lie on one of the two ends of $A$, and so that $\widetilde{w_{n}}$ self-crosses on the order of $\log(n)$ times. Lower bounds on the degree of $w_n$ can now be made accessible via the following lemma, motivated by the work in \cite{Gaster16}: \begin{lemma} \label{lem:degree} Suppose that $\pi:A\to S$ is an annular cover, and $\widetilde{\gamma}$ is a lift of the curve $\gamma:S^1\to S$ to $\widetilde \gamma: \mathbb{R}\to A$ so that both ends of $\widetilde \gamma$ lie on a common end of $A$. Then $\deg\gamma\ge \iota(\widetilde{\gamma},\widetilde{\gamma})$. \end{lemma} \begin{proof} Suppose that $p':S'\to S$ is a finite-sheeted cover of $S$ so that $\gamma$ has a simple elevation $\gamma'\subset S'$, and so that $\deg p'=\deg \gamma$. Let $\alpha\subset S$ be the image of the core curve of $A$. Because $p'$ is a finite-sheeted cover, there is a smallest integer $r\ge 1$ so that $\alpha^r$ admits an elevation $\widetilde \alpha$ to $S'$ that is `next to' $\gamma'$. Consider the degree $r$ covering $A_r\to A$, and let $\widetilde \gamma_r \subset A_r$ be a lift of $\widetilde \gamma$. By the lifting criterion, the map $A_r\to S$ admits a lift to $A_r\to S'$ so that the core curve of $A_r$ maps to $\widetilde \alpha$, and under which $\widetilde\gamma_r$ maps to $\gamma'$. Because $\gamma'$ is simple, $\widetilde \gamma_r$ is simple as well. Observe that the self-intersection number $m:=\iota(\widetilde \gamma,\widetilde \gamma)$ measures the number of times $\widetilde \gamma$ winds around the core of $A$.
Because $\widetilde \gamma_r$ is simple (i.e.~embedded), we must have $r\ge m$. It is evident that $\deg p' \ge r$, as points on $\alpha$ have at least $r$ preimages under $p'$. Therefore $\deg p' \ge m$ as claimed. \end{proof} It follows that when $S$ has a boundary component $b$ and we choose a finite generating set $\mathcal{S}$ containing a generator for the infinite cyclic subgroup coming from $b$, then if $\mu$ is \textit{any} distribution with support $\mathcal{S}$ and $\mathbb{P}_{w}$ is the probability operator corresponding to a random walk on $\Cay_{\mathcal{S}}$ driven by $\mu$, there is $K$ so that \[ \lim_{n \rightarrow \infty} \mathbb{P}_{w} \left[ \deg(w_{n}) \geq K \cdot \log(n) \right] = 1.\] \begin{comment} Finally, we turn our attention to $\mathbb{P}_{w}$ in the case of closed surfaces; we assume that the Cayley graph is with respect to a standard generating set and again that $\mu$ assigns equal weight to each generator. The argument above used the fact that the Cayley graph for $\pi_{1}(S)$ was a tree in order to relate non-backtracking sample paths to elements themselves. In spirit, our strategy is the same as in Proposition \ref{prop: lifting degree for walks with punctures}, but complicated by the fact that non-backtracking paths no longer uniquely correspond to elements. To address this, we use the Carne-Varopoulos theorem \ref{CV theorem} stated in Section \ref{subsec: R} and careful estimates for the rate of escape. \begin{proposition} \label{prop: lifting degree for walks closed} Let $w_{n}$ denote the result of an $n$-step random walk on $\Cay_{\mathcal{S}}$ associated to a uniform probability distribution on a standard generating set $\mathcal{S}$. Then so long as $g \ge 3$, with overwhelming $\mathbb{P}_{w}$ probability, the lifting degree of $w_{n}$ is at least on the order of $n / \log(n)$.
\end{proposition} \begin{proof} As usual, let $B_{n}$ denote the ball of radius $n$ about the identity in the Cayley graph-- which henceforth we will denote by $\Gamma_{g}$-- of the surface group $\pi_{1}(S_{g})$ with respect to the standard generating set. Let $\pi: F_{2g} \rightarrow \pi_{1}(S_{g})$ be the standard projection map from the free group on $2g$ generators to $\pi_{1}(S_{g})$. Given a sample path $\gamma$, denote by $d(\gamma)$ the location in $\Gamma_{g}$ of its terminal vertex. Theorem \ref{thm:MT} implies that there is some constant $L>0$ such that an overwhelming proportion of sample paths $\gamma$ satisfy the property that $d(1, d(\gamma)) \ge L \cdot n$. Note that we can assume that $L< (4g-1)/4g$, the rate of escape for a random walk on a Cayley graph for $F_{2g}$ driven by a uniform distribution supported on a basis. Indeed, this follows from the fact that word length is non-increasing under the projection map $\pi$. Trivially, one has \begin{dmath} \label{trivial} {\mathbb{P}_{w}(\deg(w_{n}) < \mathcal{M} \cdot n/\log(n))} = {\mathbb{P}_{w}(\deg(w_{n}) < \mathcal{M} \cdot n/\log(n) | d(1,w_{n}) < L \cdot n)} + {\mathbb{P}_{w}(\deg(w_{n}) < \mathcal{M} \cdot n/\log(n) | d(1,w_{n}) \geq L \cdot n)}. \end{dmath} The first summand converges to $0$ as $n \rightarrow \infty$ by Theorem \ref{thm:MT}, so it suffices to consider the second summand. Moreover, let $|w_{n}|$ denote the minimum word length of any representative of the conjugacy class $[w_{n}]$. We claim that with overwhelming $\mathbb{P}_{w}$ probability, $|w_{n}|$ is at least $Ln/2$. Indeed, by Remark \ref{drift versus translation}, the probability that the stable translation length $\tau(w_{n})$ is at least $Ln/2$ converges to $1$ as $n \rightarrow \infty$, and $\tau(w_{n})$ is a lower bound for $|w_{n}|$. Thus we can again split the second summand of \ref{trivial} into two expressions according to whether $|w_{n}|$ is less than $Ln/2$ or not, and focus only on the latter.
Letting $\overline{x}$ denote a minimum word length representative of the conjugacy class containing $x$, we then have: \begin{equation} \label{strategy} \mathbb{P}_{w}\big[ \deg(w_{n}) < \mathcal{M} \cdot n/\log(n)| d(1,w_{n}) \geq L \cdot n, |w_{n}| \geq Ln/2 \big] \leq \max_{x \in B_{n} \setminus B_{L \cdot n - 1}} \mathbb{P}_{w}(w_{n} = x) \cdot \max_{\overline{x} \in B_{n} \setminus B_{\frac{Ln}{2} - 1}} \#([x] \cap B_{L \cdot n}) \cdot J(n), \end{equation} where $J(n)$ denotes the number of elements in $B_{n}$ with lifting degree less than $\mathcal{M} \cdot n/\log(n)$. In words, \ref{strategy} says that the probability that $w_{n}$ has small lifting degree is at most the probability that $w_{n}$ is equal to any given element (where we have already conditioned on the event that $w_{n}$ has sufficiently large word length), multiplied by the size of the intersection of that element's conjugacy class with the appropriately sized ball (where we have already conditioned on the event that a minimum length representative of that conjugacy class is sufficiently large), multiplied by the number of conjugacy classes with small lifting degree. From the proof of Theorem \ref{thm: generic lifting}, we have that $J(n)$ grows sub-exponentially in $n$. We are therefore done if we can show that the quantity \begin{equation} \label{zooming in} \max_{x \in B_{n} \setminus B_{L \cdot n - 1}} \mathbb{P}_{w}(w_{n} = x) \cdot \max_{\overline{x} \in B_{n} \setminus B_{\frac{Ln}{2} - 1}} \#([x] \cap B_{L \cdot n}) \end{equation} decays exponentially in $n$. In equations \ref{strategy} and \ref{zooming in}, we only account for the conjugates of $\overline{x}$ with word length at most $L \cdot n$, as opposed to considering conjugates with word length up to $n$. The justification for this is Theorem \ref{thm: limit of progress exists}: with overwhelming $\mathbb{P}_{w}$ probability, the word length of $w_{n}$ is on the order of $L \cdot n$ for some $L>0$.
Technically, one should then also consider conjugates of $\overline{x}$ with word length up to $L \cdot n + f(n)$ for some function $f$ that grows sub-linearly in $n$, but the contribution of such an $f$ will not impact the growth estimates we make below. By the Carne-Varopoulos theorem (together with the estimate for the spectral radius $\rho_{g}$ mentioned in Section \ref{subsec: R} due to Zuk \cite{Zuk97}), the first factor is at most \[ \left( \frac{1}{\sqrt{g}} \right)^{n} \cdot e^{-L^{2}n/2} = e^{-\left(\frac{L^{2} + \log(g)}{2} \right)n }. \] By Lemma \ref{lem: bounding conjugacy}, the second factor is at most \[ T \cdot n^{2} \cdot \#(B_{n \cdot (L - L/2)/2}) = T \cdot n^{2} \cdot \#(B_{L\cdot n/4}). \] Since the volume growth rate of $\Gamma_{g}$ is at most that of the free group $F_{2g}$ which has growth rate $4g-1$, this quantity is at most on the order of \[ n^{2} \cdot e^{L \cdot \log(4g-1)n/4}. \] Thus, \ref{zooming in} can be bounded above (up to a uniform multiplicative constant not depending on $n$) by \begin{equation} \label{final bound} n^{2} \cdot e^{\left( L \cdot \frac{\log(4g-1)}{4} - \frac{L^{2} + \log(g)}{2} \right) \cdot n} \end{equation} \begin{equation} \label{final bound 2} \leq n^{2} \cdot e^{\left( \frac{4g-1}{4g} \cdot \frac{\log(4g-1)}{4} - \frac{L^{2} + \log(g)}{2} \right) \cdot n}, \end{equation} where \ref{final bound 2} is obtained from \ref{final bound} by plugging in an upper bound for $L$ when it appears positively in the exponent. To account for the appearance of $-L^{2}$, we need a lower bound for $L$ as a function of $g$. For this, we turn to Chapter $8$ of Woess \cite{Woess02} (specifically, see the proof of Proposition $8.2$).
There, it is shown that $L$ can be chosen to be at least the smallest value $t$ so that \[ \left( 4g \cdot \left( \frac{\rho_{g}}{4g} \right) \right)^{t} \cdot \rho_{g} < 1, \] where the first factor of $4g$ is a bound on the valence of any vertex, and the $4g$ appearing in the denominator of $\rho_{g}/4g$ corresponds to the minimum non-zero transition probability (which in this case is $1/4g$ because we are choosing the distribution that assigns equal weight to each generator). Solving for $t$ and using the bound $\rho_{g} \leq 1/\sqrt{g}$ yields \[ L \geq t \geq \frac{\log(\sqrt{g})}{\log(16g^{3/2})}. \] Plugging this in for the remaining occurrence of $L$ in \ref{final bound 2} yields a negative growth rate for all $g \ge 3$, and therefore for all such values of $g$, \ref{zooming in} decays exponentially fast with $n$, as desired. \end{proof} We conjecture that Proposition \ref{prop: lifting degree for walks closed} holds when $g=2$ as well: \begin{conjecture} \label{genus 2} There is $\mathcal{M}>0$ so that for a random walk on $\Gamma_{2}$ driven by a uniform distribution on a standard generating set, one has \[ \lim_{n \rightarrow \infty} \mathbb{P}_{w}(\deg(w_{n}) \geq \mathcal{M} n/\log(n)) = 1. \] \end{conjecture} One could try to approach Conjecture \ref{genus 2} with an argument similar to the one used to prove Proposition \ref{prop: lifting degree for walks closed}, but one would need at least one of the following: \begin{itemize} \item a smaller upper bound for $\rho_{2}$ than $1/\sqrt{2}$; \item a smaller upper bound for the growth rate of $\Gamma_{2}$ than $\log(4g-1) = \log(7)$; \item a smaller upper bound for the rate of escape than $(4g-1)/4g= 7/8$. \end{itemize} Nagnibeda \cite{Nag99} establishes an upper bound of \[ \rho_{2} \leq .6629 < 1/\sqrt{2}, \] but replacing $1/\sqrt{2}$ with this improved bound in \ref{final bound 2} produces a barely positive growth rate of roughly $.0056$.
In any case, foundational work of Kesten \cite{Kes59} implies a lower bound of \[ \rho_{2} \geq \frac{\sqrt{7}}{4} \cong .6614, \] and replacing $1/\sqrt{2}$ with this theoretical lower bound in \ref{final bound 2} still yields a positive growth rate of roughly $.0033$. Moreover, the true growth rate of $\Gamma_{2}$ is roughly $\log(6.97984)$ (see for instance \cite{Belk19}); replacing $\log(7)$ with this slightly smaller number-- even in combination with the aforementioned improvements-- does not yield a negative growth rate. Thus, pushing the argument in Proposition \ref{prop: lifting degree for walks closed} further for $\Gamma_{2}$ necessitates a tighter upper bound for the rate of escape. \end{comment} \begin{comment} Lower bounds on the lifting degree are more difficult to establish and we are able to obtain them when $S$ has a boundary component. For this, we appeal to work of Sisto-Taylor: \begin{theorem} \label{thm:SistoTaylor} Let $G$ be a hyperbolic group, $Q$ an infinite index quasiconvex subgroup, and $S$ a finite generating set of $G$ containing a generating set for $Q$. Let $\left\{C(Y) \right\}_{Y \in \mathcal{S}}$ be the set of induced subgraphs of left cosets of $Q$ in the Cayley graph $\mbox{Cay}_{S}(G)$. For $Y \in \mathcal{S}$ let $\pi_{Y}: G \rightarrow C(Y)$ denote a closest point projection in $\mbox{Cay}_{S}(G)$. Then there is $C$ so that \[ \mathbb{P} \big[ \sup_{Z \in \mathcal{S}} d_{C(Z)}(\pi_{Z}(1), \pi_{Z}(w_{n})) \in [C^{-1} \log n, C \log n] \big] \xrightarrow[\substack{ n \to \infty}]{} 1. \] \end{theorem} In our context, $G$ will be $\pi_{1}(S)$ and $Q$ will be the infinite cyclic subgroup associated to a boundary component.
The generating set $S$ will be as pictured [figure], and we will identify $\mbox{Cay}_{S}(G)$ with its image under its orbit map $\mathcal{O}$ into the universal cover $\mathcal{U}$ which, after choosing a complete hyperbolic metric on $S$ with totally geodesic boundary, we can associate with a subset of $\mathbb{H}^{2}$. With this interpretation, Theorem \ref{thm:SistoTaylor} implies that the the geodesic segment from $\mathcal{O}(1)$ to $\mathcal{O}(w_{n})$ projects to a path of length on the order of $\log(n)$ to the geodesic in $\mathbb{H}^{2}$ projecting to $\partial S$. This implies that there is an annular cover $A$ of $S$ and a lift of $w_n:S^1\to S$ to $\widetilde w_n:\mathbb{R}\to A$ so that both ends of $\widetilde w_n$ lie on one of the two ends of $A$. Lower bounds on the degree of $w_n$ can now be made accessible via: \begin{lemma} \label{lem:degree} Suppose that $\pi:A\to S$ is an annular cover, and $\widetilde{\gamma}$ is a lift of the curve $\gamma:S^1\to S$ to $\widetilde \gamma: \mathbb{R}\to A$ so that both ends of $\widetilde \gamma$ lie on a common end of $A$. Then $\deg\gamma\ge \iota(\widetilde{\gamma},\widetilde{\gamma})$. \end{lemma} \begin{proof} Suppose that $p':S'\to S$ is a finite-sheeted cover of $S$ so that $\gamma$ has a simple elevation $\gamma'\subset S'$, and so that $\deg p'=\deg \gamma$. Let $\alpha\subset S$ be the image of the core curve of $A$. Because $p'$ is a finite-sheeted cover, there is a smallest integer $r\ge 1$ so that $\alpha^r$ admits an elevation $\widetilde \alpha$ to $S'$ that is `next to' $\gamma'$. Consider the degree $r$ covering $A_r\to A$, and let $\widetilde \gamma_r \subset A_r$ be a lift of $\widetilde \gamma$. By the lifting criterion, the map $A_r\to S$ admits a lift to $A_r\to S'$ so that the core curve of $A_r$ maps to $\widetilde \alpha$, and under which $\widetilde\gamma_r$ maps to $\gamma'$. Because $\gamma'$ is simple, $\widetilde \gamma_r$ is simple as well. 
Observe that the self-intersection number $m:=\iota(\widetilde \gamma,\widetilde \gamma)$ measures the number of times $\widetilde \gamma$ winds around the core of $A$. Because $\widetilde \gamma_r$ is simple (i.e.~embedded), we must have $r\ge m$. It is evident that $\deg p' \ge r$, as points on $\alpha$ have at least $r$ preimages under $p'$. Therefore $\deg p' \ge m$ as claimed. \end{proof} \begin{remark} In the most natural setting for the above lemma, the core of the annulus $A$ embeds in $S$. \end{remark} \begin{remark} It is striking that the conclusion of Theorem~\ref{thm:SistoTaylor} applies to \emph{any} infinite-index quasiconvex subgroup. In the application to non-closed surfaces, one uses the cyclic subgroup corresponding to a boundary component. If one wants to obtain lower bounds on the degree of $w_n$ in the closed setting, one might hope to leverage a more detailed choice of subgroup (or subgroups?) $Q$ that allows one to simultaneously obtain a large projection to an annulus, and the condition on the ends $w_n$ that allows one to employ Lemma~\ref{lem:degree}. \end{remark} \end{comment} \section{Pushing-point dilatation of random curves} \label{section: point push} We conclude the paper with a brief proof of Theorem \ref{thm: point push}. \begin{proof} For $\mathbb{P}_{w}$, Theorem \ref{thm:MT} implies that the curve complex translation length $\tau_{\mathcal{C}}(f_{w_{n}})$ is larger than $L \cdot n$ with high $\mathbb{P}_{w}$ probability. It then follows from the coarse-Lipschitzness and $\mathcal{MCG}$-equivariance of the systole map $\mbox{sys}$ (see Section \ref{subsec: CC}) that the Teichm{\"u}ller translation length $\tau_{\mathcal{T}}(f_{w_{n}})$ is at least some $(L/K) \cdot n$ with high $\mathbb{P}_{w}$ probability. Since Teichm{\"u}ller translation length is nothing but the log of dilatation, one then has \[ \mbox{dil}(f_{w_{n}}) \geq e^{(L/K) \cdot n}\] with high $\mathbb{P}_{w}$ probability. 
On the other hand, there is some $J>0$ so that \[ \mbox{dil}(f_{w_{n}}) \leq e^{J \cdot n}, \] just because the orbit map of $\mathcal{MCG}(S)$ into $\mathcal{T}(S)$ is (coarsely) Lipschitz and $w_{n}$ has word length at most $n$. Meanwhile, Theorem \ref{thm: intersection} gives the existence of some $L'\ge 1$ so that (with high $\mathbb{P}_{w}$-probability), \[ \frac{n^{2}}{L'} \leq i(\gamma_{n}, \gamma_{n}) \leq L' \cdot n^{2}. \] It follows that with high $\mathbb{P}_{w}$ probability, one has \[ e^{(L/K) \cdot \sqrt{i(\gamma_{n}, \gamma_{n})/L'}} \leq \mbox{dil}(f_{w_{n}}) \leq e^{J \cdot \sqrt{L' \cdot i(\gamma_{n}, \gamma_{n}) }}. \] For $\mathbb{P}_{g}$, the proof is identical, using Theorem \ref{thm: GTT} in place of Theorem \ref{thm:MT}, and using the $\mathbb{P}_{g}$ portion of Theorem \ref{thm: intersection}. \end{proof} \bibliographystyle{acm}
\section{Introduction} \indent \indent It has been proved in \cite{ExtMod1} that the scope of the HLS model \cite{HLSRef,FKTUY}, suitably broken \cite{taupaper}, can be extended in order to include annihilation processes like $e^+e^- \ra (\pi^0/\eta) \gamma$, $e^+e^- \ra \pi^+ \pi^-\pi^0$ and decay spectra like $\eta/\eta^\prime \ra \pi^+ \pi^- \gamma$, beside the $e^+e^- \ra \pi^+ \pi^-$ cross section. It also includes the $e^+e^- \ra K^+ K^-$ and $e^+e^- \ra K^0 \overline{K}^0$ annihilations, which have to be examined separately as they raise known specific problems \cite{BGPter}. Actually, most VMD physics up to the $\phi$ meson mass is covered by our extended model \cite{ExtMod1}, except for channels involving scalar mesons or channels where higher mass vector mesons could have a significant influence \cite{omgpi} as, seemingly, $e^+e^- \ra \omg \pi$. However, all the $e^+e^-$ annihilation channels examined in \cite{ExtMod1} are reasonably well described up to the $\phi$ mass region by the model presented there. The issue is now to examine how the Extended (HLS) Model performs when including other processes like the $\tau^\pm \ra \pi^\pm \pi^0 \nu$ decay, which is within the scope of the HLS model. This leads us to report on the results of global fits using the existing $\tau$ dipion spectra beside the $e^+e^- $ data extensively discussed in \cite{ExtMod1}. The same issue was partly addressed\footnote{We considered, together with the $e^+e^- \ra \pi^+ \pi^-$ cross sections, the pion form factor from ALEPH and the dipion spectrum {\it lineshape} from CLEO. We were, thus, less sensitive to issues related with the absolute scale of the $\tau$ spectra.} in our former study \cite{taupaper}. The energy range of our model is limited approximately by the $\phi$ meson mass~; meanwhile, as far as issues like the muon $g-2$ value are concerned, this is an energy region where a reliable model can address some questions in a novel way.
Indeed, an interesting outcome of such a global fit is the estimate it provides for various hadronic contributions to the muon $g-2$. This covers the $\pi^+ \pi^-$ loop contribution, but also those from the $(\pi^0/\eta) \gamma$ and $\pi^+ \pi^-\pi^0$ final states. The improvements following from using simultaneously $\tau$ data and $e^+ e^-$ data, as well as the consequences of having a reliable global model describing all VMD physics up to the $\phi$ meson mass, are interesting issues. Indeed, the underlying unified (HLS) framework of our model correlates several decay channels because of their common underlying physics, and phenomenological studies indicate that these physics correlations are well accepted by the data \cite{ExtMod1}. Stated otherwise, one can examine in great detail several consequences of accounting for $\tau$ decays and $e^+ e^-$ annihilations within a consistent framework. This also amounts to readdressing the long--standing problem of the discrepancy between the BNL measurement \cite{BNL} of $g-2$ and the predictions based on $e^+ e^-$ annihilations and $\tau$ spectra as reported in the literature \cite{DavierPrevious1,DavierPrevious2,DavierPrevious3,Davier2003,Eidelman,Fred09}. A quite recent study \cite{DavierHoecker} tends to lessen the disagreement between these two kinds of predictions, but not to resolve it. Our present study is based on all the $e^+e^- $ data sets used in \cite{ExtMod1} and on the published $\tau$ spectra. These are the dipion mass spectra collected some time ago by ALEPH \cite{Aleph} and CLEO \cite{Cleo}~; a valuable data set collected by the BELLE Collaboration, with a statistics of several million events, has been made available recently \cite{Belle}. The paper is organized as follows. In Section \ref{FitTau}, we describe the model for the dipion spectrum in the $\tau^\pm \ra \pi^\pm \pi^0 \nu$ decay.
The vector meson mixing produced by Isospin Breaking (IB) \cite{ExtMod1,taupaper} is complemented in a simple manner with another IB mechanism allowing lineshape distortions of the $\rho^\pm$ meson mass spectrum compared with $\rho^0$. In this Section, we also list the $\tau$ data sets and outline how they intervene in the global fit procedure. In Section \ref{FitProcess}, we first emphasize the correlation between pure IB shape distortion parameters and the absolute scale of $\tau$ spectra. Then, the consistency of the dipion spectra from $\tau$ decay -- not affected by vector meson mixing -- with all $e^+ e^-$ annihilation channels is investigated. The behavior of each $\tau$ data set -- the ALEPH \cite{Aleph} and CLEO \cite{Cleo} samples and the recently issued BELLE \cite{Belle} data sample -- is examined under various kinds of fit conditions. It is shown that the ALEPH spectrum can be well described with simple and intuitive IB lineshape distortions compared to $e^+ e^-$, whereas this does not work as well for the BELLE and CLEO spectra. The best way to account for these is rather a rescaling of the absolute scale of their spectra. We argue that this could point towards a more complicated IB lineshape distortion model than ours. One shows, nevertheless, that a satisfactory simultaneous account of all $e^+ e^-$ annihilation data and the available dipion spectra in the $\tau$ decay can be reached, under quite reasonable conditions. In Section \ref{gMoins2}, we focus on the non--perturbative hadronic contributions to the muon $g-2$, especially the $\pi^+ \pi^-$ one. The results provided by the various $e^+ e^- \ra \pi^+ \pi^-$ data sets are examined and the effects of global fits involving the $e^+e^- \ra (\pi^0/\eta) \gamma$ and $e^+e^- \ra \pi^+ \pi^-\pi^0$ cross sections are shown. The effects of including the $\tau$ dipion spectra within the fitted data sets are examined in full detail.
It is also shown that the KLOE data set \cite{KLOE_ISR1} for $e^+ e^- \ra \pi^+ \pi^-$ does not lead to any inconsistency but, rather, allows for improved uncertainties at the expense of a worse fit probability. We also conclude on the most likely value of several contributions to the muon $g-2$ following from a global fit to a large sample of $e^+ e^-$ and $\tau$ data. The uncertainties obtained look much improved with respect to usual estimates. Finally, Section \ref{FinalConclusion} is devoted to a summary of our conclusions. In particular, one emphasizes the use of the various $\tau$ spectra in order to provide -- or improve -- theoretical predictions for the muon $g-2$, taking into account the difficulty of modeling lineshape distortions in a way simultaneously accepted by the ALEPH, BELLE and CLEO data sets. In the present paper, which is the second part of a study started in \cite{ExtMod1}, one does not discuss the properties or the results of the fits to the $e^+ e^-$ data in isolation. These have been discussed at length in \cite{ExtMod1}~; we also refer the reader to that paper for the details of our model. All notations have been carefully chosen in order to ensure full consistency with \cite{ExtMod1}. Finally, the present work supersedes and improves large parts of our former study \cite{taupaper}. We also correct here for a (minor) computer code error which affected the treatment of the sample--to--sample correlated uncertainties in the data sets from \cite{CMD2-1995corr,CMD2-1998-1,SND-1998}~; this is, indeed, important in order to provide reliable uncertainties for our $g-2$ estimates. \section{Including $\tau^\pm \ra \pi^\pm \pi^0 \nu$ Data} \label{FitTau} \indent \indent The difference between $e^+ e^-$ and $\tau$ based estimates of the hadronic contribution to the muon $g-2$ is an important issue.
Indeed, accounting for isospin symmetry breaking effects, the $\tau$ dipion spectra{\footnote{Each normalized to the world average branching ratio Br$(\tau \ra \pi \pi \nu)$, highly influenced by the ALEPH measurement \cite{Aleph}.}} provide predictions for the hadronic contribution which bring the expected value of $g-2$ close to its experimental measurement \cite{BNL}. Instead, all theoretical estimates based on $e^+ e^-$ data deviate from it by more than $3~\sigma$. Comprehensive discussions of this issue can be found in \cite{DavierPrevious1,DavierPrevious2,DavierPrevious3} and more recently in \cite{Fred09}. Summaries can also be found in \cite{Davier2003,Eidelman}, for instance. A quite recent reanalysis of this discrepancy \cite{DavierHoecker} concludes that the disagreement between $\tau$ and $e^+e^-$ based approaches is smaller (about $2~\sigma$)~; consequently, the newly proposed $\tau$ based estimate moves farther from the BNL measurement. However, even if reduced, the mismatch between $e^+ e^-$ and $\tau$ based estimates of the hadronic contribution to $g-2$ survives. It was shown in \cite{taupaper} that an appropriate account of isospin symmetry breaking (IB), including its effects on the ($\rho,~\omg,~\phi$) mixing, certainly solves a part of the reported discrepancy between $e^+ e^-$ and $\tau$ spectra. However, the IB vector mixing defined there and recalled in \cite{ExtMod1} does not exhaust all effects of IB. In this paper, we examine more deeply than in \cite{taupaper} the effects of IB shape distortions and their connection with absolute scale issues. In order to examine this kind of IB, one needs a data sample where the $\rho^0$ ($e^+e^-$ annihilation) and the $\rho^\pm$ ($\tau$ decay) spectra are simultaneously present. The problem of the hadronic contributions to the muon $g-2$ was not addressed in \cite{taupaper}.
This issue is examined here in a wider context by revisiting the consistency pattern of the $\tau^\pm \ra \pi^\pm \pi^0 \nu$ data on one hand, and the much larger data set on the $e^+e^- \ra \pi^+ \pi^-$, $e^+e^- \ra (\pi^0/\eta) \gamma$ and $e^+e^- \ra \pi^+ \pi^-\pi^0$ annihilation channels on the other hand. This is allowed by having extended the model presented in \cite{taupaper} in such a way that anomalous and non--anomalous channels are implemented within the unified framework presented in \cite{ExtMod1}. Most of the $e^+e^-\ra \pi^+ \pi^-$ data sets have been commented on in \cite{taupaper}~; here, we only recall how the sample--to--sample correlated part of the systematic uncertainties should be treated, as this plays an important role in estimating the uncertainty on the muon $g-2$. All other $e^+e^-$ annihilation channels have been considered in detail in our recent paper \cite{ExtMod1}. Because of the poor probability of the best fit to the KLOE data \cite{KLOE_ISR1}, already commented on in \cite{ExtMod1}, the corresponding data sample is not included systematically in the set of $e^+e^- \ra \pi^+ \pi^-$ data samples considered~; however, its effects will be commented upon at the appropriate places. Finally, in order to fit the parameters of our $\rho,~\omg,~\phi$ mixing scheme \cite{taupaper}, one still uses a subset of 9 radiative decay width data which have been taken from the latest issue of the Review of Particle Properties \cite{RPP2008} and are given explicitly in \cite{ExtMod1}.
\subsection{The Model For $\tau^\pm \ra \pi^\pm \pi^0 \nu$ Decay} \indent \indent Our model for the pion form factor in $\tau$ decay coincides exactly with the formulae given in \cite{taupaper}~: \be \displaystyle F_\pi^\tau(s) = \left [ (1-\frac{a}{2}) - \frac{ag}{2} F_\rho^\tau(s) \frac{1}{D_\rho(s)} \right] \label{Nobrk0} \ee with~: \be \left \{ \begin{array}{lll} \displaystyle F_\rho^\tau(s) =f_\rho^\tau - \Pi_{W}(s)~~~,~~~f_\rho^\tau=ag f_\pi^2\\[0.5cm] \displaystyle D_\rho(s)=s-m^2-\Pi_{\rho \rho}(s) \end{array} \right . \label{Nobrk1} \ee where $\Pi_{W}(s)$ accounts for the loop corrections to the $\rho^\pm - W^\mp$ transition amplitude $f_\rho^\tau$ and $\Pi_{\rho \rho}(s)$ is the $\rho^\pm$ self--mass. Both loops are such that they vanish at $s=0$. $g$ denotes, as usual \cite{HLSRef,ExtMod1}, the universal vector coupling and $m^2=a g^2 f_\pi^2$ is the $\rho$ meson mass squared as it occurs in the HLS Lagrangian. $a$ is the standard HLS parameter expected close to 2 \cite{HLSRef,taupaper,ExtMod1}. Beside the mixing of vector mesons produced by breaking Isospin Symmetry, Reference \cite{taupaper} examined the possibility of having a mass difference between the neutral and charged $\rho$ mesons. Here, we also allow for a mass squared difference between neutral and charged $\rho$ mesons -- denoted resp. $m^2$ and $m^2 + \delta m^2$. Additionally, we allow for a coupling difference of these mesons, resp. $g$ and $g^\prime = g + \delta g$. The $\rho^\pm - W^\mp$ transition amplitude should be modified correspondingly \cite{taupaper}, as will be recalled shortly. These two parameters correspond within our model to allowing mass and width differences between the charged and neutral $\rho$ mesons, as commonly done in other studies \cite{DavierHoecker,Ghozzi}. \subsubsection{The Pion Form Factor In the $\tau$ Decay} \label{ffth} \indent \indent With the IB modifications just defined, the pion form factor has to be slightly modified compared with Eq.
(\ref{Nobrk0}). It can be written~: \be \displaystyle F_\pi^\tau(s) = \left [ (1-\frac{a}{2}) - F_\rho^\tau(s) g_{\rho \pi \pi}^\prime \frac{1}{D_\rho(s)} \right] \label{Model1} \ee where $g_{\rho \pi \pi}^\prime=ag^\prime/2=a[g + \delta g]/2$. The other ingredients are modified, compared with Eqs. (\ref{Nobrk0}) and (\ref{Nobrk1}), and become~: \be \left \{ \begin{array}{lll} F_\rho^\tau(s) =f_\rho^{ \prime \tau} - \Pi_{W}(s)\\[0.5cm] D_\rho(s)=s-m^2 -\delta m^2 -\Pi_{\rho \rho}^\prime(s)\\[0.5cm] \displaystyle f_\rho^{\prime \tau} = f_\rho^\tau + \delta f_\rho^\tau~~~, ~~~\delta f_\rho^\tau = \frac{\delta m^2}{g^\prime} -\frac{f_\rho^\tau \delta g}{g^\prime} \end{array} \right . \label{Model2} \ee where $f_\rho^\tau =a g f_\pi^2$ is the $\rho- W$ transition amplitude, $D_\rho(s)$ is the inverse $\rho^\pm$ propagator and $\Pi_{\rho \rho}^\prime(s)$ is the charged $\rho$ self--mass. With the $\delta f_\rho^\tau$ term in the last Eq. (\ref{Model2}), $F_\pi^\tau(0)=1$ is identically fulfilled. In \cite{taupaper}, we assumed $\delta g =0$. The (modified) $F_\rho^\tau(s)$ is the $W-\rho$ transition amplitude with its loop corrections. In terms of the pion $\ell_\pi(s)$ and kaon $\ell_{K}(s)$ amputated loops, one has the following expressions~: \be \left \{ \begin{array}{lll} \displaystyle \Pi_{W}(s)=g_{\rho \pi \pi}^\prime \left [ (1-\frac{a}{2}) \ell_\pi(s) + \frac{1}{2 z_A^2}(z_A-\frac{a}{2}) \ell_{K}(s) \right ]+ P_W(s) \\[0.5cm] \displaystyle \Pi_{\rho \rho}^\prime(s) = [g_{\rho \pi \pi}^\prime]^2 \left [ \ell_\pi(s) + \frac{1}{2 z_A^2} \ell_{K}(s)~ \right ] + P_\rho(s) \end{array} \right . \label{Model3} \ee where $z_A=[f_K/f_\pi]^2$ is the standard SU(3) breaking parameter in the BKY breaking scheme \cite{BKY,Heath1}, while $P_W(s)$ and $P_\rho(s)$ are subtraction polynomials with real coefficients to be fixed by external conditions. One could look for a motivated way, like the BKY mechanism \cite{BKY}, able to generate this kind of IB distortion effects. 
The proposed modifications look, however, reasonable and correspond to the usual way of introducing mass and width differences in other studies. This mechanism will be referred to as IB shape distortion and, if numerically relevant, may complement the IB vector mixing \cite{taupaper,ExtMod1}. We have checked that one can safely identify $\ell_\pi(s)$ and $\ell_{K}(s)$ -- both being charged--neutral meson loops -- occurring in these expressions with the amputated $\pi^+ \pi^-$ and $K^+ K^-$ loops appearing in $e^+ e^-$ annihilations \cite{taupaper}. In order to reduce the number of free parameters in the global fit procedure, we still identify (as in \cite{taupaper}) the subtraction polynomial for $\Pi_W(s)$ with that for its partner in $e^+ e^-$ annihilation (see Section 5 in \cite{ExtMod1}). On the other hand, as one can neglect pseudoscalar meson mass differences in loop calculations, one also identifies the charged $\rho$ self--mass $\Pi_{\rho \rho}^\prime(s)$ with its neutral $\rho$ partner -- up to the $\delta g$ effect -- as recalled in Section 5 of \cite{ExtMod1}. Finally, in order to fit $\tau$ data, one has to correct for specific isospin symmetry breaking effects. For this purpose, short range \cite{Marciano} ($S_{EW}=1.0235$) and long range \cite{Cirigliano1,Cirigliano2,Cirigliano3} ($G_{EM}(s)$) radiative corrections have to be considered. While comparing with experimental data, the quantity in Eq. (\ref{Model1}) has to be modified to~: \be F_\pi^\tau(s) \Longrightarrow \left [ S_{EW} G_{EM}(s) \right ]^{1/2} F_\pi^\tau(s) \label{Model5} \ee As the standard HLS model does not go beyond the lowest lying vector meson nonet, we cannot fit the whole dipion $\tau$ decay spectrum. We chose to stop around the $\phi$ mass \cite{ExtMod1} as, up to this energy, higher mass vector mesons seem to have a very limited influence within the channels examined in \cite{ExtMod1} and in the present work.
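It may be worth checking explicitly the normalization condition $F_\pi^\tau(0)=1$ claimed below Eqs. (\ref{Model2}). As all loops and subtraction polynomials vanish at $s=0$, one has $F_\rho^\tau(0)=f_\rho^{\prime \tau}$ and $D_\rho(0)=-(m^2+\delta m^2)$, while the last of Eqs. (\ref{Model2}) gives~: \[ g_{\rho \pi \pi}^\prime f_\rho^{\prime \tau} = \frac{a g^\prime}{2} \left [ f_\rho^\tau + \frac{\delta m^2}{g^\prime} - \frac{f_\rho^\tau \delta g}{g^\prime} \right ] = \frac{a}{2} \left [ f_\rho^\tau (g^\prime - \delta g) + \delta m^2 \right ] = \frac{a}{2} \left [ m^2 + \delta m^2 \right ] \] using $f_\rho^\tau \, g = a g^2 f_\pi^2 = m^2$. Eq. (\ref{Model1}) then yields $F_\pi^\tau(0)=(1-a/2)+a/2=1$, whatever the values of $\delta g$ and $\delta m^2$.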
\subsubsection{Useful Expressions For The Dipion Partial Width in the $\tau$ Decay} \indent \indent The dipion partial width in the decay of the $\tau$ lepton can be written \cite{taupaper}~: \be \displaystyle \frac{d\Gamma_{\pi \pi}(s)}{ds} = \displaystyle \frac{|V_{ud}|^2 G_F^2}{64 \pi^3 m_\tau^3} |F_\pi(s)|^2 G_0(s) + {\cal O}(\epsilon^2) \label{FF1} \ee \noindent with~: \be \left \{ \begin{array}{lll} G_0(s) &= \displaystyle \frac{4}{3} \frac{(m_\tau^2-s)^2(m_\tau^2+2 s)}{s^{3/2}} Q_\pi^3\\[0.5cm] Q_\pi &= \displaystyle \frac{\sqrt{[s-(m_{\pi^0}+m_{\pi^+})^2][s-(m_{\pi^0}-m_{\pi^+})^2]}}{2\sqrt{s}} \end{array} \right . \label{FF2} \ee \noindent where $\epsilon =(m_{\pi^0}^2 -m_{\pi^+}^2)/m_{\pi^+}^2\simeq -0.06455$. The terms of order $\epsilon^2$ -- which manifestly break Isospin Symmetry -- are negligible. On the other hand, one obviously has~: \be \displaystyle \frac{1}{\Gamma_{\pi \pi}} \frac{d\Gamma_{\pi \pi}(s)}{ds} = \frac{1}{N}\frac{dN(s)}{ds} \label{FF2b} \ee where $\Gamma_{\pi \pi}$ is the (integrated) $\pi \pi$ partial width in the $\tau$ decay~; $1/N dN(s)/ds$ is the normalized spectrum of yields over the accessible dipion invariant mass range{\footnote{Of course, the total number of pion pairs is defined by $N =\int [dN(s)/ds] ~ds$.}}. While referring to $\tau$ normalized spectra in the following, we always understand this quantity. Using Eqs. (\ref{FF1}) and (\ref{FF2b}) together with the customary expression \cite{RPP2008} for the $\tau \ra e \nu_\tau \nu_e $ partial width, one can derive~: \be \displaystyle |F_\pi(s)|^2= \frac{2 m_\tau^8}{|V_{ud}|^2 (m_\tau^2-s)^2 (m_\tau^2+2 s) } \frac{1}{\beta_-^3} \frac{{\cal B}_{\pi \pi}}{{\cal B}_e} \frac{1}{N}\frac{dN(s)}{ds} \label{FF3} \ee \noindent which is the standard expression used by experimentalists to reconstruct the pion form factor from experimental data \cite{Cleo,Belle}.
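For definiteness, Eq. (\ref{FF3}) simply follows from Eqs. (\ref{FF1}) and (\ref{FF2b}) by normalizing to the lowest order electronic width $\Gamma_e = G_F^2 m_\tau^5/(192 \pi^3)$ and by neglecting the $\pi^0 - \pi^\pm$ mass difference inside $Q_\pi$, {\it i.e.} $Q_\pi \simeq \beta_- \sqrt{s}/2$~: \[ |F_\pi(s)|^2 = \frac{64 \pi^3 m_\tau^3}{|V_{ud}|^2 G_F^2 \, G_0(s)} \, \Gamma_e \, \frac{{\cal B}_{\pi \pi}}{{\cal B}_e} \frac{1}{N}\frac{dN(s)}{ds} = \frac{m_\tau^8}{4 \, |V_{ud}|^2} \frac{s^{3/2}}{(m_\tau^2-s)^2 (m_\tau^2+2 s) \, Q_\pi^3} \, \frac{{\cal B}_{\pi \pi}}{{\cal B}_e} \frac{1}{N}\frac{dN(s)}{ds} \] where $s^{3/2}/Q_\pi^3 \simeq 8/\beta_-^3$ reproduces the prefactor of Eq. (\ref{FF3}).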
In this expression $\beta_-$ is the pion velocity in the dipion rest frame, ${\cal B}_{\pi \pi}$ and ${\cal B}_e$ are the branching ratios of the $\tau$ decays to resp. $\pi \pi \nu_\tau$ and to $e \nu_\tau \nu_e$. Eq. (\ref{FF2b}) can also be written~: \be \displaystyle \frac{1}{\Gamma_{\tau}} \frac{d\Gamma_{\pi \pi}(s)}{ds} = {\cal B}_{\pi \pi} \frac{1}{N}\frac{dN(s)}{ds} \label{FF4} \ee where $\Gamma_{\tau}$ denotes the full $\tau$ width. The theoretical expression for $d\Gamma_{\pi \pi}/ds$ on the left--hand side is given by Eq. (\ref{FF1}) and by $|F_\pi(s)|^2$ as following from Subsection \ref{ffth} above~; the additional factors shown by Eq. (\ref{Model5}) are to be understood. Finally, the numerical value for $\Gamma_{\tau}$ -- not accessible to our model -- is derived from the measured lifetime \cite{RPP2008} and ${\cal B}_{\pi \pi}$ is numerically provided by each experiment with various uncertainties. Eq. (\ref{FF4}) is the main tool in the present analysis. \subsubsection{Absolute Normalization of the Dipion $\tau$ Spectrum} \label{tauscale1} \indent \indent As clear from Eqs. (\ref{Model5}) and (\ref{FF1}), the absolute normalization of the theoretical dipion partial width spectrum is determined by the product $G_F^2 |V_{ud}|^2 S_{EW} G_{EM}(s)$. Correspondingly, the absolute normalization of the experimental spectrum on the right-hand side of Eq.(\ref{FF4}) is determined by the branching ratio ${\cal B}_{\pi \pi}$. Less obvious analytically, but numerically important, are the roles played by the universal vector coupling $g$ and the transition amplitude $f_\rho^{\prime \tau}$ in providing the theoretical normalization of the dipion spectrum. Indeed, as $a$ is found numerically close to 2, Eqs. (\ref{Model1}) and (\ref{Model2}) show that the absolute magnitude of the dipion spectrum is proportional to the product squared of the $\rho - W$ and $\rho \pi \pi$ amplitudes.
Therefore, non--vanishing $\delta g$ and $\delta m^2$ actually influence both the lineshape and the absolute normalization of the $\tau$ spectrum. Moreover, one cannot exclude some other mechanism breaking CVC by modifying essentially the absolute normalization of the $\tau$ spectra~; therefore, a correction factor $(1+\eta_{CVC})$ may enter Eq. (\ref{FF4}) and can be fitted. Related to this, one should note a recent BaBar measurement about the $\tau - \mu -e$ universality. BaBar reports\footnote{We thank W. M. Morse for having drawn our attention to this paper.} \cite{BaBar09} $g_\mu/g_e= 1.0036 \pm 0.0020$ as expected, while $g_\tau/g_\mu= 0.985 \pm 0.005$ exhibits a $3~\sigma$ departure from 1. If confirmed, this may indicate a possible CVC violation in the $\tau$ sector\footnote{This BaBar result contradicts a former measurement from ALEPH \cite{Aleph} which was consistent with lepton universality.} affecting only the absolute scale of the $\tau$ spectrum. On the other hand, an experimental bias on the ${\cal B}_{\pi \pi}$ branching ratio cannot be excluded and could play in the same direction. Even if very close to standard approaches, the IB lineshape distortions have been introduced here in a very simplified manner. One can easily think of a more complicated structure for these than the one we inferred. \subsection{Dealing With The Fitted Data Sets} \indent \indent Besides the data sets provided\footnote{As in \cite{taupaper}, we discard the OPAL data set \cite{Opal}.} by the ALEPH \cite{Aleph} and CLEO \cite{Cleo} Collaborations, a new sample has been recently made available by the BELLE Collaboration \cite{Belle}. These are the $\tau$ data sets which will be examined in conjunction with the whole set of $e^+ e^-$ data samples already considered in \cite{ExtMod1}.
We recall that a subset of 9 vector meson decay partial widths is also used, corresponding to decay modes not related with the annihilation data considered in our fit procedure~; these are numerically extracted from \cite{RPP2008}. \subsubsection{The $\tau$ Input To The Fit Procedure} \label{taudata} \indent \indent In the present study, we submit to fit the experimental spectra as shown in the right--hand side of Eq. (\ref{FF4}). In order to remain consistent, we use for each experiment its own published branching ratio measurement and not the world average branching ratio. As for the CLEO data, our input is their published spectrum \cite{Cleo} for $1/N dN(s)/ds$ normalized to their latest updated branching ratio measurement \cite{CleoBr,RPP2008}, ${\cal B}_{\pi \pi}= (25.36 \pm 0.44) \%$. This Collaboration also claims an uncertainty on the absolute energy scale \cite{Cleo,CleoPriv} of about 0.9 MeV. However, in our former analysis \cite{taupaper}, no such uncertainty showed up significantly. Anticipating somewhat on our present analysis, we confirm its effective consistency with zero and, therefore, discard this freedom from now on. Concerning the ALEPH data, we use directly the last update of the ${\cal B}_{\pi \pi}/N dN(s)/ds$ spectrum \cite{Aleph}. The corresponding branching fraction, ${\cal B}_{\pi \pi}= (25.471\pm 0.097 \pm 0.085)\%$, is the most precise among the published measurements. The uncertainties will be added in quadrature (0.127\%). For the BELLE data \cite{Belle}, we have been provided \cite{HisakiPriv} with all information concerning the pion form factor spectrum\footnote{ Normalized to the world average branching ratio ${\rm Br}(\tau \ra \pi^+ \pi^- \nu)$, four times more precise than the BELLE own measurement, and slightly shifted.}, its covariance matrix for statistical errors and its systematics. The systematics have been added in quadrature to the statistical error covariance matrix.
The BELLE $1/N dN(s)/ds$ spectrum data are published as such \cite{Belle}~; its error covariance matrix can be derived from the corresponding information provided for the pion form factor, using simple algebra. As stated above, we have submitted to fit the BELLE ${\cal B}_{\pi \pi}/N dN(s)/ds$ spectrum normalized to the BELLE branching ratio \cite{Belle} ${\cal B}_{\pi \pi}= (25.34 \pm 0.39) \%$. The uncertainty provided by the branching ratio error is clearly a scale uncertainty and a bin--to--bin correlated error~; this should be treated as reminded in Section 6 of \cite{ExtMod1}. This turns out to define the (partial) $\chi^2$ for each of the ALEPH, BELLE and CLEO data sets by \cite{ExtMod1}~: \be \chi^2_{Exp} = [(1+\lambda_{Exp} ) m_i - f(s_i)] [(1+\lambda_{Exp} ) m_j - f(s_j)] V^{-1}_{ij} +\left[ \frac{\lambda_{Exp}}{\eta_{Exp}} \right ]^2 \label{FF6} \ee having defined, for each experiment, the measurements $m_i$ as the central value for the branching ratio times $1/N dN(s_i)/ds$ and $V$ being the full error covariance matrix. $f(s_i)$ should be understood as the left--hand side of Eq. (\ref{FF4}) computed at the appropriate energy point. For each experiment, $\lambda$ is a scale parameter to be fitted and $\eta$ is the ratio of the branching ratio uncertainty to its central value. The second term in this $\chi^2$ is the standard way to account for a scale uncertainty. We have $\eta_{CLEO}=1.74\%$, $\eta_{ALEPH}=0.51\%$ and $\eta_{BELLE}=1.53\%$. With this input to the fit procedure, the ALEPH, BELLE and CLEO data sets are clearly treated on the same footing. As emphasized in \cite{ExtMod1}, if for some experiment the ratio $\lambda_{fit}/\eta_{Exp}$ is small enough (typically not greater than $\simeq 1 \div 2$), one can neglect this scale correction and use the standard $\chi^2$ expression in the minimization procedure, with the replacement $V_{ij} \Rightarrow V_{ij} +\eta_{Exp}^2 m_i m_j $. 
Otherwise, one may consider that we are faced with some missing variance and keep Eq. (\ref{FF6}) as it stands. In a previous study \cite{taupaper}, we limited ourselves to considering only the $\tau$ data points up to 0.9 GeV in order to avoid as much as possible effects of higher mass vector mesons. In the present work, however, preliminary studies using the BELLE and CLEO data\footnote{For instance, fitting the BELLE and CLEO spectrum lineshapes up to 1.0 GeV does not reveal worse fit quality than when stopping the fit at 0.9 GeV. Higher mass vector meson influence in this region, if any, is thus found small enough to be absorbed by the other fit parameters.} samples led us to push this upper energy limit up to 1.0 GeV. Indeed, as in our $e^+ e^-$ fit studies \cite{ExtMod1}, the influence of higher mass vector mesons seems negligible all along the energy region from threshold to 1.0 GeV -- and even slightly above~; therefore, there is no physical ground to abstain from such an extension of the fitting range. \subsubsection{Testing The $\tau$ Spectrum Lineshapes} \label{taulineshape} \indent \indent The remarks presented in Subsection \ref{tauscale1} explain why it is certainly appropriate to test the various $\tau$ spectrum lineshapes independently from their absolute magnitudes. This can be done in two different ways. A first method amounts to normalizing the data points $m_i$ to the sum of the data points covered by our fitted energy range (from threshold up to 1 GeV). Then, correspondingly, the model function $f(s)$ on the left--hand side of Eq. (\ref{FF4}) should be normalized to its integral over the fitted range. Another method is simply to minimize the $\chi^2$ as defined by Eq. (\ref{FF6}), but amputated from the $(\lambda/\eta)^2$ term which constrains the scale in accordance with the claimed experimental uncertainty. Indeed, in this way, the scale factor is allowed to vary freely within the global fit procedure.
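Incidentally, the connection between the fitted scale factors and the covariance matrix modification quoted in Subsection \ref{taudata} can be made explicit. Writing $r_i = m_i - f(s_i)$ and minimizing Eq. (\ref{FF6}) analytically over $\lambda$ at fixed model parameters gives~: \[ \lambda_{min} = - \frac{m^T V^{-1} r}{m^T V^{-1} m + \eta^{-2}}~~~,~~~ \chi^2_{min} = r^T V^{-1} r - \frac{\left [ m^T V^{-1} r \right ]^2}{m^T V^{-1} m + \eta^{-2}} = r^T \left [ V + \eta^2 \, m \, m^T \right ]^{-1} r~, \] the last step following from the Sherman--Morrison identity. As far as the $\chi^2$ minimum is concerned, fitting the scale is thus equivalent to the replacement $V_{ij} \Rightarrow V_{ij} + \eta^2 m_i m_j$ within the standard $\chi^2$.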
We checked that these two methods give similar results. \subsubsection{Dealing With The Uncertainties In $e^+ e^-$ Data Samples} \label{eeUncertainties} \indent \indent Uncertainties in the $e^+ e^-$ data sets are accounted for in several ways. For the $e^+ e^- \ra \pi^0 \gamma$ data sets \cite{CMD2Pg2005,sndPg2000,sndPg2003} and the $e^+ e^- \ra \eta \gamma$ data sets \cite{CMD2Pg1999,CMD2Pg2001,CMD2Pg2005,sndPg2000,sndPg2007}, taking into account the magnitude of the systematics, we did not find it justified to split them up into their bin--to--bin correlated and uncorrelated parts. We simply added the reported systematic and statistical errors in quadrature. For the $e^+ e^- \ra \pi^+ \pi^- \pi^0 $ samples, we dealt differently with the different data sets. For the relatively imprecise data sets \cite{ND3pion-1991,CMD3pion-1989} we proceeded as just explained for the $e^+ e^- \ra (\pi^0/\eta) \gamma$ data and simply added systematic and statistical errors in quadrature. Instead, for the more accurate data sets \cite{CMD2-1995corr,CMD2-2006,CMD2KKb-1,CMD2-1998}, only the uncorrelated part of the systematic uncertainty has been added in quadrature to the statistical errors. On the other hand, the bin--to--bin correlated error has been treated as emphasized above for the $\tau$ data sets. For reasons already emphasized in \cite{ExtMod1}, we have discarded the $ e^+ e^- \ra \pi^+ \pi^-\pi^0$ data provided by \cite{SND3pionHigh,SND3pionLow}. Finally, the various Novosibirsk $e^+ e^- \ra \pi^+ \pi^-$ data sets recently collected \cite{CMD2-1995corr,CMD2-1998-1,CMD2-1998-2,SND-1998} carry a common bin--to--bin {\it and} sample--to--sample correlated uncertainty of 0.4 \%. The older data from OLYA and CMD \cite{Barkov} also share a common correlated (scale) uncertainty \cite{SimonPriv} of $\simeq 1 $\%. In both cases, we have added the uncorrelated part of the systematics to the statistical errors in quadrature.
Instead, in order to treat properly the correlated uncertainty, one should consider the data sets in \cite{CMD2-1995corr,CMD2-1998-1,CMD2-1998-2,SND-1998} as a single (merged) data set and use as $\chi^2$ an expression like Eq. (\ref{FF6}) to introduce the common scale to be fitted. Here also, if $\lambda/0.4\% \leq 1 \div 2$, one could remove this scale parameter and perform the change $V_{ij} \Rightarrow V_{ij} + (0.4\%)^2 m_i m_j $. We have proceeded in the same way, {\it mutatis mutandis}, with the older OLYA and CMD data sets \cite{Barkov}. Because of the poor probability of the best fit to the KLOE data \cite{KLOE_ISR1}, already commented upon in \cite{ExtMod1}, the corresponding data sample is not systematically included in the set of $e^+e^- \ra \pi^+ \pi^-$ data samples considered~; however, its effects will be noted when relevant. In order to fit the parameters of the IB $\rho,~\omg,~\phi$ mixing scheme \cite{taupaper,ExtMod1}, one still uses a subset of 9 radiative decay width data which are taken from the latest issue of the Review of Particle Properties \cite{RPP2008} and are given explicitly in \cite{ExtMod1}. \section{Simultaneous Fits To $e^+ e^-$ and $\tau$ Data} \label{FitProcess} \subsection{Interplay Between $\delta m^2$, $\delta g$ And $\lambda$} \label{preliminaire} \indent \indent Strictly speaking, the lineshape of the $\tau$ spectrum is determined by the HLS parameters $g^\prime=g+ \delta g $ and the Higgs--Kibble mass $m^2 + \delta m^2$. This is clear from the expressions given in Section \ref{FitTau} above. The specific Isospin breaking parameters $\delta m^2$ and $\delta g$ differentiate the $\rho^\pm$ lineshape from that of the $\rho^0$ meson. However, these parameters also govern the absolute scale of the $\rho^\pm$ spectrum compared to the $\rho^0$ one.
Therefore, if an uncertainty on the absolute scale of a measured $\tau$ spectrum calls for a fit parameter $\lambda$ rescaling the whole data spectrum, it is quite important to examine its interplay with $\delta g$ and $\delta m^2$. \begin{table}[!htb] \hspace{-1.0cm} \begin{tabular}{|| c || c |c || c | c ||} \hline \hline Data Set & $\delta m^2$ (GeV$^2$) & $\delta g $ & $\lambda$ & $[\chi^2/points]_{Exp}$ \\ \hline ALEPH \rule[-3.mm]{0.mm}{9.mm} & $(3.37 \pm 1.27) ~10^{-3}$ & $(-0.56 \pm 0.12)~10^{-1} $ & $(-1.01 \pm 0.40) \%$ & $ 27.16/38$ \\ \hline BELLE \rule[-3.mm]{0.mm}{9.mm} & $(-0.01 \pm 0.77) ~10^{-3}$ & $(-0.12 \pm 0.10)~10^{-1} $ & $(-3.83 \pm 0.54) \%$ & $ 32.46/20$ \\ \hline CLEO \rule[-3.mm]{0.mm}{9.mm} & $(-1.53 \pm 1.07) ~10^{-3}$ & $(0.16 \pm 0.14)~10^{-1} $ & $(-5.51 \pm 0.74) \%$ & $ 38.99/30$ \\ \hline \hline ALEPH \rule[-3.mm]{0.mm}{9.mm} & $(4.04 \pm 1.22) ~10^{-3}$ & $(-0.69 \pm 0.11)~10^{-1} $ & ${\bf 0}$ & $ 29.19/37$ \\ \hline BELLE \rule[-3.mm]{0.mm}{9.mm} & $ (2.18 \pm 0.71) ~10^{-3}$ & $(-0.51 \pm 0.08)~10^{-1} $ & ${\bf 0} $ & $ 41.12/19$ \\ \hline CLEO \rule[-3.mm]{0.mm}{9.mm} & $(2.26\pm 0.94) ~10^{-3}$ & $(-0.54 \pm 0.11)~10^{-1} $ & ${\bf 0} $ & $ 61.49/29$ \\ \hline \hline \end{tabular} \caption{ \label{T0} Global fit results with each $\tau$ data sample separately. Boldface numbers are parameter values not allowed to vary in the fits. Global fit probabilities are always above 90\%. } \end{table} For the present exercise, we consider all data sets involving $e^+ e^- \ra \pi^+ \pi^-$ data together with all data sets covering the $ e^+ e^- \ra \pi^0 \gamma$ and $\eta \gamma$ annihilation channels. These additional channels allow us to remove the $\rho^0/\omg/\phi \ra (\pi^0/\eta) \gamma$ partial widths from the vector meson decay mode subsample unavoidably used.
In this Section, the ISR KLOE data sample for $e^+ e^- \ra \pi^+ \pi^-$ is removed from the collection of data sets to be fitted~; we also leave aside the $e^+ e^- \ra \pi^+ \pi^- \pi^0$ annihilation data which play a minor role in defining the vector meson mixing scheme. The $e^+ e^-$ measurements with $s < 1.05$ GeV$^2$ are submitted to fit -- in order to include the $\phi$ region -- together with all $\tau$ decay measurements with $m_{\pi \pi} < 1$ GeV. We have performed simultaneous fits of each of the A, B and C $\tau$ data sets together with the $e^+ e^-$ data referred to above. The results are shown in Table \ref{T0} and exhibit quite interesting features, depending on the particular $\tau$ data set considered. The first three lines provide parameter values when $\delta m^2$, $\delta g$ and $\lambda$ are allowed to vary. For each $\tau$ data sample, $\lambda$ is constrained by the relevant $[\lambda_{fit}/\eta_{Exp}]^2$ term in the global $\chi^2$. In this case, one notes that~: \begin{itemize} \item The significance for a non--zero $\delta m^2$ is at $ \simeq 2.6 ~\sigma$ for A and negligible for B or C, \item The significance for a non--zero $\delta g$ is at the $ \simeq 1 ~\sigma$ level for B or C but large for A ($4.7 \sigma$), \item The values for some important correlation coefficients returned by the fit are large for each $\tau$ data set~: $(\delta g,\lambda)\simeq (\delta g,\delta m^2)\simeq - 50$ \% and $(\lambda,\delta m^2)\simeq (25 \div 50) $ \%. These values reflect the interplay between $\delta g$, $\delta m^2$ and $\lambda$ in determining the absolute scales of the experimental spectra. \item The significance for non--zero $\lambda$'s is data set dependent~: $2.5 ~\sigma_\lambda$ for A, $7.1 ~\sigma_\lambda$ for B and $7.4 ~\sigma_\lambda$ for C.
Compared with the scale uncertainties induced by the errors on the respective ${\cal B}_{\pi \pi}$, this corresponds to $[\lambda_{fit}/\eta]_{ALEPH}=2.0 \pm 0.8$, $[\lambda_{fit}/\eta]_{BELLE}=2.5 \pm 0.35$ and $[\lambda_{fit}/\eta]_{CLEO}=3.2 \pm 0.43$. Taking into account the large correlations between $\delta g$, $\delta m^2$ and $\lambda$, this looks acceptable to us. \end{itemize} The corresponding fit residuals are shown superimposed in the upper part of Figure \ref{dmdg}. One clearly sees that the B and C residuals are well spread around zero. Those for A are slightly distorted around the $\rho$ peak in a way opposite to B and C. The last data column in Table \ref{T0} illustrates that each of A, B and C is well described by the global fit, simultaneously with the $e^+ e^-$ data (the so--called New Timelike data \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} always yield $\chi^2/points \simeq 118/127$). \vspace{0.5cm} The non--zero values for the $\lambda$'s could possibly mean that the reported measured values for ${\cal B}_{\pi \pi}$ are overestimated. However, the large correlations noted just above prevent so strong a conclusion, as the rescaling effect could well be absorbed by some different values for $(\delta g,\delta m^2) $. It is, therefore, worth examining the case when $\lambda \equiv 0$ is imposed. Of course, this prevents any rescaling of the experimental spectra and thus checks whether the mass and coupling breaking we introduced can alone account for the measured normalizations of the various $\tau$ spectra. The corresponding results are shown in the lowest three lines of Table \ref{T0} and call for important remarks~: \begin{itemize} \item The significance for a non--zero mass shift $\delta m^2$ is always improved~: $3.6~\sigma$ for A, $3.1~\sigma$ for B and $2.4~\sigma$ for C, \item The coupling difference always becomes highly significant~: the ratio of the central value to its uncertainty is 6.3 for A and B and 4.9 for CLEO.
\item However, fixing $\lambda \equiv 0$ marginally degrades the BELLE description and, more significantly, the account of the CLEO data, whereas the fit quality is unchanged -- and quite good -- for the ALEPH data. \end{itemize} The description of the $e^+ e^-$ data is, in this case, marginally degraded (the New Timelike data, for instance, yield $\chi^2 \simeq 125$ instead of 118 before). \vspace{0.5cm} This leads us to conclude that the additional Isospin Symmetry breaking mechanism, which amounts to mass and width differences for the $\rho$'s, allows us to account for the original absolute normalization of the ALEPH spectrum with a very good probability. One can thus conclude that the dipion ALEPH spectrum -- or, equivalently, the ALEPH pion form factor -- is in full accord with VMD predictions, {\it provided one suitably accounts for mass and width shifts in the $\rho^\pm$ information compared to $\rho^0$}. Numerically, this translates into the following values in the ALEPH spectrum parametrization~: \be \left\{ \begin{array}{llll} \displaystyle \frac{\Gamma_{\rho^\pm}-\Gamma_{\rho^0}}{\Gamma_{\rho^0}} \simeq & \displaystyle \frac{2 \delta g}{ g} =& [-2.50 \pm 0.40] ~\%\\[0.5cm] \displaystyle m^2_{\rho^\pm}-m^2_{\rho^0} =& \delta m^2=&(4.04 \pm 1.22) ~10^{-3} ~~{\rm GeV}^2 \end{array} \right . \label{rhobrk} \ee which approximately\footnote{Mass and width of broad objects like the $\rho$ are, conceptually, definition dependent and, additionally, we have not accounted for the large negative correlation term $(\delta g,\delta m^2)$. Moreover, these numerical values somewhat vary, depending on the exact $e^+ e^-$ data set content submitted to the global fit.} means $m_{\rho^\pm}-m_{\rho^0} \simeq 2.59 \pm 0.78$ MeV and $\Gamma_{\rho^\pm}-\Gamma_{\rho^0} \simeq -3.7 \pm 0.6 $ MeV.
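The translation of Eq. (\ref{rhobrk}) into these mass and width differences uses, at first order, $m_{\rho^\pm}-m_{\rho^0} \simeq \delta m^2/(2 m_{\rho})$ and $\Gamma_{\rho^\pm}-\Gamma_{\rho^0} \simeq (2\delta g/g)\,\Gamma_{\rho^0}$. The few lines below check this arithmetic~; the reference $\rho$ mass and width used here are assumed, PDG--like values, and the result is definition dependent, as stressed in the footnote.

```python
# Reference rho mass and width: assumed, PDG-like values (GeV), not fit outputs
m_rho, Gamma_rho = 0.7755, 0.1491

# Fitted breaking parameters from the lambda = 0 ALEPH fit (Eq. (rhobrk))
dm2, dm2_err = 4.04e-3, 1.22e-3          # delta m^2 (GeV^2)
rel_dG, rel_dG_err = -2.50e-2, 0.40e-2   # 2*delta_g/g and its uncertainty

# m^2(rho+-) - m^2(rho0) = dm2  =>  delta_m ~= dm2 / (2 m_rho)
delta_m = dm2 / (2.0 * m_rho) * 1.0e3            # MeV
delta_m_err = dm2_err / (2.0 * m_rho) * 1.0e3    # MeV

# (Gamma+- - Gamma0)/Gamma0 = 2*delta_g/g  =>  delta_G = rel_dG * Gamma0
delta_G = rel_dG * Gamma_rho * 1.0e3             # MeV
delta_G_err = rel_dG_err * Gamma_rho * 1.0e3     # MeV

print(f"m(rho+-) - m(rho0)         = {delta_m:.2f} +- {delta_m_err:.2f} MeV")
print(f"Gamma(rho+-) - Gamma(rho0) = {delta_G:.2f} +- {delta_G_err:.2f} MeV")
# consistent with the ~ 2.6 +- 0.8 MeV and ~ -3.7 +- 0.6 MeV quoted above
```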
While $\delta m^2$ is numerically in the usual ballpark \cite{RPP2008}, $\delta g$ seems slightly larger than could have been inferred from commonly reported estimates \cite{RPP2008,Ghozzi} for $\Gamma_{\rho^\pm}-\Gamma_{\rho^0}$. The results corresponding to fits with $\lambda=0$ are shown in the lower part of Figure \ref{dmdg}. Compared with the upper one, this Figure shows interesting features~: the ALEPH residual distribution is essentially unchanged, but the residuals for BELLE and CLEO are shifted towards positive values and start resembling the ALEPH residual distribution. This indicates graphically that some rescaling is still needed in order to get a satisfactory description of the B and C data sets. That this residual rescaling could be absorbed into a more sophisticated Isospin breaking distortion mechanism cannot be excluded. As a summary, one can assert that, with appropriately chosen $\delta m^2$ and $\delta g$, the ALEPH data can avoid any significant rescaling within the model we developed. This attractive property is unfortunately not completely shared by BELLE and CLEO in isolation. This is reflected by the fit information given in the last data column of Table \ref{T0} and by the lower part of Figure \ref{dmdg}. Nevertheless, with some rescaling, the BELLE and CLEO data sets can be satisfactorily understood. Moreover, it is clear from the above analysis that there is a large interplay between the parameters defining IB shape distortions and the absolute scale of the experimental spectra. Stated otherwise, some part of a fitted rescaling could be due to the difficulty of modeling the actual shape distortions and, conversely, some part of a real scale factor could well be absorbed by a fitted distortion. Therefore, one cannot claim, in view of a large fitted rescaling, that the reported ${\cal B}_{\pi \pi}$ should be rescaled by as much.
\subsection{Fitting The $\tau$ Lineshapes} \indent \indent In view of the different behaviors of the A, B and C data sets, it is worth examining separately the $\tau$ spectrum lineshapes besides the full spectra. As explained in Subsection \ref{taulineshape}, this amounts to fitting the $\tau$ data samples without including the scale constraining terms in their contributions to the total $\chi^2$. For this purpose, we first performed a global fit leaving $\delta m^2$ and $\delta g$ free and the $\lambda$'s unconstrained. One thus gets $\delta g=(0.15 \pm 0.08) ~10^{-1}$ and $\delta m^2 = (-1.47 \pm 0.67) ~10^{-3}$ GeV$^2$ with\footnote{The numbers of data points in the $\tau$ data samples submitted to fit are always 37 (A), 19 (B) and 29 (C).} $\chi^2_{ALEPH}=17.86$, $\chi^2_{BELLE}=29.37$ and $\chi^2_{CLEO}=29.71$, associated with a $\chi^2=118.31$ for the 127 data points of the so--called New Timelike data \cite{CMD2-1995corr,CMD2-1998-1,CMD2-1998-2,SND-1998}. Instead, when imposing $\delta g=\delta m^2 =0$, one gets $\chi^2_{ALEPH}=19.22$, $\chi^2_{BELLE}=27.61$ and $\chi^2_{CLEO}=31.83$, while the New Timelike data get $\chi^2=120.23$. One may compare the $\chi^2$ values yielded for each of the $\tau$ data samples when fitting together their normalized spectra with those already reported\footnote{Let us recall that the fit results reported in this Table are, instead, obtained from (global) fits using each of the $\tau$ data sets in isolation.} in Table \ref{T0}. The results given just above reinforce the conclusions we reached in Subsection \ref{preliminaire}. In the global fit procedure, it looks difficult to distinguish effects due to IB shape distortions in the $\rho$ distribution from genuine rescaling effects.
This is especially striking when considering the ALEPH data set~: one gets a quite acceptable description of the invariant mass distribution either by assuming $\lambda=0$ and letting $(\delta g,~\delta m^2)$ free, or by letting $\lambda$ free and imposing $\delta g=\delta m^2=0$. So, the reported scale discrepancy can be absorbed~; however, the sharing of the effect between true (physical) lineshape distortions and an (experimental) bias in the accepted value for ${\cal B}_{\pi \pi}$ cannot be decided from the fit results alone. This is an important issue when using $\tau$ data in order to estimate hadronic contributions to $g-2$~; it will be commented on at the appropriate place. Therefore, at least when fitting the $\tau$ lineshapes, it is not justified to keep non--vanishing $\delta m^2$ and $\delta g$. In this case, the main difference between the $\rho^\pm$ and the (physical) $\rho^0$ is essentially carried \cite{taupaper,ExtMod1} by the $\gamma- \rho^0$ and the $W-\rho^\pm$ transition amplitudes $f_\rho^\gamma$ and $f_\rho^\tau$~: \be \frac{f_\rho^\gamma}{f_\rho^\tau}=1 + \frac{1}{3} \alpha(s) +\frac{z_V \sqrt{2}}{3} \beta(s) ~~~~,~~(f_\rho^\tau=ag^2f_\pi^2) \label{referee1} \ee These amplitudes differ through terms explicitly depending on the Isospin Symmetry breaking functions which account for the vector meson mixing \cite{taupaper,ExtMod1}. The $s$--dependent terms in Eq. (\ref{referee1}) account for the isospin zero component of the $\rho^0$ meson. Keeping non--vanishing $\delta g$ and $\delta m^2$ within our modelling, the right--hand side of Eq. (\ref{referee1}) gets a leading additional $(\delta m^2/m^2 - \delta g/g)$ term. \begin{table}[!htb] \hspace{-2.4cm} \begin{tabular}{|| c || c || c |c || c | c ||} \hline \hline \rule[-3.mm]{0.mm}{11.mm} ~~ & Expected & \multicolumn{2}{|c|}{ No $\tau$ Data} & \multicolumn{2}{|c|}{$\tau$ Data Set Configurations} \\ \rule[-3.mm]{0.mm}{11.mm} & r.m.s.
& $e^+e^-$ NSK+KLOE & $e^+e^-$ NSK & NSK + A$^{sh}$B$^{sh}$C$^{sh}$ & NSK + ABC \\ \hline \hline Scale New Timelike \rule[-3.mm]{0.mm}{9.mm}& $0.4\%$ & $(-0.9 \pm 0.4) \%$ & $(-0.6 \pm 0.4) \%$ & $(-0.6 \pm 0.4)\%$ & $ (+0.4 \pm 0.3)$ \% \\ \hline Scale Old Timelike \rule[-3.mm]{0.mm}{9.mm} & $1.0\%$ & $(+1.4 \pm 0.7) \% $ & $(+1.6 \pm 0.7) \% $ & $(+1.6 \pm 0.7)\%$ & $ (+2.4 \pm 0.7)$ \%\\ \hline Scale CMD--2 \cite{CMD2-1995corr} \rule[-3.mm]{0.mm}{9.mm} & $ 1.3 \%$ & $(-0.4 \pm 1.2)\%$ & $(-0.5 \pm1.2)\%$ & $(-0.4 \pm 1.2) \%$ & $ (-0.7 \pm 1.2)$ \%\\ \hline Scale CMD--2 \cite{CMD2-2006} \rule[-3.mm]{0.mm}{9.mm} & $2.5\%$ & $(-2.6 \pm 1.9)\%$ & $(-1.8 \pm 1.9)\%$ & $(-1.8 \pm 1.9)\% $ & $ (-1.4 \pm 1.9)$ \% \\ \hline Scale CMD--2 \cite{CMD2KKb-1} \rule[-3.mm]{0.mm}{9.mm} & $4.6\%$ & $(-4.7 \pm 3.4) \%$ & $(-4.2 \pm 3.4) \%$ & $(-4.1 \pm 3.5) \%$ & $ (-3.6 \pm 3.4)$ \% \\ \hline Scale CMD--2 \cite{CMD2-1998} \rule[-3.mm]{0.mm}{9.mm} & $1.9\%$ & $(-2.6 \pm 1.6) \%$ & $(-2.2 \pm 1.6) \%$ & $(-2.0 \pm 1.6)\%$ & $ (-2.1 \pm 1.6)$ \% \\ \hline $g$ \rule[-3.mm]{0.mm}{9.mm} &--- & $5.569 \pm 0.003$ & $5.587 \pm 0.008$ & $5.565 \pm 0.006$ & $5.532 \pm 0.007$ \\ \hline $\delta g$ \rule[-3.mm]{0.mm}{9.mm} &--- & --- & --- & ${\bf 0}$ & $[-0.30 \pm 0.07]~10^{-1}$ \\ \hline $\delta m^2$ (GeV$^2$)\rule[-3.mm]{0.mm}{9.mm} &--- & --- & --- & ${\bf 0}$ & $[1.09 \pm 0.6]~10^{-3} $ \\ \hline $x$ \rule[-3.mm]{0.mm}{9.mm} &--- & $0.917 \pm 0.012$ & $0.908 \pm 0.013$ & $0.905 \pm 0.013$ & $0.902 \pm0.013$ \\ \hline $z_A$ \rule[-3.mm]{0.mm}{9.mm} &--- & $1.501 \pm 0.010$ & $1.476 \pm 0.009$ & $1.468 \pm 0.009$ & $1.454 \pm 0.009$ \\ \hline $a$ \rule[-3.mm]{0.mm}{9.mm} &--- & $2.372 \pm 0.002$ & $2.357 \pm 0.005$ & $2.361 \pm 0.004$ & $2.382 \pm 0.004$ \\ \hline $c_3$ \rule[-3.mm]{0.mm}{9.mm} &--- & $0.927 \pm 0.006$ & $0.936 \pm 0.006$ & $0.944 \pm 0.006$ & $0.952 \pm 0.006$ \\ \hline $c_1-c_2$ \rule[-3.mm]{0.mm}{9.mm} &--- &$1.194 \pm 0.031$ & $1.196 \pm 0.032$ & $1.228 \pm 0.032$ & 
$1.237 \pm 0.032$ \\ \hline \hline $\chi^2/\rm{dof}$ \rule[-3.mm]{0.mm}{11.mm} & --- & 647.43/653& 521.15/593 &602.36/675 &645.22/676 \\ Probability & --- & 55.4\% &98.5\% & 97.9\% & 79.7\% \\ \hline \hline \end{tabular} \caption{ \label{T1} Results in various fit configurations. $e^+e^-$ NSK data stand for all annihilation processes discussed in the text. The inclusion of ALEPH, BELLE and CLEO data is referred to as A, B and C spectra respectively~; A$^{sh}$, B$^{sh}$ and C$^{sh}$ denote the corresponding normalized spectra. The upper part displays the rescaling factors for $\pi^+ \pi^-$ (first two lines) and $\pi^+ \pi^- \pi^0$ data samples. The lower part refers to the breaking parameters discussed in Ref. \cite{ExtMod1}. Numbers written boldface are parameter values not allowed to vary inside fits. } \end{table} \vspace{1.cm} From now on, we choose to reintroduce the $e^+ e^- \ra \pi^+ \pi^-\pi^0$ data from \cite{ND3pion-1991,CMD3pion-1989} and \cite{CMD2-1995corr,CMD2-2006,CMD2KKb-1,CMD2-1998}, as discussed in Subsection \ref{eeUncertainties} above. We also let the scale factors vary for both the $e^+ e^-$ and $\tau$ data sets\footnote{Keeping, of course, the term constraining the scale variation for all the relevant data sets.}. Some numerical results obtained with various data set configurations are reported in Table \ref{T1}. One notes that all scale factors introduced for the $e^+ e^- \ra \pi^+ \pi^-$ and $e^+ e^- \ra \pi^0 \pi^+ \pi^-$ cross sections are in good correspondence with the expectations recalled in the first data column. This result is understood as confirming once more \cite{ExtMod1} the claims of the corresponding experiments concerning their correlated systematic uncertainties. In the same Table, the data column A$^{sh}$B$^{sh}$C$^{sh}$ reports the results obtained while removing the scale constraining terms (the second term in Eq. (\ref{FF6}) for each $\tau$ data sample).
The numerical values of the rescaling parameters for the $\tau$ spectra are given below. Table \ref{T1} justifies the removal of the $e^+ e^-$ rescaling factors\footnote{Then, the correlated scale uncertainties only appear in the full error covariance matrix, as described in Subsection \ref{eeUncertainties} and in Section 6 of \cite{ExtMod1}.} from the set of parameters to be fitted. One also observes that the physics parameters exhibit only small, reasonable fluctuations. Finally, Table \ref{T1} clearly shows that all Novosibirsk (NSK) data (4 different cross sections) are quite consistent with the freely rescaled $\tau$ decay data, as the fit yields a probability above the 90\% level. Moreover, it is also interesting to note that constraining the rescaling of the $\tau$ spectra by their reported scale uncertainty ({\it i.e.}, keeping the scale fixing terms as in Eq. (\ref{FF6})) still provides quite comfortable probabilities (above the 80\% level), as can be seen from the last data column in Table \ref{T1}. This last data column also exhibits interesting features~: in a simultaneous fit to all $\tau$ data samples together with the $e^+ e^-$ data and vector meson decay modes, the significance for a non--vanishing $\delta g$ is $\simeq 4 \sigma$ whereas that for $\delta m^2$ is only $\simeq 1.8 \sigma$. Comparing with the separate $\tau$ data sample fits reported in Table \ref{T0} indicates that significantly non--vanishing $\delta m^2$ and $\delta g$ are driven by the ALEPH data only. We have explored the effect of including the KLOE data set \cite{KLOE_ISR1} besides the whole set of Novosibirsk $e^+ e^-$ data and the three $\tau$ decay data sets. As expected from its intrinsically large $\chi^2$ \cite{ExtMod1} (123 for 60 measurements in the present case), the global fit probability drops to $\simeq 8$\%. However, the information displayed in Table \ref{T1} is not substantially modified.
In particular, the scale factors affecting the $e^+ e^-$ Novosibirsk data remain very close to the expectations reported in the first data column. From now on, all scale factors affecting the $e^+e^-$ data -- except for KLOE, when relevant -- are set to zero and the scale uncertainties are transferred into the error covariance matrix, as explained above and emphasized in Section 6 of \cite{ExtMod1}. \subsection{Final Global Fits To $e^+ e^-$ and $\tau$ Data} \label{FinalFit} \indent \indent In this Subsection, we only refer to fits performed without the KLOE data set. Some results are displayed in Table \ref{T2} and will be discussed now~; other pieces of information will be given later on, especially those with each $\tau$ data sample fitted in isolation with the whole set of $e^+e^-$ data. Two kinds of results are shown in Table \ref{T2}, the former obtained by fitting only the $\tau$ spectrum lineshapes\footnote{In this case, the $\tau$ data are flagged as A$^{sh}$, B$^{sh}$ and C$^{sh}$ for respectively the ALEPH, BELLE and CLEO data sets. When using, for each $\tau$ data set, the full Eq. (\ref{FF6}), these data sets are simply flagged A, B and C.} (see Subsection \ref{taulineshape} above), the latter with the full $\tau$ spectrum information as expressed by Eq. (\ref{FF4}). The first data column in Table \ref{T2} displays the results obtained with only the whole set of $e^+e^-$ data \cite{ExtMod1}. The first two data columns in Table \ref{T2} clearly illustrate that the fit quality remains optimum\footnote{The origin of such favorable probabilities is discussed at the end of this Subsection.} (above the 90\% level) when including only the $\tau$ spectrum {\it lineshapes} inside the fit procedure (data column flagged with A$^{sh}$ B$^{sh}$ C$^{sh}$). This does not hide some badly described data set, as reflected by the various partial $\chi^2$ displayed.
Figure (\ref{Tau_norm}) displays the normalized spectra together with the fit function~; the residual distributions shown in Figure (\ref{ResTau_norm}) confirm the goodness of the fit. Comparing the individual $\chi^2$ in the second data column with those in the first one for the different $e^+e^-$ data subsets (and the decay width data set), one clearly observes negligible modifications of their fit quality. Moreover, the $\chi^2$ obtained for the individual A$^{sh}$, B$^{sh}$, C$^{sh}$ spectrum lineshapes are also quite reasonable. The numbers shown boldface and within parentheses in the first data column are also quite interesting. Fixing the arbitrary scales to their fit values (given in the second data column) for each $\tau$ data set, one can {\it compute} the $\chi^2$ distance of each of the A$^{sh}$, B$^{sh}$, C$^{sh}$ spectrum lineshapes to the fit solution of {\it only} the $e^+ e^-$ data. Comparing these numbers to their homologues in the second column (obtained, instead, with the $\tau$ spectrum lineshapes fitted) clearly proves that the $e^+ e^-$ data allow us to {\it predict} the $\tau$ spectrum lineshape with very good accuracy\footnote{We already reached this conclusion \cite{taupaper} with the CLEO data sample lineshape.}. One should note that, if unconstrained, all $\tau$ spectrum normalizations consistently prefer a rescaling by $\simeq -5 \%$. As discussed in Subsection \ref{preliminaire} above, the exact meaning of such rescalings should be considered with great care. In Figure \ref{ResTau_norm_spect} we plot the fit residuals normalized to the fit function value, namely~: \be x_i=\displaystyle \frac{(1+\lambda) m_i - f_i}{f_i}~~~,~~ i=1,N_{Exp} \label{FF8} \ee neglecting the effects of correlations on the plotted errors.
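For a given sample, Eq. (\ref{FF8}) is just a per--point relative residual~; a minimal sketch (toy numbers, correlations ignored as in the plot)~:

```python
import numpy as np

# Toy measurements m_i, model values f_i and a fitted scale factor
# (hypothetical numbers, for illustration only)
m = np.array([0.101, 0.148, 0.197, 0.143, 0.099])
f = np.array([0.100, 0.150, 0.200, 0.145, 0.100])
lam = -0.02                       # fitted scale for this sample (toy value)

# Eq. (FF8): residuals normalized to the fit function value
x = ((1.0 + lam) * m - f) / f

# A flat, structureless distribution of x around zero over the fitted
# range is the behavior referred to in the text.
print(np.round(x, 4))
```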
This plot clearly shows that, up to $\simeq 1$ GeV, the residuals can be considered as flat, structureless distributions~; Figure \ref{ResTau_norm_spect} is the exact analog of Figure 12 in \cite{Belle} and illustrates the similar character of the BELLE and CLEO data in our fitted $s$ range. \begin{table}[!htb] \hspace{+1.0cm} \begin{tabular}{|| c || c || c | c || } \hline \hline \rule[-3.mm]{0.mm}{11.mm} Data Set & No $\tau$ Data & \multicolumn{2}{|c|}{$\tau$ Data Set Configurations} \\ \rule[-3.mm]{0.mm}{11.mm} ($\sharp$ data points) & NSK($e^+e^-$) & NSK ${\cal +}$A$^{sh}$B$^{sh}$C$^{sh}$ & NSK ${\cal +}$ ABC \\ \hline \hline Decays (9) \rule[-3.mm]{0.mm}{9.mm} & $14.84$ & $15.12$ & $16.20$ \\ \hline New Timelike (127)\rule[-3.mm]{0.mm}{9.mm} & $118.88$ & $120.24$ & $126.47$ \\ \hline Old Timelike (82) \rule[-3.mm]{0.mm}{9.mm} & $50.65$ & $51.08$ & $60.45$ \\ \hline $\pi^0 \gamma$ (86)\rule[-3.mm]{0.mm}{9.mm} & $66.03$ & $65.84$ & $66.07$ \\ \hline $\eta \gamma$ (182)\rule[-3.mm]{0.mm}{9.mm} & $135.26$ & $135.54$ & $135.78$ \\ \hline $\pi^+ \pi^- \pi^0$ (126) \rule[-3.mm]{0.mm}{9.mm} & $139.45$ & $139.42$ & $139.44$ \\ \hline \hline $\delta g$ \rule[-3.mm]{0.mm}{9.mm}& --- & ${\bf 0}$& $[-0.30 \pm 0.07]~10^{-1}$ \\ \hline $\delta m^2$ (GeV$^2$)\rule[-3.mm]{0.mm}{9.mm} & --- & ${\bf 0}$ &$[1.06 \pm 0.62]~10^{-3}$ \\ \hline \hline ALEPH (37+1) \rule[-3.mm]{0.mm}{9.mm} &{ \bf (23.52)}& 19.23 & 28.30/36.51 \\ \hline ALEPH Scale (\%) \rule[-3.mm]{0.mm}{9.mm} & --- & $-5.45 \pm 0.62$ & $-1.46 \pm 0.38$ \\ \hline \hline CLEO (29+1) \rule[-3.mm]{0.mm}{9.mm} & { \bf (35.57)} & 31.83 & 37.22/39.46 \\ \hline CLEO Scale (\%) \rule[-3.mm]{0.mm}{9.mm} & --- & $-5.48 \pm 1.01$ & $-2.60\pm 0.54$ \\ \hline \hline BELLE (19+1) \rule[-3.mm]{0.mm}{9.mm} & { \bf (27.16)} & 27.61& 26.43/28.29\\ \hline BELLE Scale (\%) \rule[-3.mm]{0.mm}{9.mm} & --- & $-4.80 \pm 0.71$ & $-2.09 \pm 0.46$ \\ \hline \hline $\chi^2/\rm{dof}$ \rule[-3.mm]{0.mm}{11.mm} & 525.10/597 & 605.90/679 & 648.68/680 \\
Probability & 98.4\% & 97.9\% & 80.1\% \\ \hline \hline \end{tabular} \caption{ \label{T2} Individual $\chi^2$ of the data samples under various fit configurations. The $e^+ e^- $ data sets are referred to in the text. Inclusion of ALEPH, BELLE and CLEO data sets are referred to in column flags as A, B and C, respectively. A$^{sh}$, B$^{sh}$, C$^{sh}$ correspond to fitting the lineshape of the normalized invariant mass spectra only. Numbers indicated boldface are the $\chi^2$ distances of A$^{sh}$, B$^{sh}$, C$^{sh}$ to the fit solution of {\it only} the $e^+ e^- $ data. Parameter values written boldface are not allowed to vary in fits. } \end{table} \vspace{0.5cm} The last data column in Table \ref{T2} displays the fit information while fitting the ${\cal B}_{\pi\pi}/N~dN/ds$ distributions of the various $\tau$ experiments together with the whole set of $e^+e^-$ data, taking into account the constraint imposed on the $\lambda$ scale factors by the uncertainty on the measured branching ratio values ${\cal B}_{\pi\pi}$. The full fit thus obtained is also quite satisfactory as reflected by the fit probability (80\%). It shows, nevertheless, that a significant rescaling of the experimental data is requested. For the $\tau$ data sets, the partial $\chi^2$ information is displayed as $x/y$ where $x$ is the part of the $\chi^2$ coming from the data points and $y-x$ is the contribution of the $(\lambda/\eta)^2$ term (see Eq. (\ref{FF6})). $\sqrt{y-x}$ tells how far from the ${\cal B}_{\pi \pi}$ central value -- in units of $\eta_{Exp}$ -- , the spectrum normalization is preferred. Assuming no external cause to the rescaling, the numerical values of the corresponding coefficients look close to the expected ${\cal B}_{\pi \pi}$ value, taking into account the reported accuracy of the various measurements. Compared to these, the fit values are found at resp. 
2.9 $\sigma_{ALEPH}$, 1.4 $\sigma_{BELLE}$ and 1.5 $\sigma_{CLEO}$ towards lower central values for ${\cal B}_{\pi \pi}$. The results are represented in Figures (\ref{Tau_abs}), (\ref{ResTau_abs}) and (\ref{ResTau_abs_spect}). In Figure (\ref{ResTau_abs}) one clearly sees that the residuals for CLEO and BELLE are consistent with those in Figure (\ref{ResTau_norm}), while those for ALEPH exhibit a structure around the $\rho$ peak location. We have widely discussed in Subsection \ref{preliminaire} the behavior of the ALEPH, BELLE and CLEO residuals under very similar conditions\footnote{The difference between Figures \ref{ResTau_abs} and \ref{dmdg} is essentially due to the fact that we now perform a simultaneous fit of the ALEPH, BELLE and CLEO data sets instead of examining each of them in isolation. Under these conditions, the values found for $\delta g$ and $\delta m^2$ are dominated by BELLE and CLEO, which prefer a smaller value for $\delta g$ than ALEPH.}. Figure (\ref{ResTau_abs_spect}) gives another view~: comparing Figure (\ref{ResTau_abs_spect}) to Figure (\ref{ResTau_norm_spect}) shows that the residual distributions remain flat up to the $\simeq~\phi$ mass, but slightly shifted towards positive values. The error covariance matrix of the fit parameters still exhibits some large correlations, such as $(g, \delta g) \simeq 40 \%$ and $(g, \delta m^2)\simeq (\delta g, \delta m^2)\simeq -40 \%$. One also gets correlations at the +20\% level between $\lambda_{ALEPH}$, $\lambda_{BELLE}$ and $\lambda_{CLEO}$, while all others do not exceed the few percent level. The sign of the correlations for $(g, \delta m^2)$ and $(\delta g, \delta m^2)$ prevents a possible relation of the form $m^2 + \delta m^2= a [g+ \delta g]^2 f_\pi^2$ for the $\rho^\pm$ mass squared.
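The $x/y$ entries in the last data column of Table \ref{T2} can be converted back into the pulls just quoted~: the penalty contribution is $y-x=[\lambda_{fit}/\eta_{Exp}]^2$, so $\sqrt{y-x}$ measures the preferred normalization shift in units of $\eta_{Exp}$~:

```python
import math

# x/y entries from the last data column of Table T2: x is the data-point
# contribution to the chi2, y - x the (lambda/eta)^2 penalty of Eq. (FF6).
chi2_parts = {"ALEPH": (28.30, 36.51), "BELLE": (26.43, 28.29), "CLEO": (37.22, 39.46)}

pulls = {exp: math.sqrt(y - x) for exp, (x, y) in chi2_parts.items()}
for exp, pull in pulls.items():
    print(f"{exp}: |lambda_fit/eta| = {pull:.1f}")
# reproduces the 2.9 (ALEPH), 1.4 (BELLE) and 1.5 (CLEO) sigma figures
```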
Relying on the global fit of all $\tau$ data samples, one gets~: \be \left\{ \begin{array}{llll} \displaystyle \frac{\Gamma_{\rho^\pm}-\Gamma_{\rho^0}}{\Gamma_{\rho^0}} \simeq & \displaystyle \frac{2 \delta g}{ g} =& [-1.10 \pm0.26] ~\%\\[0.5cm] \displaystyle m^2_{\rho^\pm}-m^2_{\rho^0} =& \delta m^2=&(1.06 \pm 0.62) ~10^{-3} ~~{\rm GeV}^2 \end{array} \right . \label{rhobrk2} \ee significantly smaller than for ALEPH only (see Eq. (\ref{rhobrk})). In this case, the mass difference between the two $\rho$ mesons is $m_{\rho^\pm}-m_{\rho^0}\simeq 0.68 \pm 0.39$ MeV, whereas $\Gamma_{\rho^\pm}-\Gamma_{\rho^0} \simeq -1.63 \pm 0.39$ MeV only. Summarizing the results gathered in Table \ref{T2}, one can conclude that the $e^+ e^-$ and $\tau$ data do not exhibit inconsistencies. Indeed, the fit of the spectra reported in the third data column is quite satisfactory, and this takes into account all reported experimental information. This agreement at the form factor level does not {\it a priori} mean that similar $g-2$ values will be obtained when using/removing $\tau$ data. Taking into account the statistical significance of the rescaling factors as reflected by their uncertainties (see Table \ref{T2}), we neglect none of them in the rest of this analysis. One may stress once again that these rescaling factors may well only reflect an incomplete account of the isospin breaking effects affecting the $\tau$ spectrum lineshapes, which have to be ``subtracted'' in order to compute estimates of the $\pi \pi$ loop contribution to $g-2$. \vspace{0.5cm} Finally, one may also wonder why such favorable probabilities are so frequently reached. As can be seen in Table \ref{T2}, this reflects the low contributions to the $\chi^2$ yielded by the old $\pi \pi$ data \cite{Barkov,DM1} ($\chi^2/n \simeq 0.6$) and by the (recent) $e^+ e^- \ra (\pi^0/\eta) \gamma$ data \cite{CMD2Pg1999,CMD2Pg2001,CMD2Pg2005,sndPg2000,sndPg2003,sndPg2007} ($\chi^2/n \simeq 0.75$).
Instead, the newly collected $\pi \pi$ data sets \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} yield $\chi^2/n \simeq 0.9$ and the 3--pion data \cite{CMD2-1995corr,CMD2-2006,CMD2KKb-1,CMD2-1998,ND3pion-1991,CMD3pion-1989} $\chi^2/n \simeq 1.1$, quite comparable to the CLEO data ($\chi^2/n \simeq 1.3$). Discarding the contribution of the scale penalty terms, the BELLE data yield $\chi^2/n \simeq 1.4$ and ALEPH $\simeq 0.9$. Therefore, these high probabilities certainly reveal some overestimation of the systematic errors in particular $e^+ e^-$ data sets. When data sets for $e^+ e^- \ra (\pi^0/\eta) \gamma$ with better estimated errors become available, this question will naturally be resolved~; in the meantime, one may conclude that the most precise pieces of information presently provided by the full $e^+ e^- \ra (\pi^0/\eta) \gamma$ cross sections are essentially the partial widths for $\rho^0/\omg/\phi \ra (\pi^0/\eta) \gamma$. On the other hand, this explains why one should carefully check that no inconsistency in the fit is hidden by these favorable probabilities. A good criterion is certainly provided by the $\chi^2/n$ value associated with each data set and by its behavior while modifying the collection of data sets submitted to fit. \section{Consequences for the Hadronic Contribution to $g-2$} \label{gMoins2} \indent \indent An important outcome of our model and of our treatment of the data sets -- {\it i.e.} our fitting procedure -- is related to the estimate of the hadronic contributions to $g-2$ up to 1 GeV. Mixing different processes correlated by the same underlying physics cannot be successful without some clear understanding of the errors in each data set, which should be properly implemented inside the fitting code. Our procedure relies on the whole available information on uncertainties (magnitude and type). 
This allows us to draw conclusions directly from a global fit to several data sets rather than combining physics information derived from separate fits to the individual data samples. However, this does not guarantee that combining various experiments will result in improved uncertainties. Indeed, the way the systematic errors, especially scale uncertainties, combine in the fit procedure cannot be straightforwardly guessed. One may, nevertheless, expect the results to be more reliable~; indeed, assuming the systematics are randomly distributed, the net result of mixing different experiments and/or data sets should be to neutralize them to a large extent\footnote{ Of course, one cannot completely exclude that systematics could "pile up" coherently~; however, if the number of different data sets is large enough, such a possibility looks rather unlikely.}. \vspace{0.5cm} The lowest order contribution of a given annihilation process $e^+ e^- \ra H$ to the muon anomalous magnetic moment $a_\mu=(g-2)/2$ is given by~: \be \displaystyle a_\mu (H) = \frac{1}{4 \pi^3} \int_{s_H}^{s_{cut}} ds ~K(s)~ \sigma(s) \label{eqp2} \ee where $\sigma(s)$ is the Born cross section of the annihilation process $e^+ e^- \ra H$, $s_H$ its threshold squared mass and $s_{cut}$ an assumed end point of the non--perturbative region. $K(s)$ is a known kernel \cite{Fred09} given by the integral~: \be \displaystyle K(s)=\int_0^1 dx \frac{x^2(1-x)}{x^2+(1-x)s/m_\mu^2}~~~, \label{eqp3} \ee $m_\mu$ being the muon mass. For $s>4 m_\mu^2$, this reads~: \be \left \{ \begin{array}{lll} \displaystyle K(s)= \frac{x^2}{2}(2-x^2) + \frac{(1+x^2)(1+x)^2}{x^2} \left[ \ln{(1+x)} -x + \frac{x^2}{2} \right] + \frac{1+x}{1-x}~x^2\ln{x}\\[0.5cm] \displaystyle {\rm with~~~:~~~} x=\frac{1-\beta}{1+\beta} ~~~{\rm and}~~~\beta =\sqrt{1-\frac{4 m_\mu^2}{s}} \end{array} \right. 
\label{eqp4} \ee and, for $0 < s \leq 4 m_\mu^2$, it becomes \cite{Fred09} ($r=s/m_\mu^2$)~: \be \displaystyle K(s)= \frac{1}{2}- r +\frac{1}{2} r (r-2) \ln{r} -2\left(1 -2 r + \frac{r^2}{2}\right) \sqrt{\frac{r}{4 -r} }\arctan{\sqrt{\frac{4-r}{r}}} \label{eqp5} \ee This expression has to be used in order to integrate the cross section for $e^+ e^- \ra \pi^0 \gamma$ below the two--muon threshold. Our global fit provides the theoretical Born cross sections with their parameter values, errors and their (full) covariance matrix. As illustrated above and in \cite{ExtMod1}, the results obtained while fitting several $e^+e^-$ cross sections and $\tau$ spectra altogether are satisfactory. Therefore, we consider that using our cross sections within a Monte Carlo, which fully takes into account the parameters, their errors and correlations, should provide a fairly well motivated value for each accessible $a_\mu (H)$ in our fitting range ({\it i.e.} from each threshold up to $\simeq$1 GeV/c) and for its uncertainty, which appropriately merges statistical and systematic errors. In order to allow for a motivated comparison between our results and experimental estimates for $a_\mu(\pi^+ \pi^-)$, one should also include the effects of Final State Radiation (FSR) into our estimates of the $\pi \pi$ contribution to the muon anomalous moment. Indeed, even if not always manifest, the shift produced by FSR corrections is included in the reported \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} experimental values for $a_\mu(\pi^+ \pi^-)$. The FSR corrections are accounted for by multiplying the $\pi \pi$ Born cross section in Eq. 
(\ref{eqp2}) by \cite{Fred09}~: \be \displaystyle 1+\eta_{FSR}(s)= \left ( 1+ \eta(s) \frac{\alpha_{em}}{\pi} - \frac{\pi \alpha_{em}}{2 \beta} \right ) \frac{\pi \alpha_{em}/\beta}{ 1-\exp{\{-\pi \alpha_{em}/\beta\}} } \label{FSR} \ee where the Schwinger function \cite{Schwinger} $\eta(s)$ can be found, corrected for a misprint, in \cite{Drees}, together with a simplified expression (also derived by Schwinger) valid at the 1.5\% level. The uncertainties affecting the FSR effect estimates are not known~; nevertheless, they are expected to be small in our range of interest \cite{Fred09}~; we therefore neglect their contribution to the errors we report on $a_\mu(\pi \pi)$. Finally, FSR effects on contributions other than $\pi \pi$ to $a_\mu$ are also known to be negligible \cite{Fred09} and are thus neglected. \subsection{Global Fit Results And $e^+e^-$ Data} \indent \indent It is quite important to compare the outcome of our model and fit procedure with experimental data. This should allow one to check for possible methodological biases and to substantiate how the merging of statistical and systematic errors operates. CMD-2 \cite{CMD2-1995corr,CMD2-1998-1} and SND \cite{SND-1998}, for instance, have published values of $a_\mu(\pi^+ \pi^-)$ obtained by numerically integrating their measured $e^+ e^- \ra \pi^+ \pi^-$ cross sections over the interval $\sqrt{s}=0.630 \div 0.958$ GeV, using the trapezoidal method. We have run our code using separately each of the data sets given in resp. \cite{CMD2-1995corr}, \cite{CMD2-1998-1} and \cite{SND-1998}, together with the relevant set of decay partial widths of vector mesons needed in order to determine numerically the SU(3)/U(3)/SU(2) breaking parametrization. 
\begin{table}[!htb] \hspace{-2.cm} \begin{tabular}{|| c || c | c | c | c ||} \hline \hline \rule[-3.mm]{0.mm}{11.mm} Data Set & Experimental Result & Fit Solution & \multicolumn{2}{|c|}{Statistical Information} \\ \hline \rule[-3.mm]{0.mm}{11.mm} ~~ & ~~& ~~& $\chi^2/\rm{dof}$ & Probability\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} CMD--2 (1995)& $362.1 \pm (2.4)_{stat} \pm (2.2)_{syst}$ & $362.57\pm 2.64$ & $42.46/44$ & 53.8\%\\ \hline \rule[-3.mm]{0.mm}{11.mm} CMD--2 (1998)& $361.5 \pm (1.7)_{stat} \pm (2.9)_{syst}$ & $362.36 \pm 2.14$ &$38.05/40$ & 55.9\%\\ \hline \rule[-3.mm]{0.mm}{11.mm} SND (1998) & $361.0 \pm (1.2)_{stat} \pm (4.7)_{syst}$ & $361.09 \pm 2.04$ & $27.10/46$ & 98.8\% \\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK New Timelike $\pi^+\pi^-$ & $360.0 \pm 3.02_{exp}$ $~~^{***}$ & $361.39 \pm 1.72$ & $124.61/128$ & 56.8\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK New Timelike~$\pi^+\pi^-$ + KLOE & $358.5 \pm 2.41_{exp}$ $~~^{***}$ & $360.25 \pm 1.48$ & $255.22/188$ & 0.1\% \\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK($\pi^+\pi^-$) & ~~& $359.50 \pm 1.60$ & $180.09/210$ & 93.3\%\\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK ($\pi^+\pi^-$) + $(\pi^0/\eta) \gamma$ & & $359.42\pm 1.52$ & 373.59/468 & 99.96\%\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK ($\pi^+\pi^-$) + $(\pi^0/\eta) \gamma$ + ($\pi^+ \pi^- \pi^0$) & & $359.31 \pm 1.62 $ & 525.10/597 & 98.4\% \\ \hline \hline \end{tabular} \caption{ \label{T3} Contributions to $10^{10} a_\mu(\pi \pi)$ from the invariant mass region $0.630-0.958$ GeV/c. The data flagged by $~~^{***}$ are combined values proposed in \cite{DavierHoecker} for the New Timelike data altogether, with or without KLOE data \cite{KLOE08}. The first three lines in this Table refer to $a_\mu(\pi \pi)$ experimental values given in resp. \cite{CMD2-1995corr}, \cite{CMD2-1998-1} and \cite{SND-1998}. 
"NSK ($\pi^+\pi^-$)" indicate that {\it all} annihilation data to $\pi^+\pi^-$ are considered.} \end{table} The results are given in the upper part of Table \ref{T3} and look in good correspondence with expectations. The highly favorable probability for the SND spectrum \cite{SND-1998} might be attributable to its larger systematics compared to CMD--2. The middle line in this Table shows the result obtained while fitting simultaneously the three Novosibirsk spectra \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} together. The global fit of these data sets, where systematics are certainly under good control, provides a strong reduction of the global uncertainty -- merging all reported systematic and statistical errors. The improvement of the uncertainty derived from the global fit solution compares favorably with the average value proposed by \cite{DavierHoecker} using a spline method. Therefore, nothing obviously abnormal is recognized in this comparison. The following line shows that KLOE data \cite{KLOE08} allows to reduce a little bit more the uncertainty~; both the central value and the uncertainty compare well with the average proposed by \cite{DavierHoecker}. However, one should note the very low probability of the associated global fit. The next three lines in Table \ref{T3} are also quite interesting. The first of these reports the result obtained when fitting using all available $\pi^+ \pi^-$ data sets -- except for KLOE \cite{KLOE_ISR1}~; this turns out to include into the fitted data the so--called "Old Timelike Data" \cite{Barkov,DM1} (82 measurements) besides the "New Timelike Data" (127 measurements). The central value for $a_\mu(\pi \pi)$ is slightly shifted downwards by about $1 \sigma$ with a negligible gain in precision. 
One also observes that the fit probability makes a large jump ($\simeq 50 \% \Rightarrow \simeq 90 \%$), reflecting the larger uncertainties in the data from \cite{Barkov,DM1} compared to those from \cite{CMD2-1995corr,CMD2-1998-1,SND-1998}. This once more shows that the systematics in the data sets from \cite{Barkov,DM1} have been conservatively estimated. The last 2 lines in Table \ref{T3} exhibit the same trend~; indeed, including in the fit procedure the 86 measurements of $e^+e^- \ra \pi^0 \gamma$, the 182 measurements of $e^+e^- \ra \eta \gamma$ and the 126 points of $e^+e^- \ra \pi^0 \pi^+ \pi^-$ from \cite{CMD2-1995corr,CMD2-2006,CMD2KKb-1,CMD2-1998,ND3pion-1991,CMD3pion-1989} does not, strictly speaking, improve the uncertainty~; however, it is satisfactory to check that the central value only fluctuates within quite acceptable limits. This teaches us some remarkable facts~: \begin{itemize} \item Improving the systematics in all processes having the same underlying physics as $\pi^+ \pi^-$ might be useful to allow a better determination of their own contributions to $a_\mu$, but also of $a_\mu(\pi \pi)$ itself. Indeed, most physics parameters in the quoted processes are the same as in $e^+ e^- \ra \pi^+ \pi^-$. Conversely, higher quality $\pi^+ \pi^-$ data should improve the estimates of the contributions of the other related annihilation processes to $a_\mu$. \item If errors are reasonably well understood and appropriately dealt with, adding poor quality data sets into the fitted sample does not degrade the result~; it allows, however, confirming the stability of the central values, which is certainly a valuable piece of information. 
\end{itemize} However, the most important remark is certainly that using the radiative partial decay widths of light mesons -- and/or the $e^+ e^- \ra (\pi^0/\eta) \gamma$ cross sections -- together with $e^+ e^- \ra \pi^+ \pi^-$ allows a quite significant improvement of the accuracy on $a_\mu(\pi \pi)$ compared to the direct numerical integration of the experimental spectra. This is, indeed, the main advantage of having a constraining global model allowing for an overconstrained global fit. \subsection{Effects Of Including $\tau$ Spectra And KLOE Data} \indent \indent One may question the effects produced by introducing the $\tau$ decay data into our collection of fitted data sets. These are expected to improve the model parameter values (and errors), if their systematics are indeed reasonably well controlled. Our strategy will be to consider all $e^+ e^-$ data sets used just above and look at the effect of including the A \cite{Aleph}, B \cite{Belle} and C \cite{Cleo} data samples in isolation or combined. As above, when using the fit results obtained by constraining the rescaling factors -- {\it i.e.} keeping the $(\lambda/\eta)^2$ terms in the $\chi^2$ expressions for the $\tau$ data samples -- the $\tau$ samples will be denoted A, B and C~; when concentrating on the $\tau$ lineshapes, these will be denoted A$^{sh}$, B$^{sh}$ and C$^{sh}$. The case when fitting the $\tau$ spectra by allowing non--vanishing $\delta g$ and $\delta m^2$ and imposing $\lambda \equiv 0$ will still be referred to as A$_{dm,dg}$, B$_{dm,dg}$ or C$_{dm,dg}$ in our Tables and/or Figures. \subsubsection{Comparison With Standard $\tau$ Based Estimates for $a_\mu$} \indent \indent When letting $\delta g$ and $\delta m^2$ free while imposing $\lambda \equiv 0$, our approach is comparable to those underlying the so--called $\tau$ based estimates \cite{DavierHoecker} of the hadronic contribution to the muon $g-2$. 
Indeed, the whole set of $e^+ e^-$ data fixes in a data driven mode all identified isospin breaking corrections~: $\rho-\omg$ and $\rho-\phi$ meson mixing, $\rho$ meson mass and width differences, the I=0 part of the $\rho$ revealed by its coupling to the photon\footnote{This contribution is currently not considered as such \cite{DavierHoecker}. It should be partly absorbed in the $\rho$ meson width corrections. } (see Eq. (\ref{referee1})) and FSR corrections. The pion mass difference is plugged in directly. It is thus interesting to compare our results in this case with the existing estimates, in order to check the effects of a global fit. The hadronic contributions to the muon $g-2$ derived from $\tau$ data have been updated recently in \cite{DavierHoecker}~; the numerical contributions to $a_\mu$ from the reference region $\sqrt{s} \in [0.63,0.958]$ are not published but have been kindly communicated to us \cite{ZhangPriv}. \begin{table}[!htb] \hspace{-2.cm} \begin{tabular}{|| c || c | c | c | c | c ||} \hline \hline \rule[-3.mm]{0.mm}{11.mm} Data Set & $a_\mu(\pi\pi) $ & $a_\mu(\pi\pi) $ & \multicolumn{3}{|c|}{Statistical Information} \\ \hline \rule[-3.mm]{0.mm}{11.mm} ~~ & Experimental Result & Fit Solution & $\chi^2_\tau/\rm{dof}$& $\chi^2_{ee}/\rm{dof}$ & Probability\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} A$_{dm,dg}$ \cite{Aleph}& $364.02 \pm (2.19)_{exp} \pm (1.97)_{Br} \pm (1.51)_{IB}$ & $362.83\pm 1.46$ & $45.52/37$ & $122.12/127$ & 95.3\%\\ ~~& $(364.02 \pm 3.31_{tot})$ & ~~ & ~~& ~~& ~~\\ \hline \rule[-3.mm]{0.mm}{11.mm} B$_{dm,dg}$ \cite{Belle} & $366.44 \pm (1.02)_{exp} \pm (5.70)_{Br} \pm (1.51)_{IB}$ & $364.85 \pm 1.32$ &$41.95/19 $ &$128.58/127 $ & 76.5\%\\ ~~& $(366.44 \pm 5.98_{tot})$ & ~~ & ~~& ~~& ~~\\ \hline \rule[-3.mm]{0.mm}{11.mm} C$_{dm,dg}$ \cite{Cleo} & $366.62 \pm (4.17)_{exp} \pm (6.37)_{Br} \pm (1.51)_{IB}$ & $364.23 \pm 1.82$ & $63.01/29$ & $125.63/127$& 70.0\% \\ ~~& $(366.62 \pm 8.05_{tot})$ & ~~ & ~~& ~~& ~~\\ \hline \hline 
\rule[-3.mm]{0.mm}{11.mm} OPAL \cite{Opal} & $354.40 \pm (4.67)_{exp} \pm (4.78)_{Br} \pm (1.51)_{IB}$ & -- & -- & -- & -- \\ ~~& $(354.40 \pm 6.85_{tot})$ & ~~ & ~~& ~~& ~~\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} ALL $\tau$ Sets & $367.46 \pm (1.31)_{exp} \pm (1.59)_{Br} \pm (1.51)_{IB}$ & $367.12 \pm 1.30$ &$102.06/85 $&$143.32/127$ & 58.2\%\\ ~~& $(367.46 \pm 2.55_{tot})$ & ~~ & ~~& ~~& ~~\\ \hline \hline \end{tabular} \caption{ \label{T3b} Contributions to $10^{10} a_\mu(\pi \pi)$ from the invariant mass region $0.630-0.958$ GeV/c. The experimental values \cite{ZhangPriv} are derived using the method from \cite{DavierHoecker}. The experimental average includes OPAL data, our fit result does not. The meaning of A$_{dm,dg}$, B$_{dm,dg}$, C$_{dm,dg}$ is explained in the text.} \end{table} Table \ref{T3b} displays the experimental data derived from each existing $\tau$ data set and their combination \cite{DavierHoecker,ZhangPriv} in the first data column. One may note that the proposed experimental average is larger than each of the individual estimates~; actually, in order to perform the average, each individual estimate has been rescaled to the world average value for ${\cal B}_{\pi \pi}$ \cite{ZhangPriv}. The total (experimental) errors displayed are computed by summing up the various components in quadrature. This is provided in order to allow for an easy comparison with our fit result. The data columns $\chi^2_\tau/\rm{dof}$ and $\chi^2_{ee}/\rm{dof}$ display the contribution to the total $\chi^2$ provided by resp. the $\tau$ and the so--called New Timelike $e^+ e^-$ annihilation data \cite{CMD2-1995corr,CMD2-1998-1,SND-1998}, which serve as quality tags. One should note the nice correspondence between our central values and the corresponding experimental estimates. The improvement of the total errors provided by the global fit method is also worth mentioning. 
The reduction of the uncertainties provided when using $\tau$ data looks even more important than when using $e^+e^-$ data alone. Of course, the errors provided there, as anywhere in this paper, are the {\sc minuit} errors returned by the fits. The result for (A$_{dm,dg}$ B$_{dm,dg}$ C$_{dm,dg}$) -- last line in Table \ref{T3b} -- is also quite remarkable. It clearly illustrates that our global fit does not lead to a standard averaging, but takes into account the relative mismatch of the A, B and C lineshape distortions noted in Subsection \ref{preliminaire}. In this procedure, the fit average is significantly pushed upwards, in accord with the experimental estimate\footnote{ This would be slightly larger if the OPAL data set \cite{Opal} were removed.}. The data columns providing the $\chi^2$ information are also quite important. As already noted in Subsection \ref{preliminaire}, $\chi^2_{ALEPH}$ is reasonably good simultaneously with the New Timelike $e^+ e^-$ data. Comparing $\chi^2_{ALEPH}$ here (45.52) with its homologue in Table \ref{T0} -- fourth line therein -- (29.17) reveals that the ALEPH data sample meets some difficulty in accommodating the data for $e^+ e^- \ra \pi^+ \pi^- \pi^0$. However, the description remains quite reasonable and one may conclude that there is no real mismatch, within our approach, between the ALEPH data and the VMD expectations. The $a_\mu$ value just derived from the ALEPH data compares well with those derived from $e^+ e^-$ data only (see Table \ref{T3}). The values for $a_\mu$ derived from the BELLE and CLEO data in isolation, even if slightly larger than expected from VMD inference, are not in strong disagreement with them. However, as can be seen from either Table \ref{T3b} or Table \ref{T0} (see the $\lambda \equiv 0$ entries therein), the fit quality, as reflected by $\chi^2_{BELLE}$ and $\chi^2_{CLEO}$, looks significantly poorer (both yield $\chi^2/npoints \simeq 2$) than for ALEPH ($\chi^2/npoints \simeq 1.1 \div 1.2$). 
Whether this behavior reveals specific systematics is an open issue. When performing the global fit with all $\tau$ data samples (last line in Table \ref{T3b}), the weight of the $\tau$ data happens to become dominant. In this case, the fit of the $\tau$ data is nearly unchanged ($\chi^2_{ALEPH}=26.64$, $\chi^2_{BELLE}=45.89$, $\chi^2_{CLEO}=29.52$)~; however, even if apparently reasonable, the new timelike $e^+ e^-$ data yield a $\chi^2_{ee}$ increased by 20 units, which is far enough from optimum that we consider the corresponding $a_\mu$ value with great care. The analysis in Subsection \ref{preliminaire} has shown that rescaling factors and lineshape distortions are sharply related. However, in order to derive $\tau$ based estimates for $a_\mu$, one should remove all distortions produced by isospin breaking effects specific to the $\rho^\pm$ meson relative to the $\rho^0$. Therefore, we do not consider reliable the estimates provided in Table \ref{T3b}, except possibly that in the A$_{dm,dg}$ entry, as the distortions in the B and C samples have still to be better understood. \subsubsection{Additional $\tau$ Based Estimates of $a_\mu$} \indent \indent Our analysis in Subsection \ref{preliminaire} provided a serious hint that rescaling the $\tau$ spectra can be an effective way to account for shape distortions produced by IB effects which differentiate the $\rho^\pm$ and $\rho^0$ mesons. We just provided results assuming no scale correction to the $\tau$ data samples ($\lambda_{ALEPH}=\lambda_{BELLE}=\lambda_{CLEO}=0$). In order to provide a complete picture, it is worth considering two more sets of fitting conditions. In this Subsection, we will present the results derived by~: {\cal i/} fitting the $\tau$ spectrum lineshapes by the method emphasized in Subsection \ref{taulineshape}, {\cal ii/} fitting the $\tau$ spectra by allowing non--vanishing rescalings, however constrained by the relevant $(\lambda/\eta_{Exp})^2$ term for each of the $\tau$ data sets. 
\begin{table}[!htb] \hspace{+1.5cm} \begin{tabular}{|| c || c | c | c ||} \hline \hline \rule[-3.mm]{0.mm}{11.mm} Data Set & Fit Solution & \multicolumn{2}{|c|}{Statistical Information} \\ \hline \rule[-3.mm]{0.mm}{11.mm} ~~ & $10^{10} a_\mu(\pi \pi)$ & $\chi^2/\rm{dof}$ & Probability\\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK ($e^+ e^-$) & $359.31 \pm 1.62$ & $525.10/597$ & 98.4\%\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK +A$^{sh}$ B$^{sh}$ C$^{sh}$ & $359.62 \pm 1.51$ & $605.90/679$ & 97.9\%\\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK + A ($\chi^2_\tau=$ 28.39) & $362.24 \pm 1.52$ & 568.60/632 & 96.6\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + B ($\chi^2_\tau=$ 32.59) & $360.68 \pm 1.47$ & 558.88/614 & 94.6\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + C ($\chi^2_\tau=$ 39.13) & $360.52 \pm 1.55 $ & 565.24/624 & 95.5\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + A B C & $364.48 \pm 1.34$ & 648.60/680 & 80.1 \% \\ \hline \hline \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE & $359.22 \pm 1.32$ &652.92/657& 51.9\%\\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE + A$^{sh}$ B$^{sh}$ C$^{sh}$ & $358.52 \pm 1.32$ & 741.50/739 & 46.7\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE + A B C & $364.04 \pm 1.25$ & 792.28/740 & 8.9\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE + A ($\chi^2_\tau=$ 49.59) & $361.55 \pm 1.31$ & 708.90/692 & 32.0\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE + B ($\chi^2_\tau=$ 34.06) & $360.19 \pm 1.16$ & 688.02/674 & 34.6\% \\ \hline \rule[-3.mm]{0.mm}{11.mm} NSK + KLOE + C ($\chi^2_\tau=$ 39.89)& $360.11 \pm 1.32$ & 693.40/684 & 39.3\% \\ \hline \end{tabular} \caption{ \label{T4} Contributions to $10^{10} a_\mu(\pi \pi)$ from $\sqrt{s} \in[0.630,0.958]$ GeV/c. $\tau$ data set configurations are considered together with the $e^+e^-$ data and the appropriate subset of decay partial widths. The meaning of A, B, C, and A$^{sh}$, B$^{sh}$, C$^{sh}$ is given in the text. 
The contribution of single $\tau$ subsets to the total $\chi^2$ is indicated when relevant by ($\chi^2_\tau=\cdots$). For lineshape fits without KLOE data, we have $\chi^2_\tau=$ 18.22 for A, $\chi^2_\tau=$ 28.35 for B and $\chi^2_\tau=$ 31.88 for C. } \end{table} The uppermost part of Table \ref{T4} collects our results for $a_\mu$ obtained using the $\tau$ data samples, except for the first line, which is nothing but the final result in Table \ref{T3}. The second line shows that including the $\tau$ lineshapes into the fit (A$^{sh}$, B$^{sh}$, C$^{sh}$) gives results perfectly consistent with using only $e^+ e^-$ data, with a slightly improved uncertainty. As the fit qualities of A$^{sh}$, B$^{sh}$ and C$^{sh}$ on the one hand, and of each of the $e^+ e^-$ data sets on the other hand, are simultaneously optimum, one may consider this estimate safe. Compared with the pure $e^+e^-$ estimates, the next 3 lines show the effects of including each of A, B and C in isolation\footnote{And keeping the $(\lambda/\eta)^2$ term included in the $\chi^2$ expression, as stated above.}. The $a_\mu$ value found for ALEPH agrees with that in the A$_{dm,dg}$ entry in Table \ref{T3b}. Interestingly, the values found using either B or C are significantly lower than their homologues in Table \ref{T3b}, but in nice correspondence with the VMD expectations (see Table \ref{T3}). This may indicate that some rescaling is needed for these, consistent with their uncertainties. A simultaneous use of A, B and C (line flagged with NSK+ABC) provokes an upward shift by $4~10^{-10}$, as already observed in Table \ref{T3b}. The behavior of the combination under the fit illustrates, once again, that the shape distortions exhibited by each of A, B and C are different and that the compensation performed by the fit degrades the description of the $e^+ e^-$ data. This produces the increased value for $a_\mu$, as also observed in the previous Subsection. 
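To make the non--averaging behavior of the global fit explicit, one can compare a naive inverse-variance combination of the three single-sample results of Table \ref{T4} (NSK + A, NSK + B, NSK + C) with the simultaneous fit value. The sketch below is only illustrative~: it deliberately ignores the correlations induced by the shared $e^+ e^-$ data and model parameters, which is precisely what the global fit does not do.

```python
# Single-sample fit results from Table T4, in units of 1e-10
values = [362.24, 360.68, 360.52]   # NSK + A, NSK + B, NSK + C
errors = [1.52, 1.47, 1.55]

weights = [1.0 / e ** 2 for e in errors]
naive_mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
naive_err = (1.0 / sum(weights)) ** 0.5

# A plain weighted average stays close to the individual values (~361.1),
# whereas the simultaneous fit NSK + A B C yields 364.48 +- 1.34: the global
# fit arbitrates between mutually incompatible lineshape distortions at the
# expense of the e+e- description, instead of averaging.
print(f"naive average: {naive_mean:.2f} +- {naive_err:.2f}")
```

The gap between the two numbers quantifies how much the combined fit is pulled away from any standard combination of the individual results.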
\subsubsection{Including KLOE Data} \indent \indent Up to now, we have not introduced the KLOE \cite{KLOE_ISR1} data into the data sets submitted to fit. Indeed, even if considered acceptable, its best $\chi^2/n_{points}$ looks large \cite{ExtMod1}. It is nevertheless instructive to point out explicitly its effects. Table 2 in \cite{ExtMod1} indicates that three among the rescaling coefficients can be cancelled out~; one only has to let those corresponding to the global scale ($\varepsilon_0$) and to the acceptance correction ($\varepsilon_2$) \cite{KLOE_ISR1} vary. This provides the results given in the lower half of Table \ref{T4}. Even if the probability decreases due to the intrinsically high (minimum) $\chi^2$ of the KLOE data sample, one observes a good consistency of the $a_\mu(\pi \pi)$ value derived from fitting the (rescaled) KLOE data together with all $e^+ e^-$ Novosibirsk data. One also starts getting much improved uncertainties. One may note the good probability when using all NSK data together with KLOE and A$^{sh}$ B$^{sh}$ C$^{sh}$, and the improved uncertainties on $a_\mu(\pi \pi)$. When using A, B and/or C, one finally observes the same trend as in the upper part of Table \ref{T4}, which leads us to be as cautious in this case. Nevertheless, one may conclude that the KLOE data do not degrade the expectations from the NSK data, whatever is done with the $\tau$ data. In order to assess the effects of a global fit, it is also interesting to compare the result for NSK+KLOE in Table \ref{T4} with the corresponding average published in \cite{DavierHoecker}~: $${\rm~~Global~Fit~:}~a_\mu(\pi \pi)= 359.22 \pm 1.32 ~~\Longleftrightarrow {\rm Average~\cite{DavierHoecker}~:}~a_\mu(\pi \pi)= 358.51 \pm 2.41_{exp} $$ in units of $10^{-10}$. As for the NSK data alone, the central value from our fit is slightly larger than the estimate from \cite{DavierHoecker} and the uncertainty is significantly improved. 
\subsubsection{A Partial Summary} \indent \indent In summary, the picture exhibited when using the $\tau$ data samples looks somewhat intricate. We have shown above, especially in Subsection \ref{preliminaire}, that the lineshape distortions and the absolute scale of the spectra are intimately related. However, the present analysis tends to indicate that an important part of the reported discrepancy between $e^+ e^-$ and $\tau$ based estimates of $g-2$ is related to the more or less difficult way of accounting for the isospin breaking effects differentiating the $\rho^\pm$ and $\rho^0$ lineshapes. This conclusion is reinforced by comparing the behavior of the ALEPH data set with those of BELLE and CLEO~: the distortions of the former set can be accounted for within our model, providing a reasonable accord with the VMD expectations. As for BELLE and CLEO, genuine mass and width differences compared to the $\rho^0$, which successfully work with the ALEPH data, do not remove a residual distortion effect which has important consequences when estimating $a_\mu$. Interestingly, the normalized $\tau$ spectra do not reveal the same problem. This is certainly due to the fact that a free rescaling allows one to disconnect the lineshape distortions, as parametrized by mass and width differences ($\delta g$ and $\delta m^2$ in our model), from the absolute scale. This indicates that the distortions exhibited by BELLE and CLEO are not understood. Of course, one cannot exclude that the agreement with ALEPH is purely accidental and that a more refined IB mechanism might have to be considered. Figure \ref{Compilation} displays graphically the main conclusions discussed in this Section. The vertical line centered at the value derived by fitting all Novosibirsk data is drawn to guide the eye. Comparing our estimates with the experimental data reported there is also interesting by itself. It indicates that the individual estimates of $a_\mu$ provided by each of the $\tau$ data sets are close to the $e^+e^-$ based estimate. 
The combined fit of these, instead, exhibits a clear discrepancy, as shown by the data points flagged by NSK + A+B+C, NSK + A$_{dm,dg}$+B$_{dm,dg}$+C$_{dm,dg}$ or with KLOE. Interestingly, the comparison of the various data points exhibits the same trend. \subsubsection{Comments On Our Model Uncertainties} \label{moderr} \indent \indent Uncertainties intrinsic to our model may have to be estimated. For this purpose, having alternative models able to cover the same scope as our extended model would be desirable. Unfortunately, no such models seem to exist which could cope with detailed descriptions of several decay channels over our whole fitting range. Some interesting attempts have been initiated relying on Chiral Symmetry and the properties of the Roy equations \cite{Colangelo1,Colangelo2}~; however, nothing final is presently available. In our case, one cannot make estimates by changing, for instance, the $\rho$ parametrization (Gounaris--Sakurai, Breit--Wigner, Kuhn--Santamaria \ldots) as sometimes done elsewhere. Indeed, the form factor lineshape is intrinsic to the model, including the IB vector meson mixing. However, the difference between the central values of our estimates and the corresponding experimental estimates \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} may give some hint on the bound for our model uncertainty. \subsection{Comparison with $a_\mu$ Estimates Collected by the ISR Method} \indent \indent In summary, our preferred final result is derived using all $e^+e^-$ data collected at Novosibirsk~: \be a_\mu(\pi \pi;~0.630\div0.958 {\rm~~GeV}) = [ 359.31 \pm1.62_{exp} ]~ 10^{-10} \label{finalamu} \ee and corresponds to an (underlying) fit probability of 98.4\%. This result can be compared with the recent estimate published by KLOE \cite{KLOE08}~: $$a_\mu(\pi \pi; 0.630 < m_{\pi\pi}<0.958) = (356.7 \pm 0.4 \pm 3.1)~~ 10^{-10}$$ based on a newly collected data set \cite{KLOE_ISR1,KLOEnote}. 
This estimate may look a little bit low, but it is consistent with ours at the $\simeq 0.7 ~\sigma$ level. On the other hand, the BaBar Collaboration has recently finalized a new data set \cite{BaBar}, also collected using the ISR method. The contribution it provides to $a_\mu(\pi \pi)$ in the canonical $s$ interval can be found in \cite{DavierHoecker2}~: $$a_\mu(\pi \pi;~0.630\div0.958 {\rm~~GeV}) = (365.2 \pm 1.9 \pm 1.9)~ 10^{-10} =~(365.2 \pm 2.7_{tot})~ 10^{-10}$$ which supersedes a preliminary result presented at the Novosibirsk Conference TAU08 \cite{Davier2009} ($a_\mu(\pi \pi) = (369 \pm 0.8 \pm 2.9)~ 10^{-10}$). Even if larger than Eq. (\ref{finalamu}), this estimate is consistent with ours at the $1.9 ~\sigma$ level. Comparing our result with these recent estimates clearly illustrates the advantage of introducing information beyond the $e^+ e^- \ra \pi^+ \pi^-$ cross section, within a framework which encompasses annihilation processes and partial decay widths of light mesons. Indeed, the IB schemes needed can be calibrated consistently on the data set. The fact that the HLS framework does not presently go much beyond the $\phi$ mass region is certainly a handicap, but does not prevent interesting improvements. \subsection{Contributions to $a_\mu$ Up To 1 GeV} \indent \indent The discussion above leads us to consider the best motivated results for $a_\mu$ as those derived from the fit to the Novosibirsk $e^+ e^-$ data samples, taking into account the photon hadronic VP (see Eq. (36) in \cite{ExtMod1}). A challenging choice might be to include the $\tau$ {\it normalized} spectra, which have been shown to yield a satisfactory description simultaneously with all $e^+ e^-$ data. We have seen that BELLE and CLEO seem to meet problems with their shape distortions, which are not clearly identified. However, as A$_{\delta m,\delta g}$ is reasonably well understood, one may consider its results for $a_\mu$. 
Using these data samples, one can estimate several contributions\footnote{Actually, one could have produced, as well, these contributions up to about 1.05 GeV/c in order to include the $\phi$ region. As our fitted region includes the $\phi$ peak, our results would be as reliable. However, the largest ($K \overline{K}$) contribution involving the $\phi$ region is presently left aside because of the issue raised by \cite{BGPter} and still unsolved. } to $ a_\mu$ up to 1 GeV. They are given in Table \ref{T6}. We thus provide the results obtained while fitting without any $\tau$ sample in the first data column. The last data column, instead, shows the influence of KLOE data \cite{KLOE_ISR1,KLOEnote}. \begin{table}[!htb] \hspace{-1.5cm} \begin{tabular}{|| c || c | c |c |c ||} \hline \hline Process \rule[-3.mm]{0.mm}{11.mm} & NSK (no $\tau$) & NSK + A$^{sh}$ B$^{sh}$ C$^{sh}$ & NSK + A$_{\delta m,\delta g}$ & NSK + KLOE + A$^{sh}$ B$^{sh}$ C$^{sh}$\\ \hline \rule[-3.mm]{0.mm}{11.mm} $\pi^+ \pi^-$ & $492.02 \pm 2.24 $ & $ 491.66 \pm 1.98$ & $496.20\pm 1.85$ & $492.24\pm 1.79$\\ \hline \rule[-3.mm]{0.mm}{11.mm} $\pi^0 \gamma$ & $4.53\pm 0.04 $ & $4.53 \pm 0.04$ & $4.55 \pm 0.04$& $4.51 \pm 0.05$\\ \hline \rule[-3.mm]{0.mm}{11.mm} $\eta \gamma$ & $0.17 \pm 0.01 $ & $0.17 \pm 0.01$ & $0.17 \pm 0.01$ & $0.17 \pm 0.01$\\ \hline \rule[-3.mm]{0.mm}{11.mm} $\eta^\prime \gamma$ & $0.01 \pm 0.00$ &$0.01 \pm 0.00$ & $0.01 \pm 0.00$ & $0.01 \pm 0.00$\\ \hline \rule[-3.mm]{0.mm}{11.mm} $\pi^+ \pi^-\pi^0$ & $36.99 \pm 0.55 $ & $36.97 \pm 0.56 $ & $36.83 \pm 0.56$ & $37.07 \pm 0.55$ \\ \hline \hline \rule[-3.mm]{0.mm}{11.mm} Total & $533.72 \pm2.32$ & $533.34 \pm 2.07 $ & $537.76 \pm 2.08$ & $534.01 \pm 1.90$\\ \hline Fit Probability & 98.4\% &98.0\% & 95.3\% & 50.8\%\\ \hline \hline \end{tabular} \caption{ \label{T6} Contributions to $10^{10} a_\mu$ from thresholds up to 1 GeV/c for various processes fitting the data sets indicated on the first line~; by NSK we mean the set of all $e^+ 
e^-$ data except for KLOE. The errors provided merge the reported statistical and systematic uncertainties. } \end{table} One can note that the estimate based only on $e^+e^-$ annihilation data is consistently improved by including within the fitted data sets the $\tau$ lineshapes from ALEPH, BELLE and CLEO. Instead, using A$_{\delta m,\delta g}$ produces a shift by 4 units, while reducing the uncertainty in the same way as the $\tau$ lineshapes. Whether this estimate should be preferred is an open question, awaiting a better understanding of the $\tau$ spectrum distortions. On the other hand, including KLOE data does not produce differences with respect to using only the Novosibirsk data, but instead provides improved uncertainties. The (always negligible) contribution of the annihilation process $\eta^\prime \gamma$ is a prediction which is entirely determined by the $\eta \gamma$ decay channel and our model, which implies a tight correlation between the $\eta \gamma$ and $\eta^\prime \gamma$ final states \cite{ExtMod1}. So close to its threshold, this contribution could have been expected to be small~; it should obviously become larger once the $\phi$ region is included. As a final remark, one may conclude that the global fit method indeed performs as expected. Moreover, as far as we know, our method is the first one which can associate a probability with the various reported contributions to the muon anomalous magnetic moment. Of course, the quoted probabilities refer to the fits underlying the estimates and not to the estimates themselves, as these are only derived from computations using the fit results (parameter values and full error covariance matrix). One may infer that our preferred numerical results for $a_\mu$ -- any of the first two data columns in Table \ref{T6} -- should increase the discrepancy between the prediction and the BNL measurement of the muon $g-2$.
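As a consistency check on Table \ref{T6}, the `Total' row can be verified to equal the sum of the central values of the individual channels in each column (the quoted errors, by contrast, come from the full fit covariance matrix and are not a simple quadratic sum):

```python
# Central values from Table T6 (units of 1e-10); columns are the four fit configurations:
# NSK (no tau) | NSK + A^sh B^sh C^sh | NSK + A_{dm,dg} | NSK + KLOE + A^sh B^sh C^sh
channels = {
    "pi+ pi-":     [492.02, 491.66, 496.20, 492.24],
    "pi0 gamma":   [4.53, 4.53, 4.55, 4.51],
    "eta gamma":   [0.17, 0.17, 0.17, 0.17],
    "eta' gamma":  [0.01, 0.01, 0.01, 0.01],
    "pi+ pi- pi0": [36.99, 36.97, 36.83, 37.07],
}
totals = [533.72, 533.34, 537.76, 534.01]

for col, total in enumerate(totals):
    s = sum(vals[col] for vals in channels.values())
    assert abs(s - total) < 0.02, (col, s, total)  # consistent up to rounding
print("all four totals consistent")
```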
Improved estimates can be expected from using also the new data samples based on the ISR method collected by KLOE and BaBar. \section{Summary And Concluding Remarks} \label{FinalConclusion} \indent \indent In the first part of this study \cite{ExtMod1}, we defined the Extended HLS Model and recalled the mechanisms implementing U(3)/SU(3)/SU(2) symmetry breaking, notably the vector meson mixing provided by breaking Isospin symmetry. This was shown to provide a satisfactory simultaneous description of all $e^+ e^-$ data sets considered. The annihilation channels successfully covered ($\pi^+ \pi^-$, $\pi^0 \gamma$, $\eta \gamma$ and $\pi^0 \pi^+ \pi^-$) represent a large amount of data. It is also the largest set of annihilation channels simultaneously analyzed so far. The present work confirms that the dipion spectrum in $\tau$ decay is also within the reach of this model. The $\tau$ data sets collected by the ALEPH, BELLE and CLEO Collaborations have been carefully examined in isolation and combined within the context of a global fit performed together with a large set of $e^+ e^-$ annihilation data. For this purpose, we found it appropriate to introduce additional Isospin breaking (IB) effects providing distortions of the $\rho^\pm$ lineshape compared to $\rho^0$. This was partly considered in our former study \cite{taupaper}, but totally ignored in preliminary versions of the present work like \cite{Pekin}. This additional mechanism (IB shape distortions) consists of a coupling difference $\delta g$ of the $\rho^\pm$ and $\rho^0$ to a pion pair and a mass difference $\delta m^2$ between these two mesons. Each of the ALEPH (A), BELLE (B) and CLEO (C) experiments reports a global scale uncertainty on its dipion spectrum. This scale uncertainty is much smaller for A (0.51\%) than for B (1.53\%) or C (1.75\%).
The absolute scale of each $\tau$ spectrum can then be modified by a factor $1+ \lambda_{A/B/C}$, where $\lambda_{A/B/C}$ is a fit parameter constrained, for each of A, B and C, by its scale uncertainty. However, it has been proved that, in minimization procedures, $\delta g$, $\delta m^2$ and these scales are tightly correlated. Therefore, one cannot interpret $\lambda_{A/B/C} \ne 0$ as purely reflecting biases on the measured branching ratios ${\cal B}(\tau \ra \pi^\pm \pi^0 \nu)\equiv {\cal B}_{\pi \pi}$. To be explicit, the connection between pure lineshape distortion parameters and the absolute scale, which is the subject of Subsection \ref{preliminaire}, prevents one from reliably extracting an unambiguous numerical value for any rescaling. It has been shown -- see Table \ref{T0} -- that the ALEPH data are in accord with VMD expectations, provided some shape distortions ($\delta g$, $\delta m^2$) are implemented. In this case, $\lambda_A=0$ is even found quite acceptable. This means that, relying on ALEPH data only, the presently accepted value for ${\cal B}_{\pi \pi}$ is not contradicted by our analysis. This result is tightly connected with the fact that the lineshape distortions of the ALEPH spectrum are well accounted for by our parametrization ($\delta g$, $\delta m^2$). The picture is not as clear with BELLE and CLEO, indicating that some residual problem survives, mostly visible in these data sets~; this could well be due to an overly simple-minded shape distortion mechanism~; however, this could also reflect specific systematics in these experiments or, as well, a real physical problem revealed by their larger statistics. A downward rescaling of their ${\cal B}_{\pi \pi}$ is found, however, in reasonable accord with their reported uncertainties. Moreover, the {\it lineshapes} of the normalized spectra provided by A, B and C happen to exhibit no problem at all when fitted together with the largest possible set of $e^+ e^-$ annihilation data.
This is certainly due to having dropped the correlation between absolute scale and lineshape distortions. In this case, one even finds no need to introduce significant IB shape distortions. The $\tau$ lineshapes and the $e^+ e^-$ data are optimally fitted, as is clear from the individual $\chi^2$ given in Table \ref{T2}. Collecting all pieces of information in this study, one cannot claim the existence of a fundamental issue, at the pion form factor level, between $e^+ e^-$ and $\tau$ data. Indeed, Table \ref{T2}, which summarizes our simultaneous description of $e^+ e^-$ annihilation and $\tau$ decay data, displays a fit quality of about 80\% with quite acceptable parameter values. This may imply as well that ${\cal B}_{\pi \pi}$ is slightly overestimated or that some additional systematics affect the $1/N dN/ds$ spectrum in some experiments. \vspace{0.5cm} The most important aim of the present work was to study the effects of a global fit on the estimation of hadronic contributions to $a_\mu$, the muon $g-2$, from the various thresholds up to 1 GeV. For this purpose, our treatment of statistical and systematic errors accounted as closely as possible for the information provided for each data sample. One has first checked that the fit performs as one could expect on individual $e^+ e^- \ra \pi^+ \pi^-$ data samples. Comparing our fit results with individual experimental information \cite{CMD2-1995corr,CMD2-1998-1,SND-1998} about the contribution of the reference region $(0.630,~0.958)$ GeV to $a_\mu(\pi^+ \pi^-)$ is already interesting~: Table \ref{T3} shows that the information returned by the fits is in good agreement with the corresponding experimental information.
Performing the global fit with these data samples, where systematics have certainly been considered with great care, provides quite reasonable central values for the individual estimates of $a_\mu(\pi^+ \pi^-)$ and shrunk uncertainties~; this shrinking becomes noticeable when fitting simultaneously all $e^+ e^-$ data from \cite{CMD2-1995corr,CMD2-1998-1,SND-1998}~: The uncertainty is reduced from 3.02 to 1.72 in units of $10^{-10}$, {\it i.e.} better than a 40 \% improvement. The next step was to include the older $e^+ e^- \ra \pi^+ \pi^-$ data samples \cite{Barkov,DM1}, where systematics are certainly not as well controlled as in \cite{CMD2-1995corr,CMD2-1998-1,SND-1998}. One thus gets a shift $\Delta a_\mu(\pi^+ \pi^-)\simeq -1.9$ (in units of $10^{-10}$) while leaving the uncertainty nearly unchanged. Including also all the $e^+ e^- \ra (\pi^0/\eta) \gamma$ and $e^+ e^- \ra \pi^0 \pi^+ \pi^-$ data samples does not produce significant modifications, showing that the effects of systematics when combining data samples of various qualities may prevent improvements. However, the corresponding combined fit does not degrade the information~; it rather allows one to check the stability of the estimate. The final uncertainty improvement is at the level of 46 \%. At this point where all information on $e^+ e^- $ annihilations -- except for KLOE ISR data -- is considered, one ends up with~: $$ a_\mu(\pi^+ \pi^-;~0.630 \div 0.958 ) = 359.31 \pm 1.62$$ (in units of $10^{-10}$) where the error combines systematic and statistical uncertainties and accounts for the sample-to-sample correlations. The probability of the underlying fit to the data is 98.4\%. One has shown that these high probability values are a normal consequence of overly conservative estimates of systematics in given data sets. This has been shown not to hide bad descriptions of some data subsets.
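The quoted relative improvements of the uncertainty can be checked directly from the numbers above (all in units of $10^{-10}$):

```python
def improvement(before, after):
    """Relative reduction of an uncertainty, in percent."""
    return 100.0 * (before - after) / before

# combined fit of the CMD-2/SND samples: 3.02 -> 1.72
print(round(improvement(3.02, 1.72), 1))  # 43.0, i.e. better than 40%
# all e+e- data (except KLOE ISR): 3.02 -> 1.62
print(round(improvement(3.02, 1.62), 1))  # 46.4, i.e. at the level of 46%
```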
Including KLOE data does not modify the picture, except for the probabilities which may become very low, reflecting the intrinsically large minimum $\chi^2$ for this data set. Adding the various $\tau$ data to the set of $e^+ e^- $ data samples is also worth doing stepwise, in order to substantiate problems and specific properties of each of the A (ALEPH), B (BELLE) and C (CLEO) data sets. We have first compared (see Table \ref{T3b}) our reconstructed values for $a_\mu$ with the $\tau$-based estimates of \cite{DavierHoecker} for the reference region $0.630 \div 0.958$ GeV \cite{ZhangPriv}. The central values are found in agreement while the uncertainties are substantially reduced. This is clearly an effect of our global fit/modelling. One should also note that the simultaneous fit to A, B, C and the $e^+ e^- $ data provides a value for $ a_\mu$ larger than for each of A, B and C separately. Interestingly, this property is also exhibited by the combined experimental estimate proposed by the authors of \cite{DavierHoecker,ZhangPriv}. In this case, we show that introducing the A, B and C lineshapes inside our fit procedure allows one to confirm the value for $ a_\mu(\pi^+ \pi^-;~0.630 \div 0.958 )$ derived using only $e^+ e^- $ data, with a further reduction of its uncertainty~: $$ a_\mu(\pi^+ \pi^-;~0.630 \div 0.958 ) = [359.62 \pm 1.51] ~10^{-10}~.$$ Using additionally the KLOE data leads to~: $$ a_\mu(\pi^+ \pi^-;~0.630 \div 0.958 ) = [358.52 \pm 1.32] ~10^{-10}~.$$ Both results correspond to good probabilities of the underlying form factor fits. However, our study leads us to conclude that some distortions of the lineshape, different for A and B/C, are at work which still need to be understood. Whether one is faced here with an incomplete IB distortion modelling, with unaccounted-for systematics, or with some external physical effect, is an open issue.
Until this issue is clarified, going much beyond the $\tau$ lineshapes for $ a_\mu(\pi^+ \pi^-)$ estimates looks hazardous. Our results concerning $ a_\mu(\pi^+ \pi^-;~0.630 \div 0.958 )$ are summarized in Figure \ref{Compilation}. This clearly illustrates that shape distortions and systematics are not completely understood, even if at the form factor level the picture is more optimistic~; this shows that $a_\mu$ estimates are a more sensitive probe of differences than fit probabilities of the underlying form factors. For now, we prefer to provide our final results concerning the various contributions to $a_\mu$ from thresholds to 1 GeV, considering all NSK $e^+e^-$ data samples together with $\tau$ lineshape data. The KLOE data set does not seem to modify the picture unreasonably. This information is collected in Table \ref{T6}. \vspace{0.5cm} One may also conclude that the global fit method performs as could be expected and is a useful tool for examining the consistency of various data sets covering various processes related by physics. As a tool, it also permits improved estimates of the various hadronic contributions to the muon $g-2$. For this purpose, one should stress that better data, with better controlled systematics in all decay channels, and not only for the $\pi^+ \pi^-$ final state, would be valuable. Indeed, the physics correlations between the various final states in $e^+ e^-$ annihilations may conspire (better than presently) with each other to provide improved values for all contributions. Theoretical developments on the inclusion of scalar mesons and higher-mass vector mesons within VMD-like frameworks are also desirable, as this could increase the underlying physics correlations between the various final states accessible in $e^+e^-$ annihilations.
Finally, understanding the issue raised by \cite{BGPter} about the coupling of the $\phi$ meson to $K \overline{K}$ pairs may well be an important task for reliably estimating the $\phi$ region contribution to the muon $g-2$. \section*{Acknowledgments} \indent \indent We are indebted to H. Hayashii, Nara Women's University, Nara, Japan, for having provided us with the BELLE data sample and for information concerning the BELLE Collaboration fits. We also thank Z. Zhang, LAL Orsay, for providing us with some interesting unpublished results quoted in the text. Finally, M.B. gratefully acknowledges several useful discussions and mail exchanges with S. Eidelman, Budker Institute, Novosibirsk, Russia. \bibliographystyle{h-physrev}
\subsection{Proof of Asymptotic Characterization} \subsubsection{Convergence in Wasserstein Distance} \begin{comment} In this section, we first prove an intermediate result (Lemma \ref{lem:CGMT}) that leads to our asymptotic characterization in Theorem \ref{thm:asymp_char}. The main idea is to express limiting joint distribution of $(\sol,\,\sgl)$ via proximal operator $\tproxl$. Then combining with Proposition \ref{prop:prox}, we can directly prove Theorem \ref{thm:asymp_char}. \begin{lem} \label{lem:CGMT}Let $\left\{ \sgl^{(p)},\,\dmtx^{(p)},\noise^{(p)},\,\vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ be a converging sequence satisfying our main assumptions in Sec. \ref{subsec:DefandAssump}. Then for a pseudo-Lipschitz function $\psi$ , we have: \[ \frac{1}{p}\sum_{i=1}^{p}\psi(\soli_{i},\,\sgli_{i})\pconv\lim_{p\to\infty}\E\{\frac{1}{p}\sum_{i=1}^{p}\psi([\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]_{i},\sgli_{i})\} \] where $\equinoise\sim\mathcal{N}(0,\mI)$, $\tprox_{\tau\vlambda}(\vy)=\underset{x}{\text{argmin }}\frac{1}{2}\|\vy-\vx\|^{2}+\tau\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}$ with $[\tproxl(\vy)]_{i}$ denoting its $i$th coordinate and $(\sigma,\,\tau)$ is the solution of the following equations: \begin{align} \sigma^{2} & =\sigma_{w}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}\label{eq:sigmatau_eq1}\\ 1 & =\tau(1-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E[\text{div}(\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise))])\label{eq:sigmatau_eq2} \end{align} \end{lem} We can see the result of Lemma \ref{lem:CGMT} are very similar to that of Theorem \ref{thm:asymp_char}. The difference is that in Lemma \ref{lem:CGMT}, the asymptotic characterization is expressed in terms of the limiting coordinate-wise average of proximal vector, while in Theorem \ref{thm:asymp_char} this average is replaced by an equivalent scalar owning to Proposition \ref{prop:prox}. 
\end{comment} Theorem \ref{thm:asymp_char} essentially shows that the empirical measure $\mu_{p}(\h{\sgl},\sgl)$ converges to the joint distribution \begin{equation} \mu^{*}\bydef\left(\prox(B+\sigma^{*}Z;F_{y},F_{\tau^{*}\lambda}),B\right),\label{eq:mu_star} \end{equation} where $(\sigma^{*},\tau^{*})$ is the unique solution of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), $B\sim F_{\beta}$ and $Z\sim\calN(0,1)$. This convergence is quantified by the Wasserstein distance. For two probability measures $\mu$ and $\nu$ with bounded second moment, we define the Wasserstein distance as: \[ W_{2}(\mu,\nu)=\left(\inf_{\gamma\in\mathcal{C}(\mu,\nu)}\int\|x-y\|_{2}^{2}d\gamma(x,y)\right)^{1/2} \] where $\mathcal{C}(\mu,\nu)$ denotes the set of all couplings of $\mu$ and $\nu$. For any pseudo-Lipschitz function $\psi:\R^{2}\to\R$ and empirical measures $\left\{ \mu_{p}\right\} _{p\in\mathbb{N}}$ on $\R^{2}$, it holds that $\lim_{p\to\infty}\int\psi(x)d\mu_{p}(x)=\int\psi(x)d\mu^{*}(x)$ if and only if $W_{2}(\mu_{p},\mu^{*})\to0$ \cite{villani2008optimal}. Therefore, (\ref{eq:weak_convergence}) in Theorem \ref{thm:asymp_char} is equivalent to saying $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu^{*})\to0$. In Proposition \ref{prop:empi_Wdis_conv} below, we first show that $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*}^{(p)},\sgl))\pconv0$, where \begin{equation} \sgli_{*,i}\bydef\prox(\sgli_{i}+\sigma^{*}Z_{i};F_{y},F_{\tau^{*}\lambda}).\label{eq:sgl_star_i} \end{equation} Here $\prox(y;F_{y},F_{\lambda})$ is the limiting scalar function defined in Proposition \ref{prop:prox}, $(\sigma^{*},\tau^{*})$ is the solution of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), $Z_{i}\iid\calN(0,1)$, $Y\sim B+\sigma Z$ with $B\sim F_{\beta}$ and $\lambda\sim F_{\lambda}$. Combining this with standard results on the concentration of empirical measures, we can prove Theorem \ref{thm:asymp_char}.
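As an illustration in one dimension: for two empirical measures supported on equal-size scalar samples, the infimum over couplings is attained by the monotone (sorted) pairing, which gives a direct numerical recipe for $W_2$ (a sketch for intuition, not used in the proof):

```python
import numpy as np

def w2_empirical(x, y):
    """W2 distance between the empirical measures of two equal-size scalar samples.
    In one dimension the optimal coupling pairs the sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape
    return float(np.sqrt(np.mean((x - y) ** 2)))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(w2_empirical(x, x))        # identical samples: 0.0
print(w2_empirical(x, x + 0.5))  # a constant shift by c gives W2 = |c| (here ~0.5)
```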
\begin{comment} \[ \lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\h{\sgli}_{i},\sgli_{i})=\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\prox(\sgli_{i}+\sigma^{*}Z_{i};F_{y},F_{\tau^{*}\lambda}),\sgli_{i}) \] \end{comment} \begin{prop} \label{prop:empi_Wdis_conv}$\forall\epsilon\in(0,1/2)$, \[ \P\left(W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq\epsilon\right)\leq\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $\sgl_{*}$ is defined in (\ref{eq:sgl_star_i}). \end{prop} In the following sections, we will present the proof of Proposition \ref{prop:empi_Wdis_conv}. It follows the proof in \cite{thrampoulidis2018precise,miolane2018distribution,thrampoulidis2015regularized}. \subsubsection{Convex Gaussian Min-max Theorem} The proof of Proposition \ref{prop:empi_Wdis_conv} hinges on the Convex Gaussian Min-max Theorem (CGMT). For completeness, we summarize the main idea here. In \cite{thrampoulidis2018precise}, using CGMT framework, the asymptotic MSE is derived for the regularized M-estimator: \begin{equation} \hat{\sgl}=\argmin{\vx}\,\mathcal{L}(\vy-\dmtx\vx)+f(\vx)\label{eq:Mesti} \end{equation} where $\mathcal{L}(\cdot)$ is the loss function and $f(\cdot)$ is the regularizer. Here in SLOPE case, $\mathcal{L}(\vx)=\frac{1}{2}\|\vx\|^{2}$ and $f(\vx)=\regu(\vx)$. The CGMT studies a min-max primary optimization (PO) problem of the following form: \begin{equation} \Phi(\mG)=\min_{\vv\in\comset_{\vv}}\max_{\vu\in\comset_{\vu}}\vu^{\T}\mG\vv+\psi(\vv,\vu),\label{eq:PO} \end{equation} where $\comset_{\vv}\subset\R^{p}$, $\comset_{\vu}\subset\R^{n}$ are two compact sets, $\psi:\comset_{\vv}\times\comset_{\vu}\mapsto\R$ is a continuous convex-concave function w.r.t. $(\vv,\vu)$ and $\mG\iid\calN(0,1)$. 
Problem (\ref{eq:PO}) can be associated with the following auxiliary optimization (AO) problem, which is usually easier to analyze: \begin{equation} \phi(\vg,\vh)=\min_{\vv\in\comset_{\vv}}\max_{\vu\in\comset_{\vu}}\|\vv\|_{2}\vg^{\T}\vu+\|\vu\|_{2}\vh^{\T}\vv+\psi(\vv,\vu),\label{eq:AO} \end{equation} where $\vg\sim\calN(\mathbf{0},\mI_{n})$ and $\vh\sim\calN(\mathbf{0},\mI_{p})$. Following the CGMT framework, we first write (\ref{eq:slope_opt}) in the form of (\ref{eq:PO}). Letting $\vv=\vx-\sgl$ in (\ref{eq:Mesti}), we can rewrite the optimization problem in (\ref{eq:slope_opt}) as: \begin{equation} \h{\vv}=\argmin{\vv\in\R^{p}}C_{\vlambda}(\vv)\label{eq:slope_opt1} \end{equation} where \begin{equation} C_{\vlambda}(\vv)\bydef\frac{1}{2}\|\dmtx\vv-\noise\|^{2}+\regu(\vv+\vbeta).\label{eq:C_lambda} \end{equation} and we have $\h{\vv}=\h{\sgl}-\sgl$, which is the error vector. In (\ref{eq:slope_opt1}), the feasibility set of $\vv$ is unbounded, while in (\ref{eq:PO}) and (\ref{eq:AO}), it is bounded. To facilitate the analysis later, we will first enforce an artificial boundedness on $\vv$ as in \cite{thrampoulidis2018precise}. Specifically, we will consider a bounded version of (\ref{eq:slope_opt1}) as follows: \begin{equation} \h{\vv}^{B}=\argmin{\vv\in\comset_{\vv}(K)}C_{\vlambda}(\vv),\label{eq:slope_opt1_constrained} \end{equation} where \[ \comset_{\vv}(K)\bydef\left\{ \vv\Big|\|\vv\|_{2}/\sqrt{n}\leq K\right\} \] with $K>0$ a sufficiently large constant. It can be proved, using the convexity of (\ref{eq:C_lambda}), that if $\|\h{\vv}^{B}\|/\sqrt{n}\pconv\alpha_{*}<K$, we have $\|\h{\vv}\|/\sqrt{n}\pconv\alpha_{*}<K$. Therefore, by choosing a large enough $K$, we can work with the constrained problem (\ref{eq:slope_opt1_constrained}) to derive the asymptotic results.
After introducing an auxiliary variable $\vu$ and normalizing the cost function by $1/n$, problem (\ref{eq:slope_opt1_constrained}) becomes: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\max_{\vu}\frac{1}{n}\left[\frac{\vu^{\T}}{\sqrt{n}}\left(\sqrt{n}\dmtx\right)\vv-\vu^{\T}\noise-\frac{\|\vu\|^{2}}{2}+\regu(\vv+\vbeta)\right],\label{eq:slope_PO} \end{equation} which satisfies the form of (\ref{eq:PO}). The corresponding AO is: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\max_{\vu}\frac{1}{n}\left[-\frac{\|\vu\|}{\sqrt{n}}\vh^{\T}\vv-\frac{\|\vv\|}{\sqrt{n}}\vg^{\T}\vu-\vu^{\T}\noise-\frac{\|\vu\|^{2}}{2}+\regu(\vv+\vbeta)\right],\label{eq:slope_AO} \end{equation} where $\vg\sim\calN(\mathbf{0},\mI_{n})$ and $\vh\sim\calN(\mathbf{0},\mI_{p})$. Taking $\theta\bydef\frac{\|\vu\|}{\sqrt{n}}$, (\ref{eq:slope_AO}) is equivalent to: \[ \min_{\vv\in\comset_{\vv}(K)}\max_{\theta\geq0}l(\vv,\theta), \] where \begin{align*} l(\vv,\theta) & \bydef\theta\left(\left\Vert \frac{\|\vv\|}{n}\vg+\frac{\vw}{\sqrt{n}}\right\Vert -\frac{\vh^{\T}\vv}{n}\right)-\frac{1}{2}\theta^{2}+\frac{\regu(\vv+\vbeta)}{n}\\ & =\theta\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}}-\frac{\vh^{\T}\vv}{n}\right)-\frac{1}{2}\theta^{2}+\frac{\regu(\vv+\vbeta)}{n} \end{align*} and after optimizing over $\theta$, we get the optimization problem: \[ \min_{\vv\in\comset_{\vv}(K)}L(\vv), \] where \begin{equation} L(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}}-\frac{\vh^{\T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n}.\label{eq:Lv} \end{equation} The following result is from \cite{miolane2018distribution,thrampoulidis2015regularized}: \begin{prop} \label{prop:CGMT_C_L}Let $D\subseteq\comset_{\vv}(K)$ be a convex closed set, then $\forall t\in\R$, we have: \begin{equation} \P(\min_{\vv\in D}C_{\vlambda}(\vv)\leq 
t)\leq2\P(\min_{\vv\in D}L(\vv)\leq t)\label{eq:CGMT_C_L1} \end{equation} and \begin{equation} \P(\min_{\vv\in D}C_{\vlambda}(\vv)\geq t)\leq2\P(\min_{\vv\in D}L(\vv)\geq t).\label{eq:CGMT_C_L2} \end{equation} \end{prop} In \cite{miolane2018distribution,thrampoulidis2015regularized}, the authors consider Gaussian noise $\noise\sim\calN(\mathbf{0},\mI_{n})$ and $L(\vv)$ can be written equivalently as a convex function. Here, $L(\vv)$ in (\ref{eq:Lv}) is not a convex function and the previous results cannot be applied directly. Note that the non-convexity of (\ref{eq:Lv}) stems from the term $\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}$. For large enough $n$, $\frac{\vg^{\T}\noise}{n}\approx0$, i.e., the non-convex term is asymptotically negligible. In addition, $\frac{\|\vg\|^{2}}{n}\approx1$ and $\frac{\|\noise\|^{2}}{n}\approx\sigma_{\noisei}^{2}$, when $n,p\to\infty$. Therefore, we can instead study the function \begin{comment} \[ \widetilde{L}(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}}-\frac{\vh^{T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n}, \] \end{comment} \begin{equation} \widetilde{L}(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}-\frac{\vh^{\T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n},\label{eq:tilde_L_v} \end{equation} The next lemma shows that for large enough $p$, $L(\vv)$ and $\widetilde{L}(\vv)$ are close to each other for $\vv\in\comset_{\vv}(K)$. \begin{lem} \label{lem:perturb_Lv}$\forall\epsilon\in(0,1/2)$, we have \[ \sup_{\vv\in\comset_{\vv}(K)}|L(\vv)-\widetilde{L}(\vv)|<c\epsilon \] with probability $1-\delta^{(p)}$, where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ is a constant that does not depend on $p$. \end{lem} \begin{IEEEproof} Define $g(\Delta x;x,y)\bydef(\sqrt{x+\Delta x}-y)_{+}^{2}-(\sqrt{x}-y)_{+}^{2}.$ It is not hard to show $|g(\Delta x;x,y)|\leq|\Delta x|\left(1+\frac{|y|}{\sqrt{x}}\right)$.
Then letting $x=\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}$, $\Delta x=\frac{\|\vv\|^{2}}{n}\left(\frac{\|\vg\|^{2}}{n}-1\right)+\left(\frac{\|\noise\|^{2}}{n}-\sigma_{\noisei}^{2}\right)+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}$ and $y=\frac{\vh^{\T}\vv}{n}$ in $g(\Delta x;x,y)$, we get \begin{equation} |L(\vv)-\widetilde{L}(\vv)|\leq|\Delta x|\left(1+\left|\frac{\vh^{\T}\vv}{n}\right|/\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}}\right).\label{eq:diff_Lv} \end{equation} $\forall\epsilon\in(0,1/2)$, under the event: \begin{equation} \left\{ \frac{\|\vg\|^{2}}{n},\frac{\delta\|\vh\|^{2}}{n},\frac{\|\noise\|^{2}}{\sigma_{\noisei}^{2}n}\in\left(1-\veps,1+\veps\right)\right\} \bigcap\left\{ \left|\frac{\vg^{\T}\noise}{n}\right|<\epsilon\right\} ,\label{eq:event_1} \end{equation} we can see from (\ref{eq:diff_Lv}) that $\exists c>0$ s.t. \[ \sup_{\vv\in\comset_{\vv}(K)}|L(\vv)-\widetilde{L}(\vv)|<c\epsilon. \] Since $\vg\sim\calN(\mathbf{0},\mI_{n})$, $\vh\sim\calN(\mathbf{0},\mI_{p})$ and $\left\{ \noise^{(p)}\right\} $ is a converging sequence, (\ref{eq:event_1}) occurs with probability $1-\delta^{(p)}$ and $\delta^{(p)}\to0$ as $p\to\infty$.
Combining this with Proposition \ref{prop:CGMT_C_L}, we obtain the following: \begin{prop} \label{prop:CGMT_C_tildeL}Let $D\subseteq\comset_{\vv}(K)$ be a convex closed set, then $\forall t\in\R$ and $\epsilon\in(0,1/2)$, we have \begin{comment} \[ (1-\delta^{(p)})\P(\min_{\vv\in D}L(\vv)\leq t)\leq\P(\min_{\vv\in D}\widetilde{L}(\vv)\leq t+\epsilon) \] \end{comment} \[ \P(\min_{\vv\in D}C_{\vlambda}(\vv)\leq t)\leq c\P(\min_{\vv\in D}\widetilde{L}(\vv)\leq t+c\epsilon)+\delta^{(p)} \] and \[ \P(\min_{\vv\in D}C_{\vlambda}(\vv)\geq t)\leq c\P(\min_{\vv\in D}\widetilde{L}(\vv)\geq t-c\epsilon)+\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{prop} For any finite $p$, define $\vv_{*}\in\R^{p}$: \begin{equation} v_{*,i}=\sgli_{*,i}-\sgli_{i},\label{eq:v_star_p} \end{equation} where $\sgli_{*,i}$ is defined in (\ref{eq:sgl_star_i}) and $\vh$ is the same Gaussian vector as in (\ref{eq:tilde_L_v}). The next result is the counterpart of Theorem B.1 in \cite{miolane2018distribution}. It shows that \[ \h{\vv}_{\widetilde{L}}\bydef\argmin{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv), \] i.e., the minimizer of $\widetilde{L}(\vv)$ over $\comset_{\vv}(K)$, converges to $\vv_{*}$ with high probability in the $L^{2}$ sense. \begin{prop} \label{prop:local_stability}$\forall K>0$ and $\epsilon\in(0,1/2)$, we have for $\vv_{*}$ defined in (\ref{eq:v_star_p}), \[ \P\left(\exists\vv\in\comset_{\vv}(K)\text{ s.t. }\frac{1}{p}\|\vv-\vv_{*}\|^{2}>\epsilon\text{ and }\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{prop} Proposition \ref{prop:local_stability} can be proved in the same way as Theorem B.1 of \cite{miolane2018distribution}.
There are two key steps: (1) show that $\widetilde{L}(\vv)$ is strongly convex in a neighborhood of $\vv_{*}$, and (2) show $\widetilde{L}(\vv_{*})\approx\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)$. Moreover, both events happen with high probability as $p\to\infty$. These are summarized in Lemmas \ref{lem:strongly_convex} and \ref{lem:closeness_v_vhat} below. \begin{lem} \label{lem:strongly_convex}$\exists r,\gamma>0$ s.t. $\widetilde{L}(\vv)$ is $\frac{\gamma}{n}$-strongly convex on $\left\{ \vv\in\R^{p}\Big|\|\vv-\vv_{*}\|\leq\sqrt{n}r\right\} $ with probability greater than $1-\delta^{(p)}$, where $\delta^{(p)}\to0$ as $p\to\infty$. \end{lem} \begin{IEEEproof} The proof follows the same steps as in \cite{miolane2018distribution}, and we omit the details here. \end{IEEEproof} \begin{lem} \label{lem:closeness_v_vhat}$\forall K>0$, $\epsilon\in(0,1/2)$ and $\vv_{*}$ defined in (\ref{eq:v_star_p}), we have: \[ \P\left(\widetilde{L}(\vv_{*})\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\geq1-\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{lem} \begin{IEEEproof} The proof follows that of Proposition B.2 in \cite{miolane2018distribution}.
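The proof below repeatedly uses the elementary variational rewriting of the square root; completing the square makes the minimizer explicit:

```latex
\sqrt{q}=\min_{\sigma>0}\left\{\frac{\sigma}{2}+\frac{q}{2\sigma}\right\},
\qquad
\frac{\sigma}{2}+\frac{q}{2\sigma}-\sqrt{q}=\frac{\left(\sigma-\sqrt{q}\right)^{2}}{2\sigma}\geq0,
```

so the minimum is attained at $\sigma=\sqrt{q}$. With $q=\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}$, the minimizer lies in $[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]$ for every $\vv\in\comset_{\vv}(K)$, which is why the minimization over $\sigma$ can be restricted to that interval.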
Define \begin{align*} \t l(\vv,\theta) & =\theta\left(\sqrt{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}-\frac{\vh^{\T}\vv}{n}\right)-\frac{\theta^{2}}{2}+\frac{\regu(\vv+\vbeta)}{n} \end{align*} and we have \begin{equation} \widetilde{L}(\vv)=\max_{\theta\geq0}\t l(\vv,\theta).\label{eq:tilde_L_l} \end{equation} For $\vv\in\comset_{\vv}(K)$, $\t l(\vv,\theta)$ can be written as: \begin{align*} \t l(\vv,\theta) & =\theta\left(\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left\{ \frac{\sigma}{2}+\frac{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}{2\sigma}\right\} -\frac{\vh^{\T}\vv}{n}\right)-\frac{\theta^{2}}{2}+\frac{\regu(\vv+\vbeta)}{n}\\ & =\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left\{ \frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{G(\vv,\sigma,\theta;\sgl,\vh)}{n}\right\} \end{align*} where \begin{equation} G(\vv,\sigma,\theta;\sgl,\vh)=\frac{\theta\|\vv\|^{2}}{2\sigma}-\theta\vh^{\T}\vv+\regu(\vv+\vbeta)\label{eq:G_v_sgl_h} \end{equation} Then $\forall\theta\geq0$ we have: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\t l(\vv,\theta)=\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}F(\sigma,\vh;\theta)\label{eq:tilde_l_F} \end{equation} where $F(\sigma,\vh;\theta)$ is defined as: \[ F(\sigma,\vh;\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}\min_{\vv\in\comset_{\vv}(K)}G(\vv,\sigma,\theta;\sgl,\vh) \] Similar to \cite{miolane2018distribution}, it can be shown for fixed $\sgl$ and $\theta$, $F(\sigma,\vh;\theta)$ converges to its mean: \[ \E_{\vh}F(\sigma,\vh;\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}\E\min_{\vv\in\comset_{\vv}(K)}G(\vv,\sigma,\theta;\sgl,\vh) \] uniformly over $[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]$ in probability, i.e., \begin{equation} 
\P\left(\sup_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}|F(\sigma,\vh;\theta)-\E_{\vh}F(\sigma,\vh;\theta)|>c\epsilon\right)<\delta^{(p)},\label{eq:unif_conv_F} \end{equation} where $c>0$ and $\delta^{(p)}\to0$ as $p\to\infty$. Then we define \begin{align*} \Psi^{(p)}(\sigma,\theta) & =\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}G(\sigma,\theta;\sgl,\vh) \end{align*} and \begin{equation} \Psi(\sigma,\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+G(\sigma,\theta)\label{eq:Psi_sigma_theta} \end{equation} where \begin{align*} G(\sigma,\theta;\sgl,\vh) & =\E\min_{\vv\in\R^{p}}G(\vv,\sigma,\theta;\sgl,\vh)\\ & =\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})-\frac{\theta\sigma}{2}p \end{align*} and \[ G(\sigma,\theta)=\frac{1}{\delta}\left[\lim_{p\to\infty}\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})-\frac{\theta\sigma}{2}\right], \] with $e_{f}(\vx;\tau)$ defined later in (\ref{eq:Moreau_f}). Here, $G(\sigma,\theta)$ can be viewed as the limit of $\frac{1}{n}G(\sigma,\theta;\sgl,\vh)$ as $p\to\infty$. The existence of the limit $\lim_{p\to\infty}\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})$ will be verified later in Lemma \ref{lem:conv_F}. Also let \begin{eqnarray} \h{\vv}_{\tprox} & = & \argmin{\vv\in\R^{p}}G(\vv,\sigma,\theta;\sgl,\vh)\nonumber \\ & = & \tprox_{\frac{\sigma\regu}{\theta}}(\sgl+\sigma\vh)-\sgl\label{eq:v_hat_p} \end{eqnarray} where $G(\vv,\sigma,\theta;\sgl,\vh)$ is defined in (\ref{eq:G_v_sgl_h}).
Clearly, we have \begin{equation} \E_{\vh}F(\sigma,\vh;\theta)\geq\Psi^{(p)}(\sigma,\theta)\label{eq:EF_Psi} \end{equation} From the uniform convergence of $\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})$ proved in Lemma \ref{lem:F_c_tau_uniform_conv} later, we have $\forall\epsilon\in(0,1/2)$: \begin{equation} \P\left(\sup_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left|\Psi(\sigma,\theta)-\Psi^{(p)}(\sigma,\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:unif_conv_Psi} \end{equation} From (\ref{eq:unif_conv_F}) and (\ref{eq:unif_conv_Psi}), we know $\forall\epsilon\in(0,1/2)$ and $\theta\geq0$, \begin{equation} \P\left(\left|\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}F(\sigma,\vh;\theta)-\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\E_{\vh}F(\sigma,\vh;\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:min_F_conv} \end{equation} and \begin{equation} \P\left(\left|\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\Psi^{(p)}(\sigma,\theta)-\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\Psi(\sigma,\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:min_Psi_conv} \end{equation} Letting $\theta=\theta^{*}$ and combining (\ref{eq:tilde_L_l})-(\ref{eq:EF_Psi}), (\ref{eq:min_F_conv}) and (\ref{eq:min_Psi_conv}), we have $\forall\epsilon\in(0,1/2)$: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)\geq\Psi(\sigma^{*},\theta^{*})-c\epsilon\label{eq:L_tilde_opt_conv} \end{equation} with probability $1-c\delta^{(p)}$, where $c>0$ is a constant that does not depend on $p$ and $(\sigma^{*},\theta^{*})$ is the optimal solution of $\min_{\sigma\geq0}\max_{\theta\geq0}\Psi(\sigma,\theta)$. From Lemma \ref{lem:F_c_tau_uniform_conv} in the next section, we know $\Psi(\sigma,\theta)$ is continuously differentiable. Besides, it is not hard to show $\Psi(\sigma,\theta)$ is convex-concave w.r.t. 
$(\sigma,\theta)$ based on Lemma \ref{lem:F_c_tau_uniform_conv}. Moreover, we can also show that $(\sigma^{*},\theta^{*})$ is in the interior of its domain. To see this, we first show that $\forall\sigma_{0}\geq0$, $\theta^{*}(\sigma_{0})\in(0,\infty)$, where $\theta^{*}(\sigma_{0})=\argmax{\theta\geq0}\Psi(\sigma_{0},\theta)$. We can show \[ \frac{\partial G(\sigma_{0},\theta)}{\partial\theta}=\lim_{n\to\infty}\frac{\|\h{\vv}_{\tprox}\|^{2}}{2n\sigma_{0}}-\frac{\vh^{\T}\h{\vv}_{\tprox}}{n} \] and from (\ref{eq:v_hat_p}), we know $\lim_{\theta\to0}\frac{\partial G(\sigma_{0},\theta)}{\partial\theta}=\frac{\E\sgli^{2}}{2\sigma_{0}\delta}>0$, so $\lim_{\theta\to0}\frac{\partial\Psi(\sigma_{0},\theta)}{\partial\theta}>0$, $\forall\sigma_{0}\geq0$. In the meantime, we have $\lim_{\theta\to+\infty}\Psi(\sigma_{0},\theta)=-\infty$, so $\theta^{*}(\sigma_{0})\in(0,\infty)$ given that $G(\sigma_{0},\theta)$ is concave w.r.t. $\theta$. Similarly, we can show that $\forall\theta_{0}\geq0$, $\sigma^{*}(\theta_{0})\in(0,\infty)$. Therefore, $(\sigma^{*},\theta^{*})$ can be obtained via the first-order optimality condition. Taking derivatives of $\Psi(\sigma,\theta)$, we can get: \begin{equation} \begin{cases} \frac{\partial\Psi}{\partial\sigma}=\frac{\theta}{2\sigma^{2}}\left[\sigma^{2}-\left(\sigma_{\noisei}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\|\h{\vv}_{\tprox}\|^{2}\right)\right]\\ \frac{\partial\Psi}{\partial\theta}=\sigma-\theta-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\vh^{\T}\h{\vv}_{\tprox} \end{cases}\label{eq:fixed_equation_highdim} \end{equation} where $\h{\vv}_{\tprox}$ is defined in (\ref{eq:v_hat_p}). 
Setting $\frac{\partial\Psi}{\partial\sigma}=\frac{\partial\Psi}{\partial\theta}=0$, using Lemma \ref{lem:conv_Stein}, and writing $\tau\bydef\sigma/\theta$, we obtain: \begin{equation} \begin{cases} \sigma^{2}=\sigma_{\noisei}^{2}+\frac{1}{\delta}\E[\eta(B+\sigma\equinoisei;F_{y},F_{\sigma\lambda/\theta})-B]^{2}\\ 1=\tau\left(1-\frac{1}{\delta}\E[\eta'(B+\sigma\equinoisei;F_{y},F_{\sigma\lambda/\theta})]\right) \end{cases}\label{eq:fixed_equation2} \end{equation} which recovers equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}). On the other hand, similar to Proposition F.1 of \cite{miolane2018distribution}, it can be shown that \begin{equation} \P\left(\left|\widetilde{L}(\vv_{*})-\Psi(\sigma^{*},\theta^{*})\right|>c\epsilon\right)<\delta^{(p)}\label{eq:L_tilde_v_star_conv} \end{equation} with $\delta^{(p)}\to0$ as $p\to\infty$. Then combining (\ref{eq:L_tilde_opt_conv}) and (\ref{eq:L_tilde_v_star_conv}), we conclude that \[ \min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)\geq\widetilde{L}(\vv_{*})-c\epsilon \] with probability $1-\delta^{(p)}$. Here $c>0$ does not depend on $p$. \end{IEEEproof} We now turn to the convergence of the empirical measure in Wasserstein distance. Based on Proposition \ref{prop:local_stability}, we can prove that the Wasserstein distance between $\mu_{p}(\vv_{*},\sgl)$ and $\mu_{p}(\h{\vv}_{\widetilde{L}},\sgl)$ converges to 0 in probability, as stated in the following lemma: \begin{lem} \label{lem:L_Wdistance}$\forall K>0$ and $\epsilon\in(0,1/2)$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D_{\epsilon}}\widetilde{L}(\vv)\leq\min_{\vv\in S_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $D_{\epsilon}\bydef\left\{ \vv\in S_{\vv}(K)\Big|W_{2}(\mu_{p}(\vv,\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq\epsilon\right\} $ and $\delta^{(p)}\to0$ as $p\to\infty$. \end{lem} \begin{IEEEproof} The proof follows Lemma C.1 in \cite{miolane2018distribution}.
\end{IEEEproof} On the other hand, combining Propositions \ref{prop:CGMT_C_tildeL} and \ref{prop:local_stability}, we have: \begin{lem} \label{lem:CGMT_C_tildeL_2}$\forall K>0$, $\epsilon\in(0,1/2)$ and closed convex set $D\subseteq\comset_{\vv}(K)$, we have: \[ \P\left(\min_{\vv\in D}C_{\vlambda}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}C_{\vlambda}(\vv)+\epsilon\right)\leq c\P\left(\min_{\vv\in D}\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)+\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{lem} \begin{IEEEproof} The proof follows Proposition C.1 in \cite{miolane2018distribution}. \end{IEEEproof} Combining Lemmas \ref{lem:L_Wdistance} and \ref{lem:CGMT_C_tildeL_2}, we can prove the following result: \begin{prop} \label{prop:C_Wdistance}$\forall K>0$ and $\epsilon\in(0,1/2)$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D_{\epsilon}}C_{\vlambda}(\vv)\leq\min_{\vv\in S_{\vv}(K)}C_{\vlambda}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $D_{\epsilon}\bydef\left\{ \vv\in S_{\vv}(K)\Big|W_{2}(\mu_{p}(\vv,\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq\epsilon\right\} $ and $\delta^{(p)}\to0$ as $p\to\infty$. \end{prop} Now we are ready to prove Proposition \ref{prop:empi_Wdis_conv} and Theorem \ref{thm:asymp_char}. \subsubsection{Proof of Proposition \ref{prop:empi_Wdis_conv}} From Proposition \ref{prop:C_Wdistance}, we conclude that $\P\left(W_{2}(\mu_{p}(\h{\vv}^{B},\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)}$. Let $\h{\sgl}^{B}\bydef\h{\vv}^{B}+\sgl$. Since $\sgl_{*}=\vv_{*}+\sgl$, we have \[ W_{2}(\mu_{p}(\h{\vv}^{B},\sgl),\mu_{p}(\vv_{*},\sgl))=W_{2}(\mu_{p}(\h{\sgl}^{B},\sgl),\mu_{p}(\sgl_{*},\sgl)). \] Therefore, $\forall\epsilon\in(0,1/2)$, \begin{equation} \P\left(W_{2}(\mu_{p}(\h{\sgl}^{B},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)}\label{eq:Wdist_conv_bounded} \end{equation} where $\delta^{(p)}\to0$ as $p\to\infty$. Then we show that, for large enough $K$, $\frac{\|\h{\vv}^{B}\|}{\sqrt{p}},\frac{\|\h{\sgl}^{B}\|}{\sqrt{p}}<K$ with probability approaching 1 as $p\to\infty$. To see this, let $D=\left\{ \vv\in\R^{p}\Big|\|\vv-\vv_{*}\|>\sqrt{p}\epsilon\right\} $ in Lemma \ref{lem:CGMT_C_tildeL_2}. From Proposition \ref{prop:local_stability}, we know $\forall\epsilon>0$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D}\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)}. \] Therefore, combining with Lemma \ref{lem:CGMT_C_tildeL_2}, we know $\|\h{\vv}^{B}-\vv_{*}\|\leq\sqrt{p}\epsilon$ with probability approaching 1.
Meanwhile, it is not hard to show that $\frac{\|\vv_{*}\|}{\sqrt{p}}\to C$, where $C>0$ does not depend on $K$. Therefore, $\frac{\|\h{\vv}^{B}\|}{\sqrt{p}}<2C$ and thus $\frac{\|\h{\sgl}^{B}\|}{\sqrt{p}}<2(C+\sqrt{\E\sgli^{2}})$ with probability approaching 1. Therefore, by choosing a large enough $K>0$, we can ensure $\h{\sgl}^{B}=\h{\sgl}$ with probability approaching 1. From (\ref{eq:Wdist_conv_bounded}), we obtain that \[ \P\left(W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)} \] with $\delta^{(p)}\to0$ as $p\to\infty$, which indicates that $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))\pconv0$. \subsubsection{Proof of Theorem \ref{thm:asymp_char}} As shown before, it suffices to prove $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu^{*})\to0$, where $\mu^{*}$ is given in (\ref{eq:mu_star}). We have shown $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))\pconv0$ in Proposition \ref{prop:empi_Wdis_conv}. Meanwhile, it is not hard to verify that $W_{2}(\mu_{p}(\sgl_{*},\sgl),\mu^{*})\pconv0$ following the proof of Lemma 4 in \cite{bayati2011dynamics}. Combining these two results, we prove (\ref{eq:weak_convergence}), since $W_{2}$ is a metric. Finally, we show that the solution to (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) is unique. From the proof of Lemma \ref{lem:closeness_v_vhat}, equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) correspond to the first-order optimality conditions of $\min_{\sigma\geq0}\max_{\theta\geq0}\Psi(\sigma,\theta)$, and a stationary point $(\sigma^{*},\theta^{*})$ always exists. To prove uniqueness, it suffices to show that (1) $\forall\sigma_{0}\geq0$, $\Psi(\sigma_{0},\theta)$ is strictly concave w.r.t. $\theta$, and (2) $\Psi(\sigma,\theta^{*}(\sigma))$ is strictly convex w.r.t. $\sigma$. Both can be directly verified from the form of $\Psi(\sigma,\theta)$ given in (\ref{eq:Psi_sigma_theta}).
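For intuition, when the regularization sequence is constant (the LASSO special case), the limiting proximal function $\eta$ reduces to entrywise soft thresholding, and the scalar fixed-point conditions (\ref{eq:sigmatau_1})--(\ref{eq:sigmatau_2}) can be solved numerically by a plain Monte Carlo iteration. The pure-Python sketch below is illustrative only: the Bernoulli--Gaussian prior, the parameter values, and the fixed-point iteration scheme are our own choices for the demo, not part of the proof.

```python
import math
import random

def soft(y, t):
    """Soft thresholding: the limiting prox for a constant lambda-sequence."""
    return math.copysign(max(abs(y) - t, 0.0), y)

def lasso_state_evolution(delta=0.8, rho=0.2, lam=1.0, sigma_w=0.5,
                          n_mc=20000, n_iter=40, seed=0):
    """Monte Carlo fixed-point iteration for the LASSO specialization:
         sigma^2 = sigma_w^2 + (1/delta) * E[(soft(B + sigma*Z; tau*lam) - B)^2]
         1       = tau * (1 - (1/delta) * P(|B + sigma*Z| > tau*lam))
       where the derivative of soft thresholding is 1{|B + sigma*Z| > tau*lam}.
       B follows a hypothetical Bernoulli(rho)-Gaussian prior."""
    rng = random.Random(seed)
    B = [rng.gauss(0.0, 1.0) if rng.random() < rho else 0.0 for _ in range(n_mc)]
    Z = [rng.gauss(0.0, 1.0) for _ in range(n_mc)]
    sigma, tau = 1.0, 1.0
    for _ in range(n_iter):
        thr = tau * lam
        # E[(eta(B + sigma*Z) - B)^2], estimated over the Monte Carlo sample
        mse = sum((soft(b + sigma * z, thr) - b) ** 2
                  for b, z in zip(B, Z)) / n_mc
        # E[eta'(B + sigma*Z)] = fraction of samples above the threshold
        act = sum(1.0 for b, z in zip(B, Z)
                  if abs(b + sigma * z) > thr) / n_mc
        sigma = math.sqrt(sigma_w ** 2 + mse / delta)
        tau = 1.0 / max(1.0 - act / delta, 1e-8)  # from the second equation
    return sigma, tau
```

Here $\sigma$ plays the role of the effective noise level of the scalar observation $B+\sigma Z$. Whether this plain iteration converges depends on $(\delta,\rho,\lambda)$, so it should be read as a sanity-check tool rather than a general solver for the fixed-point system.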
\subsection{Moreau Envelope of $\protect\regu$} \begin{defn} Let $f(\vx)$: $\R^{p}\to\R$ be a proper closed convex function. For $\tau>0$, the Moreau envelope of $f(\vx)$ is defined as: \begin{equation} e_{f}(\vx;\tau)\bydef\min_{\vv}\frac{1}{2\tau}\|\vx-\vv\|^{2}+f(\vv)\label{eq:Moreau_f} \end{equation} and the corresponding optimal solution is exactly the proximal operator of $f$. \end{defn} We first study the convergence of $e_{\regu}(\vx;\tau)$ as $p\to\infty$. \begin{lem} \label{lem:conv_F}Let $\vh\sim\mathcal{N}(0,\,\mI_{p})$ and let $\left\{ \sgl\right\} _{p\in\mathbb{N}}$, $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$ be converging sequences. Then for all $c\in\R$ and $\tau>0$, we have: \begin{equation} \frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} \pconv F(c,\tau),\label{eq:coord_aver1} \end{equation} where $\regu(\vx)=\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}$. Besides, \begin{equation} F(c,\tau)=\lim_{p\to\infty}F_{p}(c,\tau)\label{eq:F_ctau} \end{equation} where \begin{equation} F_{p}(c,\tau)=\frac{1}{p}\E\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} .\label{eq:Fp_ctau} \end{equation} \end{lem} \begin{IEEEproof} We first prove the convergence of $\frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} $ in probability. Writing out $e_{\regu}(\vx;\tau)$, we have: \begin{equation} e_{\regu}(\vx;\tau)=\frac{1}{2\tau}\|\vx-\tprox_{\tau\vlambda}(\vx)\|^{2}+\regu(\tprox_{\tau\vlambda}(\vx)).\label{eq:Moreau_regu} \end{equation} For $\vx=c\vh+\sgl$, we need to show that $\frac{e_{\regu}(\vx;\tau)}{p}$ converges in probability. This can be done in two steps: (I) replace $\tprox_{\tau\vlambda}(\vx)$ in (\ref{eq:Moreau_regu}) by $\eta(\vx)\bydef\eta(\vx;F_{y},F_{\tau\lambda})$, where $\eta$ is obtained in Proposition \ref{prop:prox}. Clearly, $\eta(\vx)$ is a converging sequence almost surely.
Then prove that \begin{equation} \hat{e}_{\regu}(\vx;\tau)\bydef\frac{1}{2\tau}\|\vx-\eta(\vx)\|^{2}+\regu(\eta(\vx))\label{eq:Moreau_f1} \end{equation} converges in probability; (II) show that $\frac{e_{\regu}(\vx;\tau)-\hat{e}_{\regu}(\vx;\tau)}{p}\pconv0$. To prove the first step, note that the first summand $\frac{1}{2\tau}\sum_{i=1}^{p}(x_{i}-\eta(x_{i};F_{y},F_{\tau\lambda}))^{2}$ in (\ref{eq:Moreau_f1}) is the empirical average of a converging sequence with independent entries $x_{i}$, so $\frac{\|\vx-\eta(\vx;F_{y},F_{\tau\lambda})\|^{2}}{2\tau p}$ converges in probability by the weak law of large numbers (WLLN). For the second summand $\regu(\eta(\vx))=\sum_{i=1}^{p}\lambda_{i}|\eta(\vx)|_{(i)}$, the terms $\lambda_{i}|\eta|_{(i)}$ are not independent, so we cannot directly prove convergence in probability. Instead, we replace $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$ and $\left\{ \eta(\vx)\right\} _{p\in\mathbb{N}}$ by the corresponding regular converging sequences $\left\{ \vlambda_{r}\right\} _{p\in\mathbb{N}}$ and $\left\{ \eta_{r}(\vx)\right\} _{p\in\mathbb{N}}$. It is not hard to show that $\frac{\|\vlambda-\vlambda_{r}\|^{2}}{p},\frac{\|\eta(\vx)-\eta_{r}(\vx)\|^{2}}{p}\pconv0$ and that $\frac{\sregu_{\vlambda_{r}}(\eta_{r}(\vx))}{p}$ converges in probability. Therefore, we have: \begin{align*} \frac{\left|\regu(\eta(\vx))-\sregu_{\vlambda_{r}}(\eta_{r}(\vx))\right|}{p} & \leq\frac{\sum_{i=1}^{p}\lambda_{i}\left||\eta(\vx)|_{(i)}-|\eta_{r}(\vx)|_{(i)}\right|}{p}\\ & \quad+\frac{\sum_{i=1}^{p}\left|\lambda_{i}-\lambda_{r,i}\right||\eta_{r}(\vx)|_{(i)}}{p}\\ & \leq\sqrt{\frac{\sum_{i=1}^{p}\lambda_{i}^{2}}{p}}\sqrt{\frac{\|\eta(\vx)-\eta_{r}(\vx)\|^{2}}{p}}\\ & \quad+\sqrt{\frac{\sum_{i=1}^{p}\eta_{r,i}^{2}(\vx)}{p}}\sqrt{\frac{\|\vlambda-\vlambda_{r}\|^{2}}{p}}, \end{align*} which indicates that $\frac{\left|\regu(\eta(\vx))-\sregu_{\vlambda_{r}}(\eta_{r}(\vx))\right|}{p}$ converges to 0 in probability.
Combined with the convergence of $\frac{\sregu_{\vlambda_{r}}(\eta_{r}(\vx))}{p}$, we conclude that $\frac{\regu(\eta(\vx))}{p}$ also converges. So far, we have proved that $\frac{\hat{e}_{\regu}(\vx;\tau)}{p}$ converges in probability. To prove step (II), we use $\frac{\|\tprox_{\tau\vlambda}(\vx)-\eta(\vx;F_{y},F_{\tau\lambda})\|^{2}}{p}\to0$ proved in Proposition \ref{prop:prox} and proceed in the same way as in the proof of the convergence of $\frac{\regu(\eta(\vx))}{p}$ in step (I) to get $\frac{e_{\regu}(\vx;\tau)-\hat{e}_{\regu}(\vx;\tau)}{p}\pconv0$. Combining both steps, we obtain that $\frac{e_{\regu}(\vx;\tau)}{p}$ converges in probability. Similarly, we can show that $\frac{\regu(\sgl)}{p}$ converges in probability, so we conclude that $\frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} $ converges in probability.
To prove (\ref{eq:F_ctau}), we observe that: \begin{align} \frac{\left|e_{\regu}(c\vh+\sgl;\tau)\right|}{p} & =\frac{\min_{\vv}\frac{1}{2\tau}\|c\vh+\sgl-\vv\|^{2}+\regu(\vv)}{p}\nonumber \\ & \leq\frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p}\label{eq:DCT1} \end{align} By the WLLN, we have \begin{align*} \frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p} & \pconv\E\frac{c^{2}h^{2}+2\tau(\lambda^{2}+\beta^{2})}{2\tau}\\ & =\lim_{p\to\infty}\E\frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p} \end{align*} so by the generalized dominated convergence theorem (GDCT), we have: \begin{equation} \E\lim_{p\to\infty}\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}=\lim_{p\to\infty}\E\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}\label{eq:DCT2} \end{equation} and thus \begin{equation} \frac{e_{\regu}(c\vh+\sgl;\tau)}{p}\pconv\lim_{p\to\infty}\E\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}.\label{eq:DCT3} \end{equation} On the other hand, we can show $\frac{\regu(\sgl)}{p}\pconv\lim_{p\to\infty}\E\frac{\regu(\sgl)}{p}$. Together with (\ref{eq:DCT3}), we obtain (\ref{eq:coord_aver1}). \end{IEEEproof} The function $F(c,\tau)$ defined in (\ref{eq:F_ctau}) plays a crucial role in deriving the asymptotic characterization. To get (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), we need some additional analytical properties of $F(c,\tau)$, which are established in the next lemma. \begin{lem} \label{lem:F_c_tau_uniform_conv}$F(c,\tau)$ defined in (\ref{eq:F_ctau}) is jointly convex w.r.t.
$(c,\tau)$ and continuously differentiable, with partial derivatives: \begin{align} \frac{\partial F(c,\tau)}{\partial c} & =\lim_{p\to\infty}\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\label{eq:F_deri1}\\ \frac{\partial F(c,\tau)}{\partial\tau} & =\lim_{p\to\infty}\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau)\label{eq:F_deri2} \end{align} where $\vh\sim\calN(\mathbf{0},\mI_{p})$ and $\vbeta$, $\vlambda$ are converging sequences. \end{lem} \begin{IEEEproof} We first show the convexity of $F(c,\tau)$ defined in (\ref{eq:F_ctau}). It can be shown that $\forall\vx\in\R^{p}$, $e_{\regu}(\vx;\tau)$ is jointly convex in $(\vx,\tau)$ \cite{rockafellar2009variational}. Therefore, $F_{p}(c,\tau)$ in (\ref{eq:Fp_ctau}) is jointly convex in $(c,\tau)$ and so is $F(c,\tau)$. We then show that $F_{p}(c,\tau)$ is continuously differentiable. First, it can be shown \cite{rockafellar2009variational} that $e_{\regu}(c\vh+\sgl;\tau)$ is continuously differentiable w.r.t. $(c,\tau)$ for any $\vh$ and $\sgl$, with derivatives as follows: \begin{align} \frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau) & =\frac{\left[c\vh+\sgl-\tprox_{\tau\vlambda}(c\vh+\sgl)\right]^{\T}\vh}{\tau}\label{eq:par_efc}\\ \frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau) & =-\frac{\|c\vh+\sgl-\tprox_{\tau\vlambda}(c\vh+\sgl)\|^{2}}{2\tau^{2}}.\label{eq:par_eftau} \end{align} From Algorithm \ref{alg:discrete}, it is not hard to show that $\tprox_{\vlambda}(\vy)$ satisfies \begin{equation} \|\tprox_{\vlambda}(\vy_{1})-\tprox_{\vlambda}(\vy_{2})\|\leq\|\vy_{1}-\vy_{2}\|,\label{eq:prox_lipschitz} \end{equation} so $\|\tprox_{\tau\vlambda}(c\vh+\sgl)\|^{2}\leq\|c\vh+\sgl\|^{2}$.
Combining it with (\ref{eq:par_efc}) and (\ref{eq:par_eftau}) and using the Cauchy-Schwarz inequality, we have: \begin{align*} \left|\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\right| & \leq g_{1}(\vh,\sgl;c,\tau)\\ \left|\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau)\right| & \leq g_{2}(\vh,\sgl;c,\tau) \end{align*} where \begin{align*} g_{1}(\vh,\sgl;c,\tau) & =\frac{(|c|\|\vh\|+\|\sgl\|)^{2}+\|\vh\|^{2}}{\tau}\\ g_{2}(\vh,\sgl;c,\tau) & =\frac{2(|c|\|\vh\|+\|\sgl\|)^{2}}{\tau^{2}} \end{align*} Clearly, for any $c\in\R$ and $\tau>0$, $\E g_{1}(\vh,\sgl;c,\tau)<\infty$ and $\E g_{2}(\vh,\sgl;c,\tau)<\infty$. When $(c,\tau)$ belongs to any compact subset $\comset\subset\R\times\R^{+}$, $g_{1}(\vh,\sgl;c,\tau)\leq g_{1,\mathcal{C}}(\vh,\sgl)$ and $g_{2}(\vh,\sgl;c,\tau)\leq g_{2,\mathcal{C}}(\vh,\sgl)$, where \begin{align*} g_{1,\mathcal{C}}(\vh,\sgl) & =\frac{(|c_{\max}|\|\vh\|+\|\sgl\|)^{2}+\|\vh\|^{2}}{\tau_{\min}}\\ g_{2,\mathcal{C}}(\vh,\sgl) & =\frac{2(|c_{\max}|\|\vh\|+\|\sgl\|)^{2}}{\tau_{\min}^{2}} \end{align*} with $c_{\max}=\max_{c\in\comset}c$ and $\tau_{\min}=\min_{\tau\in\comset}\tau$. Obviously, $\E g_{1,\mathcal{C}}(\vh,\sgl),\,\E g_{2,\mathcal{C}}(\vh,\sgl)<\infty$. Therefore, by the dominated convergence theorem (DCT), we have: \begin{align} \frac{\partial F_{p}(c,\tau)}{\partial c} & =\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\label{eq:par_Fp_c}\\ \frac{\partial F_{p}(c,\tau)}{\partial\tau} & =\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau).\label{eq:par_Fp_tau} \end{align} and both are continuous for any finite $p$. Next, we show that $F_{p}(c,\tau)$ in (\ref{eq:Fp_ctau}) converges uniformly to $F(c,\tau)$ on any compact set $\comset\subset\R\times\R^{+}$. First, we already know that $F_{p}(c,\tau)$ is jointly convex. In addition, $\forall p$, $F_{p}(c,\tau)$ and $F(c,\tau)$ are bounded on $\comset\subset\R\times\R^{+}$.
Therefore, by Theorem 10.8 in \cite{rockafellar2015convex}, $F_{p}(c,\tau)$ converges uniformly to $F(c,\tau)$ on any compact $\comset\subset\R\times\R^{+}$. Finally, we show that if $(c,\tau)$ belongs to any compact set $\comset_{1}\times\comset_{2}\subset\R\times\R^{+}$, then (a) $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ converges uniformly on $\comset_{1}$ for any given $\tau\in\comset_{2}$, and (b) $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ converges uniformly on $\comset_{2}$ for any given $c\in\comset_{1}$. First, similar to the proof of Lemma \ref{lem:conv_F}, we can show that $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ and $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ converge pointwise over $\comset_{1}\times\comset_{2}$. To prove (a), we first show that for any given $\tau\in\comset_{2}$, $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ is equi-Lipschitz over $\comset_{1}$, i.e., $\exists\gamma>0$ s.t. $\left|\frac{\partial F_{p}(c_{1},\tau)}{\partial c}-\frac{\partial F_{p}(c_{2},\tau)}{\partial c}\right|<\gamma\left|c_{1}-c_{2}\right|$, $\forall c_{1},c_{2}\in\comset_{1}$, $\forall p\geq1$. Combining (\ref{eq:par_efc}) and (\ref{eq:par_Fp_c}), we have: \begin{align} \left|\frac{\partial F_{p}(c_{1},\tau)}{\partial c}-\frac{\partial F_{p}(c_{2},\tau)}{\partial c}\right| & =\frac{\E\left|\left(\tprox_{\tau\vlambda}(c_{1}\vh+\sgl)-\tprox_{\tau\vlambda}(c_{2}\vh+\sgl)\right)^{\T}\vh\right|}{p\tau}\nonumber \\ & \leq\frac{\E\|\vh\|^{2}}{p\tau}|c_{1}-c_{2}|\label{eq:unif_ineq1}\\ & =\frac{|c_{1}-c_{2}|}{\tau}\nonumber \end{align} where to get (\ref{eq:unif_ineq1}) we have used the Cauchy-Schwarz inequality and the fact that $\tprox_{\vlambda}(\vx)$ is 1-Lipschitz (\ref{eq:prox_lipschitz}). Therefore, for fixed $\tau$, $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ is equi-Lipschitz with constant $1/\tau$.
Then, following the same argument as in the proof of Theorem 10.8 in \cite{rockafellar2015convex}, we can show the uniform convergence of $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ for $\tau\in\comset_{2}$. Similarly, to prove (b), we first prove that $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ is equi-Lipschitz over $\comset_{2}$ as a function of $\tau$. It is easy to check from (\ref{eq:slope_prox}) that $\tprox_{\tau\vlambda}(\vy)=\tau\tprox_{\vlambda}(\frac{\vy}{\tau})$ and combining this with (\ref{eq:par_eftau}) and (\ref{eq:par_Fp_tau}), we have: \begin{align*} & \left|\frac{\partial F_{p}(c,\tau_{1})}{\partial\tau}-\frac{\partial F_{p}(c,\tau_{2})}{\partial\tau}\right|\\ \leq & \frac{\E\left|\frac{\tau_{2}^{2}-\tau_{1}^{2}}{2\tau_{1}^{2}\tau_{2}^{2}}\left\Vert \vy\right\Vert ^{2}+\frac{\vy^{\T}}{\tau_{1}}\tprox_{\vlambda}(\frac{\vy}{\tau_{1}})-\frac{\vy^{\T}\tprox_{\vlambda}(\frac{\vy}{\tau_{2}})}{\tau_{2}}+\left\Vert \tprox_{\vlambda}(\frac{\vy}{\tau_{2}})\right\Vert ^{2}-\left\Vert \tprox_{\vlambda}(\frac{\vy}{\tau_{1}})\right\Vert ^{2}\right|}{p}\\ \leq & \frac{3|\tau_{1}+\tau_{2}|\E\|\vy\|^{2}}{p\tau_{1}^{2}\tau_{2}^{2}}|\tau_{1}-\tau_{2}|\\ \leq & C|\tau_{1}-\tau_{2}|, \end{align*} where $\vy=c\vh+\sgl$, $C$ is a constant not depending on $p$, and we have used the 1-Lipschitz continuity of $\tprox_{\vlambda}(\cdot)$ in the second inequality. Then, again following the proof in \cite{rockafellar2015convex}, we can show the uniform convergence of $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ for $c\in\comset_{1}$. Combining the three steps above with Theorem 7.17 in \cite{rudin1976principles}, we obtain (\ref{eq:F_deri1}) and (\ref{eq:F_deri2}).
\end{IEEEproof} \begin{lem} \label{lem:conv_Stein}For converging sequences $\left\{ \sgl\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$, the following results hold: \begin{equation} \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}=\E[\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})-B]^{2}\label{eq:fixedequa_lim1} \end{equation} and \begin{equation} \lim_{p\to\infty}\frac{1}{p}\E[\equinoise^{\T}\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]=\sigma\E\eta'(B+\sigma\equinoisei;F_{y},F_{\tau\lambda}),\label{eq:fixedequa_lim2} \end{equation} where $\equinoise\sim\calN(\mathbf{0},\mI_{p})$ and $F_{y}$ is the distribution of $B+\sigma\equinoisei$, with $B\sim F_{\sgli}$ and $\equinoisei\sim\calN(0,1)$. \end{lem} \begin{IEEEproof} Using Proposition \ref{prop:prox}, for (\ref{eq:fixedequa_lim1}), we have: \begin{align*} & \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|^{2}\\ = & \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})\|^{2}\\ & +\lim_{p\to\infty}\frac{2}{p}\E(\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda}))^{\T}(\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})-\sgl)\\ & +\lim_{p\to\infty}\frac{1}{p}\E\|\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})-\sgl\|^{2}\\ = & \E[\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})-B]^{2}, \end{align*} where the first two limits vanish by Proposition \ref{prop:prox} and the Cauchy-Schwarz inequality. Next we show (\ref{eq:fixedequa_lim2}).
Again using Proposition \ref{prop:prox}, we have: \begin{align*} \lim_{p\to\infty}\frac{1}{p}\E[\equinoise^{\T}\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)] & =\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\E[\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]_{i}\equinoisei_{i}\\ & =\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\E\eta(\sgli_{i}+\sigma\equinoisei_{i};F_{y},F_{\tau\lambda})\equinoisei_{i}\\ & =\E\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})\equinoisei\\ & =\sigma\E\eta'(B+\sigma\equinoisei;F_{y},F_{\tau\lambda}), \end{align*} where we have used Stein's lemma in the last equality. \end{IEEEproof} \section{Introduction} In sparse linear regression, we seek to estimate a sparse vector $\sgl \in\mathbb{R}^{p}$ from \begin{equation}\label{eq:model} \vy=\dmtx\sgl+\noise, \end{equation} where $\dmtx\in\mathbb{R}^{n\times p}$ is the design matrix and $\noise$ denotes the observation noise. In this paper, we study the \emph{sorted $\ell_{1}$ penalization estimator} (SLOPE) \cite{bogdan2015slope} (see also \cite{zhong2012efficient, zeng2014decreasing}). Given a non-decreasing regularization sequence $\boldsymbol{\lambda} = [\lambda_1, \lambda_2, \ldots, \lambda_p]^\top$ with $0\leq\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{p}$, SLOPE estimates $\sgl$ by solving the following optimization problem \begin{equation} \widehat{\sgl} = \underset{\est}{\arg\,\min}\ \frac{1}{2}\|\vy-\dmtx\est\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{i}|\esti|_{(i)},\label{eq:slope_opt} \end{equation} where $|\esti|_{(1)}\leq|\esti|_{(2)}\leq\cdots\leq|\esti|_{(p)}$ is a reordering of the absolute values $\abs{\esti_1}, \abs{\esti_2}, \ldots, \abs{\esti_p}$ in increasing order. In \cite{bogdan2015slope}, the regularization term $\regu(\est) \bydef \sum_{i=1}^{p}\lambda_{i}|\esti|_{(i)}$ is referred to as the ``sorted $\ell_{1}$ norm'' of $\est$. 
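For concreteness, the proximal operator of the sorted $\ell_{1}$ norm (the minimizer defining $\tprox_{\vlambda}$) can be evaluated exactly by sorting the magnitudes and running a pool-adjacent-violators pass. The sketch below is a minimal pure-Python rendition of this well-known routine; the function name and the test values are ours, and it is meant as an illustration rather than the exact algorithm referenced later in the paper.

```python
def prox_sorted_l1(y, lam):
    """Prox of J(x) = sum_i lam_i * |x|_(i) (unit step size), with lam
    nondecreasing and |x|_(1) <= ... <= |x|_(p), so the largest lambda is
    paired with the largest magnitude.  Strategy: sort |y| decreasingly,
    pair with lam sorted decreasingly, run a pool-adjacent-violators pass
    on |y| - lam to enforce a nonincreasing solution, clip at zero, and
    finally undo the sorting and restore signs."""
    p = len(y)
    order = sorted(range(p), key=lambda i: -abs(y[i]))  # decreasing |y|
    z = [abs(y[i]) for i in order]
    lam_dec = sorted(lam, reverse=True)
    blocks = []  # each block: [start, end, total of (z - lam) over block]
    for i in range(p):
        blocks.append([i, i, z[i] - lam_dec[i]])
        # merge while the running block averages violate "nonincreasing"
        while len(blocks) > 1 and (
                blocks[-2][2] / (blocks[-2][1] - blocks[-2][0] + 1)
                <= blocks[-1][2] / (blocks[-1][1] - blocks[-1][0] + 1)):
            s, e, tot = blocks.pop()
            blocks[-1][1] = e
            blocks[-1][2] += tot
    x = [0.0] * p
    for s, e, tot in blocks:
        val = max(tot / (e - s + 1), 0.0)  # clip at zero
        for rank in range(s, e + 1):
            xi = order[rank]
            x[xi] = val if y[xi] >= 0 else -val  # restore signs
    return x
```

With a constant sequence $\lambda_i \equiv \lambda$ this reduces to entrywise soft thresholding, which provides an easy sanity check of the implementation.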
The same regularizer was independently developed in a different line of work \cite{zhong2012efficient, zeng2014decreasing, figueiredo2016ordered}, where the motivation is to promote group selection in the presence of correlated covariates. The classical LASSO estimator is a special case of SLOPE: it corresponds to using a constant regularization sequence, \emph{i.e.}, $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \lambda$. With more general $\lambda$-sequences, however, SLOPE has the flexibility to penalize different coordinates of the estimate according to their magnitudes. This adaptivity endows SLOPE with some desirable \emph{statistical} properties that LASSO does not possess. For example, it is shown in \cite{su2016slope,bellec2018slope} that SLOPE achieves the minimax $\ell_{2}$ estimation rate with high probability. In terms of testing, the authors of \cite{bogdan2015slope} show that SLOPE controls the false discovery rate (FDR) for orthogonal design matrices, which is not the case for LASSO \cite{su2017false}. In addition, we note that the regularizer $\regu(\est)$ is still a norm \cite{bogdan2015slope,zeng2014decreasing}. Thus, the optimization problem associated with SLOPE remains convex, and it can be efficiently solved by using, \emph{e.g.}, proximal gradient descent \cite{zeng2014decreasing, bogdan2015slope}. In the aforementioned studies, the performance of SLOPE is characterized by non-asymptotic probabilistic bounds; such bounds, however, provide very limited guidance on how to optimally design the regularization sequence $\boldsymbol{\lambda}$ in different settings, which remains an important open question in the literature \cite{bogdan2015slope, su2016slope}. In this paper, we provide two main contributions: \begin{enumerate} \item We obtain a characterization of SLOPE in the asymptotic regime: $n,\,p\to\infty$ and $n/p\to\delta$.
Compared with the probabilistic bounds derived in previous work, our results are asymptotically \emph{exact}. Similar asymptotic analyses have been carried out for LASSO \cite{bayati2012lasso} and many other regularized linear regression problems \cite{karoui2013asymptotic, thrampoulidis2015regularized, thrampoulidis2018precise}, but the main technical challenge in analyzing SLOPE is the nonseparability of $\regu(\vx)$: it cannot be written as a sum of component-wise functions, \emph{i.e.}, $\regu(\vx) \neq \sum_{i=1}^p J_i(x_i)$. In our work, we overcome this challenge by showing that the proximal operator of $\regu(\vx)$ is \emph{asymptotically separable}. \item Using our asymptotic characterization, we derive the \emph{oracle} optimal $\vlambda$ in two settings: (1) the optimal regularization sequence that minimizes the MSE $\E\|\hat{\sgl}-\sgl\|^{2}$; and (2) the optimal sequence that achieves the highest possible power in testing and variable selection under any given level of type-I error. In both cases, we show that the optimal design can be recast as certain infinite-dimensional convex optimization problems, which have efficient and accurate finite-dimensional approximations. \end{enumerate} A caveat of our optimal design is that it requires knowing the limiting empirical measure of $\sgl$ (\emph{e.g.}, the sparsity level and the distribution of its nonzero coefficients). For this reason, our results are \emph{oracle optimal}. They provide a first step towards more practical optimal designs that are completely blind to $\sgl$. The rest of the paper is organized as follows. In Sec. \ref{sec:Asymptotic-Results}, we first prove the asymptotic separability of the proximal operator associated with $\regu(\vx)$. This property allows us to derive our main asymptotic characterization of SLOPE, summarized as Theorem~\ref{thm:asymp_char}. Based on this analysis, we present the optimal design of the regularization sequence in Sec. \ref{sec:Oracle-Optimality}.
Numerical simulations verify our asymptotic characterizations. They also demonstrate the superiority of our optimal design over LASSO and a previous sequence design in the literature \cite{su2016slope}. \section{Main Asymptotic Results\label{sec:Asymptotic-Results}} \subsection{Technical Assumptions\label{subsec:DefandAssump}} There are four main objects in the description of our model and algorithm: (1) the unknown sparse vector $\sgl$; (2) the design matrix $\dmtx$; (3) the noise vector $\noise$; and (4) the regularization sequence $\vlambda$. Since we study the asymptotic limit (with $p \to \infty$), we will consider a sequence of instances $\big\{ \sgl^{(p)},\,\dmtx^{(p)},\,\vw^{(p)},\,\vlambda^{(p)}\big\} _{p\in\mathbb{N}}$ with increasing dimensions $p$, where $\sgl^{(p)},\,\vlambda^{(p)}\in\R^{p}$, $\dmtx^{(p)}\in\R^{n\times p}$ and $\noise^{(p)}\in\R^{n}$. A sequence of vectors $\boldsymbol{x}^{(p)} \in \R^p$ indexed by $p$ is called a \emph{converging sequence} \cite{bayati2012lasso} if its empirical measure $\mu_p(x) \bydef \tfrac{1}{p} \sum_{i=1}^p \delta(x - x_i^{(p)})$ converges weakly to a probability measure on $\R$ as $p\to\infty$. Our results are proved under the following assumptions: \begin{enumerate}[label={(A.\arabic*)}] \item \label{a:sampling_ratio} The number of observations grows in proportion to $p$: $n^{(p)}/p\to\delta\in(0,\infty)$. \item The number $k$ of nonzero elements in $\sgl^{(p)}$ grows in proportion to $p$: $k/p\to\rho\in(0,1]$. \item The elements of $\dmtx^{(p)}$ are i.i.d. Gaussian: $\dmtxi_{ij}^{(p)}\iid\mathcal{N}(0,\,\frac{1}{n})$. \item \label{a:converging} $\{\sgl^{(p)}\}_{p\in \mathbb{N}}$, $\{\vw^{(p)}\}_{p\in \mathbb{N}}$ and $\{\vlambda^{(p)}\}_{p\in \mathbb{N}}$ are converging sequences. The distribution functions of the limiting measures are denoted by $F_{\sgli}$, $F_{\noisei}$ and $F_{\lambda}$, respectively.
Moreover, we have $\P(|\lambda|\neq0)>0$, $\tfrac{1}{p}\|\sgl^{(p)}\|^{2}\to\E[\sgli^{2}]$, $\frac{1}{n}\|\noise^{(p)}\|^{2}\to\E[\noisei^{2}]=\sigma_{w}^{2}$ and $\frac{1}{p}\|\vlambda^{(p)}\|^{2}\to\E[\lambda^{2}]$, where the probability $\P(\cdot)$ and the expectations $\E[\cdot]$ are all computed with respect to the limiting measures. \end{enumerate} \begin{comment} Let us denote $F_{Y}$ and $F_{|Y|}$ as the CDF of $y_{i}$ and $|y_{i}|$. Also we can consider each $\lambda_{i}$ is drawn independently from the same distribution $F_{\lambda}$ and then reorder them to get $\vlambda$. \end{comment} \subsection{Asymptotics of the Proximal Operator of $\protect\regu(\protect\vx)$} We start by studying the proximal operator associated with the sorted $\ell_{1}$ norm $\regu(\vx)$. Given $\vy \in \R^p$ and a regularization sequence $\boldsymbol{\lambda}$ with $0\leq\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{p}$, the proximal operator is defined as the solution to the following optimization problem: \begin{equation} \text{Prox}_{\boldsymbol{\lambda}}(\vy) \bydef \underset{\vx}{\arg\,\min}\ \frac{1}{2}\|\vy-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}.\label{eq:slope_prox} \end{equation} Although $\regu(\vx)$ is a more complicated regularization term than the $\ell_1$ norm used in LASSO, (\ref{eq:slope_prox}) can still be efficiently solved \cite{bogdan2015slope,zhong2012efficient}. In the case of LASSO, which corresponds to choosing $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \lambda$, the proximal operator is easy to characterize, as it is separable: $[\text{Prox}_{\boldsymbol{\lambda}}(\vy)]_i = \text{sign}(y_i)\max(|y_{i}|-\lambda,0)$. In other words, the $i$th element of $\text{Prox}_{\boldsymbol{\lambda}}(\vy)$ is solely determined by $y_i$. However, this separability property does not hold for a general regularization sequence.
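To make the coupling concrete, here is a small brute-force experiment of ours for $p=2$ (the grid resolution and the choice $\boldsymbol{\lambda}=(0,2)$ are arbitrary): the first coordinate of the prox changes from $0$ to $1$ when only the \emph{second} input coordinate changes.

```python
import numpy as np

def slope_prox_2d(y, lam, lo=-4.0, hi=4.0, n=801):
    """Brute-force prox of the sorted l1 norm for p = 2:
    minimize 0.5*||y - x||^2 + lam_1*|x|_(1) + lam_2*|x|_(2)
    over a uniform grid of candidate points x."""
    g = np.linspace(lo, hi, n)
    X1, X2 = np.meshgrid(g, g, indexing="ij")
    A = np.sort(np.abs(np.stack([X1, X2])), axis=0)   # |x|_(1) <= |x|_(2)
    obj = (0.5 * ((y[0] - X1) ** 2 + (y[1] - X2) ** 2)
           + lam[0] * A[0] + lam[1] * A[1])
    i, j = np.unravel_index(np.argmin(obj), obj.shape)
    return np.array([X1[i, j], X2[i, j]])

lam = [0.0, 2.0]
p1 = slope_prox_2d(np.array([1.0, 0.0]), lam)   # first coordinate shrunk to 0
p2 = slope_prox_2d(np.array([1.0, 3.0]), lam)   # first coordinate stays near 1
```

With $y=(1,0)$, the coordinate $x_1$ carries the larger penalty $\lambda_2=2$ and is shrunk to zero; with $y=(1,3)$, the penalty $2$ is absorbed by the larger coordinate, and $x_1$ survives.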
When $p$ is finite, $[\text{Prox}_{\boldsymbol{\lambda}}(\vy)]_i$ depends not only on $y_{i}$ but also on other elements of $\vy$. This coupling makes it much harder to analyze the proximal operator. Fortunately, as we show below, when $p\to\infty$, $\text{Prox}_{\boldsymbol{\lambda}}(\cdot)$ becomes \emph{asymptotically separable}. \begin{prop} \label{prop:prox} Let $\{ \vy^{(p)}\}_{p\in \mathbb{N}}$ and $\{\vlambda^{(p)}\}_{p\in \mathbb{N}}$ be two converging sequences. Denote by $F_y$ and $F_\lambda$ the distribution functions of their respective limiting measures. It holds that \begin{equation}\label{eq:prox_sep} \lim_{p \to \infty} \frac{1}{p}\,\|\text{Prox}_{\boldsymbol{\lambda}}(\vy^{(p)})-\prox(\vy^{(p)}; F_y, F_\lambda)\|^{2} = 0, \end{equation} where $\prox(\cdot; F_y, F_\lambda)$ is a scalar function that is determined by $F_y$ and $F_\lambda$. Here $\prox(\vy^{(p)}; F_y, F_\lambda)$ denotes a coordinate-wise application of $\prox(\cdot; F_y, F_\lambda)$ to $\vy^{(p)}$. The explicit construction of $\prox(\cdot; F_y, F_\lambda)$ is given in Algorithm \ref{alg:conti}. \end{prop} The proof of Proposition \ref{prop:prox} and a detailed explanation of Algorithm \ref{alg:conti} will be provided in Appendix \ref{subsec:appendix_prox}. We will also see that the asymptotic separability of $\text{Prox}_{\boldsymbol{\lambda}}(\cdot)$ greatly facilitates our asymptotic analysis and the optimal design of SLOPE, since it allows us to reduce the original high-dimensional problem to an equivalent one-dimensional problem. In what follows, we refer to $\prox(\cdot; F_y, F_\lambda)$ as the \emph{limiting scalar function}.
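Before the formal construction, the following sketch (ours; the distributional choices are arbitrary examples) illustrates its first step, the quantile coupling $\lambda(y)=F_{\lambda}^{-1}(F_{|Y|}(y))$ and the initial map $y\mapsto y-\lambda(y)$ used in Algorithm \ref{alg:conti}.

```python
import numpy as np
from scipy.stats import norm

def lam_of_y(y, lam_ppf):
    """Quantile coupling lambda(y) = F_lambda^{-1}(F_{|Y|}(y)) for Y ~ N(0,1),
    in which case |Y| has CDF F_{|Y|}(y) = 2*Phi(y) - 1 for y >= 0
    (continuous case, so no averaging over atoms is needed)."""
    return lam_ppf(2.0 * norm.cdf(y) - 1.0)

def d0(y, lam_ppf):
    """Initial map d_0(y) = y - lambda(y), the first step of the construction."""
    return y - lam_of_y(y, lam_ppf)

ys = np.linspace(0.0, 4.0, 401)
d_unif = d0(ys, lam_ppf=lambda u: u)   # lambda ~ Unif[0, 1]
```

For $\lambda\sim\mathrm{Unif}[0,1]$ one can check that $d_0$ is already non-decreasing (its derivative is $1-2\phi(y)>0$), so the iterative smoothing in Algorithm \ref{alg:conti} never triggers and the limiting scalar function is simply $\mathrm{sign}(y)\max\{0,d_0(|y|)\}$; a point mass at $\lambda=c$ gives $d_0(y)=y-c$ and recovers soft thresholding.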
\begin{algorithm} \begin{minipage} {\columnwidth} \begin{center} \begin{algorithmic}[2] \Require{ Distributions of $Y$ and $\lambda$: $y\sim F_y(y)$, $\lambda\sim F_{\lambda}(\lambda)$ } \Ensure{ Limiting proximal mapping $\opty{y; F_y, F_\lambda}$ } \State{Compute \[ \diff_0(y;F_{y},F_{\lambda}) := y - \lambda(y),\quad y\geq 0, \] where \[ \lambda(y) := \begin{cases} F_{\lambda}^{-1}(F_{|y|}(y)) & \text{if } \P(|Y|=y)=0, \\ \frac{\int_{F_{|y|}(y^-)}^{F_{|y|}(y)}F_{\lambda}^{-1}(u)du}{F_{|y|}(y)-F_{|y|}(y^-)} & \text{if } \P(|Y|=y)>0. \end{cases} \]} \State{Set $t=0$} \While{$\exists$ MDI of $\diff_t(y;F_{y},F_{\lambda})$} \State{Find the first MDI of $\diff_t(y;F_{y},F_{\lambda})$: $I_2=[\bdp_1, \bdp_2]$. } \If{$\P(|Y|>\bdp_2)=0$} \State{$\bdp_L \leftarrow \arg\max_{v\in[0, \bdp_1]} \conde(v, \bdp_2;\diff_t)$} \State{$\bdp_R \leftarrow \bdp_2$} \Else \State{Find $I_3$, the right neighbouring MNDI of $I_2$, and $I_1$, the left neighbouring MNDI of $I_2$} \State{Solve \[ \min_{w\in I_3}\max_{v\in I_1} \conde\left(v,w;\diff_t\right) \]} \State{Obtain the optimal solution $(v^*, w^*)$.} \State{$\bdp_L \leftarrow v^*$, $\bdp_R \leftarrow w^*$} \EndIf \State{For $y\in [\bdp_L, \bdp_R]$, replace the original $\diff_{t}(y;F_{y},F_{\lambda})$ by $\conde\left(\bdp_L,\bdp_R;\diff_t\right)$ to get $\diff_{t+1}(y;F_{y},F_{\lambda})$.} \State{$t=t+1$} \EndWhile \State{Set $\diff(|y|;F_{y},F_{\lambda})=\diff_{t}(y;F_{y},F_{\lambda})$} \State{Obtain: $\prox(y;F_{y},F_{\lambda}) = \text{sign}(y) \max\{0,\diff(|y|;F_{y},F_{\lambda})\}$ } \end{algorithmic} \end{center} \end{minipage} \caption{Recursive procedure for constructing $\prox(\cdot; F_y, F_\lambda)$.\label{alg:conti}} \end{algorithm} \begin{example} We compare the actual proximal operator $\text{Prox}_{\boldsymbol{\lambda}}(\vy)$ with the limiting scalar function $\prox(y; F_y, F_\lambda)$, for the two different $\vlambda$-sequences shown in Fig.~\ref{fig:prox_dist}(a) and Fig.~\ref{fig:prox_dist}(c).
The red curves represent the limiting scalar functions obtained in Proposition \ref{prop:prox}, whereas the blue circles are sample points of $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$, with $y_i \sim \mathcal{N}(0, 1)$. For better visualization, we randomly sample 3\% of all pairs $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$. It can be seen that already at a moderate dimension $p=1024$, the proximal operator is very accurately approximated by the limiting scalar function. \end{example} \begin{figure}[t] \vspace{-3ex} \centering \subfloat[]{ \includegraphics[scale=0.618]{figs/stair_dist} }\subfloat[]{ \includegraphics[clip,scale=0.618]{figs/stair_prox} } \vspace{-2ex} \centering \subfloat[]{ \includegraphics[clip,scale=0.618]{figs/BHq_dist} }\hphantom{}\subfloat[]{ \includegraphics[clip,scale=0.618]{figs/BHq_prox} } \caption{(a) and (c): The histograms of two different $\vlambda$-sequences. (b) and (d): Sample points of $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$ (the blue dots) compared against the limiting scalar functions $\prox(y; F_y, F_\lambda)$ (the red curves). In this experiment, $p = 1024$.}\label{fig:prox_dist} \vspace{-2ex} \end{figure} \subsection{Asymptotics of SLOPE} We are now ready to tackle the original optimization problem (\ref{eq:slope_opt}) associated with SLOPE. Our goal is to characterize the joint empirical measure of $(\sol^{(p)},\,\sgl^{(p)})$: $\mu_{p}(\hat{\sgli},\,\sgli)\bydef\frac{1}{p}\sum_{i=1}^{p}\delta(\hat{\sgli}-\soli_{i},\sgli-\sgli_{i})$. Indeed, many quantities of interest, such as the MSE, the type-I error, and the FDR, are all functionals of this joint empirical measure. A function $\psi:\,\R^{2}\to\R$ is called \emph{pseudo-Lipschitz} if $|\psi(\vx)-\psi(\vy)|\leq L(1+\|\vx\|_{2}+\|\vy\|_{2})\|\vx-\vy\|_{2}$ for all $\vx,\,\vy\in\R^{2}$, where $L$ is a positive constant.
As in \cite{bayati2012lasso}, we will depict the limit of $\mu_p(\hat{\sgli}, \sgli)$ through its action on pseudo-Lipschitz functions. \begin{thm} \label{thm:asymp_char} Assume \ref{a:sampling_ratio} -- \ref{a:converging} hold. For any pseudo-Lipschitz function $\psi$, we have \begin{equation} \lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\soli_{i},\,\sgli_{i})=\E_{B,Z}[\psi(\prox(B+\sigma Z;\,F_y, F_{\tau \lambda}),B)].\label{eq:weak_convergence} \end{equation} Here, $B, Z$ are two independent random variables with $B \sim F_\beta$ and $Z \sim\mathcal{N}(0,1)$; $\prox(\cdot\,; F_y, F_{\tau \lambda})$ is the limiting scalar function defined in Proposition \ref{prop:prox}, with $F_y$ denoting the distribution function of $B + \sigma Z$ and $F_{\tau \lambda}$ denoting that of $\tau \lambda$ for some $\tau \ge 0$. Moreover, the scalar pair $(\sigma,\,\tau)$ is the unique solution of the following equations: \begin{align} \sigma^{2} & =\sigma_{\noisei}^{2}+\frac{1}{\delta}\E_{B, Z}[(\prox(B+\sigma Z;F_y, F_{\tau \lambda})-B)^{2}]\label{eq:sigmatau_1}\\ 1 & =\tau\big(1-\frac{1}{\delta}\E_{B, Z}[\prox'(B+\sigma Z;F_y, F_{\tau \lambda})]\big).\label{eq:sigmatau_2} \end{align} \end{thm} \begin{rem} Readers familiar with the asymptotic analysis of LASSO will recognize that the forms of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) look identical to the results for LASSO obtained in \cite{bayati2012lasso,thrampoulidis2018precise}. Indeed, the first part of our proof directly applies the framework for analyzing LASSO asymptotics using the convex Gaussian min-max theorem (CGMT) \cite{thrampoulidis2015regularized, thrampoulidis2018precise}.
Following \cite{thrampoulidis2018precise}, in the asymptotic regime, the limiting measure of SLOPE is determined by the following fixed point equations: \begin{align} \sigma^{2} & =\sigma_{w}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\|\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}\label{eq:fixedpt_vector1}\\ 1 & =\tau\left[1-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\text{div}(\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise))\right].\label{eq:fixedpt_vector2} \end{align} Note that (\ref{eq:fixedpt_vector1}) and (\ref{eq:fixedpt_vector2}) are similar to (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), except that they involve an $\R^{p}\mapsto\R^{p}$ proximal mapping: $\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise)$. This is where Proposition \ref{prop:prox} becomes useful. Using the asymptotic separability stated in that proposition, we can simplify (\ref{eq:fixedpt_vector1}) and (\ref{eq:fixedpt_vector2}) to the scalar equations given in (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}). \end{rem} Theorem \ref{thm:asymp_char} essentially says that the joint empirical measure of $(\sol^{(p)},\,\sgl^{(p)})$ converges weakly to the law of $(\prox(B+\sigma Z;\,F_y, F_{\tau \lambda}),B)$. This means that although the original problem (\ref{eq:slope_opt}) is high-dimensional, its asymptotic performance can be succinctly captured by merely two scalar quantities. In (\ref{eq:weak_convergence}), if we let $\psi(x_1,x_2)=(x_1-x_2)^{2}$, we obtain the asymptotic MSE; by setting $\psi(x_1,x_2)=1_{x_2=0,\,x_1\neq0}$, we can recover the type-I error. (Technically, $1_{x_2=0,\,x_1\neq0}$ is not pseudo-Lipschitz. However, with additional justifications \cite{bogdan2015slope}, one can show that the conclusion is still correct.)
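As a sanity check on these equations, in the separable LASSO special case ($F_{\lambda}$ a point mass, so that the limiting scalar function is soft thresholding) the pair $(\sigma,\tau)$ can be found by straightforward Monte Carlo fixed-point iteration. The sketch below is ours; the signal prior, sampling ratio, and noise level are arbitrary example values.

```python
import numpy as np

def soft(y, t):
    """Soft thresholding: the limiting scalar function for a constant lambda."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def state_evolution_lasso(lam, delta, sigma_w, rho,
                          n_mc=200_000, iters=100, seed=0):
    """Iterate eqs. (sigmatau_1)-(sigmatau_2) with
    B ~ (1 - rho)*delta_0 + rho*N(0, 1) and eta(.) = soft(., tau*lam)."""
    rng = np.random.default_rng(seed)
    B = rng.normal(size=n_mc) * (rng.random(n_mc) < rho)  # sparse signal draws
    Z = rng.normal(size=n_mc)
    sigma, tau = 1.0, 1.0
    for _ in range(iters):
        Y = B + sigma * Z
        sigma = np.sqrt(sigma_w ** 2
                        + np.mean((soft(Y, tau * lam) - B) ** 2) / delta)
        # eta'(y) = 1{|y| > tau*lam} for soft thresholding
        tau = 1.0 / (1.0 - np.mean(np.abs(Y) > tau * lam) / delta)
    return sigma, tau

sigma, tau = state_evolution_lasso(lam=1.0, delta=0.64, sigma_w=0.25, rho=0.2)
```

The returned $\sigma$ necessarily lies between $\sigma_w$ and $\sqrt{\sigma_w^{2}+\E[B^{2}]/\delta}$ (the value attained by $\eta\equiv0$), mirroring the compact interval used in the proofs below.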
\subsection{Proof of Proposition \ref{prop:function_space}} Let \begin{align*} \mathcal{F}\bydef & \{\prox(y)\mid\prox(y)=-\prox(-y)\,\text{and } 0 \leq\prox(y_{1})-\prox(y_{2})\leq y_{1}-y_{2},\,\forall y_{1}\geq y_{2}\}, \end{align*} and let us prove that $\mathcal{F}=\mathcal{M}$. From Lemma \ref{lem:regu_Lipschitz}, we can easily show $\mathcal{M}\subset\mathcal{F}$, so it suffices to show $\mathcal{F}\subset\mathcal{M}$. For any $\eta\in\mathcal{F}$, we argue that if we let $\lambda\sim y-\prox(y)$, where $y\sim F_{|Y|}$ with $\E Y^{2}<\infty$, and choose $\left\{ \vlambda^{(p)}\right\} $ to be a converging sequence with limiting CDF $F_{\lambda}$, then $\frac{\|\tproxl(\vy)-\prox(\vy;F_{y},F_{\lambda})\|^{2}}{p}\to0$ and $\E\lambda^{2}<\infty$. To see this, first observe that $y-\prox(y)$ is a non-decreasing and continuous function on $[0,+\infty)$, so we have \begin{align*} F_{\lambda}^{-1}(F_{|Y|}(y)) & =\inf\left\{ x\Big|F_{\lambda}(x)\geq F_{|Y|}(y)\right\} \\ & =y-\prox(y) \end{align*} and $F_{\lambda}^{-1}(F_{|Y|}(y))=F_{\lambda}^{-1}(F_{|Y|}(y^{-}))$. Therefore, in Algorithm \ref{alg:conti}, $\lambda(y)=y-\prox(y)$ and $\diff(y)=\prox(y)$. Then by Lemma \ref{lem:0MDI_asym_sepa}, we obtain that $\frac{\|\tproxl(\vy)-\prox(\vy;F_{y},F_{\lambda})\|^{2}}{p}\to0$. On the other hand, $\E\lambda^{2}<\infty$ follows from the fact that $\E Y^{2}<\infty$ and $\lambda(y)\leq y$, where $y\sim F_{|Y|}$. As a result, we conclude that $\mathcal{F}\subset\mathcal{M}$. Finally, we prove the convexity of the set $\mathcal{M}$. Let $\prox_{1},\,\prox_{2}\in\mathcal{M}$, and for any $\alpha\in[0,1]$ define $\prox_{\alpha}\bydef\alpha\prox_{1}+(1-\alpha)\prox_{2}$. Clearly, for $y_{1}\geq y_{2}$, we have $0\leq\prox_{\alpha}(y_{1})-\prox_{\alpha}(y_{2})\leq y_{1}-y_{2}$ and also $\prox_{\alpha}(y)=-\prox_{\alpha}(-y)$, so $\prox_{\alpha}\in\mathcal{M}$, which establishes the convexity of $\mathcal{M}$.
\subsection{Proof of Proposition \ref{prop:min_MSE}} \label{subsec:min_MSE_proof} Note that equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) involve two variables $\sigma$ and $\tau$, and it is not easy to handle them simultaneously. A simplification we can make is to first set $\tau$ to $1$ and find the minimum $\sigma$ such that equation (\ref{eq:sigmatau_1}) and the inequality $\E_{B, Z}[\prox'(B+\sigma Z;F_y, F_{ \lambda})]\leq\delta$ hold. Once we get $\sigma_{\min}$ and the optimal $\lambda^*$, the corresponding $\tau_{\min}$ can then be obtained via (\ref{eq:sigmatau_2}): $\tau_{\min}=(1-\tfrac{1}{\delta}\E_{B, Z}[{\prox^{*}}'(B+\sigma_{\min} Z;F_y, F_{\lambda}^*)])^{-1}$, and $\lambda^*$ is in turn updated to $\lambda^*/\tau_{\min}$. After this replacement, (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) are both satisfied. It is not difficult to show that this procedure leads to the same $\sigma_{\min}$ as defined in (\ref{eq:sigma_min}). Clearly, $\sigma_{\min}$ must lie in a compact set: $\left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right]$, since from (\ref{eq:sigmatau_1}), we know $\sigma^{2}\geq\sigma_{w}^{2}$ and also $\sigma^{2}=\sigma_{\noisei}^{2}+\frac{1}{\delta}\E[B^{2}]$ when $\prox\equiv0$. Therefore, to find $\sigma_{\min}$, it suffices to solve problem (\ref{eq:opt_esti_prox}) for every candidate $\sigma$ and find the minimum $\sigma$ such that (\ref{eq:sigma_min_equation}) holds. Finally, we show the convexity of problem (\ref{eq:opt_esti_prox}). Note that the objective function can be written as: \begin{equation*} \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}] = \E_Y[\eta(Y)-\E(B|Y)]^2 + \E_Y\text{Var}(B|Y), \end{equation*} where $Y = B+\sigma Z$.
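For completeness, this bias--variance decomposition can be verified via the tower property of conditional expectation:

```latex
\begin{align*}
\E_{B,Z}[(\eta(Y)-B)^{2}]
 & =\E_{Y}\,\E\big[\big(\eta(Y)-\E(B|Y)+\E(B|Y)-B\big)^{2}\,\big|\,Y\big]\\
 & =\E_{Y}\big[\eta(Y)-\E(B|Y)\big]^{2}
   +\E_{Y}\,\E\big[(\E(B|Y)-B)^{2}\,\big|\,Y\big]\\
 & =\E_{Y}\big[\eta(Y)-\E(B|Y)\big]^{2}+\E_{Y}\text{Var}(B|Y),
\end{align*}
```

where the cross term vanishes because $\E\big[\E(B|Y)-B\,\big|\,Y\big]=0$.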
Since $\E_Y[\eta(Y)-\E(B|Y)]^2$ can be viewed as the distance between $\eta(y)$ and $\E(B|Y)$ under the metric induced by the following inner product: \begin{align*} \langle f_1(y), f_2(y) \rangle_{F_Y} \bydef \int f_1(y)f_2(y)dF_Y(y), \end{align*} it is naturally a convex functional w.r.t. $\eta(y)$. On the other hand, it is not hard to show that the feasible set of $\eta(y)$ is convex, by the convexity of $\mathcal{M}$ and the inequality constraint in problem (\ref{eq:opt_esti_prox}). Once we find the optimal $\prox^* = \prox_{\sigma_{\min}}$ and $(\sigma_{\min},\,\tau_{\min})$, from Proposition \ref{prop:function_space} and the argument made above, the corresponding optimal $\lambda^*$ distribution can be represented as: $\lambda\sim \frac{|Y|-\prox^*(|Y|)}{\tau_{\min}}$, with $Y\sim B+\sigma_{\min}Z$. \subsection{Proof of Proposition \ref{prop:max_power}} \label{subsec:max_power_proof} Let $y_{\text{thresh}}=\sup_{y\geq0}\left\{ y \mid \prox(y)=0\right\} $. It follows from Theorem \ref{thm:asymp_char} that, in order to ensure $\P_{\text{type-I}}=\alpha$, we need to have $\frac{y_{\text{thresh}}}{\sigma}=\Phi^{-1}(1-\frac{\alpha}{2})$, where $\Phi(\cdot)$ is the CDF of the standard normal distribution. Similarly, we can compute the power of the test as $\P(|\frac{\sgli}{\sigma}+Z|\geq\Phi^{-1}(1-\frac{\alpha}{2}))$. It can be shown that for any fixed $\sgli$, $\P(|\frac{\sgli}{\sigma}+Z|\geq\Phi^{-1}(1-\frac{\alpha}{2}))$ is a non-increasing function of $\sigma$. Thus, under a given type-I error rate $\alpha$, maximizing the power is equivalent to minimizing $\sigma$, which is the same objective as in the optimal estimation problem of Proposition \ref{prop:min_MSE}. The only difference is that we need to enforce the additional constraint (\ref{eq:typeI_constraint}), which guarantees $\P_{\text{type-I}}=\alpha$. Moreover, it is not hard to show that adding this constraint does not affect the convexity of the feasible set of $\prox(y)$.
The rest of the proof is the same as that of Proposition \ref{prop:min_MSE}. \section{Oracle Optimality of SLOPE\label{sec:Oracle-Optimality}} In this section, we will study the optimal design of the regularization sequence in SLOPE. Using the asymptotic characterization presented in Sec. \ref{sec:Asymptotic-Results}, we will derive the optimal limiting distribution $F_{\lambda}$ to achieve the best estimation or testing performance, given the oracle knowledge of $F_{\sgli}$. \subsection{Estimation with Minimum MSE} \label{sec:mse} We first turn to the problem of finding the optimal $\vlambda$-sequence that minimizes the MSE of the SLOPE estimator. Since we work in the asymptotic regime, this boils down to finding the optimal distribution $F_{\lambda}^{*}$ such that \begin{align*} F_{\lambda}^{*} & =\argmin{F_{\lambda}} \ \lim_{p\to\infty}\frac{1}{p}\|\sol-\sgl\|_{2}^{2}\\ & =\argmin{F_{\lambda}} \ \E_{B, Z}[(\prox(B+\sigma Z;F_y, F_{\tau \lambda})-B)^{2}], \end{align*} where $B\sim F_{\sgli}$ and the second equality follows from Theorem \ref{thm:asymp_char}. From (\ref{eq:sigmatau_1}), this is further equivalent to finding $F_{\lambda}$ to minimize $\sigma$. However, directly searching over $F_{\lambda}$ appears unwieldy, since $\sigma$, as a functional of $F_{\lambda}$, is defined indirectly through a nonlinear fixed point equation. To simplify this problem, we first note that in (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), the influence of $F_{\lambda}$ on the solution $(\sigma,\,\tau)$ is only through the limiting scalar function $\prox$. Therefore, instead of optimizing over $F_{\lambda}$, we can find the optimal $\prox^{*}$ and then calculate the corresponding $F_{\lambda}^{*}$. The next question then becomes finding all possible $\prox$ that can be realized by some $F_{\lambda}$. In fact, for any given converging sequence $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$, we can characterize all possible $\prox(\cdot)$ associated with it.
Let \[ \mathcal{M}\bydef\left\{ \prox(\cdot\,; F_y, F_\lambda) \mid\exists F_{\lambda},\,\E\lambda^{2}<\infty,\,\text{s.t. } (\ref{eq:prox_sep}) \text{ holds}\right\} \] be the functional space that $\prox$ belongs to. We have the following result: \begin{prop} \label{prop:function_space}For any converging sequence $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$, we have \begin{align*} \mathcal{M}= & \{\prox(y)\mid\prox(y)=-\prox(-y)\,\text{and }\\ & \hspace{3em}0\leq\prox(y_{1})-\prox(y_{2})\leq y_{1}-y_{2},\,\forall y_{1}\geq y_{2}\} \end{align*} and $\mathcal{M}$ is a convex set. Moreover, for any $\prox\in \mathcal{M}$, the corresponding distribution of the $\vlambda$-sequence that yields $\prox$ can be represented by: $\lambda \sim |Y|-\prox(|Y|)$, where $Y$ follows the limiting distribution of $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$. \end{prop} \begin{rem} Proposition \ref{prop:function_space} is the key ingredient in our optimal design. It shows that, with different choices of $F_{\lambda}$, we can reach any non-decreasing and odd function that is Lipschitz continuous with constant 1. Clearly, the soft-thresholding functions associated with LASSO belong to $\mathcal{M}$, but the set $\mathcal{M}$ is much richer. This is the essence of how SLOPE generalizes LASSO: it allows for more degrees of freedom in the regularization. \end{rem} Due to Proposition \ref{prop:function_space}, the optimization problem can be simplified to that of finding the optimal $\prox\in\mathcal{M}$ such that $\sigma$ as obtained from (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) is minimized. Specifically, we need to find \begin{equation} \sigma_{\min}\bydef\inf\left\{ \sigma \mid \exists\prox\in\mathcal{M},\,\text{s.t. }\sigma\text{ satisfies }\eqref{eq:sigmatau_1}\text{ and }\eqref{eq:sigmatau_2}\right\}. \label{eq:sigma_min} \end{equation} The following results reveal a way to obtain $\sigma_{\min}$.
\begin{prop} \label{prop:min_MSE} $\sigma_{\min}$ is the minimum solution of the equation: \begin{equation} \mathcal{L}(\sigma)=\delta(\sigma^2 - \sigma_{\noisei}^2),~\sigma\in \left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right], \label{eq:sigma_min_equation} \end{equation} where $\mathcal{L}(\sigma)$ is the optimal value of the following convex optimization problem w.r.t. $\prox(y)$ for a fixed $\sigma$: \begin{align} \min_{\prox\in\mathcal{M}}\hspace{1em} & \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}]\label{eq:opt_esti_prox}\\ \text{s.t.}\hspace{1em} & \E_{B, Z}[\prox'(B+\sigma Z)]\leq\delta.\nonumber \end{align} The corresponding optimal limiting scalar function is $\prox^* = \prox_{\sigma_{\min}}$, and $\lambda^*$ follows the distribution: \begin{equation*} \lambda^* \sim \frac{|Y|-\prox^*(|Y|)}{\tau_{\min}}, \end{equation*} where $Y\sim B+\sigma_{\min}Z$, $\prox_{\sigma_{\min}}$ is the optimal solution of (\ref{eq:opt_esti_prox}) when $\sigma=\sigma_{\min}$, and \begin{equation*} \tau_{\min} = \left(1-\frac{\E {\prox^{*}} '(Y)}{\delta}\right)^{-1}. \end{equation*} \end{prop} The proof of Proposition \ref{prop:min_MSE} is deferred to Appendix \ref{subsec:min_MSE_proof}. In practice, we can discretize over $\R$ to obtain a finite-dimensional approximation to the original infinite-dimensional problem (\ref{eq:opt_esti_prox}). Naturally, this finite approximation problem is also convex. In the following simulations, we use an approximation with 2048 grid points. \begin{comment} There are several points that remain to be justified: (1) show $\sigma_{\min}<\sigma$ when $\min_{\prox\in\mathcal{F}}\,\E[\prox(\xi+\sigma Z;1)-\xi]^{2}\leq\delta(\sigma^{2}-\sigma_{w}^{2})$.
\end{comment} \begin{figure}[t] \begin{centering} \subfloat[$\rho=0.256$\label{subfig:MSE_SNR}]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/MSE_SNR_1} \par\end{centering} }\hphantom{}\subfloat[$\text{SNR}=5$\label{subfig:MSE_sparsity}]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/MSE_sparsity_1} \par\end{centering} } \par\end{centering} \caption{Comparison of MSEs obtained by three regularization sequences: LASSO, BHq and the oracle optimal design, under different SNR and sparsity levels. Here, $p=1024$, $\delta=0.64$. The red curves show the theoretical minimum MSE that can be achieved by using the oracle optimal sequences. \label{fig:Comparison-of-MSEs}} \end{figure} In Fig. \ref{fig:Comparison-of-MSEs}, we compare the MSEs achieved by our optimal design with those obtained by LASSO and the BHq sequences proposed in \cite{su2016slope}, at different SNRs and sparsity levels. For a fair comparison, we optimize the parameters of the BHq and LASSO sequences. It can be seen from the figure that the empirical minimum MSEs match well with the theoretical ones. We observe from Fig. \ref{subfig:MSE_SNR} that, under low SNRs, the BHq sequence leads to very similar performance as the oracle optimal design. However, at higher SNR levels, the optimal design outperforms the BHq sequence, and its performance approaches that of LASSO. To unravel the underlying reason for this, we plot in Fig. \ref{fig:Comparison-of-distributions} the distributions of the $\vlambda$-sequences associated with the optimal design and the BHq design, respectively. It turns out that, in the low SNR case, the optimal design and BHq have similar distributions; at higher SNRs, the distribution of the optimal design is close to a delta-like distribution similar to LASSO. Note that for a small sparsity level $\rho$, LASSO can outperform BHq and achieve performance close to that of the optimal sequence, but it is prone to higher bias when $\rho$ grows. From Fig.
\ref{subfig:MSE_sparsity}, we find that LASSO's performance degrades much faster than the other two as $\rho$ increases. This is because LASSO's penalization is not adaptive to the underlying sparsity level \cite{su2016slope}. \begin{figure} \begin{centering} \subfloat[Optimal]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/optimaldist_lowSNR} \par\end{centering} }\hphantom{}\subfloat[BHq]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/BHqdist_lowSNR} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[Optimal]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/optimaldist_highSNR} \par\end{centering} }\hphantom{}\subfloat[BHq]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/BHqdist_highSNR} \par\end{centering} } \par\end{centering} \caption{Comparison of distributions of two regularization sequences ``BHq'' and ``Optimal'' in Fig. \ref{subfig:MSE_SNR}: (a)-(b): $\text{SNR}=1$, (c)-(d): $\text{SNR}=10$.\label{fig:Comparison-of-distributions} } \end{figure} \subsection{Multiple Testing with Maximum Power} Next we consider using SLOPE directly for variable selection. In other words, the non-zero coordinates of the SLOPE solution $\sol$ are selected. Our goal is to find the optimal regularization sequence to achieve the highest possible power, under a given level of type-I error.
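Under the asymptotic characterization, both error rates reduce to Gaussian tail probabilities of the effective scalar channel $Y=\sgli+\sigma Z$; the short sketch below (ours) evaluates them for a selection rule that thresholds at $y_{\text{thresh}}=\sigma\,\Phi^{-1}(1-\alpha/2)$, calibrated to a target type-I error $\alpha$.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_power(beta, sigma, alpha):
    """Power of selecting a coordinate with true value beta when the
    effective observation is beta + sigma*Z and the rule thresholds at
    sigma * Phi^{-1}(1 - alpha/2).  Using c = Phi^{-1}(1 - alpha/2):
    P(|beta/sigma + Z| >= c) = Phi(beta/sigma - c) + Phi(-beta/sigma - c)."""
    c = norm.ppf(1.0 - alpha / 2.0)
    m = beta / sigma
    return norm.cdf(m - c) + norm.cdf(-m - c)
```

Setting $\beta=0$ returns exactly $\alpha$ (the type-I error), and for fixed $\beta\neq0$ the power is decreasing in $\sigma$, which is why minimizing $\sigma$ maximizes the power.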
Similar to Proposition \ref{prop:min_MSE}, we have the following result: \begin{prop} \label{prop:max_power} For a given type-I error level $\alpha$, the maximum selection power that can be reached by SLOPE is $\P\left(\left|\frac{B}{\sigma_{\min}}+Z\right| \geq \Phi^{-1}(1-{\alpha}/{2})\right)$, where $\sigma_{\min}$ is the minimum solution of the equation: \begin{equation} \mathcal{L}(\sigma)= \delta(\sigma^2 - \sigma_{\noisei}^2),~\sigma\in \left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right]. \label{eq:sigma_min_equation1} \end{equation} Here $\mathcal{L}(\sigma)$ is the optimal value of the following convex optimization problem under a given $\sigma$: \begin{align} \min_{\prox\in\mathcal{M}}\hspace{1em} & \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}]\label{eq:opt_testing_prox}\\ \text{s.t.}\hspace{1em} & \E_{B, Z}[\prox'(B+\sigma Z)]\leq\delta,\nonumber\\ &\prox(y)=0, \ |y|\leq \Phi^{-1}\left(1-\frac{\alpha}{2}\right)\sigma.\label{eq:typeI_constraint} \end{align} The corresponding optimal $\prox^{*}=\prox_{\sigma_{\min}}$ and $\lambda^*$ is represented by: \begin{equation*} \lambda^* \sim \frac{|Y|-\prox^{*}(|Y|)}{\tau_{\min}}, \end{equation*} where the definitions of $Y, \tau_{\min}$ and $\prox_{\sigma_{\min}}$ are the same as in Proposition \ref{prop:min_MSE}. \end{prop} The proof of Proposition \ref{prop:max_power}, which is similar to that of Proposition \ref{prop:min_MSE}, will be given in Appendix \ref{subsec:max_power_proof}. In Fig.~\ref{fig:Optimal-hypothesis-testing}, we compare the FDR curve of the optimal design with that of the BHq sequence. We verify that the optimal design indeed dominates the BHq sequence and that the empirical FDR curve matches well with the theoretical prediction. \begin{comment} The main difference is on the power: there exists an upper bound for the power of LASSO, but for SLOPE the power can almost reach 1.
The phenomenon of limited power for LASSO is also discussed in \cite{bogdan2015slope,su2016slope} and it is inherently connected with the so-called ``noise-sensitivity'' phase transition \cite{donoho2011noise}. Here, it can be seen that by optimizing over the regularization, the ``ceiling'' effect on testing power can be greatly alleviated. \end{comment} \begin{figure} \begin{centering} \includegraphics[clip,scale=1.0]{figs/FDR1} \par\end{centering} \caption{Hypothesis testing using oracle optimal and BHq sequences. Here, $p=1024$, $\protect\sgli_{i}\protect\iid(1-\rho)\delta(0)+\rho\mathcal{N}(\mu_{0},\,\sigma_{0}^{2})$ with $\rho=0.25$, $\mu_{0}=2.125$ and $\sigma_{0}=0$, $w_{i}\protect\iid\mathcal{N}(0,\,\sigma_w^{2})$ with $\sigma_{w}=0.25$. The results are averaged over 100 realizations. \label{fig:Optimal-hypothesis-testing}} \end{figure} \subsection{Proximal Mapping $\protect\tprox_{\protect\vlambda}(\protect\vy)$} Before proving the main asymptotic results, we recall some key properties of $\tprox_{\vlambda}(\vy)$ that will be used later. First, it is easy to check that $[\tprox_{\vlambda}(\vy)]_{i}$ has the same sign as $y_{i}$, $i=1,\ldots,p$, i.e., \begin{equation} \tprox_{\vlambda}(\vy)=\mD_{\sgn(\vy)}\tprox_{\vlambda}(|\vy|)\label{eq:prox_linearity_sgn} \end{equation} where $\mD_{\sgn(\vy)}=\diag\left\{ \sgn(y_{1}),\ldots,\sgn(y_{p})\right\} $, $|\vy|=\left(|y_{1}|,|y_{2}|,\ldots,|y_{p}|\right)^{T}$ and \[ \sgn(y)=\begin{cases} 1 & y>0\\ 0 & y=0\\ -1 & y<0 \end{cases}. \] Also we have: \begin{equation} \Pi\circ\tprox_{\vlambda}(\vy)=\tprox_{\vlambda}(\Pi\circ\vy),\label{eq:prox_linearity_permutation} \end{equation} where $\Pi$ is a permutation matrix.
Therefore, without loss of generality, we can assume $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$; for general $\left\{ y_{i}\right\} _{1\leq i\leq p}$, combining (\ref{eq:prox_linearity_sgn}) and (\ref{eq:prox_linearity_permutation}), we can express $\tprox_{\vlambda}(\vy)$ as: \begin{align*} \tprox_{\vlambda}(\vy) & =\mD_{\sgn(\vy)}\tprox_{\vlambda}(|\vy|)\\ & =\mD_{\sgn(\vy)}\Pi_{\uparrow}^{T}\circ\tprox_{\vlambda}(\acute{|\vy|}) \end{align*} where $\Pi_{\uparrow}$ is a permutation matrix such that the coordinates of $\Pi_{\uparrow}|\vy|$ are arranged in non-decreasing order and $\acute{|\vy|}\bydef {\Pi_{\uparrow}}\circ|\vy|$. As a consequence, we can focus on non-negative, monotonically non-decreasing sequences $\vy$: $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$. For this type of $\vy$, we have the following three properties, which have been proved in \cite{bogdan2015slope}: \begin{lem} \label{lem:prox_previous_property}For $\vy\in\R^{p}$ satisfying $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$, it holds that: \end{lem} \begin{enumerate} \item $0\leq[\tprox_{\vlambda}(\vy)]_{1}\leq[\tprox_{\vlambda}(\vy)]_{2}\leq\cdots\leq[\tprox_{\vlambda}(\vy)]_{p}$, \item If $y_{1}-\lambda_{1}\leq y_{2}-\lambda_{2}\leq\cdots\leq y_{p}-\lambda_{p}$, then $[\tprox_{\vlambda}(\vy)]_{i}=\max(0,\,y_{i}-\lambda_{i})$, \item If $y_{i}-\lambda_{i}\geq y_{i+1}-\lambda_{i+1}\geq\cdots\geq y_{j}-\lambda_{j}$, then $[\tprox_{\vlambda}(\vy)]_{i}=[\tprox_{\vlambda}(\vy)]_{i+1}=\cdots=[\tprox_{\vlambda}(\vy)]_{j}$.
\end{enumerate} From the third property in Lemma \ref{lem:prox_previous_property}, if $\left\{ y_{k}-\lambda_{k}\right\} _{i\leq k\leq j}$ is non-increasing, it is shown in \cite{bogdan2015slope} that $\tprox_{\vlambda}(\vy)$ will remain unchanged if we replace $\lambda_{k}$ by $\lambda_{k}^{+}$ as follows: \begin{eqnarray} \lambda_{k}^{+} & = & \begin{cases} y_{k}-\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1} & k=i,\ldots j\\ \lambda_{k} & \otherwise \end{cases}\label{eq:aver_lambda} \end{eqnarray} Since $y_{i}-\lambda_{i}\geq\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}$, $\lambda_{i}^{+}=y_{i}-\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}\geq\lambda_{i}$. Similarly, $\lambda_{j}^{+}\leq\lambda_{j}$, so $\lambda_{1}^{+}\leq\lambda_{2}^{+}\leq\cdots\leq\lambda_{p}^{+}$, which is still a valid regularization sequence. Also, for $k=i,\ldots j$, $y_{k}-\lambda_{k}^{+}=\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}$. Therefore, the solution of (\ref{eq:slope_prox}) can be obtained by the following procedure (assuming $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$): \begin{enumerate} \item Compute $y_{i}-\lambda_{i}$, $i=1,2,\ldots,p$, \item Find the smallest $i$ such that the sub-sequence from $i$ to $j$ is strictly decreasing: $y_{i}-\lambda_{i}>y_{i+1}-\lambda_{i+1}>\cdots>y_{j}-\lambda_{j}$ and $y_{j}-\lambda_{j}\leq y_{j+1}-\lambda_{j+1}$, \item For $i\leq l\leq j$, calculate the average and replace the original $\vlambda$ as in (\ref{eq:aver_lambda}), \item Repeat steps 2 and 3 until we obtain a sequence $\vlambda^{+}$ such that $\left\{ y_{i}-\lambda_{i}^{+}\right\} _{1\leq i\leq p}$ is non-decreasing. \item Return $\tprox_{\vlambda}(y_{i})=\max(y_{i}-\lambda_{i}^{+},\,0)$, $i=1,2,\ldots,p$. \end{enumerate} The above procedure can be implemented as an efficient stack-based algorithm, as proposed in \cite{zhong2012efficient,bogdan2015slope}. Here, to facilitate the asymptotic analysis later, we will present an equivalent algorithm implemented in a different manner.
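The steps above, combined with the sign/permutation reduction in (\ref{eq:prox_linearity_sgn}) and (\ref{eq:prox_linearity_permutation}), can be sketched in a few lines of Python. This is a stack-based merging pass in the spirit of \cite{zhong2012efficient,bogdan2015slope}; the function name and block bookkeeping are ours, and only the non-decreasing ordering of $\vlambda$ is assumed:

```python
import numpy as np

def slope_prox(y, lam):
    """Prox of the sorted-L1 (SLOPE) penalty; lam sorted non-decreasingly."""
    y = np.asarray(y, float)
    order = np.argsort(np.abs(y))                    # sort |y| non-decreasingly
    g = np.abs(y)[order] - np.asarray(lam, float)    # g_i = |y|_(i) - lambda_i
    # Steps 2-4: repeatedly average decreasing runs. A stack of blocks
    # [start, end, block_sum] keeps the block averages non-decreasing.
    blocks = []
    for i, gi in enumerate(g):
        blocks.append([i, i, gi])
        # merge while the last block's average <= the previous block's average
        while len(blocks) > 1 and (
            blocks[-1][2] * (blocks[-2][1] - blocks[-2][0] + 1)
            <= blocks[-2][2] * (blocks[-1][1] - blocks[-1][0] + 1)
        ):
            _, e, s = blocks.pop()
            blocks[-1][1], blocks[-1][2] = e, blocks[-1][2] + s
    out_sorted = np.empty_like(g)
    for s, e, tot in blocks:
        out_sorted[s:e + 1] = max(tot / (e - s + 1), 0.0)   # step 5: clip at 0
    out = np.empty_like(g)
    out[order] = out_sorted                          # undo the permutation
    return np.sign(y) * out                          # restore signs
```

With all $\lambda_{i}$ equal, the output coincides with entrywise soft-thresholding, e.g. the input $(-3,0.5,2)$ with $\vlambda=(1,1,1)$ yields $(-2,0,1)$.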
Before doing so, we first list a few definitions that will be used later. \begin{defn} \label{def:mono_segment}For a finite sequence $\left\{ a_{i}\right\} _{1\leq i\leq p}$, we call $\left\{ k_{1},k_{1}+1,\ldots,k_{2}\right\} $ a \emph{maximal decreasing segment} (MDS) if $a_{k_{1}}>a_{k_{1}+1}>\cdots>a_{k_{2}}$ and, in addition, either $a_{k_{1}-1}\leq a_{k_{1}}$ or $k_{1}=1$, and either $a_{k_{2}}\geq a_{k_{2}+1}$ or $k_{2}=p$; similarly, $\left\{ k_{1},k_{1}+1,\ldots,k_{2}\right\} $ is a \emph{maximal non-decreasing segment} (MNDS) if $a_{k_{1}}\leq a_{k_{1}+1}\leq\cdots\leq a_{k_{2}}$ and, in addition, either $a_{k_{1}-1}>a_{k_{1}}$ or $k_{1}=1$, and either $a_{k_{2}}<a_{k_{2}+1}$ or $k_{2}=p$. \end{defn} \begin{defn} \label{def:BP_discrete}In a finite sequence $\left\{ a_{i}\right\} _{1\leq i\leq p}$, let $S_{1}=\left\{ 1,\ldots,k_{1}\right\} $, $S_{2}=\left\{ k_{1},\ldots,k_{2}\right\} $ and $S_{3}=\left\{ k_{2},\ldots,k_{3}\right\} $ be three neighboring maximal segments. Suppose $S_{1}$ and $S_{3}$ are MNDS and $S_{2}$ is an MDS. For $j\in S_{3}$, define the index set: \[ \mathcal{I}_{j}=\left\{ i\Big|a_{i-1}>\aver ij,2\leq i\leq k_{1}\right\} , \] where \[ \aver ij\bydef\frac{\sum_{l=i}^{j}a_{l}}{j-i+1}.
\] The corresponding \emph{left balance point} (LBP) of $j$ is defined as: \[ i^{*}(j)=\begin{cases} \min_{i\in\mathcal{I}_{j}}i-1 & \mathcal{I}_{j}\neq\emptyset\\ k_{1} & \mathcal{I}_{j}=\emptyset \end{cases} \] Using this definition, construct another index set: \[ \mathcal{J}=\left\{ j\Big|\aver{i^{*}(j)}j>a_{j+1},k_{2}\leq j\leq k_{3}-1\right\} \] and the \emph{right balance point} (RBP) is defined as: \[ j^{*}=\begin{cases} \max_{j\in\mathcal{J}}j+1 & \mathcal{J}\neq\emptyset\\ k_{2} & \mathcal{J}=\emptyset \end{cases} \] \end{defn} \begin{algorithm} \begin{algorithmic}[1] \Require{ $\{y_i\}_{1\leq i \leq p}$, $\{\lambda_i\}_{1\leq i \leq p}$ }, $0\leq\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_p$ \Ensure{ $\tprox_{\vlambda}(\vy)$ } \State{Calculate $g_{0,i}=|y|_{(i)}-\lambda_i$, $i=1,2,\ldots,p$} \State{Set $t=0$.} \While{$\exists$ MDS of $\{g_{t,i}\}_{1\leq i \leq p}$} \State{Find the first MDS $S_2$: $\{{k_1}, {k_1+1},\ldots,{k_2}\}$ } \If{$k_2=p$} \State{$k_R = p$} \State{$k_L = i^*(k_2)$} \Else \State{Find $S_3 = \{{k_2}, {k_2+1},\ldots,{k_3}\}$, the right neighbouring MNDS of $S_2$ and $S_1 = \{{1}, {2},\ldots,{k_1}\}$, the left neighbouring MNDS of $S_2$.} \State{Find RBP $j^*$ and corresponding LBP $i^*=i^*(j^*)$.} \State{$k_L \leftarrow i^*$, $k_R \leftarrow j^*$} \EndIf \State{For $i\in [k_L, k_R]$, replace original $g_{t,i}$ by $\mathtt{aver}(g_{t,k_L},g_{t,k_R})$ to obtain the new $\{g_{t+1,i}\}_{1\leq i\leq p}$.} \State{$t=t+1$} \EndWhile \State{$\{g_i\}_{1\leq i \leq p} \leftarrow \{g_{t,i}\}_{1\leq i \leq p}$} \State{$[\tprox_{\vlambda}(\vy)]_i = \sgn(y_i)\max\{0,g_{t,j(i)}\}$} \end{algorithmic} \caption{Recursive procedure for constructing $\protect\tproxl(\protect\vy)$\label{alg:discrete}} \end{algorithm} The reconstruction procedure for obtaining $\tprox_{\vlambda}(\vy)$ is given in Algorithm \ref{alg:discrete}. 
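To make Definition \ref{def:BP_discrete} concrete, the following brute-force sketch computes $i^{*}(j)$ and $j^{*}$ directly from the definitions on a toy sequence (the sequence and the helper names are assumed for illustration only; Algorithm \ref{alg:discrete} computes the same quantities more efficiently):

```python
from statistics import mean

def aver(a, i, j):
    """Average of a_i..a_j (1-indexed, inclusive)."""
    return mean(a[i - 1:j])

def lbp(a, k1, j):
    """Left balance point i*(j): (min of I_j) - 1, or k1 if I_j is empty."""
    idx = [i for i in range(2, k1 + 1) if a[i - 2] > aver(a, i, j)]
    return min(idx) - 1 if idx else k1

def rbp(a, k1, k2, k3):
    """Right balance point j*; note a[j] is a_{j+1} in 1-indexed terms."""
    J = [j for j in range(k2, k3) if aver(a, lbp(a, k1, j), j) > a[j]]
    return max(J) + 1 if J else k2

# toy sequence with S1 = {1,2,3} (MNDS), S2 = {3,4,5} (MDS), S3 = {5,6,7} (MNDS)
a = [0.0, 1.0, 3.0, 1.5, 0.5, 2.0, 4.0]
k1, k2, k3 = 3, 5, 7
j_star = rbp(a, k1, k2, k3)
i_star = lbp(a, k1, j_star)
# replacing the block [i*, j*] by its average removes the decreasing run
avg = aver(a, i_star, j_star)
b = a[:i_star - 1] + [avg] * (j_star - i_star + 1) + a[j_star:]
assert b == sorted(b)  # the resulting sequence is non-decreasing
```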
In the next section, we will study the limit of this algorithm when the inputs are converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ satisfying (A.1)-(A.4). We will show that this limit is exactly $\prox(y;F_{y},F_{\lambda})$ in Proposition \ref{prop:prox}. To begin with, we first study some properties of Algorithm \ref{alg:discrete}. \begin{lem} \label{lem:discrete_LBP_monotone}Consider the same setting as in Definition \ref{def:BP_discrete} and Algorithm \ref{alg:discrete}. For each fixed $j\in S_{3}$, $\aver ij$ as a function of $i$ satisfies the following: (1) $\aver ij<\aver{i-1}j$ for each $i^{*}(j)<i\leq k_{1}$, (2) $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$, (3) $\aver ij\geq\aver{i-1}j$ for each $2\leq i\leq i^{*}(j)$. \end{lem} \begin{IEEEproof} (1) Since $a_{i-1}>\aver ij$ for all $i^{*}(j)<i\leq k_{1}$, we have $\aver ij<\aver{i-1}j$, (2) For $2\leq i\leq i^{*}(j)$, $\aver{i^{*}(j)}j\geq a_{i-1}$ and $a_{k}\geq a_{i-1}$, $\forall i\leq k\leq i^{*}(j)-1$, so $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$, (3) Since $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$ by part (2), clearly $\aver ij\geq\aver{i-1}j$. \end{IEEEproof} \begin{lem} \label{lem:discrete_RBP_monotone}Consider the same setting as in Definition \ref{def:BP_discrete} and Algorithm \ref{alg:discrete}. $\aver{i^{*}(j)}j$ as a function of $j$ satisfies the following: (1) $\aver{i^{*}(j)}j>\aver{i^{*}(j+1)}{j+1}$, $\forall k_{2}\leq j<j^{*}$, (2) $\aver{i^{*}(j)}j\leq a_{j+1}$, $\forall j^{*}\leq j\leq p-1$, (3) $\aver{i^{*}(j)}j\leq\aver{i^{*}(j+1)}{j+1}$, $\forall j^{*}\leq j\leq p-1$. \end{lem} \begin{IEEEproof} (1) By definition, $\aver{i^{*}(j)}j>a_{j+1}$, $\forall k_{2}\leq j<j^{*}$, so $\aver{i^{*}(j)}{j+1}<\aver{i^{*}(j)}j$. Therefore, from Lemma \ref{lem:discrete_LBP_monotone}, we know $i^{*}(j+1)\leq i^{*}(j)$.
Since $\forall k\leq i^{*}(j)$, $a_{k}\leq a_{i^{*}(j)}$, $\aver{i^{*}(j+1)}{j+1}\leq\aver{i^{*}(j)}{j+1}<\aver{i^{*}(j)}j$. (2) By definition, $\aver{i^{*}(j^{*})}{j^{*}}\leq a_{j^{*}+1}$. Suppose $\exists j$, $j^{*}<j\leq p-1$, $\aver{i^{*}(j)}j>a_{j+1}$, then $\aver{i^{*}(j)}{j^{*}}>a_{j^{*}+1}\geq\aver{i^{*}(j^{*})}{j^{*}}$, which contradicts Lemma \ref{lem:discrete_LBP_monotone}. (3) From part (2), $\aver{i^{*}(j)}j\leq a_{j+1}$, $\forall j^{*}\leq j\leq p-1$, so $\aver{i^{*}(j)}j\leq\aver{i^{*}(j)}{j+1}$. From Lemma \ref{lem:discrete_LBP_monotone}, $\aver{i^{*}(j)}{j+1}\leq\aver{i^{*}(j+1)}{j+1}$, so $\aver{i^{*}(j)}j\leq\aver{i^{*}(j+1)}{j+1}$, $\forall j^{*}\leq j\leq p-1$. \end{IEEEproof} The next lemma is a direct implication of Lemma \ref{lem:discrete_LBP_monotone} and \ref{lem:discrete_RBP_monotone}. \begin{lem} \label{lem:discrete_LRBP_monotone}$\forall j=1,2,\ldots,p$, if $i_{1}\geq1$, $i_{2}>1$ satisfy: $i_{1}=1$ or $\aver{i_{1}}j\geq a_{i_{1}-1}$ and $\aver{i_{2}}j<a_{i_{2}-1}$, then $i_{1}\leq i^{*}(j)<i_{2}$. On the other hand, $\forall j_{1},j_{2}\leq p-1$, with $j_{2}\geq j_{1}+1$, if $\aver{i^{*}(j_{2})}{j_{2}}\leq a_{j_{2}+1}$ and $\aver{i^{*}(j_{1})}{j_{1}}>a_{j_{1}+1}$, then $j_{1}<j^{*}\leq j_{2}$. \end{lem} \subsection{\label{subsec:appendix_prox}Proof of Proposition \ref{prop:prox}} \subsubsection{Limiting Form of Algorithm \ref{alg:discrete}} As mentioned before, our strategy is to derive the asymptotics of $\tprox_{\vlambda^{(p)}}(\vy^{(p)})$ via the limit of Algorithm \ref{alg:discrete} as $p\to\infty$. The construction of this limiting form is already given in Algorithm \ref{alg:conti} in Sec. \ref{sec:Asymptotic-Results} and here we will give detailed descriptions of the whole procedure. We first extend several notions defined in Definition \ref{def:mono_segment} and \ref{def:BP_discrete} to their limiting forms. 
The following notation will be used later in our statements: \begin{enumerate} \item $(x)_{+}\bydef\max\left\{ 0,x\right\} $. \item $\diff(\bdp_{0}^{+})=\lim_{y\to\bdp_{0}^{+}}\diff(y)$ and $\diff(\bdp_{0}^{-})=\lim_{y\to\bdp_{0}^{-}}\diff(y)$, where $\diff(y)$ is a function defined on $\R$. \item $\text{Int}(\cdot)$ and $\text{Bd}(\cdot)$ denote the interior and boundary of a set in $\R$, respectively. \end{enumerate} We first extend Definition \ref{def:mono_segment} and \ref{def:BP_discrete} to their continuous counterparts. \begin{defn} \label{def:mono_interval}For a function $\diff(y)$ on $y\geq0$, we call $[\bdp_{L},\bdp_{R}]$ a \emph{maximal decreasing interval} (MDI) if $\diff(y)$ is strictly decreasing on $[\bdp_{L},\bdp_{R}]$, while, for some $\varepsilon>0$, it is non-decreasing on $(\bdp_{L}-\veps,\bdp_{L})$ if $\bdp_{L}>0$, and on $(\bdp_{R},\bdp_{R}+\varepsilon)$ if $\bdp_{R}<\infty$. One special case is when $\diff(y)$ is discontinuous at $\bdp_{0}$ and $\diff(\bdp_{0}^{+})<\diff(\bdp_{0}^{-})$; we then call $[\bdp_{0},\bdp_{0}]$, which is a single point of discontinuity, an MDI as well. On the other hand, $(\bdp_{L},\bdp_{R})$ is called a \emph{maximal non-decreasing interval} (MNDI) if $\diff(x)$ is non-decreasing on $(\bdp_{L},\bdp_{R})$ and strictly decreasing on $[\bdp_{L}-\veps,\bdp_{L}]$ and $[\bdp_{R},\bdp_{R}+\varepsilon]$ for some $\varepsilon>0$, when $\bdp_{L}>0$ and $\bdp_{R}<\infty$. \begin{comment} On the other hand, $[\bdp_{L},\bdp_{R}]$ (or $[\bdp_{L},\bdp_{R})$) is called a \emph{maximal non-decreasing interval }(MNDI) if (1) $\bdp_{L}=\bdp_{R}=0$ and $0\in\text{MDI}$ or (2) $\diff(x)$ is non-decreasing on $(\bdp_{L},\bdp_{R}]$ (or $(\bdp_{L},\bdp_{R})$) and not non-decreasing on $[[-\varepsilon+\bdp_{L}]^{+},\,\bdp_{L}]$ and $(\bdp_{R},\,\bdp_{R}+\varepsilon)$ or ($[[-\varepsilon+\bdp_{L}]^{+},\,\bdp_{L}]$ and $[\bdp_{R},\,\bdp_{R}+\varepsilon)$) for any $\varepsilon>0$.
\end{comment} \end{defn} \begin{defn} \label{def:BP_cont} \begin{comment} Consider a function $\diff(y)$ on $y\geq0$. Suppose $I_{1}=[0,\bdp_{1}]$ (or $I_{1}=[0,\bdp_{1})$), $I_{3}=[\bdp_{2},\bdp_{3}]$ (or $I_{3}=[\bdp_{2},\bdp_{3})$) are two consecutive MNDIs and $I_{2}=[\bdp_{1},\bdp_{2}]$ is the sandwiched MDI. Here $\bdp_{3}$ can be $+\infty$. Let $Y$ be a non-negative random variable. For $w\in I_{3}$, construct the following set: \end{comment} Consider a function $\diff(y)$ on $y\geq0$. Suppose $I_{1}=(0,\bdp_{1})$, $I_{3}=(\bdp_{2},\bdp_{3})$ are two consecutive MNDIs and $I_{2}=[\bdp_{1},\bdp_{2}]$ is the sandwiched MDI. Let $Y$ be a non-negative random variable. For $w\in I_{3}$, construct the following set: \[ \mathcal{V}_{R,\veps}(w)=\left\{ v|v\in I_{1},\conde(v,w;\diff(y))<\diff(v^{-})-\veps\right\} \] where \begin{equation} \conde(v,w;\diff(y))\bydef\E_{Y}(\diff(y)|y\in[v,w]\cap(I_{T}\cup\left\{ 0\right\} ))\label{eq:conde} \end{equation} with \begin{equation} I_{T}\bydef I_{1}\cup I_{2}\cup I_{3}.\label{eq:IT} \end{equation} Define $v_{R,\veps}^{*}(w)$ as: \[ v_{R,\veps}^{*}(w)=\begin{cases} \inf_{v\in\mathcal{V}_{R,\veps}(w)}v & \mathcal{V}_{R,\veps}(w)\neq\emptyset\\ \bdp_{1} & \mathcal{V}_{R,\veps}(w)=\emptyset \end{cases} \] Then the left balance point of $w$ is defined as: $v^{*}(w)\bydef v_{R,0}^{*}(w)$. Similarly, we can define the right balance point for $\diff(y)$, when $y\in I_{T}$.
First, construct the following set: \[ \mathcal{W}_{L,\veps}=\left\{ w|w\in I_{3},\conde(v^{*}(w),w)>\diff(w^{+})+\veps\right\} \] Correspondingly, define the point: \begin{equation} w_{L,\veps}^{*}=\begin{cases} \sup_{w\in\mathcal{W}_{L,\veps}}w & \mathcal{W}_{L,\veps}\neq\emptyset\\ \bdp_{2} & \mathcal{W}_{L,\veps}=\emptyset \end{cases}\label{eq:w_L_eps_star} \end{equation} The right balance point of $\diff(y)$ over $I_{T}$ is defined as: $w^{*}\bydef w_{L,0}^{*}$ and we denote $v^{*}\bydef v^{*}(w^{*}).$ \end{defn} The conditional expectation function defined in (\ref{eq:conde}) is a crucial quantity in constructing the limiting form of $\tproxl$. It can be viewed as the continuous counterpart of $\aver ij$. In Lemma \ref{lem:cont_LBP_monotone} and \ref{lem:cont_RBP_monotone} below, we summarize the properties of $\conde(v,w;\diff(y))$, which parallel those in Lemma \ref{lem:discrete_LBP_monotone} and \ref{lem:discrete_RBP_monotone}. \begin{lem} \label{lem:condE_continuity}Consider the same setting as in Definition \ref{def:BP_cont}. If $\diff(y)$ is continuous over $I_{1}$ and $I_{3}$, then $\conde(v,w;\diff)$ is right-continuous w.r.t. $w$ and left-continuous w.r.t. $v$; $\conde(v^{*}(w),w;\diff)$ as a function of $w$ is right-continuous. \end{lem} \begin{IEEEproof} This result can be directly proved from the continuity of $\diff(y)$ and the fact that $F_{|y|}$ is right-continuous. \end{IEEEproof} \begin{lem} \label{lem:cont_LBP_monotone}Consider the same setting as in Definition \ref{def:BP_cont}. Suppose $\diff(y)$ is continuous in $I_{1}$ and $I_{3}$. For a fixed $w\in I_{3}$, if $v^{*}(w)\in I_{1}$, $\conde(v,w;\diff)$ as a function of $v\in I_{1}$ satisfies the following: (1) non-decreasing as $v$ decreases on $(v^{*}(w),\bdp_{1})$, (2) $\conde(v^{*}(w),w;\diff)=\diff(v^{*}(w))$, (3) $\forall0<v\leq v^{*}(w)$, $\conde(v,w;\diff)\geq\diff(v)$, (4) non-increasing as $v$ decreases on $(0,v^{*}(w))$.
\end{lem} \begin{IEEEproof} (1) $\forall v>v^{*}(w)$, $\diff(v^{-})>\conde(v,w;\diff)$. Since $\diff(y)$ is continuous on $I_{1}$, for a small enough $\delta$, $\diff(y)>\conde(v,w;\diff)$ on $(v-\delta,v)$. Therefore, $\conde(v,w;\diff)$ increases as $v$ decreases on $(v-\delta,v]$. This holds $\forall v>v^{*}(w)$, so $\conde(v,w;\diff)$ is non-decreasing as $v$ decreases on $(v^{*}(w),\bdp_{1})$. (2) Since $\diff(v^{-})>\conde(v,w;\diff(y))$, $\forall v\in(v^{*}(w),\bdp_{1})$, we know that $\diff(v^{*}(w)^{-})\geq\conde(v^{*}(w),w;\diff)$. Suppose $\diff(v^{*}(w)^{-})>\conde(v^{*}(w),w;\diff)$; then by the continuity of $\diff(y)$, $\exists v_{0}<v^{*}(w)$, s.t. $\diff(v_{0})>\conde(v_{0},w;\diff)$, which contradicts the definition of $v^{*}(w)$. Therefore, we must have $\diff(v^{*}(w)^{-})=\conde(v^{*}(w),w;\diff)$. By continuity of $\diff(y)$ on $(0,\bdp_{1})$, $\diff(v^{*}(w))=\diff(v^{*}(w)^{-})=\conde(v^{*}(w),w;\diff)$. (3) Since $\diff(v^{*}(w))=\conde(v^{*}(w),w;\diff)$ and $\diff(v^{*}(w))\geq\diff(v)$ $\forall v\leq v^{*}(w)$, we must have $\conde(v,w;\diff)\geq\diff(v)$. (4) $\forall v_{1},v_{2}\in(0,v^{*}(w))$ and $v_{1}<v_{2}$, from part (3) we know $\conde(v_{2},w;\diff)\geq\diff(v_{2})$. Besides, $\diff(v_{2})\geq\diff(y)$, $\forall y\in[v_{1},v_{2}]$, so $\conde(v_{2},w;\diff)\geq\conde(v_{1},w;\diff)$. \end{IEEEproof} \begin{lem} \label{lem:cont_RBP_monotone}Consider the same setting as in Definition \ref{def:BP_cont}. Suppose $\diff(y)$ is continuous in $I_{1}$ and $I_{3}$. If $v^{*}\in(0,\bdp_{1})$ and $w^{*}\in(\bdp_{2},\bdp_{3})$, then $\conde(v^{*}(w),w;\diff)$ as a function of $w$ satisfies the following: (1) non-increasing on $(\bdp_{2},w^{*})$, (2) $\conde(v^{*},w^{*};\diff)=\diff(w^{*})$, (3) $\forall w\in(w^{*},\bdp_{3})$, $\conde(v^{*}(w),w;\diff)\leq\diff(w)$, (4) non-decreasing on $(w^{*},\bdp_{3})$. \end{lem} \begin{IEEEproof} (1) $\forall w<w^{*}$, $\conde(v^{*}(w),w;\diff)>\diff(w^{+})=\diff(w)$.
By continuity of $\diff(y)$ on $I_{3}$, for small enough $\delta>0$, $\diff(y)\leq\conde(v^{*}(w),w;\diff)$ when $y\in(w,w+\delta)$, so $\conde(v^{*}(w),w;\diff)\geq\conde(v^{*}(w),w+\delta;\diff)$. Therefore, $\conde(v^{*}(w),w+\delta;\diff)\leq\diff(v^{*}(w))$, which indicates $v^{*}(w+\delta)\leq v^{*}(w)$ and hence $\diff(v^{*}(w+\delta))\leq\diff(v^{*}(w))$. Since $\conde(v^{*}(w+\delta),w+\delta;\diff)=\diff(v^{*}(w+\delta))$ by Lemma \ref{lem:cont_LBP_monotone}, we know $\conde(v^{*}(w+\delta),w+\delta;\diff)\leq\diff(v^{*}(w))$. On the other hand, $\diff(v^{*}(w))=\conde(v^{*}(w),w;\diff)$, so we obtain that $\conde(v^{*}(w+\delta),w+\delta;\diff)\leq\conde(v^{*}(w),w;\diff)$. (2) Since $\conde(v^{*}(w),w;\diff)>\diff(w)$, $\forall w\in(\bdp_{2},w^{*})$, we have $\conde(v^{*},w^{*};\diff)\geq\diff(w^{*})$. If $\conde(v^{*},w^{*};\diff)>\diff(w^{*})$, by continuity of $\conde(v^{*}(w),w;\diff)$ from Lemma \ref{lem:condE_continuity} and of $\diff(y)$, $\exists w_{1}>w^{*}$ s.t. $\conde(v^{*}(w_{1}),w_{1};\diff)>\diff(w_{1})$, which contradicts the definition of $w^{*}$. Therefore, $\conde(v^{*},w^{*};\diff)=\diff(w^{*})$. (3) Suppose $\exists w>w^{*}$ such that $\conde(v^{*}(w),w;\diff)>\diff(w)$. Then $\conde(v^{*}(w),w^{*};\diff)>\diff(w^{*})=\conde(v^{*}(w^{*}),w^{*};\diff)$, which contradicts Lemma \ref{lem:cont_LBP_monotone}. Therefore, $\conde(v^{*}(w),w;\diff)\leq\diff(w)$. (4) $\forall w_{1},w_{2}\in(w^{*},\bdp_{3})$, $w_{1}<w_{2}$, we already know from part (3) that $\conde(v^{*}(w_{1}),w_{1};\diff)\leq\diff(w_{1})$. Besides, $\diff(w)\geq\diff(w_{1})$, for $w\in[w_{1},w_{2}]$, so $\conde(v^{*}(w_{1}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{1};\diff)$. According to Lemma \ref{lem:cont_LBP_monotone}, we have $\conde(v^{*}(w_{2}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{2};\diff)$, so we get $\conde(v^{*}(w_{2}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{1};\diff)$.
\end{IEEEproof} From Lemma \ref{lem:cont_LBP_monotone} and \ref{lem:cont_RBP_monotone}, we can directly verify that $v^{*}$ and $w^{*}$ in Definition \ref{def:BP_cont} can be characterized as the solution of the following min-max optimization problem: \[ \min_{w\in I_{3}}\max_{v\in I_{1}}\conde(v,w;\diff). \] Using $v^{*}$ and $w^{*}$ in Definition \ref{def:mono_interval} and \ref{def:BP_cont}, we obtain Algorithm \ref{alg:conti}. Next we are going to prove that this limiting form is exactly $\prox(y;F_{y},F_{\lambda})$ in Proposition \ref{prop:prox}. Before doing so, we need some additional properties of the functions used in Algorithm \ref{alg:conti}. In Algorithm \ref{alg:conti}, $\diff_{t}(y;F_{y},F_{\lambda})$ plays a central role. We first study some of its properties, which will be used in the perturbation analysis in Sec. \ref{subsec:asym_sepa_RCS}. Here, $t$ is the index of the $\mathtt{WHILE}$ $\mathtt{LOOP}$, and $F_{y}$, $F_{\lambda}$ indicate the dependence of $\diff_{t}(y;F_{y},F_{\lambda})$ on the CDFs of $Y$ and $\lambda$. In the statement of the following results, for notational simplicity, we will sometimes drop $t$, $F_{y}$, $F_{\lambda}$ when there is no confusion. \begin{lem} \label{lem:G0_continuity}$\diff_{0}(y)$ defined in Algorithm \ref{alg:conti} satisfies the following: (1) $\diff_{0}(y)$ is continuous in the interior of any MNDI. (2) If $\diff_{0}(y)$ is discontinuous at $y_{0}$, then $\diff_{0}(y_{0}^{-})>\diff_{0}(y_{0}^{+})$. \end{lem} \begin{IEEEproof} (1) By definition of $\lambda(y)$ in Algorithm \ref{alg:conti}, we can easily show that $\lambda(y_{0}^{-})$ always exists for $y_{0}>0$ and $\lambda(y_{0}^{+})$ always exists $\forall y_{0}\geq0$. If $y_{0}\in\text{Int}(\text{MNDI})$, then $\lambda(y_{0}^{-})\geq\lambda(y_{0})\geq\lambda(y_{0}^{+})$; otherwise, $\diff_{0}(y)$ would be strictly decreasing in a sufficiently small neighborhood of $y_{0}$.
On the other hand, $\lambda(y)$ is monotonically non-decreasing, so $\lambda(y_{0}^{-})\leq\lambda(y_{0})\leq\lambda(y_{0}^{+})$. As a result, we have $\lambda(y_{0}^{-})=\lambda(y_{0})=\lambda(y_{0}^{+})$. Since $\diff_{0}(y)=y-\lambda(y)$, $\diff_{0}(y)$ is continuous in the interior of any MNDI. (2) From part (1), we know $\diff_{0}(y)$ can only be discontinuous in an MDI, so if $y=y_{0}$ is a discontinuity point, $y_{0}\in\text{MDI}$. Therefore, we have $\diff_{0}(y_{0}^{-})\geq\diff_{0}(y_{0})\geq\diff_{0}(y_{0}^{+})$; since $\diff_{0}(y)$ is discontinuous at $y=y_{0}$, we must have $\diff_{0}(y_{0}^{-})>\diff_{0}(y_{0})$ or $\diff_{0}(y_{0})>\diff_{0}(y_{0}^{+})$. \end{IEEEproof} \begin{lem} \label{lem:v_star_continuity}Consider the same setting as in Definition \ref{def:BP_cont}. If $\bdp_{3}>\bdp_{2}$, then $w^{*}>\bdp_{2}$; if $\bdp_{1}>0$, then $v^{*}<\bdp_{1}$. Furthermore, if $v^{*}>0$, then $\conde(v^{*},w^{*};\diff_{0})=\diff_{0}(v^{*})$, where $\diff_{0}$ is given in Algorithm \ref{alg:conti}. \end{lem} \begin{IEEEproof} First, $\diff_{0}(\bdp_{1}^{-})>\conde(\bdp_{1},\bdp_{2};\diff_{0})>\diff_{0}(\bdp_{2}^{+})$, so for small enough $\delta>0$, $\conde(\bdp_{1},\bdp_{2}+\delta;\diff_{0})<\diff_{0}(\bdp_{1}^{-})$, which means $v^{*}(\bdp_{2}+\delta)<\bdp_{1}$. Besides, $\conde(v^{*}(\bdp_{2}+\delta),\bdp_{2}+\delta;\diff_{0})>\diff_{0}(\bdp_{2}+\delta)$, so $w^{*}>\bdp_{2}$. If $w^{*}<\bdp_{3}$, from Lemma \ref{lem:cont_RBP_monotone}, we know $\conde(v^{*},w^{*};\diff_{0})\leq\conde(v^{*}(\bdp_{2}+\delta),\bdp_{2}+\delta;\diff_{0})$, so $\diff_{0}(v^{*})\leq\diff_{0}(v^{*}(\bdp_{2}+\delta))$, which indicates that $v^{*}\leq v^{*}(\bdp_{2}+\delta)<\bdp_{1}$; if $w^{*}=\bdp_{3}$, since $\forall w\in(\bdp_{2}+\delta,\bdp_{3})$, $\conde(v^{*}(w),w;\diff_{0})>\diff_{0}(w)$ and $v^{*}(w)\leq v^{*}(\bdp_{2}+\delta)$, we also have $v^{*}\leq v^{*}(\bdp_{2}+\delta)<\bdp_{1}$.
Therefore, if $v^{*}>0$, we have $v^{*}\in\text{Int}(I_{1})$; from Lemma \ref{lem:cont_LBP_monotone}, we can get $\diff_{0}(v^{*})=\conde(v^{*},w^{*};\diff_{0})$. \end{IEEEproof} \begin{lem} \label{lem:Gt_continuity}In each $\mathtt{WHILE}$ $\mathtt{LOOP}$ of Algorithm \ref{alg:conti}, we have, for $t\geq1$: (1) $\diff_{t}(y)$ is continuous at $y=0$ and in the interior of any MNDI; (2) if $\diff_{t}(y)$ is discontinuous at $y_{0}$, then $\diff_{t}(y_{0}^{-})>\diff_{t}(y_{0}^{+})$. \end{lem} \begin{IEEEproof} \begin{comment} We first show that $\diff_{0}(y;F_{y},F_{\lambda})$, $y\geq0$ is continuous in the interior of any MNDI. For any $y_{0}\geq0$ and $\delta>0$, by definition of $\lambda(y)$ in Algorithm \ref{alg:conti}, we have $F_{\lambda}^{-1}(F_{|y|}(y_{0}-\delta))\leq\lambda(y_{0})\leq F_{\lambda}^{-1}(F_{|y|}(y_{0}+\delta))$. Suppose $y_{0}$ is a discontinuous point, then we must have $F_{\lambda}^{-1}(F_{|y|}(y_{0}-\delta))<\lambda(y_{0})$ or $\lambda(y_{0})<F_{\lambda}^{-1}(F_{|y|}(y_{0}+\delta))$, $\forall\delta>0$, so we have $\lim_{\delta\to0}\diff_{0}(y_{0}-\delta;F_{y},F_{\lambda})>\diff_{0}(y_{0};F_{y},F_{\lambda})$ or $\diff_{0}(y_{0};F_{y},F_{\lambda})>\lim_{\delta\to0}\diff_{0}(y_{0}+\delta;F_{y},F_{\lambda})$, which contradicts the fact that $y_{0}$ is inside the interior of MNDI. \end{comment} Consider the $t=1$ case. We first show $\diff_{t}(y)$ is continuous at $y=v^{*}$. If $\bdp_{1}=0$, then $v^{*}=0$ and $\diff_{1}(0)=\diff_{1}(0^{+})=\conde(0,w^{*};\diff_{0})$; if $\bdp_{1}>0$, from Lemma \ref{lem:v_star_continuity}, $v^{*}<\bdp_{1}$, so when $v^{*}>0$, $\diff_{1}((v^{*})^{-})=\diff_{0}(v^{*})=\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(v^{*})$; when $v^{*}=0$, $\diff_{1}(0)=\diff_{1}(0^{+})=\conde(0,w^{*};\diff_{0})$.
On the other hand, from Lemma \ref{lem:cont_RBP_monotone} and \ref{lem:v_star_continuity}, we know if $w^{*}\in\text{int}(I_{3})$, $\diff_{1}((w^{*})^{+})=\diff_{0}((w^{*})^{+})=\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(w^{*})$, which means $\diff_{1}(y)$ is continuous at $w^{*}$. Next consider the behavior of $\diff_{1}(y)$ when $w^{*}=\bdp_{3}$. If $w^{*}=\bdp_{3}$ and $\bdp_{3}\notin I_{3}$, then $\diff_{1}(\bdp_{3}^{-})=\conde(v^{*}(\bdp_{3}),\bdp_{3};\diff_{0})\geq\diff_{0}(\bdp_{3}^{-})$ and $\diff_{0}(\bdp_{3}^{-})>\diff_{0}(\bdp_{3})=\diff_{1}(\bdp_{3})$, so $\diff_{1}(v^{*})=\diff_{1}(\bdp_{3}^{-})>\diff_{1}(\bdp_{3})$; if $w^{*}=\bdp_{3}$ and $\bdp_{3}\in I_{3}$, similarly, we can show that $\diff_{1}(v^{*})=\diff_{1}(\bdp_{3})\geq\diff_{0}(\bdp_{3}^{+})=\diff_{1}(\bdp_{3}^{+}).$ Therefore, $\diff_{1}(y)$ is continuous on $[0,\bdp_{3})$, which is the interior of the first MNDI of $\diff_{1}(y)$. Since in Algorithm \ref{alg:conti}, $\diff_{1}(y)=\diff_{0}(y)$ for $y>\bdp_{3}$, from Lemma \ref{lem:G0_continuity}, we get $\diff_{1}(y)$ is continuous in the interior of any MNDI. Besides, from the above derivations, $\diff_{1}(\bdp_{3}^{-})>\diff_{1}(\bdp_{3}^{+})$, if $\diff_{1}(y)$ is discontinuous at $\bdp_{3}$. Therefore, if $\diff_{1}(y)$ is discontinuous at $y_{0}$, then $\diff_{1}(y_{0}^{-})>\diff_{1}(y_{0}^{+})$. By induction, we know this holds for any $t\geq1$. \begin{comment} \begin{IEEEproof} Consider $t=0$ case. First we show that $\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(v^{*};F_{y},F_{\lambda})$. If $v^{*}(w)\in\text{Bd}(I_{1})$, then immediately $\conde(v^{*}(w),w;\diff_{0})=\diff_{1}(v^{*}(w);F_{y},F_{\lambda})$; on the other hand, if $v^{*}(w)\in\text{Int}(I_{1})$, then $\conde(v^{*}(w),w;\diff_{0})\leq\diff_{0}(v^{*}(w);F_{y},F_{\lambda})$ by Definition \ref{def:BP_cont}. 
On the other hand, if $\conde(v^{*}(w),w;\diff_{0})<\diff_{0}(v^{*}(w);F_{y},F_{\lambda})$, then by continuity of $\diff_{0}(v;F_{y},F_{\lambda})$, $\exists v_{L}<v^{*}(w)$ s.t. $\conde(v_{L},w;\diff_{0})<\diff_{0}(v_{L};F_{y},F_{\lambda})$, which contradicts the definition of $v^{*}(w)$. Therefore, $\diff_{0}(v^{*}(w);F_{y},F_{\lambda})=\conde(v^{*}(w),w;\diff_{0})=\diff_{1}(v^{*}(w);F_{y},F_{\lambda})$. Similarly, we can show $\conde(v^{*}(w^{*}),w^{*};\diff_{0})=\diff_{1}(w^{*};F_{y},F_{\lambda})$. Combined with previous part, it leads to $\diff_{1}(v^{*};F_{y},F_{\lambda})=\diff_{1}(w^{*};F_{y},F_{\lambda})=\conde(v^{*},w^{*};\diff_{0})$. Therefore, $\diff_{1}(y;F_{y},F_{\lambda})$ is still continuous in the interior of any MNDI. Besides, it is easy to show $\diff_{1}(y;F_{y},F_{\lambda})$ should be also continuous at $y=0$. Then by induction, the results hold for any $t$. \end{IEEEproof} \end{comment} \end{IEEEproof} \begin{prop} \label{prop:G_Lipschitz}$\forall F_{y},F_{\lambda}$, $\diff(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} satisfies: $0\leq\diff(y_{2};F_{y},F_{\lambda})-\diff(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, $\forall0\leq y_{1}\leq y_{2}$. \end{prop} \begin{IEEEproof} Since the final output $\diff(y;F_{y},F_{\lambda})$ of Algorithm \ref{alg:conti} has no MDI, according to Lemma \ref{lem:Gt_continuity}, $\diff(y;F_{y},F_{\lambda})$ is continuous on $y\geq0$ and $\diff(y;F_{y},F_{\lambda})$ is monotonely non-decreasing. On the other hand, it is not hard to see that $\diff_{0}(y_{2};F_{y},F_{\lambda})-\diff_{0}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$ from Lemma \ref{lem:G0_continuity}. Suppose before the $t$th $\texttt{WHILE}$ $\texttt{LOOP}$, $\diff_{t-1}(y_{2};F_{y},F_{\lambda})-\diff_{t-1}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$. After the $t$th loop, $\diff_{t}(y;F_{y},F_{\lambda})=\diff_{t-1}(y;F_{y},F_{\lambda})$, $y\notin[v^{*},w^{*}]$. 
This means $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, for $y_{1},y_{2}\in[0,v^{*})$ or $y_{1},y_{2}\in(w^{*},\infty)$ by assumption. Besides, we have $\diff_{t}(y;F_{y},F_{\lambda})=\diff_{t-1}(v^{*};F_{y},F_{\lambda})$, $y\in[v^{*},w^{*}]$, so $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$ also holds for $y_{1},y_{2}\in[v^{*},w^{*}]$. Since from Lemma \ref{lem:Gt_continuity}, at any discontinuous point $y_{0}$ of $\diff_{t}(y)$, $\diff_{t}(y_{0}^{-})\geq\diff_{t}(y_{0}^{+})$, we can obtain that $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$. Finally, by induction, we can obtain that the results hold for $\diff(y,F_{y},F_{\lambda})$. \end{IEEEproof} \subsubsection{\label{subsec:asym_sepa_RCS}Asymptotic Separability for Regular Converging Sequences} We first prove Proposition \ref{prop:prox} for a special class of converging sequences. Among all converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ with limiting CDFs $F_{y}$ and $F_{\lambda}$, we define: \begin{align} \begin{cases} y_{i}=F_{y}^{-1}(\frac{i}{p+1}) & i=1,2,\ldots,p\\ \lambda_{i}=F_{\lambda}^{-1}(\frac{i}{p+1}) & i=1,2,\ldots,p \end{cases}\label{eq:regu_convseq} \end{align} as the \emph{regular converging sequence} (RCS). The sequence in (\ref{eq:regu_convseq}) possesses the nice property that if $\diff_{0}(y;F_{y},F_{\lambda})$ is decreasing (non-decreasing) over $[\bdp_{L},\bdp_{R}]$, then any sub-sequence $I$ with $\left\{ |y_{i}|\right\} _{i\in I}$ falling within $[\bdp_{L},\bdp_{R}]$ satisfies that $\left\{ \diffi_{0,i}\right\} _{i\in I}$ is decreasing (non-decreasing). This means that the number of MDS of $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is bounded by the number of MDI of $\diff_{0}(y;F_{y},F_{\lambda})$.
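For concreteness, an RCS as in (\ref{eq:regu_convseq}) can be generated by evaluating the quantile functions on the grid $i/(p+1)$; in the toy case below (exponential $F_{y}$ and a constant $\lambda$, both assumed purely for illustration), $\diff_{0}$ has no MDI and the discrete sequence $\{\diffi_{0,i}\}$ correspondingly has no MDS:

```python
import numpy as np

def regular_converging_sequence(Fy_inv, Flam_inv, p):
    """RCS: evaluate the quantile functions at i/(p+1), i = 1, ..., p."""
    u = np.arange(1, p + 1) / (p + 1)
    return Fy_inv(u), Flam_inv(u)

# toy choices (assumed): Y ~ Exp(1), lambda constant at 0.5
y, lam = regular_converging_sequence(lambda u: -np.log(1 - u),
                                     lambda u: np.full_like(u, 0.5),
                                     1000)
g0 = y - lam                     # g_{0,i} = |y|_(i) - lambda_i
assert np.all(np.diff(g0) >= 0)  # G_0 non-decreasing here, hence no MDS
```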
As a result, the implementation of Algorithm \ref{alg:discrete} under RCS is simpler, since the number of $\mathtt{WHILE}$ $\mathtt{LOOP}$ iterations, which equals the number of MDS, is bounded. However, for other converging sequences with the same $F_{y}$ and $F_{\lambda}$, the number of MDS may be much larger, which makes the analysis much more complicated. The simplest case is when $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing over $[0,\infty)$, which means there is no MDS in $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$. Then the asymptotic separability in Proposition \ref{prop:prox} can be easily verified: \begin{lem} \label{lem:0MDI_asym_sepa}Let $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ be RCS with limiting CDFs $F_{y}$ and $F_{\lambda}$. If $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} has no MDI, then asymptotic separability holds. \end{lem} \begin{IEEEproof} Since by assumption $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing, $\left\{ |y|_{(i)}-\lambda_{i}\right\} _{1\leq i\leq p}$ has no MDS. Therefore, from Algorithm \ref{alg:discrete}, $[\tprox(\vy)]_{i}=\sgn(y_{i})(|y|_{(j(i))}-\lambda_{j(i)})$. Also, by Lemma \ref{lem:G0_continuity}, we know $F_{y}$ should be continuous here, so we have $\lambda_{i}=F_{\lambda}^{-1}(F_{|y|}(|y|_{(i)}))$. Therefore, from Algorithm \ref{alg:conti}, $\prox(y_{i};F_{y},F_{\lambda})=\sgn(y_{i})[|y|_{(j(i))}-F_{\lambda}^{-1}(F_{|y|}(|y|_{(j(i))}))]=[\tprox_{\vlambda}(\vy)]_{i}$, $\forall i=1,2,\ldots,p$. \end{IEEEproof} \begin{comment} To prove asymptotic separability for RCS in (\ref{eq:regu_convseq}), we first consider two special distributions of $Y$ and $\lambda$ with $F_{y}$ and $F_{\lambda}$ satisfying: (1) $F_{y}$ is continuous and $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} has no MDI, (2) there exists exactly one MDI in $G(y)$. These two cases will be the building blocks for the proof of general distributions.
In the following, we first prove the asymptotic separability of $\tproxl(\vy^{(p)})$ for RCS and then generalize it to all converging sequences with the same limiting distribution. We first define some quantities that will be used in proving convergence results. \end{comment} A more complicated case is when there exists exactly one MDI in $\diff_{0}(y;F_{y},F_{\lambda})$. To analyze this case, we need some auxiliary quantities. \begin{defn} \label{def:BP_perturb}Consider a function $\diff(y)$, $y\geq0$, with $I_{1}$, $I_{2}$ and $I_{3}$ the same as in Definition \ref{def:BP_cont}, and let $Y$ be a non-negative random variable. For $w\in I_{3}$, construct the following set: \[ \mathcal{V}_{L,\veps}(w)=\left\{ v|v\in I_{1},\diff(v)<\conde(v^{*}(w),w;\diff)-\veps\right\} \] Then we define $v_{L,\veps}^{*}(w)$ as: \[ v_{L,\veps}^{*}(w)=\begin{cases} \sup_{v\in\mathcal{V}_{L,\veps}(w)}v & \mathcal{V}_{L,\veps}(w)\neq\emptyset\\ 0 & \mathcal{V}_{L,\veps}(w)=\emptyset \end{cases} \] Similarly, let \[ \mathcal{W}_{R,\veps}=\left\{ w|w\in I_{3},\diff(w)>\conde(v^{*},w^{*};\diff)+\veps\right\} \] and define $w_{R,\veps}^{*}$ as: \begin{equation} w_{R,\veps}^{*}=\begin{cases} \inf_{w\in\mathcal{W}_{R,\veps}}w & \mathcal{W}_{R,\veps}\neq\emptyset\\ \bdp_{3} & \mathcal{W}_{R,\veps}=\emptyset \end{cases}\label{eq:w_R_eps_star} \end{equation} \end{defn} The quantities $v_{R,\veps}^{*}(w)$, $w_{L,\veps}^{*}$, $v_{L,\veps}^{*}(w)$ and $w_{R,\veps}^{*}$ can be regarded as small perturbations of the LBP and RBP in Definition \ref{def:BP_cont}, where $\veps$ measures the magnitude of the perturbation. Next we will show that for any small $\veps$, as $p\to\infty$, within each $\texttt{WHILE}$ $\texttt{LOOP}$ in Algorithm \ref{alg:discrete}, $\diffi_{t,j^{*}}$ and $\diffi_{t,i^{*}}$ will be close to $\diff_{t}(w^{*})$ and $\diff_{t}(v^{*})$, and also $\aver{i^{*}}{j^{*}}\to\conde(v^{*},w^{*})$.
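As a rough numerical illustration of the perturbed left breakpoint $v_{L,\veps}^{*}(w)$, consider a hypothetical piecewise-linear $\diff$ with a single MDI $[1,2]$ and $Y\sim\mathrm{Uniform}(0,4)$; the function $\diff$, the grid discretization, and all helper names below are assumptions of this sketch, not objects from the paper:

```python
import numpy as np

# Hypothetical piecewise-linear G with a single MDI [1, 2]:
# increasing on I1 = (0, 1), decreasing on [1, 2], increasing on I3 = (2, 4).
def G(y):
    return np.where(y <= 1, y, np.where(y <= 2, 2 - y, y - 2))

y = np.linspace(0.0, 4.0, 400_001)      # Y ~ Uniform(0, 4), fine grid

def conde(v, w):
    """E[G(Y) | Y in (v, w]], approximated on the grid."""
    m = (y > v) & (y <= w)
    return G(y[m]).mean()

def v_L_eps(v_star, w, eps):
    """sup{ v in I1 : G(v) < conde(v*(w), w) - eps }, or 0 if the set is empty."""
    target = conde(v_star, w) - eps
    cand = y[(y > 0) & (y < 1) & (G(y) < target)]
    return cand.max() if cand.size else 0.0

w = 3.0
v_star = 3 - np.sqrt(6)     # solves G(v) = E[G(Y) | Y in (v, 3]] for this G
# Since G(v) = v on I1, the eps-perturbed left breakpoint lands at v* - eps:
print(v_L_eps(v_star, w, 0.1))          # ≈ 0.4505
```

Because $\diff$ is linear with unit slope on $I_{1}$ here, the perturbation in the breakpoint equals $\veps$ itself, and one can check numerically that $|\conde(v_{L,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$, in line with the bound of the next lemma.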
To begin with, we first establish some properties of the perturbation-related quantities defined above in Lemmas \ref{lem:LBP_bound}--\ref{lem:RBP_LLN_bound}. For all these lemmas, we assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI: $I_{2}=[\bdp_{1},\bdp_{2}]$. Also, $I_{1}=(0,\bdp_{1})$, $I_{3}=(\bdp_{2},\bdp_{3})$ are the neighboring MNDIs. \begin{lem} \label{lem:LBP_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI. For $w\in I_{3}$, we have $|\conde(v_{\cdot,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$, where ``$\cdot$'' can be either ``R'' or ``L''. \end{lem} \begin{IEEEproof} It is not hard to show $v_{L,\veps}^{*}(w)\leq v^{*}(w)\leq v_{R,\veps}^{*}(w)$ from Lemma \ref{lem:cont_LBP_monotone} and the definitions of $v_{L,\veps}^{*}(w)$ and $v_{R,\veps}^{*}(w)$. We first prove $|\conde(v_{R,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$. There are two different cases: (a) $v_{R,\veps}^{*}(w)=0$ or $v^{*}(w)=\bdp_{1}$: then $v^{*}(w)=v_{R,\veps}^{*}(w)$, so $\diff_{0}(v_{R,\veps}^{*}(w))=\diff_{0}(v^{*}(w))$ and $\conde(v_{R,\veps}^{*}(w),w)=\conde(v^{*}(w),w)$. (b) $v^{*}(w)<\bdp_{1}$ and $v_{R,\veps}^{*}(w)>0$: then $0\leq\diff_{0}(v_{R,\veps}^{*}(w))-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$, otherwise we could find a $v<v_{R,\veps}^{*}(w)$ such that $\conde(v,w)<\diff_{0}(v^{-})-\veps$, which contradicts the definition of $v_{R,\veps}^{*}(w)$. On the other hand, since $v^{*}(w)\leq v_{R,\veps}^{*}(w)$, we have $\conde(v_{R,\veps}^{*}(w),w)\leq\conde(v^{*}(w),w)$ and $\conde(v^{*}(w),w)\leq\diff_{0}(v^{*}(w))\leq\diff_{0}(v_{R,\veps}^{*}(w))$ by Lemma \ref{lem:cont_LBP_monotone}. Therefore, it holds that $0\leq\conde(v^{*}(w),w)-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$. \begin{comment} if $v^{*}(w)<\bdp_{1}$, $v_{R,\veps}^{*}(w)>0$ and $0<\diff(v_{R,\veps}^{*}(w))-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$.
Since $\conde(v,w)$ is non-decreasing as $v$ decreases on $[v^{*}(w),\bdp_{1}]$, we have $\diff(v^{*}(w))=\conde(v^{*}(w),w)\geq\conde(v_{R,\veps}^{*}(w),w)$ and thus $0<\diff(v_{R,\veps}^{*}(w))-\diff(v^{*}(w))\leq\veps$; since $\diff(y)$ is non-increasing as $y$ decreases on $[v^{*}(w),y_{1}]$, $\diff(v_{R,\veps}^{*}(w))\geq\diff(v^{*}(w))=\conde(v^{*}(w),w)$, so $0<\conde(v^{*}(w),w)-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$. \end{comment} Next, we prove $|\conde(v_{L,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$. There are two different cases: (a) $v^{*}(w)=0$ or $v_{L,\veps}^{*}(w)=\bdp_{1}$: then $v_{L,\veps}^{*}(w)=v^{*}(w)$ and hence $\diff_{0}(v_{L,\veps}^{*}(w))=\diff_{0}(v^{*}(w))$ and $\conde(v_{L,\veps}^{*}(w),w)=\conde(v^{*}(w),w)$. (b) $v^{*}(w)>0$ and $v_{L,\veps}^{*}(w)<\bdp_{1}$: if $\diff_{0}(y)$ is discontinuous at $y=0$, then $v^{*}(w)=0$ and hence $v_{L,\veps}^{*}(w)=0$, which implies the result as in case (a); if $\diff_{0}(y)$ is continuous at $y=0$, then from the definition of $v_{L,\veps}^{*}$, we know $0\leq\conde(v^{*}(w),w)-\diff_{0}(v_{L,\veps}^{*}(w))\leq\veps$. Using Lemma \ref{lem:cont_LBP_monotone}, we have $\diff_{0}(v_{L,\veps}^{*}(w))\leq\conde(v_{L,\veps}^{*}(w),w)\leq\conde(v^{*}(w),w)$, so $0\leq\conde(v^{*}(w),w)-\conde(v_{L,\veps}^{*}(w),w)\leq\veps$. \begin{comment} first, $v^{*}(w)>0$ implies that $\diff(y)$ is continuous at $y=0$ and combining with the continuity of $\diff(y)$ in $(0,\bdp_{1})$ shown in Lemma \ref{lem:Gt_continuity}, we get $\diff(y)$ is continuous on $[0,\bdp_{1})$. Therefore, $0\leq\diff(v^{*}(w)^{-})-\diff(v_{L,\veps}^{*}(w))\leq\veps$ by definition of $v_{L,\veps}^{*}(w)$. If $v^{*}(w)<\bdp_{1}$, $\diff(v^{*}(w))=\conde(v^{*}(w),w)$ and $0<\conde(v^{*}(w),w)-\conde(v_{L,\veps}^{*}(w),w)\leq\veps$; If $v^{*}(w)=\bdp_{1}$, we have $\conde(v^{*}(w),w)-v_{L,\veps}^{*}(w)<$ \end{comment} \end{IEEEproof} \begin{lem} \label{lem:RBP_bound}Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI.
We have $|\conde(v^{*}(w_{\cdot,\veps}^{*}),w_{\cdot,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$, where ``$\cdot$'' can be either ``R'' or ``L''. \end{lem} \begin{IEEEproof} It is not hard to show that $w_{L,\veps}^{*}\leq w^{*}\leq w_{R,\veps}^{*}$ from the definitions and Lemma \ref{lem:cont_RBP_monotone}. We first prove $|\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$. Consider the following two different cases: (a) $w^{*}=\bdp_{2}$ or $w_{L,\veps}^{*}=\bdp_{3}$: then $w^{*}=w_{L,\veps}^{*}$, so clearly $\diff_{0}(w_{L,\veps}^{*})=\diff_{0}(w^{*})$ and $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})=\conde(v^{*},w^{*})$. (b) $w^{*}>\bdp_{2}$ and $w_{L,\veps}^{*}<\bdp_{3}$: First, $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})\geq\diff_{0}((w_{L,\veps}^{*})^{+})$; otherwise, by Lemma \ref{lem:cont_RBP_monotone}, $w^{*}=w_{L,\veps}^{*}=\bdp_{2}$ or $w^{*}<w_{L,\veps}^{*}$, which leads to a contradiction. Also, we have $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\diff_{0}((w_{L,\veps}^{*})^{+})\leq\veps$; otherwise, from Lemmas \ref{lem:cont_RBP_monotone} and \ref{lem:G0_continuity}, we know $\exists w>w_{L,\veps}^{*}$ s.t. $\conde(v^{*}(w),w)-\diff_{0}(w^{+})>\veps$, which contradicts the definition in (\ref{eq:w_L_eps_star}). We can easily show that $\diff_{0}(w_{L,\veps}^{*})\leq\diff_{0}(w^{*})\leq\conde(v^{*},w^{*})$, so $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$. On the other hand, since $\conde(v^{*}(w),w)$ is non-increasing on $[\bdp_{2},w^{*}]$, $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})\geq0$. \begin{comment} By definition, $w_{L,\veps}^{*}\leq w^{*}\leq w_{R,\veps}^{*}$.
If $w^{*}=\bdp_{2}$ or $w_{L,\veps}^{*}=\bdp_{3}$, then $w^{*}=w_{L,\veps}^{*}$ and clearly $\diff(w_{L,\veps}^{*})=\diff(w^{*})$ and $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})=\conde(v^{*},w^{*})$; otherwise, $w^{*}>y_{2}$, $w_{L,\veps}^{*}<y_{3}$ and $0<\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\diff(w_{L,\veps}^{*})\leq\veps$. Since $\conde(v^{*}(w),w)$ is non-increasing on $[y_{2},w^{*}]$, we have $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})\geq\conde(v^{*}(w^{*}),w^{*})\geq\diff(w^{*})$, so $0<\diff(w^{*})-\diff(w_{L,\veps}^{*})\leq\veps$. On the other hand, $\diff(w_{L,\veps}^{*})\leq\diff(w^{*})\leq\conde(v^{*}(w^{*}),w^{*})$, so $0<\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*}(w^{*}),w^{*})\leq\veps$. \end{comment} Next we show $|\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$. The proof is similar. There are two different cases: (a) $w^{*}=\bdp_{3}$ or $w_{R,\veps}^{*}=\bdp_{2}$: $\diff_{0}(w_{R,\veps}^{*})=\diff_{0}(w^{*})$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\conde(v^{*},w^{*})$. (b) $w^{*}<\bdp_{3}$ and $w_{R,\veps}^{*}>\bdp_{2}$: we have $0\leq\diff_{0}(w_{R,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$. It can be directly verified from Lemma \ref{lem:cont_RBP_monotone} that $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\leq\diff_{0}(w_{R,\veps}^{*})$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\geq\conde(v^{*},w^{*})$, so we obtain $0\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$. \begin{comment} if $w^{*}=y_{3}$ or $w_{R,\veps}^{*}=0$, $\diff(w_{R,\veps}^{*})=\diff(w^{*})$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\conde(v^{*},w^{*})$; otherwise, $w^{*}<y_{3}$ and $w_{R,\veps}^{*}>0$. If $\diff(y)$ is discontinuous at $y=y_{2}$, $w^{*}>y_{2}$ and since $\diff(y)$ is continuous on $(y_{2},y_{3})$, $0\leq\diff((w_{R,\veps}^{*})^{-})-\diff(w^{*})\leq\veps$ by definition of $w_{R,\veps}^{*}$.
In addition, $\diff(w^{*})=\conde(v^{*},w^{*})$, so $|\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$. \end{comment} \end{IEEEproof} \begin{lem} \label{lem:LBP_perturb} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI. For any $w\in I_{3}$, $\exists\veps_{1}\in(0,\veps]$ s.t. if $v_{R,\veps}^{*}(w)\in(0,\bdp_{1})$, $\conde(v_{R,\veps}^{*}(w),w)<\diff_{0}(v_{R,\veps}^{*}(w))-\veps_{1}$; if $v_{L,\veps}^{*}(w)\in(0,\bdp_{1})$, $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$. \end{lem} \begin{IEEEproof} If $v_{R,\veps}^{*}(w)\in(0,\bdp_{1})$, by the definition of $v_{R,\veps}^{*}(w)$, $\forall v\in(v_{R,\veps}^{*}(w),\bdp_{1})$, $\diff_{0}(v^{-})>\conde(v,w)+\veps$, so $\diff_{0}(v_{R,\veps}^{*}(w))>\conde(v_{R,\veps}^{*}(w),w)+\veps$ from the continuity of $\diff_{0}(y)$ on $(0,\bdp_{1})$ shown in Lemma \ref{lem:G0_continuity}. Therefore, $\exists0<\veps_{1}\leq\veps$ s.t. $\diff_{0}(v_{R,\veps}^{*}(w)^{-})>\conde(v_{R,\veps}^{*}(w),w)+\veps_{1}$. Also, since $v_{R,\veps}^{*}(w)\in\text{Int}(I_{1})$, $\diff_{0}(v_{R,\veps}^{*}(w))=\diff_{0}(v_{R,\veps}^{*}(w)^{-})$. If $v_{L,\veps}^{*}(w)\in(0,\bdp_{1})$, then $\diff_{0}(v_{L,\veps}^{*}(w))=\conde(v^{*}(w),w)-\veps$ due to the continuity of $\diff_{0}(y)$. Based on this, it is easy to check that $\exists0<\veps_{1}\leq\veps$ s.t. $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$. \end{IEEEproof} \begin{lem} \label{lem:LBP_LLN_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI and $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCS. For any $w\in[\bdp_{2},\bdp_{3}]$, let $j_{w}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w\right\} $. We have: \begin{equation} |\aver{i^{*}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\epsilon_{w}^{(p)},\label{eq:LBP_LLN} \end{equation} where $\epsilon_{w}^{(p)}\to0$ as $p\to\infty$.
\end{lem} \begin{IEEEproof} Let $i_{L,\veps}(j_{w})=\min_{1\leq i\leq p}\left\{ i\Big||y|_{(i)}\geq v_{L,\veps}^{*}(w)\right\} $ and $i_{R,\veps}(j_{w})=\min_{1\leq i\leq p}\left\{ i\Big||y|_{(i)}\geq v_{R,\veps}^{*}(w)\right\} $. We first prove $i^{*}(j_{w})\geq i_{L,\veps}(j_{w})-1$ for large enough $p$. There are three different scenarios: (a) $i_{L,\veps}(j_{w})=1$: then $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$ holds trivially. (b) $i_{L,\veps}(j_{w})>1$ and $v_{L,\veps}^{*}(w)<\bdp_{1}$: then $i_{L,\veps}(j_{w})\in S_{1}$ due to the fact that $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCSs as in (\ref{eq:regu_convseq}). Also, we have $v_{L,\veps}^{*}(w)>0$, otherwise $i_{L,\veps}(j_{w})=1$. According to Lemma \ref{lem:LBP_perturb}, $\exists\veps_{1}\in(0,\veps]$ s.t. $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$. Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, $|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v_{L,\veps}^{*}(w),w)|=\epsilon_{L,w}^{(p)}$, with $\epsilon_{L,w}^{(p)}\to0$, as $p\to\infty$. Therefore, $\diff_{0}(v_{L,\veps}^{*}(w))<\aver{i_{L,\veps}(j_{w})}{j_{w}}$ for large enough $p$. Since $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ is an RCS and $|y|_{(i_{L,\veps}(j_{w})-1)}<v_{L,\veps}^{*}(w)$, $\diff_{0}(v_{L,\veps}^{*}(w))\geq\diff_{0}(|y|_{(i_{L,\veps}(j_{w})-1)})=\diffi_{0,i_{L,\veps}(j_{w})-1}$ and $\diffi_{0,i_{L,\veps}(j_{w})-1}<\aver{i_{L,\veps}(j_{w})}{j_{w}}.$ By Lemma \ref{lem:discrete_LRBP_monotone}, $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$. (c) $i_{L,\veps}(j_{w})>1$ and $v_{L,\veps}^{*}(w)=\bdp_{1}$: Since $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCSs, we can get $i_{L,\veps}(j_{w})\leq\max_{i\in S_{1}}i+1$. Similar to scenario (b), we can also obtain that $i^{*}(j_{w})=\max_{i\in S_{1}}i$. Therefore, we have $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})+1$.
\begin{comment} If $v_{L,\veps}^{*}(w)=0$, then $i_{L,\veps}(j_{w})=1$, so $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$ holds trivially; if $\calV_{L,\veps}(w)\neq\emptyset$ and $v_{L,\veps}^{*}(w)>0$, by Lemma \ref{lem:LBP_perturb}, $\exists\veps_{1}\in(0,\veps]$, s.t. $\diff_{0}(v_{L,\veps}^{*}(w)^{-})<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$. Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, $|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v_{L,\veps}^{*}(w),w)|=\epsilon_{L,w}^{(p)}$, with $\epsilon_{L,w}^{(p)}\to0$, as $p\to\infty$. Therefore, $\diff_{0}(v_{L,\veps}^{*}(w)^{-})<\aver{i_{L,\veps}(j_{w})}{j_{w}}$ for large enough $p$. If $i_{L,\veps}(j_{w})=1$, then the result holds trivially, otherwise $\diffi_{0,i_{L,\veps}(j_{w})-1}\leq\diff_{0}(v_{L,\veps}^{*}(w)^{-})$, so $\diffi_{0,i_{L,\veps}(j_{w})-1}\leq\diff_{0}(v_{L,\veps}^{*}(w)^{-})$. From Lemma \ref{lem:discrete_LRBP_monotone}, we know $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$. \end{comment} Next, we prove $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. We have three different scenarios: (a) $v_{R,\veps}^{*}(w)=\bdp_{1}$: then by definition of $i_{R,\veps}(j_{w})$, we have $i_{R,\veps}(j_{w})\in S_{2}$. Since $i^{*}(j_{w})\in S_{1}$, we have $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. (b) $0<v_{R,\veps}^{*}(w)<\bdp_{1}$: from Lemma \ref{lem:LBP_perturb}, $\diff_{0}(v_{R,\veps}^{*}(w))>\conde(v_{R,\veps}^{*}(w),w)+\veps$, where we have used the continuity of $\diff_{0}(y)$ on $(0,\bdp_{1})$. On the other hand, $|\aver{i_{R,\veps}(j_{w})}{j_{w}}-\conde(v_{R,\veps}^{*}(w),w)|=\epsilon_{R,w}^{(p)}$, with $\epsilon_{R,w}^{(p)}\to0$, as $p\to\infty$, so $\diff_{0}(v_{R,\veps}^{*}(w))>\aver{i_{R,\veps}(j_{w})}{j_{w}}$ for large enough $p$. 
If $i_{R,\veps}(j_{w})\in S_{2}$, then $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$ holds trivially; if $i_{R,\veps}(j_{w})\in S_{1}$, then $\diffi_{0,i_{R,\veps}(j_{w})}>\aver{i_{R,\veps}(j_{w})}{j_{w}}$, so using Lemma \ref{lem:discrete_LRBP_monotone}, we can obtain $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. (c) $v_{R,\veps}^{*}(w)=0$: then by definition $i_{R,\veps}(j_{w})=1$. It suffices to show $i^{*}(j_{w})=1$. If $\diff_{0}(y)$ is discontinuous at $y=0$, then it is not hard to show $\diffi_{0,1}\in S_{2}$, so $i^{*}(j_{w})=1$; if $\diff_{0}(y)$ is continuous at $y=0$ and $\bdp_{1}=0$, then clearly $i^{*}(j_{w})=1$; if $\diff_{0}(y)$ is continuous at $y=0$ and $\bdp_{1}>0$, then $\diffi_{0,1}\geq\diff_{0}(0)\geq\conde(0,w)+\veps$. Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, $\aver 1{j_{w}}\to\conde(0,w)$ as $p\to\infty$. Therefore, $\diffi_{0,1}\geq\aver 1{j_{w}}+\veps/2$ for large enough $p$. This indicates that $\aver 1{j_{w}}>\aver 2{j_{w}}$, so $\diffi_{0,1}\geq\aver 2{j_{w}}+\veps/2$ and, by Lemma \ref{lem:discrete_LRBP_monotone}, we get $i^{*}(j_{w})=1$. \begin{comment} Next, we prove $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. If $v_{R,\veps}^{*}(w)=\bdp_{1}$, then $\diffi_{0,i_{R,\veps}(j_{w})}\in S_{2},$where $S_{2}$ is defined in Algorithm \ref{alg:discrete}. Since $\diffi_{0,i^{*}(j_{w})}\in S_{1},$$i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$; if $\calV_{R,\veps}(w)\neq\emptyset$ and $0<v_{R,\veps}^{*}(w)<\bdp_{1}$, from Lemma \ref{lem:LBP_perturb}, $\diff_{0}(v_{R,\veps}^{*}(w)^{-})>\conde(v_{R,\veps}^{*}(w),w)+\veps$. On the other hand, $|\aver{i_{R,\veps}(j_{w})}{j_{w}}-\conde(v_{R,\veps}^{*}(w),w)|=\epsilon_{R,w}^{(p)}$, with $\epsilon_{R,w}^{(p)}\to0$, as $p\to\infty$, so $\diff_{0}(v_{R,\veps}^{*}(w)^{-})>\aver{i_{R,\veps}(j_{w})}{j_{w}}$ for large enough $p$.
We have two scenarios: (1) $\diffi_{0,i_{R,\veps}(j_{w})}\in S_{2},$then $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$ holds trivially; (2)$\diffi_{0,i_{R,\veps}(j_{w})}<\max_{\diffi_{0,i}\in S_{2}}\diffi_{0,i}$, then $\aver{i_{R,\veps}(j_{w})}{j_{w}}\geq\aver{i_{R,\veps}(j_{w})+1}{j_{w}}.$ Since we choose $\left\{ \vy^{(p)}\right\} _{1\leq i\leq p}$ and $\left\{ \vlambda^{(p)}\right\} _{1\leq i\leq p}$ in (\ref{eq:regu_convseq}), $\diffi_{0,i_{R,\veps}(j_{w})}\geq\diff_{0}(v_{R,\veps}^{*}(w)^{-})$ and hence $\diffi_{0,i_{R,\veps}(j_{w})}>\aver{i_{R,\veps}(j_{w})}{j_{w}}.$ As a result, $\diffi_{0,i_{R,\veps}(j_{w})}>\aver{i_{R,\veps}(j_{w})+1}{j_{w}}$ and $i^{*}(j_{w})<i_{R,\veps}(j_{w})$ based on Lemma \ref{lem:discrete_LRBP_monotone}; if $\calV_{R,\veps}(w)\neq\emptyset$ and $v_{R,\veps}^{*}(w)=0$, $i_{R,\veps}(j_{w})=1$, it is not hard to prove $i^{*}(j_{w})=1$, for large enough $p$. \end{comment} Next, we prove (\ref{eq:LBP_LLN}). Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, we have $|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v_{L,\veps}^{*}(w),w)|=\epsilon_{L,w}^{(p)}$, with $\epsilon_{L,w}^{(p)}\to0$, as $p\to\infty$. From Lemma \ref{lem:LBP_bound}, $|\conde(v^{*}(w),w)-\conde(v_{L,\veps}^{*}(w),w)|\leq\veps$, so \begin{equation} |\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\veps+\epsilon_{L,w}^{(p)}.\label{eq:LBP_LLN1} \end{equation} Now consider the following three different scenarios: (a) $\diff_{0}(y)$ is discontinuous at $y=0$: since $y=0$ is a point of discontinuity, it is not hard to show $i^{*}(j_{w})=1$ and $v^{*}(w)=0$. Therefore, for large enough $p$, (\ref{eq:LBP_LLN}) holds. (b) $\diff_{0}(y)$ is continuous at $y=0$ with $v_{L,\veps}^{*}(w)=\bdp_{1}$ or $v_{R,\veps}^{*}(w)=0$: in this case, $v_{L,\veps}^{*}(w)=v_{R,\veps}^{*}(w)$. By the definitions of $i_{L,\veps}(j_{w})$ and $i_{R,\veps}(j_{w})$, we have $i_{L,\veps}(j_{w})=i_{R,\veps}(j_{w})$. Then $i_{L,\veps}(j_{w})-1\leq i^{*}(j_{w})\leq i_{L,\veps}(j_{w})$.
Combining with (\ref{eq:LBP_LLN1}), we get, for large enough $p$, \begin{equation} |\aver{i^{*}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\veps+\epsilon_{w}^{(p)},\label{eq:LBP_LLN2} \end{equation} with $\epsilon_{w}^{(p)}\to0$ as $p\to\infty$. Letting $\veps\to0$ in (\ref{eq:LBP_LLN2}), we get (\ref{eq:LBP_LLN}). (c) $\diff_{0}(y)$ is continuous at $y=0$ with $v_{L,\veps}^{*}(w)<\bdp_{1}$ and $v_{R,\veps}^{*}(w)>0$: since $v_{R,\veps}^{*}(w)>0$, by the continuity of $\diff_{0}(y)$ in the interior of $I_{1}$, we have $\conde(v^{*}(w),w)>\diff_{0}(v_{R,\veps}^{*}(w))-\veps$. On the other hand, we have $\conde(v^{*}(w),w)-\veps\leq\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v^{*}(w),w)$. These together lead to $|\diff_{0}(v_{R,\veps}^{*}(w))-\conde(v^{*}(w),w)|<\veps$ and $|\diff_{0}(v_{L,\veps}^{*}(w))-\conde(v^{*}(w),w)|<\veps$, which indicates that $|\diffi_{0,i}-\conde(v^{*}(w),w)|<\veps$, $\forall i_{L,\veps}(j_{w})\leq i<i_{R,\veps}(j_{w})$. Since $i_{L,\veps}(j_{w})-1\leq i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$, from (\ref{eq:LBP_LLN1}) we know (\ref{eq:LBP_LLN2}) holds for large enough $p$. Letting $\veps\to0$, we conclude that (\ref{eq:LBP_LLN}) holds. \begin{comment} otherwise, we have $\conde(v^{*}(w),w)-\veps\leq\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v^{*}(w),w)$, $\conde(v^{*}(w),w)>\diff_{0}(v_{R,\veps}^{*}(w)^{-})-\veps$, which together gives $|\diff_{0}(v_{R,\veps}^{*}(w)^{-})-\conde(v^{*}(w),w)|<\veps$ and $|\diff_{0}(v_{L,\veps}^{*}(w))-\conde(v^{*}(w),w)|<\veps$. This indicates that $|\diffi_{0,i}-\conde(v^{*}(w),w)|<\veps$, $\forall i_{L,\veps}(j_{w})\leq i<i_{R,\veps}(j_{w})$ and since $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$, we obtain (\ref{eq:LBP_LLN2}) from (\ref{eq:LBP_LLN1}). Finally, let $\veps\to0$ in (\ref{eq:LBP_LLN2}) we get (\ref{eq:LBP_LLN}). \end{comment} \end{IEEEproof} \begin{lem} \label{lem:RBP_LLN_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI.
In addition, assume $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCS. We have: \begin{equation} |\aver{i^{*}}{j^{*}}-\conde(v^{*},w^{*})|\leq\epsilon_{w^{*}}^{(p)}\label{eq:jstar_LLN} \end{equation} where $\epsilon_{w^{*}}^{(p)}\to0$ as $p\to\infty$. \end{lem} \begin{IEEEproof} In the rest of the proof, we use $\conde(v,w)$ to denote $\conde(v,w;\diff_{0}).$ Let $j_{R,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w_{R,\veps}^{*}\right\} $, where $w_{R,\veps}^{*}$ is defined as before. We first show $j^{*}\leq j_{R,\veps}$ for large enough $p$. If $w_{R,\veps}^{*}=\bdp_{3}$, this is trivial, since $j_{R,\veps}=p$ in this case; if $w_{R,\veps}^{*}<\bdp_{3}$, then $\calW_{R,\veps}\neq\emptyset$. From the continuity of $\diff_{0}(y)$ shown in Lemma \ref{lem:G0_continuity} and the definition of $w_{R,\veps}^{*}$ in (\ref{eq:w_R_eps_star}), $\diff_{0}(w_{R,\veps}^{*})=\conde(v^{*},w^{*})+\veps$. Therefore, $\conde(v^{*},w_{R,\veps}^{*})\geq\conde(v^{*},w^{*})$. Consider the two different cases: (a) $\conde(v^{*},w_{R,\veps}^{*})=\conde(v^{*},w^{*})$: then $v^{*}(w_{R,\veps}^{*})=v^{*}$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\conde(v^{*},w^{*})$, so \begin{equation} \conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\diff_{0}(w_{R,\veps}^{*})-\veps.\label{eq:RBPR_jump} \end{equation} From Lemma \ref{lem:LBP_LLN_bound}, we know that for large enough $p$, \begin{equation} |\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}-\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})|\leq\epsilon_{w_{R,\veps}^{*}}^{(p)}\label{eq:RBPR_LLN} \end{equation} with $\epsilon_{w_{R,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$. From (\ref{eq:RBPR_jump}) and (\ref{eq:RBPR_LLN}), we know $\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}<\diff_{0}(w_{R,\veps}^{*})\leq\diffi_{0,j_{R,\veps}+1}$, which indicates $j^{*}\leq j_{R,\veps}$ by Lemma \ref{lem:discrete_LRBP_monotone}.
(b) $\conde(v^{*},w_{R,\veps}^{*})>\conde(v^{*},w^{*})$: then $v^{*}(w_{R,\veps}^{*})>v^{*}$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})>\conde(v^{*},w^{*})$ from Lemma \ref{lem:cont_RBP_monotone}. Let $j_{M,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w^{*}\right\} $ and, similarly to (\ref{eq:RBPR_LLN}), we have \[ |\aver{i^{*}(j_{M,\veps})}{j_{M,\veps}}-\conde(v^{*},w^{*})|\leq\epsilon_{w_{M,\veps}^{*}}^{(p)} \] with $\epsilon_{w_{M,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$. Therefore, combining with (\ref{eq:RBPR_LLN}), we get for large enough $p$, $\aver{i^{*}(j_{M,\veps})}{j_{M,\veps}}<\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}$. This indicates that $j^{*}\leq j_{R,\veps}$, because from Lemma \ref{lem:discrete_LRBP_monotone}, we can see that if $j<j^{*}$, $\aver{i^{*}(j-1)}{j-1}>\aver{i^{*}(j)}j$. \begin{comment} For $w_{R,\veps}^{*}$, choose $0<\veps_{2}<\veps_{1}/2$ s.t. \[ |\conde(v_{R,\veps_{2}}^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*};\diff)-\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*};\diff)|\leq\veps_{2} \] and \[ |\conde(v_{L,\veps_{2}}^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*};\diff)-\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*};\diff)|\leq\veps_{2}. \] Define $i_{L,\veps_{2}}(j_{R,\veps})=\min_{1\leq i\leq p}\left\{ i\Big||y_{i}|\geq v_{L,\veps_{2}}^{*}(w_{R,\veps}^{*})\right\} $ and $i_{R,\veps_{2}}(j_{R,\veps})=\min_{1\leq i\leq p}\left\{ i\Big||y_{i}|\geq v_{R,\veps_{2}}^{*}(w_{R,\veps}^{*})\right\} $. \end{comment} Next, let $j_{L,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w_{L,\veps}^{*}\right\} $; we prove $j^{*}\geq j_{L,\veps}$. If $w_{L,\veps}^{*}=\bdp_{2}$, then $j_{L,\veps}\leq\min_{j\in S_{3}}j$ and, since $j^{*}\in S_{3}$, this is trivial; if $w_{L,\veps}^{*}=\bdp_{3}$, then $j_{L,\veps}=p$ and $\exists0<\veps_{2}\leq\veps$ s.t. $\conde(v^{*}(\bdp_{3}),\bdp_{3})>\diff_{0}(\bdp_{3})+\veps_{2}$.
From Lemma \ref{lem:LBP_LLN_bound}, we know $|\aver{i^{*}(p)}p-\conde(v^{*}(w),w)|\leq\epsilon_{\bdp_{3}}^{(p)}$ with $\epsilon_{\bdp_{3}}^{(p)}\to0$ as $p\to\infty$. Hence, letting $w=\bdp_{3}$, we obtain $\aver{i^{*}(p)}p>\diff_{0}(\bdp_{3})+\veps_{2}/2\geq\diffi_{0,p}+\veps_{2}/2$. Therefore, we can get $j^{*}=p=j_{L,\veps}$ from Lemma \ref{lem:discrete_LRBP_monotone}; if $\bdp_{3}>w_{L,\veps}^{*}>\bdp_{2}$, then from Lemma \ref{lem:LBP_perturb}, $\exists\veps_{3}<\veps$, $\veps_{3}<\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\diff_{0}(w_{L,\veps}^{*})\leq\veps$. From Lemma \ref{lem:LBP_LLN_bound}, we can get: \begin{equation} |\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})|\leq\epsilon_{w_{L,\veps}^{*}}^{(p)}\label{eq:RBPL_LLN} \end{equation} with $\epsilon_{w_{L,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$, and hence $\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\veps_{3}/2>\diff_{0}(w_{L,\veps}^{*})\geq\diffi_{0,j_{L,\veps}}$. Then we get $j^{*}\geq j_{L,\veps}$ from Lemma \ref{lem:discrete_LRBP_monotone}. In addition, we have: \begin{equation} |\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\conde(v^{*},w^{*})|\leq\veps+\epsilon_{w_{L,\veps}^{*}}^{(p)}\label{eq:J1_LLN} \end{equation} by combining (\ref{eq:RBPL_LLN}) and Lemma \ref{lem:RBP_bound}, and similarly, \begin{equation} |\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}-\conde(v^{*},w^{*})|\leq\veps+\epsilon_{w_{R,\veps}^{*}}^{(p)}\label{eq:j2_LLN} \end{equation} Now we are ready to prove (\ref{eq:jstar_LLN}). Consider the following three different scenarios: (1) $w_{L,\veps}^{*}=\bdp_{3}$: we have $w^{*}=\bdp_{3}$ and $j^{*}=p$ as shown above. Then from Lemma \ref{lem:LBP_LLN_bound}, we can get (\ref{eq:jstar_LLN}). (2) $w_{R,\veps}^{*}=\bdp_{2}$: we have $w^{*}=\bdp_{2}$ and $j^{*}=j_{\bdp_{2}}$. From Lemma \ref{lem:LBP_LLN_bound}, we can get (\ref{eq:jstar_LLN}).
(3) $\bdp_{2}\leq w_{L,\veps}^{*}<\bdp_{3}$ and $\bdp_{2}<w_{R,\veps}^{*}\leq\bdp_{3}$: we have $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\veps\leq\diff_{0}((w_{L,\veps}^{*})^{+})\leq\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})$, so $\diffi_{0,j_{L,\veps}+1}\geq\diff_{0}((w_{L,\veps}^{*})^{+})\geq\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\veps$. On the other hand, $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\leq\diff_{0}(w_{R,\veps}^{*})\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})+\veps$, so $\diffi_{0,j_{R,\veps}}\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})+\veps$. Then, combining with Lemma \ref{lem:RBP_bound}, we have $\diffi_{0,j_{L,\veps}+1}\geq\conde(v^{*},w^{*})-2\veps$ and $\diffi_{0,j_{R,\veps}}\leq\conde(v^{*},w^{*})+2\veps$. Since $j_{L,\veps}-1\leq j^{*}\leq j_{R,\veps}$, from (\ref{eq:j2_LLN}) and (\ref{eq:J1_LLN}) and letting $\veps\to0$, we conclude (\ref{eq:jstar_LLN}) holds. \end{IEEEproof} \begin{lem} \label{lem:1MDI_asym_sepa}Suppose in Algorithm \ref{alg:conti}, $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI. Then for RCS $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ with limiting CDF $F_{y}$ and $F_{\lambda}$, asymptotic separability holds. \end{lem} \begin{IEEEproof} Let $\diffi_{1,i}^{*}=\diff_{1}(|y|_{(i)})$, $1\leq i\leq p$, and $\prox(y_{i};F_{y},F_{\lambda})=\sgn(y_{i})\max\left\{ 0,\diffi_{1,j(i)}^{*}\right\} $. In the current setting, we know from Algorithm \ref{alg:conti} that: \[ \diff_{1}(y)=\begin{cases} \conde(v^{*},w^{*}) & y\in[v^{*},w^{*}]\\ \diff_{0}(y) & y\notin[v^{*},w^{*}] \end{cases} \] On the other hand, $\diffi_{1,i}$ in Algorithm \ref{alg:discrete} is: \[ \diffi_{1,i}=\begin{cases} \aver{i^{*}}{j^{*}} & i^{*}\leq i\leq j^{*}\\ \diff_{0}(|y|_{(i)}) & \otherwise \end{cases} \] and $[\tprox_{\vlambda^{(p)}}(\vy^{(p)})]_{i}=\sgn(y_{i})\max\left\{ 0,\diffi_{1,i}\right\} $.
Based on Lemma \ref{lem:RBP_LLN_bound}, for $\epsilon>0$ we can define the index sets: \[ I_{+}(\epsilon)\bydef\left\{ i\Big|\diff_{0}(y_{i})\geq\conde(v^{*},w^{*})+\epsilon/2\right\} \] and \[ I_{-}(\epsilon)\bydef\left\{ i\Big|\diff_{0}(y_{i})<\conde(v^{*},w^{*})-\epsilon/2\right\} \] From (\ref{eq:jstar_LLN}), we know that for $i=1,2,\ldots,p$ and large enough $p$, \[ \begin{cases} \diffi_{1,i}=\diffi_{1,i}^{*} & i\in I_{+}(\epsilon)\cup I_{-}(\epsilon)\\ |\diffi_{1,i}-\diffi_{1,i}^{*}|\leq\epsilon & \otherwise \end{cases} \] Therefore, $|\prox(y_{i};F_{y},F_{\lambda})-[\tproxl(\vy^{(p)})]_{i}|\leq|\diffi_{1,i}-\diffi_{1,i}^{*}|\leq\epsilon$, $\forall i=1,2,\ldots,p$. With $\epsilon$ taken arbitrarily close to 0, this implies that $\frac{1}{p}\|\prox(\vy^{(p)};F_{y},F_{\lambda})-\tproxl(\vy^{(p)})\|^{2}\to0$ as $p\to\infty$, which proves the result. \end{IEEEproof} Now we are ready to prove the asymptotic separability for RCS in (\ref{eq:regu_convseq}) with general limiting distributions $F_{y}$ and $F_{\lambda}$. \begin{lem} \label{lem:asym_sepa_regu}Let $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ be RCS as in (\ref{eq:regu_convseq}) with limiting CDF $F_{y}$ and $F_{\lambda}$. Then asymptotic separability holds. \end{lem} \begin{IEEEproof} The case where $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing has been proved in Lemma \ref{lem:0MDI_asym_sepa}. Otherwise, $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} must contain at least one MDI and $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ as defined in Algorithm \ref{alg:discrete} must contain an MDS for large enough $p$. Our general proof strategy is to show that $\exists\left\{ \tilde{\vlambda}^{(p)}\right\} _{p\in\mathbb{N}}$ s.t.
$\frac{\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\t{\vlambda}^{(p)}}(\vy^{(p)})\|^{2}}{p}\to0$, where $\left\{ \tilde{\vlambda}^{(p)}\right\} _{p\in\mathbb{N}}$ is a regular converging sequence with non-decreasing $\diff_{0}(y;F_{y},F_{\t{\lambda}})$, which is equal to $\diff(y;F_{y},F_{\lambda})$ (the output of the $\mathtt{WHILE}$ $\mathtt{LOOP}$ in Algorithm \ref{alg:conti} implemented for $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$). Let $\mathcal{I}_{T}=\left\{ i\Big||y|_{(i)}\in I_{T}\right\} $, where $I_{T}$ is defined in (\ref{eq:IT}). Define $\left\{ \diffi_{1,i}^{s}\right\} _{i\in\mathcal{I}_{T}}$ as the output of Algorithm \ref{alg:discrete} implemented on the sub-sequences $\left\{ |y|_{(i)}\right\} _{i\in\mathcal{I}_{T}}$ and $\left\{ \lambda_{i}\right\} _{i\in\mathcal{I}_{T}}$, and define $\h g_{1,i}$ as \[ \h g_{1,i}=\begin{cases} \diffi_{1,i}^{s} & i\in\mathcal{I}_{T}\\ \diffi_{0,i} & i\notin\mathcal{I}_{T} \end{cases} \] We construct a new regularization sequence as follows: \[ \h{\lambda}_{1,i}=|y|_{(i)}-\h g_{1,i} \] When $i<i^{*}$ or $i>j^{*}$, $\h g_{1,i}=\diffi_{0,i}$ and thus $\lambda_{i}=\h{\lambda}_{1,i}$, since $\lambda_{i}=|y|_{(i)}-\diffi_{0,i}$. From Lemma \ref{lem:discrete_LRBP_monotone}, $\diffi_{0,i^{*}}\geq\aver{i^{*}}{j^{*}}=\h{\diffi}_{1,i^{*}}$ and $\diffi_{0,j^{*}}\leq\aver{i^{*}}{j^{*}}=\h{\diffi}_{1,j^{*}}$, so $\lambda_{i^{*}}\leq\h{\lambda}_{1,i^{*}}$, $\lambda_{j^{*}}\geq\h{\lambda}_{1,j^{*}}$. Besides, since $\h{\lambda}_{1,i^{*}-1}=\lambda_{i^{*}-1}\leq\lambda_{i^{*}}$ and $\h{\lambda}_{1,j^{*}+1}=\lambda_{j^{*}+1}\geq\lambda_{j^{*}}$, we have $\h{\lambda}_{1,i^{*}-1}\leq\h{\lambda}_{1,i^{*}}$ and $\h{\lambda}_{1,j^{*}+1}\geq\h{\lambda}_{1,j^{*}}$, which means $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is a non-decreasing sequence.
On the other hand, since $\diffi_{0,i^{*}}\geq\h{\diffi}_{1,i^{*}}$, we get $\lambda_{i^{*}}\leq\h{\lambda}_{1,i^{*}}$, and given that $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is non-decreasing and $\lambda_{i}\geq0$, we know $\h{\lambda}_{1,i}\geq0$, $\forall i$. Therefore, $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is a valid regularization sequence. Moreover, it is not difficult to show that if we replace $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ by $\left\{ \hat{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$, the solution will remain unaltered: \[ \tprox_{\hat{\vlambda}_{1}^{(p)}}(\vy^{(p)})=\tprox_{\vlambda^{(p)}}(\vy^{(p)}). \] Next, consider another regularization sequence: \[ \tilde{\lambda}_{1,i}=\begin{cases} |y|_{(i)}-\diff_{1}(|y|_{(i)};F_{y},F_{\lambda}) & i\in\mathcal{I}_{T}\\ \lambda_{i} & i\notin\mathcal{I}_{T} \end{cases} \] We can see that $\left\{ \tilde{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$ is also an RCS and the corresponding $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is exactly $\diff_{1}(y;F_{y},F_{\lambda})$. Note that $\diff_{0}(y;F_{y},F_{\lambda})$ contains at most one MDI within $I_{T}$, so it naturally falls within the setting of Lemma \ref{lem:1MDI_asym_sepa}. Based on the proof of Lemma \ref{lem:1MDI_asym_sepa}, we obtain $\frac{1}{p}\sum_{i=1}^{p}\left[\diff_{1}(|y|_{(i)};F_{y},F_{\lambda})-\h g_{1,i}\right]^{2}\to0$, so $\frac{1}{p}\|\hat{\vlambda}_{1}^{(p)}-\tilde{\vlambda}_{1}^{(p)}\|^{2}\to0$. From Lemma \ref{lem:replace}, we have $\frac{1}{p}\|\tprox_{\hat{\vlambda}_{1}^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$. So far, we have found an RCS $\left\{ \tilde{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$ satisfying $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$.
Besides, the number of MDIs in $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is smaller by one than that of $\diff_{0}(y;F_{y},F_{\lambda})$, since $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is exactly $\diff_{1}(y;F_{y},F_{\lambda})$. If $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ still contains an MDI, we can repeat the above two steps by constructing another $\t{\vlambda}_{2}^{(p)}$ s.t. $\frac{1}{p}\|\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{2}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{0}(y;F_{y},F_{\t{\lambda}_{2}})=\diff_{1}(y;F_{y},F_{\t{\lambda}_{1}})$. Since $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{1}(y;F_{y},F_{\t{\lambda}_{1}})=\diff_{2}(y;F_{y},F_{\lambda})$, we get $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{2}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{0}(y;F_{y},F_{\t{\lambda}_{2}})=\diff_{2}(y;F_{y},F_{\lambda})$. If the total number of MDIs in $\diff_{0}(y;F_{y},F_{\lambda})$ is $T<\infty$, then by continuing this process we can find a $\left\{ \tilde{\vlambda}_{T}^{(p)}\right\} _{p\in\mathbb{N}}$, which is a regular converging sequence such that $\diff_{0}(y;F_{y},F_{\t{\lambda}_{T}})$ is non-decreasing and $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{T}^{(p)}}(\vy^{(p)})\|^{2}\to0$. Using Lemma \ref{lem:0MDI_asym_sepa}, we also have \begin{align*} [\tprox_{\tilde{\vlambda}_{T}^{(p)}}(\vy^{(p)})]_{i} & =\sgn(y_{i})\diff_{0}(y_{i};F_{y},F_{\t{\lambda}_{T}})\\ & =\sgn(y_{i})\diff_{T}(y_{i};F_{y},F_{\lambda})\\ & =\prox(y_{i};F_{y},F_{\lambda}) \end{align*} so we obtain $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\to0$. \end{IEEEproof} \subsubsection{Replacement by General Converging Sequences} Now we generalize the results in Sec. \ref{subsec:asym_sepa_RCS} to general converging sequences.
The bridge between the regular and general converging sequences is provided by the following three lemmas. The first lemma shows the sensitivity of $\tproxl(\vy)$ to perturbations of $\vlambda$ and $\vy$. \begin{lem} \label{lem:replace} Let $\vy_{1},\vy_{2},\,\vlambda_{1},\,\vlambda_{2}\in\R^{p}$ and without loss of generality assume $\left\{ \lambda_{1,i}\right\} _{1\leq i\leq p}$ and $\left\{ \lambda_{2,i}\right\} _{1\leq i\leq p}$ are both non-decreasing sequences. It holds that: \[ \|\tprox_{\vlambda_{1}}(\vy_{1})-\tprox_{\vlambda_{2}}(\vy_{2})\|_{2}\leq2\left(\|\vlambda_{1}-\vlambda_{2}\|_{2}+\|\vy_{1}-\vy_{2}\|_{2}\right) \] \end{lem} \begin{IEEEproof} Let $f_{1}(\vx)=\frac{1}{2}\|\vy_{1}-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{1,i}|x|_{(i)}$ and $f_{2}(\vx)=\frac{1}{2}\|\vy_{2}-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{2,i}|x|_{(i)}$. Denote the corresponding optimal solutions by $\vx_{1}^{*}$ and $\vx_{2}^{*}$, so that $\vx_{1}^{*}=\tprox_{\vlambda_{1}}(\vy_{1})$ and $\vx_{2}^{*}=\tprox_{\vlambda_{2}}(\vy_{2})$. From the optimality of $\vx_{1}^{*}$ and $\vx_{2}^{*}$, we have: \begin{align} f_{1}(\vx_{2}^{*})-f_{1}(\vx_{1}^{*}) & =f_{2}(\vx_{2}^{*})-f_{2}(\vx_{1}^{*})+\sum_{i=1}^{p}\left(\lambda_{1,i}-\lambda_{2,i}\right)\left(|x_{2}^{*}|_{(i)}-|x_{1}^{*}|_{(i)}\right)\nonumber \\ & \quad+\sum_{i=1}^{p}\left(y_{2,i}-y_{1,i}\right)\left(x_{2,i}^{*}-x_{1,i}^{*}\right)\nonumber \\ & \leq\|\vlambda_{1}-\vlambda_{2}\|\left(\|\vx_{1}^{*}\|^{2}+\|\vx_{2}^{*}\|^{2}-2\sum_{i=1}^{p}|x_{1}^{*}|_{(i)}|x_{2}^{*}|_{(i)}\right)^{1/2}\nonumber \\ & \quad+\|\vy_{1}-\vy_{2}\|\|\vx_{1}^{*}-\vx_{2}^{*}\|\nonumber \\ & \leq\left(\|\vlambda_{1}-\vlambda_{2}\|+\|\vy_{1}-\vy_{2}\|\right)\|\vx_{1}^{*}-\vx_{2}^{*}\|\label{eq:pertleq} \end{align} On the other hand, since $f_{1}(\vx)$ is strongly convex, at the optimal solution $\vx_{1}^{*}$, we have: \begin{align} f_{1}(\vx_{2}^{*})-f_{1}(\vx_{1}^{*}) &
\geq\nabla^{\T}f_{1}(\vx_{1}^{*})(\vx_{2}^{*}-\vx_{1}^{*})+\frac{1}{2}\|\vx_{2}^{*}-\vx_{1}^{*}\|^{2}\nonumber \\ & =\frac{1}{2}\|\vx_{2}^{*}-\vx_{1}^{*}\|^{2}\label{eq:pertgeq} \end{align} Combining (\ref{eq:pertleq}) and (\ref{eq:pertgeq}), we obtain: \[ \|\vx_{1}^{*}-\vx_{2}^{*}\|\leq2\left(\|\vlambda_{1}-\vlambda_{2}\|+\|\vy_{1}-\vy_{2}\|\right) \] \end{IEEEproof} The second lemma shows the convergence of the empirical quantile function to its theoretical counterpart in the $L^{2}$ sense: \begin{lem} \label{lem:seq_similarity}Let $\left\{ \vx^{(p)}\right\} _{p\in\mathbb{N}}$ be a converging sequence satisfying (A.1)-(A.4) with limiting CDF $F(x)$. Let $F^{-1}(z)=\inf\left\{ x:\:F(x)\geq z\right\} $ be the corresponding quantile function. When $p\to\infty$, the following holds: \begin{equation} \frac{1}{p}\sum_{i=1}^{p}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\to0\label{eq:L2_quantile} \end{equation} \end{lem} \begin{IEEEproof} We can write $x_{(i)}=\h F_{p+1}^{-1}(\frac{i}{p+1})$, where $\h F_{p}^{-1}(z)$ is the quantile function of the empirical measure $\mu_{p}(x)=\frac{1}{p}\sum_{i=1}^{p}\delta(x-x_{i})$. Choose $\veps>0$ such that $F^{-1}(z)$ is continuous at $z=\veps,1-\veps$ and let $A_{\veps}=\max_{z\in(\veps,1-\veps)}|F^{-1}(z)|$ and $I_{(p),\veps}\bydef\left\{ i\Big|\frac{i}{p+1}\in(\veps,1-\veps),\,|x_{(i)}|\leq A_{\veps}\right\} $. The summation in (\ref{eq:L2_quantile}) can be partitioned into two parts: (a) $\frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}$, (b) $\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}$. We next analyze these two parts separately. To control part (a), we follow the method for proving the Glivenko-Cantelli theorem \cite{van2000asymptotic}. Let us start by constructing a sequence of points $\left\{ z_{k}\right\} _{1\leq k\leq K}$ in the following way: (1) Find $\veps=w_{1}<w_{2}<\cdots<w_{\ell}=1-\veps$ satisfying $F^{-1}(w_{j+1})-F^{-1}(w_{j}^{+})\leq\veps$.
This can be done by setting $w_{j}=\inf\left\{ w\Big|F^{-1}(w_{j+1})-F^{-1}(w)\leq\veps\right\} $, $j=1,2,\ldots,\ell-1$. In this way, we also have $F^{-1}(w_{j+1})-F^{-1}(w_{j})\geq\veps$; otherwise, by the left-continuity of $F^{-1}(\cdot)$, there would exist $w_{j}'<w_{j}$ with $F^{-1}(w_{j+1})-F^{-1}(w_{j}')\leq\veps$, which contradicts the definition of $w_{j}$. Therefore, we have $\ell\leq\frac{2A_{\veps}}{\veps}+1$. (2) For any $w_{k}$ that is a discontinuity point of $F^{-1}(\cdot)$, find two continuity points $w_{k,L}$ and $w_{k,R}$, s.t. $w_{k}\in(w_{k,L},\,w_{k,R})$ and $w_{k,R}-w_{k,L}<\veps_{1}/2$. Since $F^{-1}(\cdot)$ has at most countably many discontinuity points, $w_{k,L}$ and $w_{k,R}$ always exist, and $\veps_{1}$ can be made arbitrarily small. (3) Add all $w_{k,L}$, $w_{k,R}$ to $\left\{ w_{k}\right\} _{1\leq k\leq\ell}$ to get $\left\{ z_{i}\right\} _{1\leq i\leq K}$, and we know $K\leq3\left(\frac{2A_{\veps}}{\veps}+1\right)$. The intervals $(z_{k-1},z_{k})$ formed by $\left\{ z_{i}\right\} _{1\leq i\leq K}$ can be categorized into two types: (1) at least one of $z_{k-1}$, $z_{k}$ is a discontinuity point of $F^{-1}(\cdot)$; (2) both $z_{k-1}$ and $z_{k}$ are continuity points of $F^{-1}(\cdot)$. Let us use $C_{(p),\veps,\veps_{1}}$ to denote the set of all $i\in I_{(p),\veps}$ s.t. $\frac{i}{p+1}$ falls within the intervals of type (1). The cardinality of $C_{(p),\veps,\veps_{1}}$ satisfies: $|C_{(p),\veps,\veps_{1}}|\leq\left(\frac{2A_{\veps}}{\veps}+1\right)\veps_{1}p$. For any $i\in I_{(p),\veps}\backslash C_{(p),\veps,\veps_{1}}$ with $w_{k}<\frac{i}{p+1}\leq w_{k+1}$, where $w_{k}$ and $w_{k+1}$ are continuity points of $F^{-1}(\cdot)$, we have \[ \h F_{p+1}^{-1}(w_{k}^{+})\leq\h F_{p+1}^{-1}(\frac{i}{p+1})\leq\h F_{p+1}^{-1}(w_{k+1}) \] and \[ F^{-1}(w_{k}^{+})\leq F^{-1}(\frac{i}{p+1})\leq F^{-1}(w_{k+1}) \] which gives us \[ F^{-1}(w_{k})-F^{-1}(w_{k+1})-\delta_{k}\leq\h F_{p+1}^{-1}(\frac{i}{p+1})-F^{-1}(\frac{i}{p+1})\leq F^{-1}(w_{k+1})-F^{-1}(w_{k})+\delta_{k+1}, \] where $\delta_{k}=|\h F_{p+1}^{-1}(w_{k})-F^{-1}(w_{k})|$ and we have used the continuity of $F^{-1}(\cdot)$ at $w_{k}$ and $w_{k+1}$.
Using the fact that $\h F_{p}^{-1}(\cdot)$ converges to $F^{-1}(\cdot)$ at the continuity points of the latter \cite{van2000asymptotic}, we have $\delta_{k}$, $\delta_{k+1}\to0$ as $p\to\infty$. Also by construction, we have $|F^{-1}(w_{k+1})-F^{-1}(w_{k})|\leq\veps$. Then we have: \begin{align*} \frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2} & =\frac{1}{p}\sum_{i\in C_{(p),\veps,\veps_{1}}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\\ & \quad+\frac{1}{p}\sum_{i\in I_{(p),\veps}\backslash C_{(p),\veps,\veps_{1}}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\\ & \leq\left(\frac{2A_{\veps}}{\veps}+1\right)\veps_{1}A_{\veps}^{2}+2\left(\veps^{2}+\max_{k}\delta_{k}^{2}\right) \end{align*} where the maximum is taken over the finitely many continuity points $w_{k}$ used above. Hence for any $\veps>0$, we can find small enough $\veps_{1}$ and large enough $p$ such that $\frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\leq c\veps^{2}$ for some constant $c>0$. Next we analyze part (b). Since $\frac{1}{p}\sum_{i=1}^{p}x_{i}^{2}\to\E X^{2}$ and $\frac{1}{p}\sum_{i\in I_{(p),\veps}}x_{i}^{2}\leq\frac{1}{p}\sum_{i=1}^{p}x_{i}^{2}$, by the dominated convergence theorem, we have $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}x_{i}^{2}=0$. Similarly, $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[F^{-1}(\frac{i}{p+1})\right]^{2}=0$. Therefore, $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}=0$. Combined with the analysis of part (a), we conclude that $\frac{1}{p}\sum_{i=1}^{p}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\to0$. \end{IEEEproof} The third lemma shows that any $\prox(y;F_{y},F_{\lambda})$ obtained from Algorithm \ref{alg:conti} is non-decreasing and Lipschitz continuous with constant $1$.
\begin{lem} \label{lem:regu_Lipschitz}For any $F_{y}$ and $F_{\lambda}$, $\prox(y;F_{y},F_{\lambda})$ obtained from Algorithm \ref{alg:conti} satisfies: $0\leq\prox(y_{2};F_{y},F_{\lambda})-\prox(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, $\forall y_{1}\leq y_{2}$. \end{lem} \begin{IEEEproof} Based on Proposition \ref{prop:G_Lipschitz} and the fact that $\prox(y;F_{y},F_{\lambda})=\sgn(y)\max\left\{ 0,\diff_{t}(|y|;F_{y},F_{\lambda})\right\} $, we know that $\prox(y;F_{y},F_{\lambda})$ is also non-decreasing. Also, using the inequality $\left|\max\left\{ 0,\,x_{1}\right\} -\max\left\{ 0,\,x_{2}\right\} \right|\leq|x_{1}-x_{2}|$, we get $\prox(y_{2};F_{y},F_{\lambda})-\prox(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$. \end{IEEEproof} We are now ready to prove Proposition \ref{prop:prox}. \subsubsection{Proof of Proposition \ref{prop:prox}} Let us denote the converging sequences in (\ref{eq:regu_convseq}) by $\left\{ \vy_{r}^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda_{r}^{(p)}\right\} _{p\in\mathbb{N}}$. For any converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$, from Lemma \ref{lem:seq_similarity}, we know $\frac{1}{p}\|\vy_{r}^{(p)}-\vy^{(p)}\|^{2},\,\frac{1}{p}\|\vlambda_{r}^{(p)}-\vlambda^{(p)}\|^{2}\to0$, as $p\to\infty$. Using Lemma \ref{lem:replace}, we have $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\vlambda_{r}^{(p)}}(\vy_{r}^{(p)})\|^{2}\to0$. Besides, from Lemma \ref{lem:asym_sepa_regu}, $\frac{1}{p}\|\tprox_{\vlambda_{r}^{(p)}}(\vy_{r}^{(p)})-\prox(\vy_{r}^{(p)};F_{y},F_{\lambda})\|^{2}\to0$ and from Lemma \ref{lem:regu_Lipschitz}, $\frac{1}{p}\|\prox(\vy_{r}^{(p)};F_{y},F_{\lambda})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\leq\frac{1}{p}\|\vy_{r}^{(p)}-\vy^{(p)}\|^{2}\to0$. Combining the above convergence results, we obtain $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\to0$.
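The finite-$p$ proximal operator $\tproxl$ appearing above can be computed exactly by a stack-based pool-adjacent-violators pass, which mirrors the MDS averaging used in the proofs. The following Python sketch is ours, not part of the paper; it uses the standard SLOPE convention (non-increasing regularization sequence paired with non-increasingly sorted magnitudes, rather than the non-decreasing indexing used in Algorithm \ref{alg:discrete}), and checks the simplest instance of separability: for a constant sequence the prox reduces to coordinatewise soft thresholding.

```python
import numpy as np

def slope_prox(y, lam):
    """Prox of the sorted-L1 penalty
        argmin_x 0.5*||y - x||^2 + sum_i lam[i] * |x|_(i),
    with lam non-increasing and |x|_(i) the magnitudes sorted in
    non-increasing order (the standard SLOPE convention)."""
    sign = np.sign(y)
    order = np.argsort(np.abs(y))[::-1]      # indices sorting |y| decreasingly
    z = np.abs(y)[order] - lam
    # Pool adjacent violators: average any block that violates the required
    # non-increasing order (the analogue of the MDS averaging step).
    blocks = []                              # each entry: [value, length]
    for v in z:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] <= blocks[-1][0]:
            v2, l2 = blocks.pop()
            v1, l1 = blocks.pop()
            blocks.append([(v1 * l1 + v2 * l2) / (l1 + l2), l1 + l2])
    x_sorted = np.concatenate([np.full(int(l), max(v, 0.0)) for v, l in blocks])
    x = np.empty_like(x_sorted)
    x[order] = x_sorted                      # undo the sorting
    return sign * x

rng = np.random.default_rng(0)
p = 1000
y = 2.0 * rng.standard_normal(p)
# Constant sequence: the penalty is c*||x||_1, so the prox must coincide
# with the separable soft-thresholding map.
c = 0.8
gap = np.max(np.abs(slope_prox(y, np.full(p, c))
                    - np.sign(y) * np.maximum(np.abs(y) - c, 0.0)))
print(gap)
```

For a genuinely non-constant sequence the prox is no longer exactly separable at finite $p$; Proposition \ref{prop:prox} states that it nevertheless becomes separable in the $L^{2}$-average sense as $p\to\infty$.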
\section{Conclusion \label{sec:Conclusions}} We have established the asymptotic characterization of SLOPE in the high-dimensional regime. Although SLOPE is a high-dimensional regularized regression method, asymptotically its statistical performance can be fully characterized by a few scalar random variables. We showed that this low-dimensional characterization enables us to design a regularization sequence that yields optimal estimation and variable-selection performance for SLOPE. \subsection{Proof of Asymptotic Characterization} \subsubsection{Convergence in Wasserstein Distance} \begin{comment} In this section, we first prove an intermediate result (Lemma \ref{lem:CGMT}) that leads to our asymptotic characterization in Theorem \ref{thm:asymp_char}. The main idea is to express the limiting joint distribution of $(\sol,\,\sgl)$ via the proximal operator $\tproxl$. Then, combining with Proposition \ref{prop:prox}, we can directly prove Theorem \ref{thm:asymp_char}. \begin{lem} \label{lem:CGMT}Let $\left\{ \sgl^{(p)},\,\dmtx^{(p)},\noise^{(p)},\,\vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ be a converging sequence satisfying our main assumptions in Sec. \ref{subsec:DefandAssump}.
Then for a pseudo-Lipschitz function $\psi$, we have: \[ \frac{1}{p}\sum_{i=1}^{p}\psi(\soli_{i},\,\sgli_{i})\pconv\lim_{p\to\infty}\E\{\frac{1}{p}\sum_{i=1}^{p}\psi([\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]_{i},\sgli_{i})\} \] where $\equinoise\sim\mathcal{N}(0,\mI)$, $\tprox_{\tau\vlambda}(\vy)=\underset{x}{\text{argmin }}\frac{1}{2}\|\vy-\vx\|^{2}+\tau\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}$ with $[\tproxl(\vy)]_{i}$ denoting its $i$th coordinate and $(\sigma,\,\tau)$ is the solution of the following equations: \begin{align} \sigma^{2} & =\sigma_{w}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}\label{eq:sigmatau_eq1}\\ 1 & =\tau(1-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E[\text{div}(\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise))])\label{eq:sigmatau_eq2} \end{align} \end{lem} We can see that the result of Lemma \ref{lem:CGMT} is very similar to that of Theorem \ref{thm:asymp_char}. The difference is that in Lemma \ref{lem:CGMT}, the asymptotic characterization is expressed in terms of the limiting coordinate-wise average of the proximal vector, while in Theorem \ref{thm:asymp_char} this average is replaced by an equivalent scalar owing to Proposition \ref{prop:prox}. \end{comment} Theorem \ref{thm:asymp_char} essentially shows that the empirical measure $\mu_{p}(\h{\sgl},\sgl)$ converges to the joint distribution \begin{equation} \mu^{*}\bydef\left(\prox(B+\sigma^{*}Z;F_{y},F_{\tau^{*}\lambda}),B\right),\label{eq:mu_star} \end{equation} where $(\sigma^{*},\tau^{*})$ is the unique solution of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), $B\sim F_{\beta}$, and $Z\sim\calN(0,1)$ is independent of $B$. This convergence is quantified by the Wasserstein distance.
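For empirical measures on the real line with equally many atoms, the infimum over couplings in the definition of $W_{2}$ is attained by matching order statistics, so the distance reduces to a root-mean-square difference of sorted samples. The following Python sketch is ours (the two distributions are illustrative choices) and checks this optimality against an arbitrary alternative coupling:

```python
import numpy as np

def w2_empirical(x, y):
    """W2 between two empirical measures on R with the same number of
    atoms: the optimal coupling matches order statistics."""
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)                 # samples ~ N(0, 1)
y = 0.5 + 1.5 * rng.standard_normal(n)     # samples ~ N(0.5, 1.5^2)
d_opt = w2_empirical(x, y)
# Any other pairing of atoms is a valid but generally suboptimal coupling,
# so its transport cost can only be larger (rearrangement inequality).
perm = rng.permutation(n)
d_perm = np.sqrt(np.mean((np.sort(x) - np.sort(y)[perm]) ** 2))
print(d_opt, d_perm)
# For these two Gaussians the population W2 is sqrt(0.5^2 + 0.5^2) ~ 0.707.
```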
For two probability measures $\mu$ and $\nu$ with bounded second moment, we define the Wasserstein distance as: \[ W_{2}(\mu,\nu)=\left(\inf_{\gamma\in\mathcal{C}(\mu,\nu)}\int\|x-y\|_{2}^{2}d\gamma(x,y)\right)^{1/2} \] where $\mathcal{C}(\mu,\nu)$ denotes the set of all couplings of $\mu$ and $\nu$. For any pseudo-Lipschitz function $\psi:\R^{2}\to\R$ and empirical measures $\left\{ \mu_{p}\right\} _{p\in\mathbb{N}}$ on $\R^{2}$, it holds that $\lim_{p\to\infty}\int\psi(x)d\mu_{p}(x)=\int\psi(x)d\mu^{*}(x)$ if and only if $W_{2}(\mu_{p},\mu^{*})\to0$ \cite{villani2008optimal}. Therefore, (\ref{eq:weak_convergence}) in Theorem \ref{thm:asymp_char} is equivalent to saying $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu^{*})\to0$. In Proposition \ref{prop:empi_Wdis_conv} below, we first show that $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))\pconv0$, where \begin{equation} \sgli_{*,i}\bydef\prox(\sgli_{i}+\sigma^{*}Z_{i};F_{y},F_{\tau^{*}\lambda}).\label{eq:sgl_star_i} \end{equation} Here $\prox(y;F_{y},F_{\lambda})$ is the limiting scalar function defined in Proposition \ref{prop:prox}, $(\sigma^{*},\tau^{*})$ is the solution of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), $Z_{i}\iid\calN(0,1)$, $Y\sim B+\sigma Z$ with $B\sim F_{\beta}$, and $\lambda\sim F_{\lambda}$. Then, combining it with standard results on the concentration of empirical measures, we can prove Theorem \ref{thm:asymp_char}. \begin{comment} \[ \lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\h{\sgli}_{i},\sgli_{i})=\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\prox(\sgli_{i}+\sigma^{*}Z_{i};F_{y},F_{\tau^{*}\lambda}),\sgli_{i}) \] \end{comment} \begin{prop} \label{prop:empi_Wdis_conv}$\forall\epsilon\in(0,1/2)$, \[ \P\left(W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq\epsilon\right)\leq\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $\sgl_{*}$ is defined in (\ref{eq:sgl_star_i}).
\end{prop} In the following sections, we will present the proof of Proposition \ref{prop:empi_Wdis_conv}. It follows the proofs in \cite{thrampoulidis2018precise,miolane2018distribution,thrampoulidis2015regularized}. \subsubsection{Convex Gaussian Min-max Theorem} The proof of Proposition \ref{prop:empi_Wdis_conv} hinges on the Convex Gaussian Min-max Theorem (CGMT). For completeness, we summarize the main idea here. In \cite{thrampoulidis2018precise}, using the CGMT framework, the asymptotic MSE is derived for the regularized M-estimator: \begin{equation} \hat{\sgl}=\argmin{\vx}\,\mathcal{L}(\vy-\dmtx\vx)+f(\vx)\label{eq:Mesti} \end{equation} where $\mathcal{L}(\cdot)$ is the loss function and $f(\cdot)$ is the regularizer. Here, in the SLOPE case, $\mathcal{L}(\vx)=\frac{1}{2}\|\vx\|^{2}$ and $f(\vx)=\regu(\vx)$. The CGMT studies a min-max primary optimization (PO) problem of the following form: \begin{equation} \Phi(\mG)=\min_{\vv\in\comset_{\vv}}\max_{\vu\in\comset_{\vu}}\vu^{\T}\mG\vv+\psi(\vv,\vu),\label{eq:PO} \end{equation} where $\comset_{\vv}\subset\R^{p}$, $\comset_{\vu}\subset\R^{n}$ are two compact sets, $\psi:\comset_{\vv}\times\comset_{\vu}\mapsto\R$ is a continuous convex-concave function w.r.t. $(\vv,\vu)$ and $\mG\iid\calN(0,1)$. Problem (\ref{eq:PO}) can be associated with the following auxiliary optimization (AO) problem, which is usually easier to analyze: \begin{equation} \phi(\vg,\vh)=\min_{\vv\in\comset_{\vv}}\max_{\vu\in\comset_{\vu}}\|\vv\|_{2}\vg^{\T}\vu+\|\vu\|_{2}\vh^{\T}\vv+\psi(\vv,\vu),\label{eq:AO} \end{equation} where $\vg\sim\calN(\mathbf{0},\mI_{n})$ and $\vh\sim\calN(\mathbf{0},\mI_{p})$. Following the CGMT framework, we first write (\ref{eq:slope_opt}) in the form of (\ref{eq:PO}).
Letting $\vv=\vx-\sgl$ in (\ref{eq:Mesti}), we can rewrite the optimization problem in (\ref{eq:slope_opt}) as: \begin{equation} \h{\vv}=\argmin{\vv\in\R^{p}}C_{\vlambda}(\vv)\label{eq:slope_opt1} \end{equation} where \begin{equation} C_{\vlambda}(\vv)\bydef\frac{1}{2}\|\dmtx\vv-\noise\|^{2}+\regu(\vv+\vbeta).\label{eq:C_lambda} \end{equation} and we have $\h{\vv}=\h{\sgl}-\sgl$, which is the error vector. In (\ref{eq:slope_opt1}), the feasible set of $\vv$ is unbounded, while in (\ref{eq:PO}) and (\ref{eq:AO}), it is bounded. To facilitate the analysis later, we will first enforce an artificial boundedness on $\vv$ as in \cite{thrampoulidis2018precise}. Specifically, we will consider a bounded version of (\ref{eq:slope_opt1}) as follows: \begin{equation} \h{\vv}^{B}=\argmin{\vv\in\comset_{\vv}(K)}C_{\vlambda}(\vv),\label{eq:slope_opt1_constrained} \end{equation} where \[ \comset_{\vv}(K)\bydef\left\{ \vv\Big|\|\vv\|_{2}/\sqrt{n}\leq K\right\} \] with $K>0$ a sufficiently large constant. Using the convexity of (\ref{eq:C_lambda}), it can be proved that if $\|\h{\vv}^{B}\|/\sqrt{n}\pconv\alpha_{*}<K$, we have $\|\h{\vv}\|/\sqrt{n}\pconv\alpha_{*}<K$. Therefore, by choosing a large enough $K$, we can work with the constrained problem (\ref{eq:slope_opt1_constrained}) to derive the asymptotic results. After introducing an auxiliary variable $\vu$ and normalizing the cost function by $1/n$, problem (\ref{eq:slope_opt1_constrained}) becomes: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\max_{\vu}\frac{1}{n}\left[\frac{\vu^{\T}}{\sqrt{n}}\left(\sqrt{n}\dmtx\right)\vv-\vu^{\T}\noise-\frac{\|\vu\|^{2}}{2}+\regu(\vv+\vbeta)\right],\label{eq:slope_PO} \end{equation} which satisfies the form of (\ref{eq:PO}).
The corresponding AO is: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\max_{\vu}\frac{1}{n}\left[-\frac{\|\vu\|}{\sqrt{n}}\vh^{\T}\vv-\frac{\|\vv\|}{\sqrt{n}}\vg^{\T}\vu-\vu^{\T}\noise-\frac{\|\vu\|^{2}}{2}+\regu(\vv+\vbeta)\right],\label{eq:slope_AO} \end{equation} where $\vg\sim\calN(\mathbf{0},\mI_{n})$ and $\vh\sim\calN(\mathbf{0},\mI_{p})$. Taking $\theta\bydef\frac{\|\vu\|}{\sqrt{n}}$, (\ref{eq:slope_AO}) is equivalent to: \[ \min_{\vv\in\comset_{\vv}(K)}\max_{\theta\geq0}l(\vv,\theta), \] where \begin{align*} l(\vv,\theta) & \bydef\theta\left(\left\Vert \frac{\|\vv\|}{n}\vg+\frac{\noise}{\sqrt{n}}\right\Vert -\frac{\vh^{\T}\vv}{n}\right)-\frac{1}{2}\theta^{2}+\frac{\regu(\vv+\vbeta)}{n}\\ & =\theta\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}}-\frac{\vh^{\T}\vv}{n}\right)-\frac{1}{2}\theta^{2}+\frac{\regu(\vv+\vbeta)}{n} \end{align*} and after optimizing over $\theta$, we get the optimization problem: \[ \min_{\vv\in\comset_{\vv}(K)}L(\vv), \] where \begin{equation} L(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}}-\frac{\vh^{\T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n}.\label{eq:Lv} \end{equation} The following result is from \cite{miolane2018distribution,thrampoulidis2015regularized}: \begin{prop} \label{prop:CGMT_C_L}Let $D\subseteq\comset_{\vv}(K)$ be a convex closed set, then $\forall t\in\R$, we have: \begin{equation} \P(\min_{\vv\in D}C_{\vlambda}(\vv)\leq t)\leq2\P(\min_{\vv\in D}L(\vv)\leq t)\label{eq:CGMT_C_L1} \end{equation} and \begin{equation} \P(\min_{\vv\in D}C_{\vlambda}(\vv)\geq t)\leq2\P(\min_{\vv\in D}L(\vv)\geq t).\label{eq:CGMT_C_L2} \end{equation} \end{prop} In \cite{miolane2018distribution,thrampoulidis2015regularized}, the authors consider Gaussian noise $\noise\sim\calN(\mathbf{0},\mI_{n})$ and $L(\vv)$ can be written equivalently as a convex
function. Here, $L(\vv)$ in (\ref{eq:Lv}) is not a convex function, so the previous results cannot be directly applied. Note that the non-convexity of (\ref{eq:Lv}) stems from the term $\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}$. For large enough $n$, $\frac{\vg^{\T}\noise}{n}\approx0$, i.e., the non-convex term is asymptotically negligible. In addition, $\frac{\|\vg\|^{2}}{n}\approx1$ and $\frac{\|\noise\|^{2}}{n}\approx\sigma_{\noisei}^{2}$, when $n,p\to\infty$. Therefore, we can study the following function \begin{comment} \[ \widetilde{L}(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}}-\frac{\vh^{T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n}, \] \end{comment} \begin{equation} \widetilde{L}(\vv)\bydef\frac{1}{2}\left(\sqrt{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}-\frac{\vh^{\T}\vv}{n}\right)_{+}^{2}+\frac{\regu(\vv+\vbeta)}{n}.\label{eq:tilde_L_v} \end{equation} The next lemma shows that for large enough $p$, $L(\vv)$ and $\widetilde{L}(\vv)$ are close to each other for $\vv\in\comset_{\vv}(K)$. \begin{lem} \label{lem:perturb_Lv}$\forall\epsilon\in(0,1/2)$, we have \[ \sup_{\vv\in\comset_{\vv}(K)}|L(\vv)-\widetilde{L}(\vv)|<c\epsilon \] with probability $1-\delta^{(p)}$, where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ is a constant that does not depend on $p$. \end{lem} \begin{IEEEproof} Define $g(\Delta x;x,y)\bydef(\sqrt{x+\Delta x}-y)_{+}^{2}-(\sqrt{x}-y)_{+}^{2}.$ It is not hard to show $|g(\Delta x;x,y)|\leq|\Delta x|\left(1+\frac{|y|}{\sqrt{x}}\right)$.
Then letting $x=\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}$, $\Delta x=\frac{\|\vv\|^{2}}{n}\left(\frac{\|\vg\|^{2}}{n}-1\right)+\left(\frac{\|\noise\|^{2}}{n}-\sigma_{\noisei}^{2}\right)+2\frac{\|\vv\|}{\sqrt{n}}\frac{\vg^{\T}\noise}{n}$ and $y=\frac{\vh^{\T}\vv}{n}$ in $g(\Delta x;x,y)$, we get \begin{equation} |L(\vv)-\widetilde{L}(\vv)|\leq|\Delta x|\left(1+\left|\frac{\vh^{\T}\vv}{n}\right|/\sqrt{\frac{\|\vv\|^{2}}{n}\frac{\|\vg\|^{2}}{n}+\frac{\|\noise\|^{2}}{n}}\right).\label{eq:diff_Lv} \end{equation} $\forall\epsilon\in(0,1/2)$, under the event: \begin{equation} \left\{ \frac{\|\vg\|^{2}}{n},\frac{\delta\|\vh\|^{2}}{n},\frac{\|\noise\|^{2}}{\sigma_{\noisei}^{2}n}\in\left(1-\veps,1+\veps\right)\right\} \bigcap\left\{ \left|\frac{\vg^{\T}\noise}{n}\right|<\epsilon\right\} ,\label{eq:event_1} \end{equation} we can see from (\ref{eq:diff_Lv}) that $\exists c>0$ s.t. \[ \sup_{\vv\in\comset_{\vv}(K)}|L(\vv)-\widetilde{L}(\vv)|<c\epsilon. \] Since $\vg\sim\calN(\mathbf{0},\mI_{n})$, $\vh\sim\calN(\mathbf{0},\mI_{p})$ and $\left\{ \noise^{(p)}\right\} $ is a converging sequence, (\ref{eq:event_1}) occurs with probability $1-\delta^{(p)}$, where $\delta^{(p)}\to0$ as $p\to\infty$. \end{IEEEproof} As a result of Lemma \ref{lem:perturb_Lv}, $|\min_{\vv\in\comset_{\vv}(K)}L(\vv)-\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)|<c\epsilon$ with probability approaching 1.
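The three concentration statements behind the event (\ref{eq:event_1}) are ordinary laws of large numbers. The following quick numerical illustration is ours, using an i.i.d. Gaussian stand-in for the converging noise sequence and an arbitrary noise level $\sigma_{\noisei}=0.7$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_w = 0.7
for n in (100, 10_000, 1_000_000):
    g = rng.standard_normal(n)
    w = sigma_w * rng.standard_normal(n)   # i.i.d. stand-in for the noise
    print(n,
          abs(g @ w / n),                  # cross term, tends to 0
          abs(g @ g / n - 1.0),            # ||g||^2/n, tends to 1
          abs(w @ w / n - sigma_w**2))     # ||w||^2/n, tends to sigma_w^2
```

All three deviations shrink at the familiar $O(n^{-1/2})$ rate, which is why the non-convex cross term in (\ref{eq:Lv}) is asymptotically negligible.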
Combining this with Proposition \ref{prop:CGMT_C_L}, we obtain the following: \begin{prop} \label{prop:CGMT_C_tildeL}Let $D\subseteq\comset_{\vv}(K)$ be a convex closed set, then $\forall t\in\R$ and $\epsilon\in(0,1/2)$, we have \begin{comment} \[ (1-\delta^{(p)})\P(\min_{\vv\in D}L(\vv)\leq t)\leq\P(\min_{\vv\in D}\widetilde{L}(\vv)\leq t+\epsilon) \] \end{comment} \[ \P(\min_{\vv\in D}C_{\vlambda}(\vv)\leq t)\leq c\P(\min_{\vv\in D}\widetilde{L}(\vv)\leq t+c\epsilon)+\delta^{(p)} \] and \[ \P(\min_{\vv\in D}C_{\vlambda}(\vv)\geq t)\leq c\P(\min_{\vv\in D}\widetilde{L}(\vv)\geq t-c\epsilon)+\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{prop} For any finite $p$, define $\vv_{*}\in\R^{p}$: \begin{equation} v_{*,i}=\sgli_{*,i}-\sgli_{i},\label{eq:v_star_p} \end{equation} where $\sgli_{*,i}$ is defined in (\ref{eq:sgl_star_i}) and $\vh$ is the same Gaussian vector as in (\ref{eq:tilde_L_v}). The next result is the counterpart of Theorem B.1 in \cite{miolane2018distribution}. It shows that \[ \h{\vv}_{\widetilde{L}}\bydef\argmin{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv), \] i.e., the minimizer of $\widetilde{L}(\vv)$ over $\comset_{\vv}(K)$, converges to $\vv_{*}$ with high probability in the $L^{2}$ sense. \begin{prop} \label{prop:local_stability}$\forall K>0$ and $\epsilon\in(0,1/2)$, we have for $\vv_{*}$ defined in (\ref{eq:v_star_p}), \[ \P\left(\exists\vv\in\comset_{\vv}(K)\text{ s.t. }\frac{1}{p}\|\vv-\vv_{*}\|^{2}>\epsilon\text{ and }\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{prop} Proposition \ref{prop:local_stability} can be proved in the same way as Theorem B.1 of \cite{miolane2018distribution}.
There are two key steps: (1) show that $\widetilde{L}(\vv)$ is strongly convex in a neighborhood of $\vv_{*}$; (2) show $\widetilde{L}(\vv_{*})\approx\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)$. Besides, both events happen with high probability as $p\to\infty$. These are summarized in Lemmas \ref{lem:strongly_convex} and \ref{lem:closeness_v_vhat} below. \begin{lem} \label{lem:strongly_convex}$\exists r,\gamma>0$ s.t. $\widetilde{L}(\vv)$ is $\frac{\gamma}{n}$-strongly convex on $\left\{ \vv\in\R^{p}\Big|\|\vv-\vv_{*}\|\leq\sqrt{n}r\right\} $ with probability greater than $1-\delta^{(p)}$, where $\delta^{(p)}\to0$ as $p\to\infty$. \end{lem} \begin{IEEEproof} The proof follows the same steps as in \cite{miolane2018distribution}, and we omit the details here. \end{IEEEproof} \begin{lem} \label{lem:closeness_v_vhat}$\forall K>0$, $\epsilon\in(0,1/2)$ and $\vv_{*}$ defined in (\ref{eq:v_star_p}), we have: \[ \P\left(\widetilde{L}(\vv_{*})\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\geq1-\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{lem} \begin{IEEEproof} The proof follows that of Proposition B.2 in \cite{miolane2018distribution}.
Define \begin{align*} \t l(\vv,\theta) & =\theta\left(\sqrt{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}-\frac{\vh^{\T}\vv}{n}\right)-\frac{\theta^{2}}{2}+\frac{\regu(\vv+\vbeta)}{n} \end{align*} and we have \begin{equation} \widetilde{L}(\vv)=\max_{\theta\geq0}\t l(\vv,\theta).\label{eq:tilde_L_l} \end{equation} For $\vv\in\comset_{\vv}(K)$, $\t l(\vv,\theta)$ can be written as: \begin{align*} \t l(\vv,\theta) & =\theta\left(\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left\{ \frac{\sigma}{2}+\frac{\frac{\|\vv\|^{2}}{n}+\sigma_{\noisei}^{2}}{2\sigma}\right\} -\frac{\vh^{\T}\vv}{n}\right)-\frac{\theta^{2}}{2}+\frac{\regu(\vv+\vbeta)}{n}\\ & =\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left\{ \frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{G(\vv,\sigma,\theta;\sgl,\vh)}{n}\right\} \end{align*} where \begin{equation} G(\vv,\sigma,\theta;\sgl,\vh)=\frac{\theta\|\vv\|^{2}}{2\sigma}-\theta\vh^{\T}\vv+\regu(\vv+\vbeta)\label{eq:G_v_sgl_h} \end{equation} Then $\forall\theta\geq0$ we have: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\t l(\vv,\theta)=\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}F(\sigma,\vh;\theta)\label{eq:tilde_l_F} \end{equation} where $F(\sigma,\vh;\theta)$ is defined as: \[ F(\sigma,\vh;\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}\min_{\vv\in\comset_{\vv}(K)}G(\vv,\sigma,\theta;\sgl,\vh) \] Similar to \cite{miolane2018distribution}, it can be shown for fixed $\sgl$ and $\theta$, $F(\sigma,\vh;\theta)$ converges to its mean: \[ \E_{\vh}F(\sigma,\vh;\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}\E\min_{\vv\in\comset_{\vv}(K)}G(\vv,\sigma,\theta;\sgl,\vh) \] uniformly over $[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]$ in probability, i.e., \begin{equation} 
\P\left(\sup_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}|F(\sigma,\vh;\theta)-\E_{\vh}F(\sigma,\vh;\theta)|>c\epsilon\right)<\delta^{(p)},\label{eq:unif_conv_F} \end{equation} where $c>0$ and $\delta^{(p)}\to0$ as $p\to\infty$. Then we define \begin{align*} \Psi^{(p)}(\sigma,\theta) & =\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+\frac{1}{n}G(\sigma,\theta;\sgl,\vh) \end{align*} and \begin{equation} \Psi(\sigma,\theta)=\frac{\theta}{2}\left(\frac{\sigma_{\noisei}^{2}}{\sigma}+\sigma\right)-\frac{\theta^{2}}{2}+G(\sigma,\theta)\label{eq:Psi_sigma_theta} \end{equation} where \begin{align*} G(\sigma,\theta;\sgl,\vh) & =\E\min_{\vv\in\R^{p}}G(\vv,\sigma,\theta;\sgl,\vh)\\ & =\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})-\frac{\theta\sigma}{2}p \end{align*} and \[ G(\sigma,\theta)=\frac{1}{\delta}\left[\lim_{p\to\infty}\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})-\frac{\theta\sigma}{2}\right]. \] with $e_{f}(\vx;\tau)$ defined later in (\ref{eq:Moreau_f}). Here, $G(\sigma,\theta)$ can be considered as the limit of $\frac{1}{n}G(\sigma,\theta;\sgl,\vh)$ as $p\to\infty$. The existence of the limit $\lim_{p\to\infty}\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})$ will be verified later in Lemma \ref{lem:conv_F}. Also let \begin{eqnarray} \h{\vv}_{\tprox} & = & \argmin{\vv\in\R^{p}}G(\vv,\sigma,\theta;\sgl,\vh)\nonumber \\ & = & \tprox_{\frac{\sigma\regu}{\theta}}(\sgl+\sigma\vh)-\sgl\label{eq:v_hat_p} \end{eqnarray} where $G(\vv,\sigma,\theta;\sgl,\vh)$ is defined in (\ref{eq:G_v_sgl_h}).
Clearly, we have \begin{equation} \E_{\vh}F(\sigma,\vh;\theta)\geq\Psi^{(p)}(\sigma,\theta)\label{eq:EF_Psi} \end{equation} From the uniform convergence of $\frac{1}{p}\E e_{\regu}(\sgl+\sigma\vh;\frac{\sigma}{\theta})$ proved in Lemma \ref{lem:F_c_tau_uniform_conv} later, we have $\forall\epsilon\in(0,1/2)$: \begin{equation} \P\left(\sup_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\left|\Psi(\sigma,\theta)-\Psi^{(p)}(\sigma,\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:unif_conv_Psi} \end{equation} From (\ref{eq:unif_conv_F}) and (\ref{eq:unif_conv_Psi}), we know $\forall\epsilon\in(0,1/2)$ and $\theta\geq0$, \begin{equation} \P\left(\left|\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}F(\sigma,\vh;\theta)-\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\E_{\vh}F(\sigma,\vh;\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:min_F_conv} \end{equation} and \begin{equation} \P\left(\left|\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\Psi^{(p)}(\sigma,\theta)-\min_{\sigma\in[\sigma_{\noisei},\sqrt{\sigma_{\noisei}^{2}+K^{2}}]}\Psi(\sigma,\theta)\right|>c\epsilon\right)<\delta^{(p)}\label{eq:min_Psi_conv} \end{equation} Letting $\theta=\theta^{*}$ and combining (\ref{eq:tilde_L_l})-(\ref{eq:EF_Psi}), (\ref{eq:min_F_conv}) and (\ref{eq:min_Psi_conv}), we have $\forall\epsilon\in(0,1/2)$: \begin{equation} \min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)\geq\Psi(\sigma^{*},\theta^{*})-c\epsilon\label{eq:L_tilde_opt_conv} \end{equation} with probability $1-c\delta^{(p)}$, where $c>0$ is a constant that does not depend on $p$ and $(\sigma^{*},\theta^{*})$ is the optimal solution of $\min_{\sigma\geq0}\max_{\theta\geq0}\Psi(\sigma,\theta)$. From Lemma \ref{lem:F_c_tau_uniform_conv} in the next section, we know $\Psi(\sigma,\theta)$ is continuously differentiable. Besides, it is not hard to show $\Psi(\sigma,\theta)$ is convex-concave w.r.t. 
$(\sigma,\theta)$ based on Lemma \ref{lem:F_c_tau_uniform_conv}. Moreover, we can also show that $(\sigma^{*},\theta^{*})$ lies in the interior of its domain. To see this, we first show that $\forall\sigma_{0}\geq0$, $\theta^{*}(\sigma_{0})\in(0,\infty)$, where $\theta^{*}(\sigma_{0})=\argmax{\theta\geq0}\Psi(\sigma_{0},\theta)$. We can show \[ \frac{\partial G(\sigma_{0},\theta)}{\partial\theta}=\lim_{n\to\infty}\E\left[\frac{\|\h{\vv}_{\tprox}\|^{2}}{2n\sigma_{0}}-\frac{\vh^{\T}\h{\vv}_{\tprox}}{n}\right] \] and from (\ref{eq:v_hat_p}), we know $\lim_{\theta\to0}\frac{\partial G(\sigma_{0},\theta)}{\partial\theta}=\frac{\E\sgli^{2}}{2\sigma_{0}\delta}>0$, so $\lim_{\theta\to0}\frac{\partial\Psi(\sigma_{0},\theta)}{\partial\theta}>0$, $\forall\sigma_{0}\geq0$. Meanwhile, we have $\lim_{\theta\to+\infty}\Psi(\sigma_{0},\theta)=-\infty$, so $\theta^{*}(\sigma_{0})\in(0,\infty)$ given that $G(\sigma_{0},\theta)$ is concave w.r.t. $\theta$. Similarly, we can show that $\forall\theta_{0}\geq0$, $\sigma^{*}(\theta_{0})\in(0,\infty)$. Therefore, $(\sigma^{*},\theta^{*})$ can be obtained via the first-order optimality condition. Taking derivatives of $\Psi(\sigma,\theta)$, we obtain: \begin{equation} \begin{cases} \frac{\partial\Psi}{\partial\sigma}=\frac{\theta}{2\sigma^{2}}\left[\sigma^{2}-\left(\sigma_{\noisei}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\|\h{\vv}_{\tprox}\|^{2}\right)\right]\\ \frac{\partial\Psi}{\partial\theta}=\sigma-\theta-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\E\vh^{\T}\h{\vv}_{\tprox} \end{cases}\label{eq:fixed_equation_highdim} \end{equation} where $\h{\vv}_{\tprox}$ is defined in (\ref{eq:v_hat_p}).
Setting $\frac{\partial\Psi}{\partial\sigma}=\frac{\partial\Psi}{\partial\theta}=0$ and using Lemma \ref{lem:conv_Stein}, we obtain: \begin{equation} \begin{cases} \sigma^{2}=\sigma_{\noisei}^{2}+\frac{1}{\delta}\E[\eta(B+\sigma\equinoisei;F_{y},F_{\sigma\lambda/\theta})-B]^{2}\\ 1=\tau\left(1-\frac{1}{\delta}\E\,\eta'(B+\sigma\equinoisei;F_{y},F_{\sigma\lambda/\theta})\right) \end{cases}\label{eq:fixed_equation2} \end{equation} Letting $\tau\bydef\sigma/\theta$ in (\ref{eq:fixed_equation2}), we recover equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}). On the other hand, similar to Proposition F.1 of \cite{miolane2018distribution}, it can be shown that \begin{equation} \P\left(\left|\widetilde{L}(\vv_{*})-\Psi(\sigma^{*},\theta^{*})\right|>c\epsilon\right)<\delta^{(p)}\label{eq:L_tilde_v_star_conv} \end{equation} with $\delta^{(p)}\to0$ as $p\to\infty$. Then combining (\ref{eq:L_tilde_opt_conv}) and (\ref{eq:L_tilde_v_star_conv}), we conclude that \[ \min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)\geq\widetilde{L}(\vv_{*})-c\epsilon \] with probability $1-\delta^{(p)}$. Here $c>0$ does not depend on $p$. \end{IEEEproof} We now turn to the convergence of the empirical measure in Wasserstein distance. Based on Proposition \ref{prop:local_stability}, we can prove that the Wasserstein distance between $\mu_{p}(\vv_{*},\sgl)$ and $\mu_{p}(\h{\vv}_{\widetilde{L}},\sgl)$ converges to 0 in probability, as stated in the following lemma: \begin{lem} \label{lem:L_Wdistance}$\forall K>0$ and $\epsilon\in(0,1/2)$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D_{\epsilon}}\widetilde{L}(\vv)\leq\min_{\vv\in S_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $D_{\epsilon}\bydef\left\{ \vv\in S_{\vv}(K)\Big|W_{2}(\mu_{p}(\vv,\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq\epsilon\right\} $ and $\delta^{(p)}\to0$ as $p\to\infty$. \end{lem} \begin{IEEEproof} The proof follows Lemma C.1 in \cite{miolane2018distribution}.
\end{IEEEproof} On the other hand, combining Propositions \ref{prop:CGMT_C_tildeL} and \ref{prop:local_stability}, we have: \begin{lem} \label{lem:CGMT_C_tildeL_2}$\forall K>0$, $\epsilon\in(0,1/2)$ and closed convex set $D\subseteq\comset_{\vv}(K)$, we have: \[ \P\left(\min_{\vv\in D}C_{\vlambda}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}C_{\vlambda}(\vv)+\epsilon\right)\leq c\P\left(\min_{\vv\in D}\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)+\delta^{(p)} \] where $\delta^{(p)}\to0$ as $p\to\infty$ and $c>0$ does not depend on $p$. \end{lem} \begin{IEEEproof} The proof follows Proposition C.1 in \cite{miolane2018distribution}. \end{IEEEproof} Combining Lemmas \ref{lem:L_Wdistance} and \ref{lem:CGMT_C_tildeL_2}, we can prove the following result: \begin{prop} \label{prop:C_Wdistance}$\forall K>0$ and $\epsilon\in(0,1/2)$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D_{\epsilon}}C_{\vlambda}(\vv)\leq\min_{\vv\in S_{\vv}(K)}C_{\vlambda}(\vv)+c\epsilon\right)\leq\delta^{(p)} \] where $D_{\epsilon}\bydef\left\{ \vv\in S_{\vv}(K)\Big|W_{2}(\mu_{p}(\vv,\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq\epsilon\right\} $ and $\delta^{(p)}\to0$ as $p\to\infty$. \end{prop} Now we are ready to prove Proposition \ref{prop:empi_Wdis_conv} and Theorem \ref{thm:asymp_char}. \begin{comment} For the special case of separable loss and regularizer, i.e., $\mathcal{L}(\vx)=\sum_{i=1}^{p}l(x_{i})$, $f(\vx)=\sum_{i=1}^{p}f(x_{i})$, they show that the asymptotic characterization can be expressed by a simpler form with just a few scalars (see Theorem 4.1 of \cite{thrampoulidis2018precise}). We can find that the SLOPE estimator falls within the form in (\ref{eq:Mesti}) with $\mathcal{L}(\vx)=\frac{1}{2}\|\vx\|^{2}$, which is separable. However, the regularizer $f(\vx)=\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}$ is not separable, so the scalarization results for separable loss and regularizer cannot be directly applied.
Here, we need to specialize the master theorem (Theorem 3.1) to our case. \end{comment} \begin{comment} We have checked that $F(c,\tau)$ defined in (\ref{eq:F_ctau}) satisfies the conditions of Theorem 3.1 of \cite{thrampoulidis2018precise}. Besides, it has been shown in \cite{thrampoulidis2018precise} that the quadratic loss function $l(\vx)=\frac{1}{2}\|\vx\|^{2}$ adopted in SLOPE also satisfies the conditions. Therefore, by applying Theorem 3.1, (\ref{eq:sigmatau_eq1}) and (\ref{eq:sigmatau_eq2}) can be obtained from the saddle point equation of the corresponding minimax scalar optimization problem. \end{comment} \subsubsection{Proof of Proposition \ref{prop:empi_Wdis_conv}} From Proposition \ref{prop:C_Wdistance}, we conclude that $\P\left(W_{2}(\mu_{p}(\h{\vv}^{B},\sgl),\mu_{p}(\vv_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)}$. Let $\h{\sgl}^{B}\bydef\h{\vv}^{B}+\sgl$. Since $\sgl_{*}=\vv_{*}+\sgl$, we have \[ W_{2}(\mu_{p}(\h{\vv}^{B},\sgl),\mu_{p}(\vv_{*},\sgl))=W_{2}(\mu_{p}(\h{\sgl}^{B},\sgl),\mu_{p}(\sgl_{*},\sgl)) \] Therefore, $\forall\epsilon\in(0,1/2)$, \begin{equation} \P\left(W_{2}(\mu_{p}(\h{\sgl}^{B},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)}\label{eq:Wdist_conv_bounded} \end{equation} where $\delta^{(p)}\to0$ as $p\to\infty$. Then we show that for large enough $K$, $\frac{\|\h{\vv}^{B}\|}{\sqrt{p}},\frac{\|\h{\sgl}^{B}\|}{\sqrt{p}}<K$ with probability approaching 1 as $p\to\infty$. To see this, let $D=\left\{ \vv\in\R^{p}\Big|\|\vv-\vv_{*}\|>\sqrt{p}\epsilon\right\} $ in Lemma \ref{lem:CGMT_C_tildeL_2}. From Proposition \ref{prop:local_stability}, we know $\forall\epsilon>0$, $\exists c>0$ s.t. \[ \P\left(\min_{\vv\in D}\widetilde{L}(\vv)\leq\min_{\vv\in\comset_{\vv}(K)}\widetilde{L}(\vv)+c\epsilon\right)\leq\delta^{(p)}. \] Therefore, combining with Lemma \ref{lem:CGMT_C_tildeL_2}, we know $\|\h{\vv}^{B}-\vv_{*}\|\leq\sqrt{p}\epsilon$ with probability approaching 1.
Meanwhile, it is not hard to show that $\frac{\|\vv_{*}\|}{\sqrt{p}}\to C$, where $C>0$ does not depend on $K$. Therefore, $\frac{\|\h{\vv}^{B}\|}{\sqrt{p}}<2C$ and thus $\frac{\|\h{\sgl}^{B}\|}{\sqrt{p}}<2(C+\sqrt{\E\sgli^{2}})$ with probability approaching 1. Therefore, by choosing a large enough $K>0$, we can ensure $\h{\sgl}^{B}=\h{\sgl}$ with probability approaching 1. From (\ref{eq:Wdist_conv_bounded}), we obtain that \[ \P\left(W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))^{2}\geq c\epsilon\right)\leq\delta^{(p)} \] with $\delta^{(p)}\to0$ as $p\to\infty$, which indicates that $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))\pconv0$. \subsubsection{Proof of Theorem \ref{thm:asymp_char}} As shown before, it is equivalent to prove $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu^{*})\to0$, where $\mu^{*}$ is given in (\ref{eq:mu_star}). We have shown $W_{2}(\mu_{p}(\h{\sgl},\sgl),\mu_{p}(\sgl_{*},\sgl))\pconv0$ in Proposition \ref{prop:empi_Wdis_conv}. Meanwhile, it is not hard to verify that $W_{2}(\mu_{p}(\sgl_{*},\sgl),\mu^{*})\pconv0$ following the proof of Lemma 4 in \cite{bayati2011dynamics}. Combining these two results, we prove (\ref{eq:weak_convergence}) given the fact that $W_{2}$ is a metric. Finally, we show that the solution to (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) is unique. From the proof of Lemma \ref{lem:closeness_v_vhat}, equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) correspond to the first-order optimality condition of $\min_{\sigma\geq0}\max_{\theta\geq0}\Psi(\sigma,\theta)$, and a stationary point $(\sigma^{*},\theta^{*})$ always exists. To prove the uniqueness, it suffices to show that (1) $\forall\sigma_{0}\geq0$, $\Psi(\sigma_{0},\theta)$ is strictly concave w.r.t. $\theta$, and (2) $\Psi(\sigma,\theta^{*}(\sigma))$ is strictly convex w.r.t. $\sigma$. Both can be directly verified from the form of $\Psi(\sigma,\theta)$ given in (\ref{eq:Psi_sigma_theta}).
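For intuition, the fixed-point system (\ref{eq:sigmatau_1})--(\ref{eq:sigmatau_2}) can also be solved numerically. The sketch below does this in the special case of a constant regularization sequence (LASSO), where $\eta$ reduces to soft-thresholding and the effective threshold is $\tau\lambda$; the two-point prior on $B$ and all parameter values are illustrative assumptions, and the expectations are replaced by fixed-seed Monte Carlo averages. In this example a plain alternating iteration converges.

```python
import numpy as np

# Numerical sketch: solve the scalar fixed-point equations in the LASSO
# special case (constant lambda), where eta is soft-thresholding.  The
# prior on B and the parameter values below are illustrative assumptions;
# expectations are replaced by Monte Carlo averages.
rng = np.random.default_rng(0)
delta, sigma_w, lam = 2.0, 0.5, 1.0
B = rng.choice([0.0, 3.0], size=500_000, p=[0.8, 0.2])   # sparse prior on B
Z = rng.standard_normal(B.size)

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

sigma, tau = 1.0, 1.0
for _ in range(100):
    y = B + sigma * Z
    thr = tau * lam                      # effective threshold sigma*lambda/theta
    sigma = np.sqrt(sigma_w**2 + np.mean((soft(y, thr) - B) ** 2) / delta)
    tau = 1.0 / (1.0 - np.mean(np.abs(y) > thr) / delta)

# residuals of the two equations at the (empirical) fixed point
y, thr = B + sigma * Z, tau * lam
r1 = sigma**2 - sigma_w**2 - np.mean((soft(y, thr) - B) ** 2) / delta
r2 = 1.0 - tau * (1.0 - np.mean(np.abs(y) > thr) / delta)
print(sigma, tau, r1, r2)   # residuals are numerically close to zero
```

The converged $\sigma$ plays the role of the effective noise level and is necessarily larger than $\sigma_{w}$, since the second term on the right-hand side of the first equation is nonnegative.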
\subsection{Moreau Envelope of $\protect\regu$} \begin{defn} Let $f(\vx)$: $\R^{p}\to\R$ be a proper closed convex function. For $\tau>0$, the Moreau envelope of $f(\vx)$ is defined as: \begin{equation} e_{f}(\vx;\tau)\bydef\min_{\vv}\frac{1}{2\tau}\|\vx-\vv\|^{2}+f(\vv)\label{eq:Moreau_f} \end{equation} and the corresponding optimal solution is exactly the proximal operator of $f$. \end{defn} We first study the convergence of $e_{\regu}(\vx;\tau)$ as $p\to\infty$. \begin{lem} \label{lem:conv_F}Let $\vh\sim\mathcal{N}(0,\,\mI_{p})$ and let $\left\{ \sgl\right\} _{p\in\mathbb{N}}$, $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$ be converging sequences. Then for all $c\in\R$ and $\tau>0$, we have: \begin{equation} \frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} \pconv F(c,\tau),\label{eq:coord_aver1} \end{equation} where $\regu(\vx)=\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}$. Besides, \begin{equation} F(c,\tau)=\lim_{p\to\infty}F_{p}(c,\tau)\label{eq:F_ctau} \end{equation} where \begin{equation} F_{p}(c,\tau)=\frac{1}{p}\E\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} .\label{eq:Fp_ctau} \end{equation} \end{lem} \begin{IEEEproof} We first prove the convergence of $\frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} $ in probability. Writing out $e_{\regu}(\vx;\tau)$, we have: \begin{equation} e_{\regu}(\vx;\tau)=\frac{1}{2\tau}\|\vx-\tprox_{\tau\vlambda}(\vx)\|^{2}+\regu(\tprox_{\tau\vlambda}(\vx)).\label{eq:Moreau_regu} \end{equation} For $\vx=c\vh+\sgl$, we need to show that $\frac{e_{\regu}(\vx;\tau)}{p}$ converges in probability. This can be done in two steps: (I) replace $\tprox_{\tau\vlambda}(\vx)$ in (\ref{eq:Moreau_regu}) by $\eta(\vx)\bydef\eta(\vx;F_{y},F_{\tau\lambda})$, where $\eta$ is obtained in Proposition \ref{prop:prox}. Clearly, $\eta(\vx)$ is a converging sequence almost surely.
Then prove \begin{equation} \hat{e}_{\regu}(\vx;\tau)\bydef\frac{1}{2\tau}\|\vx-\eta(\vx)\|^{2}+\regu(\eta(\vx)),\label{eq:Moreau_f1} \end{equation} converges in probability, and (II) show that $\frac{e_{\regu}(\vx;\tau)-\hat{e}_{\regu}(\vx;\tau)}{p}\pconv0$. To prove the first step, note that the first summand $\frac{1}{2\tau}\sum_{i=1}^{p}(x_{i}-\eta(x_{i};F_{x},F_{\tau\lambda}))^{2}$ in (\ref{eq:Moreau_f1}) is the empirical average of a converging sequence with the $x_{i}$ independent, so it is not hard to show that $\frac{\|\vx-\eta(\vx;F_{x},F_{\tau\lambda})\|^{2}}{2\tau p}$ converges in probability by the weak law of large numbers (WLLN). For the second summand $\regu(\eta(\vx))=\sum_{i=1}^{p}\lambda_{i}|\eta(\vx)|_{(i)}$, since the terms $\lambda_{i}|\eta|_{(i)}$ are not independent, we cannot directly prove the convergence in probability. Instead, we replace $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$ and $\left\{ \eta(\vx)\right\} _{p\in\mathbb{N}}$ by the corresponding regular converging sequences $\left\{ \vlambda_{r}\right\} _{p\in\mathbb{N}}$ and $\left\{ \eta_{r}(\vx)\right\} _{p\in\mathbb{N}}$. It is not hard to show that $\frac{\|\vlambda-\vlambda_{r}\|^{2}}{p},\frac{\|\eta(\vx)-\eta_{r}(\vx)\|^{2}}{p}\pconv0$ and $\frac{\sregu_{\vlambda_{r}}(\eta_{r}(\vx))}{p}$ converges in probability. Therefore, we have: \begin{align*} \frac{\left|\regu(\eta(\vx))-\sregu_{\vlambda_{r}}(\eta_{r}(\vx))\right|}{p} & \leq\frac{\sum_{i=1}^{p}\lambda_{i}\left||\eta(\vx)|_{(i)}-|\eta_{r}(\vx)|_{(i)}\right|}{p}\\ & \quad+\frac{\sum_{i=1}^{p}\left|\lambda_{i}-\lambda_{r,i}\right||\eta_{r}(\vx)|_{(i)}}{p}\\ & \leq\sqrt{\frac{\sum_{i=1}^{p}\lambda_{i}^{2}}{p}}\sqrt{\frac{\|\eta(\vx)-\eta_{r}(\vx)\|^{2}}{p}}\\ & \quad+\sqrt{\frac{\sum_{i=1}^{p}\eta_{r,i}^{2}(\vx)}{p}}\sqrt{\frac{\|\vlambda-\vlambda_{r}\|^{2}}{p}}, \end{align*} which indicates that $\frac{\left|\regu(\eta(\vx))-\sregu_{\vlambda_{r}}(\eta_{r}(\vx))\right|}{p}$ converges to 0 in probability.
Combined with the convergence of $\frac{\sregu_{\vlambda_{r}}(\eta_{r}(\vx))}{p}$, we conclude that $\frac{\regu(\eta(\vx))}{p}$ also converges. \begin{comment} Instead, we replace $|\eta|_{(i)}$ by $|\hat{\eta}|_{(i)}\bydef F_{|\eta|}^{-1}(F_{\lambda}(\lambda_{i}))$. After replacement, each summand depends only on $\lambda_{i}$ and thus becomes i.i.d., so $\frac{\sum_{i=1}^{p}\lambda_{i}|\hat{\eta}|_{(i)}}{p}$ converges in probability and from Lemma \ref{lem:seq_similarity}, we know $\frac{\sum_{i=1}^{p}\left(|\eta|_{(i)}-|\hat{\eta}|_{(i)}\right)^{2}}{p}\pconv0$. Therefore, we have: \begin{align} \frac{\left|\regu(\eta(\vx;F_{x},F_{\tau\lambda}))-\tau\sum_{i=1}^{p}\lambda_{i}|\hat{\eta}|_{(i)}\right|}{p} & =\frac{\tau\sum_{i=1}^{p}\lambda_{i}\left||\eta|_{(i)}-|\hat{\eta}|_{(i)}\right|}{p}\nonumber \\ & \leq\tau\sqrt{\frac{\sum_{i=1}^{p}\lambda_{i}^{2}}{p}}\sqrt{\frac{\sum_{i=1}^{p}\left(|\eta|_{(i)}-|\hat{\eta}|_{(i)}\right)^{2}}{p}}\label{eq:ubound1} \end{align} and by WLLN and the continuous mapping theorem, the upper bound (\ref{eq:ubound1}) converges to 0. Combined with the convergence of $\frac{\sum_{i=1}^{p}\lambda_{i}|\hat{\eta}|_{(i)}}{p}$, we conclude $\frac{\regu(\eta(\vx;F_{x},F_{\tau\lambda}))}{p}$ also converges. \end{comment} So far, we have proved that $\frac{\hat{e}_{\regu}(\vx;\tau)}{p}$ converges in probability. To prove step (II), we use $\frac{\|\tprox_{\tau\vlambda}(\vx)-\eta(\vx;F_{x},F_{\tau\lambda})\|^{2}}{p}\to0$ proved in Proposition \ref{prop:prox} and proceed in the same way as we proved the convergence of $\frac{\regu(\eta(\vx))}{p}$ in step (I), obtaining $\frac{e_{\regu}(\vx;\tau)-\hat{e}_{\regu}(\vx;\tau)}{p}\pconv0$. Combining both steps, we obtain that $\frac{e_{\regu}(\vx;\tau)}{p}$ converges in probability. Similarly, we can show $\frac{\regu(\sgl)}{p}$ converges in probability, so we conclude that $\frac{1}{p}\left\{ e_{\regu}(c\vh+\sgl;\tau)-\regu(\sgl)\right\} $ converges in probability.
To prove (\ref{eq:F_ctau}), we observe that: \begin{align} \frac{\left|e_{\regu}(c\vh+\sgl;\tau)\right|}{p} & =\frac{\min_{\vv}\frac{1}{2\tau}\|c\vh+\sgl-\vv\|^{2}+\regu(\vv)}{p}\nonumber \\ & \leq\frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p}\label{eq:DCT1} \end{align} By the WLLN, we have \begin{align*} \frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p} & \pconv\E\frac{c^{2}h^{2}+2\tau(\lambda^{2}+\beta^{2})}{2\tau}\\ & =\lim_{p\to\infty}\E\frac{c^{2}\|\vh\|^{2}+2\tau\left(\|\vlambda\|^{2}+\|\sgl\|^{2}\right)}{2\tau p} \end{align*} so by the generalized dominated convergence theorem (GDCT), we have: \begin{equation} \E\lim_{p\to\infty}\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}=\lim_{p\to\infty}\E\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}\label{eq:DCT2} \end{equation} and thus \begin{equation} \frac{e_{\regu}(c\vh+\sgl;\tau)}{p}\pconv\lim_{p\to\infty}\E\frac{e_{\regu}(c\vh+\sgl;\tau)}{p}.\label{eq:DCT3} \end{equation} On the other hand, we can show $\frac{\regu(\sgl)}{p}\pconv\lim_{p\to\infty}\E\frac{\regu(\sgl)}{p}$. Together with (\ref{eq:DCT3}), we obtain (\ref{eq:coord_aver1}). \end{IEEEproof} The function $F(c,\tau)$ defined in (\ref{eq:F_ctau}) plays a crucial role in deriving the asymptotic characterization. To get (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), we need some additional analytical properties of $F(c,\tau)$, which will be proved in the next lemma. \begin{lem} \label{lem:F_c_tau_uniform_conv}$F(c,\tau)$ defined in (\ref{eq:F_ctau}) is jointly convex w.r.t.
$(c,\tau)$ and continuously differentiable, with partial derivatives: \begin{align} \frac{\partial F(c,\tau)}{\partial c} & =\lim_{p\to\infty}\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\label{eq:F_deri1}\\ \frac{\partial F(c,\tau)}{\partial\tau} & =\lim_{p\to\infty}\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau)\label{eq:F_deri2} \end{align} where $\vh\sim\calN(\mathbf{0},\mI_{p})$ and $\vbeta$, $\vlambda$ are converging sequences. \end{lem} \begin{IEEEproof} We first show the convexity of $F(c,\tau)$ defined in (\ref{eq:F_ctau}). It can be shown that $\forall\vx\in\R^{p}$, $e_{\regu}(\vx;\tau)$ is jointly convex in $(\vx,\tau)$ \cite{rockafellar2009variational}. Therefore, $F_{p}(c,\tau)$ in (\ref{eq:Fp_ctau}) is jointly convex in $(c,\tau)$ and so is $F(c,\tau)$. We then show that $F_{p}(c,\tau)$ is continuously differentiable. First, it can be shown \cite{rockafellar2009variational} that $e_{\regu}(c\vh+\sgl;\tau)$ is continuously differentiable w.r.t. $(c,\tau)$ for any $\vh$ and $\sgl$, with derivatives as follows: \begin{align} \frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau) & =\frac{\left[c\vh+\sgl-\tprox_{\tau\vlambda}(c\vh+\sgl)\right]^{\T}\vh}{\tau}\label{eq:par_efc}\\ \frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau) & =-\frac{\|c\vh+\sgl-\tprox_{\tau\vlambda}(c\vh+\sgl)\|^{2}}{2\tau^{2}}.\label{eq:par_eftau} \end{align} From Algorithm \ref{alg:discrete}, it is not hard to show that $\tprox_{\vlambda}(\vy)$ satisfies \begin{equation} \|\tprox_{\vlambda}(\vy_{1})-\tprox_{\vlambda}(\vy_{2})\|\leq\|\vy_{1}-\vy_{2}\|,\label{eq:prox_lipschitz} \end{equation} so $\|\tprox_{\tau\vlambda}(c\vh+\sgl)\|^{2}\leq\|c\vh+\sgl\|^{2}$.
Combining it with (\ref{eq:par_efc}) and (\ref{eq:par_eftau}) and using the Cauchy--Schwarz inequality, we have: \begin{align*} \left|\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\right| & \leq g_{1}(\vh,\sgl;c,\tau)\\ \left|\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau)\right| & \leq g_{2}(\vh,\sgl;c,\tau) \end{align*} where \begin{align*} g_{1}(\vh,\sgl;c,\tau) & =\frac{(|c|\|\vh\|+\|\sgl\|)^{2}+\|\vh\|^{2}}{\tau}\\ g_{2}(\vh,\sgl;c,\tau) & =\frac{2(|c|\|\vh\|+\|\sgl\|)^{2}}{\tau^{2}} \end{align*} Clearly, for any $c\in\R$ and $\tau>0$, $\E g_{1}(\vh,\sgl;c,\tau)<\infty$ and $\E g_{2}(\vh,\sgl;c,\tau)<\infty$. When $(c,\tau)$ belongs to any compact subset $\comset\subset\R\times\R^{+}$, $g_{1}(\vh,\sgl;c,\tau)\leq g_{1,\mathcal{C}}(\vh,\sgl)$ and $g_{2}(\vh,\sgl;c,\tau)\leq g_{2,\mathcal{C}}(\vh,\sgl)$, where \begin{align*} g_{1,\mathcal{C}}(\vh,\sgl) & =\frac{(|c_{\max}|\|\vh\|+\|\sgl\|)^{2}+\|\vh\|^{2}}{\tau_{\min}}\\ g_{2,\mathcal{C}}(\vh,\sgl) & =\frac{2(|c_{\max}|\|\vh\|+\|\sgl\|)^{2}}{\tau_{\min}^{2}} \end{align*} with $c_{\max}=\max_{c\in\comset}c$ and $\tau_{\min}=\min_{\tau\in\comset}\tau$. Obviously, $\E g_{1,\mathcal{C}}(\vh,\sgl),\,\E g_{2,\mathcal{C}}(\vh,\sgl)<\infty$. Therefore, by the dominated convergence theorem (DCT), we have: \begin{align} \frac{\partial F_{p}(c,\tau)}{\partial c} & =\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial c}(c\vh+\sgl;\tau)\label{eq:par_Fp_c}\\ \frac{\partial F_{p}(c,\tau)}{\partial\tau} & =\frac{1}{p}\E\frac{\partial e_{\regu}}{\partial\tau}(c\vh+\sgl;\tau).\label{eq:par_Fp_tau} \end{align} and both are continuous for any finite $p$. Next we show that $F_{p}(c,\tau)$ in (\ref{eq:Fp_ctau}) converges uniformly to $F(c,\tau)$ in any compact set $\comset\subset\R\times\R^{+}$. First, we already know that $F_{p}(c,\tau)$ is jointly convex. In addition, $\forall p$, $F_{p}(c,\tau)$ and $F(c,\tau)$ are bounded on $\comset\subset\R\times\R^{+}$.
Therefore, by Theorem 10.8 in \cite{rockafellar2015convex}, $F_{p}(c,\tau)$ converges uniformly to $F(c,\tau)$ in any compact $\comset\subset\R\times\R^{+}$. Finally, we show that if $(c,\tau)$ belongs to any compact set $\comset_{1}\times\comset_{2}\subset\R\times\R^{+}$, then (a) $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ converges uniformly on $\comset_{1}$ for any given $\tau\in\comset_{2}$, and (b) $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ converges uniformly on $\comset_{2}$ for any given $c\in\comset_{1}$. First, similar to the proof of Lemma \ref{lem:conv_F}, we can show that $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ and $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ converge pointwise over $\comset_{1}\times\comset_{2}$. To prove (a), we first show that for any given $\tau\in\comset_{2}$, $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ is equi-Lipschitz over $\comset_{1}$, i.e., $\exists\gamma>0$ s.t. $\left|\frac{\partial F_{p}(c_{1},\tau)}{\partial c}-\frac{\partial F_{p}(c_{2},\tau)}{\partial c}\right|<\gamma\left|c_{1}-c_{2}\right|$, $\forall c_{1},c_{2}\in\comset_{1}$, $\forall p\geq1$. Combining (\ref{eq:par_efc}) and (\ref{eq:par_Fp_c}), we have: \begin{align} \left|\frac{\partial F_{p}(c_{1},\tau)}{\partial c}-\frac{\partial F_{p}(c_{2},\tau)}{\partial c}\right| & =\frac{\E\left|\left(\tprox_{\tau\vlambda}(c_{1}\vh+\sgl)-\tprox_{\tau\vlambda}(c_{2}\vh+\sgl)\right)^{\T}\vh\right|}{p\tau}\nonumber \\ & \leq\frac{\E\|\vh\|^{2}}{p\tau}|c_{1}-c_{2}|\label{eq:unif_ineq1}\\ & =\frac{|c_{1}-c_{2}|}{\tau}\nonumber \end{align} where to get (\ref{eq:unif_ineq1}) we have used the Cauchy--Schwarz inequality and the fact that $\tprox_{\vlambda}(\vx)$ is 1-Lipschitz (\ref{eq:prox_lipschitz}). Therefore, for fixed $\tau$, $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ is equi-Lipschitz with constant $1/\tau$.
Then, following the same argument as in the proof of Theorem 10.8 in \cite{rockafellar2015convex}, we can show the uniform convergence of $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial c}\right\} _{p\geq1}$ for $\tau\in\comset_{2}$. Similarly, to prove (b), we also first prove that $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ is equi-Lipschitz over $\comset_{2}$ as a function of $\tau$. It is easy to check that $\tprox_{\tau\vlambda}(\vy)=\tau\tprox_{\vlambda}(\frac{\vy}{\tau})$ from (\ref{eq:slope_prox}) and, combining it with (\ref{eq:par_eftau}) and (\ref{eq:par_Fp_tau}), we have: \begin{align*} & \left|\frac{\partial F_{p}(c,\tau_{1})}{\partial\tau}-\frac{\partial F_{p}(c,\tau_{2})}{\partial\tau}\right|\\ \leq & \frac{\E\left|\frac{\tau_{2}^{2}-\tau_{1}^{2}}{2\tau_{1}^{2}\tau_{2}^{2}}\left\Vert \vy\right\Vert ^{2}+\frac{\vy^{\T}}{\tau_{1}}\tprox_{\vlambda}(\frac{\vy}{\tau_{1}})-\frac{\vy^{\T}\tprox_{\vlambda}(\frac{\vy}{\tau_{2}})}{\tau_{2}}+\left\Vert \tprox_{\vlambda}(\frac{\vy}{\tau_{2}})\right\Vert ^{2}-\left\Vert \tprox_{\vlambda}(\frac{\vy}{\tau_{1}})\right\Vert ^{2}\right|}{p}\\ \leq & \frac{3|\tau_{1}+\tau_{2}|\E\|\vy\|^{2}}{p\tau_{1}^{2}\tau_{2}^{2}}|\tau_{1}-\tau_{2}|\\ \leq & C|\tau_{1}-\tau_{2}|, \end{align*} where $\vy=c\vh+\sgl$, $C$ is a constant not depending on $p$, and we have used the Lipschitz continuity of $\tprox_{\vlambda}(\vy)$ in the second inequality. Then, again following the proof in \cite{rockafellar2015convex}, we can show the uniform convergence of $\left\{ \frac{\partial F_{p}(c,\tau)}{\partial\tau}\right\} _{p\geq1}$ for $c\in\comset_{1}$. Combining the three steps above with Theorem 7.17 in \cite{rudin1976principles}, we obtain (\ref{eq:F_deri1}) and (\ref{eq:F_deri2}).
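As an aside, the derivative formulas (\ref{eq:par_efc}) and (\ref{eq:par_eftau}) are easy to check numerically against finite differences in the separable special case of a constant regularization sequence, where the proximal operator reduces to soft-thresholding. All concrete values in the sketch below are illustrative.

```python
import numpy as np

# Finite-difference check of the Moreau envelope derivatives in c and tau
# for the separable l1 case (constant lambda), where the prox is
# soft-thresholding.  The vectors h, beta and the scalars are illustrative.
lam, tau, c, eps = 1.0, 0.7, 1.3, 1e-6
h = np.array([0.4, -1.1, 0.9, 0.2])
beta = np.array([1.0, 0.0, -2.0, 0.5])

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def env(c, tau):
    # Moreau envelope e(x; tau) evaluated at x = c*h + beta via the prox
    x = c * h + beta
    p = soft(x, tau * lam)
    return np.sum((x - p) ** 2) / (2 * tau) + lam * np.sum(np.abs(p))

x = c * h + beta
r = x - soft(x, tau * lam)                       # x - prox(x)
dc_analytic = r @ h / tau                        # cf. the d/dc formula
dtau_analytic = -np.sum(r ** 2) / (2 * tau ** 2)  # cf. the d/dtau formula
dc_numeric = (env(c + eps, tau) - env(c - eps, tau)) / (2 * eps)
dtau_numeric = (env(c, tau + eps) - env(c, tau - eps)) / (2 * eps)
print(dc_analytic - dc_numeric, dtau_analytic - dtau_numeric)
```

The analytic and finite-difference values agree to numerical precision, and the $\tau$-derivative is negative, as (\ref{eq:par_eftau}) dictates.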
\end{IEEEproof} \begin{lem} \label{lem:conv_Stein}For converging sequences $\left\{ \sgl\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda\right\} _{p\in\mathbb{N}}$, the following results hold: \begin{equation} \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}=\E[\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})-B]^{2}\label{eq:fixedequa_lim1} \end{equation} and \begin{equation} \lim_{p\to\infty}\frac{1}{p}\E[\equinoise^{\T}\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]=\sigma\E\eta'(B+\sigma\equinoisei;F_{y},F_{\tau\lambda}),\label{eq:fixedequa_lim2} \end{equation} where $\equinoise\sim\calN(\mathbf{0},\mI_{p})$ and $y=B+\sigma\equinoisei$, with $B\sim F_{\sgli}$ and $\equinoisei\sim\calN(0,1)$. \end{lem} \begin{IEEEproof} Using Proposition \ref{prop:prox}, for (\ref{eq:fixedequa_lim1}), we have: \begin{align*} & \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\sgl\|^{2}\\ = & \lim_{p\to\infty}\frac{1}{p}\E\|\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})\|^{2}\\ & +\lim_{p\to\infty}\frac{2}{p}\E(\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)-\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda}))^{\T}(\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})-\sgl)\\ & +\lim_{p\to\infty}\frac{1}{p}\E\|\eta(\sgl+\sigma\equinoise;F_{y},F_{\tau\lambda})-\sgl\|^{2}\\ = & \E[\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})-B]^{2} \end{align*} Next we show (\ref{eq:fixedequa_lim2}).
Again using Proposition \ref{prop:prox}, we have: \begin{align*} \lim_{p\to\infty}\frac{1}{p}\E[\equinoise^{\T}\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)] & =\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\E[\tprox_{\tau\vlambda}(\sgl+\sigma\equinoise)]_{i}\equinoisei_{i}\\ & =\lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\E\eta(\sgli_{i}+\sigma\equinoisei_{i};F_{y},F_{\tau\lambda})\equinoisei_{i}\\ & =\E\eta(B+\sigma\equinoisei;F_{y},F_{\tau\lambda})\equinoisei\\ & =\sigma\E\eta'(B+\sigma\equinoisei;F_{y},F_{\tau\lambda}), \end{align*} where we have used Stein's lemma in the last equality. \end{IEEEproof} \section{Introduction} In sparse linear regression, we seek to estimate a sparse vector $\sgl \in\mathbb{R}^{p}$ from \begin{equation}\label{eq:model} \vy=\dmtx\sgl+\noise, \end{equation} where $\dmtx\in\mathbb{R}^{n\times p}$ is the design matrix and $\noise$ denotes the observation noise. In this paper, we study the \emph{sorted $\ell_{1}$ penalization estimator} (SLOPE) \cite{bogdan2015slope} (see also \cite{zhong2012efficient, zeng2014decreasing}). Given a non-decreasing regularization sequence $\boldsymbol{\lambda} = [\lambda_1, \lambda_2, \ldots, \lambda_p]^\top$ with $0\leq\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{p}$, SLOPE estimates $\sgl$ by solving the following optimization problem \begin{equation} \widehat{\sgl} = \underset{\est}{\arg\,\min}\ \frac{1}{2}\|\vy-\dmtx\est\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{i}|\esti|_{(i)},\label{eq:slope_opt} \end{equation} where $|\esti|_{(1)}\leq|\esti|_{(2)}\leq\cdots\leq|\esti|_{(p)}$ is a reordering of the absolute values $\abs{\esti_1}, \abs{\esti_2}, \ldots, \abs{\esti_p}$ in increasing order. In \cite{bogdan2015slope}, the regularization term $\regu(\est) \bydef \sum_{i=1}^{p}\lambda_{i}|\esti|_{(i)}$ is referred to as the ``sorted $\ell_{1}$ norm'' of $\est$. 
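To make the definition concrete, here is a minimal sketch (with made-up example values) of evaluating the sorted $\ell_{1}$ norm: the nondecreasing weights are paired with the increasingly sorted magnitudes, so the largest weight always multiplies the largest entry.

```python
import numpy as np

# Sorted-l1 norm J(x) = sum_i lambda_i * |x|_(i): nondecreasing weights
# lambda paired with |x| sorted in increasing order.  Values illustrative.
def sorted_l1_norm(x, lam):
    lam = np.sort(np.asarray(lam, dtype=float))          # nondecreasing weights
    mags = np.sort(np.abs(np.asarray(x, dtype=float)))   # increasing magnitudes
    return float(lam @ mags)

x = [3.0, -1.0, 2.0]
lam = [0.5, 1.0, 2.0]
print(sorted_l1_norm(x, lam))   # 0.5*1 + 1.0*2 + 2.0*3 = 8.5
```

With a constant sequence, $\lambda_{1}=\cdots=\lambda_{p}=\lambda$, the pairing is immaterial and the expression reduces to $\lambda\|\vx\|_{1}$, recovering the LASSO penalty.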
The same regularizer was independently developed in a different line of work \cite{zhong2012efficient, zeng2014decreasing, figueiredo2016ordered}, where the motivation is to promote group selection in the presence of correlated covariates. The classical LASSO estimator is a special case of SLOPE. It corresponds to using a constant regularization sequence, \emph{i.e.}, $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \lambda$. However, with more general $\lambda$-sequences, SLOPE has the flexibility to penalize different coordinates of the estimate according to their magnitudes. This adaptivity endows SLOPE with some nice \emph{statistical} properties that LASSO does not possess. For example, it is shown in \cite{su2016slope,bellec2018slope} that SLOPE achieves the minimax $\ell_{2}$ estimation rate with high probability. In terms of testing, the authors of \cite{bogdan2015slope} show that SLOPE controls the false discovery rate (FDR) for orthogonal design matrices, which is not the case for LASSO \cite{su2017false}. In addition, we note that the new regularizer $\regu(\est)$ is still a norm \cite{bogdan2015slope,zeng2014decreasing}. Thus, the optimization problem associated with SLOPE remains convex, and it can be efficiently solved using, \emph{e.g.}, proximal gradient descent \cite{zeng2014decreasing, bogdan2015slope}. In the aforementioned studies, the performance of SLOPE is characterized via non-asymptotic probabilistic bounds. Such bounds, however, provide very limited information about how to optimally design the regularization sequence $\boldsymbol{\lambda}$ in different settings, which remains an important open question in the literature \cite{bogdan2015slope, su2016slope}. In this paper, we make two main contributions: \begin{enumerate} \item We obtain a characterization of SLOPE in the asymptotic regime: $n,\,p\to\infty$ and $n/p\to\delta$.
Compared with the probabilistic bounds derived in previous work, our results are asymptotically \emph{exact}. Similar asymptotic analysis has been done for LASSO \cite{bayati2012lasso} and many other regularized linear regression problems \cite{karoui2013asymptotic, thrampoulidis2015regularized, thrampoulidis2018precise}, but the main technical challenge in analyzing SLOPE is the nonseparability of $\regu(\vx)$: it cannot be written as a sum of component-wise functions, \emph{i.e.}, $\regu(\vx) \neq \sum_{i=1}^p J_i(x_i)$. In our work, we overcome this challenge by showing that the proximal operator of $\regu(\vx)$ is \emph{asymptotically separable}. \item Using our asymptotic characterization, we derive \emph{oracle} optimal $\vlambda$ in two settings: (1) the optimal regularization sequence that minimizes the MSE $\E\|\hat{\sgl}-\sgl\|^{2}$; and (2) the optimal sequence that achieves the highest possible power in testing and variable selection under any given level of Type-I error. In both cases, we show that the optimal design can be recast as certain infinite-dimensional convex optimization problems, which have efficient and accurate finite-dimensional approximations. \end{enumerate} A caveat of our optimal design is that it requires knowledge of the limiting empirical measure of $\sgl$ (\emph{e.g.}, the sparsity level and the distribution of its nonzero coefficients). For this reason, our results are \emph{oracle optimal}. They provide a first step towards more practical optimal designs that are completely blind to $\sgl$. The rest of the paper is organized as follows. In Sec. \ref{sec:Asymptotic-Results}, we first prove the asymptotic separability of the proximal operator associated with $\regu(\vx)$. This property allows us to derive our main asymptotic characterization of SLOPE, summarized in Theorem~\ref{thm:asymp_char}. Based on this analysis, we present the optimal design of the regularization sequence in Sec. \ref{sec:Oracle-Optimality}.
Numerical simulations verify our asymptotic characterizations. They also demonstrate the superiority of our optimal design over LASSO and a previous sequence design in the literature \cite{su2016slope}. \section{Main Asymptotic Results\label{sec:Asymptotic-Results}} \subsection{Technical Assumptions\label{subsec:DefandAssump}} There are four main objects in the description of our model and algorithm: (1) the unknown sparse vector $\sgl$; (2) the design matrix $\dmtx$; (3) the noise vector $\noise$; and (4) the regularization sequence $\vlambda$. Since we study the asymptotic limit (with $p \to \infty$), we will consider a sequence of instances $\big\{ \sgl^{(p)},\,\dmtx^{(p)},\,\vw^{(p)},\,\vlambda^{(p)}\big\} _{p\in\mathbb{N}}$ with increasing dimensions $p$, where $\sgl^{(p)},\,\vlambda^{(p)}\in\R^{p}$, $\dmtx^{(p)}\in\R^{n\times p}$ and $\noise^{(p)}\in\R^{n}$. A sequence of vectors $\boldsymbol{x}^{(p)} \in \R^p$ indexed by $p$ is called a \emph{converging sequence} \cite{bayati2012lasso} if its empirical measure $\mu_p(x) \bydef \tfrac{1}{p} \sum_{i=1}^p \delta(x - x_i^{(p)})$ converges weakly to a probability measure on $\R$ as $p\to\infty$. Our results are proved under the following assumptions: \begin{enumerate}[label={(A.\arabic*)}] \item \label{a:sampling_ratio} The number of observations grows in proportion to $p$: $n^{(p)}/p\to\delta\in(0,\infty)$. \item The number of nonzero elements in $\sgl^{(p)}$ grows in proportion to $p$: $k/p\to\rho\in(0,1]$. \item The elements of $\dmtx^{(p)}$ are i.i.d. Gaussian: $\dmtxi_{ij}^{(p)}\iid\mathcal{N}(0,\,\frac{1}{n})$. \item \label{a:converging} $\{\sgl^{(p)}\}_{p\in \mathbb{N}}$, $\{\vw^{(p)}\}_{p\in \mathbb{N}}$ and $\{\vlambda^{(p)}\}_{p\in \mathbb{N}}$ are converging sequences. The distribution functions of the limiting measures are denoted by $F_{\sgli}$, $F_{\noisei}$ and $F_{\lambda}$, respectively.
Moreover, we have $\P(|\lambda|\neq0)>0$, $\tfrac{1}{p}\|\sgl^{(p)}\|^{2}\to\E[\sgli^{2}]$, $\frac{1}{n}\|\noise^{(p)}\|^{2}\to\E[\noisei^{2}]=\sigma_{w}^{2}$ and $\frac{1}{p}\|\vlambda^{(p)}\|^{2}\to\E[\lambda^{2}]$, where the probability $\P(\cdot)$ and the expectations $\E[\cdot]$ are all computed with respect to the limiting measures. \end{enumerate} \begin{comment} Let us denote $F_{Y}$ and $F_{|Y|}$ as the CDF of $y_{i}$ and $|y_{i}|$. Also we can consider each $\lambda_{i}$ is drawn independently from the same distribution $F_{\lambda}$ and then reorder them to get $\vlambda$. \end{comment} \subsection{Asymptotics of the Proximal Operator of $\protect\regu(\protect\vx)$} We start by studying the proximal operator associated with the sorted $\ell_{1}$ norm $\regu(\vx)$. Given $\vy \in \R^p$ and a regularization sequence $\boldsymbol{\lambda}$ with $0\leq\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{p}$, the proximal operator is defined as the solution to the following optimization problem: \begin{equation} \text{Prox}_{\boldsymbol{\lambda}}(\vy) \bydef \underset{\vx}{\arg\,\min}\ \frac{1}{2}\|\vy-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{i}|x|_{(i)}.\label{eq:slope_prox} \end{equation} Although $\regu(\vx)$ is a more complicated regularization term than the $\ell_1$ norm used in LASSO, (\ref{eq:slope_prox}) can still be efficiently solved \cite{bogdan2015slope,zhong2012efficient}. In the case of LASSO, which corresponds to choosing $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \lambda$, the proximal operator is easy to characterize, as it is separable: $[\text{Prox}_{\boldsymbol{\lambda}}(\vy)]_i = \text{sign}(y_i)\max(|y_{i}|-\lambda,0)$. In other words, the $i$th element of $\text{Prox}_{\boldsymbol{\lambda}}(\vy)$ is solely determined by $y_i$. However, this separability property does not hold for a general regularization sequence.
When $p$ is finite, $[\text{Prox}_{\boldsymbol{\lambda}}(\vy)]_i$ depends not only on $y_{i}$ but also on other elements of $\vy$. This coupling makes it much harder to analyze the proximal operator. Fortunately, as we show below, when $p\to\infty$, $\text{Prox}_{\boldsymbol{\lambda}}(\cdot)$ becomes \emph{asymptotically separable}. \begin{prop} \label{prop:prox} Let $\{ \vy^{(p)}\}_{p\in \mathbb{N}}$ and $\{\vlambda^{(p)}\}_{p\in \mathbb{N}}$ be two converging sequences. Denote by $F_y$ and $F_\lambda$ the distribution functions of their respective limiting measures. It holds that \begin{equation}\label{eq:prox_sep} \lim_{p \to \infty} \frac{1}{p}\,\|\text{Prox}_{\boldsymbol{\lambda}}(\vy^{(p)})-\prox(\vy^{(p)}; F_y, F_\lambda)\|^{2} = 0, \end{equation} where $\prox(\cdot; F_y, F_\lambda)$ is a scalar function that is determined by $F_y$ and $F_\lambda$. Here $\prox(\vy^{(p)}; F_y, F_\lambda)$ denotes a coordinate-wise application of $\prox(\cdot; F_y, F_\lambda)$ on $\vy^{(p)}$. The explicit construction of $\prox(\cdot; F_y, F_\lambda)$ is given in Algorithm \ref{alg:conti}. \end{prop} The proof of Proposition \ref{prop:prox} and a detailed explanation of Algorithm \ref{alg:conti} will be provided in Appendix \ref{subsec:appendix_prox}. We will also see that the asymptotic separability of $\text{Prox}_{\boldsymbol{\lambda}}(\cdot)$ greatly facilitates our asymptotic analysis and the optimal design of SLOPE, since it allows us to reduce the original high-dimensional problem to an equivalent one-dimensional problem. In what follows, we refer to $\prox(\cdot; F_y, F_\lambda)$ as the \emph{limiting scalar function}.
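To make this coupling concrete, the following minimal Python sketch (our illustration, not code from the paper) computes the finite-$p$ proximal operator by the standard sort-and-average scheme of \cite{zhong2012efficient,bogdan2015slope}: sort $|\vy|$, subtract the matched regularization levels, average out the violating runs, clip at zero, and restore signs and positions. With a constant sequence it reduces to soft-thresholding, while with a general sequence perturbing a single coordinate of $\vy$ moves other coordinates of the output:

```python
def slope_prox(y, lam):
    """Prox of the sorted-ell_1 norm: argmin_x 0.5*||y - x||^2 + sum_i lam_i |x|_(i).
    Here lam is given in non-increasing order, matched to |y| sorted in
    non-increasing order (equivalent to the paper's non-decreasing convention)."""
    p = len(y)
    order = sorted(range(p), key=lambda i: -abs(y[i]))
    z = [abs(y[i]) - l for i, l in zip(order, lam)]
    # Pool adjacent violators: average out increasing runs so that the sorted
    # solution is non-increasing (this averaging is the coupling across coordinates).
    vals, cnts = [], []
    for v in z:
        vals.append(v); cnts.append(1)
        while len(vals) > 1 and vals[-2] <= vals[-1]:
            v2, c2 = vals.pop(), cnts.pop()
            vals[-1] = (vals[-1] * cnts[-1] + v2 * c2) / (cnts[-1] + c2)
            cnts[-1] += c2
    x = [0.0] * p
    k = 0
    for v, c in zip(vals, cnts):
        for _ in range(c):
            x[order[k]] = max(v, 0.0)  # clip at zero and undo the sorting
            k += 1
    return [xi if y[i] >= 0 else -xi for i, xi in enumerate(x)]  # restore signs

# Constant sequence: reduces to LASSO's separable soft-thresholding.
print(slope_prox([0.5, -2.0, 1.25], [0.75, 0.75, 0.75]))  # [0.0, -1.25, 0.5]

# Non-constant sequence: changing only y_4 moves the first output coordinate.
lam = [2.0, 1.5, 1.0, 0.5]
print(slope_prox([3.0, 2.0, 1.0, 0.5], lam)[0])   # 1.0
print(slope_prox([3.0, 2.0, 1.0, 2.75], lam)[0])  # 1.125
```

In the last call, $y_1$ and $y_4$ land in one averaged block, so they share the value $1.125$; this is exactly the finite-$p$ coupling that disappears only in the limit described by Proposition~\ref{prop:prox}.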
\begin{algorithm} \begin{minipage} {\columnwidth} \begin{center} \begin{algorithmic}[2] \Require{ Distribution of $Y$ and $\lambda$: $y\sim F_y(y)$, $\lambda\sim F_{\lambda}(\lambda)$ } \Ensure{ Limiting proximal mapping $\opty{y; F_y, F_\lambda}$ } \State{Compute \[ \diff_0(y;F_{y},F_{\lambda}) := y - \lambda(y),\quad y\geq 0 \] where \[ \lambda(y) := \begin{cases} F_{\lambda}^{-1}(F_{|y|}(y)) & \text{if } \P(|Y|=y)=0, \\ \frac{\int_{F_{|y|}(y^-)}^{F_{|y|}(y)}F_{\lambda}^{-1}(u)du}{F_{|y|}(y)-F_{|y|}(y^-)} & \text{if } \P(|Y|=y)>0. \end{cases} \]} \State{Set $t=0$} \While{$\exists$ MDI of $\diff_t(y;F_{y},F_{\lambda})$} \State{Find the first MDI of $\diff_t(y;F_{y},F_{\lambda})$: $I_2=[\bdp_1, \bdp_2]$.} \If{$\P(|Y|>\bdp_2)=0$} \State{$\bdp_L \leftarrow \arg\max_{v\in[0, \bdp_1]} \conde(v, \bdp_2;\diff_t)$} \State{$\bdp_R \leftarrow \bdp_2$} \Else \State{Find $I_3$, the right neighboring MNDI of $I_2$, and $I_1$, the left neighboring MNDI of $I_2$} \State{Solve \[ \min_{w\in I_3}\max_{v\in I_1} \conde\left(v,w;\diff_t\right) \]} \State{Get the optimal solution $(w^*, v^*)$.} \State{$\bdp_L \leftarrow v^*$, $\bdp_R \leftarrow w^*$} \EndIf \State{For $y\in [\bdp_L, \bdp_R]$, replace the original $\diff_{t}(y;F_{y},F_{\lambda})$ by $\conde\left(\bdp_L,\bdp_R;\diff_t\right)$ to get $\diff_{t+1}(y;F_{y},F_{\lambda})$.} \State{$t=t+1$} \EndWhile \State{Set $\diff(|y|;F_{y},F_{\lambda})=\diff_{t}(y;F_{y},F_{\lambda})$} \State{Obtain: $\prox(y;F_{y},F_{\lambda}) = \text{sign}(y) \max\{0,\diff(|y|;F_{y},F_{\lambda})\}$} \end{algorithmic} \end{center} \end{minipage} \caption{Recursive procedure for constructing $\prox(\cdot; F_y, F_\lambda)$.\label{alg:conti}} \end{algorithm} \begin{example} We compare the actual proximal operator $\text{Prox}_{\boldsymbol{\lambda}}(\vy)$ and the limiting scalar function $\prox(y; F_y, F_\lambda)$, for the two different $\vlambda$-sequences shown in Fig.~\ref{fig:prox_dist}(a) and Fig.~\ref{fig:prox_dist}(c).
The red curves represent the limiting scalar functions obtained in Proposition \ref{prop:prox}, whereas the blue circles are sample points of $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$, with $y_i \sim \mathcal{N}(0, 1)$. For better visualization, we randomly sample 3\% of all the pairs $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$. It can be seen that under a moderate dimension $p=1024$, the proximal operator can already be very accurately approximated by the limiting scalar function. \end{example} \begin{figure}[t] \vspace{-3ex} \centering \subfloat[]{ \includegraphics[scale=0.618]{figs/stair_dist} }\subfloat[]{ \includegraphics[clip,scale=0.618]{figs/stair_prox} } \vspace{-2ex} \centering \subfloat[]{ \includegraphics[clip,scale=0.618]{figs/BHq_dist} }\hphantom{}\subfloat[]{ \includegraphics[clip,scale=0.618]{figs/BHq_prox} } \caption{(a) and (c): The histograms of two different $\vlambda$-sequences. (b) and (d): Sample points of $\left(y_{i},\,\left[\text{Prox}_{\boldsymbol{\lambda}}(\vy)\right]_{i}\right)$ (the blue dots) compared against the limiting scalar functions $\prox(y; F_y, F_\lambda)$ (the red curves). In this experiment, $p = 1024$.}\label{fig:prox_dist} \vspace{-2ex} \end{figure} \subsection{Asymptotics of SLOPE} We are now ready to tackle the original optimization problem (\ref{eq:slope_opt}) associated with SLOPE. Our goal is to characterize the joint empirical measure of $(\sol^{(p)},\,\sgl^{(p)})$: $\mu_{p}(\hat{\sgli},\,\sgli)\bydef\frac{1}{p}\sum_{i=1}^{p}\delta(\hat{\sgli}-\soli_{i},\sgli-\sgli_{i})$. Indeed, many quantities of interest, such as the MSE, type-I error, and FDR, are all functionals of this joint empirical measure. A function $\psi:\,\R^{2}\to\R$ is called \emph{pseudo-Lipschitz} if $|\psi(\vx)-\psi(\vy)|\leq L(1+\|\vx\|_{2}+\|\vy\|_{2})\|\vx-\vy\|_{2}$ for all $\vx,\,\vy\in\R^{2}$, where $L$ is a positive constant.
As in \cite{bayati2012lasso}, we will depict the limit of $\mu_p(\hat{\sgli}, \sgli)$ through its action on {pseudo-Lipschitz} functions. \begin{thm} \label{thm:asymp_char} Assume \ref{a:sampling_ratio} -- \ref{a:converging} hold. For any pseudo-Lipschitz function $\psi$, we have \begin{equation} \lim_{p\to\infty}\frac{1}{p}\sum_{i=1}^{p}\psi(\soli_{i},\,\sgli_{i})=\E_{B,Z}[\psi(\prox(B+\sigma Z;\,F_y, F_{\tau \lambda}),B)].\label{eq:weak_convergence} \end{equation} Here, $B, Z$ are two independent random variables with $B \sim F_\beta$ and $Z \sim\mathcal{N}(0,1)$; $\prox(\cdot\,; F_y, F_{\tau \lambda})$ is the limiting scalar function defined in Proposition \ref{prop:prox}, with $F_y$ denoting the distribution function of $B + \sigma Z$ and $F_{\tau \lambda}$ denoting that of $\tau \lambda$ for some $\tau \ge 0$. Moreover, the scalar pair $(\sigma,\,\tau)$ is the unique solution of the following equations: \begin{align} \sigma^{2} & =\sigma_{\noisei}^{2}+\frac{1}{\delta}\E_{B, Z}[(\prox(B+\sigma Z;F_y, F_{\tau \lambda})-B)^{2}]\label{eq:sigmatau_1}\\ 1 & =\tau\big(1-\frac{1}{\delta}\E_{B, Z}[\prox'(B+\sigma Z;F_y, F_{\tau \lambda})]\big).\label{eq:sigmatau_2} \end{align} \end{thm} \begin{rem} Readers familiar with the asymptotic analysis of LASSO will recognize that the forms of (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) look identical to the results of LASSO obtained in \cite{bayati2012lasso,thrampoulidis2018precise}. Indeed, the first part of our proof directly applies the framework of analyzing LASSO asymptotics using the convex Gaussian min-max theorem (CGMT) \cite{thrampoulidis2015regularized, thrampoulidis2018precise}.
Following \cite{thrampoulidis2018precise}, in the asymptotic regime, the limiting measure of SLOPE is determined by the following fixed point equations: \begin{align} \sigma^{2} & =\sigma_{w}^{2}+\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\|\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise)-\sgl\|_{2}^{2}\label{eq:fixedpt_vector1}\\ 1 & =\tau\left[1-\frac{1}{\delta}\lim_{p\to\infty}\frac{1}{p}\text{div}(\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise))\right].\label{eq:fixedpt_vector2} \end{align} Note that (\ref{eq:fixedpt_vector1}) and (\ref{eq:fixedpt_vector2}) are similar to (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), except that they involve an $\R^{p}\mapsto\R^{p}$ proximal mapping: $\text{Prox}_{\tau \vlambda}(\sgl+\sigma\equinoise)$. This is where Proposition \ref{prop:prox} becomes useful. Using the asymptotic separability stated in that proposition, we can simplify (\ref{eq:fixedpt_vector1}) and (\ref{eq:fixedpt_vector2}) to the scalar equations given in (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}). \end{rem} Theorem \ref{thm:asymp_char} essentially says that the joint empirical measure of $(\sol^{(p)},\,\sgl^{(p)})$ converges weakly to the law of $(\prox(B+\sigma Z;\,F_y, F_{\tau \lambda}),B)$. This means that although the original problem (\ref{eq:slope_opt}) is high-dimensional, its asymptotic performance can be succinctly captured by merely two scalar random variables. In (\ref{eq:weak_convergence}), if we let $\psi(x_1,x_2)=(x_1-x_2)^{2}$, we obtain the asymptotic MSE; by setting $\psi(x_1,x_2)=1_{x_2=0,\,x_1\neq0}$, we can recover the type-I error. (Technically, $1_{x_2=0,\,x_1\neq0}$ is not pseudo-Lipschitz. However, with additional justifications \cite{bogdan2015slope}, one can show that the conclusion is still correct.) \section{Oracle Optimality of SLOPE\label{sec:Oracle-Optimality}} In this section, we will study the optimal design of the regularization sequence in SLOPE. Using the asymptotic characterization presented in Sec.
\ref{sec:Asymptotic-Results}, we will derive the optimal limiting distribution $F_{\lambda}$ to achieve the best estimation or testing performance, given the oracle knowledge of $F_{\sgli}$. \subsection{Estimation with Minimum MSE} \label{sec:mse} We first turn to the problem of finding the optimal $\vlambda$-sequence that minimizes the MSE of the SLOPE estimator. Since we work in the asymptotic regime, this boils down to finding the optimal distribution $F_{\lambda}^{*}$ such that \begin{align*} F_{\lambda}^{*} & =\argmin{F_{\lambda}} \ \lim_{p\to\infty}\frac{1}{p}\|\sol-\sgl\|_{2}^{2}\\ & =\argmin{F_{\lambda}} \ \E_{B, Z}[(\prox(B+\sigma Z;F_y, F_{\tau \lambda})-B)^{2}], \end{align*} where $B\sim F_{\sgli}$ and the second equality follows from Theorem \ref{thm:asymp_char}. From (\ref{eq:sigmatau_1}), this is further equivalent to finding $F_{\lambda}$ to minimize $\sigma$. However, directly searching over $F_{\lambda}$ appears unwieldy, since $\sigma$, as a functional of $F_{\lambda}$, is defined indirectly through a nonlinear fixed point equation. To simplify this problem, we first note that in (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}), the influence of $F_{\lambda}$ on the solution $(\sigma,\,\tau)$ is only through the limiting scalar function $\prox$. Therefore, instead of optimizing over $F_{\lambda}$, we can find the optimal $\prox^{*}$ and then calculate the corresponding $F_{\lambda}^{*}$. The next question then becomes finding all possible $\prox$ that can be realized by some $F_{\lambda}$. In fact, for any given converging sequence $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$, we can compute all possible $\prox(\cdot)$ associated with it. Let \[ \mathcal{M}\bydef\left\{ \prox(\cdot\,; F_y, F_\lambda) \mid\exists F_{\lambda},\,\E\lambda^{2}<\infty,\,\text{s.t. } (\ref{eq:prox_sep}) \text{ holds}\right\} \] be the functional space that $\prox$ belongs to.
We have the following result: \begin{prop} \label{prop:function_space}For any converging sequence $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$, we have \begin{align*} \mathcal{M}= & \{\prox(y)\mid\prox(y)=-\prox(-y)\,\text{and }\\ & \hspace{3em}0\leq\prox(y_{1})-\prox(y_{2})\leq y_{1}-y_{2},\,\forall y_{1}\geq y_{2}\} \end{align*} and $\mathcal{M}$ is a convex set. Moreover, for any $\prox\in \mathcal{M}$, the corresponding distribution of the $\vlambda$-sequence that yields $\prox$ can be represented as $\lambda \sim |Y|-\prox(|Y|)$, where $Y$ follows the limiting distribution of $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$. \end{prop} \begin{rem} Proposition \ref{prop:function_space} is the key ingredient in our optimal design. It shows that, with different choices of $F_{\lambda}$, we can reach any non-decreasing and odd function that is Lipschitz continuous with constant 1. Clearly, the soft-thresholding functions associated with LASSO belong to $\mathcal{M}$, but the set $\mathcal{M}$ is much richer. This is the essence of how SLOPE generalizes LASSO: it allows for more degrees of freedom in the regularization. \end{rem} Due to Proposition \ref{prop:function_space}, the optimization problem can be simplified to that of finding the optimal $\prox\in\mathcal{M}$ such that $\sigma$ as obtained from (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) is minimized. Specifically, we need to find \begin{equation} \sigma_{\min}\bydef\inf\left\{ \sigma \mid \exists\prox\in\mathcal{M},\,\text{s.t. }\sigma\text{ satisfies }\eqref{eq:sigmatau_1}\text{ and }\eqref{eq:sigmatau_2}\right\}. \label{eq:sigma_min} \end{equation} The following result reveals a way to obtain $\sigma_{\min}$.
\begin{prop} \label{prop:min_MSE} $\sigma_{\min}$ is the minimum solution of the equation \begin{equation} \mathcal{L}(\sigma)=\sigma^2 - \sigma_{\noisei}^2,~\sigma\in \left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right], \label{eq:sigma_min_equation} \end{equation} where $\mathcal{L}(\sigma)$ is the optimal value of the following convex optimization problem w.r.t. $\prox(y)$ for a fixed $\sigma$: \begin{align} \min_{\prox\in\mathcal{M}}\hspace{1em} & \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}]\label{eq:opt_esti_prox}\\ \text{s.t.}\hspace{1em} & \E_{B, Z}[\prox'(B+\sigma Z)]\leq\delta.\nonumber \end{align} The corresponding optimal limiting scalar function is $\prox^* = \prox_{\sigma_{\min}}$, and $\lambda^*$ follows the distribution \begin{equation*} \lambda^* \sim \frac{|Y|-\prox^*(|Y|)}{\tau_{\min}}, \end{equation*} where $Y\sim B+\sigma_{\min}Z$, $\prox_{\sigma_{\min}}$ is the optimal solution of (\ref{eq:opt_esti_prox}) when $\sigma=\sigma_{\min}$, and \begin{equation*} \tau_{\min} = \left(1-\frac{\E {\prox^{*}} '(Y)}{\delta}\right)^{-1}. \end{equation*} \end{prop} The proof of Proposition \ref{prop:min_MSE} is deferred to Appendix \ref{subsec:min_MSE_proof}. In practice, we can discretize over $\R$ to obtain a finite-dimensional approximation to the original infinite-dimensional problem (\ref{eq:opt_esti_prox}). Naturally, this finite-dimensional approximation is also convex. In the following simulations, we use an approximation with 2048 grid points. \begin{comment} There are several points that remain to be justified: (1) show $\sigma_{\min}<\sigma$ when $\min_{\prox\in\mathcal{F}}\,\E[\prox(\xi+\sigma Z;1)-\xi]^{2}\leq\delta(\sigma^{2}-\sigma_{w}^{2})$.
\end{comment} \begin{figure}[t] \begin{centering} \subfloat[$\rho=0.256$\label{subfig:MSE_SNR}]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/MSE_SNR_1} \par\end{centering} }\hphantom{}\subfloat[$\text{SNR}=5$\label{subfig:MSE_sparsity}]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/MSE_sparsity_1} \par\end{centering} } \par\end{centering} \caption{Comparison of MSEs obtained by three regularization sequences: LASSO, BHq and the oracle optimal design, under different SNR and sparsity levels. Here, $p=1024$, $\delta=0.64$. The red curves show the theoretical minimum MSE that can be achieved by using the oracle optimal sequences. \label{fig:Comparison-of-MSEs}} \end{figure} In Fig. \ref{fig:Comparison-of-MSEs}, we compare the MSEs achieved by our optimal design with those obtained by LASSO and the BHq sequences proposed in \cite{su2016slope}, at different SNRs and sparsity levels. For a fair comparison, we optimize the parameters of the BHq and LASSO sequences. It can be seen from the figure that the empirical minimum MSEs match well with the theoretical ones. We observe from Fig. \ref{subfig:MSE_SNR} that, under low SNRs, the BHq sequence leads to performance very similar to that of the oracle optimal design. However, at higher SNR levels, the optimal design outperforms the BHq sequence, and its performance becomes close to that of LASSO. To unravel the underlying reason for this, we plot in Fig. \ref{fig:Comparison-of-distributions} the distributions of the $\vlambda$-sequences associated with the optimal design and the BHq design, respectively. It turns out that, in the low SNR case, the optimal design and BHq have similar distributions; at higher SNRs, the distribution of the optimal design is close to a delta-like distribution similar to LASSO. Note that for a small sparsity level $\rho$, LASSO can outperform BHq and achieve performance close to that of the optimal sequence, but it is prone to higher bias when $\rho$ grows. From Fig.
\ref{subfig:MSE_sparsity}, we can see that LASSO's performance degrades much faster than the other two as $\rho$ increases. This is because LASSO's penalization is not adaptive to the underlying sparsity levels \cite{su2016slope}. \begin{figure} \begin{centering} \subfloat[Optimal]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/optimaldist_lowSNR} \par\end{centering} }\hphantom{}\subfloat[BHq]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/BHqdist_lowSNR} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[Optimal]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/optimaldist_highSNR} \par\end{centering} }\hphantom{}\subfloat[BHq]{\begin{centering} \includegraphics[clip,scale=0.618]{figs/BHqdist_highSNR} \par\end{centering} } \par\end{centering} \caption{Comparison of the distributions of the two regularization sequences ``BHq'' and ``Optimal'' in Fig. \ref{subfig:MSE_SNR}: (a)-(b): $\text{SNR}=1$, (c)-(d): $\text{SNR}=10$.\label{fig:Comparison-of-distributions} } \end{figure} \subsection{Multiple Testing with Maximum Power} Next, we consider using SLOPE directly for variable selection. In other words, the nonzero coordinates of the SLOPE solution $\sol$ are selected. Our goal is to find the optimal regularization sequence to achieve the highest possible power, under a given level of type-I error.
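Before stating the formal result, the monotonicity driving it can be sanity-checked numerically. In the scalar limit of Theorem~\ref{thm:asymp_char}, a coordinate with true value $b$ is selected when $|b+\sigma Z|$ exceeds the threshold calibrated to a type-I level $\alpha$, so at a fixed level the power shrinks as the effective noise $\sigma$ grows. A small sketch (with hypothetical values of $b$, $\sigma$ and $\alpha$, and using only Python's standard library):

```python
from statistics import NormalDist

N = NormalDist()  # standard normal: cdf = Phi, inv_cdf = Phi^{-1}

def selection_power(b, sigma, alpha):
    """P(|b/sigma + Z| >= Phi^{-1}(1 - alpha/2)) for Z ~ N(0, 1): the asymptotic
    probability of selecting a coordinate with true value b, at type-I level alpha."""
    t = N.inv_cdf(1 - alpha / 2)
    return 1 - N.cdf(t - b / sigma) + N.cdf(-t - b / sigma)

alpha = 0.05
# A null coordinate (b = 0) is selected with probability exactly alpha ...
print(round(selection_power(0.0, 0.8, alpha), 4))  # 0.05
# ... and for b != 0, a smaller effective noise sigma always gives more power.
print(selection_power(1.0, 0.5, alpha) > selection_power(1.0, 1.0, alpha))  # True
```

This monotonicity is why maximizing the power at a given type-I level reduces to the same objective as the estimation problem: minimizing $\sigma$.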
Similar to Proposition \ref{prop:min_MSE}, we have the following result: \begin{prop} \label{prop:max_power} For a given Type-I error level $\alpha$, the maximum selection power that can be reached by SLOPE is $\P\left(\left|\frac{B}{\sigma_{\min}}+Z\right| \geq \Phi^{-1}(1-{\alpha}/{2})\right)$, where $\sigma_{\min}$ is the minimum solution of the equation \begin{equation} \mathcal{L}(\sigma)=\sigma^2 - \sigma_{\noisei}^2,~\sigma\in \left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right]. \label{eq:sigma_min_equation1} \end{equation} Here $\mathcal{L}(\sigma)$ is the optimal value of the following convex optimization problem under a given $\sigma$: \begin{align} \min_{\prox\in\mathcal{M}}\hspace{1em} & \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}]\label{eq:opt_testing_prox}\\ \text{s.t.}\hspace{1em} & \E_{B, Z}[\prox'(B+\sigma Z)]\leq\delta,\nonumber\\ &\prox(y)=0, |y|\leq \Phi^{-1}\left(1-\frac{\alpha}{2}\right)\sigma.\label{eq:typeI_constraint} \end{align} The corresponding optimal limiting scalar function is $\prox^{*}=\prox_{\sigma_{\min}}$, and $\lambda^*$ follows the distribution \begin{equation*} \lambda^* \sim \frac{|Y|-\prox^{*}(|Y|)}{\tau_{\min}}, \end{equation*} where the definitions of $Y, \tau_{\min}$ and $\prox_{\sigma_{\min}}$ are the same as in Proposition \ref{prop:min_MSE}. \end{prop} The proof of Proposition \ref{prop:max_power}, which is similar to that of Proposition \ref{prop:min_MSE}, will be given in Appendix \ref{subsec:max_power_proof}. In Fig.~\ref{fig:Optimal-hypothesis-testing}, we compare the FDR curve of the optimal design with that of the BHq sequence. We verify that the optimal design indeed dominates the BHq sequence and that the empirical FDR curve matches well with the theoretical prediction. \begin{comment} The main difference is on the power: there exists an upper bound for the power of LASSO, but for SLOPE the power can almost reach 1.
The phenomenon of limited power for LASSO is also discussed in \cite{bogdan2015slope,su2016slope} and it is inherently connected with the so-called ``noise sensitivity'' phase transition \cite{donoho2011noise}. Here, it can be seen that by optimizing over the regularization, the ``ceiling'' effect on testing power can be greatly alleviated. \end{comment} \begin{figure} \begin{centering} \includegraphics[clip,scale=1.0]{figs/FDR1} \par\end{centering} \caption{Hypothesis testing using oracle optimal and BHq sequences. Here, $p=1024$, $\protect\sgli_{i}\protect\iid(1-\rho)\delta(0)+\rho\mathcal{N}(\mu_{0},\,\sigma_{0}^{2})$ with $\rho=0.25$, $\mu_{0}=2.125$ and $\sigma_{0}=0$, $w_{i}\protect\iid\mathcal{N}(0,\,\sigma_w^{2})$ with $\sigma_{w}=0.25$. The results are averaged over 100 realizations. \label{fig:Optimal-hypothesis-testing}} \end{figure} \subsection{Proof of Proposition \ref{prop:function_space}} Let \begin{align*} \mathcal{F}\bydef & \{\prox(y)|\prox(y)=-\prox(-y)\,\text{and } 0 \leq\prox(y_{1})-\prox(y_{2})\leq y_{1}-y_{2},\,\forall y_{1}\geq y_{2}\} \end{align*} and we want to prove $\mathcal{F}=\mathcal{M}$. From Lemma \ref{lem:regu_Lipschitz}, we can easily show $\mathcal{M}\subset\mathcal{F}$, so it suffices to show $\mathcal{F}\subset\mathcal{M}$. For any $\prox\in\mathcal{F}$, we argue that if we let $\lambda\sim y-\prox(y)$, where $y\sim F_{|Y|}$ with $\E Y^{2}<\infty$ and choose $\left\{ \vlambda^{(p)}\right\} $ to be a converging sequence with limiting CDF $F_{\lambda}$, then $\frac{\|\tproxl(\vy)-\prox(\vy;F_{y},F_{\lambda})\|^{2}}{p}\to0$ and $\E\lambda^{2}<\infty$. To see this, first observe that $y-\prox(y)$ is a non-decreasing and continuous function on $[0,+\infty)$, so we have \begin{align*} F_{\lambda}^{-1}(F_{|Y|}(y)) & =\inf\left\{ x\Big|F_{\lambda}(x)\geq F_{|Y|}(y)\right\} \\ & =y-\prox(y) \end{align*} and $F_{\lambda}^{-1}(F_{|Y|}(y))=F_{\lambda}^{-1}(F_{|Y|}(y^{-}))$.
Therefore, in Algorithm \ref{alg:conti}, $\lambda(y)=y-\prox(y)$ and $\diff(y)=\prox(y)$. Then by Lemma \ref{lem:0MDI_asym_sepa}, we obtain that $\frac{\|\tproxl(\vy)-\prox(\vy;F_{y},F_{\lambda})\|^{2}}{p}\to0$. On the other hand, $\E\lambda^{2}<\infty$ follows from the fact that $\E Y^{2}<\infty$ and $\lambda(y)\leq y$, where $y\sim F_{|Y|}$. As a result, we conclude that $\mathcal{F}\subset\mathcal{M}$. Finally, we prove the convexity of the set $\mathcal{M}$. Let $\prox_{1},\,\prox_{2}\in\mathcal{M}$. Then, for any $\alpha\in[0,1]$, define $\prox_{\alpha}\bydef\alpha\prox_{1}+(1-\alpha)\prox_{2}$. Clearly, for $y_{1}\geq y_{2}$, we have $0\leq\prox_{\alpha}(y_{1})-\prox_{\alpha}(y_{2})\leq y_{1}-y_{2}$ and also $\prox_{\alpha}(y)=-\prox_{\alpha}(-y)$, so $\prox_{\alpha}\in\mathcal{M}$, which shows the convexity of $\mathcal{M}$. \subsection{Proof of Proposition \ref{prop:min_MSE}} \label{subsec:min_MSE_proof} Note that equations (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) involve two variables $\sigma$ and $\tau$. It is not easy to handle them simultaneously. A simplification we can make is to first set $\tau$ to $1$ and find the minimum $\sigma$ such that the first equation (\ref{eq:sigmatau_1}) and the inequality $\E_{B, Z}[\prox'(B+\sigma Z;F_y, F_{ \lambda})]\leq\delta$ hold. Once we get $\sigma_{\min}$ and the optimal $\lambda^*$, the corresponding $\tau_{\min}$ can then be obtained via (\ref{eq:sigmatau_2}): $\tau_{\min}=(1-\tfrac{1}{\delta}\E_{B, Z}[{\prox^{*}}'(B+\sigma_{\min} Z;F_y, F_{\lambda}^*)])^{-1}$ and $\lambda^*$ is in turn updated to be $\lambda^*/\tau_{\min}$. After this replacement, (\ref{eq:sigmatau_1}) and (\ref{eq:sigmatau_2}) will both be satisfied. It is not difficult to show that this procedure will lead to the same $\sigma_{\min}$ as defined in (\ref{eq:sigma_min}).
Clearly, $\sigma_{\min}$ must lie in the compact interval $\left[\sigma_{\noisei},\,\sqrt{\sigma_{\noisei}^{2}+\frac{1}{\delta}\E B^{2}}\right]$, since from (\ref{eq:sigmatau_1}), we know $\sigma^{2}\geq\sigma_{w}^{2}$ and also $\sigma^{2}=\sigma_{\noisei}^{2}+\frac{1}{\delta}\E[B^{2}]$ when $\prox\equiv0$. Therefore, to find $\sigma_{\min}$, it suffices to solve problem (\ref{eq:opt_esti_prox}) for every candidate $\sigma$ and find the minimum $\sigma$ such that (\ref{eq:sigma_min_equation}) holds. Finally, we show the convexity of problem (\ref{eq:opt_esti_prox}). Note that the objective function can be written as: \begin{equation*} \E_{B, Z}[(\prox(B+\sigma Z)-B)^{2}] = \E_Y[\prox(Y)-\E(B|Y)]^2 + \E_Y\text{Var}(B|Y), \end{equation*} where $Y = B+\sigma Z$. Since $\E_Y[\prox(Y)-\E(B|Y)]^2$ can be viewed as the squared distance between $\prox(y)$ and $\E(B|Y)$ under the metric induced by the following inner product: \begin{align*} \langle f_1(y), f_2(y) \rangle_{F_Y} \bydef \int f_1(y)f_2(y)dF_Y(y), \end{align*} it is naturally a convex functional of $\prox(y)$. On the other hand, it is not hard to show that the feasible set of $\prox(y)$ is convex by the convexity of $\mathcal{M}$ and also the inequality constraint in problem (\ref{eq:opt_esti_prox}). Once we find the optimal $\prox^* = \prox_{\sigma_{\min}}$ and $(\sigma_{\min},\,\tau_{\min})$, from Proposition \ref{prop:function_space} and the argument made above, the corresponding optimal $\lambda^*$ distribution can be represented as: $\lambda\sim \frac{|Y|-\prox^*(|Y|)}{\tau_{\min}}$, with $Y\sim B+\sigma_{\min}Z$. \subsection{Proof of Proposition \ref{prop:max_power}} \label{subsec:max_power_proof} Let $y_{\text{thresh}}=\sup_{y\geq0}\left\{ y \mid \prox(y)=0\right\} $. It follows from Theorem \ref{thm:asymp_char} that, in order to ensure $\P_{\text{type-I}}=\alpha$, we need to have $\frac{y_{\text{thresh}}}{\sigma}=\Phi^{-1}(1-\frac{\alpha}{2})$, where $\Phi(\cdot)$ is the CDF of the standard normal distribution.
Similarly, we can compute the power of the test as $\P(|\frac{\sgli}{\sigma}+Z|\geq\Phi^{-1}(1-\frac{\alpha}{2}))$. It can be shown that for any fixed $\sgli$, $\P(|\frac{\sgli}{\sigma}+Z|\geq\Phi^{-1}(1-\frac{\alpha}{2}))$ is a non-increasing function of $\sigma$. Thus, under a given type-I error rate $\alpha$, maximizing the power is equivalent to minimizing $\sigma$, which is the same objective as in the optimal estimation problem of Proposition \ref{prop:min_MSE}. The only difference here is that we need to enforce the additional constraint (\ref{eq:typeI_constraint}), which guarantees $\P_{\text{type-I}}=\alpha$. Besides, it is not hard to show that adding this constraint does not affect the convexity of the feasible set of $\prox(y)$. The rest of the proof is the same as that of Proposition \ref{prop:min_MSE}. \subsection{Proximal Mapping $\protect\tprox_{\protect\vlambda}(\protect\vy)$} Before proving the main asymptotic results, we recall some key properties of $\tprox_{\vlambda}(\vy)$ that will be used later. First, it is easy to check that $[\tprox_{\vlambda}(\vy)]_{i}$ has the same sign as $y_{i}$, $i=1,\ldots,p$, i.e., \begin{equation} \tprox_{\vlambda}(\vy)=\mD_{\sgn(\vy)}\tprox_{\vlambda}(|\vy|)\label{eq:prox_linearity_sgn} \end{equation} where $\mD_{\sgn(\vy)}=\diag\left\{ \sgn(y_{1}),\ldots,\sgn(y_{p})\right\} $, $|\vy|=\left(|y_{1}|,|y_{2}|,\ldots,|y_{p}|\right)^{T}$ and \[ \sgn(y)=\begin{cases} 1 & y>0\\ 0 & y=0\\ -1 & y<0 \end{cases}. \] We also have \begin{equation} \Pi\circ\tprox_{\vlambda}(\vy)=\tprox_{\vlambda}(\Pi\circ\vy),\label{eq:prox_linearity_permutation} \end{equation} where $\Pi$ is a permutation matrix.
Therefore, without loss of generality, we can assume $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$. For general $\left\{ y_{i}\right\} _{1\leq i\leq p}$, combining (\ref{eq:prox_linearity_sgn}) and (\ref{eq:prox_linearity_permutation}), we can express $\tprox_{\vlambda}(\vy)$ as: \begin{align*} \tprox_{\vlambda}(\vy) & =\mD_{\sgn(\vy)}\tprox_{\vlambda}(|\vy|)\\ & =\mD_{\sgn(\vy)}\Pi_{\uparrow}^{T}\circ\tprox_{\vlambda}(\acute{|\vy|}) \end{align*} where $\Pi_{\uparrow}$ is a permutation matrix such that the coordinates of $\Pi_{\uparrow}|\vy|$ are arranged in non-decreasing order and $\acute{|\vy|}\bydef {\Pi_{\uparrow}}\circ|\vy|$. As a consequence, we can focus on non-negative and monotonically non-decreasing sequences $\vy$: $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$. For this type of $\vy$, we have the following three properties, which have been proved in \cite{bogdan2015slope}: \begin{lem} \label{lem:prox_previous_property}For $\vy\in\R^{p}$ satisfying $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$, it holds that: \end{lem} \begin{enumerate} \item $0\leq[\tprox_{\vlambda}(\vy)]_{1}\leq[\tprox_{\vlambda}(\vy)]_{2}\leq\cdots\leq[\tprox_{\vlambda}(\vy)]_{p}$, \item If $y_{1}-\lambda_{1}\leq y_{2}-\lambda_{2}\leq\cdots\leq y_{p}-\lambda_{p}$, then $[\tprox_{\vlambda}(\vy)]_{i}=\max(0,\,y_{i}-\lambda_{i})$, \item If $y_{i}-\lambda_{i}\geq y_{i+1}-\lambda_{i+1}\geq\cdots\geq y_{j}-\lambda_{j}$, then $[\tprox_{\vlambda}(\vy)]_{i}=[\tprox_{\vlambda}(\vy)]_{i+1}=\cdots=[\tprox_{\vlambda}(\vy)]_{j}$.
\end{enumerate} From the third property in Lemma \ref{lem:prox_previous_property}, if $\left\{ y_{k}-\lambda_{k}\right\} _{i\leq k\leq j}$ is non-increasing, it is shown in \cite{bogdan2015slope} that $\tprox_{\vlambda}(\vy)$ remains unchanged if we replace $\lambda_{k}$ by $\lambda_{k}^{+}$ as follows: \begin{eqnarray} \lambda_{k}^{+} & = & \begin{cases} y_{k}-\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1} & k=i,\ldots,j\\ \lambda_{k} & \otherwise \end{cases}\label{eq:aver_lambda} \end{eqnarray} Since $y_{i}-\lambda_{i}\geq\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}$, we have $\lambda_{i}^{+}=y_{i}-\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}\geq\lambda_{i}$. Similarly, $\lambda_{j}^{+}\leq\lambda_{j}$, so $\lambda_{1}^{+}\leq\lambda_{2}^{+}\leq\cdots\leq\lambda_{p}^{+}$, which is still a valid regularization sequence. Moreover, for $k=i,\ldots,j$, $y_{k}-\lambda_{k}^{+}=\frac{\sum_{l=i}^{j}(y_{l}-\lambda_{l})}{j-i+1}$. Therefore, the solution of (\ref{eq:slope_prox}) can be obtained by the following procedure (assuming $0\leq y_{1}\leq y_{2}\leq\cdots\leq y_{p}$): \begin{enumerate} \item Compute $y_{i}-\lambda_{i}$, $i=1,2,\ldots,p$, \item Find the smallest $i$ and largest $j$ such that the sub-sequence from $i$ to $j$ is strictly decreasing: $y_{i}-\lambda_{i}>y_{i+1}-\lambda_{i+1}>\cdots>y_{j}-\lambda_{j}$ and $y_{j}-\lambda_{j}\leq y_{j+1}-\lambda_{j+1}$ (or $j=p$), \item For $i\leq l\leq j$, compute the average and replace the original $\vlambda$ as in (\ref{eq:aver_lambda}), \item Repeat steps 2 and 3 until a sequence $\vlambda^{+}$ is obtained such that $\left\{ y_{i}-\lambda_{i}^{+}\right\} _{1\leq i\leq p}$ is non-decreasing. \item Return $[\tprox_{\vlambda}(\vy)]_{i}=\max(y_{i}-\lambda_{i}^{+},\,0)$, $i=1,2,\ldots,p$. \end{enumerate} The above procedure can be implemented as an efficient stack-based algorithm, as proposed in \cite{zhong2012efficient,bogdan2015slope}. Here, to facilitate the asymptotic analysis later, we present an equivalent algorithm implemented in a different manner.
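To make the averaging procedure concrete, here is a minimal Python sketch, in the spirit of the stack-based implementation of \cite{zhong2012efficient,bogdan2015slope}; it is our own illustration (all names are ours) and assumes the inputs are already sorted as $0\leq y_{1}\leq\cdots\leq y_{p}$ and $0\leq\lambda_{1}\leq\cdots\leq\lambda_{p}$:

```python
def slope_prox_sorted(y, lam):
    """Prox of the SLOPE penalty, assuming 0 <= y[0] <= ... <= y[-1]
    and 0 <= lam[0] <= ... <= lam[-1].  Pools each violating run of
    g_i = y_i - lam_i into its average (the effect of repeatedly
    applying steps 2-3 above), then clips at zero (step 5)."""
    blocks = []  # stack of (sum, length); block averages stay non-decreasing
    for yi, li in zip(y, lam):
        s, n = yi - li, 1
        # merging blocks whose averages violate monotonicity is
        # equivalent to averaging the corresponding run of g_i
        while blocks and blocks[-1][0] / blocks[-1][1] > s / n:
            ps, pn = blocks.pop()
            s, n = s + ps, n + pn
        blocks.append((s, n))
    out = []
    for s, n in blocks:
        out.extend([max(s / n, 0.0)] * n)
    return out
```

For instance, `slope_prox_sorted([1.0, 2.0, 3.0], [0.0, 1.5, 2.5])` pools all three coordinates to the common value $2/3$, while an input whose $y_{i}-\lambda_{i}$ is already non-decreasing is simply thresholded coordinate-wise, consistent with the second property of Lemma \ref{lem:prox_previous_property}.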
Before doing so, we first list a few definitions that will be used later. \begin{defn} \label{def:mono_segment}For a finite sequence $\left\{ a_{i}\right\} _{1\leq i\leq p}$, we call $\left\{ k_{1},k_{1}+1,\ldots,k_{2}\right\} $ a\emph{ maximal decreasing segment }(MDS) if $a_{k_{1}}>a_{k_{1}+1}>\cdots>a_{k_{2}}$, $a_{k_{1}-1}\leq a_{k_{1}}$ (or $k_{1}=1$) and $a_{k_{2}}\leq a_{k_{2}+1}$ (or $k_{2}=p$); similarly, $\left\{ k_{1},k_{1}+1,\ldots,k_{2}\right\} $ is a\emph{ maximal non-decreasing segment }(MNDS) if $a_{k_{1}}\leq a_{k_{1}+1}\leq\cdots\leq a_{k_{2}}$, $a_{k_{1}-1}>a_{k_{1}}$ (or $k_{1}=1$) and $a_{k_{2}}>a_{k_{2}+1}$ (or $k_{2}=p$). \end{defn} \begin{defn} \label{def:BP_discrete}In a finite sequence $\left\{ a_{i}\right\} _{1\leq i\leq p}$, let $S_{1}=\left\{ 1,\ldots,k_{1}\right\} $, $S_{2}=\left\{ k_{1},\ldots,k_{2}\right\} $ and $S_{3}=\left\{ k_{2},\ldots,k_{3}\right\} $ be three neighboring maximal segments. Suppose $S_{1}$ and $S_{3}$ are MNDS and $S_{2}$ is an MDS. For $j\in S_{3}$, define the index set: \[ \mathcal{I}_{j}=\left\{ i\Big|a_{i-1}>\aver ij,2\leq i\leq k_{1}\right\} , \] where \[ \aver ij\bydef\frac{\sum_{l=i}^{j}a_{l}}{j-i+1}.
\] The corresponding \emph{left balance point} (LBP) of $j$ is defined as: \[ i^{*}(j)=\begin{cases} \min_{i\in\mathcal{I}_{j}}i-1 & \mathcal{I}_{j}\neq\emptyset\\ k_{1} & \mathcal{I}_{j}=\emptyset \end{cases} \] Using this definition, construct another index set: \[ \mathcal{J}=\left\{ j\Big|\aver{i^{*}(j)}j>a_{j+1},k_{2}\leq j\leq k_{3}-1\right\} \] and the \emph{right balance point} (RBP) is defined as: \[ j^{*}=\begin{cases} \max_{j\in\mathcal{J}}j+1 & \mathcal{J}\neq\emptyset\\ k_{2} & \mathcal{J}=\emptyset \end{cases} \] \end{defn} \begin{algorithm} \begin{algorithmic}[1] \Require{ $\{y_i\}_{1\leq i \leq p}$, $\{\lambda_i\}_{1\leq i \leq p}$ }, $0\leq\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_p$ \Ensure{ $\tprox_{\vlambda}(\vy)$ } \State{Calculate $g_{0,i}=|y|_{(i)}-\lambda_i$, $i=1,2,\ldots,p$} \State{Set $t=0$.} \While{$\exists$ MDS of $\{g_{t,i}\}_{1\leq i \leq p}$} \State{Find the first MDS $S_2$: $\{{k_1}, {k_1+1},\ldots,{k_2}\}$ } \If{$k_2=p$} \State{$k_R = p$} \State{$k_L = i^*(k_2)$} \Else \State{Find $S_3 = \{{k_2}, {k_2+1},\ldots,{k_3}\}$, the right neighbouring MNDS of $S_2$ and $S_1 = \{{1}, {2},\ldots,{k_1}\}$, the left neighbouring MNDS of $S_2$.} \State{Find RBP $j^*$ and corresponding LBP $i^*=i^*(j^*)$.} \State{$k_L \leftarrow i^*$, $k_R \leftarrow j^*$} \EndIf \State{For $i\in [k_L, k_R]$, replace original $g_{t,i}$ by $\mathtt{aver}(g_{t,k_L},g_{t,k_R})$ to obtain the new $\{g_{t+1,i}\}_{1\leq i\leq p}$.} \State{$t=t+1$} \EndWhile \State{$\{g_i\}_{1\leq i \leq p} \leftarrow \{g_{t,i}\}_{1\leq i \leq p}$} \State{$[\tprox_{\vlambda}(\vy)]_i = \sgn(y_i)\max\{0,g_{t,j(i)}\}$} \end{algorithmic} \caption{Recursive procedure for constructing $\protect\tproxl(\protect\vy)$\label{alg:discrete}} \end{algorithm} The reconstruction procedure for obtaining $\tprox_{\vlambda}(\vy)$ is given in Algorithm \ref{alg:discrete}. 
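To illustrate Definition \ref{def:mono_segment} (locating the first MDS is the initial step inside each $\mathtt{WHILE}$ loop of Algorithm \ref{alg:discrete}), the following small Python sketch is our own illustration, with 0-based indices:

```python
def first_mds(a):
    """Return (k1, k2), the 0-based inclusive bounds of the first maximal
    decreasing segment a[k1] > a[k1+1] > ... > a[k2] of the sequence a,
    or None if a is non-decreasing (no MDS exists)."""
    p = len(a)
    k1 = 0
    while k1 < p - 1 and a[k1] <= a[k1 + 1]:  # skip the leading MNDS
        k1 += 1
    if k1 == p - 1:
        return None
    k2 = k1
    while k2 < p - 1 and a[k2] > a[k2 + 1]:   # extend the strict decrease
        k2 += 1
    return (k1, k2)
```

For example, `first_mds([1, 3, 2, 2, 5])` returns `(1, 2)`: the segment is maximal on the left because `3 > 1` cannot be prepended, and on the right because `2 > 2` fails.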
In the next section, we will study the limit of this algorithm when the inputs are converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ satisfying (A.1)-(A.4). We will show that this limit is exactly $\prox(y;F_{y},F_{\lambda})$ in Proposition \ref{prop:prox}. To begin with, we first study some properties of Algorithm \ref{alg:discrete}. \begin{lem} \label{lem:discrete_LBP_monotone}Consider the same setting in Definition \ref{def:BP_discrete} and Algorithm \ref{alg:discrete}. For each fixed $j\in S_{3}$, $\aver ij$ as a function of $i$ satisfies the following: (1) $\aver ij<\aver{i-1}j$ for each $i^{*}(j)<i\leq k_{1}$, (2) $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$, (3) $\aver ij\geq\aver{i-1}j$ for each $2\leq i\leq i^{*}(j)$. \end{lem} \begin{IEEEproof} (1) Since $\forall i^{*}(j)<i\leq k_{1}$, $a_{i-1}>\aver ij$, so $\aver ij<\aver{i-1}j$, (2) For $2\leq i\leq i^{*}(j)$, $\aver{i^{*}(j)}j\geq a_{i-1}$ and $a_{k}\geq a_{i-1}$, $\forall i\leq k\leq i^{*}(j)-1$, so $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$, (3) Since $\aver ij\geq a_{i-1}$, $\forall2\leq i\leq i^{*}(j)$ by part (2), clearly $\aver ij\geq\aver{i-1}j$. \end{IEEEproof} \begin{lem} \label{lem:discrete_RBP_monotone}Consider the same setting in Definition \ref{def:BP_discrete} and Algorithm \ref{alg:discrete}. $\aver{i^{*}(j)}j$ as a function of $j$ satisfies the following: (1) $\aver{i^{*}(j)}j>\aver{i^{*}(j+1)}{j+1}$, $\forall k_{2}\leq j<j^{*}$, (2) $\aver{i^{*}(j)}j\leq a_{j+1}$, $\forall j^{*}\leq j\leq p-1$, (3) $\aver{i^{*}(j)}j\leq\aver{i^{*}(j+1)}{j+1}$, $\forall j^{*}\leq j\leq p-1$. \end{lem} \begin{IEEEproof} (1) By definition, $\aver{i^{*}(j)}j>a_{j+1}$, $\forall k_{2}\leq j<j^{*}$, so $\aver{i^{*}(j)}{j+1}<\aver{i^{*}(j)}j$. Therefore, from Lemma \ref{lem:discrete_LBP_monotone}, we know $i^{*}(j+1)\leq i^{*}(j)$.
Since $\forall k\leq i^{*}(j)$, $a_{k}\leq a_{i^{*}(j)}$, $\aver{i^{*}(j+1)}{j+1}\leq\aver{i^{*}(j)}{j+1}<\aver{i^{*}(j)}j$. (2) By definition, $\aver{i^{*}(j^{*})}{j^{*}}\leq a_{j^{*}+1}$. Suppose $\exists j$, $j^{*}<j\leq p-1$, $\aver{i^{*}(j)}j>a_{j+1}$, then $\aver{i^{*}(j)}{j^{*}}>a_{j^{*}+1}\geq\aver{i^{*}(j^{*})}{j^{*}}$, which contradicts Lemma \ref{lem:discrete_LBP_monotone}. (3) From part (2), $\aver{i^{*}(j)}j\leq a_{j+1}$, $\forall j^{*}\leq j\leq p-1$, so $\aver{i^{*}(j)}j\leq\aver{i^{*}(j)}{j+1}$. From Lemma \ref{lem:discrete_LBP_monotone}, $\aver{i^{*}(j)}{j+1}\leq\aver{i^{*}(j+1)}{j+1}$, so $\aver{i^{*}(j)}j\leq\aver{i^{*}(j+1)}{j+1}$, $\forall j^{*}\leq j\leq p-1$. \end{IEEEproof} The next lemma is a direct implication of Lemma \ref{lem:discrete_LBP_monotone} and \ref{lem:discrete_RBP_monotone}. \begin{lem} \label{lem:discrete_LRBP_monotone}$\forall j=1,2,\ldots,p$, if $i_{1}\geq1$, $i_{2}>1$ satisfy: $i_{1}=1$ or $\aver{i_{1}}j\geq a_{i_{1}-1}$ and $\aver{i_{2}}j<a_{i_{2}-1}$, then $i_{1}\leq i^{*}(j)<i_{2}$. On the other hand, $\forall j_{1},j_{2}\leq p-1$, with $j_{2}\geq j_{1}+1$, if $\aver{i^{*}(j_{2})}{j_{2}}\leq a_{j_{2}+1}$ and $\aver{i^{*}(j_{1})}{j_{1}}>a_{j_{1}+1}$, then $j_{1}<j^{*}\leq j_{2}$. \end{lem} \subsection{\label{subsec:appendix_prox}Proof of Proposition \ref{prop:prox}} \subsubsection{Limiting Form of Algorithm \ref{alg:discrete}} As mentioned before, our strategy is to derive the asymptotics of $\tprox_{\vlambda^{(p)}}(\vy^{(p)})$ via the limit of Algorithm \ref{alg:discrete} as $p\to\infty$. The construction of this limiting form is already given in Algorithm \ref{alg:conti} in Sec. \ref{sec:Asymptotic-Results} and here we will give detailed descriptions of the whole procedure. We first extend several notions defined in Definition \ref{def:mono_segment} and \ref{def:BP_discrete} to their limiting forms. 
The following notations will be adopted later in our statements: \begin{enumerate} \item $(x)_{+}\bydef\max\left\{ 0,x\right\} $. \item $\diff(\bdp_{0}^{+})=\lim_{y\to\bdp_{0}^{+}}\diff(y)$ and $\diff(\bdp_{0}^{-})=\lim_{y\to\bdp_{0}^{-}}\diff(y)$, where $\diff(y)$ is a function defined on $\R$. \item $\text{Int}(\cdot)$ and $\text{Bd}(\cdot)$ denote the interior and boundary of a set in $\R$, respectively. \end{enumerate} We first extend Definitions \ref{def:mono_segment} and \ref{def:BP_discrete} to their continuous counterparts. \begin{defn} \label{def:mono_interval}For a function $\diff(y)$ on $y\geq0$, we call $[\bdp_{L},\bdp_{R}]$ a \emph{maximal decreasing interval }(MDI) if $\diff(y)$ is strictly decreasing on $[\bdp_{L},\bdp_{R}]$, while, for some $\varepsilon>0$, it is non-decreasing on $(\bdp_{L}-\veps,\bdp_{L})$ (if $\bdp_{L}>0$) and on $(\bdp_{R},\bdp_{R}+\varepsilon)$ (if $\bdp_{R}<\infty$). One special case is when $\diff(y)$ is discontinuous at $\bdp_{0}$ and $\diff(\bdp_{0}^{+})<\diff(\bdp_{0}^{-})$. We will refer to $[\bdp_{0},\bdp_{0}]$, which is a single point of discontinuity, as an MDI. On the other hand, $(\bdp_{L},\bdp_{R})$ is called a \emph{maximal non-decreasing interval }(MNDI) if $\diff(y)$ is non-decreasing on $(\bdp_{L},\bdp_{R})$ and strictly decreasing on $[\bdp_{L}-\veps,\bdp_{L}]$ (if $\bdp_{L}>0$) and $[\bdp_{R},\bdp_{R}+\varepsilon]$ (if $\bdp_{R}<\infty$) for some $\varepsilon>0$. \begin{comment} On the other hand, $[\bdp_{L},\bdp_{R}]$ (or $[\bdp_{L},\bdp_{R})$) is called a \emph{maximal non-decreasing interval }(MNDI) if (1) $\bdp_{L}=\bdp_{R}=0$ and $0\in\text{MDI}$ or (2) $\diff(x)$ is non-decreasing on $(\bdp_{L},\bdp_{R}]$ (or $(\bdp_{L},\bdp_{R})$) and not non-decreasing on $[[-\varepsilon+\bdp_{L}]^{+},\,\bdp_{L}]$ and $(\bdp_{R},\,\bdp_{R}+\varepsilon)$ or ($[[-\varepsilon+\bdp_{L}]^{+},\,\bdp_{L}]$ and $[\bdp_{R},\,\bdp_{R}+\varepsilon)$) for any $\varepsilon>0$.
\end{comment} \end{defn} \begin{defn} \label{def:BP_cont} \begin{comment} Consider a function $\diff(y)$ on $y\geq0$. Suppose $I_{1}=[0,\bdp_{1}]$ (or $I_{1}=[0,\bdp_{1})$), $I_{3}=[\bdp_{2},\bdp_{3}]$ (or $I_{3}=[\bdp_{2},\bdp_{3})$) are two consecutive MNDIs and $I_{2}=[\bdp_{1},\bdp_{2}]$ is the sandwiched MDI. Here $\bdp_{3}$ can be $+\infty$. Let $Y$ be a non-negative random variable. For $w\in I_{3}$, construct the following set: \end{comment} Consider a function $\diff(y)$ on $y\geq0$. Suppose $I_{1}=(0,\bdp_{1})$, $I_{3}=(\bdp_{2},\bdp_{3})$ are two consecutive MNDIs and $I_{2}=[\bdp_{1},\bdp_{2}]$ is the sandwiched MDI. Let $Y$ be a non-negative random variable. For $w\in I_{3}$, construct the following set: \[ \mathcal{V}_{R,\veps}(w)=\left\{ v|v\in I_{1},\conde(v,w;\diff(y))<\diff(v^{-})-\veps\right\} \] where \begin{equation} \conde(v,w;\diff(y))\bydef\E_{Y}(\diff(y)|y\in[v,w]\cap(I_{T}\cup\left\{ 0\right\} ))\label{eq:conde} \end{equation} with \begin{equation} I_{T}\bydef I_{1}\cup I_{2}\cup I_{3}\label{eq:IT} \end{equation} Define $v_{R,\veps}^{*}(w)$ as: \[ v_{R,\veps}^{*}(w)=\begin{cases} \inf_{v\in\mathcal{V}_{R,\veps}(w)}v & \mathcal{V}_{R,\veps}(w)\neq\emptyset\\ \bdp_{1} & \mathcal{V}_{R,\veps}(w)=\emptyset \end{cases} \] Then the left balance point of $w$ is defined as: $v^{*}(w)\bydef v_{R,0}^{*}(w)$. Similarly, we can define the right balance point for $\diff(y)$, when $y\in I_{T}$.
First, construct the following set: \[ \mathcal{W}_{L,\veps}=\left\{ w|w\in I_{3},\conde(v^{*}(w),w)>\diff(w^{+})+\veps\right\} \] Correspondingly, define the point: \begin{equation} w_{L,\veps}^{*}=\begin{cases} \sup_{w\in\mathcal{W}_{L,\veps}}w & \mathcal{W}_{L,\veps}\neq\emptyset\\ \bdp_{2} & \mathcal{W}_{L,\veps}=\emptyset \end{cases}\label{eq:w_L_eps_star} \end{equation} The right balance point of $\diff(y)$ over $I_{T}$ is defined as: $w^{*}\bydef w_{L,0}^{*}$ and we denote $v^{*}\bydef v^{*}(w^{*}).$ \end{defn} The conditional expectation function defined in (\ref{eq:conde}) is a crucial quantity in constructing the limiting form of $\tproxl$. It can be viewed as the continuous counterpart of $\aver ij$. In Lemmas \ref{lem:cont_LBP_monotone} and \ref{lem:cont_RBP_monotone} below, we summarize the properties of $\conde(v,w;\diff(y))$, which parallel Lemmas \ref{lem:discrete_LBP_monotone} and \ref{lem:discrete_RBP_monotone}. \begin{lem} \label{lem:condE_continuity}Consider the same setting in Definition \ref{def:BP_cont}. If $\diff(y)$ is continuous over $I_{1}$ and $I_{3}$, then $\conde(v,w;\diff)$ is right-continuous w.r.t. $w$ and left-continuous w.r.t. $v$; $\conde(v^{*}(w),w;\diff)$ as a function of $w$ is right-continuous. \end{lem} \begin{IEEEproof} This result follows directly from the continuity of $\diff(y)$ and the fact that $F_{|y|}$ is right-continuous. \end{IEEEproof} \begin{lem} \label{lem:cont_LBP_monotone}Consider the same setting in Definition \ref{def:BP_cont}. Suppose $\diff(y)$ is continuous in $I_{1}$ and $I_{3}$. For a fixed $w\in I_{3}$, if $v^{*}(w)\in I_{1}$, $\conde(v,w;\diff)$ as a function of $v\in I_{1}$ satisfies the following: (1) non-decreasing as $v$ decreases on $(v^{*}(w),\bdp_{1})$, (2) $\conde(v^{*}(w),w;\diff)=\diff(v^{*}(w))$, (3) $\forall0<v\leq v^{*}(w)$, $\conde(v,w;\diff)\geq\diff(v)$, (4) non-increasing as $v$ decreases on $(0,v^{*}(w))$.
\end{lem} \begin{IEEEproof} (1) $\forall v>v^{*}(w)$, $\diff(v^{-})>\conde(v,w;\diff)$. Since $\diff(y)$ is continuous on $I_{1}$, for a small enough $\delta$, $\diff(y)>\conde(v,w;\diff)$ on $(v-\delta,v)$. Therefore, $\conde(v,w;\diff)$ increases as $v$ decreases on $(v-\delta,v]$. This holds $\forall v>v^{*}(w)$, so $\conde(v,w;\diff)$ is non-decreasing as $v$ decreases on $(v^{*}(w),\bdp_{1})$. (2) Since $\diff(v^{-})>\conde(v,w;\diff(y))$, $\forall v\in(v^{*}(w),\bdp_{1})$, we know that $\diff(v^{*}(w)^{-})\geq\conde(v^{*}(w),w;\diff)$. Suppose $\diff(v^{*}(w)^{-})>\conde(v^{*}(w),w;\diff)$, then by the continuity of $\diff(y)$, $\exists v_{0}<v^{*}(w)$, s.t. $\diff(v_{0})>\conde(v_{0},w;\diff)$, which contradicts the definition of $v^{*}(w)$. Therefore, we must have $\diff(v^{*}(w)^{-})=\conde(v^{*}(w),w;\diff)$. By continuity of $\diff(y)$ on $(0,\bdp_{1})$, $\diff(v^{*}(w))=\diff(v^{*}(w)^{-})=\conde(v^{*}(w),w;\diff)$. (3) Since $\diff(v^{*}(w))=\conde(v^{*}(w),w;\diff)$ and $\diff(v^{*}(w))\geq\diff(v)$, $\forall v\leq v^{*}(w)$, we must have $\conde(v,w;\diff)\geq\diff(v)$. (4) $\forall v_{1},v_{2}\in(0,v^{*}(w))$ and $v_{1}<v_{2}$, from part (3) we know $\conde(v_{2},w;\diff)\geq\diff(v_{2})$. Besides, $\diff(v_{2})\geq\diff(y)$, $\forall y\in[v_{1},v_{2}]$, so $\conde(v_{2},w;\diff)\geq\conde(v_{1},w;\diff)$. \end{IEEEproof} \begin{lem} \label{lem:cont_RBP_monotone}Consider the same setting in Definition \ref{def:BP_cont}. Suppose $\diff(y)$ is continuous in $I_{1}$ and $I_{3}$. If $v^{*}\in(0,\bdp_{1})$ and $w^{*}\in(\bdp_{2},\bdp_{3})$, then $\conde(v^{*}(w),w;\diff)$ as a function of $w$ satisfies the following: (1) non-increasing on $(\bdp_{2},w^{*})$, (2) $\conde(v^{*},w^{*};\diff)=\diff(w^{*})$, (3) $\forall w\in(w^{*},\bdp_{3})$, $\conde(v^{*}(w),w;\diff)\leq\diff(w)$, (4) non-decreasing on $(w^{*},\bdp_{3})$. \end{lem} \begin{IEEEproof} (1) $\forall w<w^{*}$, $\conde(v^{*}(w),w;\diff)>\diff(w^{+})=\diff(w)$.
By continuity of $\diff(y)$ on $I_{3}$, for small enough $\delta>0$, $\diff(y)\leq\conde(v^{*}(w),w;\diff)$ when $y\in(w,w+\delta)$, so $\conde(v^{*}(w),w;\diff)\geq\conde(v^{*}(w),w+\delta;\diff)$. Therefore, $\conde(v^{*}(w),w+\delta;\diff)\leq\diff(v^{*}(w))$, which indicates $v^{*}(w+\delta)\leq v^{*}(w)$ and hence $\diff(v^{*}(w+\delta))\leq\diff(v^{*}(w))$. Since $\conde(v^{*}(w+\delta),w+\delta;\diff)=\diff(v^{*}(w+\delta))$ by Lemma \ref{lem:cont_LBP_monotone}, we know $\conde(v^{*}(w+\delta),w+\delta;\diff)\leq\diff(v^{*}(w))$. On the other hand, $\diff(v^{*}(w))=\conde(v^{*}(w),w;\diff)$, so we obtain that $\conde(v^{*}(w+\delta),w+\delta;\diff)\leq\conde(v^{*}(w),w;\diff)$. (2) Since $\conde(v^{*}(w),w;\diff)>\diff(w)$, $\forall w\in(\bdp_{2},w^{*})$, we have $\conde(v^{*},w^{*};\diff)\geq\diff(w^{*})$. If $\conde(v^{*},w^{*};\diff)>\diff(w^{*})$, by continuity of $\conde(v^{*}(w),w;\diff)$ from Lemma \ref{lem:condE_continuity} and $\diff(y)$, $\exists w_{1}>w^{*}$ s.t. $\conde(v^{*}(w_{1}),w_{1};\diff)>\diff(w_{1})$, which contradicts the definition of $w^{*}$. Therefore, $\conde(v^{*},w^{*};\diff)=\diff(w^{*})$. (3) Suppose $\exists w>w^{*}$ such that $\conde(v^{*}(w),w;\diff)>\diff(w)$. Then $\conde(v^{*}(w),w^{*};\diff)>\diff(w^{*})=\conde(v^{*}(w^{*}),w^{*};\diff)$, which contradicts Lemma \ref{lem:cont_LBP_monotone}. Therefore, $\conde(v^{*}(w),w;\diff)\leq\diff(w)$. (4) $\forall w_{1},w_{2}\in(w^{*},\bdp_{3})$, $w_{1}<w_{2}$, we already know from part (3) that $\conde(v^{*}(w_{1}),w_{1};\diff)\leq\diff(w_{1})$. Besides, $\diff(w)\geq\diff(w_{1})$, for $w\in[w_{1},w_{2}]$, so $\conde(v^{*}(w_{1}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{1};\diff)$. According to Lemma \ref{lem:cont_LBP_monotone}, we have $\conde(v^{*}(w_{2}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{2};\diff)$, so we get $\conde(v^{*}(w_{2}),w_{2};\diff)\geq\conde(v^{*}(w_{1}),w_{1};\diff)$.
\end{IEEEproof} From Lemmas \ref{lem:cont_LBP_monotone} and \ref{lem:cont_RBP_monotone}, we can directly verify that $v^{*}$ and $w^{*}$ in Definition \ref{def:BP_cont} can be written as the solution of the following min-max optimization problem: \[ \min_{w\in I_{3}}\max_{v\in I_{1}}\conde(v,w;\diff). \] Using $v^{*}$ and $w^{*}$ in Definitions \ref{def:mono_interval} and \ref{def:BP_cont}, we obtain Algorithm \ref{alg:conti}. Next we are going to prove that this limiting form is exactly $\prox(y;F_{y},F_{\lambda})$ in Proposition \ref{prop:prox}. Before doing so, we need some additional properties of the functions used in Algorithm \ref{alg:conti}. In Algorithm \ref{alg:conti}, $\diff_{t}(y;F_{y},F_{\lambda})$ plays a central role. We first study some of its properties, which will be used in the perturbation analysis in Sec. \ref{subsec:asym_sepa_RCS}. Here, $t$ is the index of the $\mathtt{WHILE}$ $\mathtt{LOOP}$, and $F_{y}$, $F_{\lambda}$ indicate the dependence of $\diff_{t}(y;F_{y},F_{\lambda})$ on the CDFs of $Y$ and $\lambda$. In the statement of the following results, for notational simplicity, we will sometimes drop $t$, $F_{y}$, $F_{\lambda}$ when there is no confusion. \begin{lem} \label{lem:G0_continuity}$\diff_{0}(y)$ defined in Algorithm \ref{alg:conti} satisfies the following: (1) $\diff_{0}(y)$ is continuous in the interior of any MNDI. (2) If $\diff_{0}(y)$ is discontinuous at $y_{0}$, then $\diff_{0}(y_{0}^{-})>\diff_{0}(y_{0}^{+})$. \end{lem} \begin{IEEEproof} (1) By definition of $\lambda(y)$ in Algorithm \ref{alg:conti}, we can easily show that $\lambda(y_{0}^{-})$ always exists for $y_{0}>0$ and $\lambda(y_{0}^{+})$ always exists $\forall y_{0}\geq0$. If $y_{0}\in\text{Int}(\text{MNDI})$, then $\lambda(y_{0}^{-})\geq\lambda(y_{0})\geq\lambda(y_{0}^{+})$; otherwise, $\diff_{0}(y)$ would be strictly decreasing in a sufficiently small neighborhood of $y_{0}$.
On the other hand, $\lambda(y)$ is monotonically non-decreasing, so $\lambda(y_{0}^{-})\leq\lambda(y_{0})\leq\lambda(y_{0}^{+})$. As a result, we have $\lambda(y_{0}^{-})=\lambda(y_{0})=\lambda(y_{0}^{+})$. Since $\diff_{0}(y)=y-\lambda(y)$, $\diff_{0}(y)$ is continuous in the interior of any MNDI. (2) From part (1), we know $\diff_{0}(y)$ is only discontinuous in MDI, so if $y=y_{0}$ is a discontinuous point, $y_{0}\in\text{MDI}$. Therefore, we have $\diff_{0}(y_{0}^{-})\geq\diff_{0}(y_{0})\geq\diff_{0}(y_{0}^{+})$, but $\diff_{0}(y)$ is discontinuous at $y=y_{0}$, so we must have $\diff_{0}(y_{0}^{-})>\diff_{0}(y_{0})$ or $\diff_{0}(y_{0})>\diff_{0}(y_{0}^{+})$; in either case, $\diff_{0}(y_{0}^{-})>\diff_{0}(y_{0}^{+})$. \end{IEEEproof} \begin{lem} \label{lem:v_star_continuity}Consider the same setting in Definition \ref{def:BP_cont}. If $\bdp_{3}>\bdp_{2}$, then $w^{*}>\bdp_{2}$; if $\bdp_{1}>0$, then $v^{*}<\bdp_{1}$. Furthermore, if $v^{*}>0$, then $\conde(v^{*},w^{*};\diff_{0})=\diff_{0}(v^{*})$, where $\diff_{0}$ is given in Algorithm \ref{alg:conti}. \end{lem} \begin{IEEEproof} First, $\diff_{0}(\bdp_{1}^{-})>\conde(\bdp_{1},\bdp_{2};\diff_{0})>\diff_{0}(\bdp_{2}^{+})$, so for small enough $\delta>0$, $\conde(\bdp_{1},\bdp_{2}+\delta;\diff_{0})<\diff_{0}(\bdp_{1}^{-})$, which means $v^{*}(\bdp_{2}+\delta)<\bdp_{1}$. Besides, $\conde(v^{*}(\bdp_{2}+\delta),\bdp_{2}+\delta;\diff_{0})>\diff_{0}(\bdp_{2}+\delta)$, so $w^{*}>\bdp_{2}$. If $w^{*}<\bdp_{3}$, from Lemma \ref{lem:cont_RBP_monotone}, we know $\conde(v^{*},w^{*};\diff_{0})\leq\conde(v^{*}(\bdp_{2}+\delta),\bdp_{2}+\delta;\diff_{0})$, so $\diff_{0}(v^{*})\leq\diff_{0}(v^{*}(\bdp_{2}+\delta))$, which indicates that $v^{*}\leq v^{*}(\bdp_{2}+\delta)<\bdp_{1}$; if $w^{*}=\bdp_{3}$, since $\forall w\in(\bdp_{2}+\delta,\bdp_{3})$, $\conde(v^{*}(w),w;\diff_{0})>\diff_{0}(w)$ and $v^{*}(w)\leq v^{*}(\bdp_{2}+\delta)$, we also have $v^{*}\leq v^{*}(\bdp_{2}+\delta)<\bdp_{1}$.
Therefore, if $v^{*}>0$, we have $v^{*}\in\text{Int}(I_{1})$, and from Lemma \ref{lem:cont_LBP_monotone} we get $\diff_{0}(v^{*})=\conde(v^{*},w^{*};\diff_{0})$. \end{IEEEproof} \begin{lem} \label{lem:Gt_continuity}In each $\mathtt{WHILE}$ $\mathtt{LOOP}$ of Algorithm \ref{alg:conti}, we have for $t\geq1$: (1) $\diff_{t}(y)$ is continuous at $y=0$ and in the interior of any MNDI, (2) If $\diff_{t}(y)$ is discontinuous at $y_{0}$, then $\diff_{t}(y_{0}^{-})>\diff_{t}(y_{0}^{+})$. \end{lem} \begin{IEEEproof} \begin{comment} We first show that $\diff_{0}(y;F_{y},F_{\lambda})$, $y\geq0$ is continuous in the interior of any MNDI. For any $y_{0}\geq0$ and $\delta>0$, by definition of $\lambda(y)$ in Algorithm \ref{alg:conti}, we have $F_{\lambda}^{-1}(F_{|y|}(y_{0}-\delta))\leq\lambda(y_{0})\leq F_{\lambda}^{-1}(F_{|y|}(y_{0}+\delta))$. Suppose $y_{0}$ is a discontinuous point, then we must have $F_{\lambda}^{-1}(F_{|y|}(y_{0}-\delta))<\lambda(y_{0})$ or $\lambda(y_{0})<F_{\lambda}^{-1}(F_{|y|}(y_{0}+\delta))$, $\forall\delta>0$, so we have $\lim_{\delta\to0}\diff_{0}(y_{0}-\delta;F_{y},F_{\lambda})>\diff_{0}(y_{0};F_{y},F_{\lambda})$ or $\diff_{0}(y_{0};F_{y},F_{\lambda})>\lim_{\delta\to0}\diff_{0}(y_{0}+\delta;F_{y},F_{\lambda})$, which contradicts the fact that $y_{0}$ is inside the interior of MNDI. \end{comment} {} Consider the $t=1$ case. We first show that $\diff_{1}(y)$ is continuous at $y=v^{*}$. If $\bdp_{1}=0$, then $v^{*}=0$ and $\diff_{1}(0)=\diff_{1}(0^{+})=\conde(0,w^{*};\diff_{0})$; if $\bdp_{1}>0$, from Lemma \ref{lem:v_star_continuity}, $v^{*}<\bdp_{1}$, so when $v^{*}>0$, $\diff_{1}((v^{*})^{-})=\diff_{0}(v^{*})=\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(v^{*})$; when $v^{*}=0$, $\diff_{1}(0)=\diff_{1}(0^{+})=\conde(0,w^{*};\diff_{0})$.
On the other hand, from Lemma \ref{lem:cont_RBP_monotone} and \ref{lem:v_star_continuity}, we know if $w^{*}\in\text{int}(I_{3})$, $\diff_{1}((w^{*})^{+})=\diff_{0}((w^{*})^{+})=\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(w^{*})$, which means $\diff_{1}(y)$ is continuous at $w^{*}$. Next consider the behavior of $\diff_{1}(y)$ when $w^{*}=\bdp_{3}$. If $w^{*}=\bdp_{3}$ and $\bdp_{3}\notin I_{3}$, then $\diff_{1}(\bdp_{3}^{-})=\conde(v^{*}(\bdp_{3}),\bdp_{3};\diff_{0})\geq\diff_{0}(\bdp_{3}^{-})$ and $\diff_{0}(\bdp_{3}^{-})>\diff_{0}(\bdp_{3})=\diff_{1}(\bdp_{3})$, so $\diff_{1}(v^{*})=\diff_{1}(\bdp_{3}^{-})>\diff_{1}(\bdp_{3})$; if $w^{*}=\bdp_{3}$ and $\bdp_{3}\in I_{3}$, similarly, we can show that $\diff_{1}(v^{*})=\diff_{1}(\bdp_{3})\geq\diff_{0}(\bdp_{3}^{+})=\diff_{1}(\bdp_{3}^{+}).$ Therefore, $\diff_{1}(y)$ is continuous on $[0,\bdp_{3})$, which is the interior of the first MNDI of $\diff_{1}(y)$. Since in Algorithm \ref{alg:conti}, $\diff_{1}(y)=\diff_{0}(y)$ for $y>\bdp_{3}$, from Lemma \ref{lem:G0_continuity}, we get $\diff_{1}(y)$ is continuous in the interior of any MNDI. Besides, from the above derivations, $\diff_{1}(\bdp_{3}^{-})>\diff_{1}(\bdp_{3}^{+})$, if $\diff_{1}(y)$ is discontinuous at $\bdp_{3}$. Therefore, if $\diff_{1}(y)$ is discontinuous at $y_{0}$, then $\diff_{1}(y_{0}^{-})>\diff_{1}(y_{0}^{+})$. By induction, we know this holds for any $t\geq1$. \begin{comment} \begin{IEEEproof} Consider $t=0$ case. First we show that $\conde(v^{*},w^{*};\diff_{0})=\diff_{1}(v^{*};F_{y},F_{\lambda})$. If $v^{*}(w)\in\text{Bd}(I_{1})$, then immediately $\conde(v^{*}(w),w;\diff_{0})=\diff_{1}(v^{*}(w);F_{y},F_{\lambda})$; on the other hand, if $v^{*}(w)\in\text{Int}(I_{1})$, then $\conde(v^{*}(w),w;\diff_{0})\leq\diff_{0}(v^{*}(w);F_{y},F_{\lambda})$ by Definition \ref{def:BP_cont}. 
On the other hand, if $\conde(v^{*}(w),w;\diff_{0})<\diff_{0}(v^{*}(w);F_{y},F_{\lambda})$, then by continuity of $\diff_{0}(v;F_{y},F_{\lambda})$, $\exists v_{L}<v^{*}(w)$ s.t. $\conde(v_{L},w;\diff_{0})<\diff_{0}(v_{L};F_{y},F_{\lambda})$, which contradicts the definition of $v^{*}(w)$. Therefore, $\diff_{0}(v^{*}(w);F_{y},F_{\lambda})=\conde(v^{*}(w),w;\diff_{0})=\diff_{1}(v^{*}(w);F_{y},F_{\lambda})$. Similarly, we can show $\conde(v^{*}(w^{*}),w^{*};\diff_{0})=\diff_{1}(w^{*};F_{y},F_{\lambda})$. Combined with previous part, it leads to $\diff_{1}(v^{*};F_{y},F_{\lambda})=\diff_{1}(w^{*};F_{y},F_{\lambda})=\conde(v^{*},w^{*};\diff_{0})$. Therefore, $\diff_{1}(y;F_{y},F_{\lambda})$ is still continuous in the interior of any MNDI. Besides, it is easy to show $\diff_{1}(y;F_{y},F_{\lambda})$ should be also continuous at $y=0$. Then by induction, the results hold for any $t$. \end{IEEEproof} \end{comment} \end{IEEEproof} \begin{prop} \label{prop:G_Lipschitz}$\forall F_{y},F_{\lambda}$, $\diff(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} satisfies: $0\leq\diff(y_{2};F_{y},F_{\lambda})-\diff(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, $\forall0\leq y_{1}\leq y_{2}$. \end{prop} \begin{IEEEproof} Since the final output $\diff(y;F_{y},F_{\lambda})$ of Algorithm \ref{alg:conti} has no MDI, according to Lemma \ref{lem:Gt_continuity}, $\diff(y;F_{y},F_{\lambda})$ is continuous on $y\geq0$ and $\diff(y;F_{y},F_{\lambda})$ is monotonely non-decreasing. On the other hand, it is not hard to see that $\diff_{0}(y_{2};F_{y},F_{\lambda})-\diff_{0}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$ from Lemma \ref{lem:G0_continuity}. Suppose before the $t$th $\texttt{WHILE}$ $\texttt{LOOP}$, $\diff_{t-1}(y_{2};F_{y},F_{\lambda})-\diff_{t-1}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$. After the $t$th loop, $\diff_{t}(y;F_{y},F_{\lambda})=\diff_{t-1}(y;F_{y},F_{\lambda})$, $y\notin[v^{*},w^{*}]$. 
This means $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, for $y_{1},y_{2}\in[0,v^{*})$ or $y_{1},y_{2}\in(w^{*},\infty)$, by the induction hypothesis. Besides, we have $\diff_{t}(y;F_{y},F_{\lambda})=\diff_{t-1}(v^{*};F_{y},F_{\lambda})$, $y\in[v^{*},w^{*}]$, so $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$ also holds for $y_{1},y_{2}\in[v^{*},w^{*}]$. Since, from Lemma \ref{lem:Gt_continuity}, at any discontinuous point $y_{0}$ of $\diff_{t}(y)$, $\diff_{t}(y_{0}^{-})\geq\diff_{t}(y_{0}^{+})$, we can obtain that $\diff_{t}(y_{2};F_{y},F_{\lambda})-\diff_{t}(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$ for all $0\leq y_{1}\leq y_{2}$. Finally, by induction, we conclude that the result holds for $\diff(y;F_{y},F_{\lambda})$. \end{IEEEproof} \subsubsection{\label{subsec:asym_sepa_RCS}Asymptotic Separability for Regular Converging Sequences} We first prove Proposition \ref{prop:prox} for a special class of converging sequences. Among all converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ with limiting CDFs $F_{y}$ and $F_{\lambda}$, we define: \begin{align} \begin{cases} y_{i}=F_{y}^{-1}(\frac{i}{p+1}) & i=1,2,\ldots,p\\ \lambda_{i}=F_{\lambda}^{-1}(\frac{i}{p+1}) & i=1,2,\ldots,p \end{cases}\label{eq:regu_convseq} \end{align} as the \emph{regular converging sequence} (RCS). The sequence in (\ref{eq:regu_convseq}) possesses the nice property that if $\diff_{0}(y;F_{y},F_{\lambda})$ is decreasing (non-decreasing) over $[\bdp_{L},\bdp_{R}]$, then any sub-sequence $I$ with $\left\{ |y_{i}|\right\} _{i\in I}$ falling within $[\bdp_{L},\bdp_{R}]$ satisfies that $\left\{ \diffi_{0,i}\right\} _{i\in I}$ is decreasing (non-decreasing). This means that the number of MDS of $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is bounded by the number of MDI of $\diff_{0}(y;F_{y},F_{\lambda})$.
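As a numerical aside (our own sanity check, with hypothetical data; not part of the proofs), the simplest no-MDS situation can be verified directly: when $|y|_{(i)}-\lambda_{i}$ is non-decreasing, the prox should act coordinate-wise as $\sgn(y_{i})\max(0,|y_{i}|-\lambda_{\mathrm{rank}(i)})$. Since the proximal objective is convex, it suffices to check that random perturbations of this candidate never decrease the objective:

```python
import random

def slope_objective(x, y, lam):
    """0.5 * ||y - x||^2 + sum_i lam_i * |x|_(i), where both lam and the
    order statistics |x|_(i) are arranged in non-decreasing order."""
    fit = 0.5 * sum((yi - xi) ** 2 for yi, xi in zip(y, x))
    pen = sum(l * a for l, a in zip(lam, sorted(abs(v) for v in x)))
    return fit + pen

y, lam = [-3.0, 1.0, 2.0], [0.5, 1.0, 1.5]
# |y| sorted is (1, 2, 3) and g_i = |y|_(i) - lam_i = (0.5, 1.0, 1.5) is
# non-decreasing, so the prox should be separable: coordinate i receives
# sgn(y_i) * max(0, |y_i| - lam_{rank(i)}).
xhat = [-1.5, 0.5, 1.0]
random.seed(0)
base = slope_objective(xhat, y, lam)
for _ in range(2000):
    pert = [xi + random.uniform(-0.3, 0.3) for xi in xhat]
    assert slope_objective(pert, y, lam) >= base - 1e-12
```

Because the objective is convex, no random perturbation beating the candidate is consistent with (though of course weaker than) global optimality of the separable formula.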
As a result, the implementation of Algorithm \ref{alg:discrete} under RCS is simpler, since the number of $\mathtt{WHILE}$ $\mathtt{LOOP}$ iterations, which equals the number of MDS, is bounded. However, for other converging sequences with the same $F_{y}$ and $F_{\lambda}$, the number of MDS may be much larger, which makes the analysis much more complicated. The simplest case is when $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing over $[0,\infty)$, which means there is no MDS in $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$. Then the asymptotic separability in Proposition \ref{prop:prox} can be easily verified: \begin{lem} \label{lem:0MDI_asym_sepa}Consider RCS $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ with limiting CDFs $F_{y}$ and $F_{\lambda}$, and assume $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} has no MDI. Then asymptotic separability holds. \end{lem} \begin{IEEEproof} Since by assumption $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing, $\left\{ |y|_{(i)}-\lambda_{i}\right\} _{1\leq i\leq p}$ has no MDS. Therefore, from Algorithm \ref{alg:discrete}, $[\tprox(\vy)]_{i}=\sgn(y_{i})\max(0,\,|y|_{(j(i))}-\lambda_{j(i)})$. Also, by Lemma \ref{lem:G0_continuity}, we know $F_{y}$ should be continuous here, so we have $\lambda_{i}=F_{\lambda}^{-1}(F_{|y|}(|y|_{(i)}))$. Therefore, from Algorithm \ref{alg:conti}, $\prox(y_{i};F_{y},F_{\lambda})=\sgn(y_{i})\max(0,\,|y|_{(j(i))}-F_{\lambda}^{-1}(F_{|y|}(|y|_{(j(i))})))=[\tprox_{\vlambda}(\vy)]_{i}$, $\forall i=1,2,\ldots,p$. \end{IEEEproof} \begin{comment} To prove asymptotic separability for RCS in (\ref{eq:regu_convseq}), we first consider two special distributions of $Y$ and $\lambda$ with $F_{y}$ and $F_{\lambda}$ satisfying: (1) $F_{y}$ is continuous and $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} has no MDI, (2) there exists exactly one MDI in $G(y)$. These two cases will be the building blocks for the proof of general distributions.
In the following, we first prove the asymptotic separability of $\tproxl(\vy^{(p)})$ for RCS and then generalize it to all converging sequences with the same limiting distribution. We first define some quantities that will be used in proving convergence results. \end{comment} A more complicated case is when there exists exactly one MDI in $\diff_{0}(y;F_{y},F_{\lambda})$. To analyze this case, we need some auxiliary quantities. \begin{defn} \label{def:BP_perturb}Consider a function $\diff(y)$, $y\geq0$, with $I_{1}$, $I_{2}$ and $I_{3}$ the same as in Definition \ref{def:BP_cont}. $Y$ is a non-negative random variable. For $w\in I_{3}$, construct the following set: \[ \mathcal{V}_{L,\veps}(w)=\left\{ v|v\in I_{1},\diff(v)<\conde(v^{*}(w),w;\diff)-\veps\right\} \] Then we define $v_{L,\veps}^{*}(w)$ as: \[ v_{L,\veps}^{*}(w)=\begin{cases} \sup_{v\in\mathcal{V}_{L,\veps}(w)}v & \mathcal{V}_{L,\veps}(w)\neq\emptyset\\ 0 & \mathcal{V}_{L,\veps}(w)=\emptyset \end{cases} \] Similarly, \[ \mathcal{W}_{R,\veps}=\left\{ w|w\in I_{3},\diff(w)>\conde(v^{*},w^{*};\diff)+\veps\right\} \] where $w_{R,\veps}^{*}$ is defined as: \begin{equation} w_{R,\veps}^{*}=\begin{cases} \inf_{w\in\mathcal{W}_{R,\veps}}w & \mathcal{W}_{R,\veps}\neq\emptyset\\ \bdp_{3} & \mathcal{W}_{R,\veps}=\emptyset \end{cases}\label{eq:w_R_eps_star} \end{equation} \end{defn} The quantities $v_{R,\veps}^{*}(w)$, $w_{L,\veps}^{*}$, $v_{L,\veps}^{*}(w)$ and $w_{R,\veps}^{*}$ can be considered as small perturbations of the LBP and RBP in Definition \ref{def:BP_cont}, where $\veps$ measures the magnitude of the perturbation. Next we will show that for any small $\veps$, as $p\to\infty$, within each $\texttt{WHILE}$ $\texttt{LOOP}$ in Algorithm \ref{alg:discrete}, $\diffi_{t,j^{*}}$ and $\diffi_{t,i^{*}}$ will be close to $\diff_{t}(w^{*})$ and $\diff_{t}(v^{*})$, and also $\aver{i^{*}}{j^{*}}\to\conde(v^{*},w^{*})$.
To begin with, we establish some properties of the perturbation-related quantities defined above in Lemmas \ref{lem:LBP_bound}-\ref{lem:RBP_LLN_bound}. For all these lemmas, we assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI, $I_{2}=[\bdp_{1},\bdp_{2}]$, with $I_{1}=(0,\bdp_{1})$ and $I_{3}=(\bdp_{2},\bdp_{3})$ the neighboring MNDIs.
\begin{lem}
\label{lem:LBP_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI. For $w\in I_{3}$, we have $|\conde(v_{\cdot,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$, where ``$\cdot$'' can be either ``R'' or ``L''.
\end{lem}
\begin{IEEEproof}
It is not hard to show $v_{L,\veps}^{*}(w)\leq v^{*}(w)\leq v_{R,\veps}^{*}(w)$ from Lemma \ref{lem:cont_LBP_monotone} and the definitions of $v_{L,\veps}^{*}(w)$ and $v_{R,\veps}^{*}(w)$. We first prove $|\conde(v_{R,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$. There are two different cases: (a) $v_{R,\veps}^{*}(w)=0$ or $v^{*}(w)=\bdp_{1}$: then $v^{*}(w)=v_{R,\veps}^{*}(w)$, so $\diff_{0}(v_{R,\veps}^{*}(w))=\diff_{0}(v^{*}(w))$ and $\conde(v_{R,\veps}^{*}(w),w)=\conde(v^{*}(w),w)$. (b) $v^{*}(w)<\bdp_{1}$ and $v_{R,\veps}^{*}(w)>0$: then $0\leq\diff_{0}(v_{R,\veps}^{*}(w))-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$; otherwise we could find a $v<v_{R,\veps}^{*}(w)$ such that $\conde(v,w)<\diff_{0}(v^{-})-\veps$, which contradicts the definition of $v_{R,\veps}^{*}(w)$. On the other hand, since $v^{*}(w)\leq v_{R,\veps}^{*}(w)$, we have $\conde(v_{R,\veps}^{*}(w),w)\leq\conde(v^{*}(w),w)$ and $\conde(v^{*}(w),w)\leq\diff_{0}(v^{*}(w))\leq\diff_{0}(v_{R,\veps}^{*}(w))$ by Lemma \ref{lem:cont_LBP_monotone}. Therefore, it holds that $0\leq\conde(v^{*}(w),w)-\conde(v_{R,\veps}^{*}(w),w)\leq\veps$.
Next, we prove $|\conde(v_{L,\veps}^{*}(w),w)-\conde(v^{*}(w),w)|\leq\veps$. There are two different cases: (a) $v^{*}(w)=0$ or $v_{L,\veps}^{*}(w)=\bdp_{1}$: then $v_{L,\veps}^{*}(w)=v^{*}(w)$, and hence $\diff_{0}(v_{L,\veps}^{*}(w))=\diff_{0}(v^{*}(w))$ and $\conde(v_{L,\veps}^{*}(w),w)=\conde(v^{*}(w),w)$. (b) $v^{*}(w)>0$ and $v_{L,\veps}^{*}(w)<\bdp_{1}$: if $\diff_{0}(y)$ is discontinuous at $y=0$, then $v^{*}(w)=0$ and hence $v_{L,\veps}^{*}(w)=0$, which implies the result as in case (a); if $\diff_{0}(y)$ is continuous at $y=0$, then from the definition of $v_{L,\veps}^{*}$, we know $0\leq\conde(v^{*}(w),w)-\diff_{0}(v_{L,\veps}^{*}(w))\leq\veps$. Using Lemma \ref{lem:cont_LBP_monotone}, we have $\diff_{0}(v_{L,\veps}^{*}(w))\leq\conde(v_{L,\veps}^{*}(w),w)\leq\conde(v^{*}(w),w)$, so $0\leq\conde(v^{*}(w),w)-\conde(v_{L,\veps}^{*}(w),w)\leq\veps$.
\end{IEEEproof}
\begin{lem}
\label{lem:RBP_bound}Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI.
We have $|\conde(v^{*}(w_{\cdot,\veps}^{*}),w_{\cdot,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$, where ``$\cdot$'' can be either ``R'' or ``L''.
\end{lem}
\begin{IEEEproof}
It is not hard to show that $w_{L,\veps}^{*}\leq w^{*}\leq w_{R,\veps}^{*}$ from the definitions and Lemma \ref{lem:cont_RBP_monotone}. We first prove $|\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$. Consider the following two different cases: (a) $w^{*}=\bdp_{2}$ or $w_{L,\veps}^{*}=\bdp_{3}$: then $w^{*}=w_{L,\veps}^{*}$, so clearly $\diff_{0}(w_{L,\veps}^{*})=\diff_{0}(w^{*})$ and $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})=\conde(v^{*},w^{*})$. (b) $w^{*}>\bdp_{2}$ and $w_{L,\veps}^{*}<\bdp_{3}$: First, $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})\geq\diff_{0}((w_{L,\veps}^{*})^{+})$; otherwise, by Lemma \ref{lem:cont_RBP_monotone}, $w^{*}=w_{L,\veps}^{*}=\bdp_{2}$ or $w^{*}<w_{L,\veps}^{*}$, which leads to a contradiction. Also we have $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\diff_{0}((w_{L,\veps}^{*})^{+})\leq\veps$; otherwise, from Lemmas \ref{lem:cont_RBP_monotone} and \ref{lem:G0_continuity}, we know $\exists w>w_{L,\veps}^{*}$ s.t. $\conde(v^{*}(w),w)-\diff_{0}(w^{+})>\veps$, which contradicts the definition in (\ref{eq:w_L_eps_star}). We can easily show that $\diff_{0}(w_{L,\veps}^{*})\leq\diff_{0}(w^{*})\leq\conde(v^{*},w^{*})$, so $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$. On the other hand, since $\conde(v^{*}(w),w)$ is non-increasing on $[\bdp_{2},w^{*}]$, $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\conde(v^{*},w^{*})\geq0$.
Next we show $|\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})-\conde(v^{*},w^{*})|\leq\veps$; the proof is similar. There are two different cases: (a) $w^{*}=\bdp_{3}$ or $w_{R,\veps}^{*}=\bdp_{2}$: then $\diff_{0}(w_{R,\veps}^{*})=\diff_{0}(w^{*})$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\conde(v^{*},w^{*})$. (b) $w^{*}<\bdp_{3}$ and $w_{R,\veps}^{*}>\bdp_{2}$: we have $0\leq\diff_{0}(w_{R,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$. It can be directly verified from Lemma \ref{lem:cont_RBP_monotone} that $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\leq\diff_{0}(w_{R,\veps}^{*})$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\geq\conde(v^{*},w^{*})$, so we obtain $0\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})-\conde(v^{*},w^{*})\leq\veps$.
\end{IEEEproof}
\begin{lem}
\label{lem:LBP_perturb} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI. For any $w\in I_{3}$, $\exists\veps_{1}\in(0,\veps]$ s.t. if $v_{R,\veps}^{*}(w)\in(0,\bdp_{1})$, then $\conde(v_{R,\veps}^{*}(w),w)<\diff_{0}(v_{R,\veps}^{*}(w))-\veps_{1}$; if $v_{L,\veps}^{*}(w)\in(0,\bdp_{1})$, then $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$.
\end{lem}
\begin{IEEEproof}
If $v_{R,\veps}^{*}(w)\in(0,\bdp_{1})$, by definition of $v_{R,\veps}^{*}(w)$, $\forall v\in(v_{R,\veps}^{*}(w),\bdp_{1})$, $\diff_{0}(v^{-})>\conde(v,w)+\veps$, so $\diff_{0}(v_{R,\veps}^{*}(w))>\E_{Y}(\diff_{0}(y)|y\in(v_{R,\veps}^{*}(w),w])+\veps$ from the continuity of $\diff_{0}(y)$ on $(0,\bdp_{1})$ shown in Lemma \ref{lem:G0_continuity}. Therefore, $\exists0<\veps_{1}\leq\veps$ s.t. $\diff_{0}(v_{R,\veps}^{*}(w)^{-})>\conde(v_{R,\veps}^{*}(w),w)+\veps_{1}$. Also, since $v_{R,\veps}^{*}(w)\in\text{Int}(I_{1})$, $\diff_{0}(v_{R,\veps}^{*}(w))=\diff_{0}(v_{R,\veps}^{*}(w)^{-})$. If $v_{L,\veps}^{*}(w)\in(0,\bdp_{1})$, then $\diff_{0}(v_{L,\veps}^{*}(w))=\conde(v^{*}(w),w)-\veps$ due to the continuity of $\diff_{0}(y)$. Based on this, it is easy to check that $\exists0<\veps_{1}\leq\veps$ s.t. $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$.
\end{IEEEproof}
\begin{lem}
\label{lem:LBP_LLN_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI and that $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCS. For any $w\in[\bdp_{2},\bdp_{3}]$, let $j_{w}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w\right\} $. We have:
\begin{equation}
|\aver{i^{*}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\epsilon_{w}^{(p)},\label{eq:LBP_LLN}
\end{equation}
where $\epsilon_{w}^{(p)}\to0$ as $p\to\infty$.
\end{lem}
\begin{IEEEproof}
Let $i_{L,\veps}(j_{w})=\min_{1\leq i\leq p}\left\{ i\Big||y|_{(i)}\geq v_{L,\veps}^{*}(w)\right\} $ and $i_{R,\veps}(j_{w})=\min_{1\leq i\leq p}\left\{ i\Big||y|_{(i)}\geq v_{R,\veps}^{*}(w)\right\} $. We first prove $i^{*}(j_{w})\geq i_{L,\veps}(j_{w})-1$ for large enough $p$. There are three different scenarios: (a) $i_{L,\veps}(j_{w})=1$: then $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$ holds trivially. (b) $i_{L,\veps}(j_{w})>1$ and $v_{L,\veps}^{*}(w)<\bdp_{1}$: then $i_{L,\veps}(j_{w})\in S_{1}$, due to the fact that $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCSs as in (\ref{eq:regu_convseq}). Also we have $v_{L,\veps}^{*}(w)>0$, otherwise $i_{L,\veps}(j_{w})=1$. According to Lemma \ref{lem:LBP_perturb}, $\exists\veps_{1}\in(0,\veps]$ s.t. $\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v_{L,\veps}^{*}(w),w)-\veps_{1}$. Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, $|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v_{L,\veps}^{*}(w),w)|=\epsilon_{L,w}^{(p)}$, with $\epsilon_{L,w}^{(p)}\to0$ as $p\to\infty$. Therefore, $\diff_{0}(v_{L,\veps}^{*}(w))<\aver{i_{L,\veps}(j_{w})}{j_{w}}$ for large enough $p$. Since $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ is an RCS and $|y|_{(i_{L,\veps}(j_{w})-1)}<v_{L,\veps}^{*}(w)$, we have $\diff_{0}(v_{L,\veps}^{*}(w))\geq\diff_{0}(|y|_{(i_{L,\veps}(j_{w})-1)})=\diffi_{0,i_{L,\veps}(j_{w})-1}$ and hence $\diffi_{0,i_{L,\veps}(j_{w})-1}<\aver{i_{L,\veps}(j_{w})}{j_{w}}$. By Lemma \ref{lem:discrete_LRBP_monotone}, $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})$. (c) $i_{L,\veps}(j_{w})>1$ and $v_{L,\veps}^{*}(w)=\bdp_{1}$: since $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCSs, we can get $i_{L,\veps}(j_{w})\leq\max_{i\in S_{1}}i+1$. Similar to scenario (b), we can also obtain $i^{*}(j_{w})=\max_{i\in S_{1}}i$. Therefore, we have $i_{L,\veps}(j_{w})\leq i^{*}(j_{w})+1$.
Next, we prove $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. We have three different scenarios: (a) $v_{R,\veps}^{*}(w)=\bdp_{1}$: then by definition of $i_{R,\veps}(j_{w})$, we have $i_{R,\veps}(j_{w})\in S_{2}$. Since $i^{*}(j_{w})\in S_{1}$, we have $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. (b) $0<v_{R,\veps}^{*}(w)<\bdp_{1}$: from Lemma \ref{lem:LBP_perturb}, $\diff_{0}(v_{R,\veps}^{*}(w))>\conde(v_{R,\veps}^{*}(w),w)+\veps$, where we have used the continuity of $\diff_{0}(y)$ on $(0,\bdp_{1})$. On the other hand, $|\aver{i_{R,\veps}(j_{w})}{j_{w}}-\conde(v_{R,\veps}^{*}(w),w)|=\epsilon_{R,w}^{(p)}$, with $\epsilon_{R,w}^{(p)}\to0$ as $p\to\infty$, so $\diff_{0}(v_{R,\veps}^{*}(w))>\aver{i_{R,\veps}(j_{w})}{j_{w}}$ for large enough $p$.
If $i_{R,\veps}(j_{w})\in S_{2}$, then $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$ holds trivially; if $i_{R,\veps}(j_{w})\in S_{1}$, then $\diffi_{0,i_{R,\veps}(j_{w})}>\aver{i_{R,\veps}(j_{w})}{j_{w}}$, so using Lemma \ref{lem:discrete_LRBP_monotone} we can obtain $i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$. (c) $v_{R,\veps}^{*}(w)=0$: then by definition $i_{R,\veps}(j_{w})=1$, and it suffices to show $i^{*}(j_{w})=1$. If $\diff_{0}(y)$ is discontinuous at $y=0$, then it is not hard to show $\diffi_{0,1}\in S_{2}$, so $i^{*}(j_{w})=1$; if $\diff_{0}(y)$ is continuous at $y=0$ and $\bdp_{1}=0$, then clearly $i^{*}(j_{w})=1$; if $\diff_{0}(y)$ is continuous at $y=0$ and $\bdp_{1}>0$, then $\diffi_{0,1}\geq\diff_{0}(0)\geq\conde(0,w)+\veps$. Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, $\aver 1{j_{w}}\to\conde(0,w)$ as $p\to\infty$. Therefore, $\diffi_{0,1}\geq\aver 1{j_{w}}+\veps/2$ for large enough $p$. This indicates that $\aver 1{j_{w}}>\aver 2{j_{w}}$, so $\diffi_{0,1}\geq\aver 2{j_{w}}+\veps/2$, and by Lemma \ref{lem:discrete_LRBP_monotone} we get $i^{*}(j_{w})=1$.
Next, we prove (\ref{eq:LBP_LLN}). Since $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ is a converging sequence, we have $|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v_{L,\veps}^{*}(w),w)|=\epsilon_{L,w}^{(p)}$, with $\epsilon_{L,w}^{(p)}\to0$ as $p\to\infty$. From Lemma \ref{lem:LBP_bound}, $|\conde(v^{*}(w),w)-\conde(v_{L,\veps}^{*}(w),w)|\leq\veps$, so
\begin{equation}
|\aver{i_{L,\veps}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\veps+\epsilon_{L,w}^{(p)}.\label{eq:LBP_LLN1}
\end{equation}
Now consider the following three different scenarios: (a) $\diff_{0}(y)$ is discontinuous at $y=0$: since $y=0$ is a point of discontinuity, it is not hard to show $i^{*}(j_{w})=1$ and $v^{*}(w)=0$. Therefore, for large enough $p$, (\ref{eq:LBP_LLN}) holds. (b) $\diff_{0}(y)$ is continuous at $y=0$ with $v_{L,\veps}^{*}(w)=\bdp_{1}$ or $v_{R,\veps}^{*}(w)=0$: in this case, $v_{L,\veps}^{*}(w)=v_{R,\veps}^{*}(w)$. By definition of $i_{L,\veps}(j_{w})$ and $i_{R,\veps}(j_{w})$, we have $i_{L,\veps}(j_{w})=i_{R,\veps}(j_{w})$. Then $i_{L,\veps}(j_{w})-1\leq i^{*}(j_{w})\leq i_{L,\veps}(j_{w})$.
Combining with (\ref{eq:LBP_LLN1}), we get, for large enough $p$,
\begin{equation}
|\aver{i^{*}(j_{w})}{j_{w}}-\conde(v^{*}(w),w)|\leq\veps+\epsilon_{w}^{(p)},\label{eq:LBP_LLN2}
\end{equation}
with $\epsilon_{w}^{(p)}\to0$ as $p\to\infty$. Letting $\veps\to0$ in (\ref{eq:LBP_LLN2}), we obtain (\ref{eq:LBP_LLN}). (c) $\diff_{0}(y)$ is continuous at $y=0$ with $v_{L,\veps}^{*}(w)<\bdp_{1}$ and $v_{R,\veps}^{*}(w)>0$: since $v_{R,\veps}^{*}(w)>0$, by the continuity of $\diff_{0}(y)$ in the interior of $I_{1}$, we have $\conde(v^{*}(w),w)>\diff_{0}(v_{R,\veps}^{*}(w))-\veps$. On the other hand, we have $\conde(v^{*}(w),w)-\veps\leq\diff_{0}(v_{L,\veps}^{*}(w))<\conde(v^{*}(w),w)$. Together these lead to $|\diff_{0}(v_{R,\veps}^{*}(w))-\conde(v^{*}(w),w)|<\veps$ and $|\diff_{0}(v_{L,\veps}^{*}(w))-\conde(v^{*}(w),w)|<\veps$, which indicates that $|\diffi_{0,i}-\conde(v^{*}(w),w)|<\veps$, $\forall i_{L,\veps}(j_{w})\leq i<i_{R,\veps}(j_{w})$. Since $i_{L,\veps}(j_{w})-1\leq i^{*}(j_{w})\leq i_{R,\veps}(j_{w})$, from (\ref{eq:LBP_LLN1}) we know (\ref{eq:LBP_LLN2}) holds for large enough $p$. Letting $\veps\to0$, we conclude that (\ref{eq:LBP_LLN}) holds.
\end{IEEEproof}
\begin{lem}
\label{lem:RBP_LLN_bound} Assume that $\diff_{0}(y;F_{y},F_{\lambda})$ has exactly one MDI.
Assume further that $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ are RCS. We have:
\begin{equation}
|\aver{i^{*}}{j^{*}}-\conde(v^{*},w^{*})|\leq\epsilon_{w^{*}}^{(p)}\label{eq:jstar_LLN}
\end{equation}
where $\epsilon_{w^{*}}^{(p)}\to0$ as $p\to\infty$.
\end{lem}
\begin{IEEEproof}
In the rest of the proof, we use $\conde(v,w)$ to denote $\conde(v,w;\diff_{0})$. Let $j_{R,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w_{R,\veps}^{*}\right\} $, where $w_{R,\veps}^{*}$ is defined as before. We first show $j^{*}\leq j_{R,\veps}$ for large enough $p$. If $w_{R,\veps}^{*}=\bdp_{3}$, this is trivial since $j_{R,\veps}=p$ in this case; if $w_{R,\veps}^{*}<\bdp_{3}$, then $\calW_{R,\veps}\neq\emptyset$. From the continuity of $\diff_{0}(y)$ shown in Lemma \ref{lem:G0_continuity} and the definition of $w_{R,\veps}^{*}$ in (\ref{eq:w_R_eps_star}), $\diff_{0}(w_{R,\veps}^{*})=\conde(v^{*},w^{*})+\veps$. Therefore, $\conde(v^{*},w_{R,\veps}^{*})\geq\conde(v^{*},w^{*})$. Consider the two different cases: (a) $\conde(v^{*},w_{R,\veps}^{*})=\conde(v^{*},w^{*})$: then $v^{*}(w_{R,\veps}^{*})=v^{*}$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\conde(v^{*},w^{*})$, so
\begin{equation}
\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})=\diff_{0}(w_{R,\veps}^{*})-\veps.\label{eq:RBPR_jump}
\end{equation}
From Lemma \ref{lem:LBP_LLN_bound}, we know that for large enough $p$,
\begin{equation}
|\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}-\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})|\leq\epsilon_{w_{R,\veps}^{*}}^{(p)}\label{eq:RBPR_LLN}
\end{equation}
with $\epsilon_{w_{R,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$. From (\ref{eq:RBPR_jump}) and (\ref{eq:RBPR_LLN}), we know $\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}<\diff_{0}(w_{R,\veps}^{*})\leq\diffi_{0,j_{R,\veps}+1}$, which indicates $j^{*}\leq j_{R,\veps}$ by Lemma \ref{lem:discrete_LRBP_monotone}.
(b) $\conde(v^{*},w_{R,\veps}^{*})>\conde(v^{*},w^{*})$: then $v^{*}(w_{R,\veps}^{*})>v^{*}$ and $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})>\conde(v^{*},w^{*})$ from Lemma \ref{lem:cont_RBP_monotone}. Let $j_{M,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w^{*}\right\} $; similar to (\ref{eq:RBPR_LLN}), we have
\[
|\aver{i^{*}(j_{M,\veps})}{j_{M,\veps}}-\conde(v^{*},w^{*})|\leq\epsilon_{w_{M,\veps}^{*}}^{(p)}
\]
with $\epsilon_{w_{M,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$. Therefore, combining with (\ref{eq:RBPR_LLN}), we get that for large enough $p$, $\aver{i^{*}(j_{M,\veps})}{j_{M,\veps}}<\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}$. This indicates that $j^{*}\leq j_{R,\veps}$, because from Lemma \ref{lem:discrete_LRBP_monotone} we can see that if $j<j^{*}$, then $\aver{i^{*}(j-1)}{j-1}>\aver{i^{*}(j)}j$.
Next, let $j_{L,\veps}=\max_{1\leq j\leq p}\left\{ j\Big||y_{j}|\leq w_{L,\veps}^{*}\right\} $; we prove $j^{*}\geq j_{L,\veps}$. If $w_{L,\veps}^{*}=\bdp_{2}$, then $j_{L,\veps}\leq\min_{j\in S_{3}}j$ and, since $j^{*}\in S_{3}$, the claim is trivial; if $w_{L,\veps}^{*}=\bdp_{3}$, then $j_{L,\veps}=p$ and $\exists0<\veps_{2}\leq\veps$ s.t. $\conde(v^{*}(\bdp_{3}),\bdp_{3})>\diff_{0}(\bdp_{3})+\veps_{2}$.
From Lemma \ref{lem:LBP_LLN_bound}, we know $|\aver{i^{*}(p)}p-\conde(v^{*}(w),w)|\leq\epsilon_{\bdp_{3}}^{(p)}$ with $\epsilon_{\bdp_{3}}^{(p)}\to0$ as $p\to\infty$. Hence, letting $w=\bdp_{3}$, we obtain $\aver{i^{*}(p)}p>\diff_{0}(\bdp_{3})+\veps_{2}/2\geq\diffi_{0,p}+\veps_{2}/2$ for large enough $p$. Therefore, we can get $j^{*}=p=j_{L,\veps}$ from Lemma \ref{lem:discrete_LRBP_monotone}; if $\bdp_{3}>w_{L,\veps}^{*}>\bdp_{2}$, then from Lemma \ref{lem:LBP_perturb}, $\exists\veps_{3}<\veps$ s.t. $\veps_{3}<\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\diff_{0}(w_{L,\veps}^{*})\leq\veps$. From Lemma \ref{lem:LBP_LLN_bound}, we can get:
\begin{equation}
|\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})|\leq\epsilon_{w_{L,\veps}^{*}}^{(p)}\label{eq:RBPL_LLN}
\end{equation}
with $\epsilon_{w_{L,\veps}^{*}}^{(p)}\to0$ as $p\to\infty$, and hence $\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\veps_{3}/2>\diff_{0}(w_{L,\veps}^{*})\geq\diffi_{0,j_{L,\veps}}$. Then we get $j^{*}\geq j_{L,\veps}$ from Lemma \ref{lem:discrete_LRBP_monotone}. In addition, we have:
\begin{equation}
|\aver{i^{*}(j_{L,\veps})}{j_{L,\veps}}-\conde(v^{*},w^{*})|\leq\veps+\epsilon_{w_{L,\veps}^{*}}^{(p)}\label{eq:J1_LLN}
\end{equation}
by combining (\ref{eq:RBPL_LLN}) and Lemma \ref{lem:RBP_bound}, and similarly,
\begin{equation}
|\aver{i^{*}(j_{R,\veps})}{j_{R,\veps}}-\conde(v^{*},w^{*})|\leq\veps+\epsilon_{w_{R,\veps}^{*}}^{(p)}.\label{eq:j2_LLN}
\end{equation}
Now we are ready to prove (\ref{eq:jstar_LLN}). Consider the following three different scenarios: (1) $w_{L,\veps}^{*}=\bdp_{3}$: we have $w^{*}=\bdp_{3}$ and $j^{*}=p$ as shown above. Then from Lemma \ref{lem:LBP_LLN_bound}, we can get (\ref{eq:jstar_LLN}). (2) $w_{R,\veps}^{*}=\bdp_{2}$: we have $w^{*}=\bdp_{2}$ and $j^{*}=j_{\bdp_{2}}$. From Lemma \ref{lem:LBP_LLN_bound}, we can get (\ref{eq:jstar_LLN}).
(3) $\bdp_{2}\leq w_{L,\veps}^{*}<\bdp_{3}$ and $\bdp_{2}<w_{R,\veps}^{*}\leq\bdp_{3}$: we have $\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\veps\leq\diff_{0}((w_{L,\veps}^{*})^{+})\leq\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})$, so $\diffi_{0,j_{L,\veps}+1}\geq\diff_{0}((w_{L,\veps}^{*})^{+})\geq\conde(v^{*}(w_{L,\veps}^{*}),w_{L,\veps}^{*})-\veps$. On the other hand, $\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})\leq\diff_{0}(w_{R,\veps}^{*})\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})+\veps$, so $\diffi_{0,j_{R,\veps}}\leq\conde(v^{*}(w_{R,\veps}^{*}),w_{R,\veps}^{*})+\veps$. Combining with Lemma \ref{lem:RBP_bound}, we have $\diffi_{0,j_{L,\veps}+1}\geq\conde(v^{*},w^{*})-2\veps$ and $\diffi_{0,j_{R,\veps}}\leq\conde(v^{*},w^{*})+2\veps$. Since $j_{L,\veps}-1\leq j^{*}\leq j_{R,\veps}$, from (\ref{eq:j2_LLN}) and (\ref{eq:J1_LLN}), letting $\veps\to0$, we conclude that (\ref{eq:jstar_LLN}) holds.
\end{IEEEproof}
\begin{lem}
\label{lem:1MDI_asym_sepa}Suppose that $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} has exactly one MDI. Then for RCS $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ with limiting CDFs $F_{y}$ and $F_{\lambda}$, asymptotic separability holds.
\end{lem}
\begin{IEEEproof}
Let $\diffi_{1,i}^{*}=\diff_{1}(|y|_{(i)})$ for $1\leq i\leq p$, and note that $\prox(y_{i};F_{y},F_{\lambda})=\sgn(y_{i})\max\left\{ 0,\diffi_{1,j(i)}^{*}\right\} $. In the current setting, we know from Algorithm \ref{alg:conti} that:
\[
\diff_{1}(y)=\begin{cases}
\conde(v^{*},w^{*}) & y\in[v^{*},w^{*}]\\
\diff_{0}(y) & y\notin[v^{*},w^{*}]
\end{cases}
\]
On the other hand, $\diffi_{1,i}$ in Algorithm \ref{alg:discrete} is:
\[
\diffi_{1,i}=\begin{cases}
\aver{i^{*}}{j^{*}} & i^{*}\leq i\leq j^{*}\\
\diff_{0}(|y|_{(i)}) & \otherwise
\end{cases}
\]
and $[\tprox_{\vlambda^{(p)}}(\vy^{(p)})]_{i}=\sgn(y_{i})\max\left\{ 0,\diffi_{1,i}\right\} $.
Based on Lemma \ref{lem:RBP_LLN_bound}, for $\epsilon>0$ we can define the index sets:
\[
I_{+}(\epsilon)\bydef\left\{ i\Big|\diff_{0}(y_{i})\geq\conde(v^{*},w^{*})+\epsilon/2\right\}
\]
and
\[
I_{-}(\epsilon)\bydef\left\{ i\Big|\diff_{0}(y_{i})<\conde(v^{*},w^{*})-\epsilon/2\right\}
\]
From (\ref{eq:jstar_LLN}), we know that for $i=1,2,\ldots,p$ and large enough $p$,
\[
\begin{cases}
\diffi_{1,i}=\diffi_{1,i}^{*} & i\in I_{+}(\epsilon)\cup I_{-}(\epsilon)\\
|\diffi_{1,i}-\diffi_{1,i}^{*}|\leq\epsilon & \otherwise
\end{cases}
\]
Therefore, $|\prox(y_{i};F_{y},F_{\lambda})-[\tproxl(\vy^{(p)})]_{i}|\leq|\diffi_{1,i}-\diffi_{1,i}^{*}|\leq\epsilon$, $\forall i=1,2,\ldots,p$. With $\epsilon$ taken arbitrarily close to 0, this implies that $\frac{1}{p}\|\prox(\vy^{(p)};F_{y},F_{\lambda})-\tproxl(\vy^{(p)})\|^{2}\to0$ as $p\to\infty$, which proves the result.
\end{IEEEproof}
Now we are ready to prove asymptotic separability for RCS in (\ref{eq:regu_convseq}) with general limiting distributions $F_{y}$ and $F_{\lambda}$.
\begin{lem}
\label{lem:asym_sepa_regu}Let $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ be RCS as in (\ref{eq:regu_convseq}) with limiting CDFs $F_{y}$ and $F_{\lambda}$. Then asymptotic separability holds.
\end{lem}
\begin{IEEEproof}
The case where $\diff_{0}(y;F_{y},F_{\lambda})$ is non-decreasing has been proved in Lemma \ref{lem:0MDI_asym_sepa}. In the remaining cases, $\diff_{0}(y;F_{y},F_{\lambda})$ in Algorithm \ref{alg:conti} must contain at least one MDI, and $\left\{ \diffi_{0,i}\right\} _{1\leq i\leq p}$ as defined in Algorithm \ref{alg:discrete} must contain an MDS for large enough $p$. Our general proof strategy is to show that $\exists\left\{ \tilde{\vlambda}^{(p)}\right\} _{p\in\mathbb{N}}$ s.t.
$\frac{\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\t{\vlambda}^{(p)}}(\vy^{(p)})\|^{2}}{p}\to0$, where $\left\{ \tilde{\vlambda}^{(p)}\right\} _{p\in\mathbb{N}}$ is a regular converging sequence with non-decreasing $\diff_{0}(y;F_{y},F_{\t{\lambda}})$, which is equal to $\diff(y;F_{y},F_{\lambda})$ (the output of the $\mathtt{WHILE}$ $\mathtt{LOOP}$ in Algorithm \ref{alg:conti} implemented for $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$). Let $\mathcal{I}_{T}=\left\{ i\Big||y|_{(i)}\in I_{T}\right\} $, where $I_{T}$ is defined in (\ref{eq:IT}). Define $\left\{ \diffi_{1,i}^{s}\right\} _{i\in\mathcal{I}_{T}}$ as the output of Algorithm \ref{alg:discrete} implemented on the sub-sequences $\left\{ |y|_{(i)}\right\} _{i\in\mathcal{I}_{T}}$ and $\left\{ \lambda_{i}\right\} _{i\in\mathcal{I}_{T}}$, and define $\h g_{1,i}$ as
\[
\h g_{1,i}=\begin{cases}
\diffi_{1,i}^{s} & i\in\mathcal{I}_{T}\\
\diffi_{0,i} & i\notin\mathcal{I}_{T}
\end{cases}
\]
We construct a new regularization sequence as follows:
\[
\h{\lambda}_{1,i}=|y|_{(i)}-\h g_{1,i}
\]
When $i<i^{*}$ or $i>j^{*}$, $\h g_{1,i}=\diffi_{0,i}$ and thus $\lambda_{i}=\h{\lambda}_{1,i}$, since $\lambda_{i}=|y|_{(i)}-\diffi_{0,i}$. It follows from Lemma \ref{lem:discrete_LRBP_monotone} that $\diffi_{0,i^{*}}\geq\aver{i^{*}}{j^{*}}=\h g_{1,i^{*}}$ and $\diffi_{0,j^{*}}\leq\aver{i^{*}}{j^{*}}=\h g_{1,j^{*}}$, so $\lambda_{i^{*}}\leq\h{\lambda}_{1,i^{*}}$ and $\lambda_{j^{*}}\geq\h{\lambda}_{1,j^{*}}$. Besides, since $\h{\lambda}_{1,i^{*}-1}=\lambda_{i^{*}-1}\leq\lambda_{i^{*}}$ and $\h{\lambda}_{1,j^{*}+1}=\lambda_{j^{*}+1}\geq\lambda_{j^{*}}$, we have $\h{\lambda}_{1,i^{*}-1}\leq\h{\lambda}_{1,i^{*}}$ and $\h{\lambda}_{1,j^{*}+1}\geq\h{\lambda}_{1,j^{*}}$, which means $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is a non-decreasing sequence.
On the other hand, since $\diffi_{0,i^{*}}\geq\h g_{1,i^{*}}$, we get $\lambda_{i^{*}}\leq\h{\lambda}_{1,i^{*}}$; given that $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is non-decreasing and $\lambda_{i}\geq0$, we know $\h{\lambda}_{1,i}\geq0$, $\forall i$. Therefore, $\left\{ \h{\lambda}_{1,i}\right\} _{1\leq i\leq p}$ is a valid regularization sequence. Moreover, it is not difficult to show that if we replace $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$ by $\left\{ \hat{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$, the solution remains unaltered:
\[
\tprox_{\hat{\vlambda}_{1}^{(p)}}(\vy^{(p)})=\tprox_{\vlambda^{(p)}}(\vy^{(p)}).
\]
Then consider another regularization sequence:
\[
\tilde{\lambda}_{1,i}=\begin{cases}
|y|_{(i)}-\diff_{1}(|y|_{(i)};F_{y},F_{\lambda}) & i\in\mathcal{I}_{T}\\
\lambda_{i} & i\notin\mathcal{I}_{T}
\end{cases}
\]
We can see that $\left\{ \tilde{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$ is also an RCS and that the corresponding $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is exactly $\diff_{1}(y;F_{y},F_{\lambda})$. Note that $\diff_{0}(y;F_{y},F_{\lambda})$ contains at most one MDI within $I_{T}$, so it naturally falls within the setting of Lemma \ref{lem:1MDI_asym_sepa}. Based on the proof of Lemma \ref{lem:1MDI_asym_sepa}, we can get $\frac{1}{p}\sum_{i=1}^{p}\left[\diff_{1}(|y|_{(i)};F_{y},F_{\lambda})-\h g_{1,i}\right]^{2}\to0$, so $\frac{1}{p}\|\hat{\vlambda}_{1}^{(p)}-\tilde{\vlambda}_{1}^{(p)}\|^{2}\to0$. From Lemma \ref{lem:replace}, we have $\frac{1}{p}\|\tprox_{\hat{\vlambda}_{1}^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$. Up to now, we have found an RCS $\left\{ \tilde{\vlambda}_{1}^{(p)}\right\} _{p\in\mathbb{N}}$ satisfying $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$.
Besides, the number of MDIs in $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is smaller than that in $\diff_{0}(y;F_{y},F_{\lambda})$ by 1, since $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ is exactly $\diff_{1}(y;F_{y},F_{\lambda})$. If $\diff_{0}(y;F_{y},F_{\t{\lambda}_{1}})$ still contains an MDI, we can repeat the above two steps by constructing another $\t{\vlambda}_{2}^{(p)}$ s.t. $\frac{1}{p}\|\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{2}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{0}(y;F_{y},F_{\t{\lambda}_{2}})=\diff_{1}(y;F_{y},F_{\t{\lambda}_{1}})$. Since $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{1}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{1}(y;F_{y},F_{\t{\lambda}_{1}})=\diff_{2}(y;F_{y},F_{\lambda})$, we get $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{2}^{(p)}}(\vy^{(p)})\|^{2}\to0$ and $\diff_{0}(y;F_{y},F_{\t{\lambda}_{2}})=\diff_{2}(y;F_{y},F_{\lambda})$. If the total number of MDIs in $\diff_{0}(y;F_{y},F_{\lambda})$ is $T<\infty$, then by continuing this process we can find a $\left\{ \tilde{\vlambda}_{T}^{(p)}\right\} _{p\in\mathbb{N}}$, which is a regular converging sequence such that $\diff_{0}(y;F_{y},F_{\t{\lambda}_{T}})$ is non-decreasing and $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\tilde{\vlambda}_{T}^{(p)}}(\vy^{(p)})\|^{2}\to0$. Using Lemma \ref{lem:0MDI_asym_sepa}, we also have \begin{align*} [\tprox_{\tilde{\vlambda}_{T}^{(p)}}(\vy^{(p)})]_{i} & =\sgn(y_{i})\diff_{0}(y_{i};F_{y},F_{\t{\lambda}_{T}})\\ & =\sgn(y_{i})\diff_{T}(y_{i};F_{y},F_{\lambda})\\ & =\prox(y_{i};F_{y},F_{\lambda}) \end{align*} so we obtain $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\to0$. \end{IEEEproof} \subsubsection{Replacement by General Converging Sequences} Now we generalize the results in Sec. \ref{subsec:asym_sepa_RCS} to general converging sequences.
The bridge between the regular and general converging sequences is formed by the following three lemmas. The first lemma shows the sensitivity of $\tproxl(\vy)$ to perturbations of $\vlambda$ and $\vy$. \begin{lem} \label{lem:replace} Let $\vy_{1},\vy_{2},\,\vlambda_{1},\,\vlambda_{2}\in\R^{p}$ and without loss of generality assume $\left\{ \lambda_{1,i}\right\} _{1\leq i\leq p}$ and $\left\{ \lambda_{2,i}\right\} _{1\leq i\leq p}$ are both non-decreasing sequences. It holds that: \[ \|\tprox_{\vlambda_{1}}(\vy_{1})-\tprox_{\vlambda_{2}}(\vy_{2})\|_{2}\leq2\left(\|\vlambda_{1}-\vlambda_{2}\|_{2}+\|\vy_{1}-\vy_{2}\|_{2}\right) \] \end{lem} \begin{IEEEproof} Let $f_{1}(\vx)=\frac{1}{2}\|\vy_{1}-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{1,i}|x|_{(i)}$ and $f_{2}(\vx)=\frac{1}{2}\|\vy_{2}-\vx\|_{2}^{2}+\sum_{i=1}^{p}\lambda_{2,i}|x|_{(i)}$. Denote the corresponding optimal solutions by $\vx_{1}^{*}$ and $\vx_{2}^{*}$, so that $\vx_{1}^{*}=\tprox_{\vlambda_{1}}(\vy_{1})$ and $\vx_{2}^{*}=\tprox_{\vlambda_{2}}(\vy_{2})$. From the optimality of $\vx_{1}^{*}$ and $\vx_{2}^{*}$, we have: \begin{align} f_{1}(\vx_{2}^{*})-f_{1}(\vx_{1}^{*}) & =f_{2}(\vx_{2}^{*})-f_{2}(\vx_{1}^{*})+\sum_{i=1}^{p}\left(\lambda_{1,i}-\lambda_{2,i}\right)\left(|x_{2}^{*}|_{(i)}-|x_{1}^{*}|_{(i)}\right)\nonumber \\ & \quad+\sum_{i=1}^{p}\left(y_{2,i}-y_{1,i}\right)\left(x_{2,i}^{*}-x_{1,i}^{*}\right)\nonumber \\ & \leq\|\vlambda_{1}-\vlambda_{2}\|\left(\|\vx_{1}^{*}\|^{2}+\|\vx_{2}^{*}\|^{2}-2\sum_{i=1}^{p}|x_{1}^{*}|_{(i)}|x_{2}^{*}|_{(i)}\right)^{1/2}\nonumber \\ & \quad+\|\vy_{1}-\vy_{2}\|\|\vx_{1}^{*}-\vx_{2}^{*}\|\nonumber \\ & \leq\left(\|\vlambda_{1}-\vlambda_{2}\|+\|\vy_{1}-\vy_{2}\|\right)\|\vx_{1}^{*}-\vx_{2}^{*}\|\label{eq:pertleq} \end{align} On the other hand, since $f_{1}(\vx)$ is strongly convex, at the optimal solution $\vx_{1}^{*}$ we have: \begin{align} f_{1}(\vx_{2}^{*})-f_{1}(\vx_{1}^{*}) &
\geq\nabla^{\T}f_{1}(\vx_{1}^{*})(\vx_{2}^{*}-\vx_{1}^{*})+\frac{1}{2}\|\vx_{2}^{*}-\vx_{1}^{*}\|^{2}\nonumber \\ & =\frac{1}{2}\|\vx_{2}^{*}-\vx_{1}^{*}\|^{2}\label{eq:pertgeq} \end{align} Combining (\ref{eq:pertleq}) and (\ref{eq:pertgeq}), we have: \[ \|\vx_{1}^{*}-\vx_{2}^{*}\|\leq2\left(\|\vlambda_{1}-\vlambda_{2}\|+\|\vy_{1}-\vy_{2}\|\right) \] \end{IEEEproof} The second lemma shows the convergence of the empirical quantile function to the theoretical one in the $L^{2}$ sense: \begin{lem} \label{lem:seq_similarity}Let $\left\{ \vx^{(p)}\right\} _{p\in\mathbb{N}}$ be a converging sequence satisfying (A.1)-(A.4) with limiting CDF $F(x)$. Let $F^{-1}(z)=\inf\left\{ x:\:F(x)\geq z\right\} $ be the corresponding quantile function. When $p\to\infty$, the following holds: \begin{equation} \frac{1}{p}\sum_{i=1}^{p}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\to0\label{eq:L2_quantile} \end{equation} \end{lem} \begin{IEEEproof} We can write $x_{(i)}=\h F_{p+1}^{-1}(\frac{i}{p+1})$, where $\h F_{p}^{-1}(z)$ is the quantile function of the empirical measure $\mu_{p}(x)=\frac{1}{p}\sum_{i=1}^{p}\delta(x-x_{i})$. Choose $\veps>0$ such that $F^{-1}(z)$ is continuous at $z=\veps,1-\veps$ and let $A_{\veps}=\max_{z\in(\veps,1-\veps)}|F^{-1}(z)|$ and $I_{(p),\veps}\bydef\left\{ i\Big|\frac{i}{p+1}\in(\veps,1-\veps),\,|x_{(i)}|\leq A_{\veps}\right\} $. The summation in (\ref{eq:L2_quantile}) can be partitioned into two parts: (a) $\frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}$, (b) $\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}$. We next analyze these two parts separately. To control part (a), we follow the method used for proving the Glivenko--Cantelli theorem \cite{van2000asymptotic}. Let us start by constructing a sequence of points $\left\{ z_{k}\right\} _{1\leq k\leq K}$ in the following way: (1) Find $\veps=w_{1}<w_{2}<\cdots<w_{\ell}=1-\veps$ satisfying $F^{-1}(w_{j+1})-F^{-1}(w_{j}^{+})\leq\veps$.
This can be done by setting $w_{j}=\inf\left\{ w\Big|F^{-1}(w_{j+1})-F^{-1}(w)\leq\veps\right\} $, $j=1,2,\ldots,\ell-1$. In this way, we also have $F^{-1}(w_{j+1})-F^{-1}(w_{j})\geq\veps$, since otherwise, by the left-continuity of $F^{-1}(\cdot)$, there would exist $w_{j}'<w_{j}$ with $F^{-1}(w_{j+1})-F^{-1}(w_{j}')\leq\veps$, which contradicts the definition of $w_{j}$. Therefore, we have $\ell\leq\frac{2A_{\veps}}{\veps}+1$. (2) For any discontinuity point $w_{k}$, find two continuity points $w_{k,L}$ and $w_{k,R}$, s.t. $w_{k}\in(w_{k,L},\,w_{k,R})$ and $w_{k,R}-w_{k,L}<\veps_{1}/2$. Since $F^{-1}(\cdot)$ has at most countably many discontinuity points, $w_{k,L}$, $w_{k,R}$ always exist and $\veps_{1}$ can be made arbitrarily small. (3) Add all $w_{k,L}$, $w_{k,R}$ to $\left\{ w_{k}\right\} _{1\leq k\leq\ell}$ to get $\left\{ z_{i}\right\} _{1\leq i\leq K}$, and we know $K\leq3\left(\frac{2A_{\veps}}{\veps}+1\right)$. The intervals $(z_{k-1},z_{k})$ formed by $\left\{ z_{i}\right\} _{1\leq i\leq K}$ can be categorized into two types: (1) one of $z_{k-1}$, $z_{k}$ is a discontinuity point, (2) $z_{k-1}$ and $z_{k}$ are both continuity points of $F^{-1}(\cdot)$. Let us use $C_{(p),\veps,\veps_{1}}$ to denote the set of all $i\in I_{(p),\veps}$ s.t. $\frac{i}{p+1}$ falls within the intervals of type (1). The cardinality of $C_{(p),\veps,\veps_{1}}$ satisfies: $|C_{(p),\veps,\veps_{1}}|\leq\left(\frac{2A_{\veps}}{\veps}+1\right)\veps_{1}p$. For any $i\in I_{(p),\veps}\backslash C_{(p),\veps,\veps_{1}}$ with $w_{k}<\frac{i}{p+1}\leq w_{k+1}$, we have \[ \h F_{p+1}^{-1}(w_{k}^{+})\leq\h F_{p+1}^{-1}(\frac{i}{p+1})\leq\h F_{p+1}^{-1}(w_{k+1}) \] and \[ F^{-1}(w_{k}^{+})\leq F^{-1}(\frac{i}{p+1})\leq F^{-1}(w_{k+1}) \] which gives us \[ F^{-1}(w_{k})-F^{-1}(w_{k+1})-\delta_{k}\leq\h F_{p+1}^{-1}(\frac{i}{p+1})-F^{-1}(\frac{i}{p+1})\leq F^{-1}(w_{k+1})-F^{-1}(w_{k})+\delta_{k+1}, \] where $\delta_{k}=|\h F_{p+1}^{-1}(w_{k})-F^{-1}(w_{k})|$ and we have used the continuity of $F^{-1}(\cdot)$ at $w_{k}$ and $w_{k+1}$.
Using the fact that $\h F_{p}^{-1}(\cdot)$ converges at its continuity points \cite{van2000asymptotic}, we have $\delta_{k}$, $\delta_{k+1}\to0$ as $p\to\infty$. Also by construction, we have $|F^{-1}(w_{k+1})-F^{-1}(w_{k})|\leq\veps$. Then we have: \begin{align*} \frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2} & =\frac{1}{p}\sum_{i\in C_{(p),\veps,\veps_{1}}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\\ & \quad+\frac{1}{p}\sum_{i\in I_{(p),\veps}\backslash C_{(p),\veps,\veps_{1}}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\\ & \leq\left(\frac{2A_{\veps}}{\veps}+1\right)\veps_{1}A_{\veps}^{2}+\frac{2K}{p}(\veps^{2}+\max\left\{ \delta_{k}^{2}\right\} _{k\in C_{(p),\veps,\veps_{1}}}) \end{align*} Hence for any $\veps>0$, we can find a small enough $\veps_{1}$ and a large enough $p$ such that $\frac{1}{p}\sum_{i\in I_{(p),\veps}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\leq\veps^{2}$. Next we analyze part (b). Since $\frac{1}{p}\sum_{i=1}^{p}x_{i}^{2}\to\E X^{2}$ and $\frac{1}{p}\sum_{i\in I_{(p),\veps}}x_{i}^{2}\leq\frac{1}{p}\sum_{i=1}^{p}x_{i}^{2}$, by the dominated convergence theorem, we have $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}x_{i}^{2}=0$. Similarly, $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[F^{-1}(\frac{i}{p+1})\right]^{2}=0$. Therefore, $\lim_{\veps\to0}\lim_{p\to\infty}\frac{1}{p}\sum_{i\in I_{(p),\veps}^{C}}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}=0$. Combined with the analysis of part (a), we conclude that $\frac{1}{p}\sum_{i=1}^{p}\left[x_{(i)}-F^{-1}(\frac{i}{p+1})\right]^{2}\to0$. \end{IEEEproof} The third lemma shows that any $\prox(y;F_{y},F_{\lambda})$ obtained from Algorithm \ref{alg:conti} is non-decreasing and Lipschitz continuous with constant 1.
\begin{lem} \label{lem:regu_Lipschitz}For any $F_{y}$ and $F_{\lambda}$, $\prox(y;F_{y},F_{\lambda})$ obtained from Algorithm \ref{alg:conti} satisfies: $0\leq\prox(y_{2};F_{y},F_{\lambda})-\prox(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$, $\forall y_{1}\leq y_{2}$. \end{lem} \begin{IEEEproof} Based on Proposition \ref{prop:G_Lipschitz} and the fact that $\prox(y;F_{y},F_{\lambda})=\sgn(y)\max\left\{ 0,\diff_{t}(|y|;F_{y},F_{\lambda})\right\} $, we know that $\prox(y;F_{y},F_{\lambda})$ is also non-decreasing. Also, using the identity $\left|\max\left\{ 0,\,x_{1}\right\} -\max\left\{ 0,\,x_{2}\right\} \right|\leq|x_{1}-x_{2}|$, we can get $\prox(y_{2};F_{y},F_{\lambda})-\prox(y_{1};F_{y},F_{\lambda})\leq y_{2}-y_{1}$. \end{IEEEproof} We are now ready to prove Proposition \ref{prop:prox}. \subsubsection{Proof of Proposition \ref{prop:prox}} Let us denote the converging sequences in (\ref{eq:regu_convseq}) as $\left\{ \vy_{r}^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda_{r}^{(p)}\right\} _{p\in\mathbb{N}}$. For any converging sequences $\left\{ \vy^{(p)}\right\} _{p\in\mathbb{N}}$ and $\left\{ \vlambda^{(p)}\right\} _{p\in\mathbb{N}}$, from Lemma \ref{lem:seq_similarity} we know $\frac{1}{p}\|\vy_{r}^{(p)}-\vy^{(p)}\|^{2},\,\frac{1}{p}\|\vlambda_{r}^{(p)}-\vlambda^{(p)}\|^{2}\to0$ as $p\to\infty$. Using Lemma \ref{lem:replace}, we have $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\tprox_{\vlambda_{r}^{(p)}}(\vy_{r}^{(p)})\|^{2}\to0$. Besides, from Lemma \ref{lem:asym_sepa_regu}, $\frac{1}{p}\|\tprox_{\vlambda_{r}^{(p)}}(\vy_{r}^{(p)})-\prox(\vy_{r}^{(p)};F_{y},F_{\lambda})\|^{2}\to0$ and from Lemma \ref{lem:regu_Lipschitz}, $\frac{1}{p}\|\prox(\vy_{r}^{(p)};F_{y},F_{\lambda})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\leq\frac{1}{p}\|\vy_{r}^{(p)}-\vy^{(p)}\|^{2}\to0$. Combining the above convergence results, we obtain $\frac{1}{p}\|\tprox_{\vlambda^{(p)}}(\vy^{(p)})-\prox(\vy^{(p)};F_{y},F_{\lambda})\|^{2}\to0$.
\section{Conclusion \label{sec:Conclusions}} We have established an asymptotic characterization of SLOPE in the high-dimensional regime. Although SLOPE is a high-dimensional regularized regression method, asymptotically its statistical performance can be fully characterized by a few scalar random variables. We showed that this low-dimensional characterization enables us to design a regularization sequence that yields optimal estimation and variable selection performance for SLOPE.
\section{Introduction}\label{sec:introduction} Sound generation algorithms synthesize a time domain waveform. This waveform should be coherent and appropriate for its intended use. These waveforms can convey complex and varied information. Deep generative networks \cite{gm_comprehensive_2020} have demonstrated great potential for such tasks, having been used for the synthesis of a range of sounds, from pleasant pieces of music to natural speech \cite{huzaifah_deep_2020}. These models discover latent representations based on the distribution of the initial data and then sample from this distribution to generate new acoustic signals with the same properties as the original ones. In many cases, deep learning models can operate alongside signal processing algorithms and enhance their expressive capabilities \cite{engel_ddsp_2020}\cite{valin_lpcnet_2019}. The representation of the sound adopted by the deep neural network plays a major role in the development of the algorithm. Raw time-domain audio is a rich representation which carries a massive amount of information, making the network computationally expensive and therefore slow. Compressed time-frequency representations based on spectrograms can decrease the computing power needed, but parameter detection and synthesis of the sound is usually a challenging task and the loss of information can cause significant reconstruction error \cite{govalkar_comparison_2019}. Parameters extracted from state-of-the-art vocoders have also been proposed for deep neural network applications \cite{blaauw_neural_2017}. These parameters demonstrate potential for marrying deep generative models with statistical parametric synthesizers. Finally, contemporary investigations allow the network to determine the features needed for the task \cite{engel_neural_2017}. Linguistic and acoustic features can be encoded into latent representations such as embeddings.
Apart from an overview of the audio representations existing in sound synthesis implementations, this paper additionally surveys popular schemes for conditioning a deep generative network with auxiliary data. Conditioning in generative models can control aspects of the synthesis and lead to new samples with specific characteristics \cite{manzelli_conditioning_2018}. Furthermore, the paper highlights examples of deep generative models for audio generation applications. Deep neural networks have made remarkable progress in this field, demonstrating impressive results. A final section discusses evaluation processes for synthesised sound. Subjective evaluation via listening tests is generally considered the most reliable measure of quality. However, multiple other metrics for assessing a generative model have been proposed, which convert the acoustic signals to intermediate representations before they are examined. Consequently, audio representations play an essential role not only as input data but also influence the network architecture, the conditioning technique and the evaluation process. \section{Input Representations} In the literature, numerous audio representations have proved beneficial for audio synthesis applications. Many times, comparisons have been conducted between different forms of the sound to reveal the most appropriate representation for a specific deep learning architecture. Raw audio and time-frequency representations usually present the first attempts in such experiments. However, recent studies also look to higher-level forms that offer a more meaningful description, such as embeddings, or multiple sound features like the fundamental frequency, loudness and features extracted by state-of-the-art vocoders such as WORLD \cite{morise_world_2016}. Table \ref{T1} summarizes the advantages and disadvantages of each sound representation.
\subsection{Waveform - Raw Audio} The term raw audio is often used to refer to a waveform encoded using pulse code modulation (PCM). This is the sampling of a continuous waveform in time and amplitude. It represents the waveform as a sequence of numbers, each number representing an amplitude sample at a chosen sampling frequency. In order for this discrete sequence of samples to capture all the necessary information, the highest frequency in the signal should adhere to the Nyquist-Shannon theorem \cite{shannont_communication_1949}. According to this theorem, only frequencies of less than half the sampling frequency can be reproduced from the sampled signal. A typical sampling frequency for audio applications is 44.1kHz. Each real number is assigned to the nearest fixed value in a finite set of discrete numbers. The most common levels for quantization are stored in 8 bits (256 levels), 16 bits (65536 levels) and 24 bits (16.8 million levels). Therefore, a sound with a duration of one second sampled at 44.1kHz generates 44100 samples. This representation is considered extremely informative, even for deep learning networks. In order for the outcome of the deep learning model to be more effective, a pre-processing step can be used to reduce the quantization range of the raw audio. Many research approaches \cite{oord_wavenet_2016}\cite{oord_parallel_2017}\cite{binkowski_high_2019} apply $\mu$-law companding to decrease the possible values of each prediction. The $\mu$-law transformation is presented in Eq. \ref{mulaw}, where $-1<x<1$ and $\mu+1$ equals the number of levels created after the transformation. \begin{equation}\label{mulaw} f(x)=sgn(x)\frac{\ln(1+\mu|x|)}{\ln(1+\mu)} \end{equation} Although non-linear quantization processes such as $\mu$-law have received much attention in recent years, the majority of the existing papers use a normalized high resolution signal as input \cite{kim_flowavenet_2019}.
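As an illustration of Eq. \ref{mulaw}, the following NumPy sketch applies $\mu$-law compression with $\mu=255$ followed by 8-bit quantization, together with its inverse; the helper names are our own and no specific library API is implied:

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Compress a waveform in [-1, 1] with the mu-law formula, then
    quantize the result to mu + 1 integer levels (256 for mu = 255)."""
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Map [-1, 1] onto the integer levels {0, ..., mu} with rounding.
    return ((compressed + 1.0) / 2.0 * mu + 0.5).astype(np.int64)

def mu_law_decode(q, mu=255):
    """Invert the quantization and the mu-law compression."""
    compressed = 2.0 * q.astype(np.float64) / mu - 1.0
    return np.sign(compressed) * ((1.0 + mu) ** np.abs(compressed) - 1.0) / mu
```

With $\mu=255$ the round-trip error remains small relative to full scale, which is why 8-bit $\mu$-law outputs are often perceptually acceptable despite the coarse quantization.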
Finally, other applications include linear quantization of the input waveform \cite{mehri_samplernn_2017}\cite{yamamoto_probability_2019} and different designs for the most and least significant bits \cite{kalchbrenner_efficient_2018}. \subsection{Spectrograms} A spectrogram is a time/frequency visual representation of sound. A spectrogram can be obtained via the Short Time Fourier Transform (STFT), where the Fourier Transform is applied to overlapping segments of the waveform. The Discrete Fourier Transform (DFT) is presented in Eq. \ref{dft} for $k=0,1,\ldots,N-1$, where $N$ is the number of samples and $k$ is the frequency bin index. The spectrogram uses just the absolute values of the STFT, discarding the phase. This type of spectrogram has been used by a variety of papers \cite{wang_tacotron_2017}\cite{neekhara_expediting_2019}\cite{arik_fast_2019}. \begin{equation}\label{dft} X(k)=\sum_{n=0}^{N-1}x(n)e^{-j\omega_{k}n} \end{equation} Apart from the original spectrogram, deep learning architectures have also experimented with non-linear spectrograms such as mel-spectrograms \cite{prenger_waveglow_2018}\cite{peng_non-autoregressive_2020}\cite{aouameur_neural_2019}\cite{ren_fastspeech_2019}\cite{ren_fastspeech_2021}\cite{liu_wavetts_2020}\cite{vasquez_melnet_2019} or Constant-Q Transformations (CQT) \cite{huang_timbretron_2019}. The mel-spectrogram is generated by applying perceptually motivated filters, called mel filter banks, to the DFT. The most common formula for the mel scale is presented in Eq. \ref{eq_mel}, where $f$ is the frequency in Hertz. However, other models have captured the perceptual transformation by applying a linear mapping up to 1kHz and a logarithmic one above this threshold. \begin{equation}\label{eq_mel} mel = 2595\log_{10}(1+\frac{f}{700}) \end{equation} CQT is another time-frequency representation where the frequencies are geometrically spaced.
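As a brief illustration of the perceptual mapping in Eq. \ref{eq_mel}, the following sketch converts between Hz and mel and derives the edge frequencies of a mel filter bank; the helper names are our own:

```python
import numpy as np

def hz_to_mel(f):
    # Eq. (eq_mel): mel = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    # Inverse of the mapping above.
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

def mel_band_edges(n_mels, f_min, f_max):
    """Edge frequencies (Hz) of n_mels triangular filters spaced
    uniformly on the mel scale between f_min and f_max."""
    mels = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_mels + 2)
    return mel_to_hz(mels)
```

Triangular filters placed between consecutive edges, applied to the DFT magnitudes, yield the mel filter bank.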
The centre frequencies of the filters are calculated as $\omega_{k} = 2^{\frac{k}{b}}\omega_{0}$, where $k = 1, 2, \ldots, k_{max}$ and $b$ is a constant (the number of bins per octave). The bandwidth of each filter then follows as $\delta_{k} = \omega_{k+1} - \omega_{k} = \omega_{k}(2^{\frac{1}{b}}-1)$ and therefore the frequency resolution is determined by Eq. \ref{cqt}, where Q is the quality factor. \begin{equation}\label{cqt} Q = \frac{\omega_k}{\delta_k} = (2^{\frac{1}{b}}-1)^{-1} \end{equation} CQT is a representation with different frequency resolution at low and high frequencies. However, the phase part is discarded and the representation is in most cases irreversible. To address this, Velasco et al \cite{velasco_constructing_2011} proposed an invertible CQT based on nonstationary Gabor frames. Another variation of the CQT is the rainbowgram, proposed by Engel et al \cite{engel_neural_2017}, which uses colors to encode time derivatives of the phase. In addition, more complicated spectrogram-based representations have also been investigated. GANSynth \cite{engel_gansynth_2019} conducted experiments with numerous spectrograms, including scaled logarithmic amplitude and phase of the STFT, increased resolution of the original spectrogram and applied mel-filters. They also examined an Instantaneous Frequency based spectrogram where the phase of the STFT is scaled and unwrapped (add $-\pi$) and then the finite difference between the frames is computed. Other applications \cite{ping_clarinet_nodate}\cite{donahue_adversarial_2019} have also made comparisons between raw audio and spectrograms to uncover the most functional representation for their deep learning model. \subsection{Acoustic Features} Overcoming the wealth of acoustic information present in a sound waveform, various studies extract perceptual features from the original signal.
These acoustic features can be represented by phoneme inputs \cite{wang_style_nodate}, fundamental frequency and spectral features \cite{wang_neural_2019} or multiple pieces of information such as the velocity, instrument, pitch and time \cite{defossez_sing_nodate}. Other implementations have included cepstral coefficients \cite{valin_lpcnet_2019}\cite{subramani_vapar_2020} or a variety of linguistic and acoustic features \cite{juvela_glotnetraw_2019}\cite{binkowski_high_2019}. Finally, widely used parameters have also been extracted by the WORLD vocoder \cite{blaauw_neural_2017}\cite{yang_statistical_2017}\cite{henter_deep_2018}. \subsection{Embeddings} Embeddings were initially introduced in Natural Language Processing (NLP) in order to convert a word or sentence into a real-valued vector. This approach facilitated the processing of text in deep learning applications, since words with similar meaning are encoded by embeddings that are close in vector space. The same approach has been adopted in sound processing to reduce the dimensionality of the signal \cite{bitton_neural_2020}\cite{peng_non-autoregressive_2020}, enhance timbre synthesis \cite{engel_ddsp_2020} or even generate a more interpretable representation \cite{esling_generative_2018}\cite{esling_universal_2019} from which to effectively extract parameters for a synthesizer. In \cite{engel_neural_2017} an autoencoder generates a latent representation to condition a WaveNet model, while Dhariwal et al \cite{dhariwal_jukebox_2020} implemented three separate encoders to generate vectors with different temporal resolutions. \subsection{Symbolic} In music processing, the term symbolic refers to the use of representations such as the Musical Instrument Digital Interface (MIDI) or piano rolls. MIDI is a technical standard that describes a protocol, a digital interface or the link for the simultaneous operation of multiple electronic musical instruments. A MIDI file specifies the notes being played in every time step.
Usually this file consists of information about the instrument being played, the pitch and its velocity. MidiNet \cite{yang_midinet_2017} is one of the most popular implementations using MIDI to generate music pieces. The piano roll constitutes a denser representation of MIDI. A piece of music can be represented by a binary $N \times T$ matrix where N is the number of playable notes and T is the number of timesteps. In MuseGAN \cite{dong_musegan_2017}, Generative Adversarial Networks (GANs) have been applied for music generation using a multi-track piano-roll representation. Also, in DeepJ \cite{mao_deepj_2018}, the representation matrix is scaled between 0 and 1 to capture the notes' dynamics. The most notable disadvantage of symbolic representations is that holding a note and replaying a note are represented identically. To differentiate these two states, DeepJ included a second matrix called \textit{replay} along with the original matrix \textit{play}. \begin{table*} \centering \caption{Overview of sound representations} \label{T1} \begin{tabular}{||P{2cm} |P{1.6cm}|P{5cm}|P{5cm}|P{2cm}||} \hline Representation & Papers & Advantages & Disadvantages & Comments \T\B\\ \hline\hline \multirow{3}{*}{Waveform} & \T \cite{oord_wavenet_2016}\cite{oord_parallel_2017}\cite{binkowski_high_2019}\cite{kim_flowavenet_2019}\cite{mehri_samplernn_2017} \cite{yamamoto_probability_2019}\cite{kalchbrenner_efficient_2018} \B & \multirow{3}{5cm}{-Completely describes the waveform. \\ -Directly generates the output waveform.} & \multirow{3}{5cm}{-Computationally expensive.
\\ -Unstructured representation that does not reflect sound perception.} & \multirow{3}{2cm}{\centering Used as input or conditional representation}\\ [1ex] \hline \multirow{4}{*}{Spectrograms} & \T \cite{wang_tacotron_2017}\cite{neekhara_expediting_2019}\cite{arik_fast_2019}\cite{prenger_waveglow_2018}\cite{peng_non-autoregressive_2020}\cite{aouameur_neural_2019}\cite{ren_fastspeech_2019}\cite{ren_fastspeech_2021}\cite{liu_wavetts_2020}\cite{vasquez_melnet_2019}\cite{huang_timbretron_2019}\B & \multirow{4}{5cm}{-Interpretable representations that are related to sound perception. \\ -Easy to illustrate/plot.} & \multirow{4}{5cm}{-Typically phase is discarded meaning it is not directly invertible.} & \multirow{4}{2cm}{\centering Used as input or conditional representation}\\ [1ex] \hline \multirow{4}{*}{Acoustic features} & \T \cite{wang_style_nodate}\cite{wang_neural_2019}\cite{defossez_sing_nodate}\cite{valin_lpcnet_2019}\cite{subramani_vapar_2020}\cite{juvela_glotnetraw_2019}\cite{binkowski_high_2019}\cite{blaauw_neural_2017}\cite{yang_statistical_2017}\cite{henter_deep_2018} \B & \multirow{4}{5cm}{-Compressed, descriptive representation of aspects of sound.} & \multirow{4}{5cm}{-Difficult to synthesize waveforms with long term coherence. \\ -Typically don’t fully specify sounds.} & \multirow{4}{2cm}{\centering Mostly used for \\ conditioning}\\ [1ex] \hline \multirow{3}{2cm}{\centering Latent\\ representations} & \T \cite{engel_ddsp_2020}\cite{peng_non-autoregressive_2020}\cite{bitton_neural_2020}\cite{esling_generative_2018}\cite{esling_universal_2019}\cite{engel_neural_2017}\cite{dhariwal_jukebox_2020} \B & \multirow{3}{5cm}{-Similar sounds lead to smaller distance in multi-dimensional space. \\ -Compressed representation. } & \multirow{3}{5cm}{-Losses in decoding. \\ -Can be difficult to interpret. 
} & \multirow{3}{*}{Mostly VAEs}\\ [1ex] \hline \multirow{2}{*}{Symbolic} & \T \cite{yang_midinet_2017}\cite{dong_musegan_2017}\cite{mao_deepj_2018} \B & \multirow{2}{5cm}{-Meaningful description of musical content. } & \multirow{2}{5cm}{-Very high level description of audio.} & \\ [1ex] \hline \end{tabular} \end{table*} \section{Conditioning Representations}\label{conditioning_representations} Neural networks are able to generate sound based on the statistical distribution of the training data. The more uniform the input data to the network are, the more natural an outcome can be achieved. However, in cases where the amount of training data is not sufficient, additional data with similar properties can be included by applying conditioning methods. Following these techniques, the generated sound can be conditioned on specific traits such as a speaker's voice \cite{zhao_wasserstein_2018}\cite{vasquez_melnet_2019}, independent pitch \cite{engel_ddsp_2020}\cite{jin_fftnet_2018}\cite{subramani_vapar_2020}, linguistic features \cite{arik_deep_2017}\cite{kalchbrenner_efficient_2018} or latent representations \cite{valin_lpcnet_2019}\cite{dong_musegan_2017}. Instead of one-hot embeddings, some implementations have also used a confusion matrix to capture a variation of emotions \cite{henter_deep_2018}, while others provided supplementary positional information for each segment, conditioning music on the artist or genre \cite{dhariwal_jukebox_2020}. After training, the user is able to choose among the conditioning properties of the synthesised sound. \subsection{Additional Input} The simplest strategy for applying conditioning to deep learning architectures is to include auxiliary input data while training. Two types of conditioning have been proposed, global and local \cite{oord_wavenet_2016}\cite{kong_diffwave_2021}. In global conditioning, additional latent representations can be appended across all training data.
Global conditioning can encode a speaker's voice or linguistic features. Local conditioning usually refers to supplementary time series with a lower sampling rate than the original waveform, such as mel-spectrograms, the logarithmic fundamental frequency or auxiliary pitch information. WaveNet has achieved one of the most effective strategies for conditioning deep neural networks \cite{boilard_literature_nodate}. Therefore, later sound generation schemes adopted a WaveNet network for conditioning. The majority of these works conditioned their model on spectrograms \cite{ping_clarinet_nodate}\cite{shen_natural_2018}\cite{arik_deep_2017}\cite{ping_deep_2018}\cite{wang_style_nodate}\cite{li_neural_2019}\cite{huang_timbretron_2019}\cite{angrick_speech_2019} while others included linguistic features and pitch information \cite{oord_parallel_2017}\cite{arik_deep_nodate}, phoneme encodings \cite{blaauw_neural_2017}, features extracted from the STRAIGHT vocoder \cite{tamamori_speaker-dependent_2017} or even MIDI representations \cite{hawthorne_enabling_2019}. Although it has been proven that convolutional networks are capable of effective conditioning, other architectures can use auxiliary input data as well. Recurrent neural networks such as LSTMs have adopted conditioning via frame-level auxiliary feature vectors \cite{ling_waveform_2018} or via a one-hot representation encoding music style \cite{mao_deepj_2018}. Autoencoders can be conditioned by including additional input to the encoder \cite{aouameur_neural_2019}\cite{subramani_vapar_2020} but also by feeding it only to the decoder \cite{bitton_neural_2020}. \subsection{Input to the Generator} Generative Adversarial Networks (GANs) consist of two separate networks, the Generator and the Discriminator. Following the fundamental properties of GANs, the Generator converts random noise to structured data while the Discriminator endeavors to classify a signal as original or generated.
For applying conditioning in GANs, the most common technique is to provide a conditioning input to the Generator. In sound synthesis, a well-established conditioning method provides the mel-spectrogram as input to the Generator \cite{yamamoto_parallel_2020}\cite{yamamoto_probability_2019}\cite{neekhara_expediting_2019}. This way, the synthesised sound is not just a product of a specific distribution but also obtains desirable properties. For example, it can be forced to match a predetermined instrument or voice. Furthermore, a Generator conditioned on spectrograms can also be used as a vocoder \cite{kumar_melgan_2019}. In addition to the mel-spectrogram, other implementations have been conditioned on raw audio \cite{pascual_segan_2017}, one-hot vectors encoding musical pitch \cite{engel_gansynth_2019}, linguistic features \cite{binkowski_high_2019}, or latent representations identifying the speaker \cite{donahue_end--end_2021}. \subsection{Other} Finally, other variations of conditioning have been introduced as well. Kim et al \cite{kim_flowavenet_2019} adjusted conditioning through the loss function. They estimated an auxiliary probability density using mel-spectrograms for local conditioning. Ping et al \cite{ping_waveflow_nodate} applied bias terms in every layer of the convolutional network, also using mel-spectrograms. Extra bias to the network has also been proposed by \cite{rao_grapheme--phoneme_2015} to encode linguistic features, while in \cite{engel_neural_2017} every layer was biased with a different linear projection of the temporal embeddings. In \cite{yang_statistical_2017} linguistic features were added to the output of each hidden layer in the Generator, while in \cite{yang_midinet_2017} a new network called conditionerCNN was introduced to work alongside the Generator, encoding chords for melody generation. Finally, Juvela et al \cite{juvela_glotnetraw_2019} conducted a comparative study of conditioning methods.
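To make the "additional input" scheme concrete, the following NumPy sketch (not tied to any particular model's API; the function names are our own) upsamples frame-level local features to the audio sample rate and concatenates them, together with a broadcast global vector, into a conditioned network input:

```python
import numpy as np

def upsample_local_features(features, hop_length):
    """Repeat frame-level conditioning features (shape [n_frames, n_dims],
    e.g. mel-spectrogram frames) so that they align with a waveform that
    has hop_length samples per analysis frame."""
    return np.repeat(features, hop_length, axis=0)

def conditioned_input(waveform, local_feats, global_vec):
    """Pair every waveform sample with its upsampled local features and
    broadcast a global vector (e.g. a speaker one-hot) over all time steps."""
    n = waveform.shape[0]
    g = np.tile(global_vec, (n, 1))
    return np.concatenate([waveform[:, None], local_feats[:n], g], axis=1)
```

The same idea underlies both conditioning types discussed above: local features vary over time after upsampling, while the global vector is constant across the whole utterance.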
\section{Methods} During recent years, deep learning models have significantly contributed to research on sound generation, and multiple representations have been applied across a variety of algorithms. The most common architectures include autoregressive methods, variational autoencoders (VAEs), adversarial networks and normalising flows; however, many approaches fall into more than one category. \subsection{Autoregressive} Autoregressive models define a category of generative models where every new sample in a sequence of data depends on previous samples. Autoregressive deep neural networks can be represented by architectures that demonstrate this dependency implicitly or explicitly. Conventional methods that model temporal behaviour implicitly are recurrent neural networks; these models are able to recall previous data dynamically using a complex hidden state. SampleRNN \cite{mehri_samplernn_2017} is one well established work that applies hierarchical recurrent neural networks such as GRUs and LSTMs at different temporal resolutions for sound synthesis. In order to illustrate the temporal behaviour of the network, Mehri et al conducted experiments testing the model's memory by injecting one second of silence between two random sequential samples. Other significant papers on autoregressive models using recurrent neural networks are WaveRNN \cite{kalchbrenner_efficient_2018}, MelNet \cite{vasquez_melnet_2019} and LPCNet \cite{valin_lpcnet_2019}. WaveRNN introduced a method for reducing sampling time by using a batch of short sequences instead of a single long sequence while maintaining high sound quality. Generative models where the synthesis of sequential samples follows a conditional probability distribution like the one in Eq.~\ref{chain} demonstrate temporal dependencies explicitly.
\begin{equation}\label{chain} p(X) = \prod_{t=1}^{T}p(x_{t}|x_{1},...,x_{t-1}) \end{equation} WaveNet \cite{oord_wavenet_2016} presents the most influential architecture among explicit autoregressive models. The probability distribution is modelled by a stack of convolutional layers; to improve efficiency, the sequential data passes through a stack of dilated causal convolutional layers in which the input data are masked to skip some dependencies. Following a similar scheme, FFTNet \cite{jin_fftnet_2018} takes advantage of convolutional networks mimicking the FFT algorithm while upsampling the input data. However, to avoid confusion, convolutional networks do not always lead to autoregressive models \cite{arik_fast_2019}. Autoregressive models were initially proposed for sequential data; therefore, in sound synthesis, raw audio is ordinarily used as the input representation. Nevertheless, many auxiliary representations have been applied, conditioning the audio generation on a variety of properties. Conditioning techniques for autoregressive models have already been presented in Section \ref{conditioning_representations}. Since autoregressive models apply naturally to sequential data, they are well established in sound generation tasks. They are easy to train and can manipulate data in real time, and convolutional-based models can additionally be trained in parallel. Nevertheless, although these models can be parallelized during training, generation is sequential and therefore slow. Synthesised samples are affected only by previous samples, providing only one-directional dependencies. Finally, the generation can remain consistent with specific properties only for a limited number of samples, and the outcome often lacks global structure. \subsection{Normalizing Flow} Normalizing flows constitute a family of generative models consisting of multiple simple distributions for transforming input data to latent representations.
A sequence of simple, invertible and computationally inexpensive mappings applied to a latent variable $z \sim p(z)$ can model a complex reversible transformation. This transformation is presented in Eq. \ref{flow1} and its inverse is obtained by repeatedly changing variables, as shown in Eq. \ref{flow2}. The mapping functions can then be parametrised by a deep neural network. \begin{equation}\label{flow1} x = f_0 \circ f_1 \circ ... \circ f_k(z) \end{equation} \begin{equation}\label{flow2} z = f_{k}^{-1} \circ f_{k-1}^{-1} \circ ... \circ f_{0}^{-1}(x) \end{equation} WaveGlow \cite{prenger_waveglow_2018}, a flow-based generative network, synthesises sound from its mel-spectrogram. By applying an Affine Coupling Layer and a 1x1 Invertible Convolution, the model maximises the likelihood of the training data. The implementation was proposed by NVIDIA and is able to generate sound in real time. Insightful alternatives for normalising flows have also been proposed, using only a single loss function without auxiliary loss terms \cite{kim_flowavenet_2019} or applying dilated 2-D convolutional layers \cite{ping_waveflow_nodate}. Finally, in order to reduce the number of repeated iterations required by normalising flows, they have been merged with autoregressive methods. This architecture increases the performance of autoregressive models since sampling can be processed in parallel. Using Inverse Autoregressive Flows (IAF), Oord et al increased the efficiency of WaveNet \cite{oord_parallel_2017}. Their implementation follows a "probability density distillation" scheme where a pre-trained WaveNet teacher scores the samples a WaveNet student outputs; this way, the student is trained to match the distribution of the teacher. A similar approach was adopted by ClariNet \cite{ping_clarinet_nodate}, where a Gaussian inverse autoregressive flow is applied to WaveNet to train a text-to-wave neural architecture.
\subsection{Adversarial Learning} Unlike Inverse Autoregressive Flows, where a pre-trained teacher network assists a student model, in adversarial learning two neural networks compete against each other in a two-player minimax game. The fundamental architecture of Generative Adversarial Networks (GANs) is based on two models, the Generator (G) and the Discriminator (D). The Generator maps a latent representation to the data space; in a vanilla GAN, it maps random noise to a desirable representation, which for sound synthesis could be raw audio or a spectrogram. This representation, original or generated, is used as input to the Discriminator, which is trained to distinguish between real and fake data. The maximum benefit from GANs is acquired when the Generator produces perfect data and the Discriminator is unable to differentiate between real and fake data. From a more technical point of view, the Discriminator is trained using only the distribution of the original data; its purpose is to maximise the probability of correctly identifying real and generated data. The Generator, on the other hand, is trained through the Discriminator: information about the original distribution of the dataset is concealed from it, and its aim is to maximise the classification error of the Discriminator. This minimax game can be summarised by Eq. \ref{gans}. \begin{equation}\label{gans} \begin{aligned} \min_{G}\max_{D}V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] \\ + \mathbb{E}_{z \sim p_{z}(z)}[\log(1-D(G(z)))] \end{aligned} \end{equation} In the field of sound generation a variety of implementations have been proposed using numerous representations. In \cite{engel_gansynth_2019}, spectrograms were generated using upsampling convolutions for fast generation, while in \cite{donahue_adversarial_2019} the authors investigated whether waveforms or spectrograms are more effective for GANs, applying the Wasserstein loss function.
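The two expectations in Eq. \ref{gans} can be estimated on mini-batches; the following toy one-dimensional NumPy sketch (with a hypothetical logistic discriminator and affine generator, purely for illustration) shows how the discriminator and generator objectives are evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    """Toy logistic discriminator D(x) in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x * w + b)))

def generator(z, scale, shift):
    """Toy generator mapping latent noise to samples."""
    return scale * z + shift

w, b = 1.5, 0.0            # hypothetical discriminator parameters
scale, shift = 0.5, 2.0    # hypothetical generator parameters

x_real = rng.normal(2.0, 0.3, size=1000)   # "real" data distribution
z = rng.standard_normal(1000)              # latent noise
x_fake = generator(z, scale, shift)

# Monte Carlo estimates of the two terms of the minimax value V(D, G):
# D maximises this value, so its loss is the negated estimate,
# while G minimises the second (fake-sample) term.
d_loss = -(np.mean(np.log(discriminator(x_real, w, b)))
           + np.mean(np.log(1.0 - discriminator(x_fake, w, b))))
g_loss = np.mean(np.log(1.0 - discriminator(x_fake, w, b)))
```

In an actual GAN, gradients of these batch estimates alternately update the Discriminator and Generator parameters.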
In Parallel WaveGAN \cite{yamamoto_parallel_2020}, a teacher-student scheme with a non-autoregressive WaveNet was adopted in order to improve WaveGAN's efficiency. Yamamoto et al \cite{yamamoto_probability_2019} applied GANs using an IAF generator optimised by a probability density distillation algorithm. GAN-TTS \cite{binkowski_high_2019} examined an ensemble of Discriminators to generate acoustic features using the hinge loss function, as did \cite{kumar_melgan_2019}\cite{donahue_end--end_2021}. Lastly, GANs have also been applied to a variety of tasks such as text-to-speech \cite{neekhara_expediting_2019}, speech synthesis \cite{oyamada_generative_2018}\cite{saito_statistical_2018}, speech enhancement \cite{pascual_segan_2017} and symbolic music generation \cite{dong_musegan_2017}. \subsection{Variational Autoencoders} An autoencoder is one of the fundamental deep learning architectures, consisting of two separate networks, an encoder and a decoder. The encoder compresses the input data into a latent representation while the decoder synthesises data from the learned latent space. The original autoencoder scheme was created for dimensionality reduction purposes. Although the decoder bears some resemblance to the Generator of GANs, the model is not well suited to synthesising new examples: the network endeavors to reconstruct the original input and therefore lacks expressiveness. To use autoencoders as generative models, variational autoencoders have been proposed \cite{kingma_auto-encoding_2014}. In this architecture, the encoder models a latent distribution and the network then samples from this distribution to generate latent examples. The success of variational autoencoders is largely based on the Kullback–Leibler (KL) divergence used in the loss function: the encoder introduces a new distribution $q(z|X)$ to approximate $p(z|X)$ as closely as possible by minimising the KL divergence.
The complete loss function is given in Eq. \ref{vae}, where the first term (the \textit{reconstruction loss}) is applied on the final layer and the second term (the \textit{regularization loss}) adjusts the latent layer. \begin{equation}\label{vae} \mathcal{L} = \mathbb{E}_{z \sim q(z|X)} [\log p(X|z)] - D_{KL}[q(z|X)||p(z)] \end{equation} Many variations of the VAE have been applied to sound generation. In \cite{engel_ddsp_2020} a VAE with feedforward networks and an additive synthesiser was used to reproduce monophonic musical notes. In \cite{pandey_new_2019} and \cite{bitton_neural_2020} convolutional layers were applied, while in \cite{subramani_vapar_2020} a Variational Parametric Synthesiser was proposed using a conditional VAE. A modification of variational autoencoders proposed for music synthesis is VQ-VAE \cite{dhariwal_jukebox_2020}. In this approach, the network is trained to encode the input data into a sequence of discrete tokens; Jukebox introduces this method to flatten the data and process it with autoregressive Transformers. \section{Evaluation} Although generative models have improved significantly in the last decade, a definitive evaluation process remains an open question. Many mathematical metrics have been proposed for perceptually evaluating generated sound, usually after a transformation to another audio representation. However, despite numerous attempts, none of these metrics is as reliable as the subjective evaluation of human listeners. \subsection{Perceptual Evaluation} Human evaluation usually accounts for the mean opinion score among a group of listeners. To conduct such studies, many researchers have used crowdMOS \cite{ribeiro_crowdmos_nodate}, a user-friendly toolkit for performing listening evaluations. Along with the mean opinion score, a confidence interval is also computed.
Furthermore, in order to recruit a sufficient number of subjects with specific characteristics, Amazon Mechanical Turk has been widely used. In many cases, raters were asked to pass a hearing test \cite{prenger_waveglow_2018} or to keep headphones on \cite{kumar_melgan_2019}\cite{oord_wavenet_2016}, or only native speakers were recruited for evaluating speech \cite{yamamoto_parallel_2020}\cite{kumar_melgan_2019}\cite{li_neural_2019}. In these mean opinion score tests, subjects rate a sound on a five-point Likert scale in terms of pleasantness \cite{prenger_waveglow_2018}, naturalness \cite{oord_wavenet_2016}\cite{binkowski_high_2019}\cite{donahue_end--end_2021}, sound quality \cite{kalchbrenner_efficient_2018} or speaker diversity \cite{donahue_adversarial_2019}. In addition, subjects have been asked to express a preference between the outputs of two generative models for the same pitch \cite{engel_gansynth_2019} or speech \cite{kalchbrenner_efficient_2018}\cite{oord_wavenet_2016}. Finally, for evaluating WaveGAN \cite{donahue_adversarial_2019}, listeners heard digits between one and ten and were asked to indicate which number they heard. \subsection{Number of Statistically-Different Bins} The Number of Statistically-Different Bins (NDB) is a metric for unconditional generative models that estimates the diversity of the synthesised examples. Clustering techniques are applied to the training data, creating cells of similar properties, and the same algorithm then categorises the generated data into these cells. If the generated examples do not populate a cell in the same proportion as the training data, that cell counts as statistically significantly different. GANSynth \cite{engel_gansynth_2019} used $k$-means to map the log spectrograms of the generated sounds into $k = 50$ Voronoi cells.
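A minimal NumPy sketch of the NDB idea follows, using one-dimensional histogram bins in place of $k$-means Voronoi cells for brevity, with synthetic Gaussian data standing in for audio features; bins whose proportions differ significantly under a two-proportion z-test are counted:

```python
import numpy as np

rng = np.random.default_rng(0)

def ndb(train, generated, edges, z_threshold=2.0):
    """Number of Statistically-Different Bins (sketch): bin both sets
    (histogram bins stand in for k-means cells) and count the bins
    whose occupancy proportions differ significantly according to a
    two-proportion z-test."""
    n, m = len(train), len(generated)
    p = np.histogram(train, bins=edges)[0] / n
    q = np.histogram(generated, bins=edges)[0] / m
    pooled = (p * n + q * m) / (n + m)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n + 1 / m))
    z = np.abs(p - q) / np.where(se > 0, se, np.inf)
    return int(np.sum(z > z_threshold))

train = rng.normal(0.0, 1.0, 5000)
same = rng.normal(0.0, 1.0, 5000)      # matches the training distribution
shifted = rng.normal(1.0, 1.0, 5000)   # mode-shifted "generated" data
edges = np.linspace(-4, 4, 21)         # 20 bins over the feature range

print(ndb(train, same, edges) <= ndb(train, shifted, edges))  # True
```

A generator that matches the training distribution yields few significantly different bins, while a mode-shifted or mode-collapsed one yields many.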
In addition to the Mean Opinion Score and the Number of Statistically-Different Bins, GANSynth also used the Inception Score, Pitch Accuracy, Pitch Entropy and the Fr\'{e}chet Inception Distance for evaluation purposes; the remaining metrics are analysed in the following sections. A similar set of evaluation metrics, including NDB, has also been adopted by \cite{kong_diffwave_2021}. \subsection{Inception Score} The Inception Score (IS) is another perceptual metric which correlates with human evaluation and is mostly adopted for GANs. A pre-trained Inception classifier is applied to the output of the generative model. In order to measure the diversity of the synthesised data, the IS calculates the KL divergence between the model scores $P(y|x)$ and the marginal distribution $P(y)$ over every possible class, as expressed by Eq.\ref{IS} \cite{salimans_improved_2016}. The IS is maximised when each generated example is confidently assigned to a single class while, overall, every class is predicted equally often. \begin{equation}\label{IS} IS = \exp(\mathbb{E}_{x}D_{KL}(P(y|x)||P(y))) \end{equation} In \cite{engel_gansynth_2019} and \cite{engel_neural_2017}, a pitch classifier was trained on spectrograms of the NSynth dataset, while in WaveGAN \cite{donahue_adversarial_2019} the classifier was trained on normalised log mel-spectrograms with zero mean and unit variance. Metrics such as Pitch Accuracy and Pitch Entropy, or a nearest-neighbour technique, were adopted by GANSynth and WaveGAN respectively to further assess the reliability of their Inception Scores. Finally, \cite{kong_diffwave_2021} also applied a modified Inception Score. \subsection{Distance-based measurements} This evaluation category includes metrics that measure the distance between representations of the original data and the distribution of the generated examples. Binkowski et al.
proposed two distance-based metrics, the Fr\'{e}chet DeepSpeech Distance (FDSD) and the Kernel DeepSpeech Distance (KDSD) \cite{binkowski_high_2019} for evaluating their text-to-speech model. The two metrics apply the Fr\'{e}chet distance and the Maximum Mean Discrepancy, respectively, to audio features extracted by a speech recognition model. The Fr\'{e}chet, or 2-Wasserstein, distance has been used in other research papers as well. Engel et al \cite{engel_gansynth_2019} applied the Fr\'{e}chet Inception Distance to features extracted by a pitch classifier, while Kilgour et al \cite{kilgour_frechet_2019} used this distance to measure the intensity of distortions in generated sound. However, although many researchers report successful results with the 2-Wasserstein distance, Donahue et al \cite{donahue_end--end_2021} reported that a similar evaluation metric did not produce a desirable outcome in their experiments. Distance-based measurements have also been investigated on individually estimated parameters: in \cite{engel_ddsp_2020}, distances between the loudness and the fundamental frequency of synthesised and training data are used. \subsection{Spectral Convergence} The Spectral Convergence expresses the mean difference between the original and the generated spectrogram. It has been applied by \cite{dhariwal_jukebox_2020}\cite{arik_fast_2019}\cite{tamamori_speaker-dependent_2017}\cite{bitton_neural_2020} in order to evaluate synthesised music. The Spectral Convergence can be expressed by Eq.\ref{spectral_convergence}, which is also the quantity minimised by the Griffin-Lim algorithm.
\begin{equation}\label{spectral_convergence} SC = \sqrt{\frac{\sum_{n,m}|S(n,m)-\widetilde{S}(n,m)|^2}{\sum_{n,m}|S(n,m)|^2}} \end{equation} \subsection{Log Likelihood} A final set of evaluation metrics includes the Negative Log Likelihood (NLL) \cite{kalchbrenner_efficient_2018}\cite{mehri_samplernn_2017} and an objective Conditional Log Likelihood (CLL) \cite{kim_flowavenet_2019}, usually measured in bits per audio sample. \section{Conclusion} The choice of audio representation is one of the most significant factors in the development of deep learning models for sound synthesis. Numerous representations have been proposed by previous researchers, each focusing on different properties. Raw audio is a direct representation demanding notable memory and computational cost. It is also unsuitable for evaluation purposes, since different waveforms can produce perceptually identical sounds. Spectrograms overcome some of the disadvantages of raw audio and have been considered an alternative for training as well as for evaluation. However, reconstructing the original sound from its spectrogram is a challenging task, since it may produce sound suffering from distortions and lack of phase coherence. Recently, other audio representations have received much attention, such as latent representations, embeddings and acoustic features, but they all require a powerful decoder. The choice of audio representation is still very much dependent on the application. \section{Acknowledgments} This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. \bibliographystyle{ieeetr}
\section{Experiments} \begin{figure}[tb] \renewcommand{\captionfont}{\small} \centering \includegraphics[scale=0.56]{figures/aless300.jpg} \caption{\textbf{Experiments on \cite{cimatti2016dynamic}'s benchmark from which DTNs and STNs have been removed.} The X-axis represents the allocated time in seconds and the Y-axis the number of instances in the benchmark each solver can solve within the corresponding allocated time. Timeout is set to 20 seconds per instance.} \label{aless-bench} \vspace{-0.4cm} \end{figure} \begin{figure}[tb] \renewcommand{\captionfont}{\small} \centering \includegraphics[scale=0.56]{figures/10-20.jpg} \caption{\textbf{Experiments on benchmark $\boldsymbol{B_1}$.} Axes are the same as in figure \ref{aless-bench}. Timeout is set to 30 seconds per instance.} \label{10-20} \vspace{-0.5cm} \end{figure} We carry out experiments to evaluate the efficiency of the proposed tree search approach and the effect of the MPNN's guidance. We also compare these methods to a DC solver from \cite{cimatti2016dynamic}. TDC is a subset of DC and a more restrictive form of controllability: non-TDC controllability does not imply non-DC controllability. In that sense, a TDC solver can be expected to offer better performance than a DC counterpart, in exchange for potentially being unable to find a strategy when a DC algorithm would. In this section, we refer to the tree search algorithm as TS, the tree search algorithm guided by the trained MPNN up to the $15^{th}$ (respectively $X^{th}$) \textit{d-OR } node depth-wise in the tree as MPNN-TS (respectively MPNN-TS-X) and the most efficient DC solver from \cite{cimatti2016dynamic} as PYDC-SMT ordered. First, we use the benchmark from the experiments of \cite{cimatti2016dynamic}, from which we remove DTNs and STNs. We compare TS, MPNN-TS and PYDC-SMT on the resulting benchmark, which comprises $290$ DTNUs and $1042$ STNUs.
Here, limiting the maximum depth use of the MPNN to $15$ offers a good trade off between guidance gain and the cost of calling the heuristic. Results are given in Figure~\ref{aless-bench}. We observe that TS solves roughly $50\%$ more problem instances than PYDC-SMT within the allocated time ($20$ seconds). In addition, TS solves $56\%$ of all instances while the remaining ones time out. Among solved instances, a strategy is found for $89\%$ and the remaining $11\%$ are proved non-TDC. On the other hand, PYDC-SMT solves $37\%$ of all instances. A strategy is found for $85\%$ of PYDC-SMT's solved instances while the remaining $15\%$ are proved non-DC. Finally, of all instances PYDC-SMT solves, TS solves $97\%$ accurately with the same conclusion, \emph{i.e. } TDC when DC and non-TDC when non-DC. The use of the heuristic leads to an additional $+6\%$ of problems solved within the allocated time. We argue this small increase is essentially due to the fact that most problems solved in the benchmark are small-sized problems with few timepoints, which are solved quickly. For further evaluation of the heuristic, we create new benchmarks using the DTNU generator described in \S \ref{randomized-tree-search} with varying numbers of timepoints. These benchmarks contain fewer quickly solved DTNU instances and more hard ones. Each benchmark contains 500 randomly generated DTNUs with $1$ to $3$ uncontrollable timepoints. Moreover, each DTNU has 10 to 20 controllable timepoints in the first benchmark $B_1$, 20 to 25 in the second benchmark $B_2$ and 25 to 30 in the last benchmark $B_3$. Experiments on $B_1$, $B_2$ and $B_3$ are shown in figures \ref{10-20}, \ref{20-25-b} (in the appendix) and \ref{25-30} respectively. We note that for all three benchmarks, no solver ever proves non-TDC or non-DC controllability before timing out, due to the larger size of these problems.
\begin{figure}[tb] \renewcommand{\captionfont}{\small} \centering \includegraphics[scale=0.56]{figures/25-30.jpg} \caption{\textbf{Experiments on benchmark $\boldsymbol{B_3}$.} Axes are the same as in figure \ref{aless-bench}. Timeout is set to 180 seconds.} \label{25-30} \vspace{-0.5cm} \end{figure} On these benchmarks, PYDC-SMT does not perform well on $B_1$ and cannot solve any instance of $B_2$ or $B_3$. TS does not perform well on $B_2$ and solves only 2 instances of $B_3$. However, we see a significantly higher gain from the use of the MPNN for TS, which varies with the maximum depth use. At the best depth use, the gain is $+91\%$ instances solved for $B_1$, $+980\%$ for $B_2$ and $+1150\%$ for $B_3$. The more timepoints instances have, the more worthwhile heuristic guidance appears to be. Indeed, the optimal maximum depth use of the MPNN in the tree increases with problem size: $15$ for $B_1$, $60$ for $B_2$ and $120$ for $B_3$. We argue this is because more timepoints result in a wider search tree overall, including in deeper sections where heuristic use was not necessarily worth its cost for smaller problems. Furthermore, the MPNN is trained on randomly generated DTNUs with 10 to 20 controllable timepoints; the promising gains shown by the experiments on $B_2$ and $B_3$ suggest the MPNN generalizes to bigger problems than those it is trained on. The tree search approach presented in this work offers a good trade off between search completeness and effectiveness: almost all instances solved by PYDC-SMT on \cite{cimatti2016dynamic}'s benchmark are solved with the same conclusion, and many more that PYDC-SMT could not solve are. Moreover, the TDC approach scales better to problems with more timepoints, and the tree structure allows the use of learning-based heuristics.
Although these heuristics are not, by themselves, the key to solving large-scale problems, our experiments suggest they can still provide a considerable increase in efficiency. \section{Introduction} Temporal Networks (TN) are a common formalism to represent and reason about temporal constraints over a set of time points (e.g. start/end of activities in a scheduling problem). Simple Temporal Networks with Uncertainty (STNUs) \cite{kn:Ts} \cite{kn:ViFa} explicitly incorporate qualitative uncertainty into temporal networks. Considerable work has resulted in algorithms to determine whether or not all timepoints can be scheduled, either up-front or reactively, in order to account for uncertainty (e.g. \cite{kn:MoMu2}, \cite{kn:Mofast}). In particular, an STNU is {\em dynamically controllable} (DC) if there is a reactive strategy in which controllable timepoints can be executed either at a specific time, or after observing the occurrence of an uncontrollable timepoint. Cimatti et al. \cite{cimatti2016dynamic} investigate the problem of DC for Disjunctive Temporal Networks with Uncertainty (DTNUs), which generalize STNUs. Figure~\ref{fig:easy-example}a shows two example DTNUs $\gamma$ and $\gamma'$ on the left side; $a_i$ are controllable timepoints and $u_j$ are uncontrollable timepoints. Timepoints are variables which can take on any value in ${\rm I\!R}$. Constraints between timepoints characterize both a minimum and a maximum time distance separating them, likewise valued in ${\rm I\!R}$. The key difference between STNUs and DTNUs lies in the {\em disjunctions} that yield more choice points for consistent scheduling, especially reactively. \begin{figure}[t!] \renewcommand{\captionfont}{\small} \centering \includegraphics[scale=0.57]{figures/fused-example.png} \caption{\textbf{Two example DTNUs $\gamma$ and $\gamma'$.} In both examples, timepoints $a_1$ and $a_2$ are controllable; $u_1$ is uncontrollable.
Black arrows and their intervals (valued in ${\rm I\!R}$) represent time constraints between timepoints; the light red arrow and its interval represent a contingency link. The dashed dark red arrow in $\gamma'$ indicates that $u_1$ has already been activated and will occur within the specified interval. A TDC strategy is displayed for $\gamma$: the root node of the strategy is $\gamma$, while the other nodes are sub-DTNUs, except the $\lor$ node which lists transitional possibilities. DTNU $\gamma'$, on the other hand, is an example of a DTNU which is DC but not TDC.} \label{fig:easy-example} \vspace{-0.4cm} \end{figure} The complexity of DC checking for DTNUs is $PSPACE$-complete \cite{kn:BhWi}, making this a very challenging problem. The difficulty in proving or disproving DC arises from the need to check all possible combinations of disjuncts in order to handle all possible occurrence outcomes of the uncontrollable timepoints. The best previous approaches for this problem use timed-game automata (TGAs) and Satisfiability Modulo Theories (SMT), as described in \cite{cimatti2016dynamic}. Recently, applications such as image classification have benefited from learning techniques such as Convolutional Neural Networks (CNNs) \cite{krizhevsky_2012}. An emerging family of neural networks, graph-based neural networks (GNNs), has been proposed as an extension of CNNs to graph-structured data; recent variants based on spectral graph theory include \cite{defferrard_2016}, \cite{YujiaLi2016}, \cite{kipf_2017}. These GNNs take advantage of relational properties between nodes for classification, but do not take into account potential edge weights. In newer approaches, Message Passing Neural Networks (MPNNs) with architectures such as those in \cite{battaglia2016interaction}, \cite{gilmer2017neural} and \cite{kipf2018neural} use embeddings comprising edge weights within each computational layer.
We focus our interest on these architecture types as DTNUs can be formalized as graphs with edge distances representing time constraints. In this work, we study DC checking of DTNUs as a search problem, express states as graphs, and use MPNNs to learn heuristics from previously solved DTNUs to guide the search. The key contributions of our approach are the following. \textbf{(1)} We introduce a time-based form of dynamic controllability (TDC) and a tree search approach to identify TDC strategies. We informally show that TDC implies DC, but the opposite is not generally true. \textbf{(2)} We define a relevant way of using an MPNN architecture to handle DTNU scheduling problems and use it as a heuristic for guidance in the tree search. Moreover, we define a self-supervised training scheme that trains the MPNN by solving randomly generated DTNUs with short timeouts to limit search duration. \textbf{(3)} We introduce constraint propagation rules which enable us to enforce time domain restrictions on variables in order to ensure the soundness of the strategies found. We carry out experiments which show that the tree search algorithm offers improved scalability over the best previous DC-solving approach evaluated in \cite{cimatti2016dynamic}, PYDC-SMT. Moreover, we show the tree search is improved upon significantly by the learned MPNN heuristic on harder DTNUs. \section{Learning-based Heuristic} \label{network} We present our learning model and explain how it provides heuristic guidance for the tree search. Our learning architecture originates from \cite{gilmer2017neural}. It uses message passing rules allowing neural networks to process graph-structured inputs where both vertices and edges possess features. The authors of this architecture carried out node classification experiments in quantum chemistry and achieved state-of-the-art results on a molecular property prediction benchmark. Here, we first define a way of converting DTNUs into graph data.
Then, we process the graph data with our MPNN and explain how the output is used to guide the tree search. Let $\Gamma = \{A,U,C,L\}$ be a DTNU. We start by explaining how we turn $\Gamma$ into a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. First, we convert all time values from absolute to relative with the assumption the current time for $\Gamma$ is $t=0$. We search all converted time intervals $[x_i,y_i]$ in $C$ and $L$ for the highest interval bound value $d_{max}$, \emph{i.e. } the farthest point in time. We then proceed to normalize every time value in $C$ and $L$ by dividing them by $d_{max}$. As a result, every time value becomes a real number between $0$ and $1$. Next, we convert each controllable timepoint $a \in A$ and uncontrollable timepoint $u \in U$ into graph nodes with corresponding \textit{controllable} or \textit{uncontrollable} node features. The time constraints in $C$ and contingency links in $L$ are expressed as edges between nodes with $10$ different edge distance classes ($0:[0,0.1)$, $1:[0.1,0.2)$, ..., $9:[0.9,1]$). We also use additional edge features to account for edge types (constraint, disjunction, contingency link, direction sign for lower and upper bounds). Moreover, intermediary nodes are used with a distinct node feature in order to map possible disjunctions in constraints and contingency links. We also add a \textit{WAIT } node with a distinct node feature which implicitly designates the act of waiting a period of time. The graph conversion of DTNU $\gamma$ is characterized by three elements: the matrix of all node features $X_{v}$, the adjacency matrix of the graph $X_{e}$ and the matrix of all edge features $X_{w}$. Let $f$ be the mathematical function for our MPNN and $\theta$ its set of parameters. Our function $f$ stacks $5$ graph convolutional layers from \cite{gilmer2017neural} coupled with the $\mathrm{ReLU}(\cdot) = \max(0,\cdot)$ piece-wise activation function \cite{glorot2011deep}. 
The $\texttt{sigmoid}$ function $\sigma(\cdot) = \frac{1}{1+\exp(-\cdot)}$ is then used to obtain a list of probabilities $\pi$ over all nodes in $\mathcal{G}$: $f_{\theta}(X_{v}, X_{e}, X_{w}) = \pi$. The probability of each node $v$ in $\pi$ corresponds to the likelihood of transitioning into a TDC DTNU from the original DTNU $\Gamma$ by taking the action corresponding to $v$. If $v$ represents a controllable timepoint $a$ in $\Gamma$, its corresponding probability in $\pi$ is the likelihood of the sub-DTNU resulting from the execution of $a$ being TDC. If $v$ represents a \textit{WAIT } decision, its probability refers to the likelihood of the \textit{WAIT } node having a \emph{true } attribute, \emph{i.e. } the likelihood of all children DTNUs resulting from the wait being TDC (with the wait duration rules set in \S \ref{waitperiod}). We call these two types of nodes \textit{active} nodes. Otherwise, if $v$ is another type of node, its probability is not relevant to the problem and is ignored. Our MPNN is trained only on active nodes, using the DTNUs generated and solved in \S \ref{randomized-tree-search}, by minimizing the cross-entropy loss: $$\frac{1}{m} \sum\limits_{i=1}^{m} \sum\limits_{j=1}^{q} - Y_{ij} \log (f_{\theta}(X_i)_j) - (1 - Y_{ij}) \log (1-f_{\theta}(X_i)_j)$$ Here $X_i = (X_{i_v}, X_{i_e}, X_{i_w})$ is DTNU number $i$ among a training set of $m$ examples, and $Y_{ij}$ is the TDC controllability (1 or 0) of active node $j$ for DTNU number $i$. Lastly, the MPNN heuristic is used in the following way in the tree search. Once a \textit{d-OR } node is reached, the parent DTNU node is converted into a graph and the MPNN $f$ is called upon the corresponding graph elements $X_{v}, X_{e}, X_{w}$. Active nodes in the output probabilities $\pi$ are then ordered by highest values first, and the tree search visits the corresponding children tree nodes in the suggested order, preferring children with higher likelihood of being TDC first.
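As an illustration, the edge-distance binning and the active-node cross-entropy described above can be sketched in plain Python. This is a minimal sketch: the helper names \texttt{edge\_class} and \texttt{masked\_bce} are ours, and the loss here averages over the active nodes of a single DTNU rather than over a batch of $m$ training examples.

```python
import math

def edge_class(d, n_classes=10):
    # Map a normalized edge distance d in [0, 1] to one of 10 classes:
    # 0: [0, 0.1), 1: [0.1, 0.2), ..., 9: [0.9, 1].
    return min(int(d * n_classes), n_classes - 1)

def masked_bce(probs, labels, active):
    # Cross-entropy restricted to active nodes (controllable-timepoint
    # and WAIT nodes); probs are the sigmoid outputs of the MPNN and
    # labels are the 0/1 TDC annotations Y_ij.
    terms = [-y * math.log(p) - (1 - y) * math.log(1 - p)
             for p, y, a in zip(probs, labels, active) if a]
    return sum(terms) / len(terms)
```

Probabilities of non-active nodes are simply skipped by the mask, mirroring the fact that their output is ignored during both training and search.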
\section{Randomized Simulations for Heuristic Training} \label{randomized-tree-search} We leverage a learning-based heuristic to guide the tree search and increase its effectiveness. A key component in learning-based methods is the annotated training data. We generate such data in an automatic manner by using a DTNU generator to create random DTNU problems and solving them with a modified version of the tree search. We store the results and use them for training the MPNN. We now detail our data generation strategy. For training purposes, we create DTNUs which have a number of controllable timepoints ranging from 10 to 20 and a number of uncontrollable timepoints ranging from 1 to 3. The DTNUs are generated in the following way. For interval bounds of constraint conjuncts or contingency links, we always randomly generate real numbers within $[0,100]$. We restrict the number of conjuncts inside a disjunct to 5 at most. A random number $n_1 \in [10,20]$ of controllable timepoints and a random number $n_2 \in [1,3]$ of uncontrollable timepoints are selected. Each uncontrollable timepoint is randomly linked to a different controllable timepoint with a contingency link. Next, we iterate over the list of timepoints, and for each timepoint $v_i$ not appearing in constraints or contingency links, we add a disjunct for which at least one conjunct constrains $v_i$. The type of conjunct is selected randomly from either a \textit{distance} conjunct $v_i - v_j \in [x,y]$ or a \textit{bounded} conjunct $v_i\in [x,y]$. On the other hand, if $v_i$ was already present in the constraints or contingency links, we add a disjunct constraining $v_i$ with only a $20\%$ probability. In order to solve these random DTNUs, we modify the tree search as follows. For a DTNU $\Gamma$, the first \textit{d-OR } child node is developed as well as its children $\psi_1, \psi_2, ..., \psi_n \in \Psi$.
The modified tree search will explore each $\psi_i$ multiple times ($\nu$ times at most), each time with a timeout of $\tau$ seconds. Here we set $\nu = 25$ and $\tau = 3$. For each exploration of $\psi_i$, children nodes of any \textit{d-OR } node encountered in the corresponding subtree are explored randomly each time. If $\psi_i$ is proved to be either TDC or non-TDC during an exploration, the next explorations of the same child $\psi_i$ are called off and the truth attribute $\beta_i$ of $\psi_i$ is updated accordingly. The active node number $k$, corresponding to the decision leading to $\psi_i$ from DTNU $\Gamma$'s \textit{d-OR } node, is updated with the same value, \emph{i.e. } $Y_{k} = \beta_i$. However, if every exploration times out, $\psi_i$ is assumed non-TDC and $Y_k$ is set to \textit{false}. Once each $\psi_i$ has been explored, the pair $\langle \mathcal{G}(\Gamma), (Y_1, Y_2, ..., Y_n)\rangle$ is stored in the training set, where $\mathcal{G}(\Gamma)$ is the application of the graph conversion of $\Gamma$ described in \S\ref{network}. Assuming non-TDC controllability for children whose explorations all time out is acceptable: the output of the MPNN is a probability for each child node of the \textit{d-OR } node of the input DTNU, and when the heuristic is active these children are visited in the suggested order until one is found to be TDC, so no child is ever discarded. The trained MPNN will tend to give higher probabilities to children nodes for which explorations often found a TDC strategy before timeout, and lower probabilities to ones where explorations often ended up with a timeout. \section{Strategy Execution} \label{strategy-exec} A strategy found by the tree search for a DTNU $\Gamma$ is sound and guarantees constraint satisfiability if executed in the following manner.
Let $\mathcal{Q}$ be the system interacting with the environment, executing controllable timepoints and observing how uncontrollable timepoints unfold. At each DTNU node in the tree, $\mathcal{Q}$ will move on to the child \textit{d-OR } node. The child node $\psi_i$ of the \textit{d-OR } node which was found by the strategy to have a \emph{true } attribute is selected. If $\psi_i$ is a DTNU node, $\mathcal{Q}$ executes the corresponding controllable timepoint $a_i$ and moves on to $\psi_i$. On the other hand, if $\psi_i$ is a \textit{WAIT } node, $\mathcal{Q}$ moves on to $\psi_i$, reads the wait duration time $\Delta_t$ stored in $\psi_i$ and moves on to the child \textit{w-OR } node. The child node \textit{AND}$_{R_j}$ of the \textit{w-OR } node which has a \emph{true } attribute is selected, and $\mathcal{Q}$ will wait $\Delta_t$ time units with the reactive wait strategy $R_j$. After the wait is over, $\mathcal{Q}$ observes the list of all uncontrollable timepoints $\Lambda_i$ which occurred, deduces which DTNU child node of the \textit{AND}$_{R_j}$ node it transitioned into, and moves on to that node. By following these guidelines, the final tree node $\mathcal{Q}$ transitions into is necessarily a leaf node with a \emph{true } attribute, \emph{i.e. } a node for which all constraints are satisfied. This is due to the fact that for \textit{d-OR } and \textit{w-OR } nodes $\mathcal{Q}$ visits, $\mathcal{Q}$ chooses to transition into a child node with a \emph{true } attribute. For \textit{AND } nodes $\mathcal{Q}$ visits, all children DTNU nodes have a \emph{true } attribute, so $\mathcal{Q}$ transitions into a child node with a \emph{true } attribute regardless of how uncontrollable timepoints unfold. \section{Related Works} The use of learning-based heuristics has recently become increasingly popular for planning, combinatorial and network modeling problems. 
Recent works applied to network modeling and routing problems include \cite{rusek2019unveiling}, \cite{chen2018deep}, \cite{xu2018experience}, \cite{Kool2018}. Recently, GNNs have become a popular extension of CNNs. Their ability to represent problems with a graph structure, and the resulting invariance to node permutation, make them well suited to such applications. We refer the reader to \cite{wu2019comprehensive} for a complete survey on GNNs. In combinatorial optimization, GNNs can benefit both approximate and exact solvers. In \cite{ZhuwenLi2018}, the authors combine tree search, GNNs and a local search algorithm to achieve state-of-the-art results for approximate solving of NP-hard problems such as the maximum independent set problem. On the other hand, \cite{gasse2019exact} use a GNN for branch and bound variable selection for exact solving of NP-hard problems and achieve superior results to previous learning approaches. In path-planning problems with NP-hard constraints, \cite{osanlou2019optimal} use a GNN to predict an upper bound for a branch and bound solver and outperform an A*-based planner coupled with a problem-suited handcrafted heuristic. Lastly, \cite{Ma2018} call a GNN for the selection of a planner inside a portfolio for STRIPS planning problems and outperform the leading learning-based approach, which was based on a CNN \cite{sievers2019deep}. In most of these works, GNNs generalize to larger problems than those they are trained on. Results from our experiments are in line with this observation. \section{Conclusion} We introduced a new type of controllability, time-based dynamic controllability (TDC), and a tree search approach for solving disjunctive temporal networks with uncertainty (DTNU) in TDC. Strategies are built by discretizing time and exploring different decisions which can be taken at different key points, as well as anticipating how uncontrollable timepoints can unfold.
We defined constraint propagation rules which ensure soundness of the strategies found. We showed that the tree search approach is able to solve DTNUs in TDC more efficiently than the state-of-the-art dynamic controllability (DC) solver, PYDC-SMT, with almost always the same conclusion. Lastly, we created MPNN-TS, a solver which combines the tree search with a heuristic function based on message passing neural networks (MPNN) for guidance. The MPNN is trained with a self-supervised strategy based on a variant of the tree search algorithm. The use of the MPNN significantly improves the tree search on harder DTNU problems, notably on DTNUs of bigger size than those used for training the MPNN. \bibliographystyle{aaai} \section{Time-based Dynamic Controllability} \label{tdc} A DC {\em strategy} for a DTNU either executes controllable timepoints at a specific time, or reacts to the occurrence of an uncontrollable timepoint. We present our TDC formalism here. A TDC strategy executes controllable timepoints at specific times under the assumption that some uncontrollable timepoints may occur or not in a given time interval. Each interval in a TDC strategy can have an arbitrary duration. Thus, controllable timepoints are usually executed at the end of the same interval, regardless of whether and when uncontrollable timepoints occur inside the interval. Within a given interval, TDC also leaves open the choice to execute a controllable timepoint at the same time as the occurrence of an uncontrollable timepoint, which we call reactive execution. Nonetheless, TDC is less flexible than a DC strategy, which can wait for an uncontrollable timepoint to occur before making a new decision. It does not allow, for example, {\em delayed} reactive execution of the controllable timepoint. TDC is a subset of DC, and a stronger form of controllability: TDC implies DC.
As described below, representing reactive DC strategies in TDC may require the tree to become arbitrarily large, so DC does not imply TDC. Figure~\ref{fig:easy-example}a shows an example STNU $\gamma'$ which is DC but not TDC. In this example, uncontrollable timepoint $u_1$ is activated, \emph{i.e. } the controllable timepoint associated with $u_1$ in the contingency links has been executed. Moreover, it is known that $u_1$ occurs between $t$ and $t + 1$, where $t$ is the current time. The interval $[t,t+1]$ is referred to as the \emph{activation time interval} for $u_1$. Controllable timepoint $a_1$ must be executed at least 1 time unit after $u_1$, and controllable timepoint $a_2$ at least 5 time units after $a_1$. However, controllable timepoint $a_2$ cannot be executed later than 6 time units after $u_1$. A valid DC strategy waits for $u_1$ to occur, then schedules $a_1$ exactly 1 time unit later, and $a_2$ 5 time units after $a_1$. However, for any TDC strategy, no wait duration while waiting for $u_1$ to happen is small enough to avoid violating these constraints. There will always be some strictly positive lapse of time between the moment $u_1$ occurs and the end of the wait. The exact execution time of $u_1$ during the wait is unknown: a TDC strategy therefore assumes $u_1$ happened at the end of the wait when trying to schedule $a_1$ at the earliest. Therefore, the earliest time $a_1$ can be scheduled in a TDC strategy is 1 time unit after the end of the wait, which is too late. \section{Tree Search Preliminaries} \label{tree-search-prelim} We introduce here the tree search algorithm. Intuitively, the approach discretizes uncontrollable durations, \emph{i.e. } durations when one or several uncontrollable timepoints can occur, into multiple reduced intervals. These reduced intervals are then used to account for possible outcomes of uncontrollable timepoints and adapt the scheduling strategy accordingly.
The root of the search tree is the DTNU, and other tree nodes are either sub-DTNUs of the DTNU or logical nodes (\textit{OR, AND}) which represent the presence or lack of control over transition into children nodes. At a given DTNU tree node, decisions such as executing a controllable timepoint or waiting for a period of time develop children DTNU nodes for which these decisions are propagated to constraints. The TDC controllability of a \textit{leaf} DTNU, \emph{i.e. } a sub-DTNU for which all controllable timepoints have been executed and uncontrollable timepoints are assumed to have occurred in specific intervals, indicates whether or not this sub-DTNU has been solved at the end of the scheduling process. We also refer to the TDC controllability of a DTNU node in the search tree as its \textit{truth attribute}. Lastly, the search logically combines TDC controllability of children DTNUs to determine TDC controllability for parent nodes. We give a simple example of a TDC strategy for a DTNU $\gamma$ in figure~\ref{fig:easy-example}. Let $\Gamma = \{A,U,C,L\}$ be a DTNU. $A$ is the list of controllable timepoints, $U$ the list of uncontrollable timepoints, $C$ the list of constraints and $L$ the list of contingency links. The root node of the search tree built by the algorithm is $\Gamma$. There are four different types of nodes in the tree. Furthermore, each node possesses a \textit{truth} attribute, as explained in \S \ref{truthvalue}, which is initialized to \textit{unknown} and can be set to either \emph{true } or \emph{false }. The different types of tree nodes are listed below and shown in figure \ref{fig:ts-structure-fig}. 
\paragraph{\textit{DTNU} nodes.} Any DTNU node other than the original problem $\Gamma$ corresponds to a sub-problem of $\Gamma$ at a given point in time $t$, for which some controllable timepoints may have already been scheduled in upper branches of the tree, some amount of time may have passed and some uncontrollable timepoints may have occurred. A DTNU node is made of the same timepoints $A$ and $U$, constraints $C$ and contingency links $L$ as the original DTNU $\Gamma$. It also carries a schedule memory $S$ of the exact times, or time intervals, during which scheduled timepoints were executed in previous decisions in the tree. Lastly, the node also keeps track of the activation time intervals of activated uncontrollable timepoints $B$. The schedule memory $S$ is used to create an updated list of constraints $C'$ resulting from the propagation of the execution time or execution time interval of timepoints in constraints $C$ as described in \S \ref{tightbounds}. A non-terminal DTNU node, \emph{i.e. } a DTNU node for which not all timepoints have been scheduled, has exactly one child node: a \textit{d-OR } node. \vspace{-0.4cm} \paragraph{\textit{OR} nodes.} When a choice can be made at time $t$, this transition control is represented by an \textit{OR} node. We distinguish two types of such nodes, \textit{d-OR } and \textit{w-OR }. For \textit{d-OR } nodes, the first type of choice available is which controllable timepoint $a_i$ to execute. This leads to a DTNU node. The other type of choice is to wait a period of time (\S\ref{wait}), which leads to a \textit{WAIT } node. \textit{w-OR } nodes can be used for \textit{reactive wait strategies}, \emph{i.e. } to stipulate that some controllable timepoints will be scheduled reactively during waits (\S \ref{instant-scheduling}). The parent of a \textit{w-OR } node is therefore a \textit{WAIT } node and its children are \textit{AND } nodes, described below.
\vspace{-0.4cm} \paragraph{\textit{WAIT } nodes.} These nodes are used after a decision to wait a certain period of time $\Delta_t$. The parent of a \textit{WAIT } node is a \textit{d-OR } node. A \textit{WAIT } node has exactly one child: a \textit{w-OR } node, which has the purpose of exploring different reactive wait strategies. The uncertainty management related to uncontrollable timepoints is handled by \textit{AND } nodes. \vspace{-0.4cm} \paragraph{\textit{AND } nodes.} This type of node represents a lack of transition control over children nodes. It is used after a wait decision is taken and a reactive wait strategy is decided, represented consecutively by a \textit{WAIT } and \textit{w-OR } node. Each child node of the \textit{AND } node is a DTNU node at time $t + \Delta_t$, where $t$ is the time before the wait and $\Delta_t$ the wait duration. Furthermore, each child node represents an outcome of how uncontrollable timepoints may unfold, and is built from the set of {\em activated} uncontrollable timepoints (uncontrollable timepoints that have been started by the execution of their controllable timepoint) whose occurrence time interval overlaps the wait. If there are $l$ activated uncontrollable timepoints, then there are at most $2^l$ \textit{AND } node children, representing each element of the power set of activated uncontrollable timepoints (\S \ref{wait}). \begin{figure}[tb] \renewcommand{\captionfont}{\small} \centering \includegraphics[scale=0.68]{figures/tree-structure.png} \caption{\textbf{Basic structure of the search tree describing how a DTNU node \boldmath$DTNU_{O, P, t}$ is developed.} $DTNU_{O, P, t}$ (placed at the root of the tree) refers to a DTNU where $O$ is the set of controllable timepoints that have already been executed, $P$ the set of uncontrollable timepoints that have occurred, and $t$ the time.
Each branch $a_i$ refers to a controllable timepoint $a_i$, $R_i$ to a reactive strategy during the wait, and $\Lambda_i$ to a combination of uncontrollable timepoints which can occur during the wait. } \label{fig:ts-structure-fig} \vspace{-0.5 cm} \end{figure} Figure \ref{fig:ts-structure-fig} illustrates how a sub-problem of $\Gamma$, referred to as $DTNU_{O, P, t}$, is developed. Here, $O \subset A$ is the set of controllable timepoints that have already been executed, $P \subset U$ the set of uncontrollable timepoints which have occurred, and $t$ the time. This root node transitions into a \textit{d-OR } node. The \textit{d-OR } node in turn is developed into several children nodes $DTNU_{O \cup \{a_i\}, P,t}$ and a \textit{WAIT } node. Each node $DTNU_{O \cup \{a_i\}, P,t}$ corresponds to a sub-problem which is obtained from the execution of controllable timepoint $a_i$ at time $t$. The \textit{WAIT } node refers to the process of waiting a given period of time, $\Delta_t$ in the figure, before making the next decision. The \textit{WAIT } node leads directly to a \textit{w-OR } node which lists different wait strategies $R_i$. If there are $l$ activated uncontrollable timepoints, there are $2^l$ subsets of uncontrollable timepoints $\Lambda_i$ that could occur. Each $\textit{AND}_{R_j}$ node has one sub-problem DTNU for each $\Lambda_i$. Each sub-problem $DTNU_{O_i, P \cup \Lambda_i, t + \Delta_t}$ of the node $\textit{AND}_{R_j}$ is a DTNU at time $t+\Delta_t$ for which all uncontrollable timepoints in $\Lambda_i$ are assumed to have happened during the wait period, \emph{i.e. } in the time interval $[t, t + \Delta_t]$. Additionally, some controllable timepoints may have been reactively executed during the wait and may now be included in the set of scheduled controllable timepoints $O_i$. Otherwise, $O_i = O$. Two types of leaf nodes exist in the tree. 
The first type is a node $DTNU_{A, U, t}$ for which all controllable timepoints $a_i \in A$ have been scheduled and all uncontrollable timepoints $u_i \in U$ have occurred. The second type is a node $DTNU_{A \setminus A', U, t}$ for which all uncontrollable timepoints $u_i \in U$ have occurred, but some controllable timepoints $a_i \in A'$ have not been executed. The constraint satisfiability test of the former type of leaf node is straightforward: all execution times of all timepoints are propagated to constraints in the same fashion as in \S \ref{tightbounds}. The leaf node's truth attribute is set to \emph{true } if all constraints are satisfied, \emph{false } otherwise. For the latter type, we propagate the execution times of all uncontrollable timepoints as well as all scheduled controllable timepoints in the same way, and obtain an updated set of constraints $C'$. This leaf node, $DTNU_{A \setminus A', U, t}$, is therefore characterized as $\{A', \emptyset, C', \emptyset\}$ and is a DTN. We add the constraints $a'_i \geq t, \forall a'_i \in A'$ and use a mixed integer linear programming solver \cite{cplex2009v12} to solve the DTN. If a solution is found, the execution time values for each $a_i' \in A'$ are stored and the leaf node's truth value is set to \textit{true.} Otherwise, it is set to \textit{false.} After a truth value is assigned to the leaf node, the truth propagation function defined in \S \ref{truthvalue} is called to logically infer truth value properties for parent nodes. Lastly, the search algorithm explores the tree in a depth-first manner. At each \textit{d-OR }, \textit{w-OR } and \textit{AND } node, children nodes are visited in the order they are created. Once a child node is selected, its entire subtree will be processed by the algorithm before the other children are explored. Some simplifications made in the exploration are detailed in \S \ref{simplications} in the appendix.
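For the first type of leaf node, the satisfiability test amounts to evaluating a conjunction of disjuncts against the recorded execution times. The following is a minimal sketch only — the tuple encoding of conjuncts and the function names are our own illustration, not the paper's implementation:

```python
def conjunct_ok(conj, times):
    # times: dict mapping each timepoint to its execution time.
    # ('bound', vi, x, y) encodes vi in [x, y];
    # ('dist', vi, vj, x, y) encodes vi - vj in [x, y].
    if conj[0] == 'bound':
        _, vi, x, y = conj
        return x <= times[vi] <= y
    _, vi, vj, x, y = conj
    return x <= times[vi] - times[vj] <= y

def leaf_satisfied(constraints, times):
    # constraints: list of disjuncts, each a list of conjuncts joined
    # by OR; the leaf is 'true' iff every disjunct holds.
    return all(any(conjunct_ok(c, times) for c in disjunct)
               for disjunct in constraints)
```

The second type of leaf node cannot be checked this way, since the times of the remaining timepoints in $A'$ are free variables; this is why the resulting DTN is handed to a MILP solver instead.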
\section{Tree Search Characteristics} \label{tree-search-chars} \subsection{Reactive scheduling during waits} \label{instant-scheduling} Situations may arise where a controllable timepoint must be scheduled the instant an uncontrollable timepoint occurs in order to satisfy a constraint. We designate as a \textit{conjunct} a constraint relationship of the form $v_i - v_j \in [x,y]$ or $v_i \in [x,y]$, where $v_i, v_j$ are timepoints and $x, y \in \rm I\!R$. We refer to a constraint where several conjuncts are linked by $\lor$ operators as a \textit{disjunct}. If at any given DTNU node in the tree there is an activated uncontrollable timepoint $u$ with the potential to occur during the next wait and there is at least one unscheduled controllable timepoint $a$ such that a conjunct of the form $u - a\in [0,y], y \geq 0$ is present in the constraints, a reactive wait strategy is considered that schedules $a$ as soon as $u$ occurs. Let $\Phi = \{\phi_1, \phi_2, ..., \phi_s\} \subset A$ be the complete set of unscheduled controllable timepoints for which there are conjunct clauses $u - \phi_i \in [0,y]$. We denote as $R_1, R_2, ..., R_m$ all possible combinations of elements taken from $\Phi$, including the empty set. As depicted in Figure \ref{fig:ts-structure-fig}, we account for potential reactive wait strategies by using a \textit{w-OR } node. The child node $\textit{AND}_{R_i}$ of the \textit{w-OR } node resulting from the combination $R_i$ has a reactive wait strategy for which all controllable timepoints in $R_i$ will be immediately executed at the moment $u$ occurs during the wait, if it does. If $u$ does not occur, no controllable timepoint is reactively scheduled during the wait.
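The detection of $\Phi$ and the enumeration of the reactive wait strategies $R_1, ..., R_m$ can be sketched as follows. This is a minimal sketch under our own conventions: conjuncts are encoded as tuples \texttt{('dist', vi, vj, x, y)} for $v_i - v_j \in [x,y]$, and both function names are hypothetical.

```python
from itertools import combinations

def reactive_candidates(constraints, u, unscheduled):
    # Phi: unscheduled controllable timepoints a such that some conjunct
    # u - a in [0, y] with y >= 0 appears in the constraints
    # (constraints: list of disjuncts, each a list of conjunct tuples).
    phi = set()
    for disjunct in constraints:
        for conj in disjunct:
            if conj[0] == 'dist':
                _, vi, vj, x, y = conj
                if vi == u and vj in unscheduled and x == 0 and y >= 0:
                    phi.add(vj)
    return phi

def reactive_strategies(phi):
    # R_1, ..., R_m: every subset of Phi, including the empty set
    # (the strategy that executes nothing reactively).
    phi = sorted(phi)
    return [set(c) for r in range(len(phi) + 1)
            for c in combinations(phi, r)]
```

Each returned subset corresponds to one $\textit{AND}_{R_i}$ child of the \textit{w-OR } node.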
\subsection{Wait action} \label{wait} When a wait decision of duration $\Delta_t$ is taken at time $t$ for a given DTNU node, two different categories of uncontrollable timepoints are considered to account for all transitional possibilities: \begin{itemize} \item $ Z = \{\zeta_1, \zeta_2, ..., \zeta_l\}$ is the set of timepoints that could either happen during the wait, or afterwards, \emph{i.e. } the end of the activation time interval for each $\zeta_i$ is greater than $t + \Delta_t$. \item $ H = \{\eta_1, \eta_2, ..., \eta_m\}$ is the set of timepoints that are certain to happen during the wait, \emph{i.e. } the end of the activation time interval for each $\eta_i$ is less than or equal to $t + \Delta_t$. \end{itemize} There are $q = 2^l$ different possible combinations (empty set included) $V_1, V_2, ..., V_q$ of elements taken from $Z$. For each combination $V_i$, the set $\Lambda_i = H \cup V_i$ is created. The sets $\Lambda_1, \Lambda_2, ..., \Lambda_q$ cover all possible combinations of uncontrollable timepoints which can occur by $t+ \Delta_t$. In figure \ref{fig:ts-structure-fig}, for each \textit{AND } node, the combination $\Lambda_i$ leads to a DTNU sub-problem $DTNU_{O_i, P \cup \Lambda_i, t + \Delta_t}$ for which the uncontrollable timepoints in $\Lambda_i$ are considered to have occurred between $t$ and $t + \Delta_t$ in the schedule memory $S$. In addition, any potential controllable timepoint $\phi$ planned to be instantly scheduled in a reactive wait strategy $R_i$ in response to an uncontrollable timepoint $u$ in $\Lambda_i$ will also be considered to have been scheduled between $t$ and $t + \Delta_t$ in $S$. The only exception is when checking constraint satisfiability for the conjunct $u - \phi \in [0,y]$ which required the reactive scheduling, for which we assume $\phi$ executed at the same time as $u$; the conjunct is thus considered satisfied automatically.
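The construction of the outcome sets $\Lambda_i = H \cup V_i$ can be sketched as below (a minimal sketch; the function name \texttt{wait\_outcomes} is our own):

```python
from itertools import combinations

def wait_outcomes(Z, H):
    # Z: uncontrollable timepoints that may or may not occur during the
    # wait; H: those certain to occur. Each outcome is Lambda_i = H u V_i
    # where V_i ranges over all 2^l subsets of Z (empty set included).
    Z = sorted(Z)
    outcomes = []
    for r in range(len(Z) + 1):
        for V in combinations(Z, r):
            outcomes.append(set(H) | set(V))
    return outcomes
```

Each outcome yields one child DTNU of the corresponding \textit{AND } node, so an \textit{AND } node has at most $2^l$ children.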
\subsection{Wait Eligibility and Period} \label{waitperiod} The way time is discretized is fundamental and has direct implications for the search space explored and for the capability of the algorithm to find TDC strategies. Longer waits make the search space smaller, but carry the risk of missing key moments where a decision is needed. On the other hand, smaller waits can make the search space too large to explore. We explain when the wait action is eligible, and how the wait duration is computed. \paragraph{Eligibility} At least one of the following criteria has to be met for a \textit{WAIT } node to be added as child of a \textit{d-OR } node: \begin{itemize} \item There is at least one activated uncontrollable timepoint for the parent DTNU node. \item There is at least one conjunct of the form $v \in [x,y]$, where $v$ is a timepoint, in the constraints of the parent DTNU node. \end{itemize} These criteria ensure that the search tree will not develop branches below \textit{WAIT } nodes when waiting is not relevant, \emph{i.e. } when a controllable timepoint necessarily needs to be scheduled. They also prevent the tree search from getting stuck in infinite \textit{WAIT } loop cycles. \paragraph{Wait Period} We define the wait duration $\Delta_t$ at a given \textit{d-OR } node eligible for a wait dynamically, by examining the updated constraint list $C'$ of the parent DTNU and the activation time intervals $B$ of its activated uncontrollable timepoints. Let $t$ be the current time for this DTNU node. The wait duration is defined by comparing $t$ to elements in $C'$ and $B$ to look for a minimum positive value defined by the following three rules: \vspace{-0.5cm} \subparagraph{First rule} For each activated time interval $u \in [x,y] $ in $B$, we select $x - t$ or $y - t$, whichever is smaller and positive, and we keep the smallest value $\delta_1$ found over all activated time intervals.
\vspace{-0.5cm} \subparagraph{Second rule} For each conjunct $v \in [x,y] $ in $C'$, where $v$ is a timepoint, we select $x - t$ or $y - t$, whichever is smaller and positive, and we keep the smallest value $\delta_2$ found over all conjuncts. \vspace{-0.5cm} \subparagraph{Third rule} This rule is used to determine timepoints which need to be scheduled ahead of time by chaining constraints together. Intuitively, when a conjunct $v \in [x,y] $ is in $C'$, it means $v$ has to be executed when $t \in [x,y]$ to satisfy this conjunct. However, $v$ could be linked to other timepoints by constraints which require them to happen before $v$. These timepoints could in turn be linked to yet other timepoints in the same way, and so on. The purpose of the third rule is to chain backwards to identify potential timepoints which start this chain and the potential time intervals in which they need to be executed. The following mechanism is used: for each conjunct $v \in [x,y] $ in $C'$ found by the second rule, we apply a recursive backward chain function to both $(v, x)$ and $(v, y)$. We detail here how it is applied to $(v, x)$, the process being the same for $(v, y)$. Conjuncts of the form $v - v' \in [x', y'], x' \geq 0 $ in $C'$ are searched for. For each conjunct found, we add to a list two elements, $(v', x - x')$ and $(v', x - y')$. We also select $x - x' - t$ or $x - y' - t$, whichever is smaller and positive, as a potential minimum candidate. The backward chain function is called recursively on each element of the list, proceeding the same way. We keep the smallest candidate $\delta_3$. Figure \ref{dynamicwait} in the appendix illustrates an application of this process. We set $\Delta_t = \min(\delta_1, \delta_2, \delta_3)$ as the wait duration. This duration is stored inside the \textit{WAIT } node. \subsection{Truth Value Propagation} \label{truthvalue} In this section, we describe how truth attributes of nodes are related to each other.
The truth attribute of a tree node represents its TDC controllability, and the relationships shared between nodes make it possible to define sound strategies. When a leaf node is assigned a truth attribute $\beta$, the tree search is momentarily stopped and a propagator function \texttt{PropagateTruth()} is called. This function recursively propagates $\beta$ onto upper parent nodes. A parent node $\boldsymbol{\omega}$ is selected recursively and we distinguish the following cases: \begin{itemize} \item The parent $\boldsymbol{\omega}$ is a DTNU node or a \textit{WAIT } node: $\boldsymbol{\omega}$ is assigned $\beta$. \item The parent $\boldsymbol{\omega}$ is a \textit{d-OR } or \textit{w-OR } node: If $\beta = true$, then $\boldsymbol{\omega}$ is assigned \emph{true }. If $\beta = false$ and all children nodes of $\boldsymbol{\omega}$ have \emph{false } attributes, $\boldsymbol{\omega}$ is assigned \emph{false }. Otherwise, the propagation stops. \item The parent $\boldsymbol{\omega}$ is an \textit{AND } node: If $\beta = false$, then $\boldsymbol{\omega}$ is assigned \emph{false }. If $\beta = true$ and all children nodes of $\boldsymbol{\omega}$ have \emph{true } attributes, $\boldsymbol{\omega}$ is assigned \emph{true }. Otherwise, the propagation stops. \end{itemize} After the propagation algorithm finishes, the tree search algorithm resumes where it was temporarily stopped. If a truth attribute has reached the root node of the tree, the tree search algorithm will be swiftly ended due to the branch cuts implemented in \S \ref{searchoptimizations} in the appendix. A \emph{true } attribute reaching the root node of the tree means a TDC strategy has been found. A \emph{false } attribute means none could be found. The pseudocode for the \texttt{PropagateTruth()} function is given in Algorithm \ref{truthvaluealgo} in the appendix. 
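These propagation rules can be rendered compactly in Python. This is a sketch only, not the paper's pseudocode (Algorithm \ref{truthvaluealgo}): nodes are plain dictionaries with \texttt{kind}, \texttt{truth}, \texttt{parent} and \texttt{children} fields of our own devising.

```python
def propagate_truth(node, beta):
    # Assign truth attribute beta to `node`, then propagate it upward
    # following the rules for DTNU/WAIT, OR and AND parents.
    node['truth'] = beta
    parent = node['parent']
    if parent is None:
        return
    kind = parent['kind']
    if kind in ('DTNU', 'WAIT'):
        # Single-child nodes simply inherit the attribute.
        propagate_truth(parent, beta)
    elif kind in ('d-OR', 'w-OR'):
        if beta:  # one true child suffices for an OR node
            propagate_truth(parent, True)
        elif all(c['truth'] is False for c in parent['children']):
            propagate_truth(parent, False)
        # otherwise: stop, the OR node's outcome is still unknown
    elif kind == 'AND':
        if not beta:  # one false child dooms an AND node
            propagate_truth(parent, False)
        elif all(c['truth'] is True for c in parent['children']):
            propagate_truth(parent, True)
        # otherwise: stop, some children are still unknown
```

A \emph{true } attribute reaching the root thus certifies a TDC strategy, mirroring the description above.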
\subsection{Constraint Propagation} \label{tightbounds} Decisions taken in the tree define when controllable timepoints are executed and also have consequences for the execution time of uncontrollable timepoints. We explain here how these decisions are propagated into constraints, as well as the concept of `\textit{tight bound}'. Let $C'$ be the list of updated constraints for a DTNU node $\boldsymbol{\psi}$ whose parent node is $\boldsymbol{\omega}$. We distinguish two cases. Either $\boldsymbol{\omega}$ is a \textit{d-OR } node and $\boldsymbol{\psi}$ results from the execution of a controllable timepoint $a_i$, or $\boldsymbol{\omega}$ is an \textit{AND } node and $\boldsymbol{\psi}$ results from a wait of $\Delta_t$ time units. In the first case, let $t$ be the execution time of $a_i$. The updated list $C'$ is built from the constraints of the parent DTNU of $\boldsymbol{\psi}$ in the tree. If a conjunct contains $a_i$ and is of the form $a_i \in [x,y]$, this conjunct is replaced with \textit{true} if $t \in [x,y]$, \textit{false} otherwise. If the conjunct is of the form $v_j - a_i \in [x,y]$, we replace the conjunct with $v_j \in [t + x, t + y]$. The other possibility is that $\boldsymbol{\psi}$ results from a wait of $\Delta_t$ time at time $t$, with a reactive wait strategy $R_j$. In this case, the new time is $t + \Delta_t$ for $\boldsymbol{\psi}$. As a result of the wait, some uncontrollable timepoints $u_i \in \Lambda _i$ may occur, and some controllable timepoints $a_i \in R_j$ may be executed reactively during the wait. Let $v_i \in \Lambda_i \cup R_j$ be these timepoints occurring during the wait. The execution time of these timepoints is considered to be in [$t$, $t+\Delta_t$].
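A minimal sketch of the first case, the execution of a controllable timepoint $a_i$ at time $t$, is given below; the tuple encoding of conjuncts, \texttt{("in", v, (x, y))} for $v \in [x,y]$ and \texttt{("diff", v\_j, v\_k, (x, y))} for $v_j - v_k \in [x,y]$, is our own illustrative choice.

```python
# Sketch of constraint propagation when controllable timepoint a_i is
# executed at time t.  The tuple encoding of conjuncts is an assumption
# made for illustration only.

def substitute_execution(conjunct, a_i, t):
    """Rewrite one conjunct after a_i has been executed at time t."""
    if conjunct[0] == "in" and conjunct[1] == a_i:
        x, y = conjunct[2]
        return x <= t <= y                    # conjunct becomes true or false
    if conjunct[0] == "diff" and conjunct[2] == a_i:
        v_j, (x, y) = conjunct[1], conjunct[3]
        return ("in", v_j, (t + x, t + y))    # v_j - a_i in [x,y] -> v_j in [t+x, t+y]
    return conjunct                           # a_i not involved: unchanged
```

For example, executing $a_1$ at $t = 3$ turns $u_1 - a_1 \in [1,4]$ into $u_1 \in [4,7]$, and $a_1 \in [2,5]$ into \textit{true}.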
For uncontrollable timepoints $u_i' \in \Lambda_i' \subset \Lambda_i$ for which the activation time ends at $t+\Delta_{t_i}'< t+\Delta_t$, and potential controllable timepoints $a_i'$ instantly reacting to these uncontrollable timepoints, the execution time is further reduced and considered to be in $[t, t+\Delta_{t_i}']$. We define the concept of \textit{tight bound} for updating constraints: time intervals are restricted in order to account for all possible values $v_i$ can take between $t$ and $t+\Delta_t$. For all conjuncts $v_j - v_i \in [x,y]$, we replace the conjunct with $v_j \in [t + \Delta_t + x, t + y]$. Intuitively, this means that since $v_i$ can happen at the latest at $t + \Delta_t$, $v_j$ can not be allowed to happen before $t + \Delta_t + x$. Likewise, since $v_i$ can happen at the earliest at $t$, $v_j$ can not be allowed to happen after $t + y$. Finally, if $t+ \Delta_t + x > t + y$, the conjunct is replaced with \emph{false }. Also, the process can be applied recursively in the event that $v_j$ is also a timepoint that occurred during the wait, in which case the conjunct would be replaced by \emph{true } or \textit{false}. Regardless, any conjunct obtained of the form $a_j \in [x',y']$ is replaced with \emph{false } if $t + \Delta_t > y'$. Finally, if all conjuncts inside a disjunct are set to \emph{false } by this process, the constraint is violated and the DTNU is no longer satisfiable.
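The tight-bound rule itself can be sketched as follows; the tuple encoding of conjuncts, \texttt{("diff", v\_j, v\_i, (x, y))} for $v_j - v_i \in [x,y]$, is an illustrative assumption.

```python
# Sketch of the tight-bound rule: v_i occurred at some unknown moment of
# the wait interval [t, t + dt], so a conjunct v_j - v_i in [x, y] is
# narrowed to v_j in [t + dt + x, t + y]; an empty interval means false.
# The tuple encoding of conjuncts is an assumption made for illustration.

def tight_bound(conjunct, v_i, t, dt):
    """Narrow a difference conjunct after v_i occurred during a wait."""
    _, v_j, v_k, (x, y) = conjunct        # ("diff", v_j, v_i, (x, y))
    assert v_k == v_i, "rule applies to conjuncts of the form v_j - v_i"
    lo, hi = t + dt + x, t + y            # earliest / latest admissible time for v_j
    if lo > hi:
        return False                      # empty interval: conjunct is violated
    return ("in", v_j, (lo, hi))
```

For example, with $t = 0$ and $\Delta_t = 2$, the conjunct $v_j - v_i \in [1, 10]$ becomes $v_j \in [3, 10]$, while $v_j - v_i \in [5, 6]$ is replaced with \textit{false} since $7 > 6$.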
\section{Introduction} The Anti-de Sitter spacetime is a maximally symmetric spacetime with constant negative curvature and the unique strictly stationary solution of Einstein's equations with a negative cosmological constant \cite{Boucher:1983cv}. In theoretical physics, the Anti-de Sitter spacetime plays a central role because of the $AdS$/CFT correspondence \cite{Maldacena:1997re}. The timelike conformal infinity makes it fail to be globally hyperbolic, and the negative cosmological constant realizes a confined system. As a result, non-trivial phenomena arise, e.g., turbulent instabilities, superradiant instabilities, and holographic superconductors \cite{Cardoso:2004hs,Ishibashi:2004wx,Bizon:2011gg,Bizon:2017yrh,Hartnoll:2008kx}. These phenomena motivate us to study the dynamics of fields in (asymptotically) Anti-de Sitter spacetimes. The Anti-de Sitter spacetime has also been studied from the point of view of the near-horizon geometry~\cite{Kunduri:2013gce}: two-dimensional Anti-de Sitter spacetime ($AdS_2$) structures appear in the vicinity of extremal black hole horizons. Thus, the study of $AdS_2$ is expected to give insight into fundamental properties near the horizons of extremal black holes. Aretakis has shown that the (higher-order) derivatives of test massive scalar fields blow up at late times along the event horizon of four-dimensional extremal Reissner-Nordstr\"om black holes \cite{Aretakis:2011ha,Aretakis:2011hc}, a phenomenon called the Aretakis instability. In Refs. \cite{Aretakis:2012ei, Lucietti:2012xr,Lucietti:2012sf,Murata:2012ct,Gralla:2019isj}, it has also been shown that the same phenomenon occurs in other extremal black hole spacetimes and for other fields. Many aspects of the Aretakis instability have been studied in Refs.
\cite{Bizon:2012we,Aretakis:2012bm,Aretakis:2013dpa,Murata:2013daa,Angelopoulos:2016wcv,Zimmerman:2016qtn,Angelopoulos:2018yvt,Godazgar:2017igz,Gralla:2017lto,Gralla:2018xzo,Angelopoulos:2018uwb,Bhattacharjee:2018pqb,Cvetic:2018gss,Angelopoulos:2019gjn,Burko:2020wzq,Hadar:2017ven,Hadar:2018izi}. These results suggest that the Aretakis instability is a robust phenomenon of extremal black holes. Thus, it is interesting to study the Aretakis instability from the point of view of the near-horizon geometry~\cite{Lucietti:2012xr,Zimmerman:2016qtn,Godazgar:2017igz,Gralla:2017lto,Gralla:2018xzo,Hadar:2017ven,Hadar:2018izi}. The Aretakis instability of massive scalar fields in $AdS_2$ has already been discussed \cite{Lucietti:2012xr,Zimmerman:2016qtn,Godazgar:2017igz,Gralla:2017lto,Gralla:2018xzo,Hadar:2017ven,Hadar:2018izi}. It has been argued that the higher-order radial derivatives of the scalar field show polynomial growth on the future Poincar\'e horizon. In these studies (in fact, also in Aretakis's original study), the divergent behavior of the higher-order derivatives has been shown in specific coordinate systems. Thus, it is not trivial whether this divergent behavior is just a coordinate effect or not. In the case of the extremal Reissner-Nordstr\"om black holes, there is a unique timelike Killing vector $V$ which is the generator of the event horizon. In the Eddington-Finkelstein coordinates $(v,r)$ where the timelike Killing vector~$V$ is a coordinate basis, the radial derivative operator $\partial_r$ satisfies ${\cal L}_{V} \partial_r = 0$. Hence, the growth of some components of a tensor in the Eddington-Finkelstein coordinates is not a coordinate effect. Because the Aretakis instability, i.e., the growth of $\partial_r^{n} \Phi$ with an integer $n$, implies that $\nabla_r \nabla_r \cdots \nabla_r \Phi = \partial_r^{n} \Phi + {\rm lower~derivatives}$ is divergent, it is not a coordinate effect.
However, in the case of $AdS_2$, there are many possible timelike Killing vectors, and there is no unique way to choose one of them. Actually, if we choose a coordinate system where one of the coordinate bases is the global timelike Killing vector, we can show that the higher-order derivatives do not blow up. Thus, one may think that the Aretakis instability in $AdS_2$ is due to the choice of the coordinate systems~\cite{Lucietti:2012xr, Hadar:2017ven}.\footnote{In Ref.~\cite{Hadar:2017ven}, there is an argument that the Aretakis instability in $AdS_2 \times S^2$ is not a coordinate effect if we consider $AdS_2 \times S^2$ as a near horizon geometry of extremal black holes. This is because the $AdS$ structure in the near horizon geometry appears in the Poincar\'e chart and the generator of the Poincar\'e horizon can be regarded as the horizon generator of the original black hole spacetime.} In this paper, to make this point clear, we revisit the Aretakis constants and instability in $AdS_2$. We identify the geometrical meaning of the Aretakis instability in the parallelly propagated (parallel-transported) null geodesic frame on the horizon, i.e., some components of the higher-order covariant derivatives of the field in the parallelly propagated frame blow up at late times. In general relativity, parallelly propagated frames are used for studying the singular behavior of tensors in a coordinate-independent way. For example, if the components of the Riemann tensor in the parallelly propagated frame are divergent at some point, we regard the point as a curvature singularity even if all scalar quantities constructed from the Riemann tensor, e.g., the Ricci scalar or the Kretschmann invariant, are finite~\cite{Hawking:1973uf}. Thus, our result implies the divergent behavior of the covariant derivatives of the fields at late times.
In the study of the Aretakis instability, the conserved quantities on the horizon, called the Aretakis constants, make the analysis easier~\cite{Aretakis:2011ha,Aretakis:2011hc,Aretakis:2012ei,Lucietti:2012xr,Lucietti:2012sf,Murata:2012ct,Gralla:2019isj}. In this paper, we also show that the Aretakis constants in $AdS_2$ become some components of the higher-order covariant derivatives of the field in the parallelly propagated frame. Because $AdS_2$ is maximally symmetric, all null hypersurfaces have the same geometrical properties. If we prepare the parallelly propagated null geodesic frame along any null hypersurface, the above discussion holds not only on the future Poincar\'e horizon but also on any null hypersurface. This implies that the Aretakis instability is the result of singular behaviors of the higher-order covariant derivatives of the fields on the whole $AdS$ infinity, rather than a blow-up on a specific null hypersurface. Also, by focusing on the maximal symmetry of $AdS_2$, we can construct scalar quantities that are constant not only on the future Poincar\'e horizon but also on any null hypersurface, and reduce to the Aretakis constants on the future Poincar\'e horizon. In this paper, we call these scalar quantities the generalized Aretakis constants. In Ref. \cite{Cardoso:2017qmj}, it has been shown that the ladder operators constructed from the spacetime conformal symmetry of $AdS_2$ lead to conserved quantities on any null hypersurface, and it has been checked that they coincide with the generalized Aretakis constants for special values of the mass squared. In this paper, we explicitly show the relation with the generalized Aretakis constants for general cases. We also discuss how the generalized Aretakis constants and instability in $AdS_2$ are related to conformal Killing tensors. This paper is organized as follows. In section \ref{sec:2}, we briefly review the Aretakis constants and instability in $AdS_2$ based on Ref. \cite{Lucietti:2012xr}.
In section \ref{sec:3}, we introduce the parallelly propagated null geodesics frame, and discuss the Aretakis constants and instability in that frame. We also generalize to the case for any null hypersurfaces by using parallelly propagated frames on them. In section \ref{sec:4}, we discuss a relation between the generalization of the Aretakis constants and the spacetime conformal symmetry in $AdS_2$. In the final section, we summarize this paper. In Appendix \ref{appendix:massladder}, we review the mass ladder operators in $AdS_2$~\cite{Cardoso:2017qmj,Cardoso:2017egd}. Appendix \ref{appendix:relbtwnAQinAdS2} gives the proof of proposition.3 introduced in Sec. \ref{subsec:4-B}. \section{The Aretakis constants and instability in $AdS_2$} \label{sec:2} We briefly review the Aretakis constants and instability in $AdS_2$ \cite{Aretakis:2011ha,Aretakis:2011hc,Aretakis:2012ei,Lucietti:2012xr,Zimmerman:2016qtn,Godazgar:2017igz,Gralla:2017lto,Gralla:2018xzo,Hadar:2017ven,Hadar:2018izi}. In the ingoing Eddington-Finkelstein coordinates $(v,r)$, $AdS_2$ is described by \begin{equation} \label{ads2} ds^2=-r^2dv^2+2dvdr, \end{equation} where the future Poincar\'e horizon is located at $r=0$. We consider massive scalar fields $\Phi(v,r)$ in $AdS_2$. The fields obey the massive Klein-Gordon equation \begin{equation} \label{eom} 2\partial_v\partial_r\Phi+\partial_r\left(r^2\partial_r\Phi\right)-m^2\Phi=0. \end{equation} For mass squared $m^2=\ell(\ell+1)$ $(\ell=0,1,2,\cdots)$, acting the $\ell$-th order derivative operator $\partial_r^{\ell}$ on Eq. \eqref{eom} and evaluating it at $r=0$ show \begin{equation} \label{con1} \left.\partial_v\partial_{r}^{\ell+1}\Phi\right|_{r=0}=0. \end{equation} This shows that $\mathcal{H}_\ell$ defined by \begin{equation} \label{arec} \mathcal{H}_\ell:=\left.\partial_{r}^{\ell+1}\Phi\right|_{r=0}, \end{equation} are independent of $v$. 
Hence, $\mathcal{H}_\ell$ are conserved quantities along the future Poincar\'e horizon, and are therefore called \textit{the Aretakis constants} in $AdS_2$. For other values of the mass squared, such conserved quantities on the future Poincar\'e horizon cannot be found. Differentiating Eq.~\eqref{eom} $(\ell+1)$ times with respect to $r$, we obtain \begin{equation} \left.\partial_v\partial_r^{\ell+2}\Phi\right|_{r=0}=-(\ell+1)\mathcal{H}_\ell. \end{equation} This implies \begin{equation} \label{arecinstability} \partial_r^{\ell+2}\Phi|_{r=0}=-(\ell+1)\mathcal{H}_\ell v+{\rm const.} \end{equation} We see that the $(\ell+2)$-th order derivative of the field on the future Poincar\'e horizon will blow up at late times if $\mathcal{H}_\ell\neq0$. This divergent behavior is called \textit{the Aretakis instability} in $AdS_2$. We note that the $(\ell+3)$-th and higher-order derivatives are polynomially divergent at late times. For general mass squared cases with $m^2 \ge m_{\rm BF}^{2}=-1/4$, where $m_{\rm BF}^{2}$ is the Breitenlohner-Freedman bound~\cite{Breitenlohner:1982jf, Breitenlohner:1982bm} in $AdS_2$, Ref. \cite{Lucietti:2012xr} has shown that the late-time behavior of the $n$-th derivatives of the fields with respect to $r$ at the future Poincar\'e horizon $r=0$ becomes\footnote{Note that we focus on the normalizable modes.} \begin{align} \label{divergentbehavior} \partial_r^{n}\Phi\big|_{r =0} \sim v^{n - \Delta_m}, \end{align} where $n$ is a non-negative integer and \begin{equation} \label{Deltam} \Delta_m :=\frac{1}{2} + \sqrt{m^2 + \frac{1}{4}}. \end{equation} Hence, using the notation \begin{equation} \label{nm} n_{m} := \lfloor{\Delta_{m}}\rfloor + 1, \end{equation} where $\lfloor{\Delta_{m}}\rfloor$ denotes the integer part of $\Delta_{m}$, the $n_{m}$-th order derivative of the field at $r = 0$ will blow up at late times. This is also called the Aretakis instability in $AdS_2$. We note that the $(n_m+1)$-th and higher-order derivatives are also unbounded.
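Both relations above can be verified symbolically. The following sympy snippet (an illustrative check of ours, not part of the original analysis) applies $\partial_r^{\ell}$ and $\partial_r^{\ell+1}$ to Eq. \eqref{eom} at $r=0$ for the first few $\ell$:

```python
# Symbolic check: for m^2 = l(l+1), acting with d_r^l on the field
# equation 2 d_v d_r Phi + d_r(r^2 d_r Phi) - m^2 Phi = 0 at r = 0
# gives d_v d_r^{l+1} Phi = 0 (conservation of H_l), while d_r^{l+1}
# gives d_v d_r^{l+2} Phi = -(l+1) H_l (the linear growth in v).
import sympy as sp

v, r = sp.symbols("v r")
Phi = sp.Function("Phi")(v, r)

for l in range(4):
    eom = 2 * Phi.diff(v, r) + (r**2 * Phi.diff(r)).diff(r) - l * (l + 1) * Phi
    # d_r^l (eom) at r = 0  ->  2 d_v d_r^{l+1} Phi = 0
    c1 = eom.diff(r, l).subs(r, 0) - 2 * Phi.diff(v).diff(r, l + 1).subs(r, 0)
    # d_r^{l+1} (eom) at r = 0 -> 2 d_v d_r^{l+2} Phi + 2(l+1) d_r^{l+1} Phi = 0
    c2 = (eom.diff(r, l + 1).subs(r, 0)
          - 2 * Phi.diff(v).diff(r, l + 2).subs(r, 0)
          - 2 * (l + 1) * Phi.diff(r, l + 1).subs(r, 0))
    assert sp.simplify(c1) == 0 and sp.simplify(c2) == 0
```

The only input is the Leibniz rule: at $r=0$, $\partial_r^{\ell}\partial_r(r^2\partial_r\Phi)$ reduces to $\ell(\ell+1)\partial_r^{\ell}\Phi$, which cancels against the mass term precisely when $m^2=\ell(\ell+1)$.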
\section{The Aretakis constants and instability in the parallelly propagated null geodesic frame} \label{sec:3} In this section, we discuss the geometrical meaning of the Aretakis constants and instability in the parallelly propagated null geodesic frame. We shall show that some components of the higher-order covariant derivatives of the field in the parallelly propagated frame are constant or unbounded, and that they correspond to the Aretakis constants and instability, respectively. \subsection{On the future Poincar\'e horizon} \label{sec:Poincarehorizon} We first discuss the late-time divergent behavior in Eq. \eqref{divergentbehavior}. We introduce vector fields on the future Poincar\'e horizon $r=0$, \begin{align} e_{(0)}^\mu \partial_\mu = \partial_v,~e_{(1)}^\mu \partial_\mu = -\partial_r, \end{align} where these satisfy \begin{equation} \begin{split} \label{basisrelation1AdS} & e_{(0)}^\mu \nabla_\mu e_{(0)}^\nu = 0,~ e_{(0)}^\mu \nabla_\mu e_{(1)}^\nu = 0, \\ & e_{(0)}^\mu e_{(0)\mu} = 0,~ e_{(1)}^\mu e_{(1)\mu} = 0,~ e_{(0)}^\mu e_{(1)\mu} = -1, \end{split} \end{equation} at $r=0$. Hence, $e_{(1)}^\mu$ is parallelly transported along the null geodesic $e_{(0)}^\mu$ on the future Poincar\'e horizon. The frame formed by $(e_{(0)}^\mu,e_{(1)}^\mu)$ is called \textit{the parallelly propagated null geodesic frame} on the future Poincar\'e horizon. For the massive scalar $\Phi(v,r)$ satisfying Eq. \eqref{eom} with general mass squared and a positive integer $n$, the following relation holds: \begin{equation} \begin{split} \label{ppdivergent} &\left.\left(-1\right)^{n}e_{(1)}^{\mu_1}e_{(1)}^{\mu_2} \cdots e_{(1)}^{\mu_{n}} \nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{n}} \Phi\right|_{r=0} = \left.\partial_{r}^{n} \Phi\right|_{r=0}. \end{split} \end{equation} Using the notation $n_m$ defined in Eq.
\eqref{nm}, the divergent behavior of $\partial_{r}^{n_m} \Phi$ at $r = 0$ implies that the $n_m$-th order covariant derivative is also divergent in the parallelly propagated null geodesic frame. We note that for $n\le n_m$, the components of $\nabla_{\mu_1}\cdots\nabla_{\mu_{n}} \Phi$ in the parallelly propagated null geodesic frame are bounded except for the $e_{(1)}^{\mu_1}e_{(1)}^{\mu_2} \cdots e_{(1)}^{\mu_{n}}$ component with $n=n_m$ from Eq.~\eqref{divergentbehavior}.\footnote{For positive integers $n$ and $q$, the following relation holds: \begin{equation} \begin{split} \label{e0components} &\left(-1\right)^{n}\left. e_{(0)}^{\mu_1} \cdots e_{(0)}^{\mu_{q}} e_{(1)}^{\nu_{1}} \cdots e_{(1)}^{\nu_{n}} \nabla_{\mu_1}\cdots \nabla_{\mu_{q}} \nabla_{\nu_{1}} \cdots \nabla_{\nu_{n}} \Phi\right|_{r = 0}= \left.\partial_v^{q} \partial_{r}^{n} \Phi\right|_{r = 0}. \end{split} \end{equation} Other components of the covariant derivatives in the parallelly propagated frame can be written by Eq.~\eqref{e0components} and the lower order derivatives using the commutation relation for the covariant derivatives.} For the mass squared $m^2=\ell(\ell+1)$ $(\ell=0,1,2,\cdots)$, we obtain \begin{equation} \begin{split} \label{AretakisinPP} \left.\left(-1\right)^{\ell+1}e_{(1)}^{\mu_1}e_{(1)}^{\mu_2} \cdots e_{(1)}^{\mu_{\ell+1}} \nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{\ell+1}} \Phi\right|_{r=0}=\left.\partial_{r}^{\ell+1} \Phi\right|_{r = 0}. \end{split} \end{equation} We find that $e_{(1)}^{\mu_1}e_{(1)}^{\mu_2} \cdots e_{(1)}^{\mu_{\ell+1}}$ component of the $(\ell+1)$-th order covariant derivative of the field on the future Poincar\'e horizon is the Aretakis constant $\mathcal{H}_\ell$ in Eq. \eqref{arec}. \begin{figure} \centering \includegraphics[width=7cm]{diagramAdS2.eps} \caption{The Penrose diagram for $AdS_2$. The left and right $AdS$ boundaries are located at $U=V+\pi$ and $U=V$, respectively. 
The future and past Poincar\'e horizons are located at $U=\pi/2$ and $V=-\pi/2$, respectively.}\label{PenrosediagramAdS2} \end{figure} \subsection{On any null hypersurfaces} \label{sec:nullhypersurfaces} Because $AdS_2$ is maximally symmetric, the discussion in the previous subsection should also hold for any other null hypersurface. This implies that the Aretakis instability is the result of singular behaviors of the higher-order covariant derivatives of the fields on the whole $AdS$ infinity, rather than a blow-up on a specific null hypersurface. In this subsection, we show this explicitly for the $m^2 = \ell (\ell + 1)$ $(\ell=0,1,2,\cdots)$ cases. \subsubsection{The massive scalar fields in the global chart $(U,V)$} For later convenience, we discuss the massive scalar fields in the global chart \cite{Cardoso:2017qmj}. In the double null global chart $(U,V)$ defined by \begin{equation} \label{UVads2} \tan U=v+\frac{2}{r},~\tan V=v, \end{equation} the line element in Eq. \eqref{ads2}, which describes $AdS_2$, is rewritten as \begin{equation} ds^2=-\frac{4}{H(U,V)}dUdV, \end{equation} where \begin{equation} \label{HUV} H(U,V)=\sin^2(U - V). \end{equation} The coordinate range is $-\infty < U < \infty, -\infty < V < \infty$ with $0 < U - V < \pi$, and the $AdS$ boundary is located at $V = U$ or $U = V + \pi$ where $H(U,V)=0$. The future and past Poincar\'e horizons are $U=\pi/2$ and $V=-\pi/2$, respectively. The Penrose diagram of $AdS_2$ is shown in FIG.~\ref{PenrosediagramAdS2}. In the present coordinates, the massive Klein-Gordon equation \eqref{eom} is rewritten as \begin{equation} \label{eomUV} \left[-H(U,V)\partial_V\partial_U-m^2\right]\Phi(U,V)=0, \end{equation} where $H(U,V)$ is given by Eq. \eqref{HUV}. We notice that for the massless scalar, this equation shows $\partial_V\partial_U\Phi=0$, and hence $\partial_U\Phi$ is constant along any null hypersurface $U={\rm const}$.
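As a consistency check, the coordinate transformation \eqref{UVads2} from the ingoing chart \eqref{ads2} to the double null global chart can be verified directly, e.g. with sympy (this check is ours, for illustration):

```python
# Check that tan U = v + 2/r, tan V = v maps ds^2 = -r^2 dv^2 + 2 dv dr
# to ds^2 = -(4/H) dU dV with H = sin^2(U - V).
import sympy as sp

U, V = sp.symbols("U V")
v = sp.tan(V)
r = 2 / (sp.tan(U) - sp.tan(V))     # inverted from  tan U = v + 2/r,  tan V = v

# pull back ds^2 = -r^2 dv^2 + 2 dv dr to the (U, V) chart
dv = [sp.diff(v, U), sp.diff(v, V)]               # (dv/dU, dv/dV)
dr = [sp.diff(r, U), sp.diff(r, V)]
g = [[-r**2 * dv[a] * dv[b] + dv[a] * dr[b] + dv[b] * dr[a]
      for b in range(2)] for a in range(2)]       # metric components g_ab

H = sp.sin(U - V) ** 2
assert sp.simplify(g[0][0]) == 0                  # g_UU = 0
assert sp.simplify(g[1][1]) == 0                  # g_VV = 0
assert sp.simplify(g[0][1] + 2 / H) == 0          # g_UV = -2/H, so ds^2 = -(4/H) dU dV
```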
At the future Poincar\'e horizon $U=\pi/2$, it coincides with the Aretakis constant $\mathcal{H}_0$ in Eq.~\eqref{arec}. Thus, $\partial_U\Phi$ is the generalization of the Aretakis constant $\mathcal{H}_0$. According to Ref. \cite{Cardoso:2017qmj}, for the mass squared $m^2=\ell(\ell+1)$, there exists the generalization of the Aretakis constants $\mathcal{H}_\ell$ in Eq. \eqref{arec} for general $\ell$,\footnote{If we exchange $U$ and $V$ in Eq. \eqref{garec}, we can construct conserved quantities on $V={\rm const.}$ surfaces.} \begin{equation} \label{garec} \mathcal{A}_\ell:=\left(\frac{\cos^2V}{H(U,V)}\right)^{\ell+1}\left[\frac{H(U,V)}{\cos^2V}\partial_U\right]^{\ell+1}\Phi, \end{equation} and they satisfy \begin{equation} \partial_v\mathcal{A}_\ell=0. \end{equation} We call $\mathcal{A}_\ell$ \textit{the generalized Aretakis constants}. It is easy to check that $\mathcal{A}_\ell=\mathcal{H}_\ell$ at the future Poincar\'e horizon $U=\pi/2$. For other mass squared cases, conserved quantities on a null hypersurface cannot be found. \subsubsection{The parallelly propagated null geodesic frame in $AdS_2$} Now, we introduce null vector fields \begin{equation} \begin{split} \label{basisads} \mathfrak{e}_{(0)}^\mu \partial_\mu = \frac{f(U)H(U,V)}{4}\partial_{V},~ \mathfrak{e}_{(1)}^\mu \partial_\mu = \frac{2}{f(U)}\partial_{U}, \end{split} \end{equation} where $f(U)$ is an arbitrary finite function. These satisfy the relations \begin{equation} \begin{split} \label{relas1AdS} & \mathfrak{e}_{(0)}^\mu \nabla_\mu \mathfrak{e}_{(0)}^\nu = 0,~ \mathfrak{e}_{(0)}^\mu \nabla_\mu \mathfrak{e}_{(1)}^\nu = 0, \\ & \mathfrak{e}_{(0)}^\mu \mathfrak{e}_{(0)\mu} = 0,~ \mathfrak{e}_{(1)}^\mu\mathfrak{e}_{(1)\mu} = 0,~ \mathfrak{e}_{(0)}^\mu \mathfrak{e}_{(1)\mu} = -1. 
\end{split} \end{equation} Therefore, $(\mathfrak{e}_{(0)}^\mu, \mathfrak{e}_{(1)}^\mu)$ form the parallelly propagated null geodesic frame for each null hypersurface $U={\rm const.}$ We should note that $\mathfrak{e}^{\mu}_{(0)}$ and $\mathfrak{e}^{\mu}_{(1)}$ are vector fields defined in the whole $AdS_2$ spacetime, while $e_{(0)}^\mu$ and $ e_{(1)}^\mu$ in Eq.~\eqref{basisrelation1AdS} are defined only on $r = 0$ surface. Hereafter, we set $f(U) = 2$. We note that this specific choice of $f(U)$ does not change the conclusion in the following discussions. \subsubsection{Massless scalar cases} \label{sec:massless} General solutions of the massless Klein-Gordon equation \eqref{eomUV} are \begin{equation} \Phi(U,V) =F(U) + G(V). \end{equation} Then, the generalized Aretakis constant in Eq. \eqref{garec} is $\mathcal{A}_0=\partial_UF(U)$. Now, we can see \begin{align} \mathfrak{e}^{\mu}_{(1)} \nabla_\mu \Phi &= \mathcal{A}_0,\label{aretakisuvcoordads2} \\ \label{e1e1chi} \mathfrak{e}^{\mu}_{(1)}\mathfrak{e}^{\nu}_{(1)} \nabla_\mu\nabla_\nu \Phi &= \mathcal{A}_0\frac{\partial_U H(U,V)}{H(U,V)}+ \partial_U^{2}F(U). \end{align} Eq. \eqref{aretakisuvcoordads2} shows that the geometrical meaning of ${\mathcal A}_0$ at each null hypersurface $U = {\rm const.}$ is the same as the Aretakis constant ${\cal H}_0$ at $r = 0$, i.e., a component of the covariant derivative in the parallelly propagated frame is constant at each null hypersurface. Because $\partial_U H/H=2/\tan(U-V)$, $\mathfrak{e}^{\mu}_{(1)}\mathfrak{e}^{\nu}_{(1)} \nabla_\mu\nabla_\nu \Phi$ in Eq. (\ref{e1e1chi}) is divergent linearly in $(U-V)^{-1}$ at the $AdS$ boundary if $\mathcal{A}_0 \neq 0$.\footnote{If we consider the ``normalizable" mode $\Phi \sim (U-V)^{\Delta_m}$ with $\Delta_m = 1$, where $\Delta_m$ is defined by Eq. \eqref{Deltam}, for the massless case at the $AdS$ boundary $U = V$, then the function $G(V)$ becomes $G(V) = - F(V)$. 
We note that the Aretakis instability occurs even in that case.} Near the $AdS$ boundary, we further show \begin{equation} \label{e1e1Kchi} \mathfrak{e}^{\mu_{1}}_{(1)}\mathfrak{e}^{\mu_{2}}_{(1)}\cdots\mathfrak{e}^{\mu_{n+2}}_{(1)} \nabla_{\mu_{1}}\nabla_{\mu_{2}}\cdots\nabla_{\mu_{n+2}} \Phi=\mathcal{A}_0\mathcal{O}\left((U-V)^{-n-1}\right), \end{equation} where $n\ge1$. Eqs. \eqref{e1e1chi} and \eqref{e1e1Kchi} show that the second and higher-order covariant derivatives of the field \textit{on any null hypersurfaces} have singular behaviors at the $AdS$ boundary if $\mathcal{A}_0\neq0$.\footnote{If $\mathcal{A}_0=0$ and $\partial_U^2F(U)\neq0$, the third-order derivative $\mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)}\mathfrak{e}^{\mu_3}_{(1)} \nabla_{\mu_1}\nabla_{\mu_2}\nabla_{\mu_3} \Phi $ is divergent (see proposition.1 in Sec. \ref{sec:proposition}). } We comment that other components are bounded, \begin{align} \label{e0chi} \mathfrak{e}^{\mu}_{(0)} \nabla_\mu \Phi &= \frac{H(U,V)}{2}\partial_VG(V) ,\\ \mathfrak{e}^{\mu}_{(0)}\mathfrak{e}^{\nu}_{(0)} \nabla_\mu\nabla_\nu \Phi &= \frac{ H(U,V)}{4}\partial_V\left(H(U,V)\partial_VG(V)\right),\\ \mathfrak{e}^{\mu}_{(0)}\mathfrak{e}^{\nu}_{(1)} \nabla_\mu\nabla_\nu \Phi &= \mathfrak{e}^{\mu}_{(1)}\mathfrak{e}^{\nu}_{(0)} \nabla_\mu\nabla_\nu \Phi= 0. \label{e0e1chi} \end{align} \subsubsection{Massive scalar cases with $m^2 = \ell(\ell +1)$} For the cases $m^2 = \ell(\ell +1)$ $(\ell=1,2,\cdots)$, we can also explicitly see the divergent behavior at the $AdS$ boundary. 
For the $\ell = 1$ case, the general normalizable Klein-Gordon fields, which are derived in Appendix \ref{appnedix:massladderUV}, take the form \begin{equation} \begin{split} \Phi(U,V) =& \frac{2 \cos U \cos V}{\sin(U-V)}\left(F(U) - F(V)\right)- \cos^2 U \partial_UF(U) -\cos^2 V \partial_VF(V), \label{l1kgfield} \end{split} \end{equation} with an arbitrary function $F$.\footnote{ Note that $\Phi\left(U,V\right)$ satisfies the normalizable boundary condition, i.e., $\Phi \sim (U-V)^{\Delta_m}$ with $\Delta_m = \ell+ 1$, at $V = U$. If we also impose this condition at $U = V + \pi$, $F$ should satisfy $F(U) = F(U + \pi)$. } We obtain \begin{equation} \begin{split} \label{e1e1chi1} &\mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)} \nabla_{\mu_1}\nabla_{\mu_2} \Phi =\mathcal{A}_1, \end{split} \end{equation} where $\mathcal{A}_1=2(-1 + 2 \cos(2U))\partial_UF(U)+\cos U \big(6\sin U\partial_U^{2}F(U) - \cos U\partial_U^{3}F(U) \big)$ is the generalized Aretakis constant \eqref{garec} and \begin{equation} \begin{split} \label{e1e1e1chi} \mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)} \mathfrak{e}^{\mu_3}_{(1)} \nabla_{\mu_1}\nabla_{\mu_2} \nabla_{\mu_3} \Phi=2\mathcal{A}_1\frac{\partial_U H(U,V)}{H(U,V)}+\partial_U\mathcal{A}_1. \end{split} \end{equation} Because $\partial_U H/H=2/\tan(U-V)$, Eq. \eqref{e1e1e1chi} shows that the third-order covariant derivative of the field on any null hypersurface has the linear growth of $(U-V)^{-1}$ at the $AdS$ boundary if $\mathcal{A}_1\neq0$. We comment that other components are bounded. For $\ell \ge 2$, acting with the mass ladder operators, which are given by Eq. \eqref{massladderinUV}, we can easily obtain the explicit form of the general normalizable Klein-Gordon field, which is Eq.~\eqref{massrasingPhi} with $G(V)=-F(V)$.
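The $\ell = 1$ expressions can be checked directly. The following sympy snippet (ours, with the arbitrary illustrative choice $F(x)=\sin x$) verifies numerically that Eq. \eqref{l1kgfield} solves Eq. \eqref{eomUV} with $m^2=2$, and that $\mathcal{A}_1$ built from Eq. \eqref{garec} is indeed independent of $V$:

```python
# Numerical spot-check of the l = 1 solution and of the V-independence
# of the generalized Aretakis constant A_1 from Eq. (garec).
# F(x) = sin x is an arbitrary illustrative choice of the free function.
import sympy as sp

U, V = sp.symbols("U V")
H = sp.sin(U - V) ** 2
F, Fv = sp.sin(U), sp.sin(V)

# Eq. (l1kgfield): normalizable solution for m^2 = l(l+1) = 2
Phi = (2 * sp.cos(U) * sp.cos(V) / sp.sin(U - V) * (F - Fv)
       - sp.cos(U) ** 2 * sp.diff(F, U)
       - sp.cos(V) ** 2 * sp.diff(Fv, V))

kg = -H * sp.diff(Phi, V, U) - 2 * Phi          # Eq. (eomUV) with m^2 = 2

# Eq. (garec) with l = 1:  A_1 = (cos^2 V / H)^2 [(H / cos^2 V) d_U]^2 Phi
D = lambda f: (H / sp.cos(V) ** 2) * sp.diff(f, U)
A1 = (sp.cos(V) ** 2 / H) ** 2 * D(D(Phi))

# sample points respecting 0 < U - V < pi
for u0, v0 in [(0.8, -0.4), (1.2, 0.3), (0.5, -1.5)]:
    assert abs(sp.N(kg.subs({U: u0, V: v0}))) < 1e-9

vals = [sp.N(A1.subs({U: 1.0, V: v0})) for v0 in (-0.3, -1.0, 0.4)]
assert all(abs(x - vals[0]) < 1e-9 for x in vals)
```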
We can show that the $(\ell+1)$-th and $(\ell+2)$-th order covariant derivatives are, respectively, constant along each null hypersurface and divergent at the $AdS$ boundary, \begin{equation} \begin{split} \label{e1e1garec} \mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)} \cdots\mathfrak{e}^{\mu_{\ell+1}}_{(1)}\nabla_{\mu_1}\nabla_{\mu_2} \cdots\nabla_{\mu_{\ell+1}} \Phi=\mathcal{A}_\ell, \end{split} \end{equation} where $\mathcal{A}_\ell$ is the generalized Aretakis constant in Eq. \eqref{garec} and \begin{equation} \begin{split} \label{e1e1e1divergent} \mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)} \cdots\mathfrak{e}^{\mu_{\ell+2}}_{(1)}\nabla_{\mu_1}\nabla_{\mu_2} \cdots\nabla_{\mu_{\ell+2}} \Phi=(\ell+1)\mathcal{A}_\ell\frac{\partial_U H(U,V)}{H(U,V)}+\partial_U\mathcal{A}_\ell. \end{split} \end{equation} \subsection{Relation between the conserved quantities on the null hypersurface and divergent behavior} \label{sec:proposition} We have observed that, for a solution of the massive Klein-Gordon equation \eqref{eomUV} with the mass squared $m^2 = \ell (\ell +1)$ $(\ell=0,1,2,\cdots)$ in $AdS_2$, in the parallelly propagated null geodesic frame, the $(\ell+1)$-th covariant derivative of the field gives a constant along each null hypersurface and the $(\ell+2)$-th covariant derivative has a linearly divergent behavior along the null hypersurface.
We can generalize this relation, i.e., the relation between a conserved quantity on a null hypersurface and the divergent behavior, as follows: \\ \\ {\bf Proposition.1~} \textit{ If the relation \begin{align} \mathfrak{e}^{\mu_{n}}_{(1)} \mathfrak{e}^{\mu_{n-1}}_{(1)}\cdots\mathfrak{e}^{\mu_1}_{(1)} \nabla_{\mu_{n}} \nabla_{\mu_{n-1}}\cdots \nabla_{\mu_1} \Psi = A(U), \label{eq:aretakisconst_a} \end{align} holds for some scalar field $\Psi(U,V)$ in $AdS_2$, a positive integer $n$, and a regular function $A(U) (\not \equiv 0)$, then $\mathfrak{e}^{\mu_{n+1}}_{(1)}\mathfrak{e}^{\mu_n}_{(1)}\cdots\mathfrak{e}^{\mu_{1}}_{(1)} \nabla_{\mu_{n+1}} \nabla_{\mu_n}\cdots \nabla_{\mu_{1}} \Psi$ is divergent at the $AdS$ boundary. } \\ \\ {\bf Proof}.~~ Acting with the operator $\mathfrak{e}^{\mu_{n+1}}_{(1)} \nabla_{\mu_{n+1}}$ on Eq.~\eqref{eq:aretakisconst_a}, we obtain \begin{align} \mathfrak{e}^{\mu_{n+1}}_{(1)}\mathfrak{e}^{\mu_n}_{(1)}\cdots\mathfrak{e}^{\mu_{1}}_{(1)} \nabla_{\mu_{n+1}} \nabla_{\mu_n}\cdots \nabla_{\mu_{1}} \Psi=&\partial_UA(U) -\mathfrak{e}^{\mu_{n+1}}_{(1)} \nabla_{\mu_{n+1}}\left(\mathfrak{e}^{\mu_{n}}_{(1)} \cdots\mathfrak{e}^{\mu_1}_{(1)}\right) \nabla_{\mu_{n}} \cdots \nabla_{\mu_1} \Psi\notag\\ =&\partial_UA(U)+nA(U)\frac{\partial_U H(U,V)}{H(U,V)}, \label{omega1omega1ddpsi} \end{align} where we have used the relation \begin{align} & \mathfrak{e}^{\mu}_{(1)}\nabla_\mu \mathfrak{e}^{\nu}_{(1)} =-\frac{\partial_U H(U,V)}{H(U,V)}\mathfrak{e}^{\nu}_{(1)}. \label{omega1eq} \end{align} On the right-hand side of Eq.~\eqref{omega1omega1ddpsi}, the first term is finite but the second term is divergent at the $AdS$ boundary $V = U$ (or $U = V + \pi$) because $\partial_U H/H=2/\tan(U-V)$. \hfill$\Box$ $~$ \\ If $\Psi(U,V)$ is the massive Klein-Gordon field with the mass squared $m^2 = \ell (\ell + 1)$, the above proposition leads to the relation between the generalized Aretakis constant and the divergent behavior at the $AdS$ boundary.
As another application of the above proposition with $n = 1$, for the massive Klein-Gordon fields $\Phi(U,V)$ with the mass squared $m^2 = \ell (\ell +1)$ in $AdS_2$, if we choose the function $\Psi(U,V)$ as \begin{align} \label{masslessPsi} \Psi = D_{i_{1},1} D_{i_{2},2} \cdots D_{i_\ell,\ell} \Phi, \end{align} where $D_{i,\ell}$ are the mass ladder operators in Eq. \eqref{massladderinUV}, then $\Psi(U,V)$ satisfies the massless Klein-Gordon equation \eqref{eomUV}. Then, $\mathfrak{e}^{\mu}_{(1)} \nabla_\mu \Psi\propto \partial_U\Psi$ is a constant along each null hypersurface, which corresponds to the generalized Aretakis constant $\mathcal{A_\ell}$ in Eq. \eqref{garec} as will be shown in section \ref{sec:4}, and $\mathfrak{e}^{\mu}_{(1)} \mathfrak{e}^{\nu}_{(1)} \nabla_\mu \nabla_\nu \Psi$ is linearly divergent along the null hypersurface. We can also show the following proposition: ~~\\ ~~\\ {\bf Proposition.2~} \textit{ If the relation \begin{align} \mathfrak{e}^{\mu_{n}}_{(1)} \mathfrak{e}^{\mu_{n-1}}_{(1)}\cdots\mathfrak{e}^{\mu_1}_{(1)} \nabla_{\mu_{n}} \nabla_{\mu_{n-1}}\cdots \nabla_{\mu_1} \Psi = A_0 + A_1(V)(U-U_0) + {\cal O}((U-U_0)^2), \label{eq:aretakisconst_b} \end{align} holds for some scalar field $\Psi(U,V)$ in $AdS_2$, a positive integer $n$, a constant $A_0$ ($\neq 0$), and a bounded function $A_1(V)$, then $\mathfrak{e}^{\mu_{n+1}}_{(1)}\mathfrak{e}^{\mu_n}_{(1)}\cdots\mathfrak{e}^{\mu_{1}}_{(1)} \nabla_{\mu_{n+1}} \nabla_{\mu_n}\cdots \nabla_{\mu_{1}} \Psi$ is divergent at the $AdS$ boundary along $U= U_0$. } \\ \\ {\bf Proof}.~~ If we set $A = A_0 + A_1(V)(U-U_0) + {\cal O}((U-U_0)^2)$, Eq.~\eqref{omega1omega1ddpsi} still holds. Because $A_1$ is a bounded function, $\mathfrak{e}^{\mu_{n+1}}_{(1)}\mathfrak{e}^{\mu_n}_{(1)}\cdots\mathfrak{e}^{\mu_{1}}_{(1)} \nabla_{\mu_{n+1}} \nabla_{\mu_n}\cdots \nabla_{\mu_{1}} \Psi$ is divergent at the $AdS$ boundary along $U= U_0$. 
\hfill$\Box$ $~$ \\ We note that the scalar fields $\Psi(U,V)$ in the above propositions are not necessarily massive Klein-Gordon fields. Proposition.2 shows that the existence of a constant along a null hypersurface leads to the divergent behavior of the next-order derivative. Finally, we comment that proposition.2 holds if $A_0$ is a function of $V$ and has a non-vanishing limiting value $\lim_{V\to \infty} A_0 \neq 0$. \subsection{The relation among the conformal Killing tensors, the Aretakis constants and instability } \label{sec:conformalKilling} For positive integers $n$, rank-$n$ tensors \begin{equation} \label{conformalKillingtensor} K^{\mu_1 \mu_2 \cdots \mu_{n}} :=\mathfrak{e}_{(1)}^{\mu_1}\mathfrak{e}_{(1)}^{\mu_2}\cdots \mathfrak{e}_{(1)}^{\mu_{n}}, \end{equation} are conformal Killing tensors in $AdS_2$, and the only non-trivial component is $K^{UU\cdots U} = 1$.\footnote{We note that $K^{\mu_1 \mu_2 \cdots \mu_{n}}$ is parallelly propagated along $\mathfrak{e}_{(0)}^{\mu}$, i.e., $\mathfrak{e}_{(0)}^{\nu}\nabla_\nu K^{\mu_1 \mu_2 \cdots \mu_{n}}=0$, and satisfies ${\cal L}_\xi K^{\mu_1 \mu_2 \cdots \mu_n} = 0$ with the Killing vector $\xi = \partial_V + \partial_U$.} For the scalar fields $\Phi(U,V)$ with the mass squared $m^2=\ell(\ell+1)$ $(\ell=0,1,2,\cdots)$, Eq. \eqref{e1e1garec} shows that the generalized Aretakis constants $\mathcal{A}_\ell$ in Eq. \eqref{garec} are related to the rank-$(\ell+1)$ conformal Killing tensor \cite{Cardoso:2017qmj}, \begin{equation} K^{\mu_1 \mu_2 \cdots \mu_{\ell+1}}\nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{\ell+1}} \Phi =\mathcal{A}_\ell. \end{equation} Eq. \eqref{e1e1e1divergent} implies that, near the $AdS$ boundary $V\simeq U$, \begin{equation} \label{KDPhi} K^{\mu_1 \mu_2 \cdots \mu_{\ell+2}}\nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{\ell+2}} \Phi=2(\ell+1)\frac{\mathcal{A}_\ell}{U-V}+\mathcal{O}\left((U-V)^0\right).
\end{equation} Hence, the contraction of the rank-$(\ell+2)$ conformal Killing tensor with the $(\ell+2)$-th order covariant derivative diverges as $(U-V)^{-1}$ at the $AdS$ boundary if $\mathcal{A}_\ell\neq 0$. For the general mass squared $m^2\ge m^2_{\rm BF}=-1/4$, where the Aretakis constants do not necessarily exist, we have the relation \begin{equation} K^{\mu_1 \mu_2 \cdots \mu_{n_m}}\nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{n_m}} \Phi= \mathfrak{e}_{(1)}^{\mu_1}\mathfrak{e}_{(1)}^{\mu_2}\cdots \mathfrak{e}_{(1)}^{\mu_{n_m}}\nabla_{\mu_1}\nabla_{\mu_2} \cdots \nabla_{\mu_{n_m}} \Phi, \end{equation} where the notation $n_m$ is defined in Eq. \eqref{nm}. As discussed in Secs. \ref{sec:Poincarehorizon} and \ref{sec:nullhypersurfaces}, the right-hand side is divergent at the $AdS$ boundary. Thus, the Aretakis instability can also be regarded as the statement that the contraction of the conformal Killing tensor $K^{\mu_1 \mu_2 \cdots \mu_{n_m}}$ with the $n_m$-th order covariant derivative of the Klein-Gordon field diverges at the $AdS$ boundary. \section{The Aretakis constants from the spacetime conformal symmetry} \label{sec:4} In this section, we discuss the relation between the generalized Aretakis constants in $AdS_2$ in Eq.~\eqref{garec} and the ladder operators constructed from the spacetime conformal symmetry~\cite{Cardoso:2017qmj,Cardoso:2017egd} for massive Klein-Gordon fields with the mass squared $m^2=\ell(\ell+1)$ $(\ell=0,1,2,\cdots)$. First, we construct conserved quantities at each null hypersurface $U = {\rm const.}$ following Ref.~\cite{Cardoso:2017qmj}. Next, we show that they coincide with the generalized Aretakis constants up to constant factors. Note that the cases $\ell=1,2$ have been discussed in Ref.~\cite{Cardoso:2017qmj}.
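Before constructing the conserved quantities, it is instructive to verify the boundary behavior of Eq.~\eqref{KDPhi} explicitly in the simplest case $\ell=0$. The sketch below assumes the double-null metric component $g_{UV}=-1/\sin^2(U-V)$, a normalization chosen here for illustration only, and uses the general massless solution $\Phi=f(U)+g(V)$:

```python
import sympy as sp

U, V = sp.symbols('U V', real=True)
f, g = sp.Function('f'), sp.Function('g')

# Assumed double-null AdS2 metric component g_UV = g_VU = -1/sin^2(U - V);
# the normalization is our choice for illustration, since only the ratio
# (d g_UV/dU)/g_UV enters the Christoffel symbol below.
h = -1 / sp.sin(U - V)**2
Phi = f(U) + g(V)   # general massless (l = 0) solution of dU dV Phi = 0

# For a metric with only off-diagonal components, Gamma^U_UU = (dh/dU)/h
# and Gamma^V_UU = 0, so the UU component of the second covariant
# derivative is d^2 Phi/dU^2 - Gamma^U_UU * dPhi/dU.
GammaU_UU = sp.simplify(sp.diff(h, U) / h)
DDPhi = sp.diff(Phi, U, 2) - GammaU_UU * sp.diff(Phi, U)

# This reproduces Eq. (KDPhi) for l = 0: near V -> U the leading behavior
# is 2 A_0/(U - V) with the Aretakis constant A_0 = f'(U).
expected = sp.diff(f(U), U, 2) + 2 * sp.diff(f(U), U) / sp.tan(U - V)
assert sp.simplify(DDPhi - expected) == 0
assert sp.simplify(GammaU_UU + 2 / sp.tan(U - V)) == 0
```

The second term, $2f'(U)/\tan(U-V)\simeq 2\mathcal{A}_0/(U-V)$ near $V\simeq U$, is precisely the boundary divergence discussed above.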
\subsection{Conserved quantities at each null hypersurface from the mass ladder operators} \label{subsec:4-A} We discuss the scalar fields $\Phi(U,V)$ obeying the massive Klein-Gordon equation \eqref{eomUV} with the mass squared $m^2=\ell(\ell+1)$. First, let us consider the massless case $\ell = 0$. The massless Klein-Gordon equation \eqref{eomUV} shows \begin{equation} \partial_V\partial_U\Phi=0. \end{equation} We can see that $\partial_U\Phi$ is a conserved quantity at each null hypersurface $U={\rm const.}$, and this quantity is the generalized Aretakis constant $\mathcal{A}_0$ in Eq. \eqref{garec}. Next, we consider $\ell \ge 1$ cases. Using the mass ladder operators \cite{Cardoso:2017qmj,Cardoso:2017egd} (see Appendix~\ref{appendix:massladder} for a brief review), the massive Klein-Gordon fields can be mapped into the massless Klein-Gordon fields. Following Ref.~\cite{Cardoso:2017qmj}, we can construct conserved quantities at each null hypersurface $U = {\rm const.}$ similar to the massless case. The explicit calculation is shown below. From the relation \eqref{eq:comrelformassladderinAdS2} with $k=s=\ell$ on the scalar field~$\Phi$, we obtain \begin{equation} \begin{split} \label{massladderell} D_{i_{\ell},-1}D_{i_{\ell-1},0}\cdots D_{i_{1},\ell-2}\left[-H(U,V)\partial_V\partial_U-\ell(\ell+1)\right]\Phi=-H(U,V)\partial_V\partial_U D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi, \end{split} \end{equation} where the mass ladder operators $D_{i,k}$ are given by Eq.~\eqref{massladderinUV} and $H(U,V)$ is given by Eq.~\eqref{HUV}. Since the left-hand side vanishes due to the Klein-Gordon equation for $\Phi$, Eq.~\eqref{massladderell} leads to \begin{equation} \begin{split} \label{massless_eom} -H(U,V)\partial_V\partial_U D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi=0. 
\end{split} \end{equation} Thus, solutions of the massive Klein-Gordon equation with the mass squared $m^2 = \ell(\ell +1)$ in $AdS_2$ can be mapped into those of the massless Klein-Gordon equation. We note that massive fields with other values of the mass squared cannot be mapped into massless fields. As in the case $\ell=0$, Eq.~\eqref{massless_eom} shows \begin{equation} \label{conservationlawUV} \partial_V \mathcal{Q}_\ell = 0, \end{equation} where \begin{align} \label{QinUV} {\cal Q}_\ell := W(U)\partial_U D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi. \end{align} For later convenience, using $\partial_V W(U) = 0$, we have added an arbitrary function $W(U)$ as a factor. Eq.~\eqref{conservationlawUV} shows that ${\cal Q}_\ell$ are conserved quantities at each null hypersurface $U={\rm const.}$ As will be discussed below, the quantity ${\cal Q}_\ell$ is related to the generalized Aretakis constant $\mathcal{A}_\ell$. \subsection{The relation with the Aretakis constants on the future Poincar\'e horizon} \label{subsec:4-B} We shall show that ${\cal Q}_\ell$ coincide with the Aretakis constants $\mathcal{H}_\ell$ in Eq. \eqref{arec} on the future Poincar\'e horizon $U=\pi/2$ by choosing $W(U)$ appropriately. It is convenient to use the ingoing Eddington-Finkelstein coordinates $(v,r)$. Using $\partial_U=-(2+2vr+(1/2+v^2/2)r^2)\partial_r$, Eq. \eqref{QinUV} is written as \begin{align} {\cal Q}_\ell= -W(U)\left(2+2vr+\frac{1+v^2}{2}r^2\right)\partial_r D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi. \end{align} Because $\tan{U}=v+2/r$ in Eq. \eqref{UVads2}, we can regard $W$ as a function of $v+2/r$. Hereafter, we consider the cases $W = -2^{-1}C_W(v/2 + 1/r)^q$, where $C_W$ and $q$ are constants. Then, we can evaluate the leading term of ${\cal Q}_\ell$ as \begin{align} \label{QAdS} {\cal Q}_\ell= C_Wr^{-q}\partial_r D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi\left(1+\mathcal{O}\left(r\right)\right).
\end{align} By choosing $C_W$ and $q$ appropriately, we can show that ${\cal Q}_\ell$ coincide with the Aretakis constants on the future Poincar\'e horizon, $\mathcal{H}_\ell$ in Eq. \eqref{arec}. For this purpose, we introduce the following proposition. ~~\\ ~~\\ {\bf Proposition.3~} \textit{For analytic solutions of the massive Klein-Gordon equation with the mass squared $\ell (\ell + 1), ~(\ell = 0, 1, 2, \cdots)$ in $AdS_2$, \begin{align} \left[2\partial_v\partial_r+2r\partial_r+r^2\partial_r^{2} - \ell(\ell + 1)\right]\Phi(v,r) = 0, \label{eq:AdS2kgeq} \end{align} the relation \begin{equation} \begin{split} &2^{- n_{1}+n_{-1}} r^{-2 n_{-1}- n_0} \partial_r D_{i_\ell,1}D_{i_{\ell-1},2}\cdots D_{i_{1},\ell}\Phi= \partial_r^{\ell+1} \Phi + {\cal O}(r), \label{eq:d1eqinads2} \end{split} \end{equation} holds, where $n_{-1}, n_0, n_{1}$ are the numbers of the mass ladder operators constructed from $\zeta_{-1}, \zeta_{0}, \zeta_1$, respectively, included in the left-hand side of Eq. \eqref{eq:d1eqinads2}. The numbers $n_{-1}, n_0, n_{1}$ satisfy $n_{-1}+ n_0+ n_{1}= \ell$. } ~~\\ ~~\\ The proof is given in Appendix \ref{appendix:relbtwnAQinAdS2}. Because $\mathcal{H}_\ell=\partial_r^{\ell+1}\Phi|_{r=0}$, the above proposition and Eq.~\eqref{QAdS} show $\mathcal{Q}_\ell|_{r=0}=\mathcal{H}_\ell$ if $C_W=2^{- n_{1}+n_{-1}}$ and $q=2 n_{-1}+ n_0$. We should note that, regardless of the choice of the closed conformal Killing vectors $\zeta_{-1}, \zeta_{0}, \zeta_1$, $\mathcal{Q}_\ell|_{r=0}$ coincide with the same conserved quantities $\mathcal{H}_\ell$. \subsection{The relation with the generalized Aretakis constants} \label{subsec:4-C} Next, we shall show that ${\cal Q}_\ell$ in Eq. \eqref{QinUV} coincide with the generalized Aretakis constants $\mathcal{A}_\ell$ in Eq. \eqref{garec} by choosing $W(U)$ appropriately.
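As a consistency check of the coordinate input used in the previous subsection, the Jacobian $\partial_U=-(2+2vr+(1/2+v^2/2)r^2)\partial_r$ can be reproduced from $\tan U = v+2/r$ in Eq.~\eqref{UVads2}, assuming that $V$ depends only on $v$ so that a $U$-derivative at fixed $V$ reduces to an $r$-derivative at fixed $v$:

```python
import sympy as sp

v, r = sp.symbols('v r', positive=True)

# Coordinate relation of Eq. (UVads2): tan U = v + 2/r. Assuming V depends
# only on v, a U-derivative at fixed V is an r-derivative at fixed v.
U_of_r = sp.atan(v + 2 / r)
dU_dr = sp.diff(U_of_r, r)

# The text quotes dr/dU = -(2 + 2 v r + (1/2 + v^2/2) r^2); check it.
dr_dU = sp.simplify(1 / dU_dr)
quoted = -(2 + 2 * v * r + (sp.Rational(1, 2) + v**2 / 2) * r**2)
assert sp.simplify(dr_dU - quoted) == 0
```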
In the construction of $\mathcal{Q}_\ell$ in Eq.~\eqref{QinUV}, if we replace the mass ladder operators $D_{i,k}$ with the general mass ladder operators in Eq.~\eqref{generalmlo}, $\mathcal{Q}_\ell$ are still independent of $V$. In that case, if all general mass ladder operators contain $\zeta_1$, $\mathcal{Q}_\ell$ at the Poincar\'e horizon coincide with $\mathcal{Q}_\ell$ constructed only from $\zeta_1$ up to a constant factor by proposition.3. Because $AdS_2$ is maximally symmetric, we can generalize this to other null hypersurfaces $U = {\rm const.}$, i.e., if none of the general closed conformal Killing vectors used to construct $\mathcal{Q}_\ell$ is proportional to $\partial_V$ at a null hypersurface $U = U_0$, then those $\mathcal{Q}_\ell$ at $U = U_0$ are proportional to $\mathfrak{e}^{\mu_1}_{(1)}\mathfrak{e}^{\mu_2}_{(1)} \cdots\mathfrak{e}^{\mu_{\ell+1}}_{(1)}\nabla_{\mu_1}\nabla_{\mu_2} \cdots\nabla_{\mu_{\ell+1}} \Phi$. Thus, $\mathcal{Q}_\ell$ in Eq.~\eqref{QinUV} coincide with the generalized Aretakis constant ${\mathcal A}_\ell$ up to a factor that is a function of $U$ because $\zeta_{-1}, \zeta_0, \zeta_1$ are not proportional to $\partial_V$ except at the Poincar\'e horizon. \section{Summary and discussion} In this paper, we have studied the geometrical meaning of the Aretakis constants and instability for massive scalar fields in $AdS_2$. We have shown that the Aretakis constants and instability in $AdS_2$ can be understood as the statement that some components of the higher-order covariant derivatives of the scalar fields in the parallelly propagated null geodesic frame are constant or unbounded at the future Poincar\'e horizon. Due to the maximal symmetry of $AdS_2$, the same discussion holds not only on the future Poincar\'e horizon but also on any null hypersurface.
We have then clarified that the generalization of the Aretakis constants~\cite{Cardoso:2017qmj}, called \textit{the generalized Aretakis constants}, has the same geometrical meaning as that on the future Poincar\'e horizon, i.e., some components of the higher-order covariant derivatives in the parallelly propagated null geodesic frame are constant at each null hypersurface. Also, we have seen that the higher-order covariant derivatives of the scalar fields have singular behavior at the whole $AdS$ boundary, and this causes the Aretakis instability in $AdS_2$. If we consider cases for the mass squared with $m^2_{\rm BF} < m^2 < 0$, where $m^2_{\rm BF} = -1/4$ is the Breitenlohner-Freedman (BF) bound~\cite{Breitenlohner:1982jf, Breitenlohner:1982bm}, the first-order covariant derivatives of the scalar fields are divergent at the $AdS$ boundary. This implies that some physical quantities, such as the energy-momentum tensor, also have divergent behavior at the $AdS$ boundary for $m^2_{\rm BF} < m^2 < 0$. We have also discussed the relation with the spacetime conformal symmetry. For the fields with the mass squared $m^2=\ell(\ell+1)$ $(\ell=0,1,2,\cdots)$, the contraction of the rank-$(\ell+2)$ conformal Killing tensor with the $(\ell+2)$-th order covariant derivatives of the field is divergent at the whole $AdS$ boundary if the generalized Aretakis constant exists. Viewed on a null hypersurface, this divergent behavior corresponds to the Aretakis instability. We note that the generalized Aretakis constants can be expressed as the contraction of the rank-$(\ell+1)$ conformal Killing tensor with the $(\ell+1)$-th order covariant derivatives~\cite{Cardoso:2017qmj}. We have demonstrated that the generalized Aretakis constants can be derived from the mass ladder operators constructed from the closed conformal Killing vectors~\cite{Cardoso:2017qmj}.
Since the $AdS_2$ structures appear in the vicinity of extremal black hole horizons \cite{Kunduri:2013gce,Lucietti:2012xr,Zimmerman:2016qtn,Godazgar:2017igz,Gralla:2017lto,Gralla:2018xzo,Hadar:2017ven,Hadar:2018izi}, we expect that the Aretakis instability in extremal black hole spacetimes has a geometrical meaning similar to our result in the $AdS_2$ case. In fact, this expectation is correct, i.e., the Aretakis instability for black hole cases \cite{Aretakis:2011ha,Aretakis:2011hc,Aretakis:2012ei,Lucietti:2012sf,Murata:2012ct,Gralla:2019isj} can be understood as the statement that some components of the higher-order covariant derivatives of the field in the parallelly propagated frame are unbounded at late times \cite{arecinBH}. \begin{acknowledgments} The authors would like to thank Shahar Hadar, Tomohiro Harada, Takaaki Ishii, Shunichiro Kinoshita, Keiju Murata, Shin Nakamura, Harvey S. Reall, and Norihiro Tanahashi for useful comments and discussions. M.K. acknowledges support by MEXT Grant-in-Aid for Scientific Research on Innovative Areas 20H04746. \end{acknowledgments}
\section{Introduction} While graphene\cite{Novoselov2004} has proven to be a remarkable material, with electronic properties that are interesting from a fundamental\cite{Neto2009,Novoselov2005} as well as a technological viewpoint,\cite{Geim2007,Geim2009} the absence of a band gap severely limits its possible applications. Several methods have been proposed for opening a gap in graphene. The most immediate way of making graphene semiconducting, relying on quantum confinement effects, is to reduce the dimensionality by cutting graphene into narrow ribbons. Such so-called graphene nanoribbons (GNRs) have band gaps that in general scale inversely with the width of the GNR, but which are very sensitive to the exact geometry of the edge of the ribbon.\cite{Nakada1996,Brey2006,Son2006} Related to these ideas, periodically perforated graphene, termed a graphene antidot lattice, effectively results in a network of ribbons and has been shown to be an efficient way of inducing an appreciable band gap in graphene.\cite{Pedersen2008a} This idea has been successfully applied to fabricate simple graphene-based semiconductor devices.\cite{Bai2010,Kim2010} Modifying graphene via adsorption of hydrogen presents another route towards opening a gap in graphene, with fully hydrogenated graphene exhibiting a band gap of several electron volts,\cite{Sofo2007,Elias2009} while patterned hydrogen adsorption yields band structures resembling those of graphene antidot lattices, with reported band gaps of at least $450$~meV.\cite{Balog2010} The prospect of opening a band gap in graphene via electrostatic gating is intriguing, since it would allow switching between semi-metallic and semiconducting behavior and dynamically altering the band gap to fit specific applications. This makes it significantly more flexible than proposals relying on structural modification of graphene.
However, a linearization of the tight-binding Hamiltonian of graphene, resulting in the now widely studied Dirac equation (DE) of graphene,\cite{Semenoff1984,Neto2009} suggests that the Dirac fermions of graphene cannot be confined by electrostatic gating, due to the phenomenon of Klein tunneling.\cite{Novoselov2006,Beenakker2008} Thus, while periodic gating of usual semiconductor heterostructures such as GaAs quantum wells does induce gaps in the dispersion relation,\cite{Pedersen2008} previous theoretical studies have indicated that band gaps are induced for neither one-dimensional \cite{Barbier2008, Barbier2009} nor two-dimensional\cite{Park2008} periodic gating of graphene. These studies have taken as their starting point the Dirac model of graphene, which is a low-energy continuum model that ignores atomistic details. Here, we instead use a more accurate tight-binding (TB) model to study periodically gated graphene. Contrary to predictions of continuum (Dirac) models, the TB model suggests that it is indeed possible to open a band gap in graphene via periodic gating. The aim of this paper is twofold: (i) to compare periodically gated graphene with graphene antidot lattices. In doing so we will illustrate that, contrary to what may be expected from the Dirac equation, a sufficiently large scalar potential, i.e., not necessarily a mass term, yields a band structure that is highly similar to that of perforated graphene structures; (ii) to serve as a feasibility study of periodic gating as a means of inducing a band gap in graphene. To this end, we will illustrate and discuss the non-trivial dependence of the band gap on the gate potential, as well as the intricate relation between band gap and the edge geometry of the gated region. These results will also serve to illustrate some of the key differences between graphene and ordinary two-dimensional electron gases.
While, initially, the potential will be modeled as a simple step function, we will show below that introducing smoothing in the potential distribution severely reduces the attainable band gap. Continuum and atomistic models of periodically gated graphene have previously been compared in Ref.~\onlinecite{Zhang2010}. That study, however, focused on a single value of the potential strength and only considered structures that are rotated $30^\circ$ compared to the ones of the present work and, therefore, do not necessarily display any band gap even for perforated structures.\cite{Petersen2011} Moreover, in this work we examine in detail the non-trivial dependence of the band gap on the magnitude of the potential and we consider more realistic, smooth potential profiles. Finally, we elucidate the intricate dependence on the precise edge geometry and show how the energy gap correlates with the gate region overlap of electron and hole states. \section{Models} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{geoms.eps} \caption{(Color online) Unit cells used in the calculations for the $\{12,5\}$ lattice. (a) Perforated graphene sheet, with carbon atoms removed in the region of the antidot. (b) Staggered potential (mass term) in the antidot region. The color indicates the sign of the on-site energies. (c) Constant gate potential in the antidot region. (d) Gate potential modeled via Eq.~(\ref{eq:potential}), assuming the gate is directly below the graphene sheet, with no insulating layer in-between. The lower panel illustrates the potential of each model on the separate $A$ and $B$ sublattices. } \label{fig:geoms} \end{center} \end{figure} In Fig.~\ref{fig:geoms} we illustrate the graphene structures that we will consider in this article. We consider only superlattices with triangular symmetry, as shown in the figure. An important decision lies in the choice of the angle between the basis vectors of the superlattice and the carbon-carbon bonds in graphene. 
In particular, if the superlattice basis vectors are rotated $30^\circ$ compared to the carbon-carbon bonds (such as in Ref.~\onlinecite{Zhang2010}), Clar sextet theory predicts that perforated graphene structures only exhibit significant band gaps for every third value of the side length of the hexagonal unit cell.\cite{Petersen2011} In contrast to this, perforated graphene structures with basis vectors parallel to the carbon-carbon bonds always have band gaps. We choose to focus in this paper on the latter geometries, in order to ensure that the superlattice symmetry in itself does not prohibit the emergence of a band gap. We characterize a given structure by $\{L,R\}$, where $L$ denotes the side length of the hexagonal unit cell, while $R$ is the radius of the central region, both in units of the graphene lattice constant, as illustrated in Fig.~\ref{fig:geoms}. In these units, $L$ also corresponds to the number of benzene rings along each edge of the unit cell. Note that the exact geometry of the edge of the central region differs greatly depending on the radius $R$. Below, we discuss in detail the crucial dependence of the results on the edge geometry. We will consider four distinct ways of periodically modifying graphene: (a) Perforated graphene (graphene antidot lattices), with carbon atoms removed from the central region, (b) a periodic mass term, non-zero only in the central region, and (c) periodically gated graphene, with a constant gate potential within the central region and a vanishing potential outside. Furthermore, to discuss the feasibility of realizing gapped graphene via periodic gating, we will also consider (d) periodically gated graphene, with a more realistic model of the spatial dependence of the gate potential, obtained from a solution to the Laplace equation. Focus will be on periodically gated graphene, with the other forms of modulation included for comparison only. 
To illustrate the dependence of the results on the exact edge of the gate or mass region, we will use a Dirac model as well as a more accurate tight-binding treatment, in which the atomistic details of the structures are included. We find significant discrepancies between these two methods, quantitatively as well as qualitatively. In particular, we will show that the DE does not predict a band gap opening for periodic gating, which is present in the TB results. In what follows, we briefly describe the two models. In the continuum model of the problem, we employ the Dirac Hamiltonian \begin{equation} H_\mathrm{D} = \left[ \begin{array}{cc} \Delta(x,y) & v_F\left(\hat{p}_x-i\hat{p}_y\right) \\ v_F\left(\hat{p}_x+i\hat{p}_y\right) & \pm\Delta(x,y) \end{array} \right],\label{eq:DE} \end{equation} where $v_F\simeq 10^6$~m/s is the Fermi velocity, while $\Delta(x,y)$ denotes the gate potential or mass term. Here, the $+$ ($-$) is used when modeling a gate potential (mass term). Imposing periodic Bloch boundary conditions at the edge of the unit cell, we solve the problem in a plane-wave spinor basis, $\left<\mathbf{r}|A_\mathbf{G}\right> = \bigl(\begin{smallmatrix} 1 \\0 \end{smallmatrix}\bigr) e^{i(\mathbf{G}+\mathbf{k})\cdot\mathbf{r}}$ and $\left<\mathbf{r}|B_\mathbf{G}\right> = \bigl(\begin{smallmatrix} 0 \\1 \end{smallmatrix}\bigr) e^{i(\mathbf{G}+\mathbf{k})\cdot\mathbf{r}}$, with $\mathbf{k}$ the Bloch wave vector and $\mathbf{G}$ the reciprocal lattice vectors. We take $\Delta(\mathbf{r})=V_0\Theta(R-r)$, with $\Theta(r)$ the Heaviside step function, yielding $\Delta(\mathbf{G}) = 2\pi R V_0 J_1(GR)/(GA)$, where $A$ is the unit cell area while $J_1(x)$ is the Bessel function of the first kind. A total of $1058$ plane-wave spinors were included in the calculations, to ensure convergence of the results. In the tight-binding model we include only nearest-neighbor coupling between $\pi$ orbitals, parametrized via the hopping term $-t$, with $t=3$~eV. 
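As a minimal illustration of this nearest-neighbor parametrization (a sketch of our own; the honeycomb orientation and the resulting location of the $K$ point are conventional choices made here, not specifications from the text), the bulk graphene bands follow from diagonalizing the $2\times 2$ Bloch Hamiltonian:

```python
import numpy as np

t = 3.0  # nearest-neighbor hopping in eV, as in the text
a = 1.0  # graphene lattice constant (we work in units of a)

# Nearest-neighbor vectors of the honeycomb lattice for one conventional
# orientation (bond length a/sqrt(3)); the orientation is our choice here.
d = a / np.sqrt(3.0)
deltas = d * np.array([[0.0, 1.0],
                       [np.sqrt(3.0) / 2.0, -0.5],
                       [-np.sqrt(3.0) / 2.0, -0.5]])

def bands(k):
    """Eigenvalues of the 2x2 nearest-neighbor Bloch Hamiltonian at k."""
    fk = np.sum(np.exp(1j * (deltas @ k)))   # sum over the three bonds
    H = np.array([[0.0, -t * fk],
                  [-t * np.conj(fk), 0.0]])
    return np.linalg.eigvalsh(H)

# The two pi bands touch at zero energy at the K point: the Dirac point.
kK = np.array([4.0 * np.pi / (3.0 * a), 0.0])
assert np.allclose(bands(kK), [0.0, 0.0], atol=1e-12)

# At Gamma the bands lie at -3t and +3t, the extrema of the pi bands.
assert np.allclose(bands(np.zeros(2)), [-3.0 * t, 3.0 * t])
```

A gate potential or mass term enters this framework simply as (site-dependent) diagonal entries of the Hamiltonian, as described in the text.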
We ignore the overlap between neighboring $\pi$ orbitals, assuming that our basis is orthogonal, and set the on-site energy of the $\pi$ orbitals to zero. This parametrization accurately reproduces the Fermi velocity of graphene, and is also in quantitative agreement with density functional theory when applied to perforated graphene structures.\cite{Furst2009} For periodically gated graphene, we set the diagonal terms of the Hamiltonian equal to the gate potential. In the case of a mass term, the diagonal terms become $\pm V_0$, with the sign depending on which sublattice the carbon atom resides on. For perforated graphene, atoms are removed entirely in the region of the hole, ensuring that no dangling bonds are created. While including next-nearest neighbor coupling, as well as taking into account the non-orthogonality of the basis set, will change our results quantitatively, we expect the overall trends and the conclusions to remain the same in more accurate models. \section{Band structures} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{bandL12R5Hole.eps} \caption{(Color online) Band structures of the $\{12,5\}$ lattice. The solid, black lines show results for perforated graphene, calculated using a TB model. The blue, dashed (red, dotted) lines correspond to graphene with a periodic mass term of $V_0=t$, calculated using the TB (DE) model. The thick, red line shows the location of the Fermi level. Note the perfect electron-hole symmetry in this case, and the agreement on the magnitude of the band gap between all three methods. } \label{fig:bandL12R5Hole} \end{center} \end{figure} In Fig.~\ref{fig:bandL12R5Hole} we show the band structure for a $\{12,5\}$ graphene antidot lattice, i.e., periodically perforated graphene, and compare to the case of a periodic mass term, modeled using either the TB or the DE approach. 
A sufficiently large mass term should ensure that electrons are excluded entirely from the region of the mass term, and we thus expect relatively good correspondence with perforated graphene. In the figure, we consider the case where the mass term is equal in size to the TB hopping term, $V_0=t$. As expected, we find quite good agreement between all three methods. In particular, the magnitudes of the band gaps are in near-perfect agreement. Using a finite but sufficiently large mass term in the DE model thus yields much better results than models where the limit of an infinite mass term is used to impose boundary conditions on the edge of the hole.\cite{Furst2009} Note that electron-hole symmetry is preserved for all models. For higher-lying bands, the differences between the DE and TB results become more pronounced, as the linear approximation of the DE model breaks down. Further, comparing the case of perforated graphene to that of a periodic mass term in the TB model, we see significant differences in the higher-lying bands. However, we note that increasing the mass term further results in excellent agreement with the perforated graphene case, for all bands shown. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{bandL12R5V0p5.eps} \caption{(Color online) Band structures of a periodically gated $\{12,5\}$ lattice. The solid, black (blue, dashed) lines show results for periodically gated graphene, calculated using a TB (DE) model. The gate potential is $V_0=t/2$. The thick, red line shows the location of the Fermi level. Note the nearly dispersionless band near $-0.2$~eV. Inset: A zoom of the band structure near the $\Gamma$ point, illustrating the emergence of a band gap in the TB results and the absence of such a gap in the DE model.
} \label{fig:bandL12R5V0p5} \end{center} \end{figure} A periodic mass term is expected to induce a gap in graphene due to the fact that it explicitly breaks sublattice symmetry via the $\hat{\sigma}_z$ operator in the continuum model or, similarly, through the staggered on-site potential in the TB approach. Contrary to this, analysis of periodic potentials in a DE model of graphene suggests that periodic gating does \emph{not} induce a gap in graphene around the Fermi level,\cite{Barbier2008, Barbier2009} but rather leads to the generation of new Dirac points near the superlattice Brillouin zone boundaries.\cite{Park2008} Superlattices lacking inversion symmetry have been suggested as a means of achieving tunable band gaps in graphene, based on results using a DE model.\cite{Tiwari2009} However, these results were recently found to be based on numerical errors.\cite{Tiwari2012} Indeed, based on the DE model, a gap cannot be produced by any Hamiltonian that preserves time-reversal symmetry, i.e. $H=\hat{\sigma}_y H^*\hat{\sigma}_y$, where $\hat{\sigma}_y$ is the Pauli spin matrix while $H^*$ denotes the complex conjugate (not the Hermitian conjugate) of the Hamiltonian.\cite{Lin2012} A pure scalar potential, such as the one we consider for periodically gated graphene, see Eq.~(\ref{eq:DE}), preserves this symmetry and the DE model thus suggests that periodic gating does \emph{not} open a band gap. Instead, a combination of a scalar as well as a vector potential is needed.\cite{Lin2012} In Fig.~\ref{fig:bandL12R5V0p5} we show the band structure of a periodically gated $\{12,5\}$ graphene structure, with a gate potential of half the TB hopping term, $V_0=t/2$. Results are shown for TB and DE models, respectively. Contrary to a periodic mass term we see that, as could be expected, periodic gating breaks electron-hole symmetry and shifts the Fermi level to higher energies. 
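The condition $H=\hat{\sigma}_y H^*\hat{\sigma}_y$ can be made concrete in momentum space, where complex conjugation of $\hat{p}=-i\hbar\nabla$ flips the sign of $\mathbf{k}$. The following sketch, our own illustration rather than a calculation from the text, checks that the scalar-potential Hamiltonian of Eq.~\eqref{eq:DE} satisfies the condition, while a mass term does not:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

def H_dirac(k, delta, vF=1.0):
    """Momentum-space Hamiltonian of Eq. (eq:DE) with a scalar potential,
    i.e., the same delta on both diagonal entries (the '+' sign choice)."""
    kx, ky = k
    return np.array([[delta, vF * (kx - 1j * ky)],
                     [vF * (kx + 1j * ky), delta]])

def H_mass(k, delta, vF=1.0):
    """Same kinetic term, but with a mass term: +delta and -delta."""
    kx, ky = k
    return np.array([[delta, vF * (kx - 1j * ky)],
                     [vF * (kx + 1j * ky), -delta]])

# Time reversal in momentum space reads H(k) -> sigma_y H(-k)^* sigma_y,
# since complex conjugation of p = -i*d/dx flips the sign of k.
for _ in range(100):
    k = rng.normal(size=2)
    delta = 0.1 + abs(rng.normal())   # kept away from zero
    # A scalar potential preserves the symmetry...
    assert np.allclose(sigma_y @ H_dirac(-k, delta).conj() @ sigma_y,
                       H_dirac(k, delta))
    # ...while a mass term violates it (the condition maps delta -> -delta).
    assert not np.allclose(sigma_y @ H_mass(-k, delta).conj() @ sigma_y,
                           H_mass(k, delta))
```

Within the DE model this is why the scalar superlattice potential cannot open a gap, whereas the staggered (mass) potential can.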
Comparing DE and TB results, we note that there is quite good overall agreement between the two methods. However, a crucial difference emerges when considering the bands in the close vicinity of the Fermi level, as illustrated in the inset: while the DE results suggest that periodic gating does not open a band gap, TB results demonstrate that a band gap \emph{does} occur right at the Fermi level. We attribute this to a local sublattice symmetry breaking at the edge of the gate region and substantiate this claim below. We note that while a band gap appears, the magnitude of the band gap is of the order of tens of meV, an order of magnitude smaller than that of the corresponding perforated graphene structure. This dramatic qualitative difference between TB and DE modeling agrees with previous results \cite{Zhang2010} comparing density functional theory and Dirac models for rotated triangular geometries. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{bandL12R5V10.eps} \caption{(Color online) Band structures of a periodically gated $\{12,5\}$ lattice. The solid, black lines show results for perforated graphene, calculated using a TB model. The blue, dashed lines correspond to graphene with a periodic gate potential of $V_0=10 t$, calculated using the TB model. Bands near the original Dirac energy of graphene are shown. For the gated structure, the Fermi level is far removed from the Dirac energy of graphene, outside the range of the figure, and no band gap occurs at the Fermi level for this structure. } \label{fig:bandL12R5V10} \end{center} \end{figure} Above, we illustrated how a sufficiently large mass term serves as an excellent model of a hole in graphene, see Fig.~\ref{fig:bandL12R5Hole}. Because a simple scalar potential cannot confine Dirac electrons \cite{Novoselov2006,Beenakker2008}, one would expect that modeling the hole via a large gate potential would be inaccurate.
In Fig.~\ref{fig:bandL12R5V10} we show the band structure of periodically gated graphene, with a very large gate potential of $V_0 = 10t$.\cite{footnote:V10} For comparison, we also show the corresponding perforated graphene structure. Contrary to the aforementioned expectations, we see that the periodically gated graphene structure is an excellent model of perforated graphene. We note that increasing the gate potential further results in near-perfect agreement between the periodically gated and the perforated structures. With a gate potential of $V_0 = 10t$ we are far beyond the linear regime of the band structure in which a Dirac treatment of graphene is viable, which explains why the theoretical arguments pertaining to Dirac electrons break down in this case. \subsection{Gate region overlap} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{ADoverlap.eps} \caption{(Color online) Overlap of eigenstates with the gate region, calculated at the $\Gamma$ point for the $\{12,5\}$ lattice with a gate potential (upper panel) or mass term (lower panel) of $V_0=t/2$. The Fermi level is indicated by the dashed, vertical line. The inset in the upper panel shows the eigenstate corresponding to the state highlighted with a circle. The size of the filled, colored circles indicates the absolute square of the wavefunction. The black circle indicates the radius of the gate region. } \label{fig:ADoverlap} \end{center} \end{figure} Returning now to the band structure for the periodically gated $\{12,5\}$ lattice, shown in Fig.~\ref{fig:bandL12R5V0p5}, we note the appearance of a nearly dispersionless band near $-0.2$~eV. This state is localized predominantly within the gate region. In the upper panel of Fig.~\ref{fig:ADoverlap} we show the overlap of all eigenstates with the gate region as a function of energy, calculated at the $\Gamma$ point. For comparison, we show the corresponding results for a periodic mass term in the lower panel.
We note that several states having significant overlap with the gate region exist, also at energies below the Fermi level. An example of one such state is shown in the figure. As the gate potential is increased further, these states become less energetically favorable, and are eventually all situated at energies well above the Fermi level. In stark contrast to this, a periodic mass term dictates perfect electron-hole symmetry, and thus always predicts states below the Fermi level with significant overlap with the gate region. In fact, as the mass term is increased, states nearly entirely localized within the mass term region develop at both extrema of the spectrum. Below, we will illustrate how this fundamental difference between a mass term and a scalar potential manifests itself via the dependence of the band gap on the magnitude of the gate potential for periodically gated graphene. \section{Band gaps in periodically gated graphene} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{GapVsGate.eps} \caption{(Color online) Band gap at the Fermi level for periodically gated graphene, as a function of the gate radius (in units of the graphene lattice constant) times the gate potential in units of the TB hopping term. Results are shown for three different lattices (solid lines), with roughly equal ratios $R/L$ of the radius of the gate region to the side length of the hexagonal unit cell. The dashed line shows the results for the $\{7,2.8\}$ lattice, which has roughly the same $R/L$ ratio. Inset: Results for $\{12,5\}$, when the potential is replaced by a mass term. The dashed line indicates the band gap for a perforated graphene structure. } \label{fig:gapVsGate} \end{center} \end{figure} Having determined that a TB treatment of periodically gated graphene does indeed suggest the opening of a band gap at the Fermi level, we now proceed to investigate the behavior of the band gap magnitude in more detail.
From here on, all results shown have been calculated using the TB model. In Fig.~\ref{fig:gapVsGate}, the solid lines show the magnitude of the band gap at the Fermi level for three different lattices, $\{7,3\}$, $\{12,5\}$, and $\{15, 6.3\}$, all of which have approximately the same ratio $R/L$ of gate radius to side length of the hexagonal unit cell. When plotted against the gate radius times the gate potential, the resulting curves emerge as simple scaled versions of each other, as seen in Fig.~\ref{fig:gapVsGate}. While, initially, raising the gate potential increases the band gap, a maximum gap is reached at a certain gate potential, after which the band gap diminishes. This behavior is completely different from the case where the potential is replaced by a mass term, as illustrated in the inset of the figure. In this case, the band gap continues to increase until a saturation point is reached in the limit where the structure resembles that of perforated graphene. While the three periodic lattices indicated with solid lines in Fig.~\ref{fig:gapVsGate} result in similar dependencies of the gap on $RV_0$, we stress that this is \emph{not} the case for all lattices, even if the ratio $R/L$ is approximately the same. To illustrate this, we also show in Fig.~\ref{fig:gapVsGate} results for the $\{7, 2.8\}$ lattice. The dependence of the band gap on gate potential differs markedly for this lattice. This indicates that the exact geometry at the edge of the gate region plays a large role in determining the band gap, in agreement with findings in Ref.~\onlinecite{Zhang2010}. \subsection{Edge dependence} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{GapVsGateL7.eps} \caption{(Color online) Band gap at the Fermi level for periodically gated graphene. The band gap is shown as a function of the gate radius (in units of the graphene lattice constant) times the gate potential in units of the TB hopping term.
Results are shown for lattices $\{7,R\}$ with varying $R$. Below $RV_0\simeq 3t$ all gaps are direct ($\Gamma$--$\Gamma$). Above this transition the $\Gamma$--$\Gamma$ gaps (dashed lines) exceed the indirect $\Gamma$--$K$ gap. The unit cells of the $\{7,R\}$ lattices are shown above, in order of increasing radius. The edge geometry is highlighted. } \label{fig:gapVsGateL7} \end{center} \end{figure} To illustrate in more detail the role of the edge in determining the band gap, we show in Fig.~\ref{fig:gapVsGateL7} the band gap as a function of the gate potential, for lattices $\{7,R\}$ with increasing values of $R$. The radius is increased in the smallest steps that result in new geometries. The structures with $R\in\{2.5,\, 3.0,\, 3.3\}$ show quite similar behaviors. In particular, a maximum band gap is reached at $RV_0\simeq 2t$ in all three cases. The band gap then closes, but reopens once more as the gate potential is increased further. Around $RV_0\simeq 3t$ the band gap changes from direct ($\Gamma$--$\Gamma$) to indirect ($\Gamma$--$K$) as the gate voltage is raised. The dashed lines in the figure illustrate the $\Gamma$--$\Gamma$ gap above the direct-to-indirect gap transition. However, after a slight further increase of the gate voltage, the final closing of the band gap occurs as the energy at the $K$ point moves below that at the $\Gamma$ point, resulting in crossing bands at the Fermi level. Finally, we note that while the three lattices show similar behavior, the dependence of the band gap on the radius of the gate region is clearly not monotonic, and a larger gate region does not necessarily result in a larger band gap. In contrast to the similarities of the other three structures, the dependence of the band gap on the gate potential for the $\{7,2.8\}$ lattice differs greatly. In the upper panel of Fig.~\ref{fig:gapVsGateL7} we show the unit cells corresponding to the $\{7,R\}$ lattices considered, with the edge geometries highlighted.
The $\{7,2.8\}$ lattice stands out from the rest of the geometries, in that the entire edge of the gate region is made up of several pure zigzag edges. We stress that the sublattice imbalance for the entire edge is zero, while there is a local sublattice imbalance on the individual straight zigzag edges. In contrast to this, the other geometries have gate regions with zigzag as well as armchair edges. We find that the general trend is for zigzag edges to quench the band gap of the periodically gated graphene structures, which we have also verified via calculations of gate regions of hexagonal symmetry, which always have pure zigzag edges. This trend can be explained by noting that pure zigzag edges, such as in zigzag graphene nanoribbons \cite{Nakada1996,Brey2006} or graphene antidot lattices with triangular holes \cite{Vanevic2009,Furst2009a,Gunst2011}, lead to localized midgap states.\cite{Inui1994} For periodically gated graphene the edge is defined by a finite potential, rather than a complete absence of carbon atoms, so we expect the tendency of electrons to localize on the edge to be less pronounced. Nevertheless, our findings suggest that local zigzag geometry still has the effect of quenching the band gap. Since, in general, larger circular holes will have longer regions of zigzag geometry at the edge of the gate region, this explains why larger gate regions will not invariably lead to larger band gaps. In the present case, we note that the $\{7,3.3\}$ structure indeed has a significantly smaller band gap than the $\{7, 2.5\}$ structure. The $\{7,3.0\}$ lattice is unique in that the equivalent of dangling bonds is present at the edge of the gate region, which further decreases the magnitude of the band gap.
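The connection between sublattice imbalance and midgap states\cite{Inui1994} rests on a simple rank argument: a bipartite hopping Hamiltonian with $N_A \neq N_B$ sites on the two sublattices has at least $|N_A - N_B|$ zero-energy eigenstates. A small numerical sketch (our own, with a generic hopping block standing in for an imbalanced zigzag-edged region):

```python
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b = 5, 3                      # sublattice imbalance N_A - N_B = 2
b = rng.normal(size=(n_a, n_b))      # generic A-to-B hopping block

# Bipartite Hamiltonian: H = [[0, B], [B^T, 0]]
h = np.block([[np.zeros((n_a, n_a)), b],
              [b.T, np.zeros((n_b, n_b))]])

energies = np.linalg.eigvalsh(h)
n_zero = np.sum(np.abs(energies) < 1e-10)   # midgap (zero-energy) states
```

The spectrum of any such bipartite Hamiltonian is also symmetric about zero, which is the electron-hole symmetry invoked earlier for the mass-term case.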
\subsection{Dependence on gate region overlap} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{OLvsGate.eps} \caption{(Color online) Overlap of the eigenstates nearest the Fermi level as a function of the gate radius (in units of the graphene lattice constant) times the gate potential in units of the TB hopping term. The solid lines show the overlap of the highest valence band state, while the dashed lines show the overlap of the lowest conduction band state. The overlap is shown relative to the ratio between the area of the gate region and the unit cell area. The upper panel repeats the data from Fig.~\ref{fig:gapVsGateL7} showing the band gap. Note that the overlaps of the two states are equal exactly when the band gap is at a maximum, as highlighted for the $\{7,2.5\}$ lattice with the vertical black line. The left (right) inset illustrates schematically the dependence of the conduction and valence band edges on the gate potential, in the regime where the overlap with the gate of the state at the valence band edge is smaller (larger) than that of the state at the conduction band edge. } \label{fig:OLvsV} \end{center} \end{figure} First-order perturbation theory suggests that the dependence of the energy of an eigenstate on the gate potential is proportional to the overlap of the state with the gate region, i.e., $\partial E/\partial V_0\propto \int_\mathrm{gate} d\mathbf{r}\, |\Psi(\mathbf{r})|^2$. We thus expect the overlap with the gate region of the two eigenstates closest to the Fermi level to be a crucial parameter in describing the opening and quenching of the band gap as the gate voltage is varied. We will also see that it illustrates the crucial differences between graphene and ordinary two-dimensional electron gases. In Fig.~\ref{fig:OLvsV} we show the overlap of the eigenstates with the gate region as a function of the magnitude of the potential.
The overlap is shown for the eigenstates at the valence and conduction band edges, and normalized by the ratio between gate and unit cell areas. A value of one thus indicates that the overlap with the gate region is the same as if the eigenstate were evenly distributed throughout the unit cell, while a value larger (smaller) than one suggests that the eigenstate is localized predominantly inside (outside) the gate region. As we saw also in Fig.~\ref{fig:ADoverlap}, the states near the Fermi level both have quite large overlaps with the gate region, even when the potential is of the order of the TB hopping term. Initially, for low values of the gate potential, the overlap with the gate region of the unoccupied state in the conduction band is larger than the corresponding overlap of the occupied state in the valence band. Relying on first-order perturbation theory we thus expect the energy of the conduction band state to increase more strongly with the gate potential than that of the valence band state, resulting in a larger band gap as the gate potential is raised, as illustrated in the left inset of Fig.~\ref{fig:OLvsV}. However, contrary to what would be expected for an ordinary two-dimensional electron gas, we see that as the potential is increased further, the valence band state also becomes localized predominantly within the gate region. Indeed, eventually the overlap of the valence band state with the gate region becomes larger than that of the conduction band state, which results in a quenching of the band gap as the potential is increased further, as illustrated in the right inset of Fig.~\ref{fig:OLvsV}. We note that the point where the overlaps of the two states with the gate region become equal exactly matches the point where the band gap is at a maximum. This is illustrated by the vertical, black line in the figure.
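The first-order relation invoked above, $\partial E/\partial V_0 \propto \int_\mathrm{gate} d\mathbf{r}\, |\Psi(\mathbf{r})|^2$, is the Hellmann--Feynman theorem, and it is easy to verify numerically in a toy model (our own one-dimensional sketch, with a hypothetical gate on the first four sites of a chain):

```python
import numpy as np

n, gate = 12, [0, 1, 2, 3]           # toy chain; "gate" on the first 4 sites
d = np.zeros((n, n))
d[gate, gate] = 1.0                  # projector onto the gate region

h0 = np.zeros((n, n))
for i in range(n - 1):
    h0[i, i + 1] = h0[i + 1, i] = -1.0   # hopping t = 1

def e0(v0):
    """Ground-state energy of the gated chain."""
    return np.linalg.eigvalsh(h0 + v0 * d)[0]

v0, dv = 0.3, 1e-5
_, vecs = np.linalg.eigh(h0 + v0 * d)
overlap = vecs[:, 0] @ d @ vecs[:, 0]           # gate-region weight of psi_0
slope = (e0(v0 + dv) - e0(v0 - dv)) / (2 * dv)  # dE/dV0 by finite differences
```

The finite-difference slope and the gate-region overlap agree to numerical precision, which is the content of the perturbative argument used in the text.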
The strong influence of the exact edge geometry is apparent, manifesting itself in a qualitatively different dependence of the overlap on gate voltage for the $\{7,2.8\}$ lattice. In particular, while the gate region overlap of the valence band state of the $\{7,2.5\}$ and $\{7,3.3\}$ lattices initially decreases with the size of the potential, both valence and conduction band states immediately start localizing within the gate region for the $\{7,2.8\}$ structure. This leads to much faster quenching of the initial band gap. \subsection{Realistic potential profiles} As we have illustrated above, the band gap of periodically gated graphene depends strongly on the edge geometry at the boundary between the gated and the non-gated region. So far, we have used a simple step function to model the spatial dependence of the potential due to the gate. However, it is obvious that in actual realizations of periodically gated graphene, some form of smoothing of the potential will inevitably be present. Due to the intricate relationship between the band gap and the edge geometry, it is relevant to investigate the effect of smoothing out the potential. In particular, since the DE model predicts no gap at all, one may wonder whether smoothing will cause the gap to close entirely. Previous studies have included smoothing of the gate potential, but with a smearing distance of the order of $0.1$~\AA,\cite{Zhang2010} small enough that an atomically resolved edge can still be defined. To model a more realistic gate potential, we use an analytical expression for the potential distribution resulting from a constant potential disk in an insulating plane, obtained by direct solution of the Laplace equation.
In cylindrical coordinates, this reads\cite{Nanis1979} \begin{equation} V(r,z) = \frac{2V_0}{\pi}\sin^{-1}\!\!\left(\frac{2R}{\sqrt{\left(r-R\right)^2+z^2}+\sqrt{\left(r+R\right)^2+z^2}}\right), \label{eq:potential} \end{equation} with $z$ the distance above the gate, while $r$ is the distance from the center of the disk. Note that for $z=0$ the expression simplifies to $V(r,0) = V_0$ for $r\leq R$ while $V(r,0) = 2V_0 \sin^{-1}(R/r)/\pi$ for $r>R$. Of course, more exact approaches such as finite-element methods could be used to determine the potential distribution from a realistic back gate. However, we choose to use this relatively simple analytical expression, since we are mainly interested in discussing the general trends that occur as the edge of the potential region becomes less well-defined. One could imagine more elaborate setups that would generate sharper potential distributions. To include such possibilities, we consider a modified potential distribution $\tilde{V}(r,z) = V_0 [V(r,z)/V_0]^\eta$, with the additional parameter $\eta$, which allows us to control the smoothing of the potential further. As $\eta\rightarrow\infty$ we approach the limit where the potential is described by a Heaviside step function, as in the results presented so far. We note that Eq.~(\ref{eq:potential}) is derived for an isolated constant potential disk rather than a periodic array of gates. Ignoring coupling between the different gates, one simple way of improving this model would be to add the potentials generated from the nearest-neighbor gates, to account for the overlap between them. However, this would merely serve to smooth out the potential further, as well as add a constant background potential, effectively decreasing the height of the potential barrier.
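For concreteness, Eq.~(\ref{eq:potential}) and the sharpened profile $\tilde{V} = V_0[V/V_0]^\eta$ are straightforward to evaluate; the short sketch below (our own) also checks the two in-plane ($z=0$) limits quoted after the equation:

```python
import numpy as np

def v_disk(r, z, v0=1.0, R=1.0):
    """Eq. (potential): field of a constant-potential disk of radius R."""
    a = np.hypot(r - R, z)
    b = np.hypot(r + R, z)
    return (2.0 * v0 / np.pi) * np.arcsin(2.0 * R / (a + b))

def v_smooth(r, z, eta, v0=1.0, R=1.0):
    """Sharpened profile V0*(V/V0)**eta; eta -> infinity gives a step."""
    return v0 * (v_disk(r, z, v0, R) / v0) ** eta

# In-plane limits: V = V0 inside the disk, (2V0/pi)*arcsin(R/r) outside
inside = v_disk(0.5, 0.0)
outside = v_disk(2.0, 0.0)
```

Raising `eta` suppresses the tail outside the disk while leaving the interior plateau at $V_0$, which is exactly the step-function limit used in the preceding sections.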
Here, we are interested in illustrating the critical dependence of the band gap on smoothing out the potential, so we are adopting a `best case' scenario, which also means that we will use $z=0$ throughout, assuming that the graphene layer is deposited directly on the periodic gates, with no insulating layer in-between. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{GapVsGateL12R5smooth.eps} \caption{(Color online) Band gap as a function of gate potential for the $\{12,5\}$ periodically gated graphene lattice. The potential distribution due to the periodic gates is modelled via Eq.~(\ref{eq:potential}). We assume the distance from the plane of the gate to the graphene layer is zero. Results are shown for increasing values of the $\eta$ parameter, which determines the strength of the smoothing. The inset illustrates the potential distribution $\tilde{V}/V_0$ for each case, with markers indicating the radial position of the carbon atoms. } \label{fig:GapvsGateSmooth} \end{center} \end{figure} In Fig.~\ref{fig:GapvsGateSmooth} we show the band gap for a $\{12,5\}$ lattice as a function of gate potential, for increasing values of the $\eta$ parameter. While for $\eta\rightarrow\infty$, corresponding to a Heaviside step function distribution, the maximum band gap is about $33$~meV, the band gap for $\eta=1$ is drastically lower, with a maximum value of only $0.9$~meV. As we artificially decrease the amount of smoothing by raising the value of $\eta$, we slowly recover the maximum band gap attainable. However, we stress that even for $\eta=20$, which as shown in the inset of the figure amounts to smoothing over a range of little more than a single carbon atom, the maximum band gap has decreased by more than 20\% from the value at $\eta\rightarrow\infty$. This suggests that the band gap does indeed critically depend on an edge effect, which is very quickly washed out as the potential step is smoothed out over several carbon atoms.
This is in agreement with previous studies, which have indicated that intervalley scattering is crucial in describing the band gap of periodically gated graphene.\cite{Zhang2010} In order for a scalar potential to induce intervalley scattering, it must vary significantly on a scale of the carbon-carbon distance, so that a local sublattice asymmetry is introduced. \section{Summary} By employing a tight-binding description of graphene, we have shown that, contrary to what is predicted on the basis of a continuum model, it is indeed possible to induce a band gap in graphene via periodic, electrostatic gating. Further, if the magnitude of the potential is made sufficiently large, periodically gated graphene is an accurate model for perforated graphene structures, with one caveat, namely that the Fermi level is far removed from the location of the band gap. For smaller, more realistic values of the gate potential, a band gap appears right at the Fermi level. However, we find that the band gap is orders of magnitude smaller than that of the corresponding perforated graphene structure. The dependence of the band gap on the gate potential is highly non-trivial, and entirely different from the case where graphene is modulated by a periodic mass term. In particular, a maximum magnitude of the band gap is reached, after which increasing the gate potential further quenches the gap. Also, a transition from a direct ($\Gamma$--$\Gamma$) to an indirect ($\Gamma$--$K$) semiconductor occurs for larger gate potentials. The exact magnitude and dependence of the band gap on gate potential depends critically on the precise geometry of the edge of the gate region. In particular, large regions of local zigzag geometries tend to result in significantly smaller band gaps than geometries where armchair edges dominate. Because the emergence of a band gap relies on a local sublattice asymmetry, we find that it is extremely fragile.
If smoothing is introduced in the potential distribution, such that the edge of the gate region is no longer atomically resolved, the magnitude of the band gap drops significantly. Even if the smoothing occurs over a range of little more than a single carbon atom, we find that the maximum band gap decreases to less than 80\% of the value for a perfectly defined edge. This presents a serious challenge to opening a band gap in graphene via periodic gating. \begin{acknowledgments} The work by J.G.P. is financially supported by the Danish Council for Independent Research, FTP grant numbers 11-105204 and 11-120941. The Center for Nanostructured Graphene (CNG) is sponsored by the Danish National Research Foundation. We thank Prof. Antti-Pekka Jauho for helpful comments during the development of the manuscript. \end{acknowledgments}
\section{\large{Introduction}} \label{sec:Introduction} Let ${\mathcal S}(\RR^+)$ denote the space of restrictions to $\RR^+$ of functions in ${\mathcal S}(\RR)$. Say $0 \not\equiv f_0 \in {\mathcal S}(\RR^+)$, and let \[f(s) = sf_0(s).\] One then has the {\em Calder\'on formula} for $f$: if $c \in (0,\infty)$ is defined by \[c = \int_0^{\infty} |f(t)|^2 \frac{dt}{t} = \int_0^{\infty} t|f_0(t)|^2 dt,\] then for all $s > 0$, \begin{equation} \label{cald} \int_0^{\infty} |f(ts)|^2 \frac{dt}{t} = c < \infty. \end{equation} The even function $h(\xi) = f(\xi^2)$ is then in ${\mathcal S}(\RR)$ and satisfies \begin{equation} \label{cald1} \int_0^{\infty} |h(t\xi)|^2 \frac{dt}{t} = \frac{c}{2} < \infty. \end{equation} (In fact, all even functions in ${\mathcal S}(\RR)$ satisfying (\ref{cald1}) arise in this manner). Its inverse Fourier transform $\psi = \check{h}$ is admissible (i.e. is a continuous wavelet). That is, for some $c' > 0$, \begin{equation} \label{ctswvrn0} \int_0^{\infty} \|F*\psi_t\|^2_2 \frac{dt}{t} = c' \|F\|^2_2 \end{equation} for all $F \in L^2(\RR^n)$. Here, as usual, $\psi_t(x) = t^{-1} \psi(x/t)$.\\ We prefer to write, formally, \[ \check{h} = f(-d^2/dx^2) \delta; \] the formal justification being that \[ (f(-d^2/dx^2) \delta)\hat{\:} = f(\xi^2) = h(\xi). \] Thus $f(-d^2/dx^2) \delta$ is a continuous wavelet on $\RR$. Our program is to construct analogues of continuous wavelets, on much more general spaces, by replacing the positive number $s$ in (\ref{cald}) by a positive self-adjoint operator $T$ on a Hilbert space ${\cal H}$. If $P$ is the projection onto the null space of $T$, by the spectral theorem we obtain the relation \begin{equation} \label{gelmay1} \int_0^{\infty} |f|^2(tT) \frac{dt}{t} = c(I-P), \end{equation} where the integral in (\ref{gelmay1}) converges strongly. (\ref{gelmay1}) will be justified in the next section. Taking $T$ to be $-d^2/dx^2$ on $\RR$ leads to the continuous wavelet $f(-d^2/dx^2) \delta$ on $\RR$.
(Of course, on $\RR$, $P = 0$.) We began our program of looking at more general positive self-adjoint operators $T$, in order to construct continuous wavelets, in our article \cite{gm1}. There we took $T$ to be the sublaplacian $L$ on $L^2(G)$, where $G$ is a stratified Lie group, and thereby obtained continuous wavelets and frames on such $G$. In fact, in that context, $f(L)\delta$ was a continuous wavelet. (Again, in that context, $P$ was zero.) (Our article \cite{gm1} was motivated by the second author's thesis \cite{Mayelithesis05}, where it was shown that if $f(s) = se^{-s}$, then the ``Mexican hat'' $f(L)\delta$ is a continuous wavelet on the Heisenberg group.) In this article we will look at the (much more practical!) situation in which $T$ is the Laplace-Beltrami operator on $L^2({\bf M})$, where ${\bf M}$ is a smooth compact oriented Riemannian manifold without boundary. We will construct analogues of continuous Schwartz wavelets in this context, and will obtain explicit formulas for them if ${\bf M}$ is the sphere or the torus. In a sequel article (\cite{gmfr}) we shall use a discrete analogue of (\ref{gelmay1}) to construct nearly tight frames in this situation, and show that they are appropriately well-localized (specifically, that they satisfy a space-frequency analysis). In both of these articles, $P$ will be the projection onto the one-dimensional space of constant functions. We now summarize our results and methods. We discuss prior work on wavelets on manifolds, especially the work of Narcowich, Petrushev and Ward on the sphere \cite{narc1}, \cite{narc2}, at the end of this introduction. \\ In the model situation (\ref{cald}) -- (\ref{ctswvrn0}) on $\RR$, note that the kernel of the operator $F \rightarrow F*\psi_t$ is $K_t(x,y) := t^{-1} \psi((x-y)/t)$. Since $h(0)=0$, we have $\int K_t(x,y) dx = 0$ for all $y$ and $\int K_t(x,y) dy = 0$ for all $x$. Also $K_t(x,y)$ is smooth in $t,x,y$ for $t > 0$.
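The operator identity (\ref{gelmay1}) is easy to test in finite dimensions (a numerical sketch of our own, not part of the papers cited above): take $f(s) = se^{-s}$, for which $c = \int_0^\infty t\,e^{-2t}\,dt = 1/4$, and let $T$ be the graph Laplacian of a path, whose null space is spanned by the constant vector, so $P$ projects onto the constants.

```python
import numpy as np
from scipy.integrate import quad

f = lambda s: s * np.exp(-s)                       # f(s) = s e^{-s}
c = quad(lambda t: f(t) ** 2 / t, 0, np.inf)[0]    # Calderon constant (= 1/4)

# T: graph Laplacian of a path on 4 vertices; null space = constants
T = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

lam, U = np.linalg.eigh(T)
# integrate |f|^2(t*lambda) dt/t separately for each eigenvalue of T
diag = [0.0 if l < 1e-12 else
        quad(lambda t: f(t * l) ** 2 / t, 0, np.inf)[0] for l in lam]
lhs = U @ np.diag(diag) @ U.T                      # int_0^inf |f|^2(tT) dt/t

v = np.ones(4) / 2.0                               # normalized null vector
P = np.outer(v, v)                                 # projection onto ker T
rhs = c * (np.eye(4) - P)
```

Each nonzero eigenvalue contributes the same Calder\'on constant $c$ (by the substitution $u = t\lambda$), while the zero eigenvalue contributes nothing, which is precisely the statement of (\ref{gelmay1}).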
Motivated by this model case, we define continuous wavelets on ${\bf M}$ as follows. Suppose that the function $K_t(x,y)$ is smooth for $t > 0$, $x,y \in {\bf M}$. For $t > 0$, define $T_t: L^2({\bf M}) \rightarrow C^{\infty}({\bf M})$ to be the operator with kernel $K_t$. We define $K_t(x,y)$ to be a {\em continuous wavelet} on ${\bf M}$, provided that for some $c > 0$, \begin{equation} \label{ttfiso} \int_0^{\infty} \|T_t F\|^2_2 \frac{dt}{t} = c \|(I-P)F\|^2_2, \end{equation} for all $F \in L^2({\bf M})$, and that for any $t > 0$, $\int_{\bf M} K_t(x,y) d\mu(x) = 0$ for all $y$ and $\int_{\bf M} K_t(x,y) d\mu(y) = 0$ for all $x$. (Here $\mu$ is the measure on ${\bf M}$ arising from integration with respect to the volume form.) It is then easy to see that, if $K_t$ is the kernel of $f(t^2\Delta)$ ($f$ as before), where $\Delta = \Delta_{\bf M}$ is the Laplace-Beltrami operator on ${\bf M}$, then $K_t(x,y)$ is a continuous wavelet on ${\bf M}$. In particular, (\ref{ttfiso}) follows easily from (\ref{gelmay1}), simply by applying both sides of (\ref{gelmay1}) to $F$, taking the inner product of both sides with $F$, and making the change of variables which replaces $t$ by $t^2$ in the integral. Note that $K_t(x,y) = [f(t^2\Delta)\delta_y](x)$, which is analogous to $f(-d^2/dx^2) \delta$ being a continuous wavelet on $\RR$.\\ This then gives an $L^2$ theory of continuous wavelets on manifolds. However, in practice, one wishes to go beyond this theory, looking at continuous wavelets on other function spaces, and discretizing them to obtain frames. For this, one needs the wavelets to have further properties. Fortunately, as we shall see, if $K_t$ is the kernel of $f(t^2\Delta)$, these properties are present. Specifically, let us return to our model situation; let us however work on $\RR^n$, not just $\RR^1$. Then $K_t(x,y)$, the kernel of $f(t^2\Delta)$, would be of the form $t^{-n} \psi((x-y)/t)$ for some $\psi \in \mathcal{S}$.
(Here $\Delta$ is the usual Laplacian on $\RR^n$, and $\hat{\psi} = G$, where $G(\xi) = f(|\xi|^2)$.) For any $N, \alpha, \beta$, there would thus exist $C_{N, \alpha, \beta}$ such that \begin{equation}\notag t^{n+|\alpha|+|\beta|} \left|\left(\frac{|x-y|}{t}\right)^N \partial_x^{\alpha} \partial_y^{\beta} K_t(x,y)\right| \leq C_{N, \alpha, \beta} \end{equation} for all $t, x, y$. Such estimates are essential in the theory of wavelets on $\RR^n$. We therefore make the following definition: \begin{definition} \label{ctsswav} Let $K_t(x,y)$ be a continuous wavelet on ${\bf M}$. Then we say that $K_t(x,y)$ is a {\em continuous ${\cal S}$-wavelet} on ${\bf M}$ provided that:\\ \ \\ For every pair of $C^{\infty}$ differential operators $X$ (in $x$) and $Y$ (in $y$) on ${\bf M}$, and for every integer $N \geq 0$, there exists $C_{N,X,Y}$ as follows. Suppose $\deg X = j$ and $\deg Y = k$. Then \begin{equation} \label{xykt} t^{n+j+k} \left|\left(\frac{d(x,y)}{t}\right)^N XYK_t(x,y)\right| \leq C_{N,X,Y} \end{equation} for all $t > 0$ and all $x,y \in {\bf M}$. \end{definition} Here $d$ is the geodesic distance on ${\bf M}$. We use the terminology ``continuous ${\cal S}$-wavelet'' to express the idea that these wavelets are analogous to Schwartz wavelets on $\RR^n$.\\ \ \\ In Lemma \ref{manmol}, we will show the following key result:\\ \ \\ {\bf Lemma \ref{manmol}} If $K_t$ is the kernel of $f(t^2\Delta)$ ($f$ as before), then $K_t(x,y)$ satisfies (\ref{xykt}) (and hence is a continuous ${\cal S}$-wavelet on ${\bf M}$).\\ \ \\ Our proof of this lemma uses the theory of pseudodifferential operators (most crucially, a result of Strichartz \cite{Stric72}), and Huygens' principle. In Theorem \ref{hldchar} we shall show that continuous ${\cal S}$-wavelets are well adapted to the study of certain other function spaces.
Specifically, we shall show the following generalization of a theorem of Holschneider and Tchamitchian (\cite{hotch}), who worked on the real line, and characterized the wavelet transforms of the spaces of H\"older continuous functions (for H\"older exponent strictly between $0$ and $1$).\\ \ \\ {\bf Theorem \ref{hldchar}}\\ \textit {Let $K_t(x,y)$ be a continuous ${\cal S}$-wavelet on ${\bf M}$, and, for $t > 0$, let $T_t$ be the operator on $L^2$ with kernel $K_t$. Suppose $F \in L^2({\bf M})$. Then:\\ (a) If $F$ is H\"older continuous, with H\"older exponent $\alpha$ ($0 < \alpha \leq 1$), then for some $C > 0$, \begin{equation} \label{hldcharway0} \|T_t F\| \leq C t^{\alpha} \end{equation} for all $t > 0$. (Here $\|\:\|$ denotes sup norm.)\\ (b) Conversely, say $0 < \alpha < 1$, $C > 0$, and that $F$ satisfies (\ref{hldcharway0}) for all $t > 0$. Then $F$ is H\"older continuous, with H\"older exponent $\alpha$.}\\ \ \\ The proof of this result will be a straightforward generalization of the argument of Holschneider and Tchamitchian (\cite{hotch}). Our construction of nearly tight frames in our sequel article \cite{gmfr} will also rely heavily on Lemma \ref{manmol}. In another sequel article (already available \cite{gm3}), we will use Lemma \ref{manmol} to show that one can determine whether $F$ is in a Besov space, solely from a knowledge of the size of its frame coefficients. In a future article, we hope to study the same question for Triebel-Lizorkin spaces. (The analogous problems on $\RR^n$ were solved in \cite{fj} and \cite{fj2}.) \\ It should be noted that in (\ref{xykt}) we assume nothing about the $t$ derivatives of $K_t(x,y)$, and no such information is needed in proving Theorem \ref{hldchar}. 
However, in fact, if $K_t$ is the kernel of $f(t^2\Delta)$, one does have the following improvement on (\ref{xykt}):\\ \ \\ For every pair of $C^{\infty}$ differential operators $X$ (in $x$) and $Y$ (in $y$) on ${\bf M}$, and for all integers $m, N \geq 0$, there exists $C_{N,m,X,Y}$ as follows. Suppose $\deg X = j$ and $\deg Y = k$. Then \begin{equation} \label{diagest3} t^{m+n+j+k} \left|\left(\frac{d(x,y)}{t}\right)^N \left(\frac{\partial}{\partial t}\right)^m XYK_t(x,y)\right| \leq C_{N,m,X,Y} \end{equation} for all $t > 0$ and all $x,y \in {\bf M}$. \\ This is in fact an easy consequence of (\ref{xykt}). For example, $\frac{\partial}{\partial t}K_t(x,y)$ is the kernel of $2t \Delta f(t^2 \Delta)$, and hence equals $2t \Delta_x K_t(x,y)$, which we can estimate by using (\ref{xykt}).\\ We should explain in what sense $K_t(x,y)$ (the kernel of $f(t^2\Delta)$) deserves to be called a {\em wavelet}. In our model situation on $\RR^n$, $K_t(x,y) = t^{-n} \psi((x-y)/t)$ behaves in evident, well-known ways under dilation and translation: \begin{eqnarray} K_{rt}(rx,ry) & = & r^{-n}K_t(x,y) \mbox{ for } r > 0; \mbox{ and } \label{ktrx} \\ K_t(x+a,y+a) & = & K_t(x,y) \mbox{ for } a \in \RR^n \label{kta} \end{eqnarray} Now (\ref{ktrx}) says that, in the model case, $K_t(x,y)$ is homogeneous of degree $-n$ in $(t,x,y)$, so that for any $m, \alpha, \beta$, $t^{n+m+|\alpha|+|\beta|} \partial_t^m \partial_x^{\alpha} \partial_y^{\beta} K_t(x,y)$ is homogeneous of degree $0$, and is hence bounded as a function of $(t,x,y)$. An analogous fact to this boundedness holds on ${\bf M}$, by (\ref{diagest3}) with $N=0$. As for (\ref{kta}), a general manifold ${\bf M}$ has nothing akin to translations, but in Section 5 we shall discuss the situation in which ${\bf M}$ has a transitive group $G$ of smooth metric isometries. In that case one can easily see that $K_t(Tx,Ty) = K_t(x,y)$ for all $T \in G$, which is analogous to (\ref{kta}). 
(Manifolds with such a group $G$ are usually called {\em homogeneous}. Obvious examples are the sphere and the torus.)\\ \ \\ Of course, to apply our algorithm in practice, one would need to compute the kernels $K_t$ approximately. We give some examples where this can be done, in Section 5. In the special case $f(s) = se^{-s}$, $K_t$ can be thought of, very naturally, as a ``Mexican hat'' continuous wavelet for ${\bf M}$. (On $\RR^n$, the Mexican hat wavelet is a multiple of $\Delta e^{-\Delta} \delta$, the second derivative of a Gaussian, the function whose Fourier transform is $|\xi|^2 e^{-|\xi|^2}$.) In Section 5 we compute these continuous wavelets in the special cases where ${\bf M}$ is the $n$-sphere $S^n$ and the $n$-torus $T^n$. For the $2$-torus, we show that $K_t$ can be evaluated by use of either of two different sums, one (obtained from the eigenfunction expansion) which converges very quickly for $t$ large, and the other (obtained from the proof of the Poisson summation formula) which converges very quickly for $t$ small. (The method can be extended to general $n$.) For the sphere, the eigenfunction expansion again gives a sum which converges very quickly for $t$ large. When $n=2$, for $t$ small, by use of heat trace methods, we obtain a formula which converges very quickly, and which appears (from numerical evidence) to be an excellent approximation to $K_t$. Specifically, in \cite{Polt}, I. Polterovich obtains a completely explicit formula for the heat trace asymptotics on the sphere. (Earlier, less explicit formulae were found in \cite{CahnWolf} and \cite{Camp}.) It is not hard to see that, on the sphere, $K_t(x,y)$ is a function of $x \cdot y$ alone; we write $K_t(x,y) = h_t(x \cdot y)$. Using Polterovich's result, we show how one can compute the Maclaurin series for $4\pi h_t(\cos \theta)$.
In this manner we obtain an approximation \begin{equation} \label{htapp0} 4\pi h_t(\cos \theta) \sim \frac{e^{-\theta^2/4t^2}}{t^2}[(1-\frac{\theta^2}{4t^2})p(t,\theta)-t^2q(t,\theta)], \end{equation} where \begin{equation}\notag p(t,\theta)=1+\frac{t^2}{3}+\frac{t^4}{15}+\frac{4t^6}{315}+\frac{t^8}{315}+ \frac{\theta^2}{4}(\frac{1}{3}+\frac{2t^2}{15}+\frac{4t^4}{105}+\frac{4t^6}{315}) \end{equation} and \begin{equation}\notag q(t,\theta)= \frac{1}{3}+\frac{2t^2}{15}+\frac{4t^4}{105}+\frac{4t^6}{315}+ \frac{\theta^2}{4}(\frac{2}{15}+\frac{8t^2}{105}+\frac{4t^4}{105}). \end{equation} Maple says that when $t=0.1$, the error in the approximation (\ref{htapp0}) is never more than $9.5 \times 10^{-4}$ for any $\theta \in [-\pi,\pi]$, even though both sides have a maximum of about 100. (To obtain rigorous bounds on the error is research in progress, which we expect to complete soon.) We derive a similar approximation to the heat kernel itself, when $n=2$. The method can be extended to general $n$. Note that, if in (\ref{htapp0}) we approximate $p \sim 1$ and $q \sim 0$, we would obtain the formula for the usual Mexican hat wavelet on the real line, as a function of $\theta$. \subsection{Historical comments} A number of groups of researchers have been studying continuous wavelets and frames on manifolds, and some have obtained important real-world applications. While our method, based on the new formula (\ref{gelmay1}), is original, certain of the ideas that we have presented in this introduction have arisen in other forms before. We now discuss the work of these other researchers. \\ Weaker forms of Lemma \ref{manmol} have appeared before; here is the history, as best we can determine it. In 1989, Seeger and Sogge \cite{SS} showed (\ref{xykt}), for $f$ with compact support away from $0$, modulo a remainder term that they must handle separately. (We would not be able to handle this remainder in the applications we seek.) Next, in 1996, Tao showed (\cite{Tao},
Proposition 3.1 (ii)) the case $j=k=0$ of Lemma \ref{manmol}, under the restriction that $\hat{h}$ has compact support. (Recall that $h(\xi) = f(\xi^2)$. An assumption that $\hat{h}$ has compact support would not be natural in our context.) \\ Most relevantly, in 2006 Narcowich, Petrushev and Ward (\cite{narc1}, \cite{narc2}) showed a slight variant of Lemma \ref{manmol} for ${\bf M} = S^n$, the sphere, provided $f$ had compact support away from $0$. (In their variant, $K_t(x,y)$ was the kernel not of $f(t^2\Delta)$ but rather of $f(t^2{\cal M})$, where ${\cal M}$ is a particular first-order pseudodifferential operator which is similar to $\Delta$. Specifically, $\Delta$ multiplies spherical harmonics of degree $l$ by $l(l+n-1)$, while ${\cal M}$ multiplies them by $l^2$. This is a minor distinction, however.) Narcowich, Petrushev and Ward do not discuss continuous wavelets, but use spectral theory arguments to construct tight frames on $S^n$. They then apply their variant of Lemma \ref{manmol} for purposes similar to ours, including characterizations of Besov and Triebel-Lizorkin spaces through frame coefficients on the sphere. Their frames have been dubbed ``needlets'', and have been used by statisticians and astrophysicists to study cosmic microwave background radiation (CMB). (See, for instance, \cite{mar}, \cite{baldi}, \cite{guil} and the references therein.) We present a detailed comparison of their approach to frames and ours in our sequel article \cite{gmfr}. Returning to our own approach, but still restricting to the sphere, the most important cases to consider are the case in which $f$ has compact support away from $0$ (the ``needlet'' case, essentially considered by Narcowich, Petrushev and Ward), and the case in which $f(s) = s^r e^{-s}$ for some integer $r \geq 1$. In our sequel article \cite{gmfr}, we construct frames from the latter $f$; we call these frames {\em Mexican needlets}. Needlets and Mexican needlets each have their own advantages.
Needlets are a tight frame, and frame elements at non-adjacent scales are orthogonal. Mexican needlets, though not tight, are nearly tight; they have the advantage that one can work with them directly on the sphere, because of the formula (\ref{htapp0}). (This formula is only for $r = 1$, but can be readily generalized to arbitrary $r$. As we said before, estimating the error in (\ref{htapp0}) is work in progress.) Assuming this formula, Mexican needlets have strong Gaussian decay at each scale, and do not oscillate (for small $r$), so they can be implemented directly on the sphere, which is desirable if there is missing data (such as the ``sky cut'' of the CMB). \\ The statistical properties of needlets were investigated in \cite{mar}. Also, the statistical properties of Mexican needlets are already being investigated by the second author in \cite{mayuncor} and by Lan and Marinucci in \cite{lanmar}. It would be worthwhile to utilize both needlets and Mexican needlets in the analysis of CMB, and the results should be compared.\\ A number of other researchers have studied wavelets and frames on manifolds. In all of the works mentioned below, when orthonormal bases were constructed, they are not known to give rise to a space-frequency analysis; and when frames were constructed, they are not known to be tight or to give rise to a space-frequency analysis.\\ Let us begin by discussing earlier works on manifolds, which contain some ideas related to those in this article. In alphabetical order:\\ \begin{itemize} \item Antoine, Vandergheynst, and collaborators (\cite{ant}, \cite{bogant}) have constructed smooth continuous wavelets on the sphere and related manifolds, by use of stereographic dilations (replacing the usual dilations), rotations (replacing the usual translations), and spherical convolution (replacing the usual convolution). They obtained frames by discretizing these continuous wavelets.
\item Coifman, Maggioni, and collaborators (\cite{coimag}, \cite{mag2}) used the heat equation on manifolds for the rather different purpose of constructing orthonormal wavelet bases through a diffusion process, leading to a multiresolution analysis. They exploit the idea (which they attribute to Stein) of $e^{-t\Delta}$ being a sort of dilate of $e^{-\Delta}$. \item Freeden and collaborators (\cite{free1}, \cite{free2}) defined continuous wavelets on the sphere $S^2$, and applied them to the geosciences. Their continuous wavelets were of the form $f(t^2\Delta)$ for various $f$ (not all in the Schwartz space), although they did not formulate them in that manner. One of their many examples was our Mexican hat wavelet, which they called the Gauss-Weierstrass wavelet of order zero. They did not have Lemma \ref{manmol}, so they restricted to an (extensive) $L^2$ theory of continuous wavelets. In the context of $S^2$, they had, in particular, results equivalent to our (\ref{gelmay1}). \item Han (\cite{han1}, \cite{han2}) constructed frames on general spaces of homogeneous type (including manifolds). His method is to discretize a version of Calder\'on's formula in this general setting. He also used the $T(1)$ theorem to estimate errors. \end{itemize} The following researchers have also worked on wavelets and frames on manifolds. They used methods which seem unrelated to those in the present article. In alphabetical order: \begin{itemize} \item Dahlke (\cite{dahl}) constructed an analogue of Haar wavelets on Riemannian manifolds. \item Dahmen and Schneider (\cite{dahsch}) have used parametric liftings from standard bases on the unit cube to obtain wavelet bases on manifolds which are the disjoint union of smooth parametric images of the standard cube. \item Schr\"oder and Sweldens (\cite{swel}) used a lifting scheme to build wavelets on manifolds. This lifting scheme uses no invariance properties, and regularity information is not easily obtained.
\end{itemize} \section{Applying the Spectral Theorem}\label{applying-the-spectral-theorem} In this section, we give the proof of (\ref{gelmay1}), as well as the proof of a discrete analogue, which will be used in our sequel article \cite{gmfr} to construct nearly tight frames. (The proofs are quite elementary, and the reader who is willing to accept (\ref{gelmay1}) can go on to the next section.) Specifically, we shall show: \begin{lemma} \label{gelmaylem} Let $T$ be a positive self-adjoint operator on a Hilbert space ${\cal H}$, and let $P$ be the projection onto the null space of $T$. Suppose $l \geq 1$ is an integer, $f_0 \in {\cal S}(\RR^+)$, $f_0 \not\equiv 0$, and let $f(s) = s^l f_0(s)$. Set $c = \int_0^{\infty} |f(t)|^2 \frac{dt}{t}$. \begin{itemize} \item[$(a)$] Then for any $F \in {\cal H}$, \begin{equation} \label{strint} \lim_{\varepsilon \rightarrow 0^+, N \rightarrow \infty} \left[\int_{\varepsilon}^{N} |f|^2(tT) \frac{dt}{t}\right]F = c(I-P)F. \end{equation} Thus \[ \int_0^{\infty} |f|^2(tT) \frac{dt}{t} := \lim_{\varepsilon \rightarrow 0^+, N \rightarrow \infty} \left[\int_{\varepsilon}^{N} |f|^2(tT) \frac{dt}{t}\right] \] exists in the strong operator topology, and equals $c(I-P)$. \item[$(b)$] Suppose that $a > 0$ $(a \neq 1)$ is such that the {\em Daubechies condition} holds: for any $s > 0$, \begin{equation} \label{daubrep} 0 < A_a \leq \sum_{j=-\infty}^{\infty} |f(a^{2j} s)|^2 \leq B_a < \infty. \end{equation} Then $\lim_{M, N \rightarrow \infty} \left[\sum_{j=-M}^N |f|^2(a^{2j} T)\right]$ exists in the strong operator topology on ${\cal H}$; we denote this limit by $\sum_{j=-\infty}^{\infty} |f|^2(a^{2j} T)$. Moreover \begin{equation} \label{strsum1} A_a(I-P) \leq \sum_{j=-\infty}^{\infty} |f|^2(a^{2j} T) \leq B_a(I-P).
\end{equation} \end{itemize} \end{lemma} {\bf Remark} $(b)$ is a discrete analogue of $(a)$, since the sum in (\ref{daubrep}) is a multiple of a Riemann sum for the integral $\int_0^{\infty} |f(st)|^2 \frac{dt}{t} = c$, while the spectral theorem will show that it is valid to replace $s$ in (\ref{daubrep}) by $T$, to obtain (\ref{strsum1}). \begin{proof} We prove $(a)$ and $(b)$ together. Let $T = \int_0^{\infty} \lambda dP_{\lambda}$ be the spectral decomposition of $T$; thus, in particular, $P = P_{\{0\}}$. Observe that, by the spectral theorem, if $g$ is a bounded Borel function on $\RR^+$, then $\|g(T)\| \leq \sup_{s \geq 0} |g(s)|$. It follows readily that the integrand in (\ref{strint}) is a family of operators in ${\cal B}({\cal H})$ (indexed by $t$), which depends continuously on $t \in [\varepsilon,N]$ (in the norm topology on ${\cal B}({\cal H})$). (Use the mean value theorem for $s$ in a suitable compact interval and the rapid decay of $f$ at $\infty$.) Thus $\int_{\varepsilon}^{N} |f|^2(tT) \frac{dt}{t}$ makes sense as a bounded operator on ${\cal H}$. (As usual, it is defined as the unique bounded operator $S$ with $ \langle S\varphi,\psi \rangle = \int_{\varepsilon}^{N} \langle |f|^2(tT)\varphi,\psi \rangle \frac{dt}{t}$ for all $\varphi,\psi \in {\cal H}$.) For $0 < \varepsilon < N < \infty$, define $g_{\varepsilon,N}: [0,\infty) \rightarrow [0,\infty)$ by \begin{equation} \label{gepN} g_{\varepsilon,N}(\lambda) = \int_{\varepsilon}^{N} |f|^2(t\lambda) \frac{dt}{t}. \end{equation} Then \begin{equation}\notag g_{\varepsilon,N}(T) = \int_{\varepsilon}^{N} |f|^2(tT) \frac{dt}{t}. \end{equation} (This follows from an elementary application of Fubini's theorem to $\int_{\varepsilon}^{N} \int_0^{\infty} |f|^2(t\lambda) d \langle P_{\lambda}\varphi,\psi \rangle \frac{dt}{t}$.) 
For $M,N \geq 0$ we also define $h_{M,N}: [0,\infty) \rightarrow [0,\infty)$ by \begin{equation}\notag h_{M,N}(\lambda) = \sum_{j=-M}^N |f|^2(a^{2j} \lambda); \end{equation} and we also set \begin{equation} \label{hla} h(\lambda) = \sum_{j=-\infty}^{\infty} |f|^2(a^{2j} \lambda). \end{equation} To prove the lemma it is enough to show that $g_{\varepsilon,N}(T) \rightarrow c(I-P)$ (strongly, as $\varepsilon \rightarrow 0^+$ and $N \rightarrow \infty$) and that $h_{M,N}(T) \rightarrow h(T)$ (strongly, as $M,N \rightarrow \infty$). Indeed, $(a)$ would then be immediate. $(b)$ would then also follow at once from (\ref{daubrep}), which implies, by the spectral theorem, that $A_a(I-P) \leq h(T) \leq B_a(I-P)$.\\ To establish these conclusions, we first note that this strong convergence need only be proved on $(I-P){\cal H}$, since all the operators vanish identically on $P{\cal H}$. Next we note that for any $\varepsilon, M, N$, $\|g_{\varepsilon,N}\|_{\infty} \leq c$, $\|h_{M,N}\|_{\infty} \leq B_a$, and $\|h\|_{\infty} \leq B_a$, whence $\|g_{\varepsilon,N}(T)\| \leq c$, $\|h_{M,N}(T)\| \leq B_a$, and $\|h(T)\| \leq B_a$. Thus the needed strong convergence need only be proved on a dense subset of $(I-P){\cal H}$. Set \[ V = \bigcup_{0 < \eta < L < \infty} P_{[\eta,L]}\mathcal{H};\] observe that $V$ is dense in $(I-P)\mathcal{H}$ (since $P = P_{\{0\}}$). Thus, it suffices to show the following: fix $0 < \eta < L < \infty$, and say $F \in P_{[\eta,L]}\mathcal{H}$. Then $g_{\varepsilon,N}(T)F \rightarrow cF$ and $h_{M,N}(T)F \rightarrow h(T)F$. This, however, is immediate from the spectral theorem and the evident facts that $g_{\varepsilon,N} \rightarrow c$ and $h_{M,N} \rightarrow h$, {\em uniformly} on $[\eta,L]$. Although this uniform convergence is easily shown, for later purposes we express it quantitatively in the next lemma.
(Note that we may assume $a > 1$; otherwise replace it by $1/a$.)\end{proof} \begin{lemma} \label{uncon} Notation as in Lemma $\ref{gelmaylem}$, and as in equations $(\ref{gepN})$ through $(\ref{hla})$. Suppose $J \geq 1$ is an integer, and let $M_J = \max_{r > 0}|r^J f(r)|$. Suppose $0 < \eta < L < \infty$, and let $I$ be the closed interval $[\eta,L]$. Let $\|\:\|$ denote the sup norm on $I$. \begin{itemize} \item[$(a)$] If $0 < \varepsilon < N$, then \[ \|g_{\varepsilon,N} - c\| \leq c_L \varepsilon^{2l} + \frac{C_{\eta}}{N^{2J}}, \] where we may take $c_L = L^{2l}\|f_0\|^2_{\infty}/(2l)$, and $C_{\eta} = M_J^2/(2J\eta^{2J})$. \item[$(b)$] If $M,N \geq 0$, and $a > 1$, then \[ \|h_{M,N} - h\| \leq \frac{c^{\prime}_L}{a^{4Ml}} + \frac{C^{\prime}_{\eta}}{a^{4NJ}}, \] where we may take $c^{\prime}_L = (L^{2l}\|f_0\|^2_{\infty})/(a^{4l}-1)$, and $C^{\prime}_{\eta} = M_J^2/[(a^{4J}-1)\eta^{2J}]$. \end{itemize} \end{lemma} \begin{proof} Say $s \in I$. Then \[ |g_{\varepsilon,N}(s) - c| = \int_0^{\varepsilon}|f|^2(st)\frac{dt}{t} + \int_N^{\infty}|f|^2(st)\frac{dt}{t}. \] (a) follows from noting $|f|^2(st) \leq (Lt)^{2l}\|f_0\|^2_{\infty}$ in the first integral, and that $|f|^2(st) \leq M_J^2/(\eta t)^{2J}$ in the second. Similarly, \[ |h_{M,N}(s) - h(s)| = \sum_{j<-M} |f|^2(a^{2j}s) + \sum_{j>N} |f|^2(a^{2j}s). \] (b) follows from noting $|f|^2(a^{2j}s) \leq (La^{2j})^{2l}\|f_0\|^2_{\infty}$ in the first summation, and that $|f|^2(a^{2j}s) \leq M_J^2/(\eta a^{2j})^{2J}$ in the second. This completes the proof. \end{proof} We can now express the strong convergence in Lemma \ref{gelmaylem} in the following very quantitative manner: \begin{lemma} \label{gelmayquant} Notation as in Lemmas \ref{gelmaylem} and \ref{uncon}, and again let $T = \int_0^{\infty} \lambda dP_{\lambda}$ be the spectral decomposition of $T$.
Then for any $F \in {\mathcal H}$, we have: \begin{equation} \label{quant1} \left\|\left[g_{\varepsilon,N}(T) - c\right]F\right\| \leq \left[c_L \varepsilon^{2l} + \frac{C_{\eta}}{N^{2J}}\right]\|F\| + 2c \left\|(I - P_{[\eta,L]})F\right\|, \end{equation} and, if $a > 1$, \begin{equation} \label{quant2} \left\|\left[h_{M,N}(T) - h(T)\right]F\right\| \leq \left[\frac{c^{\prime}_L}{a^{4Ml}} + \frac{C^{\prime}_{\eta}}{a^{4NJ}}\right]\|F\| + 2B_a \left\|(I - P_{[\eta,L]})F\right\|. \end{equation} \end{lemma} \begin{proof} For (\ref{quant1}), we need only substitute $F = P_{[\eta,L]}F + (I-P_{[\eta,L]})F$ in the left side. This gives \begin{eqnarray*} \|\left[g_{\varepsilon,N}(T) - c\right]F\| & \leq & \|\left[g_{\varepsilon,N}(T) - c\right]P_{[\eta,L]}F\| + \|g_{\varepsilon,N}(T)\|\:\|(I - P_{[\eta,L]}) F\| + c\|(I - P_{[\eta,L]}) F\| \\ & \leq & \|[g_{\varepsilon,N}(T) - c]\chi_{[\eta,L]}\|_{\infty}\|F\| + 2c\|(I - P_{[\eta,L]}) F\|, \end{eqnarray*} which, because of Lemma \ref{uncon}, establishes (\ref{quant1}). The proof of (\ref{quant2}) is entirely analogous. \end{proof} (\ref{quant1}) and (\ref{quant2}) are of significance for numerical calculations. Say, for instance, that $F = (I-P)F$, and one wants to compute $h(T)F$ to a certain precision. This involves summing an infinite series, so one instead seeks to compute $[h_{M,N}(T)]F$ for $M, N$ large enough; how large must one take them to be? One first chooses $\eta,L$ so that the second term on the right side of (\ref{quant2}) is very small. Then one chooses $M, N$ to make the first term on the right side of (\ref{quant2}) very small as well. We will return to this point in our discussion of space-frequency analysis, in our sequel article \cite{gmfr}. 
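The constants in Lemma \ref{uncon} are concrete enough to check numerically. The following Python sketch is an illustration only (the parameter choices $f(s) = se^{-s}$, $a = 2$, $J = 1$, and all names are ours): it samples the Daubechies sum $h$ of (\ref{hla}), exhibits the bounds of (\ref{daubrep}), and compares the truncation error $\|h_{M,N} - h\|$ on an interval $[\eta,L]$ with the bound of Lemma \ref{uncon}$(b)$.

```python
import math

# Illustration (choices ours): f(s) = s*exp(-s), so l = 1, f_0(s) = e^{-s},
# ||f_0||_inf = 1, and M_1 = max_{r>0} |r f(r)| = 4 e^{-2}.  With J = 1,
# Lemma uncon(b) bounds the truncation error on [eta, L] by
#   L^2/((a^4 - 1) a^{4M}) + M_1^2/((a^4 - 1) eta^2 a^{4N}).

a = 2.0

def f(s):
    return s * math.exp(-s)

def h(lam, lo=-40, hi=40):
    # the full sum h(lambda) of (hla); tails beyond |j| = 40 are negligible
    return sum(f(a ** (2 * j) * lam) ** 2 for j in range(lo, hi + 1))

def h_trunc(lam, M, N):
    # the truncated sum h_{M,N}(lambda)
    return sum(f(a ** (2 * j) * lam) ** 2 for j in range(-M, N + 1))

# Daubechies condition: h is multiplicatively periodic with period a^2 = 4,
# so sampling one period [1/4, 1] exhibits the bounds A_a and B_a
hvals = [h(0.25 * 4.0 ** (k / 100.0)) for k in range(101)]

# truncation bound of Lemma uncon(b) on [eta, L], with M = N = 3
eta, L, M, N = 0.5, 2.0, 3, 3
M1 = 4 * math.exp(-2)
bound = L ** 2 / ((a ** 4 - 1) * a ** (4 * M)) \
        + M1 ** 2 / ((a ** 4 - 1) * eta ** 2 * a ** (4 * N))
grid = [eta + k * (L - eta) / 400.0 for k in range(401)]
worst = max(abs(h_trunc(s, M, N) - h(s)) for s in grid)
```

With these choices the computed truncation error (worst case about $6 \times 10^{-5}$, attained near $\lambda = L$) sits below the bound (about $8 \times 10^{-5}$), and the sampled values of $h$ stay within fixed positive bounds, as (\ref{daubrep}) requires.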
\section{Preliminaries on Manifolds}\label{preliminaries-on-manifolds} For the rest of the article, $({\bf M},g)$ will denote a smooth, compact, connected, oriented Riemannian manifold of dimension $n$, and $\mu$ will denote the measure on ${\bf M}$ arising from integration with respect to the volume form on ${\bf M}$. In this section we assemble several well-known facts about analysis on ${\bf M}$ (preceded, below, by bullets), which we shall need in this article and in sequel articles. \ \\ For $x,y \in {\bf M}$, we let $d(x,y)$ denote the infimum of the lengths of all piecewise $C^1$ curves joining $x$ to $y$; then $({\bf M},d)$ is evidently a metric space. It is well-known (see, e.g., \cite{Milnor}) that there is a geodesic joining $x$ to $y$ with length $d(x,y)$, but this fact, though basic, is not so relevant for this article. Most of what we need to know about the metric $d$ is contained in the simple proposition which follows. For $x \in {\bf M}$, we let $B(x,r)$ denote the ball $\{y: d(x,y) < r\}$. \begin{proposition} \label{ujvj} Cover ${\bf M}$ with a finite collection of open sets $U_i$ $(1 \leq i \leq I)$, such that the following properties hold for each $i$: \begin{itemize} \item[$(i)$] there exists a chart $(V_i,\phi_i)$ with $\overline{U}_i \subseteq V_i$; and \item[$(ii)$] $\phi_i(U_i)$ is a ball in $\RR^n$. \end{itemize} Choose $\delta > 0$ so that $3\delta$ is a Lebesgue number for the covering $\{U_i\}$. Then, there exist $c_1, c_2 > 0$ as follows:\\ For any $x \in {\bf M}$, choose any $U_i \supseteq B(x,3\delta)$. Then, in the coordinate system on $U_i$ obtained from $\phi_i$, \begin{equation}\notag d(y,z) \leq c_2|y-z| \end{equation} for all $y,z \in U_i$; and \begin{equation}\notag c_1|y-z| \leq d(y,z) \end{equation} for all $y,z \in B(x,\delta)$. \end{proposition} \begin{proof} Say $y,z \in U_i$. We work in the coordinate system on $U_i$ obtained from $\phi_i$. 
Then $d(y,z)$ is at most the length (in the Riemannian metric) of the straight line joining $y$ to $z$, which is $\leq c_2|y-z|$ for some $c_2$. (By assumption (ii), that straight line is contained in $U_i$.) On the other hand, if $y,z \in B(x,\delta) \subseteq U_i$, we may take a sequence of piecewise $C^1$ curves $\gamma_k$, joining $y$ to $z$, whose lengths $l(\gamma_k)$ approach $d(y,z)$ as $k \rightarrow \infty$. Surely $d(y,z) < 2\delta$. Thus, for some $N > 0$, if $k > N$, then $l(\gamma_k) < 2\delta$. Therefore each point on $\gamma_k$ is at distance at most $2\delta$ from $y$, hence at most $3\delta$ from $x$. Thus $\gamma_k \subseteq U_i$. Letting $\|\gamma_k^{\prime}(t)\|$ denote the length of the tangent vector $\gamma_k^{\prime}(t)$ in the Riemannian metric, we see that \[ l(\gamma_k) = \int_0^1 \|\gamma_k^{\prime}(t)\|dt \geq c_1\int_0^1 |\gamma_k^{\prime}(t)| dt \geq c_1|z-y|,\] since $z-y = \int_0^1 \gamma_k^{\prime}(t) dt$. Letting $k \rightarrow \infty$, we see that $d(y,z) \geq c_1|y-z|$ as well. This completes the proof. \end{proof} We fix collections $\{U_i\}$, $\{V_i\}$, $\{\phi_i\}$ and also $\delta$ as in Proposition \ref{ujvj}, once and for all. \begin{itemize} \item {\em Notation as in Proposition \ref{ujvj}, there exist $c_3, c_4 > 0$, such that \begin{equation} \label{ballsn} c_3r^n \leq \mu(B(x,r)) \leq c_4r^n \end{equation} whenever $x \in {\bf M}$ and $0 < r \leq \delta$, and such that \begin{equation} \label{ballsn1} c_3 \delta^n \leq \mu(B(x,r)) \leq \mu({\bf M}) \leq c_4r^n \end{equation} whenever $x \in {\bf M}$ and $r > \delta$.}\\ \ \\ To see (\ref{ballsn}), note that, since the collection $\{U_i\}$ is finite, we may fix $i$ and prove it for all $x$ with $B(x,3\delta) \subseteq U_i$. We work in the coordinate system on $U_i$ obtained from $\phi_i$; in that coordinate system, $U_i$ is a Euclidean ball, say $\{y: |y-x_0| < R\}$. (See Proposition \ref{ujvj}). 
By compactness and a simple contradiction argument, there is an $\eta > 0$ such that, for all $x$ with $B(x,3\delta) \subseteq U_i$, one has that $|x-x_0| < R - \eta$. Accordingly, for such an $x$, if $|y-x| < \eta$, then $y \in U_i$. Thus, by Proposition \ref{ujvj}, we have that \[ \{y: |y-x|< \min(r/c_2,\eta)\} \subseteq B(x,r) \subseteq \{y: |y-x|< r/c_1\}, \] for all $r < \delta$. (\ref{ballsn}) now follows from the fact that the determinant of the metric tensor $g$ is bounded above and below on $U_i$. For (\ref{ballsn1}), one need only note that if $r > \delta$, then $\mu(B(x,r)) \geq \mu(B(x,\delta)) \geq c_3\delta^n$, while $\mu(B(x,r)) \leq \mu({\bf M}) \leq [\mu({\bf M})/\delta^n]\delta^n \leq [\mu({\bf M})/\delta^n]r^n$. \item {\em $({\bf M},d,\mu)$ is a space of homogeneous type, in the sense of \cite{CoifWei71}.}\\ \ \\ Indeed, $d$ is a metric and $\mu$ is a positive Borel measure, so one only needs to check the doubling condition: $\mu(B(x,2r)) \leq C\mu(B(x,r))$ with $C$ independent of $x,r$. But this is immediate from (\ref{ballsn}) and (\ref{ballsn1}). \item {\em For any $N > n$ there exists $C_N$ such that \begin{equation} \label{ptestm} \int_{\bf M} [1 + d(x,y)/t]^{-N} d\mu(y) \leq C_N t^n \end{equation} for all $x \in {\bf M}$ and $t > 0$.}\\ \ \\ (\ref{ptestm}) is proved by the ``dyadic annulus'' method. Fix $x, t$ and let $A_j = B(x,2^jt) \setminus B(x,2^{j-1}t)$, so that, by (\ref{ballsn}) and (\ref{ballsn1}), $\mu(A_j) \leq c_42^{nj}t^n$. (\ref{ptestm}) now follows at once, if one breaks up the integral in (\ref{ptestm}) into integrals over $B(x,t), A_1, A_2, \ldots$, and notes that $\sum_{j=0}^{\infty} 2^{(n-N)j} < \infty$. 
\item {\em For any $N > n$ there exists $C_N^{\prime}$ such that \begin{equation} \label{ptestm1} \int_{d(x,y) \geq t} d(x,y)^{-N} d\mu(y) \leq C_N^{\prime} t^{n-N} \end{equation} for all $x \in {\bf M}$ and $t > 0$.}\\ \ \\ (\ref{ptestm1}) follows at once from (\ref{ptestm}), once we observe that, if $d(x,y) \geq t$, then\\ $[d(x,y)/t]^{-N} \leq C[1+d(x,y)/t]^{-N}$, if $C = 2^N$. \item{\em For any $N > n$ there exists $C_N^{\prime \prime}$ such that \begin{equation} \label{ptestm2} \int_{\bf M} [1 + d(x,z)/t]^{-N} [1 + d(z,y)/t]^{-N}d\mu(z) \leq C_N^{\prime \prime} t^n[1 + d(x,y)/t]^{-N} \end{equation} for all $x,y \in {\bf M}$ and $t > 0$.}\\ \ \\ To see this, break up the integral into integrals over $H_1 = \{z: d(x,z) \leq d(y,z)\}$ and $H_2 = \{z: d(x,z) > d(y,z)\}$ (which, by the way, are hemispheres if ${\bf M}$ is a round sphere and $x \neq y$). By symmetry we need only estimate the integral over $H_1$. But if $z$ is in $H_1$, $d(x,y) \leq 2d(z,y)$, so $[1 + d(z,y)/t]^{-N} \leq C[1 + d(x,y)/t]^{-N}$ (where $C = 2^N$). Thus the integral over $H_1$ is no greater than $C[1 + d(x,y)/t]^{-N}\int_{H_1} [1 + d(x,z)/t]^{-N} d\mu(z)$. Estimating the latter integral through (\ref{ptestm}), we obtain (\ref{ptestm2}). \item{\em For all $M, t > 0$, and for all $E \subseteq {\bf M}$ with diameter less than $Mt$, if $x_0 \in E$, then one has that \begin{equation} \label{alcmpN} \frac{1}{M+1}[1+d(x,y)/t] \leq [1+d(x_0,y)/t] \leq (M+1)[1+d(x,y)/t] \end{equation} for all $x \in E$ and all $y \in {\bf M}$.}\\ \ \\ This is true simply because $d$ is a metric. \end{itemize} \section{Kernels}\label{kernels} $\Delta$ will now denote the Laplace-Beltrami operator on ${\bf M}$ (equal to $-d^*d$, where $\ ^*$ is taken with respect to the given Riemannian metric). We apply Lemma \ref{gelmaylem} to $T=\Delta$.
In order to carry out the plan explained in the introduction to this article, we must study the kernel $K_{\sqrt t}(x,y)$ of the operator $f(t\Delta)$ for $f \in {\mathcal S}(\RR^+)$, and we do so in this section. Before proving the crucial Lemma \ref{manmol}, we will present some well-known information about $K_{\sqrt t}(x,y)$ for large $t$ and also, off the diagonal, for small $t$. (This information will be preceded, below, by $\rhd$ signs.)\\ Concerning the Laplace-Beltrami operator $\Delta$, we first recall: \begin{itemize} \item $\Delta$, as an operator on $C^{\infty}({\bf M})$, has an orthonormal basis of eigenfunctions $\{u_l: 0 \leq l < \infty\}$; all are in $C^{\infty}({\bf M})$. We may, and do, choose the $u_l$ to be real-valued. We order the $u_l$ so that the corresponding eigenvalues $\lambda_l$ form a non-decreasing sequence. Then $u_0$ is constant, $\lambda_0 = 0$ and all other $\lambda_l > 0$. \vspace{.3cm} This easily implies: \item $\Delta$, as an operator on $C^{\infty}({\bf M})$, has a unique extension to a self-adjoint operator on $L^2({\bf M})$ (which we also denote by $\Delta$). Its domain is $\{F= \sum a_l u_l \in L^2({\bf M}): \sum |\lambda_l a_l|^2 < \infty\}$, and for such an $F$, $\Delta F = \sum \lambda_l a_l u_l$. \vspace{.3cm} Of great importance is {\em Weyl's Lemma}, which says \cite{Hor68}, in sharp form: \item For $\lambda > 0$, let $N(\lambda)$ denote the number of eigenvalues of $\Delta$ which are less than or equal to $\lambda$ (counted with respect to multiplicity). Then for some $c > 0$, $N(\lambda) = c\lambda^{n/2} + O(\lambda^{(n-1)/2})$. \vspace{.3cm} Since $N(\lambda_l) = l+1$, we conclude: \item For some constants $c_1, c_2 > 0$, we have $c_1 l^{2/n} \leq \lambda_l \leq c_2 l^{2/n}$ for all $l$. 
\vspace{.3cm} Since $\Delta^m u_l = \lambda_l^m u_l$, and $\Delta^m$ is an elliptic differential operator of degree $2m$, Sobolev's lemma, combined with the last fact, implies: \item For any integer $k \geq 0$, there exist $C_k, \nu_k > 0$ such that $\|u_l\|_{C^k({\bf M})} \leq C_k (l+1)^{\nu_k}$. \vspace{.2cm} (In fact, by Sobolev's lemma, we may, for any $\varepsilon > 0$, take $\nu_k = (2k+n+\varepsilon)/2n$. By \cite{Sogge93}, Lemma 4.2.4, with $\lambda = \lambda_l$ in that lemma, we may in fact take $\nu_0 = (n-1)/n$.) \vspace{.3cm} From these facts one sees at once: \item The mapping $\sum a_l u_l \rightarrow (a_l)_{l \geq 0}$ gives a Fr\'echet space isomorphism of $C^{\infty}({\bf M})$ with the space of rapidly decaying sequences. \end{itemize} For the rest of this section, say $f \in {\mathcal S}(\RR^+)$. We conclude: \begin{itemize} \item[ $\rhd$] For $t > 0$, $x,y \in {\bf M}$, let (for the rest of this section) \begin{equation} \label{kerexp} K_{{\sqrt t}}(x,y) = \sum_{l=0}^{\infty} f(t\lambda_l)u_l(x)u_l(y). \end{equation} Then $K_{\sqrt t}$ is the kernel of the operator $f(t\Delta)$, in the sense that if $F \in L^2({\bf M})$, then \begin{equation}\notag [f(t\Delta) F](x) = \int_{\bf M} K_{\sqrt t}(x,y) F(y) d\mu(y), \end{equation} and $K_{\sqrt t}(x,y)$ is smooth in $(t,x,y)$ (for $t > 0$, $x, y \in {\bf M}$).\\ \ \\ From (\ref{kerexp}), our estimates on the $\lambda_l$ and the $\|u_l\|_{C^k}$, and the rapid decay of Schwartz functions, we conclude: \item[ $\rhd$] If $f(0) = 0$, then for any $M, N \geq 0$, \[ \lim_{t \rightarrow \infty} t^M \frac{\partial^N}{\partial t^N} K_{\sqrt t} = 0 \] in $C^{\infty}({\bf M} \times {\bf M})$. \vspace{.3cm} Note also that whenever $l > 0$, $\int_{\bf M} u_l d\mu = C \int_{\bf M} u_l \overline{u_0} d\mu = 0$. Accordingly:\\ \item[ $\rhd$] If $f(0) = 0$, then $\int_{\bf M} K_{\sqrt t} (x,y) d\mu(x) = 0$ for all $y \in {\bf M}$, and $\int_{\bf M} K_{\sqrt t} (x,y) d\mu(y) = 0$ for all $x \in {\bf M}$.
\end{itemize} Next we discuss the behavior of $K_{\sqrt t}$, as $t \rightarrow 0^+$. For this, we utilize some of the theory of pseudodifferential operators. We will need some facts about symbols on $\RR$. For $m \in \RR$, let $S^m_1(\RR)$ denote the space of standard symbols $p(\xi)$ of order $m$, which depend only on the ``dual variable'' $\xi$. Let $\{\|\:\|_{m,N}\}$ denote the natural nondecreasing family of seminorms defining the Fr\'echet space topology of $S^{m}_{1}(\RR)$; thus \[ \|p\|_{m,N} = \sum_{0 \leq j \leq N}\sup_{\xi} \left[(1+|\xi|)^{j-m}|p^{(j)}(\xi)|\right] \] for $p \in S^{m}_{1}(\RR)$. If $G \in S^m_1(\RR)$, let $G_t(\xi) = G(t\xi)$. It is evident that $G_t \in S^{m}_{1}(\RR)$. In fact we have: \begin{itemize} \item {\em Say that $G \in S^m_1(\RR)$. Let $k$ be the least integer which is greater than or equal to $m$; if $k > 0$, suppose further that $G$ vanishes to order at least $k$ at $0$. Then for any $N$, there exists $C > 0$ such that \begin{equation} \label{tmdep} \|G_t\|_{m,N} \leq Ct^m \end{equation} whenever $0 < t < 1$}. \end{itemize} To see this, say $j \geq 0$ is an integer. One need only note: \begin{itemize} \item[$(i)$] if $0 \leq j < m$ (so that $k > 0$) and $t|\xi| \leq 1$, then \[ |(G_t)^{(j)}(\xi)| \leq Ct^j (t^{k-j}|\xi|^{k-j}) \leq Ct^j (t^{m-j}|\xi|^{m-j}) \leq C t^m (1 + |\xi|)^{m-j};\] \item[$(ii)$] if $0 \leq j < m$ and $t|\xi| > 1$, then \[ |(G_t)^{(j)}(\xi)| \leq Ct^j (1 + t|\xi|)^{m-j} \leq Ct^j (t^{m-j}|\xi|^{m-j}) \leq C t^m (1 + |\xi|)^{m-j};\] \item[$(iii)$] if $j \geq m$, then \[|(G_t)^{(j)}(\xi)| \leq Ct^j (1+|t\xi|)^{-(j-m)} \leq Ct^m(1+|\xi|)^{-(j-m)}.\] \end{itemize} This proves the claim.\\ \ \\ In particular, say $G \in S^1_1(\RR)$ is arbitrary. Then $G-G(0) \in S^{1}_{1}(\RR)$, and vanishes to order at least $1$ at $0$.
Accordingly, for any $N$, $\|G_t-G(0)\|_{1,N} \leq Ct$ for $0 < t < 1$, so that $G_t \rightarrow G(0)$ in $S^1_{1}(\RR)$ as $t \rightarrow 0^+$.\\ \\ We return now to ${\bf M}$, where we need to look at the class $OPS^m_{1,0}({\bf M})$ of pseudodifferential operators of order $m \in [-\infty,\infty)$. As is familiar, $T: C^{\infty}({\bf M}) \to C^{\infty}({\bf M})$ is in $OPS^m_{1,0}({\bf M})$ provided that the following conditions hold for $\varphi, \psi \in C^{\infty}({\bf M})$: \begin{enumerate} \item If $\supp \varphi \cap \supp \psi = \emptyset$, then the operator $\varphi T \psi$ has a smooth kernel; and \item If $\supp \varphi \cup \supp \psi$ is contained in a chart $(V,\Phi)$, then $\varphi T \psi$ is the pullback to ${\bf M}$ of a pseudodifferential operator $\Phi_*(\varphi T \psi) \in OPS^m_{1,0}(\RR^n)$.\\ \end{enumerate} (Of course, here, $(\varphi T \psi)F = \varphi T(\psi F)$.) One places a Fr\'echet space structure on $OPS^m_{1,0}({\bf M})$ in a natural manner. (A brief sketch: First note that $OPS^{-\infty}({\bf M})$ is the space of operators with smooth kernels, so it has a natural Fr\'echet space structure, inherited from $C^{\infty}({\bf M} \times {\bf M})$. For other $m$, one chooses a finite atlas $\{W_k\}$ on ${\bf M}$ with the property that if two charts in the atlas intersect, their union is contained in a chart. One chooses a partition of unity $\{\varphi_k\}$ subordinate to this atlas. One notes that if $T \in OPS^m_{1,0}({\bf M})$, then $T = \sum_{i,j} \varphi_i T \varphi_j$. One notes that if $W_i \cap W_j = \emptyset$, then $\varphi_i T \varphi_j \in OPS^{-\infty}({\bf M})$, a Fr\'echet space; while if $W_i \cap W_j \not= \emptyset$, then $W_i \cap W_j \subseteq V$ for some chart $(V,\Phi)$, and $\Phi_*(\varphi_i T \varphi_j) \in OPS^m_{1,0}(\RR^n)$, also a Fr\'echet space.
Finally one defines seminorms on $OPS^m_{1,0}({\bf M})$ of the form $\sum_{i,j}\|\varphi_i T \varphi_j\|$, where, in the summation, one uses appropriate seminorms coming from $OPS^{-\infty}({\bf M})$ if $W_i \cap W_j = \emptyset$, or from $OPS^m_{1,0}(\RR^n)$ if $W_i \cap W_j \not= \emptyset$. The Fr\'echet space topology thereby placed on $OPS^m_{1,0}({\bf M})$ is independent of all choices made.)\\ One has the following theorem of Strichartz (\cite{Stric72}, or Theorem 1.3, page 296, of \cite{Tay81}): \begin{itemize} \item {\em If $p(\xi) \in S^{m}_{1}(\RR)$, then $p(\sqrt{\Delta}) \in OPS^m_{1,0}({\bf M})$.} \end{itemize} In fact, the map $p \rightarrow p(\sqrt{\Delta})$ is continuous from $S^{m}_{1}(\RR)$ to $OPS^m_{1,0}({\bf M})$. Indeed, by the closed graph theorem for Fr\'echet spaces, it is enough to observe that if $u \in C^{\infty}({\bf M})$, then the maps $p \rightarrow \langle p(\sqrt{\Delta})u,u_l \rangle $ are continuous from $S^{m}_{1}(\RR)$ to $\CC$ for every $l$, and this is clear.\\ \ \\ As a consequence, if $\varphi, \psi \in C^{\infty}({\bf M})$ are as in \#1 above, then the map from $S^{m}_{1}(\RR)$ to $OPS^{-\infty}({\bf M})$, which takes $p$ to $\varphi p(\sqrt{\Delta}) \psi$, is continuous. If $\varphi, \psi \in C^{\infty}({\bf M})$ are as in \#2 above, then the map from $S^{m}_{1}(\RR)$ to $OPS^m_{1,0}(\RR^n)$, which takes $p$ to $\Phi_*(\varphi p(\sqrt{\Delta}) \psi)$, is continuous.\\ \ \\ As usual, $f \in {\mathcal S}(\RR^+)$; let $G(\xi) = f(\xi^2)$. Then $G \in {\mathcal S}(\RR)$. (In fact, if we allow $f$ to vary, the map $f \rightarrow G$ is evidently a bijection between ${\mathcal S}(\RR^+)$ and the space of even Schwartz functions on $\RR$.) Now $G_{\sqrt t}(\xi) = f(t \xi^2)$, and $G_{\sqrt t}(\sqrt{\Delta}) = f(t \Delta)$. Since $G_{\sqrt t} \rightarrow G(0)$ in $S^1_{1}(\RR)$ as $t \rightarrow 0^+$, we infer: \begin{itemize} \item {\em $f(t\Delta) \rightarrow f(0)I$ in $OPS^1_{1,0}({\bf M})$ as $t \rightarrow 0^+$.
} \end{itemize} Let $D$ denote the diagonal of ${\bf M} \times {\bf M}$. We can now show: \begin{itemize} \item[$\rhd$] {\em For any $N > 0$, \[ \lim_{t \rightarrow 0} \frac{\partial^N}{\partial t^N} K_{\sqrt t} = 0 \] in $C^{\infty}(({\bf M} \times {\bf M}) \setminus D)$.} \end{itemize} To prove this, we adapt the arguments of \cite{Tay81}, page 313. Say $\varphi, \psi \in C^{\infty}({\bf M})$ have disjoint supports. Suppose further that $\varphi \equiv 1$ in an open set $U$ and $\psi \equiv 1$ in an open set $V$. It is enough to show that, for any $C^{\infty}$ differential operator $Y$ on ${\bf M}$, acting in the $y$ variable, \[ \lim_{t \rightarrow 0} Y \frac{\partial^N}{\partial t^N} K_{\sqrt t}(x,y) \mbox{ (regarded as a function of } x) = 0 \] in $C^{\infty}(U)$, uniformly for $y \in V$. But if $x \in U$ and $y \in V$, then \[ Y \frac{\partial^N}{\partial t^N} K_{\sqrt t}(x,y) = \varphi(x)\left[Yf^{(N)}(t\Delta)(\psi \Delta^N \delta_y)\right](x) = \left[S_t(w_y)\right](x), \mbox{ say }, \] where $S_t$ is the pseudodifferential operator $\varphi Yf^{(N)}(t\Delta)\psi$, and $w_y = \Delta^N \delta_y$. For some $s > 0$, the set $\{w_y: y \in V\}$ is a bounded subset of $H^{-s}({\bf M})$. Also, as $t \rightarrow 0^+$, $f^{(N)}(t\Delta) \rightarrow f^{(N)}(0)I$ in $OPS^1_{1,0}({\bf M})$, so $Y f^{(N)}(t\Delta) \rightarrow f^{(N)}(0)Y$ in $OPS^{k+1}_{1,0}({\bf M})$, if $k = \deg Y$. But $\varphi, \psi$ have disjoint supports, so the map $R \rightarrow \varphi R \psi$ is continuous from $OPS^{k+1}_{1,0}({\bf M})$ to $OPS^{-\infty}_{1,0}({\bf M})$. Therefore $S_t \rightarrow \varphi \left[f^{(N)}(0) Y\right] \psi \equiv 0$ in $OPS^{-\infty}_{1,0}({\bf M})$.
Thus: \[ S_t w_y \rightarrow 0 \mbox{ in } C^{\infty}({\bf M}), \mbox{ uniformly for }y \in V, \] as desired.\\ \ \\ Applying the mean value theorem in the $t$ variable repeatedly to the last fact about $K_{\sqrt t}$, we see: \begin{itemize} \item[ $\rhd$] Let $E$ be any fixed compact subset of $({\bf M} \times {\bf M}) \setminus D$, and let ${\mathcal U}$ be the interior of $E$. Then for any $k, N$ there exists $C_{k,N}$ such that \[ \|K_{\sqrt t}\|_{C^k({\mathcal U})} \leq C_{k,N} t^N \] whenever $0 < t < 1$. \end{itemize} So far, nearly everything in this section has been well-known, but now we must consider the behavior of $K_t$ near the diagonal for small $t$. As we have explained and motivated in the introduction, this behavior is described by (\ref{xykt}): \begin{lemma} \label{manmol} Say $f(0) = 0$. Then for every pair of $C^{\infty}$ differential operators $X$ $($in $x)$ and $Y$ $($in $y)$ on ${\bf M}$, and for every integer $N \geq 0$, there exists $C_{N,X,Y}$ as follows. Suppose $\deg X = j$ and $\deg Y = k$. Then \begin{equation} \label{diagest} t^{n+j+k} \left|\left(\frac{d(x,y)}{t}\right)^N XYK_t(x,y)\right| \leq C_{N,X,Y} \end{equation} for all $t > 0$ and all $x,y \in {\bf M}$. \end{lemma} \begin{proof} Of course $d$ is bounded on ${\bf M} \times {\bf M}$. Thus, by what we already know about $K_t$, it suffices to prove (\ref{diagest}) for $0 < t < 1$. In fact, with notation as in Proposition \ref{ujvj}, it suffices to show that (\ref{diagest}) holds whenever $0 < t < 1$ and $d(x,y) < \delta/3$. Cover ${\bf M}$ by a finite collection of balls $\{B(z_l,\delta/3): 1 \leq l \leq J\}$. Then any pair of points $(x,y)$ with $d(x,y) < \delta/3$ lie together in one of the balls $B(z_l,2\delta/3)$. Thus, it suffices to show that (\ref{diagest}) holds whenever $0 < t < 1$ and $x,y \in B(z_l,2\delta/3)$ for some $l$. Moreover, we may fix a positive integer $M$ and prove that (\ref{diagest}) holds for all $N \leq M$. 
\\ \ \\ Now $K_t$ is the kernel of $f(t^2\Delta) = G(t\sqrt{\Delta})$, where $G(\xi) = f(\xi^2)$. We claim that it is enough to prove (\ref{diagest}) in each of the following two cases: \begin{itemize} \item[$(i)$] supp$\widehat{G} \subseteq (-1,1)$; and \item[$(ii)$]$G$ vanishes to order at least $M$ at $0$. \end{itemize} Indeed, any even $G$ in ${\mathcal S}(\RR)$ can be written as the sum of two even functions $G_1$ and $G_2$, where $G_1$ is of type (i) and $G_2$ is of type (ii). (To see this, say that, for $0 \leq l \leq M-1$, $G^{(l)}(0) = a_l$. It is enough to show that there exists an even function $G_1$ with supp$\widehat{G_1} \subseteq (-1,1)$, such that $G_1^{(l)}(0)= a_l$, for $0 \leq l \leq M-1$, for then we can set $G_2=G-G_1$. For this, see Lemma \ref{ccamplem} in the Appendix (Section \ref{cclem}).)\\ In case (i), note that, by Huygens' principle, the support of $K_t$, the kernel of \begin{equation}\notag G(t{\sqrt \Delta}) = c\int_{-\infty}^{\infty} \hat{G}(s) e^{-ist{\sqrt \Delta}} ds, \end{equation} is contained in $\{(x,y): d(x,y) \leq t\}$. Thus, in this case, we may take $M=0$. In either case (i) or case (ii), it is sufficient to show that, for every $\varphi, \psi \in C_c^{\infty}(B(z_l,\delta))$, we have that \begin{align}\notag t^{n+j+k} \left(\frac{d(x,y)}{t}\right)^N \left|XY\left[\varphi(x)K_t(x,y)\psi(y)\right] \right| \leq C \end{align} whenever $\deg X = j$ and $\deg Y = k$, for all $0 < t < 1$, all $N \leq M$, and all $x,y \in B(z_l,\delta)$. (Indeed, we could then take $\varphi, \psi \equiv 1$ on $B(z_l,2\delta/3)$.) Select $U_i$ as in Proposition \ref{ujvj}, with $B(z_l,3\delta) \subseteq U_i$. Now, $\varphi(x)K_t(x,y)\psi(y)$ is the kernel of the pseudodifferential operator $\varphi G(t{\sqrt \Delta})\psi$. We can use the coordinate map $\phi_i$ to pull this kernel over to $\RR^n$, thereby obtaining a smooth, compactly supported kernel $L_t$, with support in $\RR^n \times \RR^n$. 
Let us change our notation and now use $x$ and $y$ to denote points in $\RR^n$. By Proposition \ref{ujvj}, it is enough to show that: \[t^{n+|\alpha|+|\beta|}\left(\frac{|x-y|}{t}\right)^N \left|\partial_x^{\alpha} \partial_y^{\beta} L_t(x,y)\right| \leq C \] for any multiindices $\alpha, \beta$, for all $0 < t < 1$, all $N \leq M$, and all $x,y \in \RR^n$. Now let $p_t(x,\xi)$ denote the symbol of the operator with kernel $L_t$. Then \begin{equation}\notag L_t(x,y) = \int e^{i(y-x)\cdot \xi} p_t(x,\xi) d\xi. \end{equation} Thus, $\partial_x^{\alpha} \partial_y^{\beta} L_t(x,y)$ is a finite linear combination of terms of the form \begin{equation} \label{typterm} T = \int e^{i(y-x)\cdot \xi} \xi^{\gamma} \partial_x^{\delta} p_t(x,\xi) d\xi, \end{equation} where $|\gamma|, |\delta| \leq |\alpha| + |\beta|$. In case (i) we may take $M=0$, so we need only estimate $|T|$, the absolute value of the term $T$ in (\ref{typterm}). It will be enough to show that $|T| \leq Ct^{-n-|\gamma|}$ (for $0 < t < 1$), since $Ct^{-n-|\gamma|} \leq Ct^{-n-|\alpha|-|\beta|}$. But \begin{eqnarray*} |T| & \leq &\int_{|\xi| \leq 1/t} |\xi^{\gamma} \partial_x^{\delta} p_t(x,\xi)|d\xi + \int_{|\xi| > 1/t} |\xi^{\gamma} \partial_x^{\delta} p_t(x,\xi)|d\xi \\ & &\leq C\left[A_t t^{-n-|\gamma|} + B_t \int_{1/t}^{\infty} r^{|\gamma|+n-1}r^{-|\gamma|-n-1} dr\right]\\ & &\leq C\left[A_t t^{-n-|\gamma|} + B_t t\right] \end{eqnarray*} where $A_t = \sup_{x,\xi}|\partial_x^{\delta} p_t(x,\xi)|$, and $B_t = \sup_{x,\xi}|\xi|^{|\gamma|+n+1}|\partial_x^{\delta} p_t(x,\xi)|$. But, by (\ref{tmdep}) and the continuity of the map $p \rightarrow p({\sqrt \Delta})$, from $S^m_{1}(\RR)$ to $OPS^m_{1,0}({\bf M})$ in the cases $m = 0$ and $m = -(|\gamma| + n + 1)$, we find that $A_t \leq C$ (independent of $0 < t < 1$) and $B_t \leq Ct^{-|\gamma|-n-1}$. Altogether $|T| \leq Ct^{-n-|\gamma|}$, as claimed. This completes the proof in case (i). 
In case (ii), we need only show that for every $n$-tuple $\nu$ with $|\nu| \leq M$, we have that $|(x-y)^{\nu}T| \leq Ct^{-n-|\gamma|+|\nu|}$. Note that $(x-y)^{\nu} e^{i(y-x)\cdot \xi} = c \partial_{\xi}^{\nu}e^{i(y-x)\cdot \xi}$. Substituting this in the explicit expression for $(x-y)^{\nu}T$, and repeatedly integrating by parts in $\xi$, we see that $(x-y)^{\nu}T$ is a finite linear combination of terms of the form \begin{equation}\notag T' = \int e^{i(y-x)\cdot \xi} \xi^{\kappa} \partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi) d\xi, \end{equation} where $|\kappa| \leq |\gamma|$, $|\chi| \leq |\nu| \leq M$, and $|\gamma|- |\kappa| + |\chi| = |\nu|$. Just as in our estimate for $T$ above, we see that \begin{eqnarray*} |T'| & \leq & \int_{|\xi| \leq 1/t} \left|\xi^{\kappa} \partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right|d\xi + \int_{|\xi| > 1/t} \left|\xi^{\kappa} \partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right|d\xi \\ & \leq & C\left[A_t t^{-n-|\kappa|} + B_t \int_{1/t}^{\infty} r^{|\kappa|+n-1}r^{-|\kappa|-n-1} dr\right]\\ & \leq & C\left[A_t t^{-n-|\kappa|} + B_t t\right] \end{eqnarray*} where now \[ A_t = \sup_{x,\xi}\left|\partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right| = \sup_{x,\xi}(1 + |\xi|)^{-|\chi|+|\chi|}\left|\partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right|,\] and \[B_t = \sup_{x,\xi}|\xi|^{|\kappa|+n+1}\left|\partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right| \leq \sup_{x,\xi}(1 + |\xi|)^{|\kappa|+n+1-|\chi|+|\chi|}\left|\partial_x^{\delta} \partial_{\xi}^{\chi} p_t(x,\xi)\right|.\] But, by (\ref{tmdep}) and the continuity of the map $p \rightarrow p({\sqrt \Delta})$, from $S^m_{1}(\RR)$ to $OPS^m_{1,0}({\bf M})$ in the cases $m = |\chi|$ and $m = |\chi|-(|\kappa| + n + 1)$, we find that $A_t \leq Ct^{|\chi|}$ and $B_t \leq Ct^{|\chi|-|\kappa|-n-1}$.
((\ref{tmdep}) may be used here, since $G$ vanishes to order at least $M$ at $0$, and both $|\chi|$ and $|\chi|-(|\kappa| + n + 1)$ are less than or equal to $M$.) Altogether $|T'| \leq Ct^{-n+|\chi|-|\kappa|} = Ct^{-n-|\gamma|+|\nu|}$, as claimed. This completes the proof.\end{proof} \begin{remark} Note that, in Lemma \ref{manmol}, the conclusion (\ref{diagest}) holds even without the hypothesis $f(0)=0$, {\em provided} $t$ is restricted to lie in the interval $(0,1]$. Indeed, after the second sentence of the proof of the lemma, we assumed $0 < t < 1$ and never used the hypothesis that $f(0)=0$. Of course (\ref{diagest}) holds also for $t=1$ by continuity.\end{remark} \section{Continuous ${\cal S}$-Wavelets on Manifolds}\label{continuous-s-wavelets-on-manifolds} We now turn to our definitions of continuous wavelets and continuous ${\cal S}$-wavelets on ${\bf M}$, which we have motivated in the introduction. \begin{definition} \label{ctswvmn} Suppose that the function $K_t(x,y)$ is smooth for $t > 0$, $x,y \in {\bf M}$. For $t > 0$, define $T_t: L^2({\bf M}) \rightarrow C^{\infty}({\bf M})$ to be the operator with kernel $K_t$, so that for all $F \in L^2({\bf M})$ and all $x \in {\bf M}$, \[ (T_t F)(x) = \int_{\bf M} K_t(x,y) F(y) d\mu(y). \] As usual, let $P$ denote the projection in $L^2({\bf M})$ onto the space of constant functions. Then we define $K_t(x,y)$ to be a {\em continuous wavelet} on ${\bf M}$, provided the following three conditions hold, for some $c > 0$: \begin{itemize} \item[(i)] For all $F \in L^2({\bf M})$, \begin{equation} \label{ctswveq} \int_0^{\infty} \|T_t F\|^2_2 \frac{dt}{t} = c \|(I-P)F\|^2_2; \end{equation} \item[(ii)] $\int_{\bf M} K_t(x,y) d\mu(y) = 0$ for all $t > 0$ and all $x \in {\bf M}$ (or, equivalently, $T_t(1) = 0$ for all $t > 0$); \item[(iii)] $\int_{\bf M} K_t(x,y) d\mu(x) = 0$ for all $t > 0$ and all $y \in {\bf M}$ (or, equivalently, $T_t^*(1) = 0$ for all $t > 0$). 
\end{itemize} \end{definition} \begin{definition} \label{ctsSwavmn} Suppose $K_t(x,y)$ is a continuous wavelet on ${\bf M}$. We then say that $K_t(x,y)$ is a continuous ${\cal S}$-wavelet on ${\bf M}$, if the following additional condition holds: \begin{itemize} \item[(iv)] For every pair of $C^{\infty}$ differential operators $X$ (in $x$) and $Y$ (in $y$) on ${\bf M}$, and for every integer $N \geq 0$, there exists $C_{N,X,Y}$ as follows. Suppose $\deg X = j$ and $\deg Y = k$. Then \begin{equation} \label{diagest1} t^{n+j+k} \left|\left(\frac{d(x,y)}{t}\right)^N XYK_t(x,y)\right| \leq C_{N,X,Y} \end{equation} for all $t > 0$ and all $x,y \in {\bf M}$. \end{itemize} \end{definition} We then have the following result: \begin{theorem} \label{ctswvthm} Say $f_0 \in {\mathcal S}(\RR^+)$, $f_0 \not\equiv 0$, and let $f(s) = sf_0(s)$. For $t > 0$, let $K_t$ be the kernel of $f(t^2 \Delta)$. Then $K_t(x,y)$ is a continuous ${\cal S}$-wavelet on ${\bf M}$. \end{theorem} \begin{proof} Of course, condition (iv) is Lemma \ref{manmol}. As we have seen, conditions (ii) and (iii) of Definition \ref{ctswvmn} are immediate consequences of (\ref{kerexp}). As for condition (i), say $F \in L^2({\bf M})$. We need only take the inner product of both sides of (\ref{strint}) with $F$ to see that, if $c = \int_0^{\infty} |f(t)|^2 \frac{dt}{t}$, then \[ \int_{0}^{\infty} \|f(t\Delta)F\|_2^2 \frac{dt}{t} = c\|(I-P)F\|_2^2.\] Replacing $t$ by $t^2$ in this equation, we find that \[ \int_{0}^{\infty} \|f(t^2\Delta)F\|_2^2 \frac{dt}{t} = \frac{c}{2}\|(I-P)F\|_2^2,\] which yields condition (i) at once. This completes the proof.\end{proof} As for properties of continuous wavelets, we first remark that it is a standard, simple matter to show the following result, which generalizes (\ref{strint}) (with $T = \Delta$ there): \begin{proposition} \label{ctswvinvthm} Suppose $K_t(x,y)$ is a continuous wavelet on ${\bf M}$, and, for $t > 0$, let $T_t$ be the operator on $L^2({\bf M})$ with kernel $K_t$.
Then for any $F \in (I-P)L^2({\bf M})$, we may reconstruct $F$ through the identity \begin{equation} \label{ctswveq1} \int_0^{\infty} T_t^* T_t F \frac{dt}{t} = cF. \end{equation} Here the integral on the left side of (\ref{ctswveq1}) converges unconditionally in $L^2$. \end{proposition} \begin{proof} Let ${\mathcal H}_1$ be the Hilbert space $(I-P)L^2({\bf M})$, and let ${\mathcal H}_2$ be the Hilbert space $L^2(\RR^+, {\mathcal H}_1, dt/t)$. By our definition of continuous wavelet, we may define a bounded operator $U: {\mathcal H}_1 \rightarrow {\mathcal H}_2$ by \[ UF = (T_t F)_{t > 0}. \] Moreover, we may define a bounded operator $V: {\mathcal H}_2 \rightarrow {\mathcal H}_1$ by \[ V(G_t)_{t > 0} = \int_0^{\infty} T_t^*G_t \frac{dt}{t} \] where the integral converges unconditionally in ${\mathcal H}_1$. Indeed, let $\|\:\|$ denote $\|\:\|_{{\mathcal H}_1}$, and let $S = \{F \in {\mathcal H}_1: \|F\| = 1\}$. If $E \subseteq (0,\infty)$ is measurable and contained in a compact subset of $(0,\infty)$, we have \begin{align}\notag \left\|\int_E T_t^* G_t \frac{dt}{t}\right\| = \sup_{F \in S} \left|\int_E \langle T_t^* G_t,F \rangle \frac{dt}{t} \right| = &\sup_{F \in S} \left|\int_E \langle G_t,T_t F \rangle \frac{dt}{t} \right|\\\notag & \leq \left[\int_E \|G_t\|^2 \frac{dt}{t}\right]^{1/2} \left[\sup_{F \in S}\int_E \|T_t F\|^2 \frac{dt}{t}\right]^{1/2}; \end{align} but, by (\ref{ctswveq}), this is less than or equal to $c^{1/2}\left[\int_E \|G_t\|^2 \frac{dt}{t}\right]^{1/2}$, and the unconditional convergence follows. One now readily checks that $V = U^*$. By (\ref{ctswveq}), $ \langle U^*UF, F \rangle = c\|F\|^2$ for all $F \in {\mathcal H}_1$.
Polarizing this identity, we find (\ref{ctswveq1}), as desired.\end{proof} As an example of the usefulness of continuous ${\cal S}$-wavelets, we now show the following direct analogue of a theorem of Holschneider and Tchamitchian (\cite{hotch}): \begin{theorem} \label{hldchar} Let $K_t(x,y)$ be a continuous ${\cal S}$-wavelet on ${\bf M}$, and, for $t > 0$, let $T_t$ be the operator on $L^2$ with kernel $K_t$. Suppose $F \in L^2({\bf M})$. Then:\\ (a) If $F$ is H\"older continuous, with H\"older exponent $\alpha$ ($0 < \alpha \leq 1$), then for some $C > 0$, \begin{equation} \label{hldcharway} \|T_t F\| \leq C t^{\alpha} \end{equation} for all $t > 0$. (Here $\|\:\|$ denotes sup norm.)\\ (b) Conversely, say $0 < \alpha < 1$, $C > 0$, and that $F$ satisfies (\ref{hldcharway}) for all $t > 0$. Then $F$ is H\"older continuous, with H\"older exponent $\alpha$. \end{theorem} \begin{proof} For (a), we just note: \begin{align}\notag \left| (T_t F)(x) \right|&= \left| \int F(y)K_t(x,y)d\mu(y)\right| \label{goholder1}\\\notag &= \left| \int \left( F(x)-F(y)\right)K_t(x,y) d\mu(y)\right|\\\notag &\leq Ct^{-n}\int d(x,y)^{\alpha} \left[1+d(x,y)/t\right]^{-n-1-\alpha} d\mu(y)\\\notag &\leq Ct^{\alpha-n}\int \left[1+d(x,y)/t\right]^{-n-1} d\mu(y)\\\notag &\leq Ct^{\alpha}\notag \end{align} by (\ref{ptestm}), as desired. For (b), of course $PF$, being constant, is H\"older continuous of any exponent. Since $T_t 1= 0$, we may assume that $F = (I-P)F$. Set $g_t = T_t F$, so that \begin{equation} \label{gtcta} \|g_t\| \leq Ct^{\alpha}. \end{equation} For $x, y \in {\bf M}$, set $$K_{t,x}(y) = K_t^y(x) = K_t(x,y).$$ Thus, for all $x$, $g_t(x) = \langle K_{t,x},\overline{F}\rangle$. Since we are assuming $F \in L^2$, by Cauchy-Schwarz and (\ref{diagest1}), we obtain the additional estimate \begin{equation} \label{gtctn} \|g_t\| \leq Ct^{-n}.
\end{equation} By (\ref{diagest1}), we have that $|K_t(x,y)| \leq Ct^{-n}(1 + d(x,y)/t)^{-n-1}$, so by (\ref{ptestm}), \begin{equation} \label{kty1} \|K_t^y\|_1 \leq C \end{equation} for any $y$. Thus, for any $y$, \begin{equation}\notag |(T_t^* T_t F)(y)| = |\langle g_t, K_t^y \rangle| \leq \|g_t\|\:\|K_t^y\|_1 \leq C\|g_t\|. \end{equation} Accordingly, by (\ref{gtcta}) and (\ref{gtctn}), for any $y$, \begin{equation}\notag \int_0^{\infty} |(T_t^* T_t F)(y)| \frac{dt}{t} \leq C\left(\int_0^1 t^{\alpha - 1} dt + \int_1^{\infty} t^{-n - 1} dt\right) \leq C. \end{equation} By Proposition \ref{ctswvinvthm}, we now see that for almost every $y$, \[ cF(y) = \int_0^{\infty} T_t^* T_t F(y) \frac{dt}{t}, \] and from this, that $F \in L^{\infty}$. To complete the proof, we claim that it suffices to show that if $d(y,z) \leq \min(t,\delta)$, then \begin{equation} \label{ktxyz} \int_{\bf M} |K_t(x,y)-K_t(x,z)| d\mu(x) \leq Cd(y,z)/t. \end{equation} For then, by (\ref{gtcta}), (\ref{gtctn}), (\ref{kty1}) and (\ref{ktxyz}), we would have, whenever $d(y,z) < \delta$, \begin{eqnarray*} c|F(y)-F(z)| & \leq & \int_0^{\infty} \int_{\bf M} |K_t(x,y)-K_t(x,z)|d\mu(x) \|g_t\| \frac{dt}{t}\\ & \leq & C\left[ \int_0^{d(y,z)} t^{\alpha-1} dt + d(y,z)\int_{d(y,z)}^1 t^{\alpha-2} dt + d(y,z)\int_1^{\infty} t^{-n-2} dt\right]\\ & \leq & C d(y,z)^{\alpha}, \end{eqnarray*} as needed. To prove (\ref{ktxyz}), choose $U_i \supseteq B(y,3\delta)$, and let us work in the local coordinates on $U_i$ obtained from $\phi_i$. We use the mean value theorem. By (\ref{diagest1}), for any $x \in {\bf M}$, there is a point $w_x$ on the line segment joining $y$ to $z$ such that \[ |K_t(x,y)-K_t(x,z)| \leq C|y-z| t^{-n-1}(1 + d(x,w_x)/t)^{-n-1}.\] By Proposition \ref{ujvj}, if $w$ is any point on that line segment, \[ d(y,w) \leq c_2|y-w| \leq c_2|y-z| \leq c_1c_2d(y,z) \leq c_1c_2t.
\] Thus the diameter of the line segment is at most $2c_1c_2t$, and so, by (\ref{alcmpN}), we have \[ |K_t(x,y)-K_t(x,z)| \leq Cd(y,z) t^{-n-1}(1 + d(x,y)/t)^{-n-1}.\] (\ref{ktxyz}) now follows from (\ref{ptestm}), as desired.\end{proof} \section{Homogeneous Manifolds}\label{homogeneous-manifolds} In this section we look at the situation in which ${\bf M}$ has a transitive group $G$ of smooth metric isometries. (Such manifolds are usually called homogeneous manifolds.) Obvious examples of such manifolds are the sphere $S^n$, where we take $G$ to be the group $SO(n+1)$ of rotations, and the torus ${\bf T}^n = (S^1)^n$, where we take $G$ to be the group $[SO(2)]^n$. If $T \in G$ and $F$ is a function on ${\bf M}$, we define the function $TF$ on ${\bf M}$ by $(TF)(x) = F(T^{-1}x)$. Then $T: L^2({\bf M}) \rightarrow L^2({\bf M})$ is a unitary operator which commutes with the Laplace-Beltrami operator $\Delta$. Consequently, as operators on $L^2({\bf M})$, $f(t^2\Delta)$ commutes with elements of $G$ for any bounded Borel function $f$ on $\RR$, and in particular, if $f \in {\mathcal S}(\RR)$, which we now assume. Thus, if $T \in G$, $F \in L^2({\bf M})$ and $x\in {\bf M}$, we have \begin{align} \notag \int_{\bf M} K_t(Tx, Ty) F(y) d\mu(y) &= \int_{\bf M} K_t(Tx, y) F(T^{-1}y) d\mu(y) \\\notag &= [f(t^2\Delta)(TF)](Tx)\\\notag &= T([f(t^2\Delta)(F)])(Tx); \end{align} but this is just $[f(t^2\Delta)(F)](x) = \int_{\bf M} K_t(x, y) F(y) d\mu(y)$, so \begin{equation} \label{rotinv} K_t(Tx,Ty) = K_t(x,y) \end{equation} for all $x,y \in {\bf M}$. Thus, since $G$ is transitive, if $x_0$ is any fixed point in ${\bf M}$, once one knows $K_t(x_0,y)$ for all $y$, then one knows $K_t(x,y)$ for all $x,y$. In the analogous situation on $\RR^n$, $K_t(x,y)$ has the form $t^{-n} \psi((x-y)/t)$ for some $\psi \in \mathcal{S}$, and so $K_t(x,y) = K_t(Tx,Ty)$ for any {\em translation} $T$ on $\RR^n$. Equation (\ref{rotinv}) is a natural analogue of this fact for ${\bf M}$.
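The invariance (\ref{rotinv}) is easy to test numerically. The following minimal sketch (in Python, used here purely for illustration; the choices of $f$, $t$, and the sample points are ours) works on the circle ${\bf T}^1 = S^1$, where $G$ is the rotation group: the kernel of $f(t^2\Delta)$ is $K_t(r,s) = \sum_m f(4\pi^2 t^2 m^2)\,e^{2\pi i m (r-s)}$ (the case $n=1$ of the torus formula derived below), so $K_t$ is unchanged when both arguments are rotated.

```python
import math, cmath

def K_t(t, r, s, f, M=60):
    # Kernel of f(t^2 * Delta) on the circle T^1 = R/Z, truncated at |m| <= M:
    # K_t(r, s) = sum_m f(4 pi^2 t^2 m^2) exp(2 pi i m (r - s))
    return sum(f(4 * math.pi**2 * t**2 * m * m) *
               cmath.exp(2j * math.pi * m * (r - s))
               for m in range(-M, M + 1))

# "Mexican hat" multiplier f(u) = u e^{-u}, so that f(0) = 0
f = lambda u: u * math.exp(-u)

t, r, s, a = 0.3, 0.12, 0.45, 0.37
k1 = K_t(t, r, s, f)
k2 = K_t(t, r + a, s + a, f)   # both points rotated by the same angle a
print(abs(k1 - k2))            # invariance K_t(Tr, Ts) = K_t(r, s)
```

Since the kernel depends only on $r-s$, the agreement here is exact up to floating-point roundoff.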
It is interesting to note that one has \begin{equation} \label{indlink1} K_t(x,x) = \mbox{tr}(f(t^2\Delta))/\mbox{vol}({\bf M}) \end{equation} for all $x$ and all $f \in {\mathcal S}(\RR^+)$. Indeed, by (\ref{rotinv}), $K_t(x,x)$ is constant for $x \in {\bf M}$. Accordingly \[ \mbox{vol}({\bf M}) K_t(x,x) = \int_{\bf M} K_t(y,y) d\mu(y) = \sum_l f(t^2\lambda_l)\int_{\bf M}|u_l(y)|^2 d\mu(y) = \sum_l f(t^2\lambda_l) = \mbox{tr}(f(t^2\Delta))\] as claimed. Say now $c > 0$, and let us look at the special case \begin{equation} \label{mxstup} f(s) = (s/c)e^{-(s/c)}. \end{equation} We have $\mbox{tr}(f(t^2\Delta)) = \mbox{tr}((t^2/c)\Delta e^{-(t^2\Delta)/c})$. A well known fact, usually associated with the heat kernel approach to index theorems (\cite{MinPle}, \cite{Gilkey}, \cite{Polt} and \cite{Gilkeybook}, pages 58 and 316), is that as $s \rightarrow 0^+$, \begin{equation} \label{heattras} \mbox{tr}(e^{-s\Delta}) \sim \sum_{m=0}^{\infty} s^{m-n/2}a_m, \end{equation} where \begin{equation} \label{heattr0} a_0 = (4\pi)^{-n/2} \mbox{vol}({\bf M}). \end{equation} Differentiating with respect to $s$, one finds that \[ \mbox{tr}(\Delta e^{-s\Delta}) \sim \sum_{m=0}^{\infty} (\frac{n}{2}-m)s^{m-1-n/2}a_m. \] To lowest order, then, \begin{equation} \label{mexhtxx} K_t(x,x) = \mbox{tr}((t^2/c)\Delta e^{-(t^2\Delta)/c})/\mbox{vol}({\bf M}) \sim \frac{nc^{n/2} t^{-n}}{2(4\pi)^{n/2}}. \end{equation} \ \\ Again let $f$ be general, but now let us look at the special case ${\bf M} = {\bf T}^n$, the torus. We write ${\bf T}^n = \left\{(e^{2 \pi i r_1},\ldots, e^{2 \pi i r_n}):\; -1/2 < r_1, \ldots, r_n \leq 1/2\right\}$. Here an orthonormal basis of eigenfunctions is given simply by $\{ e^{2\pi i m \cdot r}: m \in {\ZZ}^n \}$. The Laplace-Beltrami operator $\Delta$ is just $-\sum_{l = 1}^n (\partial/ \partial r_l)^2$, and the eigenvalues are given through $\Delta e^{2\pi i m \cdot r} = 4 \pi^2 \|m\|^2 e^{2\pi i m \cdot r}$. (Here $\|m\|^2 = m_1^2 + \ldots + m_n^2$.)
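These predictions are easy to check numerically on the torus: ${\bf T}^2$ has volume $1$ and eigenvalues $4\pi^2\|m\|^2$, so by (\ref{indlink1}), $K_t(x,x) = \mbox{tr}(f(t^2\Delta)) = \sum_{m \in \ZZ^2} f(4\pi^2 t^2 \|m\|^2)$, and for $n=2$, $c=4\pi$, the asymptotic (\ref{mexhtxx}) predicts $K_t(x,x) \sim t^{-2}$. A small sketch (Python rather than the Maple used below; the truncation bound $M$ is our choice):

```python
import math

def trace_f_t2_delta(t, c, M=80):
    # On the torus T^2 (volume 1), the eigenvalues of Delta are 4 pi^2 |m|^2,
    # so tr f(t^2 Delta) = sum over m in Z^2 of f(4 pi^2 t^2 |m|^2),
    # here with f(s) = (s/c) e^{-s/c} as in (mxstup).
    f = lambda s: (s / c) * math.exp(-s / c)
    return sum(f(4 * math.pi**2 * t**2 * (m1 * m1 + m2 * m2))
               for m1 in range(-M, M + 1) for m2 in range(-M, M + 1))

c = 4 * math.pi
for t in (0.5, 0.25, 0.125):
    # prediction: t^2 K_t(x,x) = t^2 tr(f(t^2 Delta)) -> 1 as t -> 0+
    print(t, t**2 * trace_f_t2_delta(t, c))
```

The printed values of $t^2 K_t(x,x)$ approach $1$, in agreement with the table of values of $t^2\pi h_t(0,0)$ reported below.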
The kernel $K_t(r,s)$ of $f(t^2\Delta)$ is given by \[ K_t(r,s) = \sum_{m \in \ZZ^n} f(4\pi^2 t^2 \|m\|^2) e^{2\pi i m \cdot (r-s)}. \] Thus, if $F \in L^2({\bf T}^n)$, \[ [f(t^2 \Delta)F](r) = \int_{-1/2}^{1/2} \ldots \int_{-1/2}^{1/2} F(s) K_t(r,s) ds_1\ldots ds_n = (F*h_t)(r) \] where \[ h_t(s) = \sum_{m \in \ZZ^n} f(4\pi^2 t^2 \|m\|^2) e^{2\pi i m \cdot s}, \] and $*$ denotes the natural convolution on ${\bf T}^n$. We specialize now to the case $n = 2$, $f(u) = ue^{-u/4\pi}/4\pi^2$. We free the letter $n$ for other uses. For $t > 0$ define the functions $U_t, V_t : \RR \rightarrow \CC$ by \begin{equation} \label{utdf} U_t(x) = \sum_{n=-\infty}^{\infty} e^{-\pi t^2 n^2} e^{2\pi i n x}, \end{equation} \begin{equation} \label{vtdf} V_t(y) = \sum_{n=-\infty}^{\infty}(nt)^2 e^{-\pi t^2 n^2} e^{2\pi i n y}. \end{equation} It is then easy to calculate that \begin{equation}\notag h_t(s_1,s_2) = U_t(s_1) V_t(s_2) + U_t(s_2) V_t(s_1). \end{equation} $h_t$ is the ``Mexican hat'' on the torus ${\bf T}^2$. One can use these equations to draw its graph. One of course needs to approximate the series in (\ref{utdf}) and (\ref{vtdf}) by finite sums. This is a simple matter if $t$ is greater than $1$, but if $t$ is small the series do not converge very quickly. Fortunately, however, we can give alternative series expansions for $U_t$ and $V_t$ which do converge very quickly for $0 < t < 1$. We do this by using the Poisson summation formula, or rather, its proof: if $g \in {\mathcal S}(\RR)$, then the periodic function $G(x) = \sum_{n = -\infty}^{\infty} g(x + n)$ has Fourier series $\sum_{n = -\infty}^{\infty} \check{g}(n) e^{2\pi i nx}$, and hence, $G(x)$ equals the latter series. (Here we use the inverse Fourier transform $\check{g} (x) = \int_{-\infty}^{\infty} g(\xi) e^{-2\pi i x \xi} d\xi$.) Applying this with $\check{g}(y) = e^{-\pi t^2 y^2}$, we obtain the formula \begin{equation} \label{utalt} U_t(x) = \frac{1}{t} \sum_{n=-\infty}^{\infty} e^{-\pi (n+x)^2/t^2}.
\end{equation} Taking instead $\check{g}(y) = (ty)^2 e^{-\pi t^2 y^2}$, we obtain the formula \begin{equation} \label{vtalt} V_t(x) = \frac{1}{t} \sum_{n=-\infty}^{\infty}(\frac{1}{2\pi} - (\frac{n+x}{t})^2) e^{-\pi (n+x)^2/t^2}. \end{equation} (\ref{utalt}) and (\ref{vtalt}) converge very quickly for $t$ small, making it practical to draw pictures of $h_t$ for $t$ small. We include pictures, obtained by using Maple, of the Mexican hat functions $\pi h_t(r_1,r_2)$, $-1/2 < r_1, r_2 \leq 1/2$, for $t = 2$ (Figure 1, left), $t = 1/2$ (Figure 1, middle) and $t = 1/8$ (Figure 1, right). \begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[scale=0.32]{fig1.pdf} & \includegraphics[scale=0.32]{fig2.pdf}& \includegraphics[scale=0.32]{fig3.pdf}\\ \end{tabular}\\ \caption{\label{fig1.pdf}\small $h_t$ on ${\bf T}^2$ for $t=2$ (left), $t=1/2$ (middle), $t=1/8$ (right)} \end{center} \end{figure} Note that the characteristic Mexican hat shape is obtained. To keep the pictures uncluttered, we have omitted the axes. However, one can ask Maple to evaluate $t^2\pi h_t(0,0)$ for various values of $t$. When $t=2$, it is $.00070$; when $t=1$, it is $.59017$; when $t=1/2$, it is $.99984$, and when $t=1/8$, it is $1.00000$. This is consistent with the predictions of (\ref{mxstup}) with $c = 4\pi$, and (\ref{mexhtxx}).\\ Now let us look at the special case ${\bf M} = S^n$, and let $f$ be general for now. Let ${\bf N} = (1,0,\ldots,0)$, the ``north pole''. We shall now calculate $K_t({\bf N},y)$ as an explicit infinite series. (As we have explained, we will then know $K_t(x,y)$ for all $x,y$.) Recall (\cite{StWeis71}) that we may write $L^2(S^n) = \bigoplus_{l \geq 0}{\mathcal H}_l$, where ${\mathcal H}_l$ is the space of spherical harmonics of degree $l$. If $P \in {\mathcal H}_l$, then \[ \Delta P = l(l+n-1) P.
\] Within each space ${\mathcal H}_l$ is a unique {\em zonal harmonic} $Z_l$, which has the property that for all $P \in {\mathcal H}_l$, $P({\bf N}) = \langle P,Z_l \rangle $. In particular, $P$ is orthogonal to $Z_l$ if and only if $P({\bf N}) = 0$. We wish to use (\ref{kerexp}) to evaluate $K_t({\bf N},y)$. To do so we choose an orthonormal basis for each ${\mathcal H}_l$, one of whose elements is $Z_l/\|Z_l\|_2$. Then any other element of this orthonormal basis vanishes at ${\bf N}$, so we find \begin{equation}\notag K_t({\bf N},y) = \sum_{l=0}^{\infty} f(t^2l(l+n-1)) Z_l({\bf N})Z_l(y)/\|Z_l\|_2^2. \end{equation} But surely $Z_l({\bf N}) = \langle Z_l,Z_l \rangle $, so we simply have \begin{equation} \label{kersph2} K_t({\bf N},y) = \sum_{l=0}^{\infty} f(t^2l(l+n-1))Z_l(y). \end{equation} However, $Z_l(y)$ is known explicitly. In fact (\cite{StWeis71}), if $\omega_n$ is the area of $S^n$, then for some constant $c_l$, \begin{equation} \label{zony} Z_l(y) = c_l P_l^{\lambda}(y_1), \end{equation} where $y = (y_1,\ldots,y_{n+1})$, $\lambda = (n-1)/2$, and $P^{\lambda}_l$ is the ultraspherical (or Gegenbauer) polynomial of degree $l$ associated with $\lambda$. The $P^{\lambda}_l$ may be defined in terms of the generating function \begin{equation} \label{genfn} (1-2r\tau+r^2)^{-\lambda} = \sum_{l=0}^{\infty} P_l^{\lambda}(\tau)r^l. \end{equation} In particular, if $\tau = 1$, we see that \[ (1-r)^{-(n-1)} = \sum_{l=0}^{\infty} P_l^{\lambda}(1)r^l, \] so that \begin{equation} \label{pk1} P_l^{\lambda}(1) = \binom{n+l-2}{l} = b_l. \end{equation} On the other hand, \begin{equation} \label{zonn} Z_l({\bf N}) = [\omega_n]^{-1}\dim {\mathcal H}_l = [\omega_n]^{-1}\left[\binom{n+l}{n}-\binom{n+l-2}{n}\right] = d_l.
\end{equation} Comparing (\ref{zony}),(\ref{pk1}) and (\ref{zonn}), we see that \begin{equation} \label{ckeval} c_l = d_l/b_l = \frac{\binom{n+l}{n}-\binom{n+l-2}{n}}{\omega_n\binom{n+l-2}{l}} = \frac{n+2l-1}{\omega_n(n-1)}. \end{equation} From (\ref{kersph2}), we find \begin{equation} \label{kersph3} K_t({\bf N},y) = \sum_{l=0}^{\infty} \frac{(n+2l-1)}{\omega_n(n-1)}f(t^2l(l+n-1)) P_l^{\lambda}(y_1) := h_t(y_1). \end{equation} If $x \in S^n$, we can choose a rotation $T$ with $Tx = {\bf N}$. If also $y \in S^n$, then by (\ref{rotinv}), \[ K_t(x,y) = K_t(Tx,Ty) = K_t({\bf N},Ty) = h_t((Ty)_1) = h_t({\bf N}\cdot Ty) = h_t(Tx \cdot Ty) = h_t(x \cdot y). \] Thus, if $F \in L^2(S^n)$, \[ [f(t^2 \Delta) F](x) = \int_{S^n} F(y) h_t(x \cdot y) d\mu(y), \] the {\em spherical convolution} of $F$ and the axisymmetric function $h_t$. Let us now take $f(s) = se^{-s}$. From (\ref{kersph3}), we find \begin{equation} \label{kersph4} h_t(y_1) = K_t({\bf N},y) = \sum_{l=0}^{\infty} \frac{l(l+n-1)(n+2l-1)}{\omega_n(n-1)}t^2e^{-t^2l(l+n-1)} P_l^{\lambda}(y_1). \end{equation} One can use (\ref{kersph4}) to draw the graph of $h_t$ for any $t > 0$. We do so in the most practical situation, $n = 2$. We can go all around a great circle by using spherical coordinates, in which $y_1 = \cos(\theta)$, with $\theta$ going from $-\pi$ to $\pi$. We draw these graphs when $t = 1$ (Figure 2, left), $t = .1$ (Figure 2, middle) and $t = .05$ (Figure 2, right). Actually, since (\ref{mexhtxx}) predicts \[ 4\pi h_t(1) = 4\pi K_t({\bf N}, {\bf N}) \sim 1/t^2, \] we draw the graphs of $4\pi\:h_t(\cos(\theta))$ instead, with $\theta$ going from $-\pi$ to $\pi$ on the horizontal axis.
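The prediction $4\pi h_t(1) \sim 1/t^2$ can also be checked directly from (\ref{kersph4}): for $n=2$ we have $\omega_2 = 4\pi$, $n-1=1$, and $P_l^{\lambda}(1) = 1$, so at $y_1 = 1$ the series requires no special-function evaluations. A minimal numerical sketch (Python; the truncation $L$ is our choice):

```python
import math

def four_pi_h_t_at_1(t, L=400):
    # 4*pi*h_t(1) from (kersph4) with n = 2: omega_2 = 4 pi, n - 1 = 1,
    # and P_l^{1/2}(1) = 1, so the series reduces to
    # sum over l of l(l+1)(2l+1) t^2 exp(-t^2 l(l+1)), truncated at l = L.
    s = t * t
    return sum(l * (l + 1) * (2 * l + 1) * s * math.exp(-s * l * (l + 1))
               for l in range(L + 1))

for t in (0.2, 0.1, 0.05):
    print(t, t**2 * four_pi_h_t_at_1(t))   # should approach 1 as t -> 0+
```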
\begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[scale=0.30]{fig4.pdf} & \includegraphics[scale=0.35]{fig5.pdf} & \includegraphics[scale=0.35]{fig6.pdf} \end{tabular} \caption{ \label{fig2.pdf} $4\pi h_t(\cos \theta)$ on $S^2$ for $t=1$ (left), $t=.1$ (middle), $t=0.05$ (right) } \end{center} \end{figure} The pictures do bear out the relation $4 \pi h_t(1) \sim 1/t^2$. We also see the characteristic ``Mexican hat'' shape, familiar from the analogous situation on $\RR^1$, where $K_t(x,y)$ is the Schwartz kernel of $f(-t^2 d^2/dx^2)$. Of course, in that situation, $\left[f(-t^2 d^2/dx^2)F\right]\hat{\:}(\xi) = f(t^2 \xi^2) \hat{F}(\xi) = t^2 \xi^2 e^{-t^2 \xi^2} \hat{F}(\xi)$ for $F \in {\mathcal S}(\RR)$, so $K_t(x,y) = t^{-1} \psi((x-y)/t)$, where $\psi$ is the second derivative of a Gaussian. The series (\ref{kersph4}) converges quickly for $t$ large, but not if $t$ is small. Therefore, as in the case of the torus, for computational purposes, it is important to have a quickly converging alternate expression for this function for small $t$. Since the pictures indicate that $4\pi h_t(\cos \theta)$ is negligible outside a small neighborhood of $\theta = 0$ if $t$ is small, one expects that it is only necessary to compute the Maclaurin series of $h_t$. It is very fortunate that any number of terms of this series can be computed explicitly, through use of the work of Polterovich \cite{Psph}, together with some additional insights. In fact, what Polterovich found in \cite{Psph} was the heat trace asymptotics on the sphere, i.e.\ explicit formulae for the $a_m$ of (\ref{heattras}) for the manifold $S^n$, as sums of explicit finite series, for $m \geq 1$. (Less explicit formulae were found earlier in \cite{CahnWolf} and \cite{Camp}.)
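Such heat-trace expansions are easy to test against the exact spectral sum $\mbox{tr}(e^{-s\Delta}) = \sum_{l \geq 0}(2l+1)e^{-sl(l+1)}$ on $S^2$. The following sketch (Python; the sample values of $s$ and the truncation are our choices) compares that sum with the first few terms $1/s + 1/3 + s/15 + 4s^2/315 + s^3/315$ obtained from Polterovich's formulae:

```python
import math

def heat_trace_S2(s, L=800):
    # Exact spectral sum on S^2: eigenvalue l(l+1) with multiplicity 2l+1,
    # truncated at l = L (the tail is negligible for the s used below).
    return sum((2 * l + 1) * math.exp(-s * l * (l + 1)) for l in range(L + 1))

def heat_trace_asym(s):
    # First terms of the small-s expansion of tr(e^{-s Delta}) on S^2.
    return 1 / s + 1.0 / 3 + s / 15 + 4 * s**2 / 315 + s**3 / 315

for s in (0.1, 0.05, 0.02):
    print(s, heat_trace_S2(s), heat_trace_asym(s))
```

For these values of $s$ the two quantities agree to many decimal places, as the $O(s^4)$ remainder predicts.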
Using Polterovich's formula to evaluate some of these $a_m$ on $S^2$ (it is easiest to use Maple), we find that \begin{equation} \label{heattrasx} \mbox{tr}(e^{-s\Delta}) \sim \frac{1}{s} + \frac{1}{3} + \frac{s}{15} + \frac{4s^2}{315} + \frac{s^3}{315} + O(s^4); \end{equation} there would be no difficulty in evaluating more terms. (Recall that $a_0$ is given by (\ref{heattr0}), for general ${\bf M}$.) Using this formula we are going to evaluate the first few terms of the Maclaurin series of $4\pi h_t(\cos \theta)$, and we will show how any number of terms could be obtained. Let $J_t(x,y)$ be the kernel of $e^{-t^2\Delta}$ on $S^2$, so that, in particular, by (\ref{indlink1}), $J_t(x,x) = \mbox{tr}(e^{-t^2\Delta})/4\pi$. From (\ref{kersph3}), with $f(r) = e^{-r}$, we find that if $s = t^2$, then \begin{equation} \label{kersph5} J_t({\bf N},y) = \sum_{l=0}^{\infty} \frac{(2l+1)}{4\pi}e^{-sl(l+1)} P_l^{\lambda}(y_1) := g_t(y_1), \end{equation} and similarly, from (\ref{kersph4}), \begin{equation} \label{kersph6} \frac{1}{s}K_t({\bf N},y) = \sum_{l=0}^{\infty} \frac{l(l+1)(2l+1)}{4\pi}e^{-sl(l+1)} P_l^{\lambda}(y_1)=\frac{1}{s}h_t(y_1). \end{equation} We would like to understand the Maclaurin series of $4\pi g_t(\cos \theta)$ and $4\pi h_t(\cos \theta)$. To this end, we shall use the following lemma. \begin{lemma} \label{sphlpx} Suppose $U \in C^2(S^{n})$ is a function of $x_1$ only, $U(x) = u(x_1)$, say, and that $u$ is in fact $C^2$ in a neighborhood of $[-1,1]$. Then $\Delta U$ is also a function of $x_1$, and \begin{equation} \label{lapgx1} (\Delta U)(x) = nx_1 u'(x_1) - (1-x_1^2)u''(x_1). \end{equation} In particular, if $n=1$, one has \begin{equation} \label{lapgx2} -\frac{d^2}{d\theta^2} [u(\cos \theta)] = (\cos \theta) u'(\cos \theta) - (\sin^2 \theta)u''(\cos \theta).
\end{equation} \end{lemma} \begin{proof} By the results of \cite{gesph}, $\Delta = - \sum_{j < k}W_{jk}^2$, where \[ W_{jk} = x_j\frac{\partial}{\partial x_k} - x_k\frac{\partial}{\partial x_j} \] for $1 \leq j < k \leq n+1$. Since $U$ is a function of $x_1$ only, the terms with $2 \leq j < k$ annihilate $U$, and we compute \begin{eqnarray*} \Delta U & = & - \sum_{k=2}^{n+1} W_{1k}^2 U \\ & = & -\sum_{j=2}^{n+1} [-x_1 u' + x_j^2 u''] \end{eqnarray*} which proves (\ref{lapgx1}), since on the unit sphere, $\sum_{j=2}^{n+1} x_j^2 = 1-x_1^2$. Of course (\ref{lapgx2}) is elementary, but we note that it can also be viewed as a special case of (\ref{lapgx1}), since, in polar coordinates, the spherical Laplacian on $S^1$ is just $-\frac{d^2}{d\theta^2}$. This completes the proof.\end{proof} In particular, if we set $x_1 = 1$ in (\ref{lapgx1}), and $\theta = 0$ in (\ref{lapgx2}), we find that \begin{equation} \label{d2gth} -\frac{d^2}{d\theta^2} [u(\cos \theta)]|_{\theta = 0} = \frac{1}{n}(\Delta U)({\bf N}), \end{equation} since both sides equal $u'(1)$. \begin{remark} We shall show below, in Lemma \ref{sphlpall}, that if $u$ is $C^{2m}$ on an open interval containing $[-1,1]$, one can similarly obtain $\frac{d^{2m}}{d\theta^{2m}} [u(\cos \theta)]|_{\theta = 0}$ from a knowledge of $(\Delta^i U)({\bf N})$ for $i = 1,\ldots,m$. Thus, if $u$ is $C^{\infty}$ on an open interval containing $[-1,1]$, the entire Maclaurin series of $u(\cos \theta)$ (regarded as a function of $\theta$) is completely determined from a knowledge of $(\Delta^i U)({\bf N})$ for $i \geq 0$. (Of course, $u(\cos \theta)$ is an even function of $\theta$, so all of its odd-order derivatives vanish at $0$.)\end{remark} For now, let us apply this lemma to $u=g_t$ as in (\ref{kersph5}) or $\frac{1}{s}h_t$ as in (\ref{kersph6}). To do so, we first explain why $g_t$ and $h_t$ have $C^{\infty}$ extensions to an open neighborhood of $[-1,1]$.
For this, it is evidently sufficient to show that for every $n \geq 1$ and every $m \geq 0$, there exists $c_{n,m}$ with \[ \left|\frac{d^m}{d\tau^m}P_l^{(n-1)/2}(\tau)\right| \leq c_{n,m} l^{n+2m-1}. \] From the generating function formula (\ref{genfn}), one obtains the classical formula that the derivative of $P_l^{\lambda}$ is $2\lambda P_{l-1}^{\lambda+1}$. Thus we may assume $m=0$. By (\ref{zony}) and (\ref{ckeval}), we need only show that, on $S^n$, $|Z_l(y)| \leq C l^n$, and for this, by (\ref{zonn}), we need only show that $|Z_l(y)| \leq Z_l({\bf N})$ for all $y \in S^n$. But we may choose an orthogonal transformation $T$ on $\RR^{n+1}$ with $T{\bf N} = y$, and then \begin{equation}\notag Z_l(y) = Z_l \circ T({\bf N}) = \langle Z_l \circ T, Z_l \rangle \leq \|Z_l \circ T\|_2 \|Z_l\|_2 = \|Z_l\|_2^2 = Z_l({\bf N}), \end{equation} as claimed. Let us return to $S^2$. Suppose as usual that $t > 0$, and again put $s = t^2$. By (\ref{d2gth}) and (\ref{kersph5}) we have that \begin{align}\notag -\frac{d^2}{d\theta^2} 4\pi[g_t(\cos \theta)]|_{\theta = 0} &= \frac{4\pi}{2}(\Delta_y J_{\sqrt s})({\bf N},{\bf N}) \\\notag &= -\frac{4\pi}{2} \frac{d}{ds} J_{\sqrt s}({\bf N},{\bf N})\\\notag & = -\frac{1}{2}\frac{d}{ds} \mbox{tr}(e^{-s\Delta}). \end{align} Similarly \begin{align}\notag -\frac{d^2}{d\theta^2} 4\pi[\frac{1}{s}h_t(\cos \theta)]|_{\theta = 0} &= \frac{4\pi}{2}(\Delta_y \frac{1}{s}K_{\sqrt s})({\bf N},{\bf N})\\\notag &= -\frac{4\pi}{2} \frac{d}{ds} \frac{1}{s}K_{\sqrt s}({\bf N},{\bf N})\\\notag & = +\frac{1}{2}\frac{d^2}{ds^2} \mbox{tr}(e^{-s\Delta}).
\end{align} Thus, the first few terms of the Maclaurin series of $4\pi[g_t(\cos \theta)]$ and $4\pi[h_t(\cos \theta)]$ are \begin{equation} \label{gtmac2} 4\pi g_t(\cos \theta) \sim \mbox{tr}(e^{-s\Delta}) + \frac{\theta^2}{4}\frac{d}{ds} \mbox{tr}(e^{-s\Delta}) \end{equation} and \begin{equation} \label{htmac1} 4\pi h_t(\cos \theta) \sim -s\frac{d}{ds}\mbox{tr}(e^{-s\Delta}) -\frac{\theta^2}{4}s\frac{d^2}{ds^2} \mbox{tr}(e^{-s\Delta}). \end{equation} In order to put these into a useful form, we now invoke a result of Kannai \cite{K}. Set $z(\theta) = (\cos\theta,\sin\theta,0,\ldots,0) \in S^n$. Since $g_t(\cos \theta)$ equals $J_t({\bf N},z(\theta))$, where $J_t(x,y)$ is the kernel of the heat operator $e^{-s\Delta}$, Kannai's results imply that \begin{equation} \label{gtmack} 4\pi g_t(\cos \theta) \sim \frac{1}{s}e^{-\theta^2/4s}\sum_{j=0}^{\infty}v_j(\theta)s^j, \end{equation} at least for $0 < |\theta| < \pi$, where the $v_j$ are smooth functions. (Kannai's result for general ${\bf M}$, proved through use of the Hadamard parametrix for the wave equation, is that the kernel of $e^{-s\Delta}$ has the form \[ \frac{1}{(4\pi s)^{n/2}} e^{-d(x,y)^2/4s}\sum_{j=0}^{\infty}V_j(x,y)s^j \] for $x$ sufficiently close to $y$, where the $V_j$ are smooth. In our case the geodesic distance from ${\bf N}$ to $z(\theta)$ is just $|\theta|$, and we have set $v_j(\theta) = V_j({\bf N},z(\theta))$.) From (\ref{gtmack}) and the fact that (by (\ref{kersph5}) and (\ref{kersph6})) one has \begin{equation} \label{gthtrel} \frac{\partial}{\partial s}g_t(\cos \theta) = -\frac{1}{s} h_t(\cos \theta), \end{equation} we are now motivated to find the first few terms in the Maclaurin series (in $\theta$) of $e^{\theta^2/4s}4\pi g_t(\cos \theta)$ and $e^{\theta^2/4s}4\pi h_t(\cos \theta)$.
For this, we need only multiply the right sides of (\ref{gtmac2}) and (\ref{htmac1}) by $$e^{\theta^2/4s} \sim \frac{1}{s}(s+\theta^2/4 + \cdots).$$ This yields \begin{equation}\notag e^{\theta^2/4s}4\pi g_t(\cos \theta) \sim \frac{1}{s}\left[s\,\mbox{tr}(e^{-s\Delta}) + \frac{\theta^2}{4}\left(s\frac{d}{ds} \mbox{tr}(e^{-s\Delta}) + \mbox{tr}(e^{-s\Delta})\right)\right] \end{equation} and \begin{equation} \label{hjloword} e^{\theta^2/4s}4\pi h_t(\cos \theta) \sim -\frac{1}{s}\left[s^2\frac{d}{ds}\mbox{tr}(e^{-s\Delta}) + \frac{\theta^2}{4}\left(s^2\frac{d^2}{ds^2} \mbox{tr}(e^{-s\Delta}) + s\frac{d}{ds}\mbox{tr}(e^{-s\Delta})\right)\right], \end{equation} where the error, for any fixed $s$, is $O(\theta^4)$. Combining this with Polterovich's result (\ref{heattrasx}), and putting $s = t^2$ again, we obtain the approximation \begin{equation} \label{gtapp} 4\pi g_t(\cos \theta) \sim \frac{e^{-\theta^2/4s}}{s}[(1+\frac{s}{3}+\frac{s^2}{15}+\frac{4s^3}{315}+\frac{s^4}{315}) + \frac{\theta^2}{4}(\frac{1}{3}+\frac{2s}{15}+\frac{4s^2}{105}+\frac{4s^3}{315})]. \end{equation} It would take considerably more analysis to estimate the error here, but Maple says that when $t=0.1$, the error is never more than $6 \times 10^{-4}$ for any $\theta \in [-\pi,\pi]$, even though both sides have a maximum of about 100. Maple says that the greatest error occurs at around $\theta=0.3$, where both sides are about 10.655. Similarly one could use (\ref{hjloword}) to give an approximation to $4\pi h_t(\cos \theta)$. But Maple says that the errors are smaller if we differentiate formally with respect to $s$ in (\ref{gtapp}) (and then multiply by $-s$); here we are recalling (\ref{gthtrel}).
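As an independent numerical check of (\ref{gtapp}) (our own verification, separate from the Maple computation just described), one can compare its right side with partial sums of the spherical harmonic series for $4\pi g_t(\cos\theta)$, computing $P_l(\cos\theta)$ by the standard three-term recurrence:

```python
import math

def legendre_values(x, lmax):
    """P_0(x) .. P_lmax(x) via the recurrence (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}."""
    p = [1.0, x]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    return p

def series_4pi_g(t, theta, lmax=400):
    """Partial sum of 4*pi*g_t(cos theta) = sum_l (2l+1) e^{-s*l(l+1)} P_l(cos theta)."""
    s = t * t
    p = legendre_values(math.cos(theta), lmax)
    return sum((2 * l + 1) * math.exp(-s * l * (l + 1)) * p[l] for l in range(lmax + 1))

def approx_4pi_g(t, theta):
    """Right side of the closed-form approximation (gtapp)."""
    s = t * t
    a = 1 + s / 3 + s ** 2 / 15 + 4 * s ** 3 / 315 + s ** 4 / 315
    b = 1 / 3 + 2 * s / 15 + 4 * s ** 2 / 105 + 4 * s ** 3 / 315
    return math.exp(-theta ** 2 / (4 * s)) / s * (a + theta ** 2 / 4 * b)

for th in (0.0, 0.3, 1.0):
    print(th, series_4pi_g(0.1, th), approx_4pi_g(0.1, th))
```

At $t=0.1$ the two sides agree to well within $10^{-2}$ at every angle tested, consistent with the error bound reported above.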
If we do this and finally replace $s$ by $t^2$, this yields the approximation \begin{equation} \label{htapp} 4\pi h_t(\cos \theta) \sim \frac{e^{-\theta^2/4t^2}}{t^2}[(1-\frac{\theta^2}{4t^2})p(t,\theta)-t^2q(t,\theta)], \end{equation} where \begin{equation}\notag p(t,\theta)=1+\frac{t^2}{3}+\frac{t^4}{15}+\frac{4t^6}{315}+\frac{t^8}{315}+ \frac{\theta^2}{4}(\frac{1}{3}+\frac{2t^2}{15}+\frac{4t^4}{105}+\frac{4t^6}{315}) \end{equation} and \begin{equation}\notag q(t,\theta)= \frac{1}{3}+\frac{2t^2}{15}+\frac{4t^4}{105}+\frac{4t^6}{315}+ \frac{\theta^2}{4}(\frac{2}{15}+\frac{8t^2}{105}+\frac{4t^4}{105}). \end{equation} This approximation differs from the one obtained from (\ref{hjloword}) only in terms which are fourth order in $\theta$. Maple says that when $t=0.1$, the error in the approximation (\ref{htapp}) is never more than $9.5 \times 10^{-4}$ for any $\theta \in [-\pi,\pi]$, even though both sides have a maximum of about 100. Maple says that the greatest error occurs at around $\theta = 0.4$, where both sides are about $-5.593$. Of course, if in (\ref{htapp}) we approximate $p \sim 1$ and $q \sim 0$, we would obtain the formula for the usual Mexican hat wavelet on the real line, as a function of $\theta$. Let us now explain how one can readily obtain any number of terms of the Maclaurin series of $g_t$ and $h_t$. \begin{lemma} \label{sphlpall} For any positive integer $m$, there are constants $a_1,\ldots,a_m$ as follows. If, in the situation of Lemma \ref{sphlpx}, $u$ is $C^{2m}$ on an open interval containing $[-1,1]$, then \begin{equation}\notag \frac{d^{2m}}{d\theta^{2m}} [u(\cos \theta)]|_{\theta = 0} = \sum_{i=1}^m a_i (\Delta^i U)({\bf N}). \end{equation} Moreover, $a_m \neq 0$.
\end{lemma} \begin{proof} It is enough to show that, generalizing (\ref{lapgx1}), there are polynomials $p_1,\ldots,p_{2m}$ in $x_1$ such that \begin{equation} \label{lapgxall} (\Delta^m U)(x) = \sum_{i=1}^{m} p_i(x_1) u^{(i)}(x_1) + \sum_{i=m+1}^{2m}(1-x_1^2)^{i-m}p_i(x_1) u^{(i)}(x_1) \end{equation} where $p_m(1) \neq 0$. For then the mapping $$(u'(1),\cdots,u^{(m)}(1)) \mapsto \left((\Delta U)({\bf N}),\cdots,(\Delta^m U)({\bf N}) \right)$$ will be given by an invertible upper triangular matrix. In particular this is true if $n=1$, so that the mapping $$\left(u'(1),\cdots,u^{(m)}(1)\right) \mapsto \left(\frac{d^2}{d\theta^2} [u(\cos \theta)]|_{\theta = 0} ,\cdots, \frac{d^{2m}}{d\theta^{2m}} [u(\cos \theta)]|_{\theta = 0} \right)$$ is also given by an invertible upper triangular matrix. Thus the map $$ \left((\Delta U)({\bf N}),\cdots,(\Delta^m U)({\bf N})\right) \mapsto \left(\frac{d^2}{d\theta^2} [u(\cos \theta)]|_{\theta = 0} ,\cdots,\frac{d^{2m}}{d\theta^{2m}} [u(\cos \theta)]|_{\theta = 0} \right)$$ is also given by an invertible upper triangular matrix, which is the desired result. To prove (\ref{lapgxall}), we recall first that by (\ref{lapgx1}), $(\Delta U)(x) = Du(x_1)$, where $D = nx_1 \frac{d}{dx_1}-(1-x_1^2)\frac{d^2}{dx_1^2}$. From this, (\ref{lapgxall}), save for the statement that $p_m(1) \neq 0$, follows at once by a simple induction on $m$. If $p_m(1)$ were zero, then the mapping $$ \left(u'(1),\cdots,u^{(m)}(1) \right) \mapsto \left((\Delta U)({\bf N}),\cdots,(\Delta^m U)({\bf N})\right)$$ would be given by a singular upper triangular matrix, so its range would not be all of $\RR^m$. But the range is all of $\RR^m$, since it contains all the vectors \[ \left((\Delta Z_l)({\bf N}),\ldots,(\Delta^m Z_l)({\bf N})\right) = Z_l({\bf N})\,l(l+n-1)\left(1,l(l+n-1),\ldots,[l(l+n-1)]^{m-1}\right), \] for $l=1,\ldots,m$, and since the rows of a Vandermonde matrix are linearly independent.
This contradiction completes the proof.\end{proof} To obtain further terms of the Maclaurin series of $g_t$ or $h_t$, one need only use Lemma \ref{sphlpall} together with the fact that applying $\Delta^i$ to the series in (\ref{kersph5}) or (\ref{kersph6}) is the same as applying $(-\partial /\partial s)^i$. Then Polterovich's formula for the heat trace asymptotics may be used. \begin{remark}Throughout this discussion, we have been considering (\ref{kersph3}) for $f(s) = se^{-s}$. If one instead used $f_m(s) = s^me^{-s}$, where $m$ is a nonnegative integer, one could still obtain explicit formulas, similar to (\ref{htapp}), giving ``generalized'' Mexican hat wavelets on the sphere. Indeed, if $h_t^m$ corresponds to $f_m$ in the same way that $h_t$ corresponds to $f= f_1$, we have in place of (\ref{gthtrel}) that $\frac{\partial^m}{\partial s^m}g_t(\cos \theta) = \frac{(-1)^m}{s^m} h_t^m(\cos \theta)$. Using $f_m$ in place of $f$ has certain advantages. In particular, considering functions (of $\Delta$) which vanish more quickly at $0$ (such as the $f_m$) is very important in the characterization of Besov spaces in \cite{gm3}.\end{remark} \section{Appendix: A Technical Lemma}\label{a-technical-lemma} \label{cclem} In the proof of Lemma \ref{manmol}, we used the following fact. \begin{lemma} \label{ccamplem} Suppose $a_0,\ldots,a_L \in \CC$. Then there exists an even function $u \in {\mathcal S}(\RR)$, such that $u^{(2l)}(0) = a_l$ whenever $0 \leq l \leq L$, and such that supp$\: \hat{u} \subseteq (-1,1)$. (Of course, all {\em odd} order derivatives of $u$ vanish at $0$.) \end{lemma} \begin{proof} We construct $h = \hat{u}$. We need to show that, given numbers $b_0,\ldots,b_L$, there exists $h \in C_c^{\infty}(\RR)$, $h$ even, supp$\:h \subseteq (-1,1)$, with $\int_{-\infty}^{\infty} \xi^{2l} h(\xi) d\xi = b_l$ for $0 \leq l \leq L$. Select an even function $\eta \in C_c^{\infty}(\RR)$, with supp$\:\eta \subseteq (-1,1)$, and $\int \eta = 1/2$.
Select $L+1$ different numbers $r_1,\ldots, r_{L+1} \in (0,1/2)$. For $0 < t < 1$, consider the even function $\eta_{t,m}(\xi) = t^{-1}[\eta((\xi+r_m)/t) + \eta((\xi-r_m)/t)]$. We claim that if $t$ is sufficiently small, we may construct our $h$ by setting $h = \sum_{m=1}^{L+1} c_m \eta_{t,m}$ for suitable $c_1,\ldots,c_{L+1}$. Indeed, as $t \rightarrow 0^+$, $\eta_{t,m}$ becomes increasingly concentrated near $r_m$ and near $-r_m$, so that $\int \xi^{2l} [\sum_{m=1}^{L+1} c_m \eta_{t,m}] d\xi \rightarrow \sum_{m=1}^{L+1} c_m (r_m)^{2l}$. The existence of suitable $c_1,\ldots,c_{L+1}$ now follows from the nonvanishing of small perturbations of the Vandermonde determinant. \end{proof}
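The construction in this proof is concrete enough to run numerically. In the sketch below (our own illustration), $\eta$ is the normalized bump $c\,e^{-1/(1-x^2)}$, $L=2$, $r_m \in \{0.1, 0.25, 0.4\}$, and $t$ is small; we solve the limiting Vandermonde system $\sum_m c_m r_m^{2l} = b_l$ by Gaussian elimination and verify that $h = \sum_m c_m \eta_{t,m}$ has nearly the prescribed moments.

```python
import math

def eta_raw(u):
    """Unnormalized even bump supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

N = 4000
du = 2.0 / N
grid = [-1.0 + (k + 0.5) * du for k in range(N)]
Z = sum(eta_raw(u) for u in grid) * du        # normalize so the integral of eta is 1/2

def eta(u):
    return eta_raw(u) / (2.0 * Z)

def moment(l, r, t):
    """Integral of xi^{2l} * eta_{t,m}(xi) d xi, written in the variable u = (xi -+ r)/t."""
    return sum(((r + t * u) ** (2 * l) + (-r + t * u) ** (2 * l)) * eta(u)
               for u in grid) * du

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

b = [1.0, -2.0, 0.5]                  # prescribed moments b_0, b_1, b_2 (L = 2)
rs, t = [0.1, 0.25, 0.4], 0.005
c = solve([[r ** (2 * l) for r in rs] for l in range(3)], b)
got = [sum(c[m] * moment(l, rs[m], t) for m in range(3)) for l in range(3)]
print(c, got)
```

For small $t$ the achieved moments differ from the targets only by the $O(t^2)$ perturbation mentioned in the proof.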
\section{Introduction} The idea of robotic soccer games was proposed as a novel research topic in 1992, and since then RoboCup has been held as an annual international competition for developing new ideas in A.I. and robotics. The competition comprises various leagues, such as the Rescue\cite{resque1,resque2,resque3,resque4,resque5}, Soccer Simulation\cite{3d} and Standard Platform\cite{stand} leagues. Cyrus is one of the teams in the 2D Soccer Simulation league. The team was established in 2013 and has participated in RoboCup and IranOpen competitions since then. It is worth mentioning that the team took second, third, fourth, and fifth place in RoboCup 2018, 2019, 2017, and 2014, respectively. Cyrus also won first place in IranOpen 2018 and 2014 and in RoboCup Asia-Pacific 2018, and second place in the JapanOpen 2020 competition. The Cyrus team is based on agent2d\cite{agent2d}. \subsection{Previous Work} In recent years we have concentrated on exploiting artificial intelligence and machine learning techniques to improve the performance of the Cyrus team \cite{cyrus14,cyrus15,cyrus18,cyrus19}. Among these works, we can mention the improvement of the agents' defensive decision-making process using Reinforcement Learning (RL)\cite{rl1}, the prediction of opponents' behavior, and the optimization of the shoot skill. Helios has developed an algorithm for the analysis of the agents' offensive behavior \cite{helios19,helios18,heliospaper}. Fractals2019, which is partially based on Gliders2d, used elements of evolutionary computation within the framework of Guided Self-Organisation \cite{fractals}. FRA-United has researched the communication of agents in games \cite{fra16,fra18,fra19}. The FCP\_GPR team has developed a framework for free kicks \cite{fcp}, while Namira has implemented a Python-based application for the analysis of soccer simulation games\cite{namira1,namira2,namira3}.
Razi has worked on scoring the offensive behavior in 2D soccer simulation\cite{razi18,razi19}. \setcounter{footnote}{0} \subsection{Release} \subsubsection{Cyrus 2014 Source} As a part of our contribution to the development of the 2D Soccer Simulation league, we have released the Cyrus 2014 \cite{cyrus14} source code to encourage new teams to participate in the competitions. Cyrus 2014 won first and fifth place in the IranOpen RoboCup competition and the international RoboCup competition, respectively. The source code can be found in our github\footnote{Cyrus 2014 Source \url{https://github.com/naderzare/cyrus2014}.}. \subsubsection{Starter Agent and Starter Librcsc} Cyrus team members, in cooperation with the IranOpen technical committee of the 2D soccer simulation league, have designed a simplified version of the agent base \cite{agent2d} and the librcsc library for the 2D soccer simulation starter league. High-level behaviors like passing, dribbling, and shooting have been omitted from this base. This version of the 2D soccer simulation base and librcsc, designed specifically for junior students, has been used in the 2D soccer simulation starter league during IranOpen RoboCup 2018, IranOpen RoboCup 2020, and RoboCup Asia-Pacific 2018. More than ten teams participated in each of the competitions, with more than fifty participants in total. All of the participants used this base developed by Cyrus and the IranOpen committee of the 2D soccer simulation league. The base can be found in our github\footnote{Starter Agent 2D \url{https://github.com/naderzare/StarterAgent2D}} \footnote{Starter LibRCSC \url{https://github.com/naderzare/StarterLibRCSC}}. \subsubsection{CppDNN} The C++ Deep Neural Network (CppDNN) library has been developed by Cyrus team members to facilitate the implementation of deep neural networks in the 2D soccer simulation environment. This library stores the trained weights of a neural network that has been trained with the Keras library.
The developed script within this library transforms the trained weights of a deep neural network into a text file. Subsequently, it loads the trained weights to recreate the original deep neural network in C++. This library employs the Eigen library for its calculations. The library can be found in our github\footnote{CppDNN Source Code \url{https://github.com/Cyrus2D/CppDNN}}. \subsubsection{Pyrus - Python Soccer Simulation 2D Base} Most 2D soccer simulation teams use the Helios \cite{agent2d}, Gliders2d \cite{gldbase}, WrightEagle \cite{wrbase} or Oxsy \cite{oxsy} base. All of these bases have been developed in C++. Although they offer fast processing and execution times, developing machine learning algorithms on top of them is a challenging and time-consuming process. Due to the fast growth of the Python language's popularity among students and scientists, and its strength for implementing machine learning algorithms, Cyrus team members have started developing an open source Python base for the 2D soccer simulation league. This base is currently available in the Cyrus github\footnote{Pyrus Base Source Code \url{https://github.com/Cyrus2D/Pyrus}} and will support all features of the current 2D soccer simulation server in the full-state mode in the near future. \section{Kick Behavior Predictor} One of the main goals of a 2D soccer team is to increase its chance of winning, which can be achieved by enhancing the general performance of the team. This objective can be interpreted as increasing the team's number of scored goals and reducing the number of goals conceded. Enhancing the functionality of the team results in better performance on the field. However, the random noise applied to the agents' observations of the environment is the major challenge the agents face when choosing their actions.
The 2D soccer environment applies random noise to the agents' observations of the environment to simulate a real-world soccer match; however, this noise complicates the agents' decision-making process. The soccer simulation server provides an option known as \textbf{\emph{"full-state mode"}} to eliminate the random noise from the agents' observations. If the server runs in \textbf{\emph{"full-state mode"}}, it distributes the pure state of the game to the teams. In order to understand the impact of noise on the performance of teams, we tested Cyrus against Helios 2019\cite{helios19} with two different settings of the simulation server. In the first, the server was run with its default settings. In the second, the server was run in full-state mode. The second phase was divided into two sub-experiments, in which Cyrus receives the full state of the game from the server and uses it in two different fashions: 1 - full-state observation: the agents use the pure observation of the system for their decision-making; 2 - full-state chain action: the pure observations are used only for the chain action of the agents, and the noisy world model is used for the rest of the processing. These three operation modes were each tested 500 times, and the experimental results are reported in the following section. The distributions of goals for and goals against in these experiments are shown in Fig. \ref{fig:Fig1}. Also, the win rate, expected win rate, and average score are denoted in Table \ref{table:Table1}. The results of these experiments demonstrate the severe effect of noisy data on the performance of the teams. In order to tackle this problem, many teams exploit opponent behavior prediction or noise reduction algorithms. In this TDP, we aim to address this challenge by enabling agents to predict their full-state behavior using the noisy observations and to exploit this prediction for the optimization of their behavior.
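The effect of observation noise on planning can be illustrated with a schematic sketch (ours, and deliberately simplified: the real rcssserver noise model is quantization-based, while here the noise standard deviation simply grows with distance). Even modest relative noise makes a one-step velocity estimate, of the kind a kick planner relies on, unreliable at long range:

```python
import math
import random

random.seed(7)

def observe(true_pos, observer, rel_noise=0.02):
    """Schematic observation model (NOT the real rcssserver one): each
    coordinate is perturbed with noise proportional to the distance."""
    d = math.dist(true_pos, observer)
    return tuple(c + random.gauss(0.0, rel_noise * d) for c in true_pos)

observer = (0.0, 0.0)                      # our agent
p0, p1 = (30.0, 10.0), (30.6, 10.2)        # true ball positions, one cycle apart
true_vel = (p1[0] - p0[0], p1[1] - p0[1])

# a full-state agent recovers the velocity exactly; a noisy one does not
noisy_vel = tuple(b - a for a, b in zip(observe(p0, observer), observe(p1, observer)))
err = math.dist(noisy_vel, true_vel)
print("true velocity:", true_vel, "estimate error:", err)
```

With the ball roughly 30 meters away, the error of the noisy velocity estimate can be of the same order as the true velocity itself, which is consistent with the large performance gap between the noisy and full-state runs reported below.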
Correspondingly, the server ran in full-state mode, and it passed the World Model (WM) and the Full-State World Model (FWM) to the agents. At this point the WM and FWM are received by the agent for further processing. The agent passes the FWM and WM to the Kick Decision-Making module, and it passes only the WM to the Move Decision-Making module. If the ball is not within the kickable area of the agent, the Move Decision-Making module chooses the behavior of the agent and sends the action to the server; otherwise, the Kick Decision-Making module sends the WM and FWM to the Data Extractor module and the Chain Action module, respectively. The Chain Action module employs the FWM to choose the optimal action, and afterward it sends the action to the Data Extractor module and the server. The Data Extractor module receives the WM and the action, and it generates the dataset using its submodules (Feature Extractor and Label Generator). The Feature Extractor is the part of the Data Extractor module that selects the important features (discussed in the next subsection) from the received data. The Label Generator takes the action of the agent from the Chain Action module and generates the full-state action label for this data. The structure of the agent and its processing modules are presented in Fig. \ref{fig:Fig2}. \begin{figure}[h!] \includegraphics[scale=0.35]{myplot.png} \caption{The distributions of goals for and goals against between Cyrus and Helios in three different modes: a. WM, b. FWM, and c. full-state chain action.} \label{fig:Fig1} \end{figure} \begin{table}[] \tiny \centering \caption{The win rate, expected win rate, and average score} \label{table:Table1} \begin{tabular}{|l|l|l|l|l|} \hline Run Type & Win Rate & Expected Win Rate & Cyrus Average Goal & Helios2019 Average Goal \\ \hline Normal & 7.09 & 8.19 & 0.45 & 2.13 \\ Chain Action Full State & 24.64 & 33.46 & 1.59 & 2.09 \\ Full State & 72.7 & 85.76 & 2.72 & 1.19 \\ \hline \end{tabular} \end{table} \begin{figure}[h!]
\includegraphics[scale=0.23]{data.png} \caption{The internal structure of the agent and its processing modules} \label{fig:Fig2} \end{figure} \subsection{Feature Extractor} As mentioned in Section 2, the Feature Extractor module receives the WM and extracts the most significant attributes of the input data. The features related to the ball and to the players are denoted in Tables \ref{table:Table2} and \ref{table:Table3}, respectively. \begin{table}[h] \tiny \centering \caption{List of ball features and other features}\label{tab1} \label{table:Table2} \begin{tabular}{|l|l|l|} \hline Feature Class & Feature Name & Description \\ \hline Ball\_Position & Ball\_X & Ball position - X \\ Ball\_Position & Ball\_Y & Ball position - Y \\ Ball\_Position & Ball\_RX & Distance to holder player - X \\ Ball\_Position & Ball\_RY & Distance to holder player - Y \\ Ball\_Position & Ball\_R & Euclidean distance from ball to holder player \\ Ball\_Position & Ball\_Teta & Angle from holder player to ball \\ Ball\_Velocity & Ball\_VX & Ball velocity - X \\ Ball\_Velocity & Ball\_VY & Ball velocity - Y \\ Ball\_Velocity & Ball\_VR & Ball velocity - length \\ Ball\_Velocity & Ball\_VTeta & Ball velocity - angle \\ Dribble & Dribble\_Free\_Distance & Distance of ball to the nearest opponent in 12 sectors \\ Other & Cycle & Cycle of the game \\ Other & Offside count & The accuracy count for the offside line\\ \hline \end{tabular} \end{table} \begin{table} \tiny \centering \caption{List of players' features}\label{tab1} \label{table:Table3} \begin{tabular}{|l|l|l|l|} \hline Feature Class & Feature Name & Tm or Opp & Description \\ \hline Other & Player\_Side & Both & Side of player (1 or -1) \\ Other & Player\_Unum & Both & Uniform number of player \\ Other & Player\_Body & Both & Body angle \\ Other & Player\_Face & Both & Face angle \\ Other & Player\_Tackling & Both & Is the player tackling \\ Other & Player\_Kicking & Both & Is the player kicking \\ Other & Player\_Card & Both & Does the player have a yellow card \\ Type & Player\_Type\_DashRate & Both & Dash rate of player\\ Type & Player\_Type\_EffortMax & Both & Maximum effort of player \\ Type & Player\_Type\_EffortMin & Both & Minimum effort of player \\ Type & Player\_Type\_KickableDist & Both & Kickable distance of player \\ Type & Player\_Type\_MarginDist & Both & Margin distance of player \\ Type & Player\_Type\_KickPowerRate & Both & Kick power rate of player\\ Type & Player\_Type\_Decay & Both & Decay of player\\ Type & Player\_Type\_Size & Both & Size of player \\ Type & Player\_Type\_SpeedMax & Both & Maximum speed of player \\ Position & Player\_X & Both & Position of player - X \\ Position & Player\_Y & Both & Position of player - Y\\ Position & Player\_RX & Both & Distance to holder player - X\\ Position & Player\_RY & Both & Distance to holder player - Y\\ Position & Player\_R & Both & Distance of player to holder player \\ Position & Player\_Teta & Both & Angle from holder player to player \\ Position & Player\_Offside & Teammate & Player is in offside \\ Velocity & Player\_VX & Both & Velocity of player - X\\ Velocity & Player\_VY & Both & Velocity of player - Y \\ Velocity & Player\_VR & Both & Velocity of player - length \\ Velocity & Player\_VTeta & Both & Velocity of player - angle\\ Position & Player\_PosCount & Both & Count since last position observation \\ Velocity & Player\_VelCount & Both & Count since last velocity observation \\ Other & Player\_IsKicker & Teammate & Is this player the kicker \\ Pass & Player\_FreePassAngle & Teammate & Maximum free angle for direct pass \\ Pass & Player\_DirectPassDist & Teammate & Distance from ball to player \\ Opponent & Player\_NearestOpponentDist & Teammate & Minimum distance from opponent to player \\ Position & Player\_GCA & Both & Angle from player to opponent goal center \\ Position & Player\_GCD & Both & Distance from player to opponent goal center \\ Shoot & Player\_FreeShootAngle & Teammate & Maximum free angle for shoot \\ Stamina & Player\_Stamina
& Both & Stamina of player \\ Stamina & Player\_StaminaCount & Both & Count since last stamina observation \\ \hline \end{tabular} \end{table} \subsection{Features Sorting Methods} In our proposed method, we take advantage of a deep neural network for the prediction of the agents' behavior using the noisy observations. We generated 10 different datasets from the world model to examine the effect of the input setting on the prediction of the network. To create each of these datasets, we used one of the sorting methods explained in Table \ref{table:Table4}. Each of these sorting methods changes the order of the players' features. To make these sorting methods clearer, their results for the players in Fig. \ref{fig:Fig3} are demonstrated in Table \ref{table:Table4}. \begin{figure}[h!] \centering \includegraphics[scale=0.15]{sorting.png} \caption{Sample positions of agents in the field} \label{fig:Fig3} \end{figure} \begin{table}[] \tiny \centering \caption{Sorting Methods}\label{tab1} \label{table:Table4} \begin{tabular}{|l|l|} \hline Sorting Method & Description \\ \hline \rowcolor{LightGray} X & Sorting players of each team by the X coordinate of their position \\ \rowcolor{LightGray} & Sorting Results: 9 8 5 3 4 \\ X\_FK & Similar to X, but the kicker has the first place in the sorting\\ & Sorting Results: 5 9 8 3 4 \\ \rowcolor{LightGray} Unum & Sorting players of each team by their uniform number \\ \rowcolor{LightGray} & Sorting Results: 3 4 5 8 9 \\ Unum\_FK & Similar to Unum, but the kicker has the first place in the sorting \\ & Sorting Results: 5 3 4 8 9 \\ \rowcolor{LightGray} AFC & Sorting players of each team by the angle from their current position to the center of the field \\ \rowcolor{LightGray} & Sorting Results: 4 5 9 8 3 \\ AFC\_FK & Similar to AFC, but the kicker has the first place in the sorting\\ & Sorting Results: 5 4 9 8 3 \\ \rowcolor{LightGray} AK & Sorting players of each team by the angle from their current position to the kicker player \\
\rowcolor{LightGray} & Sorting Results: 4 5 9 8 3 \\ AK\_FK & Similar to AK, but the kicker has the first place in the sorting \\ & Sorting Results: 5 4 9 8 3 \\ \rowcolor{LightGray} AKG & Sorting players of each team by the angle from their position to the goal center \\ \rowcolor{LightGray} & Sorting Results: 9 4 5 3 8 \\ AKG\_FK & Similar to AKG, but the kicker has the first place in the sorting\\ & Sorting Results: 5 9 4 3 8 \\ \hline \end{tabular} \end{table} \subsection{Label Generator} This module takes the action from the Chain Action module and generates the labels for each data row. The labels of the data are noted in Table \ref{table:Table5}. \begin{table} \tiny \centering \caption{List of Labels}\label{tab1} \label{table:Table5} \begin{tabular}{|l|l|} \hline Label & Description \\ \hline Category & Hold \(\vert \vert\) Pass \(\vert \vert\) Dribble \\ TargetUnum & Uniform number of target player \\ TargetIndex & Index of target player after sorting\\ Description & Dribble \(\vert \vert\) Direct Pass \(\vert \vert\) Cross Pass \(\vert \vert\) Through Pass \(\vert \vert\) Lead Pass \\ TargetPosition & Target position \\ FirstKickAngle & Angle of selected action from ball \\ FirstKickSpeed & Ball kick speed \\ \hline \end{tabular} \end{table} \subsection{Results} To examine the impact of different input features on the prediction of the neural network, we chose 1 million rows of raw data from Cyrus vs. Helios2019 games for training the deep neural network. We created 10 diverse datasets using the different sorting methods (see Table \ref{table:Table4}). The whole process of behavior prediction is demonstrated in Fig. \ref{fig:Fig5}. We report the accuracy and error rate of the model for these 10 datasets in Table \ref{table:Table6}. According to Table \ref{table:Table6}, Unum\_FK achieves better accuracy than the other approaches.
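Four of the sorting methods of Table \ref{table:Table4} can be sketched in a few lines. The coordinates below are hypothetical values of ours, chosen only to be consistent with the orderings listed in the table (we also assume that the X method orders players by descending $x$):

```python
# hypothetical (unum -> x coordinate) pairs for the teammates of Fig. 3; kicker is player 5
players = {3: 10.0, 4: 5.0, 5: 20.0, 8: 30.0, 9: 40.0}
KICKER = 5

def sort_x(ps):
    """X method: order the uniform numbers by descending x coordinate."""
    return sorted(ps, key=lambda u: -ps[u])

def sort_unum(ps):
    """Unum method: order by uniform number."""
    return sorted(ps)

def first_kicker(order, kicker=KICKER):
    """The _FK variants: move the kicker to the front, keep the rest in order."""
    return [kicker] + [u for u in order if u != kicker]

print(sort_x(players))                   # [9, 8, 5, 3, 4]  (X row of Table 4)
print(first_kicker(sort_x(players)))     # [5, 9, 8, 3, 4]  (X_FK row)
print(sort_unum(players))                # [3, 4, 5, 8, 9]  (Unum row)
print(first_kicker(sort_unum(players)))  # [5, 3, 4, 8, 9]  (Unum_FK row)
```

The printed orders reproduce the X, X\_FK, Unum, and Unum\_FK rows of Table \ref{table:Table4} under these assumed coordinates.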
\begin{table} \small \centering \caption{Accuracy and error rate of the model for the 10 datasets}\label{tab1} \label{table:Table6} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Prediction & Type & \rotatebox{75}{X} & \rotatebox{75}{X\_FK} & \rotatebox{75}{Unum} & \rotatebox{75}{Unum\_FK} & \rotatebox{75}{AKG} & \rotatebox{75}{AKG\_FK} & \rotatebox{75}{AK} & \rotatebox{75}{AK\_FK} & \rotatebox{75}{AFC} & \rotatebox{75}{AFC\_FK} \\ \hline \rowcolor{LightGray} Category & Classification & 76.55 & 77.23 & 76.69 & \cellcolor{green} 77.37 & 76.09 & 76.61 & 76.08 & 76.63 & 76.03 & 76.41 \\ Unum & Classification & 57.22 & 57.71 & \cellcolor{green}60.57 & 60.39 & 56.17 & 56.48 & 56.22 & 56.93 & 56.7 & 57.31\\ \rowcolor{LightGray} Unum in Passes & Classification & 57.87 & 58.04 & 61.80 & \cellcolor{green}62.51 & 57.27 & 57.71 & 57.07 & 57.49 & 57.70 & 57.20\\ Index & Classification & 58.89 & 60.20 & 60.22 & \cellcolor{green}60.84 & 57.79 & 58.45 & 57.08 & 58.66 & 56.51 & 58.57\\ \rowcolor{LightGray} Index in Passes & Classification & 58.49 & 59.73 & 61.79 & \cellcolor{green}62.01 & 58.58 & 58.13 & 58.72 & 58.42 & 57.39 & 57.06\\ Description & Classification & 71.62 & \cellcolor{green}72.12 & 71.31 & 71.53 & 71.36 & 71.34 & 71.32 & 71.58 & 71.70 & 71.49\\ \rowcolor{LightGray} TargetPosition & Regression & 2.44 & 2.43 & 2.59 & 2.38 & 2.50 & 2.42 & 2.79 & 2.41 & \cellcolor{green}2.29 & 2.59\\ FirstKickAngle & Regression & 5.14 & 6.51 & 5.22 & 6.58 & 6.65 & 7.30 & \cellcolor{green}5.11 & 5.36 & 7.34 & 5.14\\ \rowcolor{LightGray} FirstKickSpeed & Regression & \cellcolor{green}0.041 & 0.054 & 0.043 & 0.054 & 0.055 & 0.06 & 0.042 & 0.044 & 0.061 & 0.042\\ \hline \end{tabular} \end{table} In this section, we evaluate the importance of the features for the prediction of the agents' behavior. To accomplish this task, we used the Random Forest algorithm implemented in Scikit-Learn and the Permutation Feature Importance algorithm implemented in the ELI5 library.
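The idea behind the permutation measure can be sketched in a few lines of pure Python (a toy example of ours, not the ELI5 implementation): shuffle one feature column, re-score a fixed model, and take the mean accuracy drop as the importance of that feature.

```python
import random

random.seed(0)

# toy data: the label depends only on feature 0; feature 1 is pure noise
X = [[i / 20.0, random.random()] for i in range(20)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    """A fixed 'trained' threshold classifier on feature 0."""
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, trials=20):
    """Mean accuracy drop after shuffling one feature column."""
    base = accuracy(X, y)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in X]
        random.shuffle(col)
        Xp = [r[:] for r in X]
        for r, v in zip(Xp, col):
            r[feature] = v
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / trials

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print("importance of feature 0:", imp0, "| feature 1:", imp1)
```

In this toy setup the label depends only on feature 0, so permuting it destroys a large share of the accuracy, while permuting the noise feature changes nothing.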
The Permutation Feature Importance algorithm identifies the most effective input features for a trained neural network. We evaluated the value of the sorted data features (sorted by UNUM) using the Random Forest algorithm. Also, we selected two of our predictor deep neural networks that were trained on the UNUM sorted dataset to predict the Category or the UNUM of the target teammate. Using these two neural networks and the Permutation Feature Importance algorithm, we identified the significant features of the data (see Fig.~\ref{fig:Fig4}). \vspace*{-5pt} \begin{figure}[H] \centering \includegraphics[scale=0.25, trim= 1.2cm 15cm 0.8cm 14cm, clip]{all_plot.png} \caption{The important features of sorted data extracted by Random Forest and Permutation Feature Importance} \label{fig:Fig4} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.276]{use.png} \caption{The process of behavior prediction} \label{fig:Fig5} \end{figure} \subsection{How To Use Predictor} To assess the effect of the fullstate predictor network, we examined it on Cyrus2020. The neural network trained on the sorted data (Unum\_FK sorting) predicts the UNUM of the target player given in the fullstate. If the ball holder does not have accurate information about the predicted ball receiver, it prioritizes observing that agent. The experimental results of this approach on the Cyrus team suggest an improvement in win rate from 8.19\% to 17.49\% and in goal rate from 0.45\% to 0.89\%. \section{Conclusion} This paper describes the previous efforts and current research of Cyrus2020 on the exploitation of AI algorithms in 2D soccer simulation. Using the \textbf{\emph{"fullstate mode"}} of the server, we created a dataset from agents' perceived observations and their FWM behavior. Then we sorted this dataset and fed it to disparate deep neural networks for behavior prediction. Subsequently, we used the best trained neural network to optimize the viewpoint of the players.
The experimental results demonstrate an improvement in both the win rate and the goal rate. \section{Future Work} In the near future we plan to improve the approach proposed in this TDP. We plan to use a Convolutional Neural Network (CNN) as our predictor network. As a next step, we intend to process our data using a recurrent neural network (RNN), which can handle temporal data.
\section{INTRODUCTION} \par A supersolid is a solid with superfluidity, and has been sought in recent decades in He II \cite{Andreev1969,Chester1970,Leggett1970,Matsuda1970,Boninsegni2012}. Recently it was created experimentally in an optical lattice \cite{Leonard2017,Li2017,Tanzi2019,Bottcher2019,Chomaz2019,Tanzi2019A,Natale2019,Guo2019}. The observation of the Nambu-Goldstone mode \cite{Nambu1960,Goldstone1960,Nambu1961} due to spontaneous symmetry breaking (SSB) of the global phase gives direct evidence of supersolidity for an optical lattice \cite{Tanzi2019A,Natale2019,Guo2019}. Very recently the existence of supersolidity of subatomic nature ---supersolidity of the three $\alpha$ cluster structure in the nucleus $^{12}$C ---was discussed \cite{Ohkubo2020}. The $\alpha$ particle model, in which the boson $\alpha$ particle with spin 0 is considered as a constituent unit of the nucleus, was proposed as the first nuclear structure model in 1937 \cite{Wefelmeier1937,Wheeler1937,Wheeler1937B} but was criticized \cite{Blatt1952} with the advent of the shell model \cite{Mayer1948,Haxel1949} and the collective model \cite{Bohr1952}. However, the successful shell model and collective model \cite{Bohr1969A,Bohr1969B,Ring1980} also encountered difficulty in explaining the emergence of very low-lying intruder states in light nuclei, such as the mysterious $0_2^+$ (6.05 MeV) state in $^{16}$O \cite{Arima1967,Marumori1968}. The $\alpha$ cluster model based on the geometrical crystallinity picture, in which the effect of the Pauli principle is taken into account, was revived and has witnessed tremendous success in recent decades in explaining both shell-model-like states and $\alpha$ cluster states comprehensively, which are reviewed for light nuclei in Refs. \cite{Suppl1972,Wildermuth1977,Suppl1980} and for medium-weight nuclei in Refs. \cite{Suppl1998,Ohkubo1998}.
\par The Brink cluster model based on the geometrical crystallinity picture using the generator coordinate method (GCM) \cite{Brink1966}, the resonating group method (RGM), which is equivalent to the GCM \cite{Horiuchi1977}, and the orthogonality condition model (OCM) \cite{Saito1969} and the local potential model (LPM) \cite{Buck1975,Ohkubo1977,Michel1983,Ohkubo2016} ---both of which are approximations of the RGM and take into account the Pauli principle by excluding the Pauli forbidden states in the RGM ---have been successful in understanding the structure of nuclei \cite{Suppl1972,Wildermuth1977,Suppl1980,Suppl1998,Ohkubo1998}. Examples are the two $\alpha$ dumbbell structure of $^8$Be \cite{Horiuchi1970,Hiura1972}, the three $\alpha$ triangle structure of $^{12}$C \cite{Uegaki1977,Uegaki1979,Kamimura1977}, and the $\alpha$+$^{16}$O structure in $^{20}$Ne \cite{Horiuchi1972,Hiura1972B,Fujiwara1980}. The unified understanding of cluster structure in the low energy region, and prerainbows and nuclear rainbows in the scattering region, which are confirmed for systems such as $\alpha$+$^{16}$O and $\alpha$+$^{40}$Ca \cite{Michel1998,Ohkubo2016}, supports the geometrical crystallinity picture of the cluster structures. Very recently Ohkubo {\it et al.} \cite{Ohkubo2020} used a field theoretical superfluid cluster model (SCM) for $^{12}$C to report that the $\alpha$ cluster structure has a duality of crystallinity and condensation, a property of supersolidity. According to this theory, while the former is the view from the particle nature of the cluster structure, the latter is the view from the wave nature due to the coherence of the condensate cluster structure. It is important to clarify whether this supersolidity of $\alpha$ cluster structure is inherent only to the three $\alpha$ cluster structure of $^{12}$C or a general property of $\alpha$ cluster structure with geometrical crystallinity.
$\alpha$ cluster structure has recently attracted attention from the viewpoint of quantum phase transitions \cite{Elhatisari2016,Ebran2020}. \par In this paper, by using the Brink $\alpha$ cluster model it is shown generally that $\alpha$ cluster structure has the duality of the apparently exclusive properties of crystallinity and condensation, i.e., supersolidity. The $\alpha$ cluster structure of $^{40}$Ca, which has been understood from the viewpoint of geometrical cluster structure, is studied from the viewpoint of condensation, i.e., superfluidity, by using a field theoretical superfluid $\alpha$ cluster model which rigorously treats spontaneous symmetry breaking of the global phase due to condensation. The mechanism of why the mysterious $0^+$ state in $^{40}$Ca emerges as a collective state at very low excitation energy, which has been a longstanding subject in the shell model and the collective model, is investigated, and the state is shown to arise as a member state of the Nambu-Goldstone (NG) zero-mode due to global phase locking caused by the condensation aspect of the duality of $\alpha$ clustering in $^{40}$Ca. \par The organization of this paper is as follows. In Sec. II, by using the Brink cluster model, it is shown generally that $\alpha$ cluster structure has a dual property of crystallinity and condensation. In Sec. III a field theoretical superfluid cluster model with the order parameter of condensation, in which spontaneous symmetry breaking of the global phase due to condensation is rigorously treated, is given. In Sec. IV $\alpha$ cluster structure in $^{40}$Ca is studied. First, historical attempts to understand the mysterious $0^+$ state of $^{40}$Ca in the shell model, the collective model, and the $\alpha$ cluster model are briefly reviewed.
It is summarized that the geometrical $\alpha$ cluster model was confirmed by the observation of the $K=0^-$ band with the $\alpha$ cluster structure, which was predicted from the study of anomalous large angle scattering in $\alpha$ + $^{36}$Ar scattering. Second, from the viewpoint of the duality of crystallinity and condensation, the condensation aspect of the $\alpha$ clusters of $^{40}$Ca is studied using the superfluid cluster model. The mechanism of the emergence of the mysterious $0^+$ state is investigated. Sec. V is devoted to discussion. A summary is given in Sec. VI. \section{THE DUALITY OF THE $\alpha$ CLUSTER STRUCTURE: CRYSTALLINITY AND CONDENSATION} \par I show that $\alpha$ cluster structure with crystallinity simultaneously has condensate nature by using the Brink $\alpha$ cluster model based on a geometrical crystallinity picture. The $n$-$\alpha$ cluster model based on the geometrical crystalline picture, such as the two $\alpha$ cluster model of $^8$Be and the three $\alpha$ cluster model of $^{12}$C, is given by the following Brink wave function \cite{Brink1966}: \begin{eqnarray} & \Phi^{B}_{n\alpha}(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n) =\frac{1}{\sqrt{(4n)!} }{\rm det} [\phi_{0s}(\boldsymbol{r}_1- \boldsymbol{R}_1) \chi_{\tau_1,\sigma_1} \cdots \nonumber \\ & \qquad\qquad\qquad\qquad \phi_{0s}(\boldsymbol{r}_{4n}- \boldsymbol{R}_n)\chi_{\tau_{4n},\sigma_{4n}}], \label{Brinkwf} \end{eqnarray} \noindent where $\boldsymbol{R_i}$ is a parameter that specifies the center of the $i$th $\alpha$ cluster. $\phi_{0s} (\boldsymbol{r} - \boldsymbol{R})$ is a 0s harmonic oscillator wave function with a size parameter $b$ around a center $\boldsymbol{R}$, \begin{align} \phi_{0s}(\boldsymbol{r} - \boldsymbol{R}) = \left( \frac{1}{\pi b^2}\right)^{3/4} \exp \left[ - \frac{ (\boldsymbol{r} - \boldsymbol{R})^2}{2b^2} \right], \end{align} \noindent and $\chi_{\tau,\sigma}$ is the spin-isospin wave function of a nucleon.
Equation (\ref{Brinkwf}) is rewritten as \begin{align} & \Phi^{B}_{n\alpha}(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n) =\mathscr{A} \left[ \prod_{i=1}^n \exp \left\{ - 2 \frac{ (\boldsymbol{X}_i - \boldsymbol{R}_i )^2}{b^2} \right\} \phi(\alpha_i) \right], \label{Brinkwf2} \end{align} \noindent where ${\boldsymbol X_i}$ is the center-of-mass coordinate of the $i$th $\alpha$ cluster and $\phi(\alpha_i)$ represents the internal wave function of the $i$th $\alpha$ cluster. $\mathscr{A}$ is the antisymmetrization operator. The generator coordinate wave function $\Psi_{n\alpha}^{GCM}$ based on the geometrical configuration of the Brink wave function is given by \begin{eqnarray} \Psi_{n\alpha}^{GCM} &=&\int d^3 \boldsymbol{R}_1 \cdots d^3 \boldsymbol{R}_n f(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n ) \nonumber \\ & &\quad \times\Phi^{B}_{n\alpha}(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n)\,. \label{GCMwf1} \label{GCMBrink} \end{eqnarray} \begin{figure*}[t] \includegraphics[width=17.2cm]{fig1-8Be12C16O20Ne} \protect\caption{ Illustrative figures of crystallinity, condensation and supersolidity of the $\alpha$ clusters (filled circles) proposed in $^{12}$C in Ref. \cite{Ohkubo2020} are displayed for the simplest system $^8$Be discussed in the text and extended to $^{16}$O with four $\alpha$ clusters and $^{20}$Ne with five $\alpha$ clusters. As the excitation energy increases (vertically in the figure), structure changes occur. In each nucleus, crystallinity, condensation associated with a coherent wave, and supersolidity with both crystallinity and the coherent wave of the $\alpha$ clusters are shown. The original Ikeda diagram based on the crystallinity picture corresponds to (a), (d), (g) and (j) in each nucleus. In (b), (e), (h) and (k) of each nucleus $\alpha$ clusters are sitting in the $0s$ state of the harmonic oscillator potential with a coherent wave (broad curve).
In (c), (f), (i) and (l) of each nucleus, which illustrates supersolidity, the $\alpha$ clusters are sitting in the $0s$ state of the distinct harmonic oscillator potentials separated due to the Pauli repulsion associated with a coherent wave (broad curve). } \label{fig1} \end{figure*} \par I show that the cluster model with crystallinity of Eq.~(\ref{GCMwf1}) has the property of condensation. For the sake of simplicity I treat hereafter the simplest two $\alpha$ cluster structure of $^8$Be. The generator coordinates $\boldsymbol{R}_1$ and $\boldsymbol{R}_2$, which specify the position parameters of the two $\alpha$ clusters, are rewritten as follows by using $\boldsymbol{R}_G$ and $\boldsymbol{R}$, which are the center-of-mass and the relative vectors, respectively: \begin{align} \boldsymbol{R}_1 = \boldsymbol{R}_G + \frac{1}{2} \boldsymbol{R}\,, \quad \boldsymbol{R}_2 = \boldsymbol{R}_G - \frac{1}{2} \boldsymbol{R}\,. \label{com} \end{align} I take $\boldsymbol{R}_G$=0 to remove the spurious center-of-mass motion and use the notation $\Phi^{B}_{2\alpha}(\boldsymbol{R})$ for $ \Phi^{B}_{2\alpha}(\boldsymbol{ \frac{1}{2}R}, \boldsymbol{ -\frac{1}{2}R})$. Thus Eq.~(\ref{GCMwf1}) is written as \begin{eqnarray} \Psi_{2\alpha}^{GCM} & = & \int d^3 \boldsymbol{R} f(\boldsymbol {R}) \Phi^{B}_{2\alpha}(\boldsymbol{R})\,. \label{GCMwf8Be} \end{eqnarray} \noindent $\Psi_{2\alpha}^{GCM}$ is obtained by solving the Hill-Wheeler equation for $f(\boldsymbol {R})$. I introduce ${g}(\boldsymbol{\mu})$, which is related to $f(\boldsymbol{R})$ by the Laplace transformation \begin{align} & f(\boldsymbol {R}) = \int_{0}^{\infty} d\mu_x \int_{0}^{\infty} d\mu_y \int_{0}^{\infty} d\mu_z \exp\bigg[-(\mu_x {R_x}^2\nonumber \\ &\qquad\qquad+\mu_y {R_y}^2+\mu_z {R_z}^2)\biggr] {g}(\boldsymbol{\mu}), \label{Laplace} \end{align} \noindent where $\boldsymbol{\mu}=(\mu_x, \mu_y, \mu_z)$. 
Then Eq.(\ref{GCMwf8Be}) reads \begin{eqnarray} \Psi_{2\alpha}^{GCM} & = &\int d^3 \boldsymbol{\mu} {g}(\boldsymbol {\mu}) \biggl[ \int d^3 \boldsymbol{R} \exp \bigl\{-(\mu_x {R_x}^2+\mu_y {R_y}^2 \nonumber \\& & +\mu_z {R_z}^2)\bigr\} \Phi^{B}_{2\alpha}(\boldsymbol{R}) \biggr]. \label{GCMwf8Be2} \end{eqnarray} \noindent By rewriting the term $\left[\cdots\right]$ on the right-hand side of Eq. (\ref{GCMwf8Be2}) as ${\Phi}_{2\alpha}^{{PCM}}(\boldsymbol{\mu})$, defined by \begin{align} & {\Phi}_{2\alpha}^{PCM}(\boldsymbol{\mu}) \equiv \int d^3 \boldsymbol{R}\exp \biggl[-(\mu_x {R_x}^2 +\mu_y {R_y}^2\nonumber \\ &\qquad\qquad\qquad+\mu_z {R_z}^2)\biggr]\Phi^{B}_{2\alpha}(\boldsymbol{R}), \label{NCMwf8Be2-1} \end{align} \begin{align} & \propto \mathscr{A} \left[ \prod_{i=1}^{2} \exp \left\{ - 2 \left(\frac{ X_{ix}^2}{B_{x}^2}+\frac{ X_{iy}^2}{B_{y}^2}+\frac{ X_{iz}^2}{B_{z}^2} \right) \right\} \phi(\alpha_i) \right], \label{8BeNCM2} \end{align} \noindent with $ B_{k} =\sqrt{b^2 + {\mu_{k}}^{-1} } \; (k=x, y, z)$, Eq. (\ref{GCMwf8Be2}) reads \begin{eqnarray} \Psi_{2\alpha}^{GCM} & = &\int d^3 \boldsymbol{\mu} {g}(\boldsymbol{\mu}) {\Phi}_{2\alpha}^{{PCM}}(\boldsymbol{\mu}). \label{GCMNCMwf8Be2-1} \end{eqnarray} The ${\Phi}_{2\alpha}^{{PCM}}$ in Eq. (\ref{8BeNCM2}), which is called a nonlocalized cluster model (NCM) or a THSR cluster wave function in Refs. \cite{Tohsaki2001,Zhou2020} and was suggested to be a pseudo-condensate model (PCM) in Ref. \cite{Ohkubo2020}, shows that $\alpha$ clusters are sitting in the $0s$ orbit of the trapping harmonic oscillator potential with an oscillator parameter $\boldsymbol{B}=(B_x, B_y, B_z)$ and represents the condensed aspect of the $\alpha$ clusters.
This trapping harmonic oscillator potential corresponds to the trapping harmonic oscillator potential in the superfluid cluster model discussed in the next section where the condensation of the trapped $\alpha$ clusters is rigorously treated by introducing the order parameter due to SSB of the vacuum. While in Eq.(\ref{GCMwf8Be}) the GCM wave function is expressed based on the geometrical picture using the Brink function $\Phi^{B}_{2\alpha}(\boldsymbol{R})$ as a base function, in Eq.(\ref{GCMNCMwf8Be2-1}) the same GCM wave function is expressed using the wave function ${\Phi}_{2\alpha}^{{PCM}}(\boldsymbol {\mu})$ as a base function. From Eqs.(\ref{GCMwf8Be}) and (\ref{GCMNCMwf8Be2-1}), it is found that the Brink $\alpha$ cluster model in the generator coordinate method has both the crystallinity and the condensate nature simultaneously. \par The above discussion for the simplest two $\alpha$ cluster system can be generalized to the $n$-$\alpha$ cluster system. The Laplace transformation relation is generalized to \begin{eqnarray} f(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n) & = & \int_{0}^{\infty} d\boldsymbol{\mu} \exp\biggl[- \sum_{i=1}^{n} (\mu_x R_{ix}^2+\mu_y R_{iy}^2 \nonumber \\ && +\mu_z R_{iz}^2)\biggr] {g}(\boldsymbol{\mu}). \label{Laplace2} \end{eqnarray} Equation (\ref{NCMwf8Be2-1}) generalized to $n$-$\alpha$ clusters is given by \begin{align} & \Phi_{n\alpha}^{PCM}(\boldsymbol{\mu}) = \int d^3 \boldsymbol{R}_1 \cdots d^3 \boldsymbol{R}_n \exp \biggl[ -\sum_{i=1}^{n} (\mu_{x} R_{ix}^2 \nonumber \\ & \qquad\qquad +\mu_{y} R _{iy}^2 + \mu_{z} R_{iz}^2) \biggr] \Phi^{B}_{n\alpha}(\boldsymbol{R}_1, \cdots , \boldsymbol{R}_n), \label{NCM1} \end{align} \begin{align} & \propto \mathscr{A} \left[ \prod_{i=1}^{n} \exp \left\{ - 2 \left(\frac{ X_{ix}^2}{B_{x}^2}+\frac{ X_{iy}^2}{B_{y}^2}+\frac{ X_{iz}^2}{B_{z}^2} \right) \right\} \phi(\alpha_i) \right]. 
\label{NCM2} \end{align} Similarly to Eq.(\ref{GCMNCMwf8Be2-1}), one gets \begin{align} & \Psi_{n\alpha}^{GCM}=\int d^3 \boldsymbol{\mu} {g}(\boldsymbol{\mu}) {\Phi}_{n\alpha}^{{PCM}}(\boldsymbol{\mu}). \label{GCMNCMwf-Nalpha} \end{align} Thus from Eqs. (\ref{GCMwf1}) and (\ref{GCMNCMwf-Nalpha}) it is found generally that the $n$-$\alpha$ cluster wave function in the geometrical cluster model picture has the property of condensation. This shows generally that the GCM $n$-$\alpha$ cluster wave function has the duality of crystallinity and condensation independently of the Hamiltonian used. Illustrative pictures based on the above geometrical structure and the condensate structure of the $\alpha$ clusters in $^8$Be, $^{12}$C, $^{16}$O, and $^{20}$Ne are displayed in Fig.~1. The pictures (a), (d), (g), and (j) correspond to the Ikeda diagram \cite{Ikeda1968,Horiuchi1972} based on the crystallinity. The pictures (b), (e), (h), and (k) represent the wave aspect of the $\alpha$ cluster structure due to the condensation. The two exclusive pictures, the duality of crystallinity and a condensate coherent wave, can be unified in the pictures displayed in (c), (f), (i), and (l) of each nucleus in Fig.~1 where the $\alpha$ clusters sitting in the $0s$ state of the distinct potentials due to the Pauli repulsion between the $\alpha$ clusters \cite{Tamagaki1968} form a coherent wave. The de Broglie wavelength of each $0s$ state $\alpha$ cluster with very low energy is far longer than the geometrical distance between the $\alpha$ clusters. In other words, the phases of the waves are locked to form a coherent wave function, i.e., superfluidity (condensation) of the system. This is general and independent of the geometrical configuration and number of the $\alpha$ clusters involved, $n$.
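As a rough numerical check of this phase-locking argument, one can compare the de Broglie wavelength of a slow $\alpha$ cluster with typical inter-cluster distances of a few fm; the kinetic energy used below is an illustrative assumption for this sketch, not a value computed in this work:

```python
import math

HBARC = 197.327    # hbar*c in MeV fm
M_ALPHA = 3727.4   # alpha-particle rest mass in MeV/c^2

def de_broglie_wavelength_fm(kinetic_mev):
    """Non-relativistic de Broglie wavelength, lambda = 2*pi*hbar/p,
    of an alpha cluster with the given kinetic energy (MeV), in fm."""
    pc = math.sqrt(2.0 * M_ALPHA * kinetic_mev)  # momentum times c, in MeV
    return 2.0 * math.pi * HBARC / pc

# Even for ~1 MeV of kinetic energy the wavelength (roughly 14 fm) far
# exceeds inter-cluster distances of a few fm, so the cluster waves overlap.
lam = de_broglie_wavelength_fm(1.0)
```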
Therefore in principle, whatever the geometrical configuration is ---triangle ($n$=3) structure of $^{12}$C, tetrahedron ($n$=4) structure of $^{16}$O \cite{Dennison1954,Bijker2014,Halcrow2017,Halcrow2019,Halcrow2020}, trigonal bipyramid ($n=$5) structure of $^{20}$Ne \cite{Bouten1962,Brink1968,Brink1970,Nemoto1975,Bijker2021}, linear chain $n$-$\alpha$ cluster ($n$=2, 3, 4, $\cdots$), etc.---the geometrical $\alpha$ cluster structures have the potential to form a coherent wave function (superfluidity). Whether the state is superfluid depends on the superfluid density $\rho_s$, which encapsulates the structure and degree of clustering. The previous study of $^{12}$C \cite{Ohkubo2020} finds that the superfluid ground state is stable with a condensation rate of 5\%, giving energy levels similar to those of the GCM, the RGM, and experiment. \section{ FIELD THEORETICAL SUPERFLUID CLUSTER MODEL FOR CONDENSATION OF $\alpha$ CLUSTERS } The traditional cluster models involve no order parameter that characterizes condensation. A theory with no order parameter is unable to conclude whether a system under investigation is a condensate or not. In Eqs. (\ref{GCMNCMwf-Nalpha}) and (\ref{GCMwf1}) no order parameter to characterize the condensation is involved. In Eq. (\ref{GCMwf1}) the parameter $\boldsymbol{R}$ is the order parameter to characterize the geometrical clustering. In fact, $\boldsymbol{R}=0$ corresponds to the shell model with no clustering and $\boldsymbol{R}\ne0$ represents the degree of geometrical clustering. On the other hand, in Eq. (\ref{GCMNCMwf-Nalpha}) the parameter $\boldsymbol{B}$ is not self-evidently the order parameter of condensation because global phase locking caused by spontaneous symmetry breaking due to condensation is not explicitly involved. In fact, $\boldsymbol{B}=0$ has no physical meaning and $\boldsymbol{B}\ne0$ does not necessarily mean condensation in Eq. (\ref{GCMNCMwf-Nalpha}).
To conclude whether the $\alpha$ cluster structure has Bose-Einstein condensate (BEC) nature, it is necessary to use a theory in which the order parameter to characterize condensation is implemented. I briefly present the formulation of the field theoretical superfluid cluster model developed in \cite{Nakamura2016,Nakamura2018,Katsuragi2018} to study BEC of $\alpha$ clusters in the Hoyle state and the excited states above it in $^{12}$C. The model Hamiltonian for a bosonic field $\hat\psi(x)$ $(x=(\bx,t))$ representing the $\alpha$ cluster is given as follows: \begin{align} &\hat{H}=\int\! d^3x\; \hat\psi^\d(x) \left(-\frac{\nabla^2}{2m}+ V_\mathrm{ex}(\bx)- \mu \right) \hat\psi(x) \notag\\ &\,\, +\frac12 \int\! d^3x\,d^3x'\; \hat\psi^\d(x) \hat\psi^\d(x') U(|\bx-\bx'|) \hat\psi(x') \hat\psi(x) \,. \label{Hamiltonian} \end{align} Here, the potential $V_\mathrm{ex}$ is a mean field potential introduced phenomenologically to trap the $\alpha$ clusters inside the nucleus, and is taken to have a harmonic oscillator form, $ V_\mathrm{ex}(r)= m \Omega^2 r^2/2\,. $ $U (|\bm x -\bm x'|)$ is the residual $\alpha$--$\alpha$ interaction. I set $\hbar=c=1$ throughout this paper. \par When BEC of $\alpha$ clusters occurs, i.e., the global phase symmetry of $\hat\psi$ is spontaneously broken, I decompose $\hat\psi$ as $\hat\psi(x)=\xi(r)+\hat\varphi(x)$, where the $c$-number $\xi(r)=\bra{0} \hat\psi(x) \ket{0}$ is an order parameter and is assumed to be real and isotropic. To obtain the excitation spectrum, one needs to solve three coupled equations, which are the Gross--Pitaevskii (GP) equation, Bogoliubov-de Gennes (BdG) equation, and the zero-mode equation \cite{Nakamura2016, Katsuragi2018}. The GP equation determines the order parameter by \begin{equation}\label{eq:GP} \left\{ -\frac{\nabla^2}{2m}+V_\mathrm{ex}(r) -\mu + V_H(r) \right\} \xi(r) = 0 \,, \end{equation} where $ V_H(r) = \int\! d^3x'\; U(|\bx-\bx'|)\xi^2(r')\,. 
$ The order parameter $\xi$ is normalized with the condensed particle number $N_0$ as \begin{align} \int\! d^3x\; |\xi(r)|^2 = N_0\,. \end{align} The BdG equation describes the collective oscillations on the condensate by \begin{align} \int\! d^3x'\; \left(\begin{array}{cc} \mathcal{L} & \mathcal{M} \\ -\mathcal{M}^* & -\mathcal{L}^* \end{array}\right) \left(\begin{array}{c} u_{\bn} \\ v_{\bn} \end{array}\right) = \omega_\bn \left(\begin{array}{c} u_{\bn} \\ v_{\bn} \end{array} \right), \end{align} where \begin{eqnarray} \mathcal{M}(\bx, \bx') &= &U(|\bx-\bx'|) \xi(r) \xi(r'),\, \nonumber\\ \mathcal{L}(\bx, \bx') &=& \delta(\bx-\bx') \left\{ -\frac{\nabla^2}{2m}+V_\mathrm{ex}(r) -\mu + V_H(r) \right\} \nonumber \\ &&+ \mathcal{M}(\bx, \bx')\,. \end{eqnarray} The index $\bm{n}=(n,\, \ell,\, m)$ stands for the main, azimuthal and magnetic quantum numbers. The eigenvalue $\omega_\bn$ is the excitation energy of the Bogoliubov mode. For isotropic $\xi$, the BdG eigenfunctions can be taken to have separable forms, \begin{eqnarray} u_{\bm{n}}(\bm{x}) &= &\mathcal{U}_{n\ell}(r) Y_{\ell m}(\theta, \phi), \, \nonumber\\ v_{\bm{n}}(\bm{x}) & = &\mathcal{V}_{n\ell}(r) Y_{\ell m}(\theta, \phi). \label{BdGsolution} \end{eqnarray} One necessarily has an eigenfunction belonging to zero eigenvalue, explicitly $(\xi(r), -\xi(r))^t$, and its adjoint function $(\eta(r),\eta(r))^t$ is obtained as \begin{equation} \eta(r)= \frac{\partial}{\partial N_0}\xi(r)\,. \end{equation} The field operator is expanded as \begin{align} \hat\varphi(x)&=-i{\hat Q}(t)\xi(r)+{\hat P}(t)\eta(r) +\sum_{\bn} \left\{{\hat a}u_\bn(\bx) +{\hat a}^\dagger v^\ast_\bn(\bx) \right\}\,, \end{align} with the commutation relations $[{\hat Q}\,,\,{\hat P}]=i$ and $[{\hat a}_{\bn}\,,\,{\hat a}^\dagger_{\bn'}]= \delta_{\bn \bn'}$\,. 
The operator ${\hat a}_\bn$ is an annihilation operator of the Bogoliubov mode, and the pair of canonical operators ${\hat Q}$ and ${\hat P}$ originate from the SSB of the global phase and are called the NG or zero-mode operators. The treatment of the zero-mode operators is a chief feature of the present approach. The naive choice of the unperturbed bilinear Hamiltonian with respect to ${\hat Q}$ and ${\hat P}$ fails due to their large quantum fluctuations. Instead, all the terms consisting only of ${\hat Q}$ and ${\hat P}$ in the total Hamiltonian are gathered to construct the unperturbed nonlinear Hamiltonian of ${\hat Q}$ and ${\hat P}$, denoted by $\hat H_u^{QP}$\, with \begin{eqnarray} \hat H_u^{QP} &=&- \left(\delta\mu + 2C_{2002} + 2C_{1111} \right) \hat P+ \frac{I-4C_{1102}}{2}\hat P^2 \nonumber \\ &&+ 2C_{2011}\hat Q\hat P\hat Q + 2C_{1102}\hat P^3 + \frac{1}{2}C_{2020}\hat Q^4 \nonumber \\ && -2C_{2011} \hat Q^2 + C_{2002}\hat Q\hat P^2\hat Q + \frac{1}{2} C_{0202}\hat P^4\,, \label{eq:HuQP} \end{eqnarray} where \begin{eqnarray} C_{iji'j'} &= & \int\! d^3x d^3x'\, U(|\bx-\bx'|) \{\xi(\bx)\}^i \{\eta(\bx)\}^j \nonumber \\ && \times \{\xi(\bx')\}^{i'}\{\eta(\bx')\}^{j'} \,, \label{eq:Cijij} \end{eqnarray} and $\delta \mu $ is a counter term that the criterion $\bra0\hat\psi\ket0=\xi\,$ determines. I set up the eigenequation for $\hat{H}_u^{QP}$, called the zero--mode equation, \begin{align} \label{eq:HuQPeigen} \hat H_u^{QP} \ket{\Psi_\nu} = E_\nu \ket{\Psi_\nu}\qquad (\nu=0,1,\cdots)\,. \end{align} This equation is similar to a one-dimensional Schr\"odinger equation for a bound problem. \par The total unperturbed Hamiltonian ${\hat H}_u$ is ${\hat H}_u=\hat H_u^{QP} +\sum_{\bn} \omega_\bn {\hat a}_\bn^\dagger{\hat a}_\bn$. The ground state energy is set to zero, $E_0=0$. 
The states that I consider are $\ket{\Psi_\nu}\ket{0}_{\rm ex}$ with energy $E_\nu$, called the zero-mode state, and $\ket{\Psi_0}{\hat a}^\dagger_\bn \ket{0}_{\rm ex}$ with energy $\omega_\bn$, called the BdG state, where ${\hat a}_\bn \ket{0}_{\rm ex}=0$. \section{ $\alpha$ CLUSTER STRUCTURE IN $^{40}$C\lowercase{a}} In order to discuss the macroscopic concept of condensation in nuclei, it seems suitable to study a nucleus which involves many $\alpha$ clusters. I take $^{40}$Ca with ten $\alpha$ clusters. In this section I apply the field theoretical superfluid cluster model to the $\alpha$ cluster structure of $^{40}$Ca with the mysterious $0^+$ state at 3.35 MeV. First, I review briefly how the mysterious $0^+$ state in the doubly magic nucleus has been understood from the viewpoint of mean field theory, i.e., the shell model and the collective model, in recent decades and how the $\alpha$ cluster model based on the crystallinity picture has explained the mysterious $0^+$ state. Second, I study whether the $\alpha$ cluster states explained in the geometrical configuration picture can be understood from the condensation viewpoint of the duality by using the SCM. \subsection{ THE MYSTERIOUS $0^+$ STATE AND GEOMETRICAL $\alpha$ CLUSTER STRUCTURE OF $^{40}$C\lowercase{a}} \par The emergence of the $0_2^+$ state at the very low excitation energy of 3.35 MeV in the doubly shell-closed magic nucleus $^{40}$Ca, as well as the $0_2^+$ state at 6.05 MeV in $^{16}$O, had been mysterious from the viewpoint of the shell model \cite{Brown1966,Gerace1967,Gerace1969,Arima1967}. Brown and Green \cite{Brown1966} pointed out the importance of deformed four-particle four-hole (4p-4h) excitations in lowering the excitation energy of the $0_2^+$ state in $^{16}$O. Gerace and Green \cite{Gerace1967} showed that the same situation occurs for the $0_2^+$ state in $^{40}$Ca.
Gerace and Green \cite{Gerace1969} showed that 8p-8h excitation is important in understanding the third $0_3^+$ state at 5.21 MeV. In the shell model calculations in which the $^{32}$S core is assumed, Sakakura {\it et al.} \cite{Sakakura1976} argued that the $0_2^+$ and $0_3^+$ states are dominated by the 4p-4h excitations and the $0_4^+$ state at 7.30 MeV is dominated by the 8p-8h excitation. \par These studies show that the vacuum ground state of $^{40}$Ca is not a simple 0p-0h spherical shell model state but involves a non-negligible amount of 4p-4h correlations. To understand the excited structure is to reveal the correlations, the predisposition, of the vacuum ground state. From this viewpoint Marumori and Suzuki \cite{Marumori1968} suggested understanding the mechanism of the emergence of the mysterious $0^+$ state as a collective state by defining a vacuum with correlations of the 4p-4h mode. Following this idea, the four-particle and four-hole mode-mode coupling was investigated in $^{16}$O \cite{Takada1974} and $^{40}$Ca \cite{Hasegawa1976}. \par The nuclear energy density functional (EDF) approach has been applied to describe the ground state properties and the collective excitations including clustering, especially the microscopic analysis of the formation and evolution of the cluster structure from the vacuum ground state. The authors of Ref. \cite{Ebran2014} consider that cluster structures can be a transitional phase between the quantum liquid phase and the crystal phase. It is very interesting to know whether the mysterious $0^+$ states in $^{40}$Ca as well as in $^{16}$O are reproduced in the EDF and how the mechanism of their low excitation energy is understood from the viewpoint of the mean field. However, the structure of the excited energy levels in $^{40}$Ca as well as in $^{16}$O has not been reported yet.
\par Also {\it ab initio} approaches, such as fermionic molecular dynamics (FMD) \cite{Chernykh2007} used for $^{12}$C and antisymmetrized molecular dynamics (AMD) \cite{Taniguchi2014} used for $^{42}$Ca, have not been applied to explain the mysterious $0^+$ state of $^{40}$Ca. In $^{16}$O, {\it ab initio} calculations have been unable to explain the very low excitation energy of the mysterious $0^+$ state, yielding an excitation energy of 13.3 MeV in Ref. \cite{Furutachi2008} and 19.8 MeV in Ref. \cite{Wloch2005}, which are two or three times larger than the experimental value, 6.05 MeV. \par From the viewpoint of $\alpha$ cluster structure, Ogawa, Suzuki and Ikeda \cite{Ogawa1977} investigated the structure of $^{40}$Ca using the $\alpha$+$^{36}$Ar cluster model, in which no $K=0^-$ band, which would be a parity-doublet partner of the $K=0^+$ band built on the mysterious $0_2^+$ state, was obtained. Since this situation looked very different from $^{16}$O, where the well-developed $\alpha$ cluster $K=0^-$ band, which is a parity-doublet partner of the $K=0^+$ band built on the mysterious $0_2^+$ (6.05 MeV) state, exists \cite{Suzuki1976A,Suzuki1976B}, Fujiwara {\it et al.} \cite{Fujiwara1980} argued that the $K=0^+$ band in $^{40}$Ca has shell model aspects rather than the $\alpha$ cluster structure. \par On the other hand, from the viewpoint of unification of cluster structure in the bound and quasibound states and backward angle anomaly (BAA) or anomalous large angle scattering (ALAS) in $\alpha$+$^{36}$Ar scattering, Ohkubo and Umehara \cite{Ohkubo1988} showed that the $2^+$ (3.90 MeV), $4^+$ (5.28 MeV), and $6^+$ (6.93 MeV) states built on the mysterious $0_2^+$ state form a $K=0^+$ band with the $\alpha$+$^{36}$Ar cluster structure and predicted the existence of a parity-doublet partner $K=0^-$ band with the well-developed $\alpha$ cluster structure slightly above the $\alpha$ threshold energy.
The observation of the predicted $\alpha$ cluster $K=0^-$ band by Yamaya {\it et al.} \cite{Yamaya1993,Yamaya1994,Yamaya1998} in an $\alpha$ transfer reaction experiment showed that both the $K=0^-$ band and the $K=0^+$ band have the $\alpha$ cluster structure. The $\alpha$ spectroscopic factor, $S^2_\alpha$=0.30, extracted from the $\alpha$ transfer reactions \cite{Yamaya1993,Yamaya1994,Yamaya1998} shows that the ground state has a significant $\alpha$ cluster correlation. The $\alpha$ cluster structure of $^{40}$Ca was further confirmed theoretically by the semi-microscopic $\alpha$ cluster model calculations using the orthogonality condition model (OCM) by Sakuda and Ohkubo \cite{Sakuda1994,Sakuda1998}, in which not only the $\alpha$ cluster model space but also the shell model space is incorporated. In the OCM calculations not only the $\alpha$ cluster states but also the shell-model-like states in $^{40}$Ca are reproduced in the $\alpha$+$^{36}$Ar cluster model. \par Thus the mysterious $0^+$ state of $^{40}$Ca was found to emerge from the ground state with the predisposition of $\alpha$ clustering. The finding that the vacuum ground state involves $\alpha$ cluster correlations is consistent with the shell model studies in Refs. \cite{Brown1966,Gerace1967,Gerace1969,Arima1967} and the collective model viewpoint in Refs. \cite{Marumori1968,Hasegawa1976}, which suggest that the ground state involves multiparticle-multihole, dominantly 4p-4h, shell model components. The geometrical $\alpha$ cluster model has also been successful in describing the $\alpha$ cluster structure in the $^{40}$Ca - $^{44}$Ti region \cite{Ohkubo1998,Michel1998,Sakuda1998}. \par Recently Manton \cite{Manton2020} reported that the energy levels of $^{40}$Ca can be classified as vibrations and rotations of ten $\alpha$ clusters using a Skyrme model.
Microscopic ten $\alpha$ cluster model calculations using the RGM and the GCM, as well as the semi-microscopic OCM, would be desirable; however, such ten-body calculations are far beyond the power of modern supercomputers. From the microscopic cluster model point of view the $\alpha$+$^{36}$Ar cluster model is an approximation of the ten $\alpha$ cluster model with $\boldsymbol{R}_1= \boldsymbol{R}_2= \cdots =\boldsymbol{R}_9$ in Eq.~(\ref{GCMwf1}), as illustrated in Fig.~\ref{fig2}. \par Since the crystallinity picture of the $\alpha$ cluster structure in $^{40}$Ca has been confirmed theoretically and experimentally, the problem is to reveal the origin and the collective nature of the mysterious $0^+$ state as well as the excited $\alpha$ cluster states from the condensation viewpoint of the duality. \subsection{SUPERFLUID CLUSTER MODEL STUDY OF $^{40}$Ca} Because the $\alpha$ cluster structure involves the duality of geometrical structure and condensation, as discussed in Sec. II, it is expected that the $\alpha$ cluster states in $^{40}$Ca can also be understood from the condensation viewpoint using the SCM with the order parameter of condensation. In a previous paper \cite{Ohkubo2020} the SCM was applied to understand the duality of the $\alpha$ cluster structure of $^{12}$C, for which the $\alpha$ cluster condensation of the Hoyle state had been thoroughly investigated theoretically and experimentally. In contrast to the computational difficulties of ten $\alpha$ cluster GCM calculations, the SCM calculation is tractable for many $\alpha$ cluster systems. In fact, the SCM has been successfully applied to study the BEC of $\alpha$ clusters at high excitation energies in many nuclei, $^{12}$C, $^{16}$O, $^{20}$Ne, etc., and in $^{48}$Cr and $^{52}$Fe with thirteen $\alpha$ clusters \cite{Katsuragi2018}.
\begin{figure}[t] \begin{center} \includegraphics[width=8.6cm]{fig2-10alpha.pdf} \end{center} \protect\caption{ (Color online) Illustrative pictures of ten $\alpha$ clusters (filled red circles) in $^{40}$Ca. (a) The crystallinity picture of the $\alpha$ cluster structure of $^{40}$Ca with the $\alpha$+$^{36}$Ar geometrical configuration. (b) The condensation picture of the $\alpha$ cluster structure of $^{40}$Ca in the superfluid cluster model, where the ten $\alpha$ clusters associated with a coherent wave (broad curve) are trapped in the $0s$ orbit of the confining potential. } \label{fig2} \end{figure} \begin{figure}[!thb] \begin{center} \includegraphics[width=8.6cm]{fig3-EnergyLevel.pdf} \end{center} \protect\caption { (a) The energy levels of $^{40}$Ca calculated with the superfluid cluster model with the condensation rate 6\%. The zero-mode states are indicated by zm and the others are BdG states. (b) The experimental low-lying energy levels of $^{40}$Ca \cite{Yamaya1994,Yamaya1998,ENSDF}. The member states of the experimental $K=0^+$ and $K=0^-$ bands with the $\alpha$ cluster structure are indicated by the thick solid lines. } \label{fig:SCMEnergyLevel} \end{figure} As in Refs.~\cite{Ohkubo2020,Nakamura2016,Nakamura2018,Katsuragi2018,Takahashi2020}, I take the Ali--Bodmer type two-range Gaussian nuclear potential $U (|\bm x -\bm x'|)$ $ = V_r e^{-\mu_r^2 |\bm x -\bm x'|^2} - V_a e^{-\mu_a^2 |\bm x -\bm x'|^2}\,$, with $V_r$ and $V_a$ being the strength parameters of the short-range repulsive potential due to the Pauli principle and the long-range attractive potential, respectively \cite{Ali1966}. The chemical potential is fixed by the specification of the superfluid particle number $N_0$. I assume the condensation rate to be 6\%, $N_0=0.06N$. The ground state is identified as the vacuum $\ket{\Psi_0}\ket{0}_{\rm ex}$. The range parameters $\mu_a$ and $\mu_r$ are fixed to the values $0.475$ and $0.7$ ${\rm fm}^{-1}$, respectively, determined in Ref.
\cite{Ali1966} to reproduce $\alpha$+$\alpha$ scattering. The two potential parameters, $\Omega$, which controls the size of the system, and $V_r$, which prevents collapse of the condensate, are determined to be $\Omega=2.97$ MeV$/ \hbar$ and $V_r$=591 MeV. These reproduce the experimental root mean square (rms) radius, 3.43 fm, of the ground state $\ket{\Psi_0}\ket{0}_{\rm ex}$ and the energy level of the $0_2^+$ state identified as the first excited zero-mode state $\ket{\Psi_1}\ket{0}_{\rm ex}$. \par In Fig.~\ref{fig:SCMEnergyLevel}, the calculated energy levels are compared with the experimental data. The calculation locates the $K=0^+$ band states in correspondence to the experimental band built on the mysterious $0_2^+$ state. The moment of inertia of the calculated band is smaller than the experimental one and the $6^+$ state appears at 12.75 MeV. However, it is to be noted that the $\alpha$ cluster band emerges at very low excitation energy from the spherical vacuum. The $0_2^+$ state is a state of the Nambu-Goldstone (NG) zero mode, a soft mode collective state. This soft mode nature explains naturally why the mysterious collective $0^+$ state emerges at such a low excitation energy, which is mysteriously low for a 4p-4h shell model state emerging from the spherical vacuum ground state of $^{40}$Ca. If the system were infinitely large, it would appear at zero excitation energy. The finite low excitation energy is the consequence of the finite size of the nucleus and the Pauli principle. The excited $2^+$ and $4^+$ states of the $K=0^+$ band are the BdG states built on the NG mode state. The calculated $0_3^+$ state, which is a BdG mode state, corresponds to the experimental $0_3^+$ state at 5.21 MeV above the $\alpha$ threshold energy, which is considered to be an 8p-8h state in the deformed model \cite{Gerace1969} and a 4p-4h dominant state in the $^{32}$S core shell model calculations \cite{Sakakura1976}.
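As an illustration of the shape of this two-range Gaussian interaction, the short sketch below evaluates $U(r)$ with the parameters quoted above ($\mu_r=0.7$ fm$^{-1}$, $\mu_a=0.475$ fm$^{-1}$, $V_r=591$ MeV). The attractive strength $V_a=130$ MeV is an assumption here, taken as the standard Ali--Bodmer value, since it is not restated in the text.

```python
import math

# Ali-Bodmer-type two-range Gaussian alpha-alpha potential:
#   U(r) = V_r exp(-mu_r^2 r^2) - V_a exp(-mu_a^2 r^2)
MU_R = 0.7    # fm^-1, range of the short-range repulsion (Pauli principle)
MU_A = 0.475  # fm^-1, range of the long-range attraction
V_R = 591.0   # MeV, repulsive strength refitted in the SCM calculation
V_A = 130.0   # MeV, attractive strength -- assumed standard Ali-Bodmer value

def alpha_alpha_potential(r_fm):
    """Two-range Gaussian alpha-alpha potential U(r) in MeV."""
    return (V_R * math.exp(-(MU_R * r_fm) ** 2)
            - V_A * math.exp(-(MU_A * r_fm) ** 2))

# The short-range repulsion dominates at small separations,
# while the attraction wins at larger r.
print(alpha_alpha_potential(0.0))  # repulsive core: V_R - V_A = 461 MeV
print(alpha_alpha_potential(3.0))  # attractive region (negative)
```

The repulsive core mimicking the Pauli principle is what prevents the condensate from collapsing, which is why $V_r$ is refitted here rather than taken from the original scattering fit.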
The calculated $0_4^+$ state, which is the second member state of the NG zero mode, corresponds to the experimental $0_4^+$ state at 7.30 MeV, which is interpreted to be a 4p-4h and 8p-8h dominant state in the shell model calculation \cite{Sakakura1976} and a 2p-2h dominant state in the deformed model \cite{Gerace1969}. As for the negative parity states, a collective $3^-$ state appears in accordance with experiment, although the calculated energy is slightly too high. The $1^-$ state also appears in good correspondence with the experimental energy level, which is considered to be a band head state of the parity-doublet $K=0^-$ band. It is to be noted that, although no geometrical configuration of the ten $\alpha$ clusters is assumed, the important $\alpha$ cluster states are obtained in good accordance with experiment by the SCM calculation based on the picture of Fig.~\ref{fig2}. \begin{figure}[t] \includegraphics[width=8.6cm]{fig4-densityDistribution.pdf} \protect\caption{(Color online) (a) The calculated eigenfunction (order parameter) $\xi (r)$ (dashed line) and its adjoint eigenfunction $\eta(r)$ (solid line) for the ground state of $^{40}$Ca. (b) The calculated superfluid density distribution $\rho_s$ in the SCM (dashed line) and the matter density distribution $\rho$ in the OCM of Ref. \cite{Sakuda1994} (solid line) for the ground state. } \label{fig:orderparameter} \end{figure} In Fig.~\ref{fig:orderparameter}(a) the calculated eigenfunction $\xi(r)$ (the order parameter) and its adjoint eigenfunction $\eta(r)$ are displayed. The number fluctuation of the superfluid $\alpha$ clusters in the ground state, $\eta$, is large near the surface region and decreases toward the inner and outer regions. In Fig.~\ref{fig:orderparameter}(b) $\rho_s(r)=|\xi(r)|^2/N_0$ represents the calculated superfluid density distribution of the $\alpha$ clusters and $\rho(r)$ is the nuclear matter density distribution calculated in the OCM cluster model of Ref. \cite{Sakuda1994}.
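The normalization implied by the definition $\rho_s(r)=|\xi(r)|^2/N_0$, namely $\int \rho_s\,d^3x = 1$ since $\int |\xi|^2\,d^3x = N_0$, can be checked with a short numerical sketch. The Gaussian profile and the width parameter $B$ below are illustrative assumptions, not the actual SCM order parameter.

```python
import math

# Illustrative check that rho_s(r) = |xi(r)|^2 / N_0 integrates to unity.
# The Gaussian profile and width B are assumptions; the actual xi(r) is
# obtained by solving the SCM equations.
N = 10            # ten alpha clusters in 40Ca
N0 = 0.06 * N     # 6% condensation rate, as in the text
B = 2.0           # fm, assumed width of the toy profile

def xi(r):
    """Toy order parameter, normalized so that int |xi|^2 d^3x = N0."""
    c = math.sqrt(N0 / (math.pi ** 1.5 * B ** 3))
    return c * math.exp(-r * r / (2.0 * B * B))

def rho_s(r):
    return xi(r) ** 2 / N0

# rho_s is largest at the center and decreases toward the surface
assert rho_s(0.0) > rho_s(2.0) > rho_s(4.0)

# radial integration: int_0^inf rho_s(r) 4 pi r^2 dr = 1
dr = 0.001
total = sum(rho_s(i * dr) * 4.0 * math.pi * (i * dr) ** 2 * dr
            for i in range(1, 20001))
print(round(total, 3))  # -> 1.0
```

Whatever the detailed profile, only the small fraction $N_0$ of the clusters carries this unit-normalized superfluid density, which is the point of the comparison with $\rho(r)$ below.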
$\rho_s$ is largest in the center of the nucleus and gradually decreases toward the surface region. The superfluid density $\rho_s$ is much smaller than the non-superfluid normal density, $\rho_n \equiv \rho - \rho_s$. However, it is this small superfluid density component that causes the $0_2^+$ state at such a low excitation energy as an NG zero-mode state. This is reminiscent of the Cooper pairs, a small fraction of nucleons near the Fermi surface, which cause the superfluidity of nuclei in the heavy mass region. The superfluid fraction $\rho_s$ in the ground state is considered to be a predisposition that causes the macroscopic wave nature, the condensation aspect of the duality, of the $\alpha$ cluster states in the excited energy region. \begin{figure}[t] \begin{center} \includegraphics[width=8.6cm]{Fig5BdGwf.pdf} \end{center} \protect\caption{ Calculated BdG wave functions of $^{40}$Ca, $\mathcal{U}_{n\ell}(r)$ (solid lines) and $\mathcal{V}_{n\ell}(r)$ (dashed lines), for the $2^+$ ($n=0$, $\ell$= 2) and $3^-$ ($n=0$, $\ell$= 3) states. } \label{fig:BdGwf} \end{figure} \par In Fig.~\ref{fig:BdGwf} the BdG wave functions $\mathcal{U}_{n\ell}(r)$ and $\mathcal{V}_{n\ell}(r)$ of Eq.~(\ref{BdGsolution}) for the $2^+$ and $3^-$ states are displayed. The peak of $\mathcal{U}_{n\ell}(r)$ for $\ell \neq 0$ is located in the surface region because of the repulsive force between the $\alpha$ clusters and moves outward with increasing $\ell$ due to the centrifugal force. The magnitude of $\mathcal{V}_{n\ell}(r)$ is negligible for the $2^+$ and $3^-$ states, implying no Bogoliubov mixing in these states due to the small condensation rate. \begin{figure*}[t] \begin{center} \includegraphics[width=17.2cm]{fig6ZeromodeWf.pdf} \end{center} \protect\caption{ The zero-mode wave functions $\Psi_\nu(q)$ for the $0^+$ states of $^{40}$Ca calculated with the condensation rate 6\%, $N_0=0.06N$, (a) $\nu=0$, (b) $\nu=1$, and (c) $\nu=2$.
The solid lines and the dashed lines represent the real part and the imaginary part of the wave functions $\Psi_\nu(q)$, respectively. } \label{fig:Zeromodewf} \end{figure*} As for the zero-mode wave function, I introduce the eigenstate of $\hat{Q}$, denoted by $|q\rangle$, as $\hat{Q}|q\rangle = q|q\rangle$. To solve Eq.~(\ref{eq:HuQPeigen}), I move to the $q$-diagonal representation, in which the state is represented by the wave function $\Psi_\nu (q) = \langle q|\Psi_\nu\rangle$, and the operators $\hat{Q}$ and $\hat{P}$ are represented by $q$ and $\frac{1}{i}\frac{\partial}{\partial q}$, respectively, consistent with the commutation relation $[\hat{Q},\hat{P}]=i$. In Fig.~\ref{fig:Zeromodewf} the zero-mode wave functions $\Psi_\nu(q)$ for the first three states obtained by solving Eq.~(\ref{eq:HuQPeigen}) are displayed. Figure~\ref{fig:Zeromodewf}(a) corresponds to the ground state with $\nu$=0 and Fig.~\ref{fig:Zeromodewf}(b) corresponds to the second state with $\nu$=1, the mysterious $0^+$ state at 3.35 MeV. Figure~\ref{fig:Zeromodewf}(c) corresponds to the third member state with $\nu$=2, $0^+_4$ at 7.30 MeV. One sees that the excitation of the NG mode is caused by the nodal excitation of $\Psi_{\nu}(q)$ with respect to $q$ in the NG subspace. It is important to note that this nodal excitation is anharmonic, as seen in $\hat H_u^{QP}$ in Eq.~(\ref{eq:HuQP}), which brings the excitation energy of the $\nu=1$ state lower and closer to the vacuum, and the $\nu=2$ state closer to the $\nu=1$ state in Fig.~\ref{fig:SCMEnergyLevel}. The importance of the zero mode in the BEC systems of $\alpha$ clusters is discussed in detail in Ref.~\cite{Katsuragi2018}. \section{DISCUSSION} I study the condensation rate dependence of the calculated energy levels. In Fig.~\ref{fig7} the energy levels calculated for different condensation rates, 5\%, 6\%, 7\%, and 8\%, are displayed.
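The nodal excitation of the zero-mode wave functions $\Psi_\nu(q)$ described above can be mimicked by a toy one-dimensional eigenvalue problem in the $q$ representation: diagonalizing $H=\frac{1}{2}\hat P^2+V(q)$ on a grid, with $\hat P=\frac{1}{i}\frac{\partial}{\partial q}$ replaced by a finite difference, the $\nu$-th eigenfunction has exactly $\nu$ nodes. The quadratic-plus-quartic potential below is an assumed stand-in, not the actual $\hat H_u^{QP}$ of Eq.~(\ref{eq:HuQP}).

```python
import numpy as np

# Toy q-representation eigenproblem (NOT the actual SCM Hamiltonian H_u^QP):
# H = P^2/2 + V(q) with P = (1/i) d/dq, discretized by central finite
# differences. The nu-th eigenfunction Psi_nu(q) has nu nodes, mirroring
# the nodal excitation of the Nambu-Goldstone zero mode.
n, L = 400, 10.0
q = np.linspace(-L, L, n)
dq = q[1] - q[0]
v = 0.5 * q**2 + 0.05 * q**4          # assumed anharmonic toy potential

main = 1.0 / dq**2 + v                # diagonal of -(1/2) d^2/dq^2 + V(q)
off = -0.5 / dq**2 * np.ones(n - 1)   # off-diagonal of the kinetic term
h = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
energies, states = np.linalg.eigh(h)  # ascending eigenvalues

def count_nodes(psi, tol=1e-8):
    s = psi[np.abs(psi) > tol]        # drop numerically zero tails
    return int(np.sum(np.sign(s[:-1]) != np.sign(s[1:])))

for nu in range(3):
    print(nu, count_nodes(states[:, nu]))
```

The node counts 0, 1, 2 for $\nu=0,1,2$ are the discrete analogue of the nodal structure in Fig.~6; the actual level spacings, of course, depend on the anharmonicity of $\hat H_u^{QP}$ and are not reproduced by this toy potential.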
The confining potential parameter $\Omega$ and the repulsive potential strength $V_r$ are slightly adjusted so that the system does not collapse and the excitation energy of the first excited $0^+$ state corresponds to the experimental energy: $\Omega$=3.14 MeV/$\hbar$ and $V_r$=696 MeV for 5\%, $\Omega$=2.99 MeV/$\hbar$ and $V_r$=555 MeV for 7\%, and $\Omega$=3.04 MeV/$\hbar$ and $V_r$=535 MeV for 8\%. As the condensation rate decreases, the repulsive potential becomes gradually larger to keep the system stable, preventing collapse, while the values of $\Omega$ change little since they are related to the size of the ground state. As seen in Fig.~\ref{fig7}, the structure of the energy spectrum changes little overall. In detail, the excitation energies of the $0_2^+$, $2^+$, $4^+$, $3^-$, and $1^-$ states scarcely change for the different condensation rates. However, for the $0^+$ states, the excitation energy of the zero-mode $\nu$=2 state decreases gradually as the condensation rate increases: In the case of 5\% the excitation energy of the $\nu$=2 zero-mode state, 10.49 MeV, is higher than that of the BdG $0^+$ state. In the case of 6\% the $\nu$=2 zero-mode state comes down to 7.51 MeV. For 8\% the $\nu$=2 zero-mode $0^+$ state becomes lower than the BdG $0^+$ state. In the range of 6\%--8\% the calculated $0^+$ states correspond to the experimental energy spectrum. \begin{figure}[t] \begin{center} \includegraphics[width=8.6cm]{fig7-condensation-dep.pdf} \end{center} \protect\caption { The condensation rate dependence of the energy levels of $^{40}$Ca calculated by the superfluid cluster model with the condensation rates (a) 5\%, (b) 6\%, (c) 7\%, and (d) 8\%. The three zero-mode states with $\nu$=0, 1, and 2 are indicated by zm and the others are BdG states.
} \label{fig7} \end{figure} Next I consider the crystallinity and the condensation aspects of the duality of the $\alpha$ cluster structure by comparing the energy levels calculated by using the OCM with the $\alpha$+$^{36}$Ar cluster model in Ref. \cite{Sakuda1994} and those by using the SCM. As seen in Fig.~\ref{fig8}, both models reproduce the very low-lying mysterious $0^+$ state in agreement with experiment. While the OCM describes the mysterious $0^+$ state as a deformed state with the $\alpha$+$^{36}$Ar geometrical configuration, which is in line with the 4p-4h dominant state in the deformed shell model picture of Gerace and Green \cite{Gerace1967}, the SCM describes it as a Nambu-Goldstone zero-mode state, a soft mode. In the crystallinity picture, the mysteriously low excitation energy is brought about by the energy gain due to deformation caused by the geometrical $\alpha$ clustering. This mechanism is common to the deformed shell model of Gerace and Green in Ref. \cite{Gerace1967}, in which the deformation is not due to crystallinity but due to the deformation of the mean field of the shell model. It is to be noted that the observed significant $\alpha$ spectroscopic factor of the mysterious $0^+$ state, $S^2_\alpha$=0.26, is explained by taking into account the deformation due to $\alpha$ clustering \cite{Sakuda1994}. In the OCM the predisposition of $\alpha$ clustering is implemented in the ground state vacuum. In fact, the calculated ground state has the $\alpha$ spectroscopic factor $S^2_\alpha$=0.086 for the ($I$$\otimes$$L$)=(0$\otimes$0) channel with the [$^{36}$Ar($I$) $\otimes$ $\alpha_{L}]_{J=0}$ configuration, where $L$ is the orbital angular momentum of the relative motion between the $^{36}$Ar and $\alpha$ clusters in $^{40}$Ca.
On the other hand, in the SCM the emergence of the mysterious low excitation energy $0^+$ state is a manifestation of the emergence of a soft mode due to spontaneous symmetry breaking of the global phase of the ground state, the condensation aspect of the duality. \begin{figure}[t!] \includegraphics[width=8.6cm]{fig8OcmExpSCM.pdf} \protect\caption { (a) The energy levels calculated with the $\alpha$ cluster model with the $\alpha$+$^{36}$Ar geometrical configuration in the OCM \cite{Sakuda1994} are compared with (b) the experimental energy levels \cite{ENSDF} and (c) the field theoretical superfluid cluster model calculation with 6\% condensation rate assuming no geometrical configuration. } \label{fig8} \end{figure} The moment of inertia of the $K=0^+$ band calculated in the OCM is in agreement with the experimental value, while that of the SCM is slightly smaller than that in the OCM calculation. The SCM locates the $2^+$ and $4^+$ states at excitation energies higher than in experiment. However, while the OCM describes the band as a rotational band of the geometrical $\alpha$+$^{36}$Ar cluster structure, the SCM gives the band as the BdG states. Although the two models, two views, are completely different, they both basically describe the $\alpha$ cluster structure aspects of the $K=0^+$ band of $^{40}$Ca. As for the other excited $0^+$ states with the $\alpha$ cluster structure above $E_x$=5 MeV, the OCM calculation locates only one $0^+$ state between 5 and 8 MeV, which has the configuration [$^{36}$Ar($2^+$) $\otimes$ $\alpha_{ L=2}]_{J=0}$. More $0^+$ states would be obtained in the extended $\alpha$+$\alpha$+$^{36}$Ar cluster model, since the existence of a $0^+$ state with 8p-8h character has been suggested in the deformed model and shell model calculations in Refs. \cite{Gerace1969,Sakakura1976} and in the $^8$Be transfer reaction experiment in Ref. \cite{Middleton1972}.
The SCM locates two $\alpha$ cluster $0^+$ states in the relevant energy region; one is a zero-mode state and the other is a BdG state. As for the negative parity states, the OCM locates the $1^-$ state of the $K=0^-$ band with the $\alpha$+$^{36}$Ar structure, which is the parity-doublet partner of the $K=0^+$ band. As seen in Fig.~\ref{fig8}, its calculated excitation energy is slightly higher than that of the observed state. On the other hand, the SCM locates the $1^-$ state as a BdG state, whose excitation energy is in good agreement with the experiment. As for the $3^-$ state, the OCM locates a low-lying $3^-$ state, which is a superposition of many channels with the [$^{36}$Ar($I$) $\otimes$ $\alpha_{ L}]_{J=3}$ configuration with the components 0.047, 0.055, 0.037, 0.063, 0.047, 0.036, 0.034, and 0.071 for the channels ($I$$\otimes$$L$)=(0$\otimes$3), (2$\otimes$1), (2$\otimes$3), (2$\otimes$5), (4$\otimes$1), (4$\otimes$3), (4$\otimes$5), and (4$\otimes$7), respectively. On the other hand, the SCM calculation locates the first $3^-$ as a BdG state at an excitation energy slightly higher than the experiment. The two results seem consistent in that this $3^-$ state has a vibrational character. Considering that the SCM describes the $\alpha$ clustering aspects in view of the wave nature, the $\alpha$ cluster structure in $^{40}$Ca is found to be understood from both the viewpoints of crystallinity and condensation associated with superfluidity, a property of supersolidity. It is found in the SCM that the emergence of the $0^+$ state at very low excitation energy, which is mysterious from the viewpoint of multiparticle multihole excitation in the shell model, is the consequence of the first excited $0^+$ state being a collective state with a soft mode nature caused by the NG zero mode arising from the spontaneous symmetry breaking of the vacuum ground state through the condensation aspect, superfluidity, of the duality of $\alpha$ clustering.
This mechanism is quite general when SSB of the global phase of the vacuum occurs. In the previous paper \cite{Ohkubo2020} the mysterious $0^+$ state in $^{12}$C, the Hoyle state, was understood as a soft mode state of the zero mode due to SSB arising from the condensation aspect of the duality of the $\alpha$ cluster structure of the vacuum ground state. What is the evidence for the supersolidity? The direct evidence of supersolidity is the observation of a Nambu-Goldstone mode \cite{Nambu1960,Goldstone1960,Nambu1961} due to SSB of the global phase, as was confirmed very recently for an optical lattice supersolid \cite{Tanzi2019A,Natale2019,Guo2019}. Since the superfluid density $\rho_s$ is the order parameter of the SSB of the global phase \cite{Ohkubo2020}, the existence of $\rho_s$$\ne0$ in the GCM $\alpha$ cluster wave function of the ground state due to the duality accompanies a Nambu-Goldstone mode state, which is a very low-lying collective state and is difficult to explain in the shell model. This logic is the same as that for the emergence of rotational band states in deformed nuclei: quadrupole deformation with the order parameter $\delta\ne0$ is caused by a quadrupole boson condensation in the ground state due to SSB of rotational invariance \cite{Ring1980}. The appearance of the intruder collective states at a very low excitation energy near the $\alpha$ threshold, such as the mysterious $0_2^+$ states in $^{40}$Ca and in $^{16}$O, analogous to the intruder $0_2^+$ state in $^{12}$C, which has been understood by the empirical threshold rule of the Ikeda diagram \cite{Ikeda1968,Horiuchi1972}, is considered to be understood from the viewpoint of the Nambu-Goldstone mode due to SSB of the global phase of the $\alpha$ cluster structure. It is to be noted that the present reasonable success of the SCM, which assumes no geometrical $\alpha$ cluster configuration, does not mean ruling out the geometrical $\alpha$ cluster model.
It should be emphasized that the SCM describes only the condensation, superfluid, aspect of the duality of the $\alpha$ cluster structure, being complementary to the geometrical $\alpha$ cluster picture. In the $\alpha$ cluster structure the geometrical structure is essential. The SCM does not replace the geometrical $\alpha$ cluster model. For example, the rotational motion, that is, the large moment of inertia, is caused by the spontaneous symmetry breaking of rotational invariance due to the geometrical $\alpha$ cluster configuration, which is absent in the present SCM under a spherical trapping potential. The reduction of the moment of inertia in the SCM calculations is inherent to superfluidity, which is well known in superfluid heavy nuclei \cite{Ring1980}. A cluster model with a geometrical configuration that involves the order parameter characterizing the condensation of $\alpha$ clusters is left for future work. Finally I mention the importance of the Pauli principle for the duality of geometrical crystallinity and condensation of the $\alpha$ cluster structure. The geometrical crystallinity of the $\alpha$ clusters has been known to be caused by the Pauli principle \cite{Tamagaki1969,Ohkubo2016}. In Figs.~1(c), 1(f), 1(i), and 1(l), the coherent wave of the $\alpha$ cluster structure of each nucleus is the consequence of the geometrical crystallinity. Thus the Pauli principle has the dual role of causing the geometrical clustering and the condensation. In this sense the origin of the superfluidity of the $\alpha$ cluster structure is different from that of the BCS superfluidity in heavy nuclei and cold atoms. \section{Summary} \par It was shown that the spatially localized Brink $\alpha$ cluster model in the generator coordinate method has the apparently incompatible duality of crystallinity and condensation of $\alpha$ clusters, a property of a supersolid.
In order to see whether the $\alpha$ cluster structure, which in recent decades has been understood based on the crystallinity picture with a geometrical configuration of clusters, can also be understood from the other aspect of the duality, condensation, a field theoretical cluster model in which the order parameter of condensation is introduced was used. The $\alpha$ cluster structure of $^{40}$Ca with a mysterious $0^+$ state at very low excitation energy was investigated by the SCM with ten $\alpha$ clusters. Since the SCM rigorously treats spontaneous symmetry breaking of the global phase, a Nambu-Goldstone collective mode, the zero mode, due to condensation inevitably appears. It is shown that the mysterious $0^+$ state, which is considered to be a band head state of the $K=0^+$ band with the $\alpha$+$^{36}$Ar cluster structure in the crystallinity picture, is understood as an NG zero-mode state. The $1^-$ state, which is considered in the crystallinity picture to be a band head state of the $K=0^-$ band, the parity-doublet partner of the $K=0^+$ band, is obtained as a BdG state in correspondence to the experimental data. The two $\alpha$ cluster $0^+$ states at around 6 MeV are also obtained in accordance with the experimental data. Thus it is found that the low-lying $\alpha$ cluster states, which have been considered to be understood in the geometrical cluster picture, can also be understood from the other aspect of the duality, condensation. The mysterious $0^+$ state of $^{40}$Ca is a collective mode, a soft mode, of the NG mode caused by SSB of the ground state vacuum. This explains naturally why the mysterious low-lying $0^+$ state appears below or near the $\alpha$ threshold energy of $^{40}$Ca. This mechanism is logically the same as the emergence of the NG mode rotational band states in deformed nuclei, which is caused by a quadrupole boson condensation due to SSB of rotational invariance \cite{Ring1980}.
The appearance of such intruder collective states near the $\alpha$ threshold, which has been understood by the empirical threshold rule of the Ikeda diagram \cite{Ikeda1968,Horiuchi1972}, is understood to be due to the NG mode state arising from the condensation of the $\alpha$ cluster structure. The dual property of crystallinity and condensation, a property of a supersolid, may be a general feature of the $\alpha$ cluster structure. Since the Pauli principle is responsible for clustering \cite{Ohkubo2016,Tamagaki1969}, one can say that the supersolidity of the $\alpha$ cluster structure is a consequence of the Pauli principle. The author thanks J. Takahashi for the numerical calculations using the superfluid cluster model in the early stage of this work and Y. Yamanaka for his interest. He also thanks the Yukawa Institute for Theoretical Physics, Kyoto University for the hospitality extended during a stay in 2019.
\section{Introduction} \label{S-Intro} A liquid film flowing down a vertical fibre is often encountered in a wide variety of technological applications such as condensers, emergency cooling of nuclear fuel rods and optical fibre coating. It can also serve as a simple prototype for the study of wave instabilities and transitions in open-flow hydrodynamic and other nonlinear systems. As a consequence, this problem has been an active topic of both experimental and theoretical research, especially over the past two decades. Experimental studies have revealed a complex wave dynamics dominated by axisymmetric and localised tear-drop-like structures which continuously interact with each other~\cite[]{Que90,Kli01,Cra06,Dup07,Dup09a}. These structures are robust as they propagate over long distances without changing their speed or shape significantly; they are also separated by portions of nearly flat film, and they can be referred to as traveling waves. When the portions of nearly flat film between these structures are much longer than their characteristic length, the structures can be referred to as solitary waves. The formation of solitary/traveling waves results from the interplay between two different instability mechanisms: (i) the classical instability of a liquid film flowing down an inclined planar substrate prompted by inertia effects, a mode initially characterised experimentally and theoretically by Kapitza and his son~\cite[]{Kap49}; (ii) the interfacial instability of a liquid cylinder, first considered in experiments by Plateau and theoretically explained by Lord Rayleigh \cite[]{Pla73,Ray78}. These two mechanisms are hereinafter referred to as the K and RP~modes of instability, respectively. The experimental study by Duprat \etal \cite{Dup09a}, in particular, reported the traveling waves' characteristics, namely their shape, speed and amplitude, for very viscous fluids.
The parameter values in the experiments were chosen in order to investigate the interplay of the K and RP modes on the waves; thus the study by Duprat \etal~\cite{Dup09a} completed the flow regime portrait obtained by Kliakhandler \cite{Kli01} for very viscous fluids but in the inertialess limit. Regular wavetrains were generated by means of a periodic forcing at the inlet. Two flow regimes were identified. For thin fibres and/or small flow rates, the RP mode is dominant and the solitary waves resemble beads or sliding drops whose shape is affected by gravity. In fact, when rescaled by the amplitude and the tail length, the profiles are nearly superimposed. At the same time, the flow field in a frame moving with the beads is characterised by recirculation zones within the beads. When closed streamlines exist in the moving frame, a fluid particle is trapped in both the moving and laboratory frames. Hence, the beads transport the trapped fluid mass downstream. For these reasons, we refer to the flow regime observed for thin fibres and/or small flow rates as the {\sl drop-like regime\/}. For thick fibres and/or large flow rates, the K mode is dominant and a steepening of the wave front is observed with an increase of the wave amplitude. Mass transport is not observed in this regime except for a few cases corresponding to the largest waves. We refer to this regime as the {\sl wave-like regime\/}. We can conjecture that the onset of recirculation zones in the wave-like regime is a signature of the increased prevalence of the K mode. Eventually, we observe a transition from the {\sl drag-gravity\/} regime, where inertia plays a perturbative role, to the {\sl drag-inertia\/} regime, where inertia effects become dominant. Similar regimes and a transition between the two as inertia effects increase were first observed in the planar case~\cite[]{Oos99,Ruy05b}.
Noteworthy is that the drop-like regime is similar to the one observed by Qu\'er\'e~\cite{Que90} on a fibre or wire being pulled out from a liquid bath and results from the same physical mechanisms. The thin annular film that coats the wire can break up into drops. This drop formation process occurs when the typical time of growth of the RP instability is much smaller than the typical time of advection of a structure by the flow, \ie for small fibre radii and/or small flow rates. On the theoretical front, a number of modelling approaches have been proposed within the framework of the long-wave approximation of the Navier-Stokes equations and associated wall and free-surface boundary conditions~\cite[]{Fre92,Tri92,Kal94,Kli01,Roy02,Cra06,Rob06,Nov09}: the basic assumption of this approximation is that of slow spatial and time modulations of the film thickness, motivating the introduction of a formal perturbation parameter, the `long-wave' or `film parameter' $\epsilon$ measuring such modulations. Perturbation expansions in terms of this parameter then lead to substantial simplifications of the governing equations and boundary conditions. Additional assumptions lead to further simplifications, i.e. small film thickness $h$ in comparison to the fibre radius $R$ or negligible inertia. The resulting models are either single evolution equations for the film thickness $h$, e.g. the model by Frenkel~\cite{Fre92} based on the long-wave approximation only, or systems of two coupled evolution equations for the film thickness $h$ and streamwise flow rate $q$ which combine the long-wave approximation with other approaches, e.g. the model by~Trifonov \cite{Tri92} based on the `integral-boundary-layer' approximation and the more recent model by Novbari and Oron~\cite{Nov09} based on an `energy integral' method.
It should be noted that all the above studies neglected the second-order viscous terms originating from the stream-wise momentum equation (streamwise viscous diffusion) and tangential stress balance (second-order contributions to the tangential stress at the free surface). The recent study by Ruyer-Quil \etal~\cite{Ruy08} formulated a two-evolution equation model for $h$ and $q$ that took into account the second-order viscous terms but also included inertia and was not limited to small aspect ratios $h/R$. The model was based on a combination of the long-wave approximation and a weighted-residuals approach using appropriate polynomial test functions for the velocity field -- a `weighted residuals integral boundary layer' (WRIBL) model following the terminology introduced by Oron~\cite{Oro08}. Note that the WRIBL model is consistent~\footnote{Following the terminology introduced in~\cite{Ruy98,Ruy00,Ruy02}, a film flow model is {\it consistent} at $O(\epsilon^n)$ when all neglected terms are of higher order, or equivalently no terms of $O(\epsilon^n)$ or smaller have been omitted, and hence a gradient expansion of the model up to $O(\epsilon^n)$ agrees exactly with the single evolution equation for $h$ at $O(\epsilon^n)$ obtained with just the long-wave approximation.} at $O(\epsilon)$ for the inertial terms and at $O(\epsilon^2)$ for the remaining contributions, whereas the models obtained by Trifonov~\cite{Tri92} and Novbari and Oron~\cite{Nov09} are not consistent at $O(\epsilon)$. Furthermore, the study by Duprat \etal~\cite{Dup09b} compared the wavetrains generated experimentally by periodic forcing at the inlet to the traveling-wave solutions of the WRIBL model showing remarkable agreement in all cases, thus validating the model experimentally. 
Experimental validation was also performed in the study by Ruyer-Quil \emph{et al.}~\cite{Ruy08}, where the traveling-wave solutions of the WRIBL model were favorably compared to the experiments by Kliakhandler~\emph{et al.}~\cite{Kli01}, while the spatio-temporal dynamics of the film computed numerically with the WRIBL model was shown to be in good agreement with the experiments by Duprat \emph{et al.}~\cite{Dup07}. As was shown in \cite{Ruy08}, the second-order viscous terms have a dispersive effect on the speed of the linear waves (they introduce a wavenumber dependence of the speed) and hence we shall refer to this effect as {\sl viscous dispersion\/}. Viscous dispersion influences the shape of the capillary ripples in front of a solitary hump, more specifically, the amplitude and frequency of the capillary ripples, an effect which is amplified as the Reynolds number is increased. Hence, viscous dispersion is in fact a linear effect, but interestingly it can have some crucial consequences on the nonlinear dynamics of the film and the wave-selection process in the spatio-temporal evolution. Indeed, solitary pulses interact through their overlapping tails, i.e. the capillary ripples, whose amplitude and frequency will affect the separation distance between the pulses. For example, smaller-amplitude ripples will allow for more overlap between the tails of neighboring pulses, thus decreasing their separation distance. These points have been analyzed in detail in the recent work by Pradas \etal~\cite{Pra11} on the interaction of coherent structures on falling films on planar substrates and the influence of viscous dispersion on this interaction. Their analysis was based on the weighted-residuals models obtained in~\cite{Ruy98,Ruy00,Ruy02}. The main aim of the present study is to characterize theoretically the solitary/traveling waves propagating down the fibre within the framework of the WRIBL model. 
A first step towards the investigation of the traveling wave solutions of the WRIBL model was the recent study by Ruyer-Quil~\emph{et al.}~\cite[Sect. 6]{Ruy08}. In this study traveling wave solutions were constructed numerically and were favorably compared to the experiments by Kliakhandler~\emph{et al.}~\cite{Kli01}, as noted earlier. Here we undertake an asymptotic analysis of the governing equations for solitary/traveling waves in various limiting cases, i.e. in the limits of small/large values of the pertinent parameters. We also obtain, both numerically and asymptotically, static drops in the drop-like regime. Furthermore, by using elements from dynamical systems theory, we provide a detailed and systematic parametric study of their speed, shape and amplitude, i.e. we construct bifurcation diagrams for their speed as a function of pertinent parameters, as well as obtain ranges in the parameter space for which the K~or RP~modes of instability, prompted by inertia and azimuthal curvature, respectively, are dominant. We scrutinize the four different regimes described earlier (drop-like, wave-like, drag-gravity, drag-inertia), and provide detailed phase diagrams and corresponding regime maps for very viscous and less viscous fluids, thus providing a deeper understanding of the problem as well as new insights, and completing the flow regime diagram provided in~\cite{Dup09a} for viscous fluids. Section~\ref{Formulation} introduces the pertinent non-dimensionalization and the WRIBL model. Solitary wave solutions are constructed in \S~\ref{S-dyn}. Asymptotic limits for small/large values of the pertinent parameters are analyzed in \S~\ref{S-asymp}. Traveling waves corresponding to the experimental conditions considered in \cite{Kli01,Dup07} are next discussed in \S~\ref{S-TW}. Section~\ref{S-Phase} presents a phase diagram of the different regimes. Finally, a summary of our findings and concluding remarks are offered in \S~\ref{S-concl}. 
\section{Formulation} \label{Formulation} \subsection{Natural set of parameters} \label{S-Natural} Consider a film flowing down a vertical cylinder under the action of gravity. The liquid has dynamic and kinematic viscosity, $\mu$ and $\nu$, respectively, density $\rho$ and surface tension $\sigma$. The flow is assumed to remain axisymmetric. $\bar r$, $\bar x$, $\bar u$ and $\bar t$ denote the radial coordinate (pointing outwards from the fibre centreline), the axial coordinate (oriented along gravity), the axial velocity distribution and time, respectively (bars are used to distinguish dimensional from dimensionless quantities unless the distinction is unnecessary). From simple physical considerations and without prior knowledge of the specific details of the system, the following scales can be readily identified: the fibre radius $\bar R$, the Nusselt thickness $\bar h_{\rm N}$ of the uniformly coated film, the length and time scales, $l_\nu=\nu^{2/3} g^{-1/3}$ and $t_\nu = \nu^{1/3} g^{-1/3}$, based on gravity and viscosity (making explicit the balance between gravity and viscous forces giving rise to the Nusselt flat-film solution) and the capillary length $l_c= \sqrt{\sigma/(\rho g)}$. A first set of pertinent dimensionless groups arises from these scales: the aspect ratio \begin{subequations} \label{natural-params} \begin{equation} \ti \alpha \equiv \bar h_{\rm N}/{\bar R} \end{equation} which assesses azimuthal curvature effects at the scale of the film, the Goucher number \cite[]{Que99}, \begin{equation} G\!o\equiv{\bar R}/l_c\,, \end{equation} which compares azimuthal and axial surface tension effects, and the Kapitza number, \begin{equation} \Gamma\equiv\sigma/(\rho \nu^{4/3} g^{1/3}) = (l_c/l_\nu)^2\,, \end{equation} \end{subequations} comparing surface tension and viscosity. Useful combinations of these parameters are $h_{\rm N} \equiv \bar h_{\rm N}/l_\nu$ and $\bar h_{\rm N}/l_c$. 
The former compares the film thickness to the gravity-viscous length scale and, indirectly, inertia and viscosity since the Nusselt base flow is the result of the balance of gravity and viscosity. The latter, $\bar h_{\rm N}/l_c$, is related to the Bond number $B\!o=\rho g \bar h_{\rm N}^2/\sigma = (\bar h_{\rm N}/l_c)^2$ comparing surface tension and gravity at the scale of the film. The advantage of the set of parameters $\ti \alpha$, $G\!o$ and $\Gamma$ is that when the geometry and the working fluid are fixed, the Goucher and the Kapitza numbers $G\!o$ and $\Gamma$ are constant and the only free parameter is $\ti \alpha$. From an experimental point of view, $\ti \alpha$ can be varied independently by varying the inlet flow rate. The Kapitza number $\Gamma$ is entirely defined by the fluid properties independently of the flow characteristics, whereas the Goucher number $G\!o$ can be easily varied by replacing the fibre. The parameters $\ti \alpha$, $G\!o$ and $\Gamma$ can therefore be viewed as `natural' for the fibre problem, and we will systematically recast our results in terms of them. Table~\ref{table2} gives the physical properties of four fluids of increasing viscosity commonly used in experiments, spanning a wide range of Kapitza numbers, together with the corresponding capillary and viscous-gravity length scales. For simplicity, our results will be presented for the Kapitza numbers listed in table~\ref{table2}. 
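As a quick consistency check, the length scales and Kapitza numbers quoted in table~\ref{table2} follow directly from the fluid properties. A minimal sketch (assuming $g=9.81~{\rm m\,s^{-2}}$; small discrepancies with the tabulated values reflect rounding of the fluid properties):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2 (assumed value)

def length_scales(nu_mm2s, rho, sigma_mNm):
    """Capillary length l_c and viscous-gravity length l_nu (both in mm),
    and Kapitza number Gamma, from kinematic viscosity (mm^2/s),
    density (kg/m^3) and surface tension (mN/m)."""
    nu = nu_mm2s * 1e-6          # kinematic viscosity, m^2/s
    sigma = sigma_mNm * 1e-3     # surface tension, N/m
    l_c = math.sqrt(sigma / (rho * g))              # l_c = sqrt(sigma/(rho g))
    l_nu = nu ** (2.0 / 3.0) * g ** (-1.0 / 3.0)    # l_nu = nu^(2/3) g^(-1/3)
    Gamma = (l_c / l_nu) ** 2                       # Kapitza number
    return l_c * 1e3, l_nu * 1e3, Gamma

# the four fluids of table 2
fluids = {
    "water":                     (1.0,    998.0, 72.5),
    "Rhodorsil silicon oil v50": (50.0,   963.0, 20.8),
    "castor oil":                (440.0,  961.0, 31.0),
    "Rhodorsil v1000":           (1000.0, 980.0, 21.1),
}
for name, props in fluids.items():
    l_c, l_nu, Gamma = length_scales(*props)
    print(f"{name:26s} l_c = {l_c:.2f} mm, l_nu = {l_nu:.3f} mm, Gamma = {Gamma:.3g}")
```

The Kapitza number drops by four orders of magnitude from water to the v1000 oil, which is what makes these four fluids a convenient sweep of the $\Gamma$ axis.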
\begin{table*} \begin{center} \begin{tabular}{l|ccc|ccc} &$\nu$ (${\rm mm}^2\,{\rm s}^{-1}$) & $\rho$ (${\rm kg}\,{\rm m}^{-3}$) & $\sigma$ (${\rm mN}\,{\rm m}^{-1}$) & $l_c$ (mm) & $l_\nu$ (mm) & $\Gamma$ \\ water & 1 & 998 & 72.5 & 2.7 & 0.047 & 3376 \\ Rhodorsil silicon oil v50 & 50 & 963 & 20.8 & 1.5 & 0.63 & 5.48 \\ castor oil & 440 & 961 & 31 & 1.8 & 2.7 & 0.45\\ Rhodorsil silicon oil v1000 & 1000 & 980 & 21.1 & 1.5 & 4.7 & 0.10 \\ \hline \end{tabular} \end{center} \caption{Fluid properties, capillary length $l_c$, gravity-viscous length $l_\nu$ and Kapitza number used in the present study. The data for silicon oil v50 and castor oil have been taken from \cite{Dup07,Kli01}. \label{table2} } \end{table*} \subsection{WRIBL model} \label{S-WRIBL} We now adopt Shkadov's scaling~\cite[]{Shk77} and introduce different length scales for the streamwise (axial) and radial directions. The length scale in the radial direction is the Nusselt thickness $\bar h_{\rm N}$, whereas the length scale in the streamwise direction is chosen as $\kappa \bar h_{\rm N}$ defined by the balance of the streamwise pressure gradient induced by capillarity, $\propto \sigma \partial_{xxx} h$, and gravity acceleration, $\rho g$, which gives $\kappa =[\sigma/(\rho g \bar h_{\rm N}^2)]^{1/3} = (l_c/\bar h_{\rm N})^{2/3}$. The time scale is defined with reference to the Nusselt solution of uniform thickness (a result of the balance of gravity and viscosity). The volumetric flow rate per unit length of circumference, ${\bar q}_{\rm N} ={\bar R}^{-1} \int_{\bar R}^{\bar R + \bar h_{\rm N}} {\bar u} \,{\bar r}\, d{\bar r}$, of a film of constant thickness $\bar h_{\rm N}$ is given by \begin{equation} \label{bqN} {\bar q}_{\rm N} \equiv \frac{g\bar h_{\rm N}^3}{3\nu} \phi(\ti \alpha)\,, \end{equation} where $\phi$ is a geometric factor defined in (\ref{def-phi}) and measures the deviation of the flow-rate-to-thickness relation from the cubic dependency corresponding to the planar geometry ($\phi(0)=1$). 
Similarly to the streamwise length scale, the time scale is stretched by a factor $\kappa$ and thus defined as $3\kappa\bar h_{\rm N}^2/{\bar q}_{\rm N}=\nu\kappa/[g \bar h_{\rm N} \phi(\ti \alpha)]$. Our basic equations for the analysis to follow are the WRIBL model obtained in~\cite{Ruy08}, a set of two evolution equations for the local film thickness $h(x,t)$ and the local flow rate $q(x,t) \equiv R^{-1} \int_{R}^{R + h(x,t)} {u} \,{r} d{r}$. For the sake of clarity and completeness we re-write the WRIBL model here, \begin{subequations} \begin{eqnarray} \label{q} \partial_t h &=& -\frac{1}{1 + \ti \alpha h} \partial_x q\,, \\\nonumber \delta \partial_t q &=& \delta\left[- {F} (\ti \alpha h)\,\frac{q}{h} \partial_x q + G(\ti \alpha h) \,\frac{q^2}{h^2} \partial_x h \right] + \frac{I(\ti \alpha h)}{\phi(\ti \alpha)} \left[ -\frac{3\phi(\ti \alpha)}{\phi(\ti \alpha h)} \frac{q}{h^2} \right. \\&& \left. \nonumber + h \left\{1 + \partial_{xxx} h + \frac{\beta}{(1 +\ti \alpha h)^2} \partial_x h - \frac{1}{2} \partial_x \left(\frac{\ti \alpha}{1 +\ti \alpha h} (\partial_x h)^2 \right) \right\} \right] \\ && + \eta\left[ J(\ti \alpha h) \frac{q}{h^2} (\partial_x h)^2 - K(\ti \alpha h) \frac{\partial_x q \partial_x h}{h} - L(\ti \alpha h) \frac{q}{h} \partial_{xx} h + M(\ti \alpha h) \partial_{xx} q \right] \,, \label{mom-shk} \end{eqnarray} \label{model2shk} \end{subequations} in terms of the Shkadov scaling, where $\phi$, ${F}$, $G$, $I$, $J$, $K$, $L$, and $M$ are functions of the aspect ratio $\ti \alpha$ given in Appendix~\ref{S-Coeffs}. Equation~(\ref{q}) is the (exact) dimensionless mass balance whereas (\ref{mom-shk}) is the streamwise momentum equation averaged across the film with a weighted-residuals approach. 
It should be emphasized that the WRIBL model has been validated in \cite{Ruy08,Dup09a} through direct comparisons to the experiments in \cite{Kli01,Dup07,Dup09a} as noted in \S~\ref{S-Intro} [for both very viscous and less viscous liquids (castor oil and silicon oil V50, see table~\ref{table2}) and a wide range of the parameters ($0.15\le G\!o\le 1$ and $0.5\le\ti \alpha\le 4.5$)]. Shkadov's scales introduce three new dimensionless groups besides the aspect ratio $\ti \alpha=h_{\rm N}/R$, a `reduced Reynolds number', \begin{subequations} \label{Shk-param} \begin{equation} \delta \equiv 3 {\bar q}_{\rm N} /(\nu \kappa) = \left(\ti \alpha G\!o \right)^{11/3} \phi(\ti \alpha) \Gamma^{3/2}, \end{equation} which compares inertia and the viscous drag at the scale $\kappa \bar h_{\rm N}$ introduced by the balance of gravity and capillarity, a streamwise `viscous dispersion parameter', \begin{equation} \eta \equiv 1/\kappa^2 = ({\bar h_{\rm N}}/l_c)^{4/3} = \left(\ti \alpha G\!o \right)^{4/3}, \end{equation} and the dimensionless group, \begin{equation} \beta\equiv\ti \alpha^2/\eta = \ti \alpha^{2/3} G\!o^{-4/3}, \end{equation} \end{subequations} which is a combination of $\ti \alpha$ and $\eta$ and compares azimuthal to axial surface tension effects. We have made explicit in (\ref{Shk-param}) the relations of $\delta$, $\eta$ and $\beta$ to the `natural' parameters $\ti \alpha$, $G\!o$ and $\Gamma$. Notice that all second-order/viscous-dispersion terms are gathered in the last row of (\ref{mom-shk}) and are multiplied by $\eta$. Finally, we recall the usual definition of the Reynolds number based on the flow rate, $R\!e={\bar q}_{\rm N}/\nu=h_{\rm N}^3\phi(\ti \alpha)/3$ where again $h_{\rm N}=\bar h_{\rm N}/l_\nu$. 
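The relations (\ref{Shk-param}) are straightforward to evaluate. The sketch below computes $\delta$, $\eta$ and $\beta$ from the natural parameters and checks the identity $\beta=\ti \alpha^2/\eta$; the closed form used for $\phi$ is an assumption here (the paper defines $\phi$ in appendix~\ref{S-Coeffs}), chosen so that $\phi(0)=1$ and so that it reproduces the kinematic wave speed (\ref{ck}):

```python
import math

def phi(alpha):
    """Geometric factor of the Nusselt flow rate (assumed closed form,
    consistent with phi(0) = 1 and with the kinematic wave speed)."""
    if alpha == 0.0:
        return 1.0
    b = 1.0 + alpha
    return 3.0 * (4.0 * math.log(b) * b ** 4 - 3.0 * b ** 4
                  + 4.0 * b ** 2 - 1.0) / (16.0 * alpha ** 3)

def shkadov_groups(alpha, Go, Gamma):
    """Reduced Reynolds number delta, viscous dispersion parameter eta and
    saturation number beta as functions of the natural parameters."""
    delta = (alpha * Go) ** (11.0 / 3.0) * phi(alpha) * Gamma ** 1.5
    eta = (alpha * Go) ** (4.0 / 3.0)
    beta = alpha ** (2.0 / 3.0) * Go ** (-4.0 / 3.0)
    return delta, eta, beta

# silicon oil v50 on a 1.5 mm fibre (Go = 1, Gamma = 5.48), alpha = 0.5
print(shkadov_groups(0.5, 1.0, 5.48))
```

At fixed $G\!o$ and $\Gamma$, $\delta$ increases steeply with $\ti \alpha$ (exponent $11/3$ plus the growth of $\phi$), which is why the inlet flow rate alone moves the flow between the $\delta\ll1$ and $\delta\gg1$ regimes discussed below.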
The advantage of Shkadov's scaling stems from (i) the direct reference to the Nusselt uniform film flow that simplifies the comparisons between solutions, with the Nusselt solution corresponding to constant values of the film thickness and flow rates $h=1$ and $q=1/3$; (ii) the association of a single parameter to each physical effect affecting the balance of the different forces: Inertia ($\delta$), azimuthal surface tension ($\beta$), viscous dispersion ($\eta$) and geometry ($\ti \alpha$). As noted in \S~\ref{S-Intro} the WRIBL model was derived with the long-wave approximation (i.e. under the assumption of slow space and time modulations, $\partial_{x,t} \sim \epsilon$, where $\epsilon$ is the long-wave/film parameter) and a weighted-residuals approach in which the velocity field is expanded on an appropriately chosen set of test functions. This expansion takes into account the (small) deviations of the velocity field from the uniform-thickness solution. As also noted in \S~\ref{S-Intro}, contrary to the model obtained by Trifonov~\cite{Tri92} -- see also~\cite{Sis06} -- and to the model by Novbari and Oron \cite{Nov09}, (\ref{model2shk}) is consistent up to $O(\epsilon)$ for the inertia terms and up to $O(\epsilon^2)$ for the remaining contributions (and accounts for viscous dispersion). Indeed, both Trifonov's and Novbari and Oron's approaches assume a self-similar velocity distribution and do not account for the deviations of the velocity profile induced by the free-surface deformations. For this reason, their two-equation formulations lack consistency even at first order in the film parameter. Furthermore, we note that the energy-integral approach employed by Novbari and Oron \cite{Nov09} is not consistent with the kinetic energy balance of the flow. 
Indeed, writing formally the axial momentum equation as ${\cal M}(u)=0$, Novbari and Oron's averaged momentum equation reads $\int_R^{R+h} {\cal M}(u)u \,dr=0$ whereas the kinetic energy balance of a section of the liquid corresponds to $\int_R^{R+h} {\cal M}(u) u\, d(r^2)=0$. Truncating ${\cal M}(u)$ at $O(\epsilon^2)$ is then equivalent to the Galerkin approach that can be used to reduce the algebra leading to (\ref{mom-shk}) \cite[]{Ruy08}. Noteworthy is that the two-equation model (\ref{model2shk}) is not limited to small aspect ratios unlike, e.g., the model by Roberts and Li~\cite[]{Rob06}. To end this section let us point out one apparent drawback of the Shkadov scaling, namely the divergence of the dimensionless parameter $\beta=\ti \alpha^2/\eta$ as the viscous dispersion parameter goes to zero for $\ti \alpha=O(1)$. The divergence of $\beta$ signals that the typical length of a wave is not determined by the balance of gravitational and axial capillary forces, as assumed in Shkadov's scaling, but rather by the balance of axial and azimuthal capillary forces, in which case the typical curvatures of the beads in the azimuthal and axial directions must coincide. The beads have thus a drop-like rounded shape, the long-wave approximation starts to be violated and the viscous dispersion effects cannot be a priori discarded as in the planar case. When considering drop-like beads (see \S~\ref{S-drop}), it can thus be useful to adopt a scaling based on the radius ${\bar R}$ of the fibre, which gives the time scale $\nu/(g {\bar R})$. This scaling introduces a Galilei number, $G\!a \equiv g {\bar R}^3/\nu^2= G\!o^3 \Gamma^{3/2}$. 
\subsection{Surface equations and saturation numbers} \label{S-saturation} Neglecting inertia and viscous dispersion, Craster and Matar \cite{Cra06} formulated a single evolution equation for the film thickness $h$, the Craster and Matar (CM) equation, \begin{equation} \label{KDB} \partial_t \left( h + \frac{\ti \alpha}{2} h^2 \right) + \partial_x \left[ \frac{h^3}{3} \frac{\phi(\ti \alpha h)}{\phi(\ti \alpha)} \left( 1 + \frac{\beta}{(1+\ti \alpha h)^2} \partial_x h + \partial_{xxx} h \right) \right] =0\,. \end{equation} For sufficiently thin films, that is $\ti \alpha \to 0$, we obtain from (\ref{KDB}) \begin{equation} \label{Serafim} \partial_t h + \partial_x \left[ \frac{h^3}{3} \left( 1 + \beta \partial_x h + \partial_{xxx} h \right) \right] =0\,, \end{equation} which is the equation derived initially by Frenkel \cite{Fre92} (see also~\cite{Kal94}). Equations~(\ref{KDB}) and~(\ref{Serafim}) are the simplest evolution equations balancing all relevant physical effects, gravity, viscous drag, surface tension and the fibre curvature. They are equations for the free surface $h(x,t)$ only and following the terminology introduced by Ooshida~\cite{Oos99}, we shall refer to them as {\sl surface equations\/}. The CM equation offers a simple prototype for the flow, easily amenable to mathematical and numerical scrutiny. It can be obtained asymptotically from (\ref{model2shk}) in the limit $\delta\to0$ and $\eta\to0$ when the nonlinear term $- 1/2 \partial_x \left(\ti \alpha/(1 +\ti \alpha h) (\partial_x h)^2 \right)$ of the azimuthal curvature gradient is omitted. Yet, as already stated, $\ti \alpha=O(1)$ implies that $\beta$ diverges to infinity and therefore that second-order viscous dispersive terms cannot be a priori discarded and that the long-wave assumption is invalid. By contrast, Frenkel's equation (\ref{Serafim}) can be derived in the distinguished limit $\ti \alpha\to0$, $\eta\to 0$ and $\beta=\ti \alpha^2/\eta=O(1)$. 
Nevertheless, comparisons of the solutions of the CM equation with the experiments show good agreement~\cite[]{Ruy08,Dup09b}. Using the surface equation (\ref{Serafim}), Kalliadasis and Chang~\cite{Kal94}, and Chang and Demekhin~\cite{Cha99} analysed the mechanism of the formation of drops observed by Qu\'er\'e~\cite{Que90} when a fibre or wire is drawn out of a liquid bath. Qu\'er\'e observed suppression of the RP mode for sufficiently thin coating films. More specifically, below a certain critical thickness the film deposited on the wire develops small-amplitude interfacial waves with the flow preventing their growth into drops (such drops would always be observed when the wire is horizontal, as the suppression of the RP mode induced by the flow is absent in this case). On the other hand, beyond the critical film thickness growth of the interfacial waves into drops was observed. Kalliadasis and Chang~\cite{Kal94} found that the amplitude of the solitary-wave solutions to (\ref{Serafim}) diverges/`blows-up' for $\beta$ larger than a critical value $\beta_c=1.413$, which closely corresponds to the formation of drops in Qu\'er\'e's experiments. These authors also performed an analytical construction of the solitary waves for $\beta \rightarrow \beta_c$ using matched asymptotic expansions. They showed that drop formation results from the inability of the flow advection to saturate the instability growth through a nonlinear saturation mechanism. The advection and the growth of the instability can be compared through the definition of the advection time $\tau_a$ of an interfacial structure over its length and the definition of a typical time of growth of the structure $\tau_g$ as the inverse of the maximum growth rate. 
Based on the Frenkel equation (\ref{Serafim}), the stability analysis of the uniform film leads to the dispersion relation \begin{equation} \label{disp-Frenk} \omega = k+ \frac{{\rm i} k^2}{3} (\beta - k^2) \end{equation} which governs infinitesimal perturbations of wavenumber $k$ and angular frequency $\omega$ around the base Nusselt flow. Based on (\ref{disp-Frenk}) the ratio $\tau_a/\tau_g$ reads: \begin{equation} \label{tauKDB} \tau_a/\tau_g =\omega_i/\omega_r|_{k=\sqrt{\beta/2}} \propto \beta^{3/2} \end{equation} (see \cite{Ruy08} for details). Therefore, $\beta$ compares $\tau_a$ to $\tau_g$. For $\beta<\beta_c$, the growth of the instability is slower than the advection by the flow and is thus thwarted by it. The same mechanism is also in play in the saturation of the drops though it is then strongly nonlinear. For these reasons, we refer to $\beta$ as a {\sl saturation number\/}, a term that was first introduced in~\cite{Dup09a}. From the CM equation (\ref{KDB}), we get $\tau_a/\tau_g\propto (\beta^\star)^{3/2}$ where the composite parameter $\beta^\star$ is defined as \cite[]{Dup07,Ruy08}: \begin{equation} \label{def-bstar} \beta^\star= \beta c_k^{-2/3}(1+\ti \alpha)^{-8/3}\,. \end{equation} $c_k$ is the speed at which infinitesimal deformations of the free surface are kinematically transported by the flow, or {\sl kinematic wave speed\/}: \begin{equation} \label{ck} c_k = \frac{1}{1 + \ti \alpha} \left[ 1+ \frac{\ti \alpha \phi'(\ti \alpha)}{3 \phi(\ti \alpha)} \right] = \frac{8 (b-1) \left(2 \log (b) b^2-b^2+1\right)}{3 \left(4 \log (b) b^4-3 b^4+4 b^2-1\right)}\,, \end{equation} with $b = 1 + \ti \alpha$. Since $c_k(\ti \alpha=0) = 1$, $\lim_{\ti \alpha\to0}\beta^\star = \beta$. In this limit the Frenkel equation~(\ref{Serafim}), valid for very thin films ($\ti \alpha \ll 1$), applies, and $\beta^\star$ is thus a generalisation of the saturation number $\beta$ to film flows with thicknesses comparable to the fibre radius. 
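The scaling (\ref{tauKDB}) can be verified directly from (\ref{disp-Frenk}): the growth rate $\omega_i=k^2(\beta-k^2)/3$ is maximal at $k=\sqrt{\beta/2}$, where $\omega_i/\omega_r=\beta^{3/2}/(6\sqrt{2})$. A short numerical sketch:

```python
import math

def growth_rate(k, beta):
    """Temporal growth rate omega_i from the dispersion relation
    omega = k + i k^2 (beta - k^2)/3."""
    return k * k * (beta - k * k) / 3.0

def kmax_numeric(beta, n=200001):
    """Locate the wavenumber of maximum growth by a brute-force scan
    over the unstable band 0 < k < sqrt(beta)."""
    kc = math.sqrt(beta)  # cutoff wavenumber: omega_i(kc) = 0
    best_k, best_w = 0.0, 0.0
    for i in range(n):
        k = kc * i / (n - 1)
        w = growth_rate(k, beta)
        if w > best_w:
            best_k, best_w = k, w
    return best_k, best_w

def advection_to_growth_ratio(beta):
    """tau_a/tau_g = omega_i/omega_r evaluated at k = sqrt(beta/2);
    analytically this equals beta^(3/2)/(6 sqrt(2))."""
    k = math.sqrt(beta / 2.0)
    return growth_rate(k, beta) / k   # omega_r = k
```

Quadrupling $\beta$ multiplies the ratio by exactly $4^{3/2}=8$, which is the $\beta^{3/2}$ scaling behind the saturation-number interpretation.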
\section{Solitary waves and dynamical systems theory} \label{S-dyn} Experimental studies \cite{Kli01, Cra06, Dup07,Dup09a} reported the formation of axisymmetric traveling waves (TWs) propagating without deformations and at constant speed over long distances. Solitary waves, \ie TWs separated by constant-thickness layers of fluid, or substrates, much longer than the characteristic length of the waves, were commonly observed sufficiently far downstream. Theoretically, solitary waves can be viewed as periodic TWs with an infinitely long wavelength. The aim of this section is to investigate infinite-domain solitary waves using elements from dynamical systems theory. In a frame of reference moving with the speed $c$ of the waves, $\xi = x -c t$, the flow is stationary and the set of partial differential equations (\ref{model2shk}) can be converted into a set of ordinary differential ones. The mass balance (\ref{q}) can be integrated once, \begin{equation} \label{q0} q -c\left(h + \frac{\ti \alpha}{2} h^2 \right) \equiv q_0\,, \end{equation} where $q_0$ is the rate at which the fluid flows under the waves. The averaged momentum balance (\ref{mom-shk}) next reads, \begin{eqnarray} \nonumber \delta\left[c\, q' - {F} (\ti \alpha h)\,\frac{q}{h} q' + G(\ti \alpha h) \,\frac{q^2}{h^2} h' \right] + \frac{I(\ti \alpha h)}{\phi(\ti \alpha)} \left[ -\frac{3\phi(\ti \alpha)}{\phi(\ti \alpha h)} \frac{q}{h^2} \right. &&\\ \left. \nonumber + h \left\{1 + h''' + \frac{\beta}{(1 +\ti \alpha h)^2} h' + \frac{\ti \alpha}{1 +\ti \alpha h} h' \left[ h'' - \frac{1}{2}\frac{\ti \alpha}{1 +\ti \alpha h} (h')^2 \right] \right\} \right] &&\\ + \eta\left[ J(\ti \alpha h) \frac{q}{h^2} (h')^2 - K(\ti \alpha h) \frac{q' h'}{h} - L(\ti \alpha h) \frac{q}{h} h'' + M(\ti \alpha h) q'' \right] &=&0 \,, \label{MB_ssdyn} \end{eqnarray} where the primes denote differentiation with respect to the moving coordinate $\xi$. 
Using (\ref{q0}), (\ref{MB_ssdyn}) can be recast into $h''' = f(h,h',h'')\,,$ where $f$ is a function of the thickness $h$, its first and second derivatives and the parameters $\delta$, $\ti \alpha$, $\eta$ and $c$. We thus end up with a dynamical system of dimension three, \begin{equation} \label{ssdyn} \frac{d}{d\xi} {\bf U} = (U_2, U_3, f(U_1,U_2,U_3))^t, \end{equation} where ${\bf U} = (U_1,U_2, U_3)^t \equiv (h,h',h'')^t$. Solitary waves correspond to homoclinic orbits in the phase space connecting a fixed point to itself. Here we restrict our attention to single-loop homoclinic orbits corresponding to single-hump solitary waves in real space. The fixed points of the dynamical system (\ref{ssdyn}) satisfy $h'=h''=0$ and, \begin{equation} \label{fp} \frac{h^3}{3} \frac{\phi(\ti \alpha h)}{\phi(\ti \alpha)} -c\left(h + \frac{\ti \alpha}{2} h^2 \right) = q_0\,. \end{equation} Requiring that $h=1$ is a solution to (\ref{fp}) gives: \begin{equation} \label{q0_h1} q_0 = \frac{1}{3} - c \left(1+ \frac{\ti \alpha}{2} \right)\,. \end{equation} In addition to the solution $h=1$, there is one more real positive solution to (\ref{fp}) with (\ref{q0_h1}) and the dynamical system (\ref{ssdyn}) admits two fixed points ${\bf U}_{\rm I} = (1,0,0)^t$ and ${\bf U}_{\rm II} = (h_{\rm II},0,0)^t$ whose positions are displayed in Fig.~\ref{fig-fp} as functions of the wave speed $c$ and aspect ratio $\ti \alpha$. The two fixed points coincide when the wave speed $c$ is equal to the speed of linear waves of infinitesimal amplitude and infinite length that are neutrally stable. This situation corresponds to the definition of the linear kinematic waves in the zero-wavenumber limit, whose speed is given in (\ref{ck}). In the limit $\ti \alpha =0$, we recover the expression $h_{\rm II} = -1/2 + \sqrt{3(c-1/4)}$ corresponding to a film flowing down an inclined plane \cite[]{Pum83,Ruy05b}. 
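The planar-limit expression for $h_{\rm II}$ quoted above is easy to check: with $\ti \alpha=0$ and $\phi\equiv1$, (\ref{fp}) with (\ref{q0_h1}) reduces to $h^3-3ch+3c-1=(h-1)(h^2+h+1-3c)=0$, whose positive root other than $h=1$ is indeed $-1/2+\sqrt{3(c-1/4)}$. A minimal sketch:

```python
import math

def h_II_planar(c):
    """Second fixed point in the planar limit (alpha = 0): positive
    root of h^2 + h + 1 - 3c = 0."""
    return -0.5 + math.sqrt(3.0 * (c - 0.25))

def residual(h, c):
    """Fixed-point condition h^3/3 - c h = q0 with q0 = 1/3 - c
    (planar limit of (\\ref{fp}) with (\\ref{q0_h1}))."""
    return h ** 3 / 3.0 - c * h - (1.0 / 3.0 - c)
```

Both $h=1$ and $h_{\rm II}$ annihilate the residual for any admissible speed $c$, confirming the factorisation.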
We note that extending this expression to the axisymmetric geometry by replacing $c$ with $c/c_k$, gives a reasonable approximation $h_{\rm II} \approx -1/2 + \sqrt{3[(c/c_k)-1/4]}$ for the position of the second fixed point even for $\ti \alpha = O(1)$ (see Fig.~\ref{fig-fp}). \begin{figure} \begin{center} \begin{psfrags} \psfrag{h}{$h$} \psfrag{hI}{$h_{\rm I}$} \psfrag{hII}{$h_{\rm II}$} \psfrag{c/ck}{$c/c_k$} \includegraphics[width=0.5\textwidth]{Fig/fp.eps} \end{psfrags} \end{center} \caption{Film thicknesses $h_{\rm I}=1$ and $h_{\rm II}$ corresponding to the location of the fixed points as function of the ratio of the normalised wave speed $c$ to the kinematic wave speed $c_k$ defined in (\ref{ck}). Solid, dashed and dotted lines refer to $\ti \alpha=0$ (planar case), $\ti \alpha=0.5$ and $\ti \alpha=1$, respectively.} \label{fig-fp} \end{figure} Finally, we note that it is sufficient to consider homoclinic orbits around only one of the two fixed points because of the presence of a continuous family of Nusselt flat-film solutions parameterized by the reduced Reynolds number $\delta$ (or $h_{\rm N} = \bar h_{\rm N}/l_\nu$) when $G\!o$ and $\Gamma$ are held constant. Indeed, homoclinic orbits connecting ${\bf U}_{\rm II}$ correspond to phase-space trajectories connecting ${\bf U}_{\rm I}$ through the transformation $h_{\rm N} \to h_{\rm N} h_{\rm II}$. The shape of the tail and front of a solitary wave can be determined by considering how the corresponding homoclinic orbit in the phase space approaches and leaves the fixed point it connects to. Let us consider the linear stability of the fixed point ${\bf U}_{\rm I}$. 
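Before turning to the linear stability of ${\bf U}_{\rm I}$, the accuracy of this approximation can be probed numerically. The sketch below solves (\ref{fp}) with (\ref{q0_h1}) by bisection and compares the root above $h=1$ to $-1/2+\sqrt{3[(c/c_k)-1/4]}$; the closed form used for $\phi$ is an assumption (the paper defines $\phi$ in appendix~\ref{S-Coeffs}), chosen to be consistent with (\ref{ck}) and $\phi(0)=1$:

```python
import math

def phi(alpha):
    # geometric factor of the Nusselt flow rate (assumed closed form,
    # consistent with the kinematic wave speed and phi(0) = 1)
    b = 1.0 + alpha
    return 3.0 * (4.0 * math.log(b) * b ** 4 - 3.0 * b ** 4
                  + 4.0 * b ** 2 - 1.0) / (16.0 * alpha ** 3)

def c_k(alpha):
    # kinematic wave speed, with b = 1 + alpha
    b = 1.0 + alpha
    num = 8.0 * (b - 1.0) * (2.0 * math.log(b) * b ** 2 - b ** 2 + 1.0)
    den = 3.0 * (4.0 * math.log(b) * b ** 4 - 3.0 * b ** 4 + 4.0 * b ** 2 - 1.0)
    return num / den

def h_II(alpha, c):
    # exact second fixed point: root > 1 of the fixed-point condition,
    # found by bisection (assumes c > c_k so that this root exists)
    q0 = 1.0 / 3.0 - c * (1.0 + alpha / 2.0)
    def g(h):
        return (h ** 3 / 3.0 * phi(alpha * h) / phi(alpha)
                - c * (h + alpha / 2.0 * h ** 2) - q0)
    lo, hi = 1.0 + 1e-6, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def h_II_approx(alpha, c):
    # planar formula with c rescaled by the kinematic wave speed
    return -0.5 + math.sqrt(3.0 * (c / c_k(alpha) - 0.25))
```

For $\ti \alpha=0.5$ and $c=1.5\,c_k$ the approximation sits within a few percent of the exact root, consistent with the agreement visible in Fig.~\ref{fig-fp}.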
The dispersion relation governing infinitesimal perturbations $\sim \exp(\lambda \xi)$ is \begin{equation} \label{dispU1} \lambda^3 + \lambda^2 \eta D_\eta + \lambda \delta D_\delta - 3 (1+\ti \alpha) \left( c -c_k \right) =0\,, \end{equation} where \begin{equation} D_\eta = \frac{\phi}{I} \left[- \frac{L}{3} + c (1+\ti \alpha) M \right] \quad \hbox{and}\quad D_\delta = \left\{ \frac{\phi}{I} \left[(1 + \ti \alpha) \left( c^2 - \frac{{F}}{3} c \right) + \frac{G}{9} \right] +\frac{\beta}{\delta(1+\ti \alpha)^2} \right\}\,. \end{equation} Equation~(\ref{dispU1}) can be reduced to the canonical form $P(y)=y^3 + py + q=0$ by the change of variable $\lambda=y - \eta D_\eta/3$, where \begin{equation} p = \delta D_\delta - \frac{\eta^2 D_\eta^2}{3} \quad \hbox{and}\quad q = -3(1+\ti \alpha)(c-c_k) - \frac{\delta\eta}{3} D_\delta D_\eta + \frac{2\eta^3}{27} D_\eta^3\,. \end{equation} Using the Cardan formulae, (\ref{dispU1}) admits a real eigenvalue and a complex conjugate pair when the discriminant $\Delta=4p^3+27q^2$ is $>0$. When $\Delta <0$, (\ref{dispU1}) admits three real eigenvalues. Therefore ${\bf U}_{\rm I}$ changes from a saddle to a saddle-focus at $\Delta=0$. When ${\bf U}_{\rm I}$ is a saddle, the homoclinic orbit departs and returns to the fixed point monotonically along the two eigenspaces corresponding to the eigenvalues of smallest absolute value, and the corresponding tail and front of the solitary wave are monotonic. Conversely, when ${\bf U}_{\rm I}$ is a saddle-focus, the homoclinic orbit leaves monotonically along the eigenspace spanned by the eigenvector corresponding to the real eigenvalue and returns to the fixed point by spiralling on the eigenspace spanned by the eigenvectors corresponding to the complex pair. This spiral corresponds to capillary ripples at the front of the solitary wave while the tail remains monotonic. 
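The classification can be packaged into a small routine acting on the coefficients of (\ref{dispU1}). As an illustrative limiting case (the numbers below are an assumption for illustration, not the full WRIBL coefficients): in the inertialess Frenkel limit ($\eta\to0$, $\delta\to0$, $\ti \alpha\to0$) the cubic reduces to $\lambda^3+\beta\lambda-3(c-1)=0$, so $p=\beta>0$ and $\Delta=4\beta^3+243(c-1)^2>0$, i.e. the fixed point is always a saddle-focus, consistent with the capillary ripples at the wave fronts described above.

```python
def classify_fixed_point(a2, a1, a0):
    """Nature of the fixed point from the eigenvalue cubic
    lambda^3 + a2*lambda^2 + a1*lambda + a0 = 0, where, in the notation
    of the text, a2 = eta*D_eta, a1 = delta*D_delta and
    a0 = -3(1+alpha)(c - c_k).  The shift lambda = y - a2/3 gives the
    canonical form y^3 + p*y + q = 0; the sign of the discriminant
    Delta = 4p^3 + 27q^2 separates a saddle (three real eigenvalues,
    Delta < 0) from a saddle-focus (one real eigenvalue plus a complex
    conjugate pair, Delta > 0)."""
    p = a1 - a2 ** 2 / 3.0
    q = a0 - a1 * a2 / 3.0 + 2.0 * a2 ** 3 / 27.0
    disc = 4.0 * p ** 3 + 27.0 * q ** 2
    return ("saddle-focus" if disc > 0.0 else "saddle"), disc

# Frenkel limit: a2 = 0, a1 = beta, a0 = -3(c - 1), here beta = 1.413, c = 1.2
kind, disc = classify_fixed_point(0.0, 1.413, -3.0 * (1.2 - 1.0))
print(kind)  # saddle-focus
```

A saddle requires $p<0$, i.e. a sufficiently negative $\delta D_\delta$ relative to the viscous-dispersion contribution, which the Frenkel limit cannot produce.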
Figure~\ref{fig-sol} displays the speed and maximum amplitude of the solitary waves corresponding to homoclinic orbits around ${\bf U}_{\rm I}=(1,0,0)^t$. Parameters correspond to the fluid properties of silicon oil v50 ($\Gamma=5.48$, see table~\ref{table2}) and to different fibre radii, $R=1.5$~mm, $0.35$~mm and $0.25$~mm ($G\!o=1$, $0.24$ and $0.17$). The solutions have been computed by continuation using {\sc Auto97} together with its subroutine {\sc HomCont} \cite[]{auto97}. In all cases $\Delta>0$, i.e. ${\bf U}_{\rm I}$ is a saddle-focus and the solitary waves are characterised by monotonic tails and oscillatory fronts. Two different behaviours can be observed for small and large thicknesses, or for $\delta\ll1$ and $\delta \gg 1$.\footnote{Even though, strictly speaking, $\delta$ is not allowed to tend to infinity, the question of the behaviour of different quantities of interest for large $\delta$ is a valid one within the context of the WRIBL model as model equations.} \begin{figure} \begin{center} \begin{psfrags} \psfrag{D}{$\delta$} \psfrag{c/ck}{$c/c_k$} \psfrag{h}{$h_{\rm max}$} \subfigure[]{\label{sol-b} \includegraphics[width=0.48\textwidth]{Fig/KDB_KaRb.eps}}\hfil \subfigure[]{\label{sol-d} \includegraphics[width=0.48\textwidth]{Fig/KDB_KaRb_hN.eps}} \end{psfrags} \end{center} \caption{Speed $c$ (left) and maximum height $h_{\rm max}$ (right) of the solitary waves as a function of the reduced Reynolds number $\delta$ for different fibre radii: $R=1.5$~mm (Curves~1), $0.35$~mm (2) and $0.25$~mm (3). Solutions to (\ref{model2shk}) (solid lines) are compared to the solutions of the CM~equation (\ref{KDB}) and of the model (\ref{model2shk}) when inertial terms are set to zero (dashed and dashed-dotted lines). 
The fluid properties correspond to Rhodorsil silicon oil v50 (see table~\ref{table2}).} \label{fig-sol} \end{figure} For small thicknesses ($\delta \ll 1$), the maximum height $h_{\rm max}$ and speed of the solitary waves exhibit local maxima which strongly increase for thin fibres, corresponding to a stronger RP~instability. In Figs.~\ref{sol-b} and \ref{sol-d}, the characteristics of the solitary wave solutions of the WRIBL model (\ref{model2shk}) and of the CM~equation (\ref{KDB}) are compared, showing reasonable agreement. Since the CM~equation is parameterized only by the aspect ratio $\ti \alpha$ and $\beta$, the local maxima result from the balance between curvature effects and advection by the flow. This is reminiscent of the sharp increase, or `blow-up', of speed and amplitude observed by Kalliadasis and Chang~\cite{Kal94} at $\beta=\beta_c \approx 1.413$ in their study of the solitary wave solutions of the Frenkel equation (\ref{Serafim}). One might expect that the sharp increase of the local maxima of the speed and amplitude observed by lowering the fibre radius is related to the nonlinear saturation mechanism of the instability by the advection discussed earlier, and should therefore be correlated with the saturation number $\beta^\star$. The validity of this hypothesis is checked in Fig.~\ref{fig-beta-star} where the maximum height of the solitary waves is depicted as a function of $\beta^\star$. At a given value of the Goucher number $G\!o$, $\beta^\star$ reaches a maximum for $\ti \alpha \approx 0.44$ and tends to zero for $\ti \alpha \to 0$ and $\ti \alpha \to \infty$. For a fixed radius $R$, $\delta$ increases monotonically with $\ti \alpha$, which explains why $\beta^\star$ tends to zero when $\delta \to 0$ and $\delta \to \infty$ and justifies the shapes of the curves. The local maximum of $h_{\max}$ occurs at a value of $\beta^\star$ close to the maximum reached by this parameter as $\delta$ is varied.
The increase of the local maximum of $h_{\max}$ is related to the increase of the maximal value of $\beta^\star$ achieved as the ratio $G\!o$ is lowered. \begin{figure} \begin{center} \begin{psfrags} \psfrag{hM}{$h_{\rm max}$} \psfrag{betaM}{$\beta^\star$} \includegraphics[width=0.6\textwidth]{Fig/KDB_KaRb_betap.eps} \end{psfrags} \end{center} \caption{Maximum height of solitary waves versus saturation parameter $\beta^\star$. See also caption of Fig.~\ref{fig-sol}.} \label{fig-beta-star} \end{figure} We have also computed solutions of model (\ref{model2shk}) when the inertial terms are cancelled ($\delta\to0$). The results are shown as dashed-dotted lines in Figs.~\ref{sol-b} and \ref{sol-d}. As they asymptote to the solid lines corresponding to (\ref{model2shk}) in the limit $\delta \ll 1$, we can conclude that the difference in speed and amplitude between solutions to (\ref{model2shk}) and to the CM~equation results from the viscous dispersion effects, which contribute to the increase of the speed and amplitude of the solitary waves. Figure~\ref{fig-sol} reveals a sharp increase of the maximum height and speed of the waves above $\delta\approx 2$ corresponding to the predominance of the K mode (as the planar case is approached, i.e. when the fibre radius increases, this sharp increase corresponds precisely to the transition between the drag-gravity and drag-inertia regimes). The characteristics of the waves in this region will be considered in detail later on in \S\,\ref{S-DI}. The separation between the local maxima for the speed and amplitude corresponding to the RP-dominated waves ($\delta\ll1$) and the K-dominated waves at large $\delta$ increases for even more viscous fluids like the castor oil used by Kliakhandler \cite{Kli01} (cf.
Table~\ref{table2}) as can be observed from Fig.~\ref{solv440-b}, where the solutions of the model (\ref{model2shk}) with and without inertia, and of the CM~equation, are compared for fibre radii corresponding to the same values $G\!o=1$, $0.24$ and $0.17$ as in figure~\ref{fig-sol}. This increase can be understood by considering the definition of $\beta^\star$, which is a function of the aspect ratio $\ti \alpha$ and $G\!o$, and the definition of $\delta=(\ti \alpha G\!o)^{11/3} \phi(\ti \alpha) \Gamma^{3/2}$. Since the Kapitza number $\Gamma$ decreases with the kinematic viscosity, the maximum of $\beta^\star$ for a given value of the Goucher number $G\!o$ corresponds to smaller and smaller values of the reduced Reynolds number $\delta$ as the viscosity of the fluid is increased. In contrast with the results for silicon oil v50 (cf. Fig.~\ref{fig-sol}), a transition from a saddle-focus to a saddle fixed point has been observed, corresponding to the disappearance of the capillary ripples at the front of the solitary wave, at values of $\delta$ above $0.1$. The precise location of this transition varies with the fibre radius $R$ and is indicated by squares in figure~\ref{solv440-a}. \begin{figure} \begin{center} \begin{psfrags} \psfrag{D}{$\delta$} \psfrag{c/ck}{$c/c_k$} \psfrag{h}{$h_{\rm max}$} \subfigure[]{\label{solv440-a}\includegraphics[width=0.48\textwidth]{Fig/KDB_KaRbv440.eps}}\hfil \subfigure[]{\label{solv440-b}\includegraphics[width=0.48\textwidth]{Fig/KDB_KaRbv440_hN.eps}} \end{psfrags} \end{center} \caption{Speed $c$ (left) and maximum height $h_{\max}$ (right) as function of the reduced Reynolds number $\delta$ for different fibre radii: $R=1.83$~mm (Curves~1), $0.43$~mm (2) and $0.31$~mm (3). Solutions to (\ref{model2shk}) (solid lines) are compared to the solutions of the CM~equation (\ref{KDB}) and of the model (\ref{model2shk}) when inertial terms are set to zero (dashed and dashed-dotted lines).
Homoclinic orbits connecting a saddle-focus (saddle) fixed point to itself are shown in thick (thin) solid lines. Squares indicate the loci of saddle to saddle-focus transitions. The fluid properties correspond to castor oil ($\Gamma=0.45$).} \label{fig-solv440} \end{figure} Conversely, the separation of the solitary wave characteristics, such as speed and maximum height, as a function of $\delta$ into two distinct regions, at low and high reduced Reynolds numbers, vanishes at low viscosities. Indeed at low viscosities, or equivalently, high Kapitza numbers, the RP~mode occurs at relatively high values of $\delta$ where the K~mode already takes over. In Fig.~\ref{sol2-b} we have redrawn Fig.~\ref{sol-d} for water ($\Gamma=3376$, see Table~\ref{table2}), which is fifty times less viscous than silicon oil v50. The figure compares the maximum height of the solitary waves, for the same Goucher numbers $G\!o$ as chosen for the computations shown in Fig.~\ref{fig-sol}, to the amplitude of the solutions of the CM~equation (\ref{KDB}). At small values of $\delta$, the amplitude of the waves is significantly larger than the amplitude of the solutions of the CM~equation, which signals the influence of the K~mode on the RP~instability. Notice that this effect cannot arise from the second-order viscous terms, as the cancellation of the inertial terms leads to results comparable to the solutions of the CM~equation. On the other hand, the influence of the RP instability on the K mode is illustrated in Fig.~\ref{sol2-c} where $\beta^\star$ and $h_{\rm max}$ are plotted versus $\delta$. At small fibre radii $R=0.64$~mm and $R=0.46$~mm ($G\!o=0.24$ and $0.17$), the characteristic sharp increase of the solitary-wave amplitude is displaced to values of $\delta$ smaller than $\delta \approx 2$, corresponding to a generalised saturation number $\beta^\star \gtrapprox 1$.
\begin{figure} \begin{center} \begin{psfrags} \psfrag{D}{$\delta$} \psfrag{c/ck}{$c/c_k$} \psfrag{h}{$h_{\rm max}$} \psfrag{hmax, bstar}{$h_{\rm max}$, $\beta^\star$} \subfigure[]{\label{sol2-b}\includegraphics[width=0.48\textwidth]{Fig/KDB_KaRbv1_hN.eps}}\hfil \subfigure[]{\label{sol2-c}\includegraphics[width=0.48\textwidth]{Fig/KDB_KaRbv1_hNbis.eps}} \end{psfrags} \end{center} \caption{(a) Maximum height $h_{\rm max}$ of the solitary waves as function of $\delta$. Solutions of (\ref{model2shk}) (solid lines) are compared to the solutions of the CM~equation (\ref{KDB}) (dashed lines) and solutions of (\ref{model2shk}) where the inertia terms are set to zero (dashed-dotted lines). (b) Maximum height $h_{\rm max}$ (solid line) and parameter $\beta^\star$ (dashed lines). Parameters correspond to water ($\Gamma=3376$) and different fibre radii: $R=2.75$~mm (Curves~1), $0.64$~mm (2) and $0.46$~mm (3).} \label{fig-sol2} \end{figure} \section{Limiting cases} \label{S-asymp} In this section, we focus on the different regions of the wave-characteristics' diagrams displayed in Figs.~\ref{fig-sol}, \ref{fig-solv440} and \ref{fig-sol2}, and consider all possible asymptotic limits. \subsection{Small Goucher number limit: the drop-like regime} \label{S-drop} Let us first consider the local maxima of the amplitude $h_{\rm max}$ and speed $c$ with respect to $\delta$ for given values of the Goucher number $G\!o$, or equivalently $\ti \alpha$, observed for viscous fluids like silicon oils in the inertia-less limit ($\delta\ll1$). Table~\ref{table1} depicts these quantities as obtained from the CM equation. As $G\!o$ tends to zero, the RP instability mechanism becomes more and more efficient and we observe a sharp increase of the local maxima of the amplitude $h_{\rm max}$ and of the maxima of the speed of the waves.
For such waves, the amplitude can be several orders of magnitude larger than the thickness of the Nusselt flat film on which they stand, and variations of the Nusselt thickness should only slightly modify the wave characteristics (except perhaps when the film becomes so thin that the corner dissipation at the wave front dominates over the dissipation in the bulk). \begin{table} \begin{center} \begin{tabular}{c|cccccc} \hline $G\!o$ & $\ti \alpha$ & $h_{\rm max}$ & $c$ & $\bar h_{\rm max}/{\bar R}$ & $ c \,\nu/(g {\bar R}^2)$ & $c_{\rm drops} \,\nu/(g {\bar R}^2)$ \\ \hline 0.236 & 0.23 & 3.3 & 3.1 & 0.77 & 0.22 & 0.25\\ 0.168 & 0.15 & 6.9 & 8.8 & 1.05 & 0.25 & 0.30\\ 0.110 & 0.075 & 18 & 38.5 & 1.36 & 0.24 & 0.29\\ 0.055 & 0.022 & 81.2 & 383 & 1.80 & 0.19 & 0.22\\ 0.044 & 0.014 & 129 & 773 & 1.85 & 0.16 & 0.18\\ \hline \end{tabular} \end{center} \caption{Local maxima of the speed $c$ and amplitude $h_{\rm max}$ with respect to $\delta$ for given values of the Goucher number $G\!o$, or equivalently $\ti \alpha$, obtained from the CM~equation (\ref{KDB}). $c_{\rm drops}$ refers to the speed of quasi-static drops sliding down coated fibres (see text and Appendix~\ref{S-Static-Drops}). \label{table1} } \end{table} Since $\bar R\ll l_c$, azimuthal surface tension effects dominate over gravity and the typical length scale and height of a wave should correspond to the radius $\bar R$ of the fibre, as already pointed out in \S\,\ref{S-WRIBL}. The wave speed should then be determined by the balance of viscosity and gravity at the scale $\bar R$, which gives a typical velocity of $g {\bar R}^2/\nu$ for viscous-gravitational drainage. Justification of the neglect of the inertial terms demands that the Galilei number $G\!a=G\!o^3\Gamma^{3/2}$ is small, which is satisfied for all tested fluids except for water ($\Gamma=3376$). Our computations of the solutions to the CM~equation (\ref{KDB}) confirm these scaling arguments as shown in table~\ref{table1}.
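The inertia-neglect criterion above is easy to evaluate. A minimal numerical sketch, using the Kapitza numbers quoted in this section for the four working fluids and, as an illustration, the Goucher number $G\!o=0.236$ of table~\ref{table1}:

```python
def galilei(Go, Gamma):
    """Galilei number Ga = Go^3 * Gamma^(3/2); inertia is negligible when Ga << 1."""
    return Go**3 * Gamma**1.5

# Kapitza numbers quoted in the text for the four fluids
kapitza = {"water": 3376.0, "silicon oil v50": 5.48,
           "castor oil": 0.45, "silicon oil v1000": 0.10}
Ga = {fluid: galilei(0.236, Gamma) for fluid, Gamma in kapitza.items()}
# Ga is O(10^3) for water but well below unity for the three viscous oils.
```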
In Fig.~\ref{fig-prof-tab2}, we contrast the wave profiles rescaled with respect to $\bar R$ corresponding to Table~\ref{table1}. Except for the front and back of the waves, which correspond to the departure from and return to the fixed point, the wave profiles are rather symmetric. This front-to-back symmetry shows that gravity does not affect the wave profile, as expected, since the typical size of the wave $\bar R$ is much smaller than the capillary length $l_c$. Therefore, solitary waves in this regime resemble isolated drops sliding under the action of gravity on a wettable fibre, which is precisely why we refer to this regime as the drop-like regime. [When $\bar R$ is larger and approaches $l_c$, on the other hand, the drops will feel the effect of gravity as they grow and they will eventually resemble falling pendant drops.] This regime corresponds to the observation by Qu\'er\'e~\cite{Que90,Que99}, in the coating of wires or thin fibres drawn out of a bath of viscous liquids, that the thin annular film deposited on the wires/fibres breaks up into drops. We have checked this analogy by computing the shape of static drops with zero contact angles sitting on a fibre coated with a substrate film of the same liquid (details of the calculation are given in Appendix~\ref{S-Static-Drops}). The agreement of the wave shapes with the symmetrical static-drop shapes is remarkable even in the case of nearly spherical drops such as those obtained at $G\!o=0.044$, where the long-wave assumption is no longer valid. We can therefore conclude that the CM equation (\ref{KDB}), and by extension the WRIBL model (\ref{model2shk}), are accurate in the drop-like regime where surface tension is dominant, even when the long-wave approximation does not hold.
Besides, the remarkable agreement between the solutions to the CM~equation (\ref{KDB}) and the WRIBL model (\ref{model2shk}) already noted in Fig.~\ref{fig-sol}, shows that the second-order streamwise viscous terms do not affect the amplitude and speed of the drops. \begin{figure} \begin{center} \begin{psfrags} \psfrag{x/R}{$\xi/R$} \psfrag{r/R}{$r/R$} \psfrag{0.044}{$G\!o=0.044$} \psfrag{0.110}{$0.110$} \psfrag{0.168}{$0.168$} \psfrag{0.236}{$0.236$} \includegraphics[width=\textwidth]{Fig/prof-tab2.eps} \end{psfrags} \end{center} \caption{Wave profiles corresponding to the solutions to the CM equation (solid lines) and static drop shape (thin dashed lines). Labels correspond to the Goucher number. Values of the other parameters are listed in table~\ref{table1}. \label{fig-prof-tab2} } \end{figure} Following \cite{Kal94}, an analytical estimate of the amplitude and speed of the drop-like waves in the limit $G\!o\to0$ may be obtained via matched asymptotic expansions. The appropriate small parameter is the dimensionless speed of the drops. By balancing viscous and capillary forces at the back of the waves, one can easily extend to sliding drops the Landau-Levich-Derjaguin law obtained by Qu\'er\'e~\cite{Que99} in the case of fibres drawn out of a bath. The speed of sliding drops is thus governed by Eq. (\ref{Landau}) which compares favorably to the results from the CM equation in table~\ref{table1}. As a matter of fact, this agreement shows that the thickness of the residual film on which the drops slide is determined by the balance of surface tension and viscous dissipation in the meniscus region. To estimate the speed and amplitude of drop-like waves, one would have to take into account the gravity acceleration and higher-order corrections in the outer region, namely the viscous dissipation in the drop. 
This task is difficult because (a) as with the Frenkel equation, resolving fully the leading-order outer region requires matching up to third order~\cite[]{Kal94}, and quite likely this is the case here, and (b) a single `composite equation' for the whole domain, i.e. for both the drop and the residual films, as e.g. in the `drag-out' problem in coating theory~\cite[]{Wil82}, does not exist. \subsection{Large $\delta$ limit: the drag-inertia regime} \label{S-DI} To understand the change of behaviour of the solitary-wave characteristics for $\delta\gg1$, we look at the wave profiles and their projections on the plane ($h$, $h'$). Figure~\ref{fig-profhom} compares two single-loop homoclinic orbits for a small and a large value of $\delta$. For $\delta\ll 1$, solitary waves have a relatively symmetric shape. Except in the neighbourhood of the fixed point ${\bf U}_{\rm I}$, where the escape from ${\bf U}_{\rm I}$ is monotonic and the return to it is oscillatory, a symmetry between the front and back of the waves is observed, with a steep front and a steep back. On the contrary, at large values of $\delta$, the front and back of the solitary waves present radically different shapes, with a gently sloping back edge and a steep front edge. \begin{figure} \begin{center} \begin{psfrags} \psfrag{h}{$h$} \psfrag{hx}{$h'$} \psfrag{xi}{$\xi$} \subfigure[]{\label{fig-profhom_a} \includegraphics[width=0.45\textwidth]{Fig/prof_10j_14-15.eps}}\hfil \subfigure[]{\label{fig-profhom_b} \includegraphics[width=0.45\textwidth]{Fig/hhx_10j_14-15.eps}} \end{psfrags} \end{center} \caption{Profile (a) and projected trajectory (b) onto the plane ($h$, $h'$) of single-loop homoclinic solutions to (\ref{model2shk}) corresponding to single-hump solitary waves. Parameters correspond to Rhodorsil silicon oil v50 ($\Gamma=5.48$) and $R=0.25$~mm.
Solid and dashed lines refer to $\delta=5$ and $\delta=3\times10^{-5}$, respectively.} \label{fig-profhom} \end{figure} The observed difference between the front and back of the solitary waves in the large-$\delta$ case can be explained by examining the linearised dynamics around ${\bf U}_{\rm I}$. The dispersion relation governing infinitesimal perturbations varying as $\exp(\lambda \xi)$ is given in (\ref{dispU1}). At a given radius $R$ and capillary length $l_c$, a large thickness, $h_{\rm N} \gg 1$, corresponds to $\delta \sim h_{\rm N}^{11/3} \phi(\ti \alpha) \gg 1$, $\beta/\delta = (l_c /R)^2 /(3 R\!e) \ll 1$ and possibly large viscous dispersion since $\eta \sim h_{\rm N}^{4/3}$. Let $\lambda_1$ be the positive eigenvalue corresponding to the unstable manifold of ${\bf U}_{\rm I}$. The eigenvalues of the tangent subspace to the stable manifold satisfy $\Re(\lambda_3) \le \Re(\lambda_2) < 0$. When ${\bf U}_{\rm I}$ is a saddle-focus ($\Delta>0$), we further denote $\lambda_{2,3}$ by $- \Sigma \pm{\rm i} \Omega$ with $\Sigma >0$ and $\Omega > 0$. From (\ref{dispU1}) we obtain the estimate $\lambda_1 \sim \delta^{-1} \ll 1$. Since $\lambda_1 + \lambda_2 + \lambda_3= - \eta D_\eta$, we immediately get an estimate of the mean value, $(\lambda_2 + \lambda_3)/2 \sim -\eta$, so that $\Sigma \sim \eta$ when $\Delta>0$. As a consequence, at the back of a solitary wave, the monotonic escape from the fixed point is slow, whereas at the front, the return to ${\bf U}_{\rm I}$ is fast. The above estimates have been confirmed by computations of $\lambda_1$ and $\Sigma$ for the solitary waves shown in Fig.~\ref{fig-sol} corresponding to silicon oil v50.
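These two estimates can also be illustrated directly on the cubic (\ref{dispU1}). In the sketch below, $a=\eta D_\eta$ and $r=3(1+\ti \alpha)(c-c_k)$ are held at illustrative $O(1)$ values while $b=\delta D_\delta$ grows with $\delta$; the positive eigenvalue then decays like $r/b\sim\delta^{-1}$ while $\Sigma$ stays close to $a/2$:

```python
import numpy as np

a, r = 2.0, 3.0                    # a = eta*D_eta, r = 3(1+alpha)(c-c_k): illustrative
for b in (10.0, 100.0, 1000.0):    # b = delta*D_delta grows with delta
    lam = np.roots([1.0, a, b, -r])
    lam1 = max(lam, key=lambda z: z.real).real      # unstable eigenvalue
    pair = [z for z in lam if abs(z.imag) > 1e-9]   # stable complex-conjugate pair
    Sigma = -pair[0].real
    # lam1*b stays close to r (i.e. lam1 ~ 1/delta), Sigma stays close to a/2
```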
Focusing now on the back of the solitary wave and defining a slow variable ${\tilde \xi} = \xi/\delta$, (\ref{MB_ssdyn}) reads, \begin{eqnarray} \nonumber && \left\{ (1 + \ti \alpha h) \left[ c^2 - c {F}(\ti \alpha h) \frac{q}{h} \right] + G(\ti \alpha h) \frac{q^2}{h^2} + \frac{\beta I(\ti \alpha h)}{\delta (1+\ti \alpha h)^2 \phi(\ti \alpha)} \right\} \frac{{\rm d} h }{{\rm d} {\tilde \xi}} \\ && \qquad \qquad \qquad \qquad = \frac{I(\ti \alpha h)}{\phi(\ti \alpha)} \left[ \frac{3 \phi(\ti \alpha)}{\phi(\ti \alpha h)} \frac{q}{h^2} - h \right] +O (\eta/\delta^2,\delta^{-3})\,, \label{MB_tixi} \end{eqnarray} where $q$ is given by (\ref{q0}) and (\ref{q0_h1}). Equation~(\ref{MB_tixi}) can be formally rewritten as, \begin{equation} \label{MB_tixi_formal} {\cal G}(h,c;\ti \alpha,\beta/\delta) \frac{{\rm d} h }{{\rm d} {\tilde \xi}} = - {\cal H}(h,c;\ti \alpha) + O(\eta/\delta^2,\delta^{-3})\,, \end{equation} expressing the balance at the back of the solitary waves of inertia (at the left-hand side), viscous drag and gravity acceleration (at the right-hand side). As a consequence, the roots of the right-hand side of (\ref{MB_tixi}) correspond to the fixed points of (\ref{ssdyn}). As the homoclinic orbit departs from ${\bf U}_{\rm I}$, $h$ increases up to $h_{\rm II}$, which is larger than unity since $c > c_k$ (cf. Fig.~\ref{fig-fp}). At this point, if ${\cal G}$ is nonzero, ${\rm d} h/{\rm d} {\tilde \xi}$ vanishes and $h$ goes through a maximum, so that the orbit cannot proceed beyond $h_{\rm II}$. The resulting orbit is thus a heteroclinic one linking ${\bf U}_{\rm I}$ and ${\bf U}_{\rm II}$, which contradicts the fact that we are considering a single-loop homoclinic orbit \cite[]{Ruy05b}. As a consequence, ${\cal G}$ must go to zero at $h=h_{\rm II}$, which signals a `critical film thickness' $h_c$ at which the inertial terms must go to zero.
In the limit $\delta\to\infty$, we thus obtain the condition \begin{equation} \label{def-crit} h_{\rm II}(c;\ti \alpha) = h_c(c;\ti \alpha,\beta/\delta) \end{equation} which gives a unique solution $c_{\rm crit}$ and then $h_{\rm crit}$ as functions of $\ti \alpha$ and $\beta/\delta$. As the limit speed $c_{\rm crit}$ is governed by the balance of inertia, wall friction and gravity acceleration, we may refer to this situation as the {\it drag-inertia} regime \cite[]{Oos99}. Having shown that ${\cal G}$ possesses at least one root, ${\cal G}$ can be factorised into ${\cal G} = (1 + \ti \alpha h) [c-c_{d+}(\ti \alpha h, q/h,\beta/\delta)][c-c_{d-}(\ti \alpha h, q/h,\beta/\delta)]$ where, \begin{subequations} \label{cdpmahq/h} \begin{eqnarray} c_{d\pm}(\ti \alpha h, q/h,\beta/\delta) &=& \frac{q}{h} \frac{{F}(\ti \alpha h)}{2} \pm \sqrt{\Delta_{\ti \alpha h,q/h,\beta/\delta}} \\\hbox{and} \quad \Delta_{\ti \alpha h, q/h,\beta/\delta} &=& \left(\frac{q}{h} \right)^2 \left[\frac{{F}(\ti \alpha h)^2}{4} - \frac{G(\ti \alpha h)}{1 + \ti \alpha h} \right] - \frac{\beta I(\ti \alpha h)}{\delta (1+\ti \alpha h)^3\phi(\ti \alpha)} \,, \end{eqnarray} \end{subequations} are the speeds of linear dynamic waves for a uniform layer of thickness $h$ and averaged speed $q/h$ \cite[]{Ruy08}. In other words, the position of the second fixed point must coincide with the critical layer $h_c$ at which the speed of the solitary wave $c$ is equal to the speed of one of the dynamic waves $c_{d\pm}$ -- in fact the fastest one, with speed $c_{d+}$ -- which separates the flow into a `subcritical region' ($c<c_{d+}$) and a `supercritical region' ($c>c_{d+}$). The condition $h_{\rm II} = h_{\rm crit}$ is similar to the `Thomas condition' derived in the mathematical treatment of periodic bores on steep slopes, or {\it roll-waves} \cite[]{Tho39,2Liu05}, made of the regular succession of laminar flows and hydraulic jumps.
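The factorisation of ${\cal G}$ can be verified numerically. In the sketch below the model coefficients $F$, $G$, $I$ and $\phi$, evaluated at the local thickness, are replaced by illustrative constants (they are not computed from the model here):

```python
import math

def calG(c, h, q, ah, F, G, I, phi, beta_over_delta):
    """Inertia prefactor calG(h, c): a quadratic polynomial in the wave speed c."""
    one = 1.0 + ah                 # ah stands for alpha*h
    return (one * (c**2 - c * F * q / h) + G * (q / h)**2
            + beta_over_delta * I / (one**2 * phi))

def c_d(h, q, ah, F, G, I, phi, beta_over_delta):
    """Linear dynamic-wave speeds c_d+ and c_d-, i.e. the two roots of calG in c."""
    one = 1.0 + ah
    disc = (q / h)**2 * (F**2 / 4.0 - G / one) - beta_over_delta * I / (one**3 * phi)
    root = math.sqrt(disc)
    return q / h * F / 2.0 + root, q / h * F / 2.0 - root
```

With, e.g., $h=1$, $q=1/3$, $\ti \alpha h=1$ and the illustrative values $F=2.4$, $G=1.2$, $I=\phi=1$ and $\beta/\delta=0.1$, ${\cal G}$ vanishes at both roots, the faster of which plays the role of the critical dynamic-wave speed.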
For $\delta \gg 1$, the front of the solitary waves, where surface tension arrests the breaking of the waves, plays a role similar to that of a hydraulic jump connecting the subcritical and supercritical regions of a roll wave. For this reason, we may also refer to this regime as the {\it roll-wave regime\/}. Considering now large but finite values of $\delta$, condition (\ref{def-crit}) is no longer exactly satisfied and one has to go back to (\ref{MB_tixi_formal}). At the critical point $h=h_c$ we have ${\cal G}(h_c,c;\ti \alpha) = 0$ and a Taylor expansion close to criticality gives \begin{equation} \label{Taylor-G} h_c - h_{\rm crit} \approx - \left[\partial_c {\cal G}/\partial_h {\cal G}\right] ( h_{\rm crit}, c_{\rm crit}) \left(c - c_{\rm crit}\right) \end{equation} whenever $\partial_h {\cal G}( h_{\rm crit}, c_{\rm crit}) \ne 0$. Proceeding next to a Taylor expansion of ${\cal H}(h,c)$, (\ref{MB_tixi_formal}) yields \begin{equation} \label{Taylor-H} \left[ \partial_c {\cal H} -\frac{\partial_h {\cal H} \partial_c {\cal G}}{\partial_h {\cal G}} \right] (c -c_{\rm crit}) \approx K_{\eta} \frac{\eta}{\delta^{2}} + \frac{K_{\rm st}}{\delta^{3}}\,, \end{equation} where the constants $K_{\eta}$ and $K_{\rm st}$ are functions of $h_{\rm crit}$, $c_{\rm crit}$ and the derivatives $h_{\rm crit}'$, $h_{\rm crit}''$ and $h_{\rm crit}'''$ of the asymptotic solution at the critical thickness. Since for $h_{\rm N} \gg 1$, $\delta \sim h_{\rm N}^{11/3}$ and $\eta \sim h_{\rm N}^{4/3}$, the asymptotically dominant term on the right-hand side of (\ref{Taylor-H}) is the viscous one, $K_{\eta} \eta \delta^{-2}$. Consequently, the convergence of the speed of the waves to the asymptotic value $c_{\rm crit} (\ti \alpha)$ satisfies \begin{equation} \label{conv-rate} c - c_{\rm crit} (\ti \alpha,\beta/\delta) \propto \eta/\delta^2 \sim 1/R\!e^2 \end{equation} in agreement with our numerical computations.
Figure~\ref{fig-DI} compares the phase speed $c$ of the solitary waves to the asymptotic limit $c_{\rm crit}$ as function of $\delta$ for the four different fluids of increasing viscosity that we considered (fluid properties are gathered in Table~\ref{table2}). For weakly viscous fluids, the RP and K modes reinforce each other and the speed $c$ of the waves is much higher than the asymptote $c_{\rm crit}$ (see Fig.~\ref{DI-sol2-a}). \begin{figure} \begin{center} \begin{psfrags} \psfrag{D}{$\delta$} \psfrag{c/ck}{$c/c_k$} \subfigure[water ($\Gamma=3376$)]{\label{DI-sol2-a}\includegraphics[width=0.48\textwidth]{Fig/KaRbv1.eps}}\hfil \subfigure[silicon oil v50 ($\Gamma=5.48$)]{\label{DI-sol-a}\includegraphics[width=0.48\textwidth]{Fig/KaRb.eps}} \subfigure[castor oil ($\Gamma=0.45$)]{\label{DI_solv440-a}\includegraphics[width=0.48\textwidth]{Fig/KaRbv440.eps}}\hfil \subfigure[silicon oil v1000 ($\Gamma=0.10$)]{\label{DI-solv1000-a}\includegraphics[width=0.48\textwidth]{Fig/KaRbv1000.eps}} \end{psfrags} \end{center} \caption{Speed $c$ of the solitary waves as function of the reduced Reynolds number $\delta$ for three values of the Goucher number $G\!o=1.01$ (curves 1), $0.236$ (curves 2) and $0.168$ (curves 3) for fluids of decreasing Kapitza numbers. Solutions to (\ref{model2shk}) (solid lines) are compared to the asymptotic predictions $c_{\rm crit}$ (dashed lines). } \label{fig-DI} \end{figure} At larger viscosities (Figs.~\ref{DI-sol-a} and \ref{DI_solv440-a}) the K mode ceases to be affected by the RP instability and a rapid convergence to $c_{\rm crit}$ is observed as $\delta$ is increased. For very viscous fluids like silicon oil v1000 (one thousand times more viscous than water, cf.~Table~\ref{table2}), however, the curves start to separate again, signalling the arrest of the convergence to the asymptote $c_{\rm crit}$ by the axial viscous effects.
\subsection{Weakly nonlinear analysis} \label{S-weakNL} Let us now consider the regions of the speed and maximum height diagrams in Figs.~\ref{fig-sol}, \ref{fig-solv440} and \ref{fig-sol2} corresponding to the transition between the drop-like regime and the drag-inertia regime. They correspond to situations where neither the K nor the RP~instability mechanism is strong enough to promote large-amplitude waves, \ie when $\delta\lessapprox 1$ and $\beta^\star \lessapprox 1$. We can then proceed to a weakly nonlinear analysis to characterize the shape, speed and amplitude of the waves. Since $\delta\ll1$, we substitute $h = 1+H$ and $q= 1/3 + Q$ where $H= O(\varepsilon)$ and $Q=O(\varepsilon)$ are small deviations from the base state $(h,q)=(1,1/3)$ (\ie $\varepsilon \ll 1$). Taking the distinguished limit $\varepsilon\ll \epsilon^2$, we are led from the WRIBL model (\ref{model2shk}) to a single evolution equation for the deviation $H$: \begin{eqnarray} \partial_t H + c_k \partial_x H + \Phi(\ti \alpha) H \partial_x H + \frac{\beta}{3(1+\ti \alpha)^3} \partial_{xx} H + \frac{1}{3(1+\ti \alpha)}\partial_{xxxx} H \nonumber&&\\ +\delta \frac{\phi(\ti \alpha)}{3 I(\ti \alpha)} \left[\partial_{tt} H + \frac{{F}(\ti \alpha)}{3} \partial_{tx} H + \frac{G(\ti \alpha)}{9(1+\ti \alpha)} \partial_{xx} H \right] \nonumber&&\\ +\eta \frac{\phi(\ti \alpha)}{3 I(\ti \alpha)} \left[M(\ti \alpha) \partial_{xxt} H - \frac{L(\ti \alpha)}{3(1+\ti \alpha)} \partial_{xxx} H \right] &=& O(\varepsilon \epsilon^5,\varepsilon^2 \epsilon^2)\,, \label{H} \end{eqnarray} where \begin{equation} \Phi(\ti \alpha) = \frac{3(2+\ti \alpha)\phi + \ti \alpha[(6+5\ti \alpha)\phi' + \ti \alpha(1+\ti \alpha)\phi'']} {3(1+\ti \alpha)^2\phi} \end{equation} is a function of the aspect ratio $\ti \alpha$. It is noteworthy that a stability analysis of the base state $(h,q)=(1,1/3)$ based on (\ref{H}) leads to the same dispersion relation as for (\ref{model2shk}).
\subsubsection{Drag-gravity regime} We consider here the limit of a thin film compared to the fibre radius, $\ti \alpha\ll 1$, when the radius of the fibre is constant or equivalently $G\!o$ is constant. Since $\eta = (\bar h_{\rm N}/l_c)^{4/3} = \ti \alpha^ {4/3}G\!o^{4/3}$, viscous dispersion is negligible in this limit and (\ref{H}) reduces to \begin{equation} \label{H6} \partial_t H + \partial_x H + 2 H \partial_x H + \frac{\beta}{3} \partial_{xx} H +\frac{2}{5}\delta \left[\partial_{tt} H + \frac{17}{21} \partial_{tx} H +\frac{1}{7} \partial_{xx} H \right] + \frac{1}{3}\partial_{xxxx} H = 0\,. \end{equation} By making use of the first-order equivalence of the time- and space-derivatives of $H$, $\partial_t H \approx- \partial_x H$, (\ref{H6}) reduces to the Kuramoto-Sivashinsky (KS) equation: \begin{equation} \label{H6b} \partial_t H + \partial_x H + 2 H \partial_x H + \frac{1}{3}\left[\frac{2}{5}\delta + \beta \right] \partial_{xx} H + \frac{1}{3}\partial_{xxxx} H = 0\,. \end{equation} To look for the TW solutions to (\ref{H6b}) in their moving frame, $\xi = x- ct$, we rescale the velocity as $c= 1+ C$, and the amplitude as $H = \varepsilon A$ and stretch the moving coordinate as $\xi=B X$: \begin{equation} \label{H7} -3 C B^3 A + 3 \varepsilon B^3 A^2 + B^2 \Upsilon \frac{d}{dX} A + \frac{d^3}{dX^3} A = 0\,, \end{equation} where $\Upsilon = \frac{2}{5}\delta + \beta$ and the condition $\lim_{\xi \to \pm \infty} H =0$ has been enforced yielding a vanishing integration constant. Balancing each term in (\ref{H7}) gives $C B^3 \sim \varepsilon B^3 \sim B^2 \Upsilon\sim 1$ so that $B \sim \Upsilon^{-1/2}$, $\varepsilon \sim \Upsilon^{3/2}$ and $C \sim \Upsilon^{3/2}$. 
Writing $B = \Upsilon^{-1/2}$, $\varepsilon = \frac{2}{3} \Upsilon^{3/2}$ and $C = \mu \Upsilon^{3/2}/3$ then leads to \begin{equation} \label{H5} -\mu A + 2A^2 + \frac{d}{dX} A + \frac{d^3}{dX^3} A = 0\,, \end{equation} which is an ordinary differential equation governing the TW solutions to the KS equation propagating at speed $\mu$. Equation~(\ref{H5}) admits a one-hump solitary-wave solution for a particular value $\mu = \mu_0\approx 1.216$, with amplitude $A_{\rm max} \approx 0.784$. Consequently, in the limit $\ti \alpha \to 0$, the speed $c$ and the amplitude $h_{\rm max}$ of the one-hump solutions to (\ref{model2shk}) follow power laws of the form: \begin{equation} \label{power-laws-bis} c \approx 1 + 0.405\, {\Upsilon}^{3/2} \qquad h_{\rm max} \approx 1 + 0.523\,{\Upsilon}^{3/2}\,. \end{equation} Recall that the relations (\ref{power-laws-bis}) have been obtained under the assumption of small amplitude, $h_{\rm max} - 1 \ll 1$, hence $\Upsilon$ must be small, which implies that both inertia and azimuthal curvature effects must be small. Viscous drag and gravity acceleration are the dominant physical effects. Following \cite{Oos99}, we may refer to this regime as the {\it drag-gravity regime\/}. It is possible to interpret this regime as one characterized by the arrest of the instability growth by the flow advection. The dispersion relation corresponding to the KS~equation (\ref{H6b}) and governing infinitesimal perturbations around the base Nusselt flow of wavenumber $k$ and angular frequency $\omega$ is identical to (\ref{disp-Frenk}) when $\beta$ is substituted with $\Upsilon$.
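Both the most-amplified wavenumber of the KS~equation and the prefactors in (\ref{power-laws-bis}) can be checked with a few lines of computer algebra. A sketch (the growth rate $\omega_i=(\Upsilon k^2-k^4)/3$ follows from linearising (\ref{H6b}); $\mu_0\approx1.216$ and $A_{\rm max}\approx0.784$ are the values quoted above):

```python
import sympy as sp

Ups, k = sp.symbols('Upsilon k', positive=True)

# Growth rate of infinitesimal perturbations of the KS equation (H6b)
omega_i = (Ups * k**2 - k**4) / 3
k_m = sp.sqrt(Ups / 2)                                       # most-amplified wavenumber
assert sp.simplify(sp.diff(omega_i, k).subs(k, k_m)) == 0    # extremum of omega_i
assert sp.simplify(omega_i.subs(k, k_m) - Ups**2 / 12) == 0  # max growth ~ Upsilon^2

# Prefactors of the drag-gravity power laws: mu0/3 for the speed, (2/3)*A_max for h_max
mu0, A_max = 1.216, 0.784
assert abs(mu0 / 3 - 0.405) < 1e-3
assert abs(2 * A_max / 3 - 0.523) < 1e-3
```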
One can then follow the same line of reasoning in going from (\ref{disp-Frenk}) to (\ref{tauKDB}) and define the ratio of the typical time of advection of the structure over its length, $\tau_a= k^{-1}$, to the typical time of growth, $\tau_g=[\max(\omega_i)]^{-1}$, of the instability: \begin{equation} \label{tauDG} \tau_a/\tau_g =\omega_i/\omega_r|_{k=\sqrt{\Upsilon/2}} \propto \Upsilon^{3/2}\,. \end{equation} Therefore $\Upsilon \ll 1$ corresponds to $\tau_a \ll \tau_g$. The flow advection is much faster than the instability and the ratio $\tau_a/\tau_g$ controls the amplitude and speed of the observed structures as reflected by (\ref{power-laws-bis}). \subsubsection{Soliton-like regime} We consider in this section the limit of a thick film, $\ti \alpha\gg 1$, though in practice, if $\ti \alpha$ is large, the axisymmetry of the flow might be difficult to achieve. We thus have $\phi(\ti \alpha) \sim (3/4) \ti \alpha \log(\ti \alpha)$, $c_k \sim 4/3 \ti \alpha^{-1}$ and $\beta^\star \sim (3/4)^{2/3} \left(\ti \alpha G\!o\right)^{-4/3}$, $\delta \sim \frac{3}{4}\ti \alpha\log(\ti \alpha) \Gamma^{3/2} \left(\ti \alpha G\!o\right)^{11/3}$ while in all cases $\eta =(\ti \alpha G\!o)^{4/3}$. Therefore the conditions $\delta \ll 1$ and $\beta^\star \lessapprox 1$, necessary to sustain the weakly nonlinear analysis, can be justified if e.g. $\ti \alpha G\!o = O(1)$, i.e. $\eta=O(1)$ and $\Gamma^{3/2} \ll 1$. For silicon oil v1000, $\Gamma=0.1$ so that $\Gamma^{3/2}=0.03$. [We note that strictly speaking the derivation of the WRIBL model (\ref{model2shk}) demands that $\Gamma$ is at least $O(1)$ since the long-wave approximation is sustained only when surface tension effects are strong \cite[]{Ruy08}.]
Neglecting inertial effects and looking for the dominant terms in (\ref{H}) leads to \begin{eqnarray} \nonumber \partial_t H + \frac{4}{3 \ti \alpha} \partial_x H + \frac{8}{3\ti \alpha} H \partial_x H + \frac{1}{3\ti \alpha\eta} \partial_{xx} H + \frac{1}{3 \ti \alpha} \partial_{xxxx} H &&\\ - \eta\log(\ti \alpha)\left[\frac{3}{2} \partial_{xxt} H + \frac{1}{\ti \alpha}\partial_{xxx} H \right] &=& 0\,. \label{H2} \end{eqnarray} The first-order equivalence of the time and space derivatives of $H$, \ie $\partial_t H = -(4/(3\ti \alpha))\, \partial_x H + O(\epsilon\varepsilon^2,\epsilon^2\varepsilon)$, can again be utilized. Equation (\ref{H2}) then simplifies to \begin{equation} \partial_t H + \frac{4}{3 \ti \alpha} \partial_x H + \frac{8}{3\ti \alpha} H \partial_x H + \frac{1}{3\ti \alpha\eta} \partial_{xx} H + \frac{\eta\log(\ti \alpha)}{\ti \alpha} \partial_{xxx} H + \frac{1}{3 \ti \alpha} \partial_{xxxx} H = 0\,, \label{H10} \end{equation} the `Kawahara' equation that was scrutinized numerically by Kawahara and coworkers~\cite[]{Kaw83,Kaw88}. The equation is also often referred to as the `generalized KS' equation~\cite[]{Dup09b,Tse10a,Tse10b}; it is the KS equation appropriately extended to include dispersion ($\partial_{xxx} H$). As before, we look for TW solutions in their moving frame, $\xi = x- ct$. Stretching the moving coordinate as $\xi = \sqrt{\eta} X$, the amplitude as $H= \eta^{-3/2} A/2$, and the speed as $c=(4/3)\ti \alpha^{-1} (1+ \mu\, \eta^{-3/2}/4)$, then gives \begin{equation} \label{H9} -\mu A + 2 A^2 + \frac{d}{dX} A +\delta_K \frac{d^2}{dX^2} A + \frac{d^3}{dX^3} A = 0\,, \end{equation} where $\delta_K = 3\eta^{3/2} \log(\ti \alpha)$ is a parameter that measures the relative importance of dispersion. Equation~(\ref{H9}) governs TW solutions of the Kawahara equation propagating at speed $\mu$.
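Note that, since $c_k \sim (4/3)\,\ti \alpha^{-1}$ in this limit, the above rescaling of the speed amounts to the identity \begin{equation} \frac{c}{c_k} - 1 = \frac{\mu}{4}\,\eta^{-3/2}\,, \end{equation} so that any asymptotic behaviour of $\mu$ with $\delta_K = 3\eta^{3/2}\log(\ti \alpha)$ translates directly into a relative speed deviation for the physical waves: a speed $\mu$ proportional to $\delta_K$ yields $c/c_k - 1 \propto \log(\ti \alpha)$, independently of $\eta$.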
Since $\ti \alpha \gg 1$ and $\eta=O(1)$, the dispersion parameter $\delta_K$ is large, in which case the speed of the one-hump solitary-wave solutions to (\ref{H9}) is given by $\mu \approx 0.3256 \delta_K$. Solitary wave solutions of (\ref{model2shk}) in the limit $\ti \alpha \gg 1$ and $\delta \ll 1$ should therefore satisfy: \begin{equation} \label{c_ck_aN} \frac{c}{c_k} \approx 1 + 0.24 \log(\ti \alpha). \end{equation} The speed of the one-hump solitary wave solutions to (\ref{model2shk}) is compared to the asymptotic limit (\ref{c_ck_aN}) in Fig.~\ref{fig-KDB_KaRb_aN_log_c_e}. However, our computations show that for moderate values of $\ti \alpha$ the wave speed is in fact closer to $c/c_k \approx 1.65 + 0.24 \log(\ti \alpha)$. The reason for this discrepancy can be traced to the violation of the assumption of small amplitude deviations from the base state ($h-1\ll 1$) that was necessary to obtain (\ref{H10}) from (\ref{model2shk}). Indeed, TW solutions to the Kawahara equation have a speed and an amplitude that diverge as $\delta_K$ becomes large. \begin{figure} \begin{center} \begin{psfrags} \psfrag{c/ck-1}{$c/c_k - 1$} \psfrag{aN}{$\ti \alpha$} \psfrag{xi}{$\xi$} \psfrag{h}{$h$} \subfigure[]{\label{fig-KDB_KaRb_aN_log_c_e} \includegraphics[width=0.45\textwidth]{Fig/KDB_KaRb_aN_log_c_e.eps}}\hfil \subfigure[]{\label{fig-prof_14e_KDBev440_23} \includegraphics[width=0.45\textwidth]{Fig/prof_14e_KDBev440_23.eps}} \end{psfrags} \end{center} \caption{(a) Speed of solitary wave solutions to (\ref{model2shk}) when inertia is neglected ($\delta\to0$) as a function of the aspect ratio $\ti \alpha$. Dotted lines refer to (\ref{c_ck_aN}) and to $c/c_k \approx 1.65 + 0.24\log(\ti \alpha)$. (b) Profile of the solitary wave solutions to (\ref{model2shk}) with (solid line) and without (dashed line) inertial terms (castor oil $\Gamma=0.45$, $R =0.31$~mm, $\delta =0.5$, $\eta=0.45$ and $\ti \alpha=4.06$). 
} \label{fig-KDB_KaRb_e} \end{figure} \subsection{Speed to amplitude relation} \label{S-eed-amp} The weakly nonlinear analysis presented in subsection~\ref{S-weakNL} provides us with the dependencies (\ref{power-laws-bis}) of the speed and maximum height as functions of $\Upsilon = \frac{2}{5}\delta + \beta$ for small film thicknesses. Combining these two relations, where the wave speed $c$ is normalised with the kinematic wave speed $c_k$, gives a linear dependence of the amplitude deviation $h_{\rm max} - 1$ on the speed deviation $c/c_k - 1$, with slope $0.523/0.405 \approx 1.29$: \begin{equation} \label{c-hm-asymp} h_{\rm max} - 1 \approx 1.29 \left( \frac{c}{c_k} - 1\right). \end{equation} A linear dependence of the speed on the amplitude was initially found by Chang~\cite{Cha86} \begin{equation} \label{speed-Chang} h_{\rm max} -1 \approx \frac{c}{c_k} - 1 \end{equation} by utilizing a normal form analysis of the TW solutions of the KS equation (\ref{H6b}). This linear dependence is a characteristic of the drag-gravity regime and must be contrasted with the experimental relation \begin{equation} \label{c-hm-Tihon} h_{\rm max} - 1 \approx 1.67 \left( \frac{c}{c_k} - 1\right) \end{equation} obtained by Tihon \etal~\cite{Tih06} for solitary waves running down a plane inclined at an angle of $5^\circ$. A linear dependence of $h_{\rm max}$ with respect to $c/c_{k}$ has also been found experimentally in the recent study by \cite{Dup09a} for solitary waves on a relatively thick fibre. Similarly, in the drop-like regime, a law relating $c$ and $h_{\rm max}$ can easily be obtained by recognizing that the wave amplitude $h_{\rm max} -1$ should be proportional to the distance $h_{\rm II} - 1$ separating the fixed points. The constant of proportionality is determined by continuity with (\ref{c-hm-asymp}) in the limit $h_{\rm II} - 1 \ll 1$ for which $h_{\rm II} - 1 \approx c/c_k-1$ (see Fig.~\ref{fig-fp}).
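An explicit expression for $h_{\rm II}$ can be sketched as follows. Assuming, as for a film on a planar substrate, that the two uniform states connected by the wave carry the same flux in the frame moving at speed $c$, and neglecting the fibre-curvature corrections to the flux, i.e. taking $q \approx h^3/3$ in units where $c_k=1$, one has \begin{equation} \frac{c}{c_k}\,(h - 1) = \frac{h^3 - 1}{3} = \frac{(h-1)(h^2+h+1)}{3}\,, \end{equation} so that $h_{\rm II}$ solves $h^2 + h + 1 = 3c/c_k$, whose root larger than unity is $h_{\rm II} = -1/2 + \sqrt{3[c/c_k - 1/4]}$.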
Next, by approximating $h_{\rm II}$ with $-1/2 + \sqrt{3[c/c_k-1/4]}$ we arrive at: \begin{equation} \label{c-hm-asymp-2} h_{\rm max} - 1 \approx 1.29 \left(-\frac{3}{2} + \sqrt{3\left[\frac{c}{c_k} - \frac{1}{4}\right]} \right). \end{equation} Figure~\ref{fig-KaRb-c-hmv50} presents the dependence of $h_{\rm max}-1$ on the relative speed $c/c_k -1$ for the results in figure~\ref{fig-sol} corresponding to the WRIBL model (\ref{model2shk}) and to silicon oil v50. The direction of increasing reduced Reynolds number $\delta$ along the curves is indicated by arrows. There is good agreement with relation (\ref{c-hm-asymp}) in the limit $c \approx c_k$. Good agreement with relation (\ref{c-hm-asymp-2}) is also found in the case of the fast waves ($c/c_k > 6$), typical of the drop-like regime observed at small Goucher numbers. In the drag-inertia regime, however, at large values of $\delta$, the speed of the waves saturates to the limit $c_{\rm crit}(\ti \alpha,\beta/\delta)$ and no one-to-one mapping of the amplitude onto the wave speed is observed. When the drop-like regime is affected by inertia for weakly viscous liquids, relation (\ref{c-hm-asymp-2}) is no longer applicable, reflecting the influence of the K mode on the drop-like waves and in particular the steepening of the wave fronts (see Fig.~\ref{fig-KaRb-c-hmv1}). \begin{figure} \begin{center} \begin{psfrags} \psfrag{c/ck-1}{$c/c_k -1$} \psfrag{hmax-1}{$h_{\rm max} - 1$} \psfrag{D>>1}{$\delta\gg1$} \psfrag{D<<1}{$\delta\ll1$} \subfigure[silicon oil v50 ($\Gamma=5.48$)]{ \label{fig-KaRb-c-hmv50} \includegraphics[width=0.45\textwidth]{Fig/KaRb_c_hm.eps}}\hfill \subfigure[water ($\Gamma=3376$)]{ \label{fig-KaRb-c-hmv1} \includegraphics[width=0.45\textwidth]{Fig/KaRbv1_c_hm.eps}} \end{psfrags} \end{center} \caption{Deviation wave amplitude $h_{\rm max}- 1$ versus relative wave speed $c/c_k - 1$ for the solitary-wave solutions to (\ref{model2shk}) for $G\!o=1.01$, $0.236$ and $0.168$. 
The dashed lines correspond to the estimates (\ref{c-hm-asymp}) and (\ref{c-hm-asymp-2}). \label{fig-KaRb-c-hm} } \end{figure} It is then clear from the above that equation~(\ref{c-hm-asymp-2}) provides a good approximation to the speed-to-amplitude dependence for solitary waves in both the drop-like and the drag-gravity regimes. \section{Traveling waves} \label{S-TW} In this section, we compare periodic TW solutions of the WRIBL model~(\ref{model2shk}) ---\ie limit cycles of the dynamical system (\ref{ssdyn})--- with ($\eta \neq 0$) and without ($\eta\to0$) viscous dispersion, and of the CM~equation (\ref{KDB}) (both $\delta\to0$ and $\eta\to0$), for the experimental conditions considered by~\cite{Kli01} and~\cite{Dup07}. We revisit the description of the TW branches of solutions given in \cite{Ruy08} with reference to the characteristics of TWs of infinite extension, i.e. as TWs approach homoclinicity. In the experimental set-ups of~\cite{Kli01,Dup07} the flow rate was controlled and maintained at a constant value at the inlet. Amplification of the ambient noise resulted in a wavy dynamics with waves traveling with constant shapes and speeds over long distances. Periodic TWs can be produced experimentally by applying periodic perturbations at the inlet \cite[]{Liu94,Dup09a}. If the signal remains periodic in time at each location in space, an integration in time of the mass balance (\ref{q}) shows that the time average of the flow rate, $T^{-1} \int_0^T q \,dt$, where $T$ is the period, is conserved all along the fibre and is equal to its value at the inlet, which gives the condition $T^{-1} \int_0^T q \,dt = 1/3$ \cite[]{Sch05}. Periodic TW solutions of the model equations aiming to describe the experimental conditions must therefore satisfy the condition $\langle q \rangle\equiv k/(2\pi) \int_0^{2\pi/k} q\,d\xi =1/3$, where $k$ again denotes the wavenumber.
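The conservation argument is elementary but worth spelling out. Writing the mass balance (\ref{q}) schematically as $\partial_t \mathcal{A} + \partial_x q = 0$, where $\mathcal{A}$ denotes the amount of liquid per unit length (its precise expression in terms of $h$ is immaterial here), and integrating over one period yields \begin{equation} \int_0^T \left(\partial_t \mathcal{A} + \partial_x q\right) dt = \big[\mathcal{A}\big]_0^T + \frac{d}{dx}\int_0^T q\,dt = \frac{d}{dx}\int_0^T q\,dt = 0\,, \end{equation} since $[\mathcal{A}]_0^T$ vanishes by time-periodicity; the time-averaged flow rate is thus independent of the position $x$ and retains its inlet value, $1/3$ in the present normalisation.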
Let us first consider the experimental conditions corresponding to the very viscous fluid used by \cite{Kli01} (castor oil, $\Gamma=0.45$; see Table~\ref{table2}). The bifurcation diagrams of TW~solutions of the WRIBL model (\ref{model2shk}) and of the CM~equation (\ref{KDB}) are compared in figure~\ref{Kliac_a}. The parameters in the figure correspond to regime `c' reported by Kliakhandler {\it et al.} ($q_{\rm N}=21$~mg/s and $R=0.25$~mm). We have normalised the wavenumber $k$ with the reference $k_{\rm RP} \equiv \sqrt{\beta}/(1+\ti \alpha)$ corresponding to the marginal stability condition (real $k$ and $\omega$) for linear wave solutions to the CM~equation. $k_{\rm RP}$ corresponds to a dimensional wavelength $2\pi(\bar R + \bar h_{\rm N})$ proportional to the diameter of the liquid cylinder. Figure~\ref{Kliac_b} shows corresponding wave profiles with regularly spaced streamlines in the moving frame, i.e. iso-contours of the function $\psi = \int_R^r u_x\,rdr +c (R^2-r^2)/2$ at levels $n q_0/N$, $1\le n \le N$ with $N=10$. The surface of the fibre corresponds to $\psi=0$ and the free surface to $\psi = q_0$.
\begin{figure} \begin{center} \begin{psfrags} \psfrag{c/ck}{$c/c_k$} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{h}{$h$} \psfrag{10x}{(iii)} \psfrag{19}{(ii)} \psfrag{20}{(i)} \psfrag{16}{(iv)} \psfrag{14}{(v)} \psfrag{12}{(vi)} \psfrag{9}{(vii)} \psfrag{10}{(viii)} \psfrag{11}{(ix)} \psfrag{g1}{$\gamma_1$} \psfrag{g2}{$\gamma_2$} \subfigure[]{\label{Kliac_a} \includegraphics[width=0.45\textwidth]{Fig/Kliac.eps}}\hfill \subfigure[]{\label{Kliacha} \includegraphics[width=0.45\textwidth]{Fig/Kliac_h.eps}} \subfigure[]{\label{Kliac_b} \begin{minipage}[c]{0.6\textwidth} \includegraphics[width=0.32\textwidth]{Fig/lc_7_Kliac_20.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11_Kliac_16.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13a_Kliac_9.eps}\\ \includegraphics[width=0.32\textwidth]{Fig/lc_7_Kliac_19.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11_Kliac_14.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13a_Kliac_10.eps}\\ \includegraphics[width=0.32\textwidth]{Fig/lc_7b_Kliac_10.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11b_Kliac_12.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13a_Kliac_11.eps}\\ \end{minipage}} \end{psfrags} \end{center} \caption{TW branch of solutions bifurcating from the marginal stability conditions. (a) Normalised speed $c/c_k$ as a function of the normalised wavenumber $k/k_{\rm RP}$ where $k_{\rm RP} = \sqrt{\beta}/(1+\ti \alpha)$ (see text). Solid and dashed lines refer to (\ref{model2shk}) and to the CM~equation (\ref{KDB}), respectively. (b) Maximum thickness $h_{\rm max}$ (dashed line) and substrate thickness $h_s$ (dashed-dotted line) as functions of the normalised wavenumber $k/k_{\rm RP}$. The solid line refers to the relative maximum thickness $h_{\rm max}/h_s$. Labels~1 refer to model (\ref{model2shk}), whereas labels~2 refer to the CM~equation (\ref{KDB}). 
(c) Wave profiles and streamlines in the moving frame for solutions indicated by crosses in panel~a; left: solutions to (\ref{model2shk}); right: solutions to (\ref{KDB}). Parameters correspond to the experimental conditions of \cite{Kli01}: $q_{\rm N}=5.3$~mg/s and $R=0.25$~mm ($G\!o=0.138$, $\ti \alpha=2.04$, $\delta=0.01$, $\eta=0.18$ and $\beta^\star = 2.1$). \label{Kliac}} \end{figure} In the case of the WRIBL model (\ref{model2shk}), only one branch of TW~solutions has been found emerging from the marginal linear stability conditions at $k \approx k_{\rm RP}$ through a Hopf bifurcation, whereas a secondary branch has also been found by period doubling for the CM equation (\ref{KDB}). We note that weakly nonlinear TW~solutions of~(\ref{model2shk}) travel at a smaller speed than the TW~solutions of the CM~equation (\ref{KDB}) since the speed of linear kinematic waves is significantly affected by the second-order viscous effects~\cite[]{Ruy08}. At small wavenumbers, TW solutions of the branch emerging from $k \approx k_{\rm RP}$ accelerate, become more and more localized and terminate in the single-hump solitary waves discussed in~\S\,\ref{S-dyn}. The speed, amplitude and shape of the solutions of (\ref{model2shk}) and (\ref{KDB}) are comparable in this limit. Following the terminology introduced by Chang and co-workers for falling films on a planar substrate~\cite[]{Cha93a,Cha02}, we refer to this branch of solutions as the `$\gamma_2$ waves'. The branch of solutions to (\ref{KDB}) emerging through period doubling terminates with slow waves whose shape consists of a trough followed by capillary ripples. Following again \cite{Cha93a,Cha02} for planar substrates, we refer to these TWs as the `$\gamma_1$ waves'. For the conditions of Fig.~\ref{Kliac}, the fibre radius is small compared to the capillary length ($G\!o =0.138$) and the Nusselt film thickness is comparable to the fibre radius ($\ti \alpha=2.04$).
As a consequence, the RP instability mechanism is strong and TWs have amplitudes comparable to the fibre radius, which corresponds to the typical situation of the `drop-like' regime of the solitary waves discussed in \S~\ref{S-drop}. Yet, a direct transposition of the results obtained in \S~\ref{S-drop} to solitary-like wavetrains in the limit $k\to0$ is not possible since the reference thickness of the substrates of the solitary-like waves is not constant (recall that in the treatment of solitary wave solutions, the constant film thickness far from the solitary waves was the reference thickness). We have computed the parameter $\beta_s^\star$ based on the substrate thickness $h_s$ defined by the position of the fixed point in the phase space when the corresponding limit cycle approaches homoclinicity. The maximum of the relative thickness $h_{\rm max}/h_s$, $6.5$ here, is reached for a value of the local number $\beta_s^\star$ close to its maximum, namely $3.65$, as expected since the waves' characteristics are governed by the balance between advection by the flow and the RP instability in the drop-like regime. Figure~\ref{Kliacha} shows the evolution of the maximum thickness $h_{\rm max}$, of the substrate thickness $h_s$ and of the ratio of the two as a function of the normalised wavenumber $k/k_{\rm RP}$. At small $k$, a TW is solitary-like and travels on a portion of nearly flat substrate of increasing length. The mass transported by the wave decreases in comparison to the mass carried by the substrate and $h_s$ asymptotes to the Nusselt film thickness. As a consequence $h_s$ increases as $k$ tends to zero. This trend is followed by the maximum height $h_{\rm max}$: as a TW becomes more and more localized, its size keeps increasing. However, the maximum relative thickness, $h_{\rm max}/h_s$, decreases rapidly as $k$ tends to zero.
Indeed, larger substrate thicknesses imply weaker RP instability mechanisms and stronger advection: $\beta_s^\star$ (not shown) follows a trend similar to $h_{\rm max}/h_s$. We therefore obtain a rather paradoxical picture: Despite the weakening of the RP instability --- thus the lowering of $h_{\rm max}/h_s$---, a TW has its absolute amplitude augmented as $k$ tends to zero. For the experiments by Kliakhandler {\it et al.}, the RP instability mechanism is strong while inertia and viscous dispersion are weak ($\delta=0.01$, $\eta=0.18$), which explains the good agreement obtained with the results from the CM~equation (\ref{KDB}). However, as the RP~instability is weakened by raising the Goucher number, streamwise viscous dispersion should play an increasingly important role. We have checked this by computing the TW~solutions of~(\ref{model2shk}) and~(\ref{KDB}) corresponding to a flow rate $q_{\rm N}=10$~mg/s and a larger radius $R=0.5$~mm, i.e. for $\delta=0.01$, $\beta^\star=1.16$ and $\eta=0.22$, parameters that are comparable to those corresponding to the regime `c' discussed by Kliakhandler {\it et al.} ($\delta=0.01$, $\beta^\star=2.1$ and $\eta=0.18$) but with a value of $\beta^\star$ roughly two times smaller. Branches of solutions are compared in figure~\ref{KliacR0.5a} in the $(k/k_{\rm RP},c/c_k)$-plane. Fast $\gamma_2$ TW solutions of (\ref{KDB}) are again observed to emerge from the marginal conditions $k\approx k_{\rm RP}$ whereas $\gamma_1$ solutions are obtained through a period doubling bifurcation of the $\gamma_2$ branch. We note again that only the $\gamma_2$ branch of solutions can be found for the WRIBL model (\ref{model2shk}), with no period doubling bifurcation being detected in this case. We therefore conclude that the inclusion of second-order viscous effects reduces the number of wave families and, as a consequence, may drastically simplify the complex sequence of bifurcations and topological structure of associated bifurcation diagrams.
In fact, such a reduction of the number of wave families by the second-order viscous terms was also evidenced in the planar case \cite[]{Ruy99}. In comparison to Fig.~\ref{Kliac_a}, the $\gamma_2$ branches portrayed in figure~\ref{KliacR0.5a} now deviate significantly from each other even at small wavenumbers. The TW solutions of model~(\ref{model2shk}) have a larger amplitude than the TW solutions of equation~(\ref{KDB}) as indicated in Fig.~\ref{KliacR0.5}, which compares the profiles of waves of equal wavenumber. When approaching homoclinicity, this effect is enhanced, and solutions of~(\ref{model2shk}) travel with a notably higher speed and larger amplitude than solutions of~(\ref{KDB}). They are also preceded by smaller capillary ripples. Figure~\ref{Kliachb} compares $h_s$ and $h_{\rm max}$. Again, the substrate thickness and the maximum thickness have the same trend as $k$ tends to zero, whereas the maximum relative thickness $h_{\rm max}/h_s$ evolves in the opposite direction. However, a comparison of Fig.~\ref{Kliachb} to Fig.~\ref{Kliacha} reveals that the maximum reached by $h_{\rm max}/h_s$ is about three times smaller than for the experimental conditions considered by \cite{Kli01}. Computations of the local values of the parameters based on the substrate thickness show that $\beta_s^\star$ does not exceed $1.41$ and decreases as $k$ is lowered whereas $\eta_s$ increases. The local parameters are $\beta_s^\star=1.2$ and $\eta_s=0.19$ for the wave profile labelled (iii) in Fig.~\ref{KliacR0.5} so that viscous dispersion effects become comparable with the RP instability mechanism. We conclude that viscous dispersion has a non-trivial amplifying effect on the RP~instability mode.
\begin{figure} \begin{center} \begin{psfrags} \psfrag{c/ck}{$c/c_k$} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{h}{$h$} \psfrag{i}{(i)} \psfrag{ii}{(ii)} \psfrag{iii}{(iii)} \psfrag{iv}{(iv)} \psfrag{v}{(v)} \psfrag{vi}{(vi)} \psfrag{5}{(vii)} \psfrag{4}{(viii)} \psfrag{4b}{(ix)} \psfrag{g1}{$\gamma_1$} \psfrag{g2}{$\gamma_2$} \subfigure[]{\label{KliacR0.5a} \includegraphics[width=0.45\textwidth]{Fig/Kliac_R0.5.eps}}\hfil \subfigure[]{\label{Kliachb} \includegraphics[width=0.45\textwidth]{Fig/Kliac_R0.5_h.eps}} \subfigure[]{\label{KliacR0.5b} \begin{minipage}[c]{0.6\textwidth} \includegraphics[width=0.32\textwidth]{Fig/lc_7_Kliac_R0.5_10.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11_Kliac_R0.5_12.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13_Kliac_R0.5_5.eps}\\ \includegraphics[width=0.32\textwidth]{Fig/lc_7_Kliac_R0.5_11.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11_Kliac_R0.5_13.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13_Kliac_R0.5_4.eps}\\ \includegraphics[width=0.32\textwidth]{Fig/lc_7b_Kliac_R0.5_10.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_11b_Kliac_R0.5_12.eps} \includegraphics[width=0.32\textwidth]{Fig/lc_13b_Kliac_R0.5_4.eps} \end{minipage}} \end{psfrags} \end{center} \caption{TW branch of solutions bifurcating from the marginal stability conditions. Parameters correspond to castor oil ($\Gamma=0.45$), $q_{\rm N}=10$~mg/s and $R=0.5$~mm ($\delta=0.01$, $\eta=0.22$, $\ti \alpha=1.15$ and $\beta^\star=1.16$). See caption of figure~\ref{Kliac}.} \label{KliacR0.5} \end{figure} For less viscous fluids like Rhodorsil silicon oil v50, one may expect a more significant effect of inertia and of the K~mode of instability. In Fig.~\ref{q0.151Rb0.23h}, we compare the substrate, maximum and relative maximum thicknesses, $h_s$, $h_{\rm max}$ and $h_{\rm max}/h_s$, respectively, corresponding to the TW branch of solutions to (\ref{model2shk}). 
Parameters are chosen to correspond to one of the experimental conditions considered by \cite{Dup07} ($R=0.23$~mm, $q_{\rm N}=151$~mg/s) and to a Goucher number $G\!o=0.155$ close to the corresponding one for the experiments in \cite{Kli01} ($G\!o=0.137$). As $k$ is lowered and the TWs approach homoclinicity, $h_s$ and $h_{\rm max}$ grow, a trend similar to what is observed in figures~\ref{Kliacha} and \ref{Kliachb} for a more viscous fluid. However, the relative amplitude $h_{\rm max}/h_s$ follows a surprising non-monotonic behaviour with an S-shaped curve in the plane ($h$,$k/k_{\rm RP}$). This behaviour can be understood by examining the local values of $\delta$ and $\beta^\star$ based on the substrate thickness $h_s$, presented in figure~\ref{q0.151Rb0.23Dbs}. As $k$ is lowered to zero, $\beta_s^\star$ decreases whereas $\delta_s$ varies in the opposite direction. As a result, the two curves ultimately cross and very long waves correspond to a relatively stronger K instability mode than the initially dominant RP instability. We thus recover the usual trend observed with the solitary waves propagating on a planar film: Larger substrates imply larger amplitudes $h_{\rm max}$ and also larger relative amplitudes $h_{\rm max}/h_s$ \cite[]{Ale94,Cha02}. \begin{figure} \begin{center} \begin{psfrags} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{h}{$h$} \psfrag{D}{$\delta_s$} \psfrag{bs}{$\beta^\star_s$} \psfrag{D,bs}{$\delta_s$, $\beta^\star_s$} \subfigure[]{\label{q0.151Rb0.23h} \includegraphics[width=0.45\textwidth]{Fig/q0.151_Rb_0.23_h.eps}}\hfil \subfigure[]{\label{q0.151Rb0.23Dbs} \includegraphics[width=0.45\textwidth]{Fig/q0.151_Rb_0.23_Dbs.eps}} \end{psfrags} \end{center} \caption{(a) Maximum thickness $h_{\rm max}$ (dashed line) and substrate thickness $h_s$ (dashed-dotted line) versus the normalised wavenumber $k/k_{\rm RP}$. The solid line refers to the relative thickness $h_{\rm max}/h_s$. 
(b) Local reduced Reynolds number $\delta_s$ and local parameter $\beta^\star_s$ versus $k/k_{\rm RP}$. Parameters correspond to Rhodorsil v50 silicon oil ($\Gamma=5.48$), $q_{\rm N}=151$~mg/s and $R=0.23$~mm ($G\!o=0.155$, $\delta=3.9$, $\ti \alpha=3.0$ and $\eta=0.36$). } \label{q0.151Rb0.23-h-Dbs} \end{figure} We now turn to the experimental conditions considered in \cite{Dup09a} corresponding again to Rhodorsil silicon oil v50 but with a larger fibre, $R=0.475$~mm. The chosen flow rate $q_{\rm N}=296$~mg/s gives values of the reduced Reynolds number $\delta$ and of the viscous dispersion parameter $\eta$ which do not vary significantly (compare e.g. $\delta=4.1$ and $\eta=0.44$ to $\delta=3.9$ and $\eta=0.36$). Inertia and viscous dispersion effects are therefore comparable whereas the RP instability mechanism is significantly lowered ($\beta^\star$ is reduced from $1.30$ to $0.77$). As for $R=0.5$~mm and $q_{\rm N}=10$~mg/s, we found a single branch of TW solutions bifurcating from the marginal stability curve and the obtained bifurcation diagram (not shown) is similar to Fig.~\ref{KliacR0.5}. Figure~\ref{R0.475q0.296h} compares the corresponding substrate thickness $h_s$ and the absolute and relative amplitudes $h_{\rm max}$ and $h_{\rm max}/h_s$. As $k$ is lowered, all curves exhibit a monotonic behaviour similar to what is expected in the planar case. Again, this can be understood by examining the local values of the two governing parameters (cf. Fig.~\ref{R0.475q0.296Dbs}). The local parameter $\beta_s^\star$ does not exceed unity whereas the local reduced Reynolds number $\delta_s$ sharply increases for very long waves, which signals the prevalence of the K mode over the RP mode.
\begin{figure} \begin{center} \begin{psfrags} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{h}{$h_{\rm max}$} \psfrag{D}{$\delta_s$} \psfrag{bs}{$\beta^\star_s$} \psfrag{D,bs}{$\delta_s$, $\beta^\star_s$} \subfigure[]{\label{R0.475q0.296h} \includegraphics[width=0.45\textwidth]{Fig/R0.475q0.296_h.eps}}\hfil \subfigure[]{\label{R0.475q0.296Dbs} \includegraphics[width=0.45\textwidth]{Fig/R0.475q0.296_Dbs.eps}} \end{psfrags} \end{center} \caption{(a) Maximum thickness $h_{\rm max}$ (dashed line) and substrate thickness $h_s$ (dashed-dotted line) versus the normalised wavenumber $k/k_{\rm RP}$. The solid line refers to the maximum relative thickness $h_{\rm max}/h_s$. (b) Local reduced Reynolds number $\delta_s$ and $\beta^\star_s$ versus $k/k_{\rm RP}$. Parameters correspond to Rhodorsil v50 silicon oil, $q_{\rm N}=296$~mg/s and $R=0.475$~mm ($G\!o=0.32$, $\delta=4.1$, $\ti \alpha=1.7$ and $\eta=0.44$).} \label{R0.475q0.296} \end{figure} Interestingly, TWs can be found at a wavenumber $k$ higher than the critical wavenumber $k_c\approx k_{\rm RP}$ corresponding to the marginal stability conditions, in which case the Nusselt solution is linearly stable. This subcritical bifurcation of TW solutions from the Nusselt uniform film occurs at small radii of the fibre, i.e. at small Goucher numbers (see figures~\ref{Kliac} and~\ref{KliacR0.5}). The subcritical onset of TWs has also been reported in the recent work by Novbari and Oron~\cite{Nov11} based on an `energy integral method', though these authors did not give a physical interpretation of this phenomenon. Since the RP instability mechanism is strong when $G\!o\ll1$, we can conjecture that the onset of subcriticality is related to surface tension effects associated with the azimuthal curvature (hence this effect is absent in the planar case). From a thermodynamic viewpoint, surface forces tend to reduce the free energy of the system and equilibrium is reached when the contact area between the liquid and the surrounding gas is minimum.
Obviously, the flow is out of equilibrium and a thermodynamic argument must be taken with care \footnote{Indeed, Kapitza himself erroneously predicted the instability threshold of a falling film using thermodynamic arguments~\cite[]{Kap48}, a question that was settled with Benjamin's work a few years later~\cite[]{Ben57}.}. Yet, when $\delta \ll1$, $\eta \ll 1$ and $G\!o \ll 1$, model~(\ref{model2shk}) reduces at leading order to the long-wave Young-Laplace equation and the shape of the TWs thus only slightly differs from the shape of static drops on fibres, as was shown in \S\,\ref{S-drop} and in Fig.~\ref{fig-prof-tab2}. We have computed the TW solutions to the CM equation (\ref{KDB}) for $G\!o=0.02$ and $G\!o=0.055$, and for $\ti \alpha=0.5$ and $\ti \alpha=8$. The maximum amplitude $h_{\rm max}$ and the interfacial area $A$ of the waves, the latter normalised by the area of the uniform film solution of the same volume, are displayed in Fig.~\ref{R0.02aN8}. The subcritical onset of the TW solutions is accompanied by a normalized interfacial area lower than unity, and therefore a lowering of the free surface energy in comparison to the Nusselt uniform film solutions, which supports our conjecture that subcriticality is promoted by capillary effects. We note that the maximum wavenumber at which TWs are observed seems to depend only on $\ti \alpha$, independently of $G\!o$. Conversely, at a given value of $\ti \alpha$, the amplitude of the waves is strongly affected by the value of $G\!o$, a small Goucher number implying larger waves as expected, since the saturation number $\beta=\ti \alpha^{2/3} G\!o^{-4/3}$ is also larger.
\begin{figure} \begin{center} \begin{psfrags} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{Anorm}{$A$} \psfrag{hM}{$h_{\rm max}$} \subfigure[]{\label{R0.02aN8-h} \includegraphics[width=0.45\textwidth]{Fig/R0.02aN8_h.eps}}\hfil \subfigure[]{\label{R0.02aN8-Anorm} \includegraphics[width=0.45\textwidth]{Fig/R0.02aN8_Anorm.eps}} \end{psfrags} \end{center} \caption{\label{R0.02aN8} Traveling wave solutions to the CM equation (\ref{KDB}) for $\ti \alpha=8$ and $\ti \alpha=0.5$. Thick and thin solid lines refer to $G\!o=0.02$ and $G\!o=0.055$, respectively. Curves labeled 1 (respectively 2) correspond to $\ti \alpha=8$ ($\ti \alpha=0.5$).} \end{figure} In Fig.~\ref{R0.02LP} the maximum of the normalised wavenumber $k/k_{\rm RP}$ of the TW solutions of the CM equation is depicted as a function of the aspect ratio $\ti \alpha$ for $G\!o=0.02$ and $G\!o=0.055$. We note again that the location of this local maximum only weakly depends on the Goucher number. The shape of the corresponding TW solutions (not shown) is also nearly symmetrical. We thus conclude that the subcritical behavior of the TW solutions is not affected by gravity effects at small values of the Goucher number but is strongly dependent on the parameter $\ti \alpha$ and therefore on the geometry of the base flow. When gravity effects are neglected and the substrate is wettable, the shape of a static drop is governed by its length and volume. Therefore, for a normalized area $A$ set to unity, the shape of a static drop depends only on the aspect ratio $\ti \alpha$ of the uniform film solution with the same volume. The solid line curve in figure~\ref{R0.02LP} represents the wavenumber of static drops with unit normalized area. Above this curve, $A>1$ and the static drop solution is not energetically favorable, whereas below it the static drop solution is favored.
Indeed, large drops have a nearly spherical shape with an interfacial area that is lower than that of the uniform film with the same volume, further lowering of the interfacial area being forbidden by the presence of the fibre. For a given amount of liquid, the thicker the uniform film solution is, the lower the normalized area of the static drop. Therefore, the range of energetically favored wavenumbers of the static drop solution increases with the aspect ratio $\ti \alpha$, as can be observed from figure~\ref{R0.02LP}. We can now deduce two more arguments from Fig.~\ref{R0.02LP} which support a capillary origin of the subcritical behavior of the branch of TW solutions that emerges from the Hopf bifurcation of the uniform film solution: (i) Since all points lie below the solid curve, TWs only exist when an energetically favorable static drop solution is available; (ii) The trend of the maximum of the TW wavenumber with respect to the aspect ratio $\ti \alpha$ is similar to the trend of the boundary separating energetically favored and unfavored static drop solutions. \begin{figure} \begin{center} \begin{psfrags} \psfrag{k/kRP}{$k/k_{\rm RP}$} \psfrag{aN}{$\ti \alpha$} \includegraphics[width=0.45\textwidth]{Fig/R0.02LP.eps} \end{psfrags} \end{center} \caption{\label{R0.02LP} Location of the maximum of the normalised wavenumber $k/k_{\rm RP}$ as a function of the aspect ratio $\ti \alpha$ for the TW solutions of the CM equation (\ref{KDB}). Crosses (respectively squares) correspond to $G\!o=0.02$ ($G\!o=0.055$). The solid line denotes the static drop solution to (\ref{Kum88-LY-eq}) with a normalized area $A=1$ (see text and Appendix~\ref{S-Static-Drops}). } \end{figure} \section{Phase diagram} \label{S-Phase} We are now in a position to give a phase diagram of the different regimes found for the TW solutions of a film flowing down a fibre.
The onset of the `drop-like' regime and of the `drag-inertia' regime corresponds roughly to $\beta^\star\approx 1$ and $\delta \approx 1$, respectively. The `soliton-like' regime arises when the instability mechanisms are weak ($\delta \lessapprox 1$ and $\beta^\star\lessapprox 1$), the film is thick, $\ti \alpha = O(1)$, and viscous dispersion is strong, $\eta = O(1)$. Finally, the `drag-gravity' regime is observed when all other effects are weak ($\delta \lessapprox 1$, $\beta^\star\lessapprox 1$ and $\eta \ll 1$). Therefore, a phase diagram can be obtained for a given fluid, thus a given Kapitza number $\Gamma$, by drawing the curves $\delta=1$, $\eta=1$ and $\beta^\star=1$ in the plane ($\ti \alpha$, $G\!o$). Since $\eta=(\ti \alpha\, G\!o)^{4/3}$ and $\beta^\star=\{\ti \alpha c_k(\ti \alpha)/[G\!o^2 (1+\ti \alpha)^4]\}^{2/3}$ are functions of $\ti \alpha$ and the Goucher number only, the corresponding curves $\beta^\star=1$ and $\eta=1$ are independent of the working fluid considered. Thus, $\delta=1$ is the only boundary that moves in the plane ($\ti \alpha$, $G\!o$) when $\Gamma$ is varied. Figure~\ref{domaine} is a tentative representation of the phase diagrams for the four working fluids considered in this study, from weakly viscous fluids like water with a high Kapitza number, $\Gamma=3376$, to highly viscous fluids like silicon oil v1000 corresponding to a small Kapitza number, $\Gamma=0.10$.
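Of these boundaries, the $\eta=1$ curve can be traced explicitly, since $\eta$ depends on $\ti \alpha$ and $G\!o$ only through their product. The following minimal sketch (ours, purely illustrative; the $\beta^\star=1$ curve would additionally require the function $c_k(\ti \alpha)$, which is not reproduced here) recovers it numerically:

```python
# Locate the eta = 1 boundary of the phase diagram numerically,
# using eta = (alpha * Go)**(4/3) from the text.
from scipy.optimize import brentq

def eta(alpha, Go):
    return (alpha * Go) ** (4.0 / 3.0)

def go_on_eta1_boundary(alpha):
    # solve eta(alpha, Go) = 1 for Go by bracketing root-finding
    return brentq(lambda Go: eta(alpha, Go) - 1.0, 1e-6, 1e3)

for a in (0.5, 1.0, 2.0):
    print(a, round(go_on_eta1_boundary(a), 6))  # Go = 1/alpha
```

As expected, the boundary is simply $G\!o = 1/\ti \alpha$, independent of the working fluid, consistent with the statement that only the $\delta=1$ curve moves when $\Gamma$ is varied.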
\begin{figure} \begin{center} \begin{psfrags} \psfrag{RP}{DL} \psfrag{VIS}{SOL} \psfrag{DI}{DI} \psfrag{DG}{DG} \psfrag{aN}{$\ti \alpha$} \psfrag{R/lc}{$G\!o$} \psfrag{beta=1}{$\beta^\star=1$} \psfrag{D=1}{$\delta=1$} \psfrag{e=1}{$\eta=1$} \subfigure[water ($\Gamma=3376$)]{\label{domainev1} \includegraphics[width=0.45\textwidth]{Fig/domainev1.eps}}\hfil \subfigure[silicon oil v50 ($\Gamma=5.48$)]{\label{domainev50} \includegraphics[width=0.45\textwidth]{Fig/domainev50.eps}}\\ \subfigure[castor oil ($\Gamma=0.45$)]{\label{domainev440} \includegraphics[width=0.45\textwidth]{Fig/domainev440.eps}}\hfil \subfigure[silicon oil v1000 ($\Gamma=0.10$)]{\label{domainev1000} \includegraphics[width=0.45\textwidth]{Fig/domainev1000.eps}} \end{psfrags} \end{center} \caption{Maps of the different regimes in the $G\!o-\ti \alpha$ parameter space for fluids of increasing viscosity. The different curves are the loci of $\delta=1$, $\eta=1$ and $\beta^\star=1$. `DI' refers to the `drag-inertia' regime, `DL' to the Rayleigh-Plateau regime, `SOL' to the `soliton-like' regime and `DG' to the `drag-gravity' regime. } \label{domaine} \end{figure} In the case of water, a large overlap region exists for the `drop-like' and the `drag-inertia' regime corresponding to a mutual reinforcement of the two K and RP instabilities. The `soliton-like' regime takes over at relatively high viscosities where the curve $\delta=1$ moves below the curve $\eta=1$. \section{Conclusions and discussion} \label{S-concl} We have presented new results and insights on the characteristics of axisymmetric waves propagating down a fibre. Our analysis was based on the two-equation model derived in \cite{Ruy08} using a weighted-residuals approach, equation (\ref{model2shk}). 
The model accounts for all physical effects, namely inertia, azimuthal curvature and viscous dispersion, and has been validated in the studies by Ruyer-Quil \emph{et al.}~\cite{Ruy08} and Duprat \emph{et al.}~\cite{Dup09a} through direct comparisons to the experiments by Kliakhandler \emph{et al.}~\cite{Kli01} and Duprat \emph{et al.}~\cite{Dup07,Dup09a}. We first focused on isolated waves running on a constant thickness substrate, or solitary waves. The dynamics of the film seems to be dominated by these structures even when the flow becomes disordered \cite[]{Kli01,Cra06}. We examined in detail, via both asymptotic analysis and elements from dynamical systems theory, the shape, speed and amplitude of the waves for four fluids of increasing viscosities: water, Rhodorsil silicon oils v50, v1000 and the castor oil utilised in the experiments by \cite{Kli01}. We identified four distinct regimes corresponding to the competition of the two instability modes, the K and RP modes, prompted by inertia and azimuthal curvature (the fibre curvature effectively), respectively, with the viscous dispersion (\ie the second-order axial viscous diffusion and second-order viscous contributions to the tangential stress at the free surface) and with the advection of the structures by the flow, which results from the balance between gravity and viscous drag. Two of these regimes are similar to what is found in the planar geometry \cite[]{Oos99}. The `drag-gravity' regime corresponds to the predominance of the flow advection over the instability mechanisms, either when inertia effects are weak, \ie for $\delta \lessapprox 1$, or when the azimuthal curvature effects are non-dominant, $\beta^\star \lessapprox 1$. In both cases it is possible to interpret the `drag-gravity' regime as one where the instability growth is arrested by the flow, which determines the amplitude and speed of the solitary waves, as reflected by the asymptotic relations (\ref{power-laws-bis}).
The `drag-inertia' regime is observed at large reduced Reynolds numbers, $\delta \gg 1$, when the wave characteristics are determined by the balance of inertia, drag and gravity. We have obtained the asymptotic limit of the speed and shown that the rate of convergence to this limit is governed by $\eta/\delta^2 \propto R\!e^{-2}$. The `drop-like' regime corresponds to the predominance of the RP instability mechanism over the flow advection. It is specific to the cylindrical geometry and is observed for small fibre radii $R$ compared to the capillary length $l_c$, that is at small Goucher numbers $G\!o$, when the typical time of growth of the RP instability is greater than the advection time of a wavy structure, \ie for $\beta^\star \gtrapprox 1$. The maximum reachable amplitude and speed of the waves in this regime are governed by the radius $R$ of the fibre and the balance of gravity and viscous drag. Comparisons to quasi-static drop solutions of the Laplace--Young equation (\ref{Kum88-LY-eq}) sliding down a fibre with a speed verifying the Landau--Levich--Derjaguin law (\ref{Landau}) show excellent agreement, even in the case of spherical drops where the long-wave lubrication assumption does not strictly apply. In this regime, waves have a drop-like, nearly symmetrical shape determined by capillary effects. The thickness of the substrate film on which the drops slide is governed by the balance of viscosity and capillarity. `Drop-like' TW solution branches emerge subcritically from the Nusselt uniform film branch. We have given an explanation for this subcritical onset based on geometric and thermodynamic arguments and thus completed its recent investigation in~\cite{Nov11}. This phenomenon arises from capillary effects and depends only on the aspect ratio $\ti \alpha$ for sufficiently low Goucher numbers.
We have also found a possible fourth regime for very viscous fluids and thick films ($\ti \alpha = O(1)$), for which both K and RP instability mechanisms are weak ($\delta\lessapprox 1$ and $\beta^\star \lessapprox 1$) and viscous dispersion is significant ($\eta =O(1)$). This `soliton-like' regime corresponds to the balance of the nonlinearities with the dispersion induced by second-order viscous effects, with the speed and amplitude of the solitary waves being functions of the logarithm of the aspect ratio $\ti \alpha$. Our study of the solitary-wave solutions has been followed by the construction of the TW branches of solutions corresponding to the experimental conditions, for which the average flow rate is the true control parameter, with the substrate thickness being determined by the solution itself. Although the substrate thickness $h_s$ and the maximum amplitude $h_{\rm max}$ both grow when TWs approach homoclinicity, the ratio of the two, $h_{\rm max}/h_s$, evolves in a manner that strongly depends on which instability mechanism is dominant. If the RP instability is dominant, $h_{\rm max}/h_s$ decreases as the wavenumber $k$ tends to zero, whereas if the K mode is dominant, $h_{\rm max}/h_s$ shows the opposite trend. This picture can be even more complex since the predominance of the instability modes can be exchanged by varying the periodicity of the waves and thus the substrate thickness (cf. Fig.~\ref{q0.151Rb0.23-h-Dbs}). The selected wave regime depends not only on the properties of the Nusselt flow at the inlet but also on the periodicity selected by the system. Indeed, the boundaries separating the different regimes are not only functions of $G\!o$ and $\ti \alpha$ but also functions of the thickness of the substrate, which is determined by the typical distance separating solitary-like waves. Therefore, the phase diagrams displayed in Fig.~\ref{domaine} must be taken with caution.
The wave selection process of a noise-driven falling film is the complex result of the linear amplification of inlet perturbations and the downstream nonlinear interaction mechanisms \cite[]{Cha95,Kal07,Pra11}. \begin{figure} \begin{center} \begin{psfrags} \psfrag{x (mm)}{$x$ (m)} \psfrag{h (m)}{$h$ (mm)} \psfrag{x}{{\large \color{white} $x$}} \psfrag{t}{{\large \color{white} $t$}} \subfigure[$t=0$~s]{ \includegraphics[height=0.45\textwidth,angle=-90] {Fig/snap_Rlc06-q03_shX_1.eps}}\hfil \subfigure[$t=4$~s]{ \includegraphics[height=0.45\textwidth,angle=-90] {Fig/snap_Rlc06-q03_shX_100.eps}}\\ \subfigure[$t=8$~s]{ \includegraphics[height=0.45\textwidth,angle=-90] {Fig/snap_Rlc06-q03_shX_200.eps}}\hfil \subfigure[$t=12$~s]{ \includegraphics[height=0.45\textwidth,angle=-90] {Fig/snap_Rlc06-q03_shX_300.eps}}\\ \subfigure[]{ \includegraphics[width=\textwidth,height=0.3\textwidth] {Fig/Rlc06-q03-spatio.eps}} \end{psfrags} \end{center} \caption{Simulation of the response of the film in the `soliton-like' regime. Initial conditions consist of two pulse-like perturbations of different amplitudes. Parameters are $\delta=0.18$, $\ti \alpha=1.9$ and $\eta=1.2$ (Rhodorsil silicon oil v1000, $q_{\rm N}=300$~mg/s and $R=0.89$~mm). (a-d) Snapshots of the film thickness at increasing times; (e) Spatio-temporal diagram. Vertical and horizontal ranges are $1.25$~m and $28$~s respectively. Elevations (depressions) of the free surface are coded in dark (light) grey. \label{Rlc06-q03} } \end{figure} It is noteworthy that in our previous study of the fibre problem~\cite[]{Ruy08} we showed that the wave selection process of the noise-driven wave dynamics down the fibre is strongly affected by viscous dispersion. This effect is generally weak in the experiments devoted to falling films on planar substrates, where the working fluids were often weakly viscous, e.g.~water~\cite[]{Ale85,Liu94,Tih06}.
In the case of films down fibres, the oils used in the experiments are much more viscous than water, which explains why viscous dispersion can be dominant and can promote a `soliton-like' regime. In this regime, and despite the dissipative nature of the flow, it is possible to observe the formation of solitons, \ie solitary waves whose shape and speed are not altered by collisions with other solitary waves (such waves are still dissipative but they share several common features with solitons in conservative systems~\cite[]{Chr95}). We explore such effects in Fig.~\ref{Rlc06-q03}, which depicts a simulation of the interaction between two solitons. Chosen parameters correspond to Rhodorsil silicon oil v1000 and $G\!o = 0.6$. Two pulse-like perturbations of different amplitudes are initially placed in the computational domain (cf. panel~a). The excess mass carried by these two perturbations is then drained behind the two pulses, creating two wave packets. The spatial extent of these packets grows in time due to the convective instability of the film~\cite[]{Dup07,Ruy08}. Panel~b shows the interaction of the second pulse with the mass ejected by the first pulse. The spatio-temporal diagram shown in panel~e indicates that the second pulse first overtakes this excess mass and then the first pulse, without absorbing them (cf. panels~c and d). The collision of the two pulses is accompanied by a phase shift but no notable modification of the speeds and amplitudes of the pulses, as in Hamiltonian systems: the initial wave profiles get superimposed as the waves collide and reappear as the waves move apart. Yet, a slow evolution of the amplitude and speed of the two solitons can be observed towards the speed $c= 7.5$~cm/s and amplitude $h_{\rm max}=3.3$~mm of the infinite-domain solitary-wave solution for the given set of parameters.
Our hope is that this new evidence of existence of solitons on a liquid film flowing down a fibre, a dissipative system as opposed to a Hamiltonian one, may motivate a renewed interest for the experimental investigation of the resulting wave dynamics and, in particular, on the role of viscous dispersion. \begin{acknowledgments} We acknowledge financial support through a travel grant supported by the Franco-British Alliance Research Partnership Programme and from the Multiflow ITN Marie Curie network funded by the European Commission (GA-2008-214919). The authors thank Imperial College and Laboratoire FAST for hospitality. \end{acknowledgments} \begin{appendix} \section{Coefficients of the model (\ref{model2shk})} \label{S-Coeffs} The expressions of the coefficients entering the averaged momentum balance (\ref{mom-shk}) consist of ratios of polynomials in $b$ and $\log(b)$, where $b=1 +\alpha h$ is the ratio of the total radius $R+h$ to the fibre radius $R$: \begin{subequations} \label{F-I} \begin{eqnarray} \label{def-phi} \phi &=& \left[3 \left((4 \log(b)-3) b^4+4 b^2-1\right)\right]/ \left[16 (b-1)^3\right]\,,\\ {F} &=& 3 {F}_a/[ 16 (b-1)^2 \phi {F}_b]\,,\\ \nonumber {F}_a &=& -301 b^8+622 b^6-441 b^4+4 \log(b) \left\{197 b^6-234 b^4 +6 \log(b) \right.\\ && \left. \times\left[16 \log(b) b^4-36 b^4+22 b^2+3\right] b^2 +78 b^2+4\right\} b^2 +130 b^2-10 \,,\\ {F}_b &=& 17 b^6+12 \log(b) \left[2 \log(b) b^2-3 b^2+2\right] b^4 -30 b^4+15 b^2-2\,,\\ G &=& G_a/[ 64 (b-1)^4 \phi^2 {F}_b]\,, \\ \nonumber G_a &=& 9 b \left\{ \vphantom{\left(b^2-1\right)^2} 4 \log(b) \left[-220 b^8+456 b^6-303 b^4 +6 \log(b) \left(61 b^6-69 b^4 \right. \right. \right. \\ && \nonumber \left. \left. \left. +4 \log(b) \left(4 \log(b) b^4-12 b^4+7 b^2+2\right) b^2+9 b^2+9\right) b^2+58 b^2+9\right] b^2 \right. \\ && \left. 
+\left(b^2-1\right)^2 \left(153 b^6-145 b^4+53 b^2-1\right)\right\}\,, \\ I &=& 64 (b-1)^5 \phi^2/[3 {F}_b]\,,\\ J &=& 3J_a/[128 (b-1)^4 \phi^2 {F}_b]\,,\\ \nonumber J_a &=& 9 \left\{\left(490 b^8-205 b^6-235 b^4+73 b^2-3\right) \left(b^2-1\right)^3 \right. \\ \nonumber && +4 b^2 \log (b) \left[2 b^4 \log (b) \left(72 \log (b) \left(2 \log (b) b^4 - 6 b^4+b^2+6\right) b^4 \right.\right. \\ && \left. \nonumber +(b-1) (b+1) \left(533 b^6-109 b^4-451 b^2+15\right)\right) \\ && \left.\left. -3 \left(b^2-1\right)^2 \left(187 b^8-43 b^6-134 b^4+17 b^2+1\right)\right]\right\}\,, \\ K &=& 3 K_a/[16 b^3 (b-1)^2 \phi {F}_b]\,,\\ K_a &=& 4 b^4 \log(b) \left(233 b^8-360 b^6+12 \log(b) \left(12 \log(b) b^4-25 b^4+12 b^2 +9\right) b^4 \right. \nonumber \\ && \left. +54 b^4+88 b^2-15\right)-\left(b^2-1\right)^2 \left(211 b^8-134 b^6-56 b^4+30 b^2-3\right)\,, \\ L &=& L_a/[ 8 b(b-1)^2 \phi {F}_b]\,, \\ \nonumber L_a &=& 4 b^2 \log(b) \left\{ 6 \log(b) \left(12 \log(b) b^4-23 b^4+18 b^2+3\right) b^4+(b-1) (b+1) \right. \\ && \left. \times\left(95 b^6-79 b^4-7 b^2+3\right)\right\} -\left(b^2-1\right)^2 \left(82 b^6-77 b^4+4 b^2+3\right)\,,\\ M &=& 3 + \left[24 \log(b) b^8-25 b^8+48 b^6-36 b^4+16 b^2-3\right] /[2 b^2 {F}_b]\,. \end{eqnarray} \end{subequations} We note that in Appendix~B in \cite{Ruy08} a small misprint can be found in the definition of the factor $J$ ---a missing factor of three--- that is here corrected. \section{Static drops on coated fibres: Computations and asymptotic analysis} \label{S-Static-Drops} The shape of an axisymmetric drop sitting on a vertical fibre has been computed numerically by Kumar and Hartland~\cite{Kum88} and determined analytically by Carroll~\cite{Car76} by neglecting gravity effects. When the contact angle of the liquid with the solid fibre vanishes, the analytical solution corresponds to an unduloid \cite[]{Del41,Pla73} that can be written parametrically using elliptic integrals of the first and second kind. 
However, the analytical solution is cumbersome to use and requires numerical evaluation of the different integrals involved. Hence, we choose to determine the solution by solving numerically the Laplace--Young equation parametrically rewritten as \cite[]{Kum88}: \begin{subequations} \label{Kum88-LY-eq} \begin{eqnarray} \frac{d\phi}{d{\tilde s}} &=& \left(\frac{d\phi}{d{\tilde s}}\right)_t + \frac{1}{R_t} - \frac{\sin\phi}{{\tilde r}}\,,\\ \frac{d {\tilde r}}{d{\tilde s}} &=& \cos \phi\,,\qquad \hbox{and} \qquad \frac{d {\tilde x}}{d{\tilde s}} = \sin \phi\,, \end{eqnarray} where the length scale is the fibre radius, ${\bar R}$. The radial and axial coordinates are denoted by ${\tilde r}$ and ${\tilde x}$, respectively. ${\tilde s}$ denotes the curvilinear arc length along the drop interface, $\phi$ is the angular inclination of the drop interface to the radial axis, and $\left(d\phi/ds\right)_t + 1/R_t$ denotes the mean curvature at the top of the drop, $s=0$. The set of equations is completed by the boundary conditions: \begin{equation} \phi = \pi/2\,,\qquad {\tilde r}=R_t\,, \qquad {\tilde x}=0\,, \qquad \hbox{at} \qquad {\tilde s}=0. \end{equation} The contact area ${\tilde A}$ separating the liquid and gas phases and the volume ${\tilde V}$ of the drop have been computed by solving: \begin{equation} \frac{d {\tilde A}}{d{\tilde s}} = 2\pi {\tilde r} \,\qquad \hbox{and} \qquad \frac{d{\tilde V}}{d{\tilde s}} = \pi({\tilde r}^2- R_t^2) \end{equation} \end{subequations} We have solved system (\ref{Kum88-LY-eq}) using the {\sc Auto07p} software~\cite{auto97}. We have adjusted $R_t$ to the coated fibre radius $1+\ti \alpha$, and the drop volume $V$ to the volume of the corresponding solitary wave, after subtracting the volume of the residual film. The speed of the quasi-static drops sliding on a vertical fibre can be estimated using the Landau--Levich--Derjaguin theory~\cite[]{LL42,Der43}.
For this purpose we divide a large-amplitude sliding drop into two separate regions: the `inner' one corresponds to the thin films at the upper and lower ends of the drop, where viscosity balances capillary forces, and the `outer' one is the quasi-static drop itself, which is governed by the Laplace--Young equation given above. In the inner region, $h\ll R$ and the equation to be solved reduces to (\ref{Serafim}), whose TW solutions are governed by: \begin{equation} \frac{h^3}{3}(1 + \beta h' + h''') - c(h-1) -1 =0 \end{equation} Following~\cite{Kal94}, we introduce the inner coordinates $X=c^{-1/3}\xi$, where $c\gg1$, and obtain at leading order \begin{equation} \label{Bretherton} h''' = 3(h-1)/h^3 \end{equation} which is the so-called `Bretherton equation' \cite[]{Bret61,Kal94}, originating from Bretherton's work on the motion of a long gas bubble in a capillary tube. In the upper region, the solution of the Bretherton equation with boundary conditions $h=1$, $h'=h''=0$ at $X=0$ is monotonic. The numerical solution of~(\ref{Bretherton}) yields $\lim_{X\to\infty} h''=1.34$. The speed of the drops is then selected by asymptotically matching the solutions of (\ref{Kum88-LY-eq}) and (\ref{Bretherton}) in the upper overlap region between the inner and outer domains. This can easily be done by imposing that the Laplace pressures corresponding to the two solutions are equal in this region, giving: \begin{equation} \label{Landau} \left(\frac{d\phi}{d{\tilde s}}\right)_t + \frac{1}{R_t} = 1 + 1.34 \frac{{C\!a}^{2/3}}{\ti \alpha}, \end{equation} where ${C\!a} = c_{\rm drops} B\!o = \mu {\bar c}_{\rm drops}/\sigma$ in the limit $\ti \alpha\ll1$. \end{appendix} \bibliographystyle{apsrev4-1}
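The quoted limit $\lim_{X\to\infty} h''\simeq 1.34$ is easy to verify numerically. The sketch below (our own illustration, not code from this work) integrates Eq.~(\ref{Bretherton}) along the monotonic solution leaving the flat film $h=1$, seeded on the unstable manifold of the linearization $h-1\propto e^{3^{1/3}X}$:

```python
# Integrate the Bretherton equation h''' = 3(h - 1)/h^3 and check that
# the curvature h'' saturates near the matching constant ~1.34.
from scipy.integrate import solve_ivp

lam = 3.0 ** (1.0 / 3.0)  # growth rate of the linearized mode (lam^3 = 3)
eps = 1e-6                # small seed amplitude on the flat film

def rhs(x, y):
    h, hp, hpp = y
    return [hp, hpp, 3.0 * (h - 1.0) / h**3]

# start on the unstable manifold: h = 1 + eps * exp(lam * X)
y0 = [1.0 + eps, lam * eps, lam**2 * eps]
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-10, atol=1e-12)

print(round(sol.y[2, -1], 2))  # 1.34
```

Far downstream $h$ grows quadratically, the right-hand side decays like $3/h^2$, and $h''$ converges to the constant used in the matching condition above.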
\section{\label{sec:Intro}INTRODUCTION\protect\\} The 7-and-8 TeV run (Run 1) of the LHC has been a great success, but no New Physics (NP) has emerged. The hint of NP in the forward-backward asymmetry of $B\to K^*\ell^+\ell^-$ decay~\cite{Wei:2009zv} from the B factories was eliminated by LHCb~\cite{Aaij:2011aa} early on. The mild hint at the Tevatron~\cite{Aaltonen:2007-2012, Abazov:2008-2012} for a \emph{large} $CP$ violating (CPV) phase $\phi_s$ in $B_s^0$ mixing was also swiftly eliminated by LHCb~\cite{LHCb:2011aa,LHCb:2011ab}, vanquishing the suggested possible correlation~\cite{Hou:2006mx} with the large direct CPV difference $\Delta {\cal A}_{K\pi} \equiv {\cal A}(B^+\to K^+\pi^0) - {\cal A}(B^0\to K^+\pi^-)$~\cite{Lin:2008zzaa}. Finally, the hot pursuit for $B_s^0\to \mu^+\mu^-$ at the Tevatron culminated in the recent observation by the LHCb~\cite{Aaij:2013aka} and CMS~\cite{Chatrchyan:2013bka} experiments, albeit again consistent with the Standard Model (SM). The combined LHC result for $B_q^0\to \mu^+\mu^-$ is~\cite{B2mumu-LHCbCMS}, \begin{align} {\cal B}(B_s^0\to \mu^+\mu^-) &= (2.8^{+0.7}_{-0.6}) \times 10^{-9}, \label{eq:BsmumuLHC} \\ {\cal B}(B_d^0\to \mu^+\mu^-) &= (3.9^{+1.6}_{-1.4}) \times 10^{-10}. \label{eq:BdmumuLHC} \end{align} At 6.2$\sigma$, the $B_s^0\to \mu^+\mu^-$ mode is established, though the significance expected in SM is 7.6$\sigma$. The $B_d^0\to \mu^+\mu^-$ mode deviates from the SM expectation of $(1.06 \pm0.09) \times 10^{-10}$~\cite{Bobeth:2013uxa} by 2.2$\sigma$, with central value more than $3$ times the SM value. Thus, $B_d^0\to \mu^+\mu^-$ should be keenly followed at the upcoming LHC Run 2 (13 and 14 TeV).
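For orientation, the quoted numbers can be checked with naive Gaussian arithmetic (our own back-of-the-envelope estimate, using the smaller experimental error and ignoring the asymmetry of the likelihood; the published 2.2$\sigma$ comes from the full likelihood):

```python
# Naive cross-check of the B_d -> mu mu numbers quoted above.
# Experimental: (3.9 +1.6/-1.4) x 10^-10; SM: (1.06 +/- 0.09) x 10^-10.
exp_central, exp_err_low = 3.9, 1.4   # in units of 10^-10
sm_central, sm_err = 1.06, 0.09

ratio = exp_central / sm_central                                 # ~3.7 x SM
pull = (exp_central - sm_central) / (exp_err_low**2 + sm_err**2) ** 0.5

print(round(ratio, 2), round(pull, 2))  # ~3.68 and ~2.0
```

The symmetric-Gaussian pull of roughly $2\sigma$ is consistent with the quoted 2.2$\sigma$ once the asymmetric errors are treated properly.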
The 1 fb$^{-1}$ LHCb update for $\phi_s$~\cite{Aaij:2013oba} is: \begin{equation} \phi_s = 0.01 \pm 0.07 \pm 0.01,\ \ \ {\rm (1\ fb}^{-1},\ {\rm LHCb)} \label{eq:phis_LHCb-1} \end{equation} which combines two results opposite in sign, \begin{align} \phi_s &= 0.07 \pm 0.09 \pm 0.01,\ \; {\rm (1\ fb}^{-1}\ J/\psi K\bar K,\ {\rm LHCb)} \label{eq:phis_LHCb-1-KK} \\ \phi_s &= -0.14^{+0.17}_{-0.16} \pm 0.01.\ \ {\rm (1\ fb}^{-1}\ J/\psi\,\pi\pi,\ {\rm LHCb)} \label{eq:phis_LHCb-1-pipi} \end{align} Eq.~(\ref{eq:phis_LHCb-1}) dominates the Heavy Flavor Averaging Group (HFAG) combination~\cite{Amhis:2012bh} of all experiments, \begin{equation} \phi_s = 0.00 \pm 0.07,\ \ \ {\rm (PDG2014)} \label{eq:phis_PDG2014} \end{equation} adopted by the Particle Data Group (PDG)~\cite{PDG14}, and is in good agreement with SM value of $\phi_s \simeq -0.04$. A preliminary result~\cite{CMS:2014jxa} of CMS, \begin{equation} \phi_s = -0.03 \pm 0.11 \pm 0.03,\ \ {\rm (20\ fb}^{-1},\ {\rm CMS)} \label{eq:phis_CMS-20} \end{equation} is not included in PDG2014, but is fully consistent. Also not making PDG2014 is the 3 fb$^{-1}$ update by LHCb for $B_s^0\to J/\psi\,\pi^+\pi^-$ with full Run 1 data~\cite{Aaij:2014dka}, \begin{equation} \phi_s = 0.070 \pm 0.068 \pm 0.008,\ \ {\rm (3\ fb}^{-1}\ J/\psi\,\pi\pi,\ {\rm LHCb)} \label{eq:phis_LHCb-3-pipi} \end{equation} with sign change from Eq.~(\ref{eq:phis_LHCb-1-pipi}), becoming same sign with Eq.~(\ref{eq:phis_LHCb-1-KK}). Because the analysis is done simultaneously with $\Delta\Gamma_s$ measurement, the $B_s^0\to J/\psi\,K\bar K$ mode took much longer. Intriguingly, it also switched sign~\cite{LHCb_1411}, \begin{equation} \phi_s = -0.058 \pm 0.049 \pm 0.006,\ \; {\rm (3\ fb}^{-1}\; J/\psi\, \phi,\; {\rm LHCb)} \label{eq:phis_LHCb-3-KK} \end{equation} and the combined 3~fb$^{-1}$ result is, \begin{equation} \phi_s = -0.010 \pm 0.039, \ \ \ \rm{(3~fb}^{-1},\ {\rm LHCb}) \label{eq:phis_LHCb-3} \end{equation} with no indication of New Physics. 
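As a rough consistency check (ours; it assumes the two measurements are uncorrelated, which is only approximately true), a naive inverse-variance average of Eqs.~(\ref{eq:phis_LHCb-3-pipi}) and (\ref{eq:phis_LHCb-3-KK}) comes close to the quoted combination, Eq.~(\ref{eq:phis_LHCb-3}):

```python
# Inverse-variance combination of the two 3 fb^-1 LHCb phi_s results,
# adding statistical and systematic errors in quadrature.
import math

def combine(results):
    # results: iterable of (central value, stat error, syst error)
    ws = [1.0 / (stat**2 + syst**2) for _, stat, syst in results]
    mean = sum(w * c for w, (c, _, _) in zip(ws, results)) / sum(ws)
    sigma = 1.0 / math.sqrt(sum(ws))
    return mean, sigma

mean, sigma = combine([(0.070, 0.068, 0.008),    # J/psi pi pi
                       (-0.058, 0.049, 0.006)])  # J/psi K K
print(round(mean, 3), round(sigma, 3))  # ~ -0.014 +/- 0.040
```

The small difference in the central value relative to Eq.~(\ref{eq:phis_LHCb-3}) is within what correlations and the full likelihood treatment can account for.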
A third measurement of interest by LHCb is the so-called $P_5^\prime$ anomaly. The significance (3.7$\sigma$), however, did not change from 1 fb$^{-1}$~\cite{Aaij:2013qta} to 3 fb$^{-1}$~\cite{P5'-Moriond}. Given that this is one out of many angular variables in $B^0\to K^{*0}\mu^+\mu^-$, it remains to be seen whether the ``anomaly'' is genuine. Unfortunately, it will take several years to accumulate and analyze an equivalent amount of data at Run~2. We note in passing the so-called $R_K$ anomaly~\cite{Aaij:2014ora} in lepton universality violation in $B^+\to K^+\ell^+\ell^-$ decays, which has a 2.6$\sigma$ deviation from the SM expectation. It may or may not be related to the $P_5^\prime$ anomaly. What are the prospects for Run 2? A total of $8$ fb$^{-1}$ or more data is expected by LHCb up to 2018. The data rate is much higher at CMS, but trigger bandwidth is an issue. The Belle~II experiment should be completed in this time frame in Japan. Though not particularly competitive in $\phi_s$ and $B_q^0\to \mu^+\mu^-$, it could crosscheck $P_5^\prime$. Given that the two former measurables correspond to $b \leftrightarrow s$ and $b\to d$ transitions, one involving CPV, the other not, there is one particular process that comes to mind: $K\to \pi\nu\bar\nu$ decays, which are $s\to d$ transitions. The neutral $K_L^0 \to \pi^0\nu\bar\nu$ decay, pursued by the KOTO experiment~\cite{KOTO} in Japan, is purely CPV. The charged $K^+\to \pi^+\nu\bar\nu$ mode is pursued by the NA62 experiment~\cite{NA62} at CERN. Both experiments run within a similar time frame. If one has indications for NP in $B_q^0\to \mu^+\mu^-$ and/or $\phi_s$, one would likely find NP in $K\to \pi\nu\bar\nu$, and \emph{vice versa}. An element of competition between high- and low-energy luminosity frontiers would be quite interesting.
In this letter we study the \emph{correlations} between the measurables $B_d^0\to \mu^+\mu^-$, $B_s^0\to \mu^+\mu^-$, $\phi_s$, and $K\to \pi\nu\bar\nu$ (especially $K_L^0 \to \pi^0\nu\bar\nu$), in the 4th generation (4G) model, which cannot address the $P_5^\prime/R_K$ anomalies. It was pointed out quite some time ago~\cite{Hou:2005yb} that 4G can bring about an enhanced $K_L^0 \to \pi^0\nu\bar\nu$, and now that KOTO is running, one should check whether this remains true. Although some may now find 4G extreme, our aim is an enhancement of the $B_d^0\to \mu^+\mu^-$ rate by a factor of three that still survives all \emph{flavor} constraints. The issue with 4G is the observation of a light Higgs boson, without the anticipated factor of 9 enhancement in cross section. On one hand it has been argued~\cite{MHK} that there still exists another interpretation of this 125 GeV boson, namely to identify it as a dilaton from a 4G theory with strong Yukawa interaction. On the other hand, the Higgs boson practically does not enter (i.e. is ``orthogonal'' to) low energy flavor changing processes, and, \emph{if} one discovers an enhanced $B_d^0\to \mu^+\mu^-$ decay~\cite{Hou:2013btm}, it may put some doubt on the Higgs nature of the observed 125 GeV particle. We view the issue of the interpretation of this boson as still open; it should be settled by 2018. Our 4G study serves to illustrate how New Physics in $B_q^0\to \mu^+\mu^-$, $\phi_s$, and $K\to \pi\nu\bar\nu$ might be accommodated. In what follows, we give the formulas and data inputs, then our numerical results, and end with some discussion. \section{Formulas and Data Input} We define the parameters $x_q = {m_q^2}/{M_W^2}$, $\lambda_q^{ds}\equiv V_{qd}V_{qs}^*$ ($q=u,c,t,t^\prime$), with \begin{align} V_{t^\prime d}^*V_{t^\prime s} & \equiv (\lambda_{t^\prime}^{ds})^* \equiv r_{ds}e^{i\phi_{ds}}.
\label{eq:rdsphids} \end{align} We adopt the parametrization of Ref.~\cite{Hou:1987hm} for the $4\times 4$ CKM matrix, with convention and treatment of Ref.~\cite{Hou:2005yb}. In particular, we assume SM-like values for $s_{12}$, $s_{23}$, $s_{13}$ and $\phi_{ub}\simeq \gamma/\phi_3$, with the following input: $|V_{us}|=0.2252\pm 0.0009$, $|V_{cb}|= 0.0409 \pm 0.0011$, $|V_{ub}^{\rm ave}|=(4.15\pm 0.49)\times 10^{-3}$, $\gamma/\phi_3 = (68^{+10}_{-11})^\circ$. This is a simplification, since we aim to observe trends rather than perform a full fit. We find that taking the ``exclusive'' measurement value for $|V_{ub}|$ allows a smaller enhancement range for $K_L\to \pi^0\nu\bar\nu$. Having a 4th generation of quarks brings in three new angles and two new phases. In this paper, we take \begin{equation} m_{t^\prime} = 1000~{\rm GeV}, \quad s_{34} \simeq {m_W}/{m_{t^\prime}} \simeq 0.08, \label{eq:tp_s34} \end{equation} for the sake of illustration, thereby fixing one of the angles. A second angle and one of the two phases are fixed by the discussion illustrated below. We are then left with two mixing parameters, which for our interest in $K\to \pi\nu\bar\nu$ decays we take to be $r_{ds}$ and $\phi_{ds}$ in Eq.~(\ref{eq:rdsphids}). Our choice~\cite{foot} of $m_{t^\prime} = 1000$~GeV is above the experimental bound~\cite{PDG14}, which is beyond the nominal~\cite{Chanowitz:1978uj} unitarity bound. Even with vector-like 4G quarks, the experimental bound has reached beyond 700~GeV~\cite{PDG14}. We adhere to sequential 4G to reduce the number of parameters. \begin{figure}[t!] { \includegraphics[width=70mm]{Fig-1.eps} } \vskip0.55cm \caption{ Update of Fig.~3(a) of Ref.~\cite{Hou:2013btm}, taking $|V_{ub}^{\rm ave}|$ and $m_{t'} = 1000$ GeV. The pink-shaded contours correspond to 1(2)$\sigma$ regions of $\Delta m_{B_d}$ allowed by $f_{B_d} = (190.5\pm 4.2)$ MeV while the green-shaded bands are for 1(2)$\sigma$ in $\sin2\beta/\phi_1 = 0.682 \pm 0.019$~\cite{Amhis:2012bh}.
Solid-blue lines are labeled $10^{10}\mathcal B(B_d\to \mu^+\mu^-)$ contours, with upper bound of 7.4~\cite{Aaij:2013aka} applied. Marked points S1, S2, S2$^\prime$ are explained in text. } \label{fig:b2d} \end{figure} We have suggested~\cite{Hou:2013btm} that an enhanced $B_d\to\mu^+\mu^-$ could indicate the presence of 4G, while suppression of $B_s \to \mu^+\mu^-$ could also be accommodated~\cite{Hou:2012xe}. These are supported by current data~\cite{B2mumu-LHCbCMS}, but more data is clearly needed. In Fig.~\ref{fig:b2d}, we update Fig.~3(a) of Ref.~\cite{Hou:2013btm} on the $r_{db}$--$\phi_{db}$ plane, where $V_{t^\prime d}^*V_{t^\prime b}\equiv r_{db}e^{i\phi_{db}}$. We use the FLAG~\cite{Aoki:2013ldr} average of $N_f =2+1$ lattice results $f_{B_d} = (190.5\pm 4.2)$ MeV, $f_{B_d}{\hat B_{B_d}}^{1/2} = (216\pm 15)$ MeV. We no longer take the ratio with $\Delta m_{B_d}$ for $B_d\to\mu^+\mu^-$ branching ratios, but update the constraints: $\sin2\beta/\phi_1 = 0.682\pm 0.019$ (HFAG~Winter~2014~\cite{Amhis:2012bh}), $\mathcal B(B_d\to\mu^+\mu^-) < 7.4\times 10^{-10}$ (95\% CL~limit of LHCb~\cite{Aaij:2013aka}). The latter is softer than the recent CMS and LHCb combination of Eq.~(\ref{eq:BdmumuLHC}). From Fig.~\ref{fig:b2d}, we shall consider two scenarios \begin{equation} r_{db}e^{i\phi_{db}}=0.00040 e^{i\,330^\circ}, \ 0.00045 e^{i\,260^\circ}, \label{eq:rdbphidb} \end{equation} marked as S1, S2 in Fig.~\ref{fig:b2d}, to illustrate \begin{equation} \mathcal B(B_d\to \mu^+\mu^-) \sim 3\times 10^{-10}, \ \; 1\times 10^{-10}, \ \; \end{equation} where we stay within 1$\sigma$ boundaries of \emph{both} $\Delta m_{B_d}$ (uncertainty in $f_{B_d}$) and $\sin2\beta/\phi_1$. $B_d\to \mu^+\mu^-$ is SM-like for S2, but carries a near maximal 4G CPV phase $\phi_{db}$. The point S2$^\prime$ will be discussed towards the end. For $b\to s$ observables, we update both formulas and input parameters of Ref.~\cite{Hou:2011fw}. 
For the CPV phase $\phi_s\equiv 2\Phi_{B_s}$ in $B_s$-$\bar B_s$ mixing, we use~\cite{Buras:2010pi} $2\Phi_{B_s} = \arg \Delta_{12}^s$ with $\Delta_{12}^s = (\lambda_t^{bs})^2 S_0(x_t) +2\lambda_t^{bs}\lambda_{t^\prime}^{bs}S_0(x_t,x_{t^\prime}) +(\lambda_{t^\prime}^{bs})^2 S_0(x_{t^\prime})$, where $\lambda_q^{bs} \equiv V_{qs}^*V_{qb}$ ($q=t,t^\prime$). We adopt the $\phi_s$ value of Eq.~(\ref{eq:phis_LHCb-3}) and impose $1(2)\sigma$ constraints. For $\mathcal B(B_s\to \mu^+\mu^-)$, because $\Delta\Gamma_s$ is sizable, experimental results should be compared with the $\Delta\Gamma_s$-corrected~\cite{DeBruyn2012} branching ratio denoted with a bar, \begin{equation} \bar{\mathcal B}(B_s\to \mu^+\mu^-) = \frac{1-y_s^2}{1+ \mathcal A_{\Delta \Gamma}^{\mu\mu}y_s} \mathcal B(B_s\to \mu^+\mu^-), \end{equation} where $y_s = \Delta\Gamma_s/2\Gamma_s=0.069\pm 0.006$ \cite{PDG14}, $A_{\Delta \Gamma}^{\mu\mu} = \,\cos[2\arg(C_{10}) -2\Phi_{B_s}]$, and~\cite{Buras:2010pi} \begin{align} &\mathcal B (B_s\to \mu^+\mu^-) = \tau_{B_s} \frac{G_F^2}{\pi}\left( \frac{\alpha}{4\pi\sin^2\theta_W} \right)^2 f_{B_s}^2m_\mu^2m_{B_s} \notag \\ & \hskip0.15cm \times\sqrt{1-{4m_\mu^2}/{m_{B_s}^2}}\,\eta_{\rm eff}^2 \left|\lambda_t^{bs}Y_0(x_t)+ \lambda_{t^\prime}^{bs}Y_0(x_{t^\prime})\right|^2. \end{align} We use $\eta_{\rm eff}=0.9882\pm 0.0024$ which is at NNLO for QCD and NLO for electroweak corrections~\cite{Buras:2013dea}, and we use the FLAG~\cite{Aoki:2013ldr} average of $N_f =2+1$ lattice results $f_{B_s} = (227.7\pm 4.5)$ MeV and $f_{B_s}{\hat B_{B_s}^{1/2}} = (266\pm 18)$ MeV, where the latter enters $B_s$-mixing. We use Eq.~(\ref{eq:BsmumuLHC}) and impose $1(2)\sigma$ experimental constraints, which is much larger than the hadronic uncertainty. We find that $\Delta m_{B_s}$ does not give further constraints in the parameter space of our interest, within hadronic uncertainty of $f_{B_s}$. 
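To make the size of this correction concrete (our own numeric note, not from the paper): in the SM limit, where both $\arg(C_{10})$ and $\Phi_{B_s}$ are small, $\mathcal A_{\Delta\Gamma}^{\mu\mu}\simeq 1$, and the prefactor relating $\bar{\mathcal B}$ to $\mathcal B$ collapses algebraically to $1-y_s$, a roughly 7\% effect:

```python
# DeltaGamma_s correction factor relating bar(B) and B for B_s -> mu mu,
# as written in the expression above.
ys = 0.069  # DeltaGamma_s / (2 Gamma_s), the value quoted in the text

def correction(A_DG, ys=ys):
    return (1.0 - ys**2) / (1.0 + A_DG * ys)

# SM-like case A_DG = 1: (1 - ys^2)/(1 + ys) = 1 - ys exactly
print(round(correction(1.0), 3))  # 0.931
```

For 4G points with a sizable phase $2\arg(C_{10})-2\Phi_{B_s}$, $\mathcal A_{\Delta\Gamma}^{\mu\mu}$ deviates from unity and the factor must be evaluated numerically.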
The ratio $\Delta m_{B_s}/\Delta m_{B_d}$ has reduced hadronic uncertainty, as can be read from $\xi \equiv f_{B_s}{\hat B_{B_s}^{1/2}}/f_{B_d}{\hat B_{B_d}^{1/2}} = 1.268 \pm 0.063$~\cite{Aoki:2013ldr}, hence provides a stronger constraint than $\Delta m_{B_s}$ or $\Delta m_{B_d}$ individually. For $K^+\to\pi^+\nu\bar\nu$ and $K_L\to\pi^0\nu\bar\nu$, we use the formulas of Ref.~\cite{Hou:2005yb} and update the input parameters: $m_t(m_t)=163$ GeV~\cite{Hoang:2008yj}, $\kappa_+ = (5.173\pm 0.025) \times 10^{-11}\times(|V_{us}|/0.225)^8$ and $\kappa_L = (2.231\pm 0.013) \times 10^{-10} \times(|V_{us}|/0.225 )^8$~\cite{Mescia:2007kn}, and $P_c(X) = 0.41\pm 0.05$~\cite{Buras:2004uu}. We impose $\mathcal B(K^+\to\pi^+\nu\bar\nu)_{\rm exp} < 3.35\times 10^{-10}$ (90\% C.L. from E949~\cite{Artamonov:2009sz}), implying the Grossman-Nir (GN) bound~\cite{Grossman:1997sk} \begin{align} \mathcal B (K_L\to\pi^0\nu\bar\nu) & < \frac{\kappa_L}{\kappa_+}\mathcal B (K^+\to\pi^+\nu\bar\nu) \nonumber \\ &\lesssim 1.4 \times 10^{-9}, \label{eq:GNbound} \end{align} which is stronger than the direct limit by E391a~\cite{Ahn:2009gb}. For the Short-Distance (SD) contribution to $K_L\to \mu^+\mu^-$, \begin{align} &\mathcal B(K_L\to \mu^+\mu^-)_{\rm SD} =\kappa_\mu|V_{us}|^{-10} \left[ {\rm Re} \lambda_c^{ds}\, |V_{us}|^{4} P_c(Y) \right. \notag \\ &\quad\quad\quad\quad \left. +\;{\rm Re} \lambda_t^{ds}\, \eta_Y Y_0(x_t) +{\rm Re} \lambda_{t^\prime}^{ds}\eta_Y Y_0(x_{t^\prime}) \right]^2, \end{align} we use $\kappa_\mu = (2.009\pm 0.017) \times 10^{-9}\times(|V_{us}|/0.225)^8$, $P_c(Y)= (0.115\pm 0.018)\times(0.225/|V_{us}|)^4$~\cite{Gorbahn:2006bm}, and $Y_0(x)$ as given in Ref.~\cite{Buras:2010pi}. With the common QCD correction factor $\eta_Y=1.012$, we adopt the estimate~\cite{Isidori:2003ts} $\mathcal B(K_L\to \mu^+\mu^-)_{\rm SD} \leq 2.5\times 10^{-9}$ as an upper bound. For $K\to\pi\pi$ indirect CP violation, we use~\cite{Buras:2010pi}
\begin{align} \varepsilon_K =&\frac{\kappa_\varepsilon e^{i\varphi_\varepsilon}}{\sqrt{2}(\Delta m_K)_{\rm exp}} {\rm Im}(M_{12}^K), \\ (M_{12}^K)^* =&\frac{G_F^2M_W^2}{12\pi^2}m_Kf_K^2\hat B_K \left[ (\lambda_c^{ds})^2 \eta_{cc}S_0(x_c) \right. \notag\\ &+(\lambda_t^{ds})^2 \eta_{tt}S_0(x_t) +2\lambda_c^{ds}\lambda_t^{ds} \eta_{ct}S_0(x_c,x_t) \notag\\ & +(\lambda_{t^\prime}^{ds})^2 \eta_{t^\prime t^\prime}S_0(x_{t^\prime}) +2\lambda_c^{ds}\lambda_{t^\prime}^{ds} \eta_{ct^\prime}S_0(x_c,x_{t^\prime}) \notag\\ &\left. +\;2\lambda_t^{ds}\lambda_{t^\prime}^{ds} \eta_{tt^\prime}S_0(x_t,x_{t^\prime}) \right], \end{align} where $(\Delta m_K)_{\rm exp}=(5.293\pm 0.009)\times 10^{9}~{\rm s}^{-1}$~\cite{PDG14}, $\varphi_{\varepsilon}= (43.52\pm 0.05)^\circ$~\cite{PDG14} and $\kappa_{\varepsilon}=0.94\pm 0.02$~\cite{Buras:2010pza}. We use $\eta_{cc}=1.87\pm 0.76$~\cite{Brod:2011ty}, $\eta_{tt}= 0.5765\pm 0.0065$~\cite{Buras:1990fn,Brod:2010mj}, $\eta_{ct}=0.496\pm 0.047$~\cite{Brod:2010mj} and approximate $\eta_{tt}=\eta_{tt^\prime}=\eta_{t^\prime t^\prime}$, $\eta_{ct}=\eta_{ct^\prime}$. Theoretical uncertainty is around $11$\%~\cite{Buras:2013ooa}, far larger than experimental error~\cite{PDG14}: $|\varepsilon_K| =(2.228\pm 0.011)\times 10^{-3}$ and ${\rm Re}(\varepsilon_K)>0$. We thus impose $\varepsilon_K$ to be within $\pm 11$\% from data. \begin{figure*}[t!] \centering {\includegraphics[width=80mm]{Fig2a.eps} \includegraphics[width=80mm]{Fig2b.eps} } \caption{ [left] Allowed region in $|V_{t'd}^*V_{t's}|$--$\arg (V_{t'd}^*V_{t's})$ (i.e. $r_{ds}$--$\phi_{ds}$) plane for Scenario S1, $r_{db}\,e^{i\phi_{db}} = 0.0004\,e^{i330^\circ}$ (enhanced $B_d \to \mu^+\mu^-$) and $\phi_s = -0.010\pm 0.039$ (Eq.~(\ref{eq:phis_LHCb-3})), where the constraint source for each boundary is indicated. 
The leading constraint is $B_s \to \mu^+\mu^-$, where 1(2)$\sigma$ region --- towards larger (smaller) BR in central region (4th-extending-to-1st quadrants) --- is (very) light shaded, separated by dashed lines, except: $K_L \to \mu^+\mu^-$ cuts off at upper left, as well as center-right, indicated by light-blue solid lines; 1(2)$\sigma$ allowed $\phi_s$ cuts off the 1(2)$\sigma$ allowed $B_s \to \mu^+\mu^-$ in right-center, plus a sliver in 1st quadrant. [right] The allowed region is further overlaid with $\varepsilon_K$ (blue-shaded), $\varepsilon'/\varepsilon$ (narrow green bands corresponding to $R_6$ in increasing order from 1.0, 1.5, 2.0, 2.5) and ${\cal B}(K_L\to \pi^0\nu\nu)$, labeled in $10^{-10}$ units. The illustration is for $m_{t'}= 1000$ GeV (Eq.~(11)). } \label{fig:fig2} \end{figure*} Direct CP violation in $K\to\pi\pi$ is affected even more by hadronic uncertainties. We use~\cite{Hou:2005yb} \begin{align} \frac{\varepsilon^\prime}{\varepsilon} =a\left[ {\rm Im}(\lambda_c^{ds})P_0 +{\rm Im}(\lambda_t^{ds})F(x_t) +{\rm Im}(\lambda_{t^\prime}^{ds})F(x_{t^\prime}) \right], \notag \end{align} where $a=0.92\pm 0.03$~\cite{Buras:2014sba} is a correction from $\Delta I=5/2$ transitions~\cite{Cirigliano:2003nn}. The function $F(x)$, which relies on hadronic parameters $R_6$ and $R_8$\footnote{ Furthermore, the relations between hadronic parameters $R_{6,8}$ and bag parameters $B_{6}^{(1/2)}$ and $B_8^{(3/2)}$ which are calculated in Lattice QCD, are $R_{6}=1.13B_{6}^{(1/2)} \left[\frac{114\, \text{MeV}} {m_s(m_c)+m_d(m_c)}\right]^2$ and $R_8=1.13B_8^{(3/2)}\left[\frac{114\, \text{MeV}} {m_s(m_c)+m_d(m_c)}\right]^2$, see \cite{Buras:2014sba}.} , is defined as $F(x)=P_X X_0(x)+P_Y Y_0(x)+P_Z Z_0(x)+P_E E_0(x)$ with $P_i=r_i^{(0)}+r_i^{(6)}R_6 +r_i^{(8)}R_8$ ($i=0,X,Y,Z,E$). 
We also update the numerical values of the coefficients $r_i^{(0)}$, $r_i^{(6)}$ and $r_i^{(8)}$, for $\alpha_s(M_Z)=0.1185$~\cite{PDG14}, given in Table~1 of Ref.~\cite{Buras:2014sba}, reversing the sign of $r_0^{(j)}$ as done in Ref.~\cite{Hou:2005yb}. We take $R_8 = 0.7$ from lattice~\cite{Blum:2012uk}, with the translation by Ref.~\cite{Buras:2014sba}. There is still no reliable result from lattice QCD for $R_6$, so we treat it as a parameter~\cite{Hou:2005yb,BijnensPrades}. That is, for each value of $R_6=1.0, 1.5, 2.0, 2.5$, we require $\varepsilon^\prime/\varepsilon$ to agree within the $1\sigma$ experimental error~\cite{PDG14}, \begin{align} &\frac{\varepsilon^\prime}{\varepsilon} \simeq {\rm Re}\left( \frac{\varepsilon^\prime}{\varepsilon}\right) = (1.66\pm 0.23)\times 10^{-3}. \end{align} Comments on other potentially important observables are in order. In contrast to $\Delta m_{B_{d,s}}$, $\Delta m_K$ is polluted by Long-Distance (LD) effects. We have checked that there is no significant change of the SD part from the SM value, and $(\Delta m_K)_{\rm SD}$ is still below the measured value. $D^0$-$\bar D^0$ mixing is also subject to LD effects. We checked that the SD contribution to the mixing amplitude $M_{12}^D$ from $b^\prime$ (with $m_{b^\prime}\sim m_{t^\prime}$) could be enhanced up to 3 times the SM value in the parameter space of our interest, but it is still well below the measured value of $\Delta m_D$. $B\to K^{(*)}\mu^+\mu^-$ observables are subject to precise measurements at the LHC and severely constrain NP effects. We checked that the 4G $t'$ effects on $C_9$ are within 5\% of the SM value ($\sim 4.3$) in our parameter space. The 4G effects on $C_{10}$ can be as large as unity in some part of the target parameter space. However, adopting the model independent constraint in Ref.~\cite{Altmannshofer:2014rta}, we checked that the changes are within $2\sigma$ for various modes. It can explain neither the $P_5^\prime$ nor the $R_K$ anomaly.
\section{Results} To illustrate the connection between $B_d\to\mu^+\mu^-$ and $K_L \to \pi^0\nu\bar\nu$, we explore two scenarios (see Fig.~1): \begin{itemize} \item Scenario S1: \ \ $r_{db}e^{i\phi_{db}} = 0.00040\, e^{i\,330^\circ}$\\ $\mathcal B(B_d\to\mu^+\mu^-)\gtrsim 3\times$ SM, with a generic CPV phase $\phi_{db}$; \item Scenario S2: \ \ $r_{db}e^{i\phi_{db}} = 0.00045\, e^{i\,260^\circ}$\\ $\mathcal B(B_d\to\mu^+\mu^-) \sim$ SM, with $\phi_{db}$ near maximal CPV. \end{itemize} With the formulas and data input given in Sec.~II, we plot in Fig.~2[left] the region in the $|V_{t'd}^* V_{t's}|$--$\arg(V_{t'd}^* V_{t's})$ or $r_{ds}$--$\phi_{ds}$ plane allowed by various constraints for S1. The golden-hued (very) light shaded regions are for 1(2)$\sigma$ of the $B_s \to\mu^+\mu^-$ mode. Other constraints, labeled by the process, cut in at certain regions: ${\cal B}(K_L \to \mu\mu)_{\rm SD}$ at the upper-left corner, and just right of center; $\phi_s = -0.049(-0.088)$ at 1(2)$\sigma$ cuts off near the center of the right-hand side, plus a tiny sliver in the first quadrant. The remaining 1$\sigma$ contours for $B_s \to\mu^+\mu^-$ correspond to $3.5\times 10^{-9}$ (central-left region) and $2.2\times 10^{-9}$ (4th quadrant extending into 1st quadrant) in rate, and the 2$\sigma$ contours to $4.3\times 10^{-9}$~\cite{B2mumu-LHCbCMS} from 1st to 2nd quadrant and $1.6\times 10^{-9}$ in the 4th quadrant only. We find that $R = \Delta m_{B_d}/\Delta m_{B_s}$ does not provide further constraint within 2$\sigma$. The allowed region of Fig.~2[left] is further overlaid, in Fig.~2[right], with the constraints of $\varepsilon_K$ and $\varepsilon'/\varepsilon$, together with $K_L \to \pi^0\nu\nu$ contours in solid red, labeled by BR values in $10^{-10}$ units. Note that ``15'' is just above the nominal GN bound of Eq.~(\ref{eq:GNbound}), while the region $\lesssim$ SM strength is marked by red-dash lines with the label ``SM''.
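The nominal GN bound of Eq.~(\ref{eq:GNbound}) follows directly from the $\kappa_L/\kappa_+$ ratio and the E949 limit quoted in Sec.~II; a minimal numerical check (input values as quoted there):

```python
# Sketch: Grossman-Nir bound from kappa_L/kappa_+ and the E949 limit
# (all input values taken from Sec. II, for |V_us| = 0.225).
kappa_plus = 5.173e-11     # for K+ -> pi+ nu nubar
kappa_L    = 2.231e-10     # for K_L -> pi0 nu nubar
br_kplus_limit = 3.35e-10  # 90% C.L. upper limit from E949

gn_bound = (kappa_L / kappa_plus) * br_kplus_limit  # ~1.4e-9, as in Eq. (GNbound)
```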
The $\varepsilon_K$ constraint, plotted in shaded blue with theoretical error (the experimental error is negligible), prefers small $|V_{t'd}^* V_{t's}|$ values, except for two ``chimneys'': one where the phase of $V_{t'd}^* V_{t's}$ is near $180^\circ$, and another tilted into the fourth quadrant. The $\varepsilon'/\varepsilon$ constraint is more subtle, because of the poorly known~\cite{BijnensPrades} hadronic parameter $R_6$ (we fix $R_8 \simeq 0.7$~\cite{Blum:2012uk}). Following Ref.~\cite{Hou:2005yb}, we illustrate with $R_6 = 1.0,\ 1.5,\ 2.0,\ 2.5$, shown as green bands in ascending order, with widths set by the experimental error of $\varepsilon'/\varepsilon$. \begin{figure*}[t!] \centering {\includegraphics[width=80mm]{Fig3a.eps} \includegraphics[width=80mm]{Fig3b.eps} } \caption{ Scenario S2, $r_{db}\,e^{i\phi_{db}} = 0.00045\,e^{i\,260^\circ}$ (SM-like $B_d \to \mu^+\mu^-$): [left] Similar to Fig.~2a, where for the 4th quadrant of interest, the $2\sigma$ dashed line is for $B_s \to \mu^+\mu^-$ and the solid line is for $K_L \to \mu^+\mu^-$ (plus a bit from $K^+ \to \pi^+\nu\bar\nu$); [right] Similar to Fig.~2b, with $\varepsilon_K$ (blue-shaded), $\varepsilon^\prime/\varepsilon$ (green bands) and $K_L \to \pi^0\nu\bar\nu$ (red labelled contours) overlaid. } \label{fig:fig3} \end{figure*} First, we observe that the $\varepsilon_K$ and $\varepsilon'/\varepsilon$ constraints disfavor the possible enhancements for $K_L \to \pi^0\nu\nu$ when $\arg(V_{t'd}^* V_{t's})$ is in the first two quadrants. Second, if one keeps all constraints to 1$\sigma$, then $K_L \to \pi^0\nu\nu$ could reach a factor $\sim 7$ above SM, with modest $R_6$ values. However, if one allows larger $R_6$ (up to 2.5) as well as 2$\sigma$ variations, the $\varepsilon_K$ ``chimney'' in the 4th quadrant allows $K_L \to \pi^0\nu\nu$ to be enhanced up to 1/3, even 1/2, of the GN bound. There is a correlation between larger $K_L \to \pi^0\nu\nu$ and smaller $B_s\to\mu\mu$.
If KOTO observes $K_L \to \pi^0\nu\nu$ shortly after reaching below the GN bound, a rather large $R_6$ value could be implied. One argument for larger $K_L \to \pi^0\nu\nu$ or smaller $B_s\to\mu\mu$ comes from larger values of $|V_{t'd}^* V_{t's}|$: since $|V_{t'd}| \sim 0.005$, having $|V_{t's}| > |V_{t'd}|$ would demand $|V_{t'd}^* V_{t's}| \gtrsim 0.25\times 10^{-4}$. We see the prowess, still, of the various kaon measurements, with $K_L \to \pi^0\nu\nu$ as the main frontier ($K^+\to\pi^+\nu\bar\nu$ did not enter the discussion), on a par with the ongoing $B_{d,\,s} \to\mu\mu$ and $\phi_s$ measurement efforts. For Scenario S2, where $B_d\to\mu\mu$ is taken as consistent with SM, but $\phi_{db} \equiv \arg(V_{t'd}^*V_{t'b})\simeq 260^\circ$ is close to the maximal CPV phase (in our convention, $V_{t'b}$ is real) with $K_L \to \pi^0\nu\bar\nu$ in mind, we plot in Fig.~3[left] the results corresponding to Fig.~2[left]. The regions marked by long dashed lines and very lightly shaded are all beyond the 1$\sigma$ level, indicating more tension, including in $R = \Delta m_{B_d}/\Delta m_{B_s}$. The $B_s\to\mu\mu$ constraint at 2$\sigma$ is interspersed with the ${\cal B}(K_L \to \mu\mu)_{\rm SD}$ constraint, plus short segments from $K^+\to \pi^+\nu\nu$. As in Fig.~2[right], we overlay the constraints of $\varepsilon_K$, $\varepsilon'/\varepsilon$, as well as $K_L \to \pi^0\nu\nu$ contours, in Fig.~3[right]. Again, $K_L \to \pi^0\nu\nu$ cannot get enhanced in the first two quadrants. For the blue-shaded ``chimney'' in the 4th quadrant, as the $R_6$ value rises, $K_L \to \pi^0\nu\nu$ could get enhanced even up to the GN bound, but $B_s\to\mu\mu$ would become relatively suppressed, and there is some tension with the SD contribution to $K_L \to \mu\mu$. Note that ${\cal B}(B_d\to\mu\mu) \sim$ SM in this case. Here, having $|V_{t's}| > |V_{t'd}|$ would demand $|V_{t'd}^* V_{t's}| \gtrsim 0.32\times 10^{-4}$, hence favoring larger $K_L \to \pi^0\nu\nu$.
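The $|V_{t'd}^* V_{t's}|$ thresholds quoted for S1 and S2 are simply $|V_{t'd}|^2$. A small sketch, under the assumption (ours, not stated in the text) that $|V_{t'b}|$ is common to both scenarios so that $|V_{t'd}|$ scales with $r_{db}$:

```python
# Sketch: the threshold for |V_t's| > |V_t'd| is |V_t'd|^2.
# Assumes |V_t'b| is the same in S1 and S2, so |V_t'd| scales with r_db
# (an assumption made for this illustration only).
v_td_s1 = 0.005                        # |V_t'd| quoted in the text for S1
v_td_s2 = v_td_s1 * 0.00045 / 0.00040  # rescaled by r_db(S2)/r_db(S1)

thr_s1 = v_td_s1**2  # 0.25e-4, as quoted for S1
thr_s2 = v_td_s2**2  # ~0.32e-4, as quoted for S2
```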
We have marked a point S2$^\prime$ in Fig.~1, which has same $\phi_{db} \simeq 260^\circ$ as S2, but enhances $B_d\to\mu\mu$ by a larger $r_{db} \equiv |V_{t'd}^*V_{t'b}| \simeq 0.00075$. The trouble with S2$^\prime$ is that $\Delta m_{B_d}/\Delta m_{B_s}$ ratio becomes inconsistent at 2$\sigma$ level, which we do not consider as viable. However, from S2 towards S2$^\prime$, one could enhance $B_d\to\mu\mu$ while $K_L \to \pi^0\nu\nu$ is more easily enhanced up to GN bound. The cost would be some tension in $\Delta m_{B_d}/\Delta m_{B_s}$. In all these discussions, $\phi_s$ is well within range of the 3 fb$^{-1}$ result of LHCb, Eq.~(\ref{eq:phis_LHCb-3}). \section{\label{sec:Discussion} \boldmath Discussion and Conclusion\protect\\} We are interested in the correlation between $B_d\to \mu^+\mu^-$ and $K_L\to \pi^0\nu\bar\nu$ in 4G, as constrained by $B_s \to \mu\mu$ and $\phi_s$. Scenario S1 illustrates enhanced $B_d\to \mu^+\mu^-$ with generic $V_{t'd}^*V_{t'b}$. Every measurement other than $B_d\to \mu^+\mu^-$ would be close to SM expectation, and a mild enhancement of $K_L \to \pi^0\nu\nu$ is possible. But it would take some while for KOTO to reach this sensitivity. Larger $K_L \to \pi^0\nu\nu$ correlates with smaller $B_s\to\mu^+\mu^-$, as well as larger hadronic parameter $R_6$. The $\phi_s$ constraint basically suppresses the phase of $V_{t's}^*V_{t'b}$. It could happen that $B_d\to \mu^+\mu^-$ ends up SM-like, which is illustrated by Scenario S2. In the 4G framework that accounts (within 1$\sigma$) for the $\sin2\beta/\phi_1$ ``anomaly'', this occurs when $\phi_{db} \equiv \arg(V_{t'd}^*V_{t'b})$ phase is near maximal, which is of interest for enhancing $K_L \to \pi^0\nu\nu$, a purely CPV process. We find that $K_L \to \pi^0\nu\nu$ can be enhanced up to practically the GN bound at the cost of large $R_6$, while staying within the $\phi_s$ constraint. There is the same correlation of larger $K_L \to \pi^0\nu\nu$ for smaller $B_s\to \mu^+\mu^-$. 
While the S2$^\prime$ point would push $\Delta m_{B_d}/\Delta m_{B_s}$ beyond the 2$\sigma$ tolerance, some $|V_{t'd}^*V_{t'b}| \equiv r_{db}$ value below 0.00075 could still enhance $B_d\to \mu^+\mu^-$ a bit from SM, while $K_L \to \pi^0\nu\nu$ can more easily saturate the Grossman-Nir bound, with the implication that $K^+\to \pi^+\nu\bar\nu$ is towards the large side allowed by E949, $B_s \to \mu\mu$ is visibly suppressed, while $R_6$ must be sizable. This would clearly be a bonanza situation for faster discovery! We have used 4G for illustration~\cite{foot}, since it supplies $V_{t's}$ and $V_{t'd}$ that affect $b\to s$ and $b\to d$ transitions, and induces correlations with $s\to d$ transitions. It is generally viewed that the fourth generation is ruled out by the SM-like Higgs boson production cross section. But we have argued~\cite{Hou:2013btm} that the Higgs boson does not enter the low energy processes discussed here, hence these processes are independent \emph{flavor} checks. Furthermore, loopholes exist for the SM-Higgs interpretation~\cite{MHK}. If one does not accept 4G, we stress that $B_d\to \mu^+\mu^-$ may well turn out to have an enhanced rate. Whatever new flavor physics one resorts to, there is the myriad of constraints of Sec.~II. We believe no model can survive intact and ``without blemish''~\cite{Mimura}. Thus, the modes $B_{d,\,s}\to \mu^+\mu^-$, $\phi_s$ and $K_L \to \pi^0\nu\nu$ provide ``pressure tests'' of our understanding of flavor and $CP$ violation, where genuine surprises may emerge. Though differences must exist, we believe there would be correlations between the above four modes in any New Physics model with a limited set of new parameters. The NA62 experiment has started running~\cite{NA62}. If $K^+ \to \pi^+\nu\nu$ turns out to be above the 90\% CL limit from E949, the GN bound for $K_L \to \pi^0\nu\nu$ moves up, making things more interesting for KOTO, where the aim~\cite{KOTO} for the 2015 run is to reach the GN bound around $1.4\times 10^{-9}$.
In conclusion, enhanced $B_d^0 \to \mu^+\mu^-$ could correlate with enhanced $K_L \to \pi^0\nu\bar\nu$ up to the Grossman-Nir bound in the 4th generation model. $B_s^0 \to \mu^+\mu^-$ becomes somewhat suppressed, with CPV phase $\phi_s \simeq 0$. Together with $K^+\to\pi^+\nu\bar\nu$, these measurements would provide ``pressure tests'' of our understanding of flavor and $CP$ violation for any New Physics model. They should be followed earnestly in parallel to the scrutiny of the nature of the 125 GeV boson at LHC Run~2. \vskip0.3cm \noindent{\bf Acknowledgement}. WSH is supported by the Academic Summit grant NSC 103-2745-M-002-001-ASP of the National Science Council, as well as by grant NTU-EPR-103R8915. MK is supported under NTU-ERP-102R7701 and NSC 102-2112-M-033-007-MY3. FX is supported under NSC 102-2811-M-002-205, as well as by NSFC under grant No.~11405074. We thank T. Yamanaka for discussions that stimulated this work. \vskip0.1cm \noindent{\bf Note Added}. During the revision, the long-awaited result for $B_6^{(1/2)}$ appeared~\cite{Buras:2015yba}, extracted from a new lattice calculation carried out by the RBC-UKQCD collaboration~\cite{Bai:2015nea}, which indicates a small $R_6$.
\section{Introduction \label{introduction} } The Caltech Faint Galaxy Redshift Survey (henceforth CFGRS) is designed to measure the properties of field galaxies in the redshift interval $0.3\lesssim z\lesssim1.3$. It uses complete samples to a fixed limiting magnitude in a particular bandpass within a small solid angle on the sky. Spectra are obtained for every object in the sample with the Low Resolution Imaging Spectrograph (henceforth LRIS) (Oke {\it et al.\/}\ 1995) on the 10~m Keck Telescope. The defining features of this program that distinguish it from existing and ongoing or planned surveys, such as the CfA1 Survey (\cite{huchra83}), the Las Campanas Redshift Survey (\cite{shectman96}), the Canada-France Redshift Survey (henceforth CFRS; Lilly {\it et al.\/}\ 1995a,b\markcite{lilly95a,lilly95b}), the Autofib survey (\cite{ellis96}), the Century Survey (\cite{geller97}), the ESO slice survey (\cite{vettolani97}), the Sloan Digital Sky Survey (\cite{gunn93}), the Deep Survey planned by the Lick group (\cite{koo98}) and the 2dF Survey (\cite{colless98}), are that it reaches fainter magnitudes ($K \sim 20$~mag, $R \sim 24$~mag) and that completeness is emphasized rather than sparse sampling over a large field. The closest counterpart to this approach is the work of Cowie {\it et al.\/}\ (1996\markcite{cowie96}), although we note that the present survey covers a larger solid angle at the survey limit. See Koo \& Kron (1992\markcite{koo92}) and Ellis (1997\markcite{ellis97}) for reviews of the subject. There are four limitations to ``pencil beam'' redshift surveys of the type described in the present paper when addressing issues of galaxy evolution. The first is that the faintest fluxes at which sources can be observed in a complete redshift survey are still over four magnitudes brighter than the fluxes of the faintest detectable sources in imaging data.
The second is that the total number of sources and total sky area are both limited by available telescope time because the faintest possible sources are being studied. This is particularly relevant to attempts to characterize large scale structure. The third is that it is difficult to measure spectroscopic redshifts in the range $1.3<z<2.3$---the lower limit is set by the [O~II] line at 3727\AA\ moving out of the optical regime, the upper limit by when Ly$\alpha$ enters it. This may be a crucial period in the formation and evolution of normal galaxies (\cite{madau98}). The final limitation is the difficulty of generalizing the observed properties in a small field (which may contain unique features) to the properties of the field galaxy population as a whole. This paper presents spectroscopic results from one field located at J005325+1234, the central region of which is part of the Medium Deep Survey (\cite{griffiths94}) and among the deepest fields imaged with HST prior to the Hubble Deep Field. The field measures $2 \times 7.3$~arcmin$^2$ with a statistical sample containing 195 infrared-selected objects complete to $K = 20$~mag, of which 24 are spectroscopically confirmed Galactic stars and 32 cannot be assigned spectroscopic redshifts (21 of these do have spectra). There are 13 additional objects with redshifts (including two more stars) that are just outside the field boundary or are within the field but fainter than the sample limit. This paper is structured as follows. The sample is defined in \S\ref{sampledef}, which is followed by a description of the spectroscopic observations and redshift determinations in \S\ref{redshiftdet}. The galaxy spectra are assigned to spectroscopic and quality classes, a procedure discussed in \S\ref{specclass}.
\section{Sample Definition \label{sampledef} } The sample, defined in Pahre {\it et al.\/}\ (1998\markcite{pahre98apjs}), is selected by the criterion $K_s < 20$~mag within a $2 \times 7.3$~arcmin$^2$ field centered at 00 53 23.20 +12 33 58 (J2000). In an effort to cut down the thermal background of the standard Johnson $K$ filter, which has a central wavelength of 2.2$\mu$m and FWHM of 0.4$\mu$m, there are at least two non-standard filters in this same atmospheric window in common use: the $K^{'}$ filter of Wainscoat \& Cowie (1992\markcite{wainscoat92}) and the $K_s$ ($K$--short) filter of Persson (private communication) and Skrutskie {\it et al.\/}\ (1999). The latter is used here, and covers at half maximum transmission the wavelength range $2.00 < \lambda < 2.32 \mu$m. Throughout this series of papers, unless otherwise specified, we use $K$ (denoting the standard Johnson filter) and $K_s$ (denoting the $K$--short filter) interchangeably, although the appropriate $K_s$ transmission curve is used when deriving the $k$--corrections in Cohen {\it et al.\/}\ (1998). This sample contains 195 objects, and we refer to this as the ``main'' sample. It includes 24 spectroscopically confirmed Galactic stars. The remaining 171 objects constitute the ``galaxy'' sample. We were able to assign redshifts to 139 of these galaxies; this is the ``redshift'' sample. Redshifts were also obtained for several objects that either had $K > 20$~mag or lie just outside the boundary of the sample. A few objects which are very faint at both $R$ and $K$ turned up serendipitously in slitlets intended for nearby brighter objects. Adding these 13 additional objects to the main sample defines the ``total'' sample of 208. The photometric data of Pahre {\it et al.\/}\ (1998\markcite{pahre98apjs}) are already corrected for reddening by the ISM in the Galaxy. The variation in $E(B-V)$ across the small field of our survey is insignificant.
As is conventional, we ignore reddening internal to the galaxies themselves and from the IGM. \section{Redshift Determinations \label{redshiftdet} } \subsection{Data Acquisition and Reduction \label{dataacquire} } The objects in this sample were observed using the LRIS at the Keck Observatory beginning in October 1994 and ending in January 1998. The LRIS detector is a $2048 \times 2048$ pixel Tektronix CCD. Its dual amplifier readout mode was normally used to save time. Observations were carried out even when the weather was not photometric. Fifteen slit masks were used. Each mask has 25 to 33 slitlets and many objects were observed more than once. A few of the brightest galaxies were selected as aids to align the masks, and hence were observed many times. Except for the last three masks (the 1997 and 1998 data), the 300 g/mm grating was used with 1.1~arcsec wide slits. This produces a scale of 2.46 \AA/pixel and a resolution of 10\AA. (This is $\sim$4 times higher spectral resolution than that of the CFRS.) The spectral coverage is $\sim$5000\AA, with the central wavelength depending on the location of the object on the sky with respect to the centerline of the slitmask as well as on the grating tilt. Two 3000 sec exposures were obtained for each of these masks. The last three masks, containing most of the faintest objects in the sample, were observed with the 155 g/mm grating with 1.0~arcsec wide slits giving a spectral resolution of 20\AA. Two or three 1800 sec exposures were obtained for each of these slitmasks. Each of these lower dispersion spectra was slightly shifted in central wavelength (i.e., dithered spectrally between exposures). This forces a more complex reduction, but better signal-to-noise spectra are achieved. Full spectral coverage from the UV through 10,000\AA\ is achieved with this grating for essentially all objects at the price of lower dispersion. These multi-slit spectra were reduced in a standard way. 
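As a consistency check, the quoted $\sim$5000\AA\ coverage of the 300 g/mm setup is just the detector width times the dispersion (a sketch using the numbers above):

```python
# Sketch: spectral coverage of the 300 g/mm setup, from detector width x dispersion
# (both values taken from the text).
n_pixels   = 2048   # width of the Tektronix CCD in pixels
dispersion = 2.46   # Angstrom per pixel with the 300 g/mm grating

coverage = n_pixels * dispersion  # ~5038 A, i.e. the ~5000 A quoted above
```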
The bias (actually right and left amplifier bias levels) was subtracted. Then, the relevant pieces were cut out of each of the multi-slit spectra for a particular object (and out of the flat field calibration exposure), cosmic rays were removed, the subsets were flattened and the bowing of the spectra (the S-distortion) was removed. When necessary, the distortion in the vertical direction along the slit, which manifests itself as tilted night sky lines, was also removed. The night sky lines within the spectra themselves were used to define the wavelength scale; a third order polynomial fit is sufficient. This means in effect that there are no ``arc'' lines bluer than 5199\AA. Sometimes the 5199\AA\ night sky emission line could not be detected, in which case 5577\AA\ became the bluest night sky line. Sky subtraction was accomplished by fitting a second order polynomial to a range of pixels above and below the object spectrum. Linear fits were used when necessary, particularly when the object was near the top or bottom edge of the slitlet. The spectra were reduced by JGC using Figaro (\cite{shortridge88}) scripts and about 20\% were also done by DWH using IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc.\ (AURA) under cooperative agreement with the National Science Foundation.} scripts. JGC measured all the redshifts uniformly, including remeasurement and in some cases rereduction of the spectra done by DWH as well. The redshifts were measured by visually determining the wavelengths of spectral features from plots of the spectra over various wavelength ranges on an interactive display. The assignment of rest wavelengths to the observed spectral features was straightforward for the brighter objects and for those objects with strong emission lines, but became more difficult for the faint objects showing only absorption line spectra.
While a manual redshift determination may seem outmoded in light of more automatic techniques such as that developed by Glazebrook, Offer, \& Deeley (1998\markcite{glazebrook98}) for the 2dF or that of Kurtz \& Mink (1998\markcite{kurtz98}) for the CfA surveys, in this situation it is more appropriate. Here the objects are very faint relative to the background sky, the SNR is low, the sky subtraction is difficult, the exposures are long (thus increasing the number of cosmic ray hits), and most importantly we do not fully understand what features to expect in the spectra of such faint and distant galaxies. Construction of a set of templates for a machine based redshift search from (or even for) this data set would be impossible. The spectra have been fluxed using long slit observations of standard stars from Oke (1990\markcite{oke90}) taken with the same LRIS configuration as was used for the multi-slits. The absolute scale of a fluxed spectrum should be regarded with caution, as the nights were not all photometric, there are substantial slit losses for the more extended objects, and slitmask alignment causes an additional light loss which is variable from mask to mask and from object to object on a given mask. However, the {\it shape} of the continuum for a particular object should be more or less correct, and is often quite useful. \subsection{Redshift Measurements \label{redshiftdeterminations} } Redshifts were measured for each object in each of the slitmasks. Records were also kept of the results for any serendipitous objects that fell onto slits for other targeted galaxies. At the end, the results were compiled, and the very small number of objects with multiple observations that showed discrepancies were studied more carefully to resolve them. This involved summing all the spectra together, examining the wavelength range of each of the spectra, etc. 
The most difficult cases involved objects that show no emission features, but only absorption lines which do not correspond to the standard pattern of Balmer lines, H+K \ion{Ca}{2} doublet, etc. seen in the region of the 4000\AA\ break. Since the 4000\AA\ region is redshifted into the strong night sky emission lines at the red end of the optical bandpass for $z > 1$, it was suspected that for such objects $z > 1$. To deal with these particular cases, the spectra were compared directly with spectra of high redshift galaxies (\cite{cowie95}; \cite{steidel96}), the sum of spectra of absorption line galaxies in a cluster at $z$ = 1.50 kindly supplied by J.~B.~Oke, as well as the sums of spectra of galaxies in the higher redshift peaks in the present sample. The library of mean ultraviolet spectral energy distributions for stellar groups from IUE spectra (\cite{fanelli92}) and those of model galaxies from the grid of Bruzual \& Charlot (1993\markcite{bruzual93}) were also examined. The absorption features in the spectra of many of those objects suspected to be at $z > 1$ are real and repeatable. The uncertainty in the redshifts arises from limited knowledge of this spectral region (for which the 2800\AA\ region is shifted into the optical) and, when there are only two certain features in a spectrum, both absorption lines, from a lack of confidence as to what the correct identification should be. A good example is D0K42 (which has $R_{obs} = 23.12$, $R-K = 5.14$), which could have either $z = 0.54$ or the final adopted value of $z = 1.14$ depending on whether the identification assigned to the strongest absorption features is H+K of \ion{Ca}{2} or 2800+2850\AA\ (\ion{Mg}{2}+\ion{Mg}{1}). In this case, the spectral energy distribution of the fluxed spectra and the absence of a 4000\AA\ jump in the $z = 0.54$ case were considered to be persuasive evidence for the higher value.
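This kind of ambiguity can be made concrete with a short sketch: a single strong feature near 6000\AA\ (a hypothetical wavelength chosen for illustration, not the actual D0K42 measurement) yields very different redshifts for the two candidate identifications:

```python
# Sketch: one absorption feature, two candidate rest-frame identifications.
# The observed wavelength below is illustrative only.
lam_obs  = 6000.0    # Angstrom (hypothetical)
lam_mg2  = 2798.75   # Mg II doublet mean rest wavelength
lam_ca_k = 3933.66   # Ca II K rest wavelength

z_if_mg2 = lam_obs / lam_mg2 - 1.0   # ~1.14 if the feature is Mg II
z_if_ca  = lam_obs / lam_ca_k - 1.0  # ~0.53 if the feature is Ca II K
```

With only one or two secure features, the continuum shape (e.g. presence or absence of a 4000\AA\ jump) is what breaks the degeneracy, as described above.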
The composite quasar spectrum of Francis {\it et al.\/}\ (1991\markcite{francis91}) provided a template for line identification in the broad emission line objects. \subsection{Quality Classes \label{qualityclasses} } We assign a quality class to the redshift of each object to give an indication of the associated uncertainty. Table~1 lists the quality classes (0 through 9) with a brief description of each. Objects with multiple features are assigned quality class 1--3 redshifts, depending upon the security of the redshift. Centroiding uncertainties for high SNR spectra are generally considered to be less than $0.1 \times$~FWHM, but with the lower SNR prevalent here, we adopt one FWHM as a more appropriate uncertainty. (The uncertainty from the wavelength fit itself using the night sky lines is small by comparison.) For a feature at 6000\AA\ with $z < 1.5$, a 2 pixel error with the 300 g/mm grating corresponds to a redshift uncertainty $\le 0.002$. \placetable{tab1} Single emission line spectra are placed in classes 4 and 5. The emission line is always assumed to be 3727\AA\ [\ion{O}{2}]. The CFRS has adopted a similar strategy (\cite{lilly95b}), although their situation is more complex as they are more likely to have confusion between 3727\AA\ emission and H$\alpha$ emission due to the preponderance of lower redshift galaxies in the CFRS. In our case, with our relatively broad spectral coverage, the absence of H$\beta$ or 4959\AA\ emission serves to rule out the possibility that a single emission line is 5007\AA\ of [\ion{O}{3}]. It cannot be H$\alpha$ for lines that appear at wavelengths bluer than the rest wavelength of H$\alpha$, while for redder lines, one would then expect to see H$\beta$ or the 5007\AA\ [\ion{O}{3}] line in emission.
The only other serious possibility consistent with the absence of features in optical spectra with broad spectral coverage is Ly$\alpha$; this is a serious possibility for some of the faintest objects in the sample, particularly the serendipitous ones, where only a single emission line from a very faint object falls by chance onto our slit, cf. Dey {\it et al.\/}\ (1998\markcite{dey98}). In such cases, broad band colors might be used to look for breaks or UV dropouts, although we have chosen to assign redshifts on the basis of spectroscopy alone. The presence of a few high redshift objects in our sample should not affect our conclusions which are restricted to the properties of galaxies with $z<1.5$. Objects showing a single break, assumed to be the 4000\AA\ break, are assigned quality class 8. Again this is a conservative choice, since at a modest redshift well within the range covered by our survey, the break just to the red of the \ion{Mg}{2} doublet at 2800\AA\ will lie in the optical. There are only two objects in this class in the total sample. Quality class 9 is for objects showing a single strong absorption feature which, because of the shape of the continuum, we interpret as \ion{Mg}{2}+\ion{Mg}{1} at 2800+2850\AA\ rather than as Ca~H+K. There are only three such objects. Quality class 0 is for objects with very low signal and no redshift. For future reference, the term ``high quality redshifts'' refers to those with a quality class indicating a secure $z$, specifically quality classes 1, 2, 4, or 6. \placefigure{fig1} To demonstrate that the quality class 1 spectra (which comprise more than 50\% of the extragalactic objects) actually do have the precision claimed above, Figure~1 shows the values of $\sigma$ calculated from multiple spectra of a single object where each individual measurement has a quality 1 rating as a function of $R$ magnitude. Only the galaxies with multiple spectra among the 60 brightest objects in the sample were used. 
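The elimination logic for a single emission line can be sketched as follows (a simplified illustration; the optical coverage window and line list are our assumptions, not the survey's actual software):

```python
# Simplified illustration of the single-line identification logic.
# The coverage window and line list are assumptions for this sketch.
REST = {"[OII] 3727": 3727.0, "[OIII] 5007": 5007.0,
        "Halpha": 6562.8, "Lyalpha": 1215.7}
CORROBORATING = {
    "[OIII] 5007": [4861.3, 4958.9],  # Hbeta and [OIII] 4959
    "Halpha":      [4861.3, 5007.0],  # Hbeta and [OIII] 5007
}

def candidate_ids(lambda_obs, coverage=(4000.0, 9000.0)):
    """For each candidate identity: implied z, plus the observed wavelengths
    where corroborating lines should appear (if inside the coverage)."""
    out = {}
    for name, lam0 in REST.items():
        z = lambda_obs / lam0 - 1.0
        if z < 0.0:          # identification would require a blueshift
            continue
        expected = [lam * (1.0 + z) for lam in CORROBORATING.get(name, [])
                    if coverage[0] <= lam * (1.0 + z) <= coverage[1]]
        out[name] = (z, expected)
    return out
```

For a lone line at, say, 6000\AA, the H$\alpha$ identification is excluded outright (it would require a blueshift), while the [\ion{O}{3}] 5007\AA\ option predicts H$\beta$ and 4959\AA\ at accessible wavelengths; their absence leaves 3727\AA\ [\ion{O}{2}] (or, for the faintest objects, Ly$\alpha$) as the surviving options.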
There are four or more spectra for three of these galaxies. \section{Galaxy Spectroscopic Classifications \label{specclass} } A simple spectral classification was used (Table~2). ``{$\cal E$}'' (emission) denotes a galaxy where the emission lines dominate the spectrum, while the few extreme starburst galaxies that could be identified are denoted by ``{$\cal B$}''. In the subsequent discussion, these two classes are usually referred to collectively as ``{$\cal E$}'' (emission) galaxies. ``{$\cal A$}'' (absorption) denotes an object where no emission lines are detected, while ``{$\cal C$}'' (composite) is an intermediate case where both emission and absorption (usually 3727 and H+K) are seen. When both 3727 and 5007\AA\ emission lines are present, the object is denoted ``{$\cal E$}'', or, if the absorption spectrum is also visible, ``{$\cal EC$}''. The three AGN are designated ``{$\cal Q$}''. In addition, stars are classified according to the presence or absence of TiO bands (and/or CaH bands, for M subdwarfs) as ``{$\cal M$}'' or ``{$\cal S$}'' and, although there are no examples in the present sample, as ``{$\cal W$}'' for white dwarfs. \placetable{tab2} The assignment of a spectral class to a galaxy will depend on its redshift and on the wavelength range covered by the observations. For example, a galaxy at low redshift with strong emission at 3727 and 5007\AA\ will be classified as ``{$\cal E$}'', but the same object seen at $z \sim 1.3$, when the 2800\AA\ region is observed in the optical, will show no apparent emission features and will be listed as ``{$\cal A$}''. This is particularly true for objects where the spectral coverage of the data did not extend to $\lambda > 7500$\AA. We emphasize that for $z < 0.9$, an ``{$\cal A$}'' galaxy is an object that has a spectrum similar to that of local elliptical galaxies.
But at higher redshift, only the strongest emission lines can be detected through the thicket of night sky lines, and a much more diverse group of galaxies will be classified as ``{$\cal A$}'' based on the presence of absorption lines in the 2500\AA\ region. \placefigure{fig2} Figure~2 gives illustrative examples of the five spectral classes used for extragalactic objects. The spectra of the second brightest AGN and of the second brightest starburst galaxy are shown, while for the other three spectral classes, the spectrum of the fifth brightest galaxy (at $R$) assigned to that class is shown. The observed $R$ magnitudes and $z$ are indicated for each object. For the AGN and the spectral class ``$\cal C$'' galaxy, the residuals from sky subtraction around the very strong night sky line at 5577\AA\ were removed by interpolation. The ``$\cal C$'' galaxy shown has a rather weak 3727\AA\ emission line compared to most of the objects in this galaxy spectral class, but the distinction between the ``$\cal C$'' and ``$\cal A$'' spectral classes is still clear from the relative strengths of the H and K absorption lines of \ion{Ca}{2}, the Balmer lines, and the molecular bands of CH and CN in this spectral region. The spectrum of a somewhat more typical higher-$z$ ``$\cal C$'' galaxy is also shown for comparison. As will be discussed in Cohen {\it et al.\/}\ (1998\markcite{cohen98}), there are many galaxies in this field with $z \approx 0.58$. At this redshift, the 3727\AA\ emission line of [\ion{O}{2}], a key spectral type diagnostic, is observed at 5889\AA, where it overlaps the strong NaD doublet in emission from the night sky. Unless the emission from such a galaxy at 3727\AA\ [\ion{O}{2}] is very strong, it may be lost in the uncertainty of subtracting away the much stronger night sky line. Thus, for this redshift only, the distinction between the ``$\cal A$'' and ``$\cal C$'' spectral types has been made on the basis of the presence of the 3880\AA\ CN band and the G band of CH at 4300\AA.
We assign such a galaxy a spectral class of ``$\cal A$'' if those molecular bands are strong, while objects that show 4101\AA\ H$\delta$ absorption in their spectra are assigned the spectral class ``$\cal C$''. A check of 19 objects originally classified as ``$\cal A$'' in this redshift regime revealed three that had to be reclassified from ``$\cal A$'' to ``$\cal C$'' on this basis, as well as three others too faint to determine which classification is most appropriate without the guidance of 3727\AA\ emission; these were left as ``$\cal A$''. For the brighter part of this sample with high precision spectra, considerably finer distinction among galaxy spectral classes is possible. A detailed discussion of the spectral features in the brighter galaxies of this sample is deferred to a future paper (Cohen {\it et al.\/}\ 1999\markcite{cohen99}). \section{Redshifts for Objects in the Sample \label{redshifts} } Table~3 lists the object numbers, positions on the sky, measured $K$ magnitudes (from \cite{pahre98apjs}), redshifts, spectral types, and quality classes for 195 objects in the main sample, followed by the 13 additional objects that constitute the total sample. Table~4 gives the distribution of objects in the main sample over the spectral and redshift quality classes. \placetable{tab3} \placetable{tab4} Within the main sample, 84\% of the objects have measured redshifts. The completeness is shown as a function of observed $R$ and of observed $K$ magnitude in Figure~3. As expected and supported by Figure~3, it is the $R$ magnitude that primarily determines whether or not a redshift can be measured, rather than the $K$ magnitude. The median $R$ magnitude of the objects never observed is 24.6~mag, while the median $R$ magnitude of the 21 objects that were observed, but for which no redshift was determined, is 24.3 mag. This latter group cannot contain any strong emission line objects with $z < 1.1$; otherwise their spectra would have already yielded redshifts.
All of the objects with redshifts in the main sample have $R$, $I$ and $K$ magnitudes from the data of Pahre {\it et al.\/}\ (1998\markcite{pahre98apjs}), but several are too faint to be detected in the available $U$, $B$, or $V$ images. \placefigure{fig3} The median redshift of the extragalactic objects in the main sample with measured $z$ is 0.58. Even if all of the objects without redshifts are assumed to be galaxies with $z > 1$, the median redshift can only increase to $z_{med} = 0.65$. Table~5 presents the median redshift for the extragalactic objects in the main sample in half magnitude bins. The objects without redshifts have been ignored. \placetable{tab5} \section{Summary} This paper provides a description of the techniques for analyzing and characterizing the spectra obtained by the Caltech Faint Galaxy Redshift Survey. We have given here the basic observational results of our spectroscopic investigation of a complete sample of objects with $K_s<20$~mag in a 2 by 7.3~arcmin field at J005325+1234. Redshifts were successfully obtained for 163 of the 195 objects in the sample using the LRIS at the Keck Observatory. An analysis of these data, combined with the six-color photometric data of Pahre {\it et al.\/} (1998\markcite{pahre98apjs}), for the extragalactic objects in this field will be given in Cohen {\it et al.\/}\ (1998\markcite{cohen98}). \acknowledgements The entire Keck/LRIS user community owes a huge debt to Jerry Nelson, Gerry Smith, Bev Oke, and many other people who have worked to make the Keck Telescope and LRIS a reality. We are grateful to the W. M. Keck Foundation, and particularly its late president, Howard Keck, for the vision to fund the construction of the W. M. Keck Observatory. JGC is grateful for partial support from STScI/NASA grant AR-06337.12-94A. KR was supported in part by a Summer Undergraduate Research Fellowship at Caltech. RDB acknowledges support under NSF grant AST95-29170.
DWH and MAP were supported in part by Hubble Fellowship grants HF-01093.01-97A and HF-01099.01-97A from STScI (which is operated by AURA under NASA contract NAS5-26555). \clearpage \begin{deluxetable}{ll} \tablenum{1} \tablewidth{0pc} \scriptsize \tablecaption{Redshift Quality Classes} \label{tab1} \tablehead{ \colhead{Quality Class} & \colhead{Description of Class}} \startdata 1 & multiple features, $\sigma(z) \le 0.002$/feature \nl 2 & multiple features, $\sigma(z) \le 0.004$/feature \nl 3 & multiple features, faint, id uncertain, \nl & ~~~~$\sigma(z)$ small 75\% of time, and wildly off 25\% of time \nl 4 & 1 emission line only, solid, assume 3727\AA \nl 5 & 1 emission line only, reality uncertain, assume 3727\AA \nl 6 & Multiple features, at least 1 broad emission line \nl 7 & Only 1 broad emission line, assumed to be 2800\AA \nl 8 & Single break, assumed to be 4000\AA\ break \nl 9 & Single strong absorption feature, assumed to be 2800\AA \nl & ~~~~because of shape of continuum \nl 0 & No redshift \nl \enddata \end{deluxetable} \clearpage \begin{deluxetable}{cl} \tablenum{2} \tablewidth{0pc} \scriptsize \tablecaption{Definition of Spectral Types} \label{tab2} \tablehead{\colhead{Spectral Type} & \colhead{Defining Features} } \startdata \multicolumn{2}{c}{Galactic Stars:} \nl $\cal M$ & TiO bands (M dwarfs); CaH bands (M subdwarfs) \nl $\cal S$ & Absorption in Mg triplet, Balmer lines \nl $\cal W$ & White dwarf, broad Balmer line absorption \nl & \nl \multicolumn{2}{c}{Extragalactic Objects:} \nl $\cal Q$ & At least one broad emission line \nl $\cal B$ & Emission in $H\delta$ and $H\epsilon$ as well as 3727, $H\beta$, 5007 \nl $\cal E$ & Dominated by emission lines, 3727, 5007 \nl $\cal C$ & composite, 3727 + Balmer line absorption \nl $\cal A$ & No emission lines, only absorption features \nl & \nl \multicolumn{2}{c}{Unknown Objects:} \nl $\cal F$ & Observed, but no redshift \nl $\cal U$ & Never observed spectroscopically \nl \enddata \end{deluxetable} 
\begin{deluxetable}{rrrcccc|lrrcccc} \tablenum{3} \tiny \tablecolumns{14} \tablewidth{0pc} \tablecaption{Survey Objects} \label{tab3} \tablehead{ \colhead{ID} & \colhead{RA\tablenotemark{1}} & \colhead{Dec} & \colhead{K} & \colhead{z} & \colhead{QC} & \colhead{SC} \vline & \colhead{ID} & \colhead{RA\tablenotemark{1}} & \colhead{Dec} & \colhead{K} & \colhead{z} & \colhead{QC} & \colhead{SC} \\ \colhead{[D0K]} & \colhead{[$-0^{\rm h}$]} & \colhead{[$-12$\arcdeg]} & \colhead{[mag]} & \colhead{} & \colhead{} & \colhead{} \vline & \colhead{[D0K]} & \colhead{[$-0^{\rm h}$]} & \colhead{[$-12$\arcdeg]} & \colhead{[mag]} & \colhead{} & \colhead{} & \colhead{} } \startdata 1 & 5327.93 & 3018.40 & 12.73 & 0.000 & 1 & $\cal M $ & 2 & 5327.69 & 3214.60 & 14.43 & 0.000 & 1 & $\cal S $ \nl 3 & 5322.88 & 3546.60 & 14.91 & 0.000 & 1 & $\cal S $ & 4 & 5322.94 & 3352.90 & 15.38 & 0.000 & 1 & $\cal M $ \nl 5 & 5328.48 & 3717.70 & 15.58 & 0.000 & 1 & $\cal M $ & 6 & 5328.72 & 3029.50 & 15.68 & 0.428 & 1 & $\cal A $ \nl 7 & 5328.48 & 3731.40 & 15.87 & 0.000 & 1 & $\cal S $ & 8 & 5327.87 & 3332.50 & 16.27 & 0.579 & 1 & $\cal A $ \nl 9 & 5327.94 & 3613.70 & 16.30 & 0.441 & 1 & $\cal C $ & 10 & 5325.61 & 3718.90 & 16.49 & 0.000 & 1 & $\cal M $ \nl 11 & 5323.82 & 3729.90 & 16.67 & 0.346 & 1 & $\cal A $ & 12 & 5324.73 & 3342.40 & 16.75 & 0.681 & 6 & $\cal Q $ \nl 13 & 5324.12 & 3027.40 & 16.83 & 0.588 & 1 & $\cal C $ & 14 & 5325.89 & 3536.90 & 16.88 & 0.428 & 1 & $\cal A $ \nl 15 & 5324.55 & 3732.00 & 16.89 & 0.000 & 1 & $\cal M $ & 16 & 5328.16 & 3044.40 & 16.95 & 0.431 & 1 & $\cal A $ \nl 17 & 5322.22 & 3209.90 & 16.98 & 0.392 & 1 & $\cal A $ & 18 & 5322.56 & 3252.30 & 17.02 & 0.173 & 1 & $\cal C $ \nl 19 & 5325.95 & 3151.20 & 17.02 & 0.581 & 1 & $\cal A $ & 20 & 5324.71 & 3437.80 & 17.10 & 0.000 & 1 & $\cal M $ \nl 21 & 5327.85 & 3652.30 & 17.13 & 0.000 & 1 & $\cal M $ & 22 & 5325.87 & 3144.90 & 17.14 & 0.581 & 1 & $\cal A $ \nl 23 & 5323.46 & 3415.40 & 17.21 & 0.000 & 1 & $\cal M $ & 24 
& 5325.94 & 3423.30 & 17.30 & 0.679 & 1 & $\cal C $ \nl 25 & 5324.24 & 3639.40 & 17.39 & 0.309 & 1 & $\cal C $ & 26 & 5322.37 & 3059.00 & 17.44 & 1.115 & 6 & $\cal Q $ \nl 27 & 5326.13 & 3147.80 & 17.44 & 0.577 & 1 & $\cal A $ & 28 & 5324.43 & 3409.10 & 17.44 & 0.582 & 1 & $\cal A $ \nl 29 & 5326.30 & 3239.20 & 17.45 & 0.582 & 1 & $\cal AC $ & 30 & 5328.38 & 3501.80 & 17.52 & 0.621 & 1 & $\cal C $ \nl 31 & 5329.12 & 3222.70 & 17.53 & 0.430 & 1 & $\cal A $ & 32 & 5329.32 & 3330.10 & 17.54 & 0.577 & 1 & $\cal A $ \nl 33 & 5325.91 & 3558.40 & 17.54 & 0.429 & 1 & $\cal A $ & 34 & 5324.73 & 3411.80 & 17.55 & 0.679 & 1 & $\cal AC $ \nl 35 & 5329.07 & 3457.20 & 17.56 & 0.428 & 1 & $\cal C $ & 36 & 5323.73 & 3711.00 & 17.58 & 0.605 & 1 & $\cal AC$ \nl 37 & 5326.88 & 3634.40 & 17.69 & 0.772 & 1 & $\cal A $ & 38 & 5327.08 & 3610.40 & 17.75 & 0.763 & 1 & $\cal A $ \nl 39 & 5323.21 & 3328.60 & 17.75 & 0.583 & 1 & $\cal C $ & 40 & 5324.94 & 3202.30 & 17.80 & 0.582 & 1 & $\cal A $ \nl 41 & 5322.00 & 3303.90 & 17.82 & 0.654 & 1 & $\cal AC$ & 42 & 5327.65 & 3522.40 & 17.82 & 1.136 & 3 & $\cal A $ \nl 43 & 5321.79 & 3018.00 & 17.86 & 0.000 & 1 & $\cal M $ & 44 & 5321.92 & 3320.30 & 17.88 & 0.653 & 1 & $\cal C $ \nl 45 & 5325.90 & 3158.70 & 17.88 & 0.581 & 1 & $\cal A $ & 46 & 5325.50 & 3047.30 & 17.91 & 0.763 & 1 & $\cal C $ \nl 47 & 5324.59 & 3347.00 & 17.99 & 0.582 & 1 & $\cal C $ & 48 & 5327.76 & 3545.30 & 18.00 & 0.578 & 2 & $\cal C $ \nl 49 & 5323.74 & 3441.70 & 18.06 & 0.584 & 1 & $\cal C $ & 50 & 5323.10 & 3234.50 & 18.08 & 0.000 & 1 & $\cal M $ \nl 51 & 5321.54 & 3135.40 & 18.09 & 0.677 & 1 & $\cal C $ & 52 & 5327.78 & 3457.30 & 18.10 & 0.680 & 1 & $\cal C $ \nl 53 & 5329.39 & 3307.20 & 18.10 & 0.000 & 1 & $\cal S $ & 54 & 5328.49 & 3724.10 & 18.11 & 0.209 & 1 & $\cal EC$ \nl 55 & 5324.02 & 3336.60 & 18.13 & 0.432 & 1 & $\cal C $ & 56 & 5325.90 & 3229.60 & 18.27 & 0.689 & 3 & $\cal A $ \nl 57 & 5327.08 & 3132.00 & 18.28 & 0.369 & 1 & $\cal EB$ & 58 & 5328.86 & 3251.90 &
18.28 & 0.509 & 1 & $\cal C $ \nl 59 & 5322.50 & 3549.70 & 18.29 & 0.771 & 1 & $\cal C $ & 60 & 5329.13 & 3719.00 & 18.30 & 0.781 & 3 & $\cal A $ \nl 61 & 5326.73 & 3650.20 & 18.32 & 0.000 & 1 & $\cal S $ & 62 & 5327.54 & 3417.70 & 18.34 & 0.584 & 2 & $\cal A $ \nl 63 & 5321.73 & 3141.00 & 18.35 & 0.535 & 3 & $\cal C $ & 64 & 5324.50 & 3701.90 & 18.42 & 1.048 & 3 & $\cal A $ \nl 65 & 5326.91 & 3051.30 & 18.44 & 1.232 & 3 & $\cal A $ & 66 & 5325.60 & 3243.90 & 18.46 & 0.771 & 1 & $\cal EC$ \nl 67 & 5327.95 & 3255.90 & 18.46 & 0.584 & 1 & $\cal A $ & 68 & 5321.40 & 3153.70 & 18.47 & 1.392 & 2 & $\cal A $ \nl 69 & 5321.32 & 3514.40 & 18.51 & 1.336 & 3 & $\cal A $ & 70 & 5328.97 & 3156.00 & 18.54 & 0.581 & 1 & $\cal A $ \nl 71 & 5323.70 & 3634.50 & 18.54 & 0.209 & 1 & $\cal E $ & 72 & 5321.96 & 3341.10 & 18.56 & ... & 0 & $\cal F $ \nl 73 & 5326.51 & 3419.00 & 18.57 & 0.493 & 1 & $\cal C $ & 74 & 5321.62 & 3117.90 & 18.60 & 0.676 & 2 & $\cal C $ \nl 75 & 5322.20 & 3658.30 & 18.60 & 0.763 & 1 & $\cal C $ & 76 & 5322.76 & 3155.00 & 18.63 & 1.153 & 6 & $\cal Q $ \nl 77 & 5325.63 & 3510.80 & 18.64 & ... & 0 & $\cal F $ & 78 & 5329.70 & 3104.30 & 18.66 & 0.633 & 1 & $\cal B $ \nl 79 & 5321.35 & 3346.80 & 18.68 & 0.533 & 1 & $\cal A $ & 80 & 5325.68 & 3444.50 & 18.74 & 0.585 & 2 & $\cal C $ \nl 81 & 5326.81 & 3030.10 & 18.75 & 0.414 & 1 & $\cal C $ & 82 & 5323.64 & 3343.70 & 18.76 & 0.000 & 1 & $\cal M $ \nl 83 & 5321.70 & 3426.30 & 18.80 & 0.429 & 1 & $\cal C $ & 84 & 5326.45 & 3359.20 & 18.81 & 0.678 & 1 & $\cal C $ \nl 85 & 5326.15 & 3350.40 & 18.81 & 0.626 & 1 & $\cal C $ & 86 & 5324.44 & 3515.60 & 18.82 & 0.549 & 1 & $\cal E $ \nl 87 & 5327.41 & 3308.50 & 18.82 & 0.440 & 1 & $\cal E $ & 88 & 5327.26 & 3627.90 & 18.85 & 0.194 & 1 & $\cal E $ \nl 89 & 5325.48 & 3108.40 & 18.86 & 0.581 & 1 & $\cal A $ & 90 & 5328.99 & 3442.40 & 18.87 & 0.511 & 1 & $\cal C $ \nl \tablebreak 91 & 5324.24 & 3709.00 & 18.88 & ... 
& 0 & $\cal F $ & 92 & 5324.11 & 3645.90 & 18.89 & 0.000 & 1 & $\cal M $ \nl 93 & 5322.02 & 3216.60 & 18.90 & 1.043 & 2 & $\cal A $ & 94 & 5322.52 & 3732.00 & 18.91 & ... & 0 & $\cal F $ \nl 95 & 5326.78 & 3626.20 & 18.91 & 0.000 & 1 & $\cal M $ & 96 & 5324.18 & 3020.20 & 18.94 & 0.677 & 1 & $\cal C $ \nl 97 & 5328.80 & 3434.70 & 18.94 & 0.531 & 8 & $\cal A $ & 98 & 5322.04 & 3045.50 & 18.97 & ... & 0 & $\cal F $ \nl 99 & 5324.95 & 3035.40 & 18.98 & 0.000 & 1 & $\cal M $ & 100 & 5326.95 & 3552.90 & 19.00 & 0.680 & 2 & $\cal C $ \nl 101 & 5325.93 & 3214.10 & 19.01 & 0.578 & 1 & $\cal A $ & 102 & 5323.44 & 3336.00 & 19.01 & 0.448 & 3 & $\cal A $ \nl 103 & 5321.72 & 3038.80 & 19.01 & 0.607 & 1 & $\cal EC$ & 104 & 5323.46 & 3558.00 & 19.05 & 0.182 & 1 & $\cal B $ \nl 105 & 5328.83 & 3334.80 & 19.05 & 0.681 & 1 & $\cal E $ & 106 & 5322.29 & 3646.70 & 19.06 & ... & 0 & $\cal U $ \nl 107 & 5324.77 & 3148.00 & 19.10 & ... & 0 & $\cal F $ & 108 & 5328.14 & 3527.30 & 19.13 & 1.013 & 1 & $\cal E $ \nl 109 & 5321.56 & 3607.60 & 19.15 & 0.000 & 1 & $\cal M $ & 110 & 5327.24 & 3048.70 & 19.15 & 0.391 & 1 & $\cal E $ \nl 111 & 5327.27 & 3136.90 & 19.17 & 0.582 & 1 & $\cal C $ & 112 & 5324.19 & 3230.00 & 19.19 & 1.298 & 2 & $\cal EC$ \nl 113 & 5326.12 & 3428.50 & 19.19 & 1.264 & 3 & $\cal A $ & 114 & 5329.23 & 3447.00 & 19.21 & 0.583 & 1 & $\cal C $ \nl 115 & 5327.94 & 3559.20 & 19.21 & ... & 0 & $\cal F $ & 116 & 5326.62 & 3432.50 & 19.24 & 0.584 & 1 & $\cal A $ \nl 117 & 5328.22 & 3721.70 & 19.26 & ... & 0 & $\cal F $ & 118 & 5327.88 & 3429.80 & 19.27 & ... & 0 & $\cal U $ \nl 119 & 5326.11 & 3347.50 & 19.27 & ... & 0 & $\cal F $ & 120 & 5325.79 & 3125.60 & 19.27 & ... & 0 & $\cal F $ \nl 121 & 5325.47 & 3535.00 & 19.28 & 0.948 & 1 & $\cal E $ & 122 & 5323.85 & 3413.40 & 19.30 & 0.570 & 2 & $\cal C $ \nl 123 & 5322.47 & 3116.80 & 19.30 & 1.440 & 4 & $\cal E $ & 124 & 5322.09 & 3037.40 & 19.30 & ... 
& 0 & $\cal U $ \nl 125 & 5323.10 & 3253.10 & 19.31 & 1.009 & 4 & $\cal E $ & 126 & 5324.27 & 3017.20 & 19.31 & 1.301 & 5 & $\cal E $ \nl 127 & 5323.24 & 3350.60 & 19.33 & 0.556 & 1 & $\cal B $ & 128 & 5322.88 & 3551.10 & 19.34 & 1.307 & 3 & $\cal A $ \nl 129 & 5329.21 & 3537.40 & 19.35 & ... & 0 & $\cal U $ & 130 & 5320.96 & 3318.20 & 19.35 & ... & 0 & $\cal F $ \nl 131 & 5323.83 & 3312.00 & 19.36 & 0.580 & 2 & $\cal C $ & 132 & 5323.76 & 3215.40 & 19.36 & 0.391 & 1 & $\cal E $ \nl 133 & 5323.64 & 3234.80 & 19.38 & 0.582 & 1 & $\cal C $ & 134 & 5321.16 & 3120.10 & 19.39 & 0.000 & 1 & $\cal M $ \nl 135 & 5324.48 & 3224.70 & 19.40 & 1.223 & 3 & $\cal A $ & 136 & 5324.83 & 3430.40 & 19.40 & ... & 0 & $\cal F $ \nl 137 & 5323.66 & 3335.90 & 19.42 & 0.872 & 1 & $\cal EC$ & 138 & 5322.55 & 3502.00 & 19.43 & 0.428 & 1 & $\cal C $ \nl 139 & 5327.35 & 3245.10 & 19.43 & 0.573 & 3 & $\cal A $ & 140 & 5328.40 & 3647.70 & 19.44 & 0.772 & 1 & $\cal C $ \nl 141 & 5321.35 & 3124.30 & 19.45 & ... & 0 & $\cal F $ & 142 & 5322.40 & 3627.80 & 19.46 & 0.761 & 1 & $\cal A $ \nl 143 & 5321.47 & 3319.80 & 19.47 & 0.750 & 3 & $\cal A $ & 144 & 5323.55 & 3430.00 & 19.51 & 0.370 & 1 & $\cal E $ \nl 145 & 5321.31 & 3158.60 & 19.51 & 0.581 & 1 & $\cal C $ & 146 & 5329.14 & 3551.90 & 19.51 & ... & 0 & $\cal F $ \nl 147 & 5326.38 & 3116.20 & 19.52 & 0.415 & 1 & $\cal C $ & 148 & 5328.79 & 3508.10 & 19.52 & ... & 0 & $\cal U $ \nl 149 & 5323.18 & 3711.10 & 19.53 & 0.269 & 1 & $\cal EB$ & 150 & 5321.73 & 3724.30 & 19.53 & 0.862 & 3 & $\cal C $ \nl 151 & 5322.98 & 3141.10 & 19.53 & 0.492 & 2 & $\cal C $ & 152 & 5328.83 & 3103.40 & 19.55 & 1.075 & 9 & $\cal A $ \nl 153 & 5326.38 & 3412.10 & 19.61 & 0.495 & 1 & $\cal C $ & 154 & 5329.14 & 3542.20 & 19.62 & ... & 0 & $\cal U $ \nl 155 & 5321.35 & 3730.20 & 19.63 & ... & 0 & $\cal F $ & 156 & 5324.38 & 3241.30 & 19.63 & ... 
& 0 & $\cal F $ \nl 157 & 5323.71 & 3609.30 & 19.63 & 0.535 & 1 & $\cal C $ & 158 & 5328.04 & 3140.90 & 19.65 & 1.306 & 1 & $\cal E $ \nl 159 & 5321.43 & 3256.00 & 19.66 & 0.394 & 1 & $\cal E $ & 160 & 5327.69 & 3607.30 & 19.68 & 0.000 & 1 & $\cal M $ \nl 161 & 5324.43 & 3626.70 & 19.70 & 0.432 & 1 & $\cal C $ & 162 & 5322.76 & 3644.80 & 19.71 & 0.643 & 1 & $\cal C $ \nl 163 & 5324.72 & 3356.00 & 19.72 & 0.580 & 3 & $\cal A $ & 164 & 5328.66 & 3139.30 & 19.72 & 0.587 & 2 & $\cal C $ \nl 165 & 5328.48 & 3056.80 & 19.73 & 0.581 & 8 & $\cal A $ & 166 & 5326.60 & 3547.70 & 19.74 & 0.611 & 1 & $\cal EC$ \nl 167 & 5329.19 & 3351.70 & 19.74 & 0.442 & 1 & $\cal C $ & 168 & 5321.60 & 3204.00 & 19.74 & ... & 0 & $\cal U $ \nl 169 & 5325.51 & 3530.70 & 19.76 & ... & 0 & $\cal F $ & 170 & 5322.02 & 3547.60 & 19.76 & 0.533 & 1 & $\cal E $ \nl 171 & 5326.33 & 3020.90 & 19.77 & 0.345 & 1 & $\cal E $ & 172 & 5322.74 & 3209.00 & 19.77 & 0.974 & 9 & $\cal A $ \nl 173 & 5321.11 & 3243.50 & 19.81 & ... & 0 & $\cal U $ & 174 & 5325.24 & 3640.70 & 19.83 & 0.968 & 5 & $\cal E $ \nl 175 & 5325.66 & 3434.70 & 19.84 & 0.939 & 4 & $\cal E $ & 176 & 5324.05 & 3536.10 & 19.85 & ... & 0 & $\cal F $ \nl 177 & 5325.52 & 3559.00 & 19.86 & ... & 0 & $\cal U $ & 178 & 5326.15 & 3359.60 & 19.88 & 0.745 & 3 & $\cal A $ \nl 179 & 5328.56 & 3618.20 & 19.89 & 0.763 & 2 & $\cal C $ & 180 & 5322.23 & 3650.60 & 19.90 & ... & 0 & $\cal F $ \nl \tablebreak 181 & 5324.83 & 3254.70 & 19.91 & 1.040 & 9 & $\cal A $ & 182 & 5323.55 & 3357.60 & 19.92 & 1.137 & 3 & $\cal A $ \nl 183 & 5323.03 & 3437.60 & 19.92 & 0.922 & 1 & $\cal E $ & 184 & 5322.63 & 3509.80 & 19.92 & 0.622 & 1 & $\cal C $ \nl 185 & 5327.51 & 3611.40 & 19.92 & 0.975 & 1 & $\cal E $ & 186 & 5321.63 & 3020.50 & 19.95 & ...
& 0 & $\cal U $ \nl 187 & 5324.24 & 3630.60 & 19.96 & 0.344 & 1 & $\cal C $ & 188 & 5324.47 & 3617.90 & 19.98 & 1.218 & 2 & $\cal A $ \nl 189 & 5321.69 & 3303.70 & 19.98 & 0.490 & 4 & $\cal A $ & 450 & 5329.09 & 3030.30 & 18.12 & 0.000 & 1 & $\cal M $ \nl 956 & 5325.20 & 3615.60 & 19.48 & 0.000 & 1 & $\cal M $ & 494 & 5326.02 & 3534.90 & 18.91 & ... & 0 & $\cal F $ \nl 504 & 5322.56 & 3252.30 & 19.06 & ... & 0 & $\cal F $ & 506 & 5322.88 & 3546.60 & 19.07 & ... & 0 & $\cal U $ \nl 507 & 5324.73 & 3408.00 & 19.12 & 1.002 & 4 & $\cal E $ \nl \cutinhead{Supplemental objects} 451 & 5322.12 & 3354.90 & 20.65 & 0.430 & 8 & $\cal A $ & 951 & 5329.56 & 3644.50 & -20.00 & 0.440 & 1 & $\cal C $ \nl 953 & 5330.64 & 3632.40 & -20.00 & 0.280 & 1 & $\cal E $ & 958 & 5327.06 & 3546.90 & 20.02 & 0.442 & 1 & $\cal C $ \nl 961 & 5318.41 & 3516.40 & -20.00 & 0.000 & 1 & $\cal S $ & 968 & 5324.21 & 3325.70 & 20.58 & 0.000 & 1 & $\cal S $ \nl 971 & 5331.26 & 3212.00 & -20.00 & 0.352 & 1 & $\cal E $ & 973 & 5330.40 & 3041.00 & -20.00 & 0.340 & 1 & $\cal E $ \nl 980 & 5326.00 & 3157.01 & 20.20 & 0.437 & 1 & $\cal E $ & 982 & 5329.11 & 3020.81 & 20.45 & 1.420 & 9 & $\cal A $ \nl 983 & 5325.62 & 3504.53 & -20.00 & 0.460 & 8 & $\cal A $ & 984 & 5321.50 & 3123.00 & 20.60 & 1.119 & 1 & $\cal E $ \nl 985 & 5321.16 & 3004.50 & -20.00 & 0.773 & 4 & $\cal E $ \nl \enddata \tablenotetext{1}{ID, RA, Dec, and R magnitude from Pahre {\it et al.\/}\ 1998.} \tablecomments{QC is the quality code described in \S\ref{qualityclasses}. SC is the spectral classification described in \S\ref{specclass}.} \end{deluxetable} \clearpage \begin{deluxetable}{crcr} \tablenum{4} \tablewidth{0pc} \scriptsize \tablecaption{Distribution of Sample Among Galaxy Spectral Types and Quality Classes} \label{tab4} \tablehead{\colhead{Spectral Class } & \colhead{No. of Objects} & \colhead{Quality Class} & \colhead{No. 
of Objects} } \startdata $\cal M$ & 19 & 1 & 117\tablenotemark{1} \nl $\cal S$ & 5 & 2 & 14 \nl $\cal Q$ & 3 & 3 & 17 \nl $\cal B$ & 3 & 4 & 5 \nl $\cal E$ & 30 & 5 & 2 \nl $\cal C$ & 50 & 6 & 3 \nl $\cal A$ & 53 & 7 & 0 \nl $\cal F$ & 21 & 8 & 2 \nl $\cal U$ & 11 & 9 & 3 \nl & & 0 & 32 \nl \enddata \tablenotetext{1}{Includes 24 Galactic stars.} \end{deluxetable} \begin{deluxetable}{crrrr} \tablenum{5} \tablewidth{0pc} \scriptsize \tablecaption{Median Redshift with Magnitude} \label{tab5} \tablehead{\colhead{magnitude} & \multicolumn{2}{c}{$R$} & \multicolumn{2}{c}{$K$} \nl \colhead{bin} & \colhead{$N$} & \colhead{$\left< z \right>$} & \colhead{$N$} & \colhead{$\left< z \right>$}} \startdata 15.5-16.0 & 0 & & 1 & 0.43 \nl 16.0-16.5 & 0 & & 2 & 0.44 \nl 16.5-17.0 & 0 & & 6 & 0.43 \nl 17.0-17.5 & 0 & & 9 & 0.58 \nl 17.5-18.0 & 0 & & 18 & 0.58 \nl 18.0-18.5 & 0 & & 17 & 0.58 \nl 18.5-19.0 & 1 & 0.68 & 23 & 0.58 \nl 19.0-19.5 & 2 & 0.17 & 30 & 0.61 \nl 19.5-20.0 & 1 & 0.31 & 33 & 0.58 \nl 20.0-20.5 & 7 & 0.43 & & \nl 20.5-21.0 & 4 & 0.43 & & \nl 21.0-21.5 & 23 & 0.44 & & \nl 21.5-22.0 & 23 & 0.58 & & \nl 22.0-22.5 & 21 & 0.58 & & \nl 22.5-23.0 & 28 & 0.64 & & \nl 23.0-23.5 & 20 & 0.86 & & \nl 23.5-24.0 & 5 & 0.76 & & \nl 24.0-24.5 & 4 & 0.75 & & \nl \enddata \end{deluxetable} \clearpage
\section{Introduction}% The capillary interaction of liquids with soft solids is a ubiquitous situation in natural or technological settings~\cite{Andreotti:ARFM2020,Bense:2019,Chen:COCIS2018,Style:ARCMP2017}, ranging from droplets that interact with epithelia, for instance in the human eye~\cite{Holly:EER1971}, and epithelial cells governed by capillary physics~\cite{PerezGonzalez:NP2018}, to ink-jet printing on flexible materials~\cite{Wijshoff:COCIS2018}. The capillary tractions exerted by the liquid onto its soft support cause strong deformations if the substrate is soft or the considered length scale is sufficiently small~\cite{Style:PRL2013,Zhao:NL2021}. The typical scale below which capillarity deforms solids is given by the elastocapillary length, $\ell = \gamma/G_0$, the ratio of surface tension $\gamma$ and static shear modulus $G_0$. At three-phase contact lines, the length scale of the exerted traction lies in the molecular domain, deforming the solid into a sharp-tipped wetting ridge~\cite{Park:NC2014}. As a liquid spreads over a soft surface, the traction moves relative to the material points of the substrate. The necessary rearrangement of the solid deformation leads to strong viscoelastic dissipation which counteracts the motion, a phenomenon called viscoelastic braking~\cite{Carre:N1996,Long:L1996,Karpitschka:NC2015,Zhao:PNAS2018,Dervaux:SM2020,Henkel:SM2021,Coux:PNAS2020,Leong:PF2020,SmithMannschott:PRL2021}. At small speeds, the motion remains steady~\cite{Carre:N1996,Long:L1996}, whereas at large speeds, unsteady motion, frequently termed stick-slip, has been observed~\cite{Pu:L2008,Kajiya:SM2013,Karpitschka:NC2015,Park:SM2017,Gorcum:PRL2018,Gorcum:SM2020}. In this mode, the contact line velocity and the apparent contact angle undergo strong, periodic oscillations. On paraffin gels, Kajiya et al.
observed stick-slip motion only in an intermediate velocity range, returning to continuous motion if the speed was increased even further~\cite{Kajiya:SM2013}. \begin{figure*}\begin{center}% \includegraphics[scale=0.65]{Simulation_Profiles_Merged11.pdf} \caption{(a)~Dynamical wetting of a cylindrical cavity (radius $R$) with a soft viscoelastic wall (grey, thickness $h_s$) by a two-phase fluid (blue \& transparent). The contact line speed is controlled by the flux boundary condition on the rear end of the cavity. Inset: definition of the liquid-liquid interface rotation $\phi = \theta-\theta_{eq}$ relative to the equilibrium angle $\theta_{eq} = 90^{\circ}$. (b)~Quasi-stationary wetting ridge on the cavity wall for different velocities, comparison between FEM simulations (symbols) and the analytical model for constant $v$ (lines). The liquid interface is aligned at $x=0$, the blue region indicates the advancing liquid phase.}% \label{fig:simulation_profiles}% \end{center}\end{figure*}% The origin of this stick-slip motion remains debated in the literature. It is clear that the pinning and depinning are not associated with permanent surface features, but rather with the dynamics of the wetting ridge itself: the solid deformation cannot follow the fast contact line motion of the depinned (slip) phase of a stick-slip cycle~\cite{Gorcum:PRL2018,Gorcum:SM2020}. Unclear, however, are the conditions under which a contact line may escape from its ridge, thus eliminating the viscoelastic braking force. The depinning of a contact line from a sharp-tipped feature on a surface is governed by the Gibbs inequality~\cite{Dyson:PF1988,Gorcum:PRL2018}. Van Gorcum et al.~\cite{Gorcum:PRL2018} postulated a dynamical solid surface tension, which would alter the local force balance and thus allow the contact line to slide down the slope of the ridge. Still, the physico-chemical origin of such a dynamic solid surface tension remains elusive.
Roche et al.~\cite{Dervaux:SM2020} postulated the existence of a point force due to bulk viscoelasticity, but the shear-thinning nature of typical soft polymeric materials would prevent such a singularity at the strain rates encountered in soft wetting~\cite{Karpitschka:PNAS2018,Gorcum:SM2020}. Unclear as well is the role of the fluid phase during the cyclic motion, mainly because a comprehensive multi-physics model for the unsteady soft wetting problem has not been available to date. Here we present the first fully unsteady numerical simulations of dynamical soft wetting, accounting for liquid and solid mechanics and for the capillarity of all interfaces, by which we reveal the life cycle of stick-slip motion. We derive phase diagrams of steady and unsteady contact line motion by tuning parameters over large ranges, recovering stick-slip behavior at intermediate speeds. At small and large speeds, we observe steady motion, in quantitative agreement with an analytical model. \section{Setup}% Figure~\ref{fig:simulation_profiles}~(a) shows the geometric setup of the numerical simulations. A hollow cylinder (undeformed inner radius $R$), made of a soft viscoelastic material (gray, thickness $h_s\ll R$), with a fixed (rigid) outer surface, is partially filled with a liquid (blue) and an ambient fluid phase (transparent). To keep the physics tractable, we use a minimal model: we apply the Stokes limit, assume identical viscosities for the fluid and ambient phases, and take constant and equal surface tensions $\gamma=\gamma_s$ on all interfaces (liquid-ambient, solid-liquid, and solid-ambient). The inner surface of the soft viscoelastic cylinder wall is deformed into a wetting ridge due to the capillary action of the liquid meniscus (cf. panel~(b)).
We use an incompressible Kelvin-Voigt constitutive model, characterized by a frequency dependent complex modulus $G^* = G_0 + i\,\eta_s\,\omega$, with static shear modulus $G_0$ and effective substrate viscosity $\eta_s$, obtaining an elastocapillary length $\ell = \gamma/G_0 \ll h_s$, a characteristic time scale $\tau = \eta_s/G_0$, and a characteristic velocity $v_{\ell} = \ell/\tau=\gamma/\eta_s$. \begin{table}% \caption{\label{tab:st:parameters}Material parameters}% \begin{center}\begin{tabular}{ccc} symbol & value & meaning\\\hline $\eta$ & $\unit[1]{mPa\,s}$ & liquid viscosity\\ $\gamma$ & $\unit[38]{mN/m}$ & liquid surface tension\\ $\gamma_s$ & $\unit[38]{mN/m}$ & solid surface tension\\ $G_0$ & $\unit[1]{kPa}$ & static shear modulus\\ $\eta_s$ & $\unit[3]{Pa\, s}$ & substrate viscosity\\ $h_s$ & $\unit[1]{mm}$ & substrate thickness\\ $\epsilon$ & $\unit[4.75]{\upmu m}$ & interface thickness\\\hline $\ell$ & $\unit[38]{\upmu m}$ & elastocapillary length\\ $\alpha_s = \frac{\gamma_s}{G_0\,h_s}$ & $0.038$ & elastocapillary number\\ $v_{\ell}=\gamma_s/\eta_s$ & $\unit[0.0126]{m/s}$ & characteristic velocity \end{tabular}\end{center}% \end{table} Our numerical model is formulated with cylindrical symmetry, implementing the two-phase fluid by a phase-field approach. Thus the liquid-ambient interface has a finite thickness $\epsilon \ll \ell \ll h_s$, and the capillary traction of the meniscus onto the solid is distributed over this characteristic width. The solid is modelled by a finite element approach, with a sharp interface toward the fluid. The grid size is about $5\%$ of the elastocapillary length at the liquid-ambient interface, and typically about $20\%$ outside of the interface region. All phases are fully coupled to each other by kinematic and stress boundary conditions. 
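The derived scales in table~\ref{tab:st:parameters} follow directly from the material parameters; a minimal Python sketch (values copied from the table, SI units; illustrative only) reproduces them:

```python
# Derived elastocapillary scales from the material parameters
# (values from the parameter table, SI units).
gamma_s = 38e-3        # solid surface tension [N/m]
G0 = 1e3               # static shear modulus [Pa]
eta_s = 3.0            # substrate viscosity [Pa s]
h_s = 1e-3             # substrate thickness [m]

ell = gamma_s / G0               # elastocapillary length: 38 µm
tau = eta_s / G0                 # characteristic time scale: 3 ms
v_ell = gamma_s / eta_s          # characteristic velocity: ~0.0127 m/s
alpha_s = gamma_s / (G0 * h_s)   # elastocapillary number: 0.038

print(ell, tau, v_ell, alpha_s)
```

This confirms the separation of scales $\epsilon \ll \ell \ll h_s$ assumed throughout.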
The material parameters are listed in table~\ref{tab:st:parameters}; details on the numerical model are given in~\cite{AlandMokbel2020} and the supplementary material~\cite{supplementary}. The liquid meniscus is forced to move by imposing the fluxes $\Phi$ on either end of the cylinder, but can freely change its shape (curvature) in response to the fluid flow. Thus the instantaneous contact line velocity $v$ is not imposed, but rather its long-term mean $\overline{v} = \Phi/(\pi\,R^2)$. All simulations are started at $t=0$ with a flat substrate, a flat meniscus, and a constant imposed flux at the boundaries, and run until a steady state or limit cycle has been reached. We compare our simulation results for the solid deformation to the analytical plane-strain model from~\cite{Karpitschka:NC2015}, imposing a constant contact line velocity $v=\overline{v}$ and replacing the $\delta$-shaped traction by \begin{equation} \label{eq:phasetraction} T(x) = \frac{3 \gamma}{4 \sqrt{2}\, \epsilon } \left(1-\tanh ^2\left(\frac{x - v t}{\sqrt{2}\,\epsilon }\right)\right)^2, \end{equation} which can be derived for the phase field model in equilibrium (see supplementary material for details~\cite{supplementary}). Since $h_s\ll R$, the substrate deformation is well approximated by plane-strain conditions. Importantly, since $\epsilon\ll\ell$, our analytical and numerical results do not significantly depend on the actual value of $\epsilon$. Figure~\ref{fig:simulation_profiles}~(b) shows the quasi-stationary substrate deformation for several imposed velocities, comparing simulation results (markers) to the analytical model (lines), in excellent agreement. Note that this comparison is only possible because steady ridge shapes are observed for the chosen velocities. In an intermediate velocity range we find unsteady cyclic shape dynamics, as detailed below, which cannot be captured by the analytical model.
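As a consistency check (an illustrative sketch, not part of the published code), one can verify numerically that the regularized traction of Eq.~(\ref{eq:phasetraction}) integrates to the liquid surface tension $\gamma$, i.e., that spreading the pull of the meniscus over the width $\epsilon$ conserves the total force:

```python
import numpy as np

gamma = 38e-3   # liquid surface tension [N/m]
eps = 4.75e-6   # phase-field interface thickness [m]

def traction(x, v=0.0, t=0.0):
    """Regularized capillary traction T(x) of the diffuse interface."""
    u = (x - v * t) / (np.sqrt(2.0) * eps)
    return 3.0 * gamma / (4.0 * np.sqrt(2.0) * eps) * (1.0 - np.tanh(u) ** 2) ** 2

# Riemann sum over a window much wider than eps; since the integral
# of sech^4(s) over the real line is 4/3, the total traction is gamma.
x = np.linspace(-20 * eps, 20 * eps, 20001)
total = np.sum(traction(x)) * (x[1] - x[0])
print(total)  # ≈ 0.038 N/m = gamma
```

The result is independent of $\epsilon$, consistent with the statement that the analytical and numerical results do not depend on the regularization width.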
\section{Modes of contact line motion}% \begin{figure}\begin{center}% \includegraphics{20220107_MacroangleVsDisplacement.pdf} \caption{Rotation $\phi$ of the liquid-ambient interface at the contact line, as a function of the contact line position, for different imposed mean velocities. Slow and fast speeds show a continuous motion (blue and yellow, respectively). For intermediate velocities (red) we observe a strong stick-slip behavior, in which the liquid angle oscillates with an amplitude that is comparable to its mean.} \label{fig:simulation_angle_displacement}% \end{center}\end{figure}% The dynamics of the contact line motion are characterized by the time-dependent rotation $\phi = \theta-\theta_{eq}$ of the liquid interface at the triple line (cf. Fig.~\ref{fig:simulation_profiles}~(a)). Figure~\ref{fig:simulation_angle_displacement} shows $\phi$ as a function of contact line position for the three characteristic regimes that we find in our simulations. At small speed ($v\lesssim v_{\ell}$, blue), after some initial transient the contact line moves steadily, with a constant dynamic contact angle. Here, the relation between $v$ and $\phi$ is permanently dominated by viscoelastic braking~\cite{Carre:N1996,Long:L1996,Karpitschka:NC2015}. Once the forcing velocity exceeds a critical value, the motion becomes unsteady, finding a limit cycle after an initial transient (red): the liquid interface rotation $\phi$ shows large oscillations, of peak-to-peak amplitude $\Delta\phi$ on the order of the mean rotation $\overline{\phi}$, with a non-trivial waveform, as the contact line advances. This behavior is not captured by the simple analytical model. For larger speeds (yellow), we observe again a constant $\phi$ after an initial transient. We note here that the motion in this regime is very sensitive to discretization artifacts and requires rather fine grid resolutions to give consistent results.
Movies illustrating contact line motion and substrate dynamics for the three modes in Figure~\ref{fig:simulation_angle_displacement} can be found in the supplementary data. \begin{figure}\begin{center}% \includegraphics{20220107_Phaseportrait_LOGLOG.pdf}% \caption{Phase portraits for contact line motion. Blue: stable, stationary-motion regime. After an initial transient, the contact line finds a stationary constant value in the $v$-$\phi$-plane. Discretization artifacts are visible for small speeds (mind the logarithmic scale). Red: stick-slip motion, characterized by a large limit cycle in the $v$-$\phi$-plane. Yellow: at large speeds, stick-slip motion is suppressed, finding a stationary point in the $v$-$\phi$-plane again. }% \label{fig:phaseportrait}% \end{center}\end{figure}% Figure~\ref{fig:phaseportrait} shows a phase portrait of the contact line motion, i.e., in terms of the physically relevant variables $\phi$ and $v$: $\phi=\theta-\theta_{eq}$ is a measure of the imbalance in Young's equation, and thus a measure of the total dissipative force (liquid and solid). Multiplied with the instantaneous velocity, one obtains the total dissipated power per unit length of contact line, since our equations of motion are overdamped. For slow forcing speeds (blue), we observe a continuous, steady contact line motion, up to the scale of grid artifacts (mind the logarithmic scales). For intermediate forcing speeds (red), we observe a limit cycle: As the liquid rotation exceeds a well-defined maximum (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} in Fig.~\ref{fig:phaseportrait}), the contact line accelerates. In this phase, it surfs down its own wetting ridge, releasing energy stored in the meniscus curvature, rate-limited partly by liquid dissipation (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}} in Fig.~\ref{fig:phaseportrait}).
It thus decelerates, and a new wetting ridge starts to grow, opposing the contact line motion further (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} in Fig.~\ref{fig:phaseportrait}) until the next cycle starts. For larger forcing speeds, the region covered by the limit cycle decreases until it virtually vanishes (yellow), up to grid artifacts. This is caused by the growing importance of liquid dissipation, which effectively limits, and finally prevents, the large-speed excursions during the slip phases. \section{Regimes of contact line motion}% \begin{figure}\begin{center}% \includegraphics{20211226_MeanAngleVsVelocity_loglog.pdf}% \caption{Rotation $\phi$ of the liquid-ambient interface relative to its equilibrium orientation, as a function of the imposed mean velocity $\overline{v}$. In the red region, the contact line motion is unsteady (stick-slip) in the simulations. The solid black line depicts the analytical calculation of the ridge tip rotation, for an imposed constant contact line velocity. Markers depict the maximum, mean, and minimum angle observed in the simulations. The onset of unsteady motion correlates with the maximum in ridge rotation. At large speeds, the amplitude of the angle oscillations decreases and the motion becomes stationary again until, finally, liquid dissipation becomes relevant.}% \label{fig:simulation_angle_v}% \end{center}\end{figure}% We characterize the contact line motion by $\phi_{max}$ (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} in Fig.~\ref{fig:phaseportrait}), $\phi_{min}$ (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} in Fig.~\ref{fig:phaseportrait}), and $\overline{\phi}$, the maximum, minimum, and mean values of $\phi$ in the stationary/limit-cycle regime. Figure~\ref{fig:simulation_angle_v} shows these values as a function of the imposed (long-term mean) $\overline{v}$.
For small speeds, $v=\overline{v}$, and the simulated $\phi$ (symbols) coincides with the result of the analytical model (black line), indicating the stability of steady contact line motion. The onset of stick-slip motion (red region) aligns with the maximum of $\phi$ observed in the analytical model, where $v$ is imposed instead of $\overline{v}$. This was anticipated in~\cite{Karpitschka:NC2015}, since the rotation $\phi$ is a measure for the dissipative (viscoelastic braking) force: a dissipative force that decreases with speed causes acceleration, and thus an unstable motion. The maximum braking force is observed at $\overline{v} = v_{\ell} = \gamma/(G_0\,\tau)$, the elastocapillary velocity: The finite width of the traction distribution regularizes the dissipation singularity at the scale $\epsilon\ll\ell\ll h_s$. Thus the contact line motion excites a dominant frequency $\sim \overline{v}/\epsilon$ in the solid, corresponding to a dynamical elastocapillary length $\ell_{\overline{v}}\sim\gamma_s/G^*(\overline{v}/\epsilon) = \gamma_s\,\epsilon/(\eta_s\,\overline{v})$. Resonance is expected at $\epsilon\sim\ell_{\overline{v}}$, i.e., $\overline{v}\sim\gamma_s/\eta_s$, independent of the choice of $\epsilon$, which is confirmed in our analytical model~\cite{supplementary}. $\phi_{max}$ remains approximately constant upon entering the stick-slip regime, indicating a well-defined upper limit of the viscoelastic braking force also in unsteady situations (cf. location \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} in Fig.~\ref{fig:phaseportrait}). However, this force periodically drops to much smaller values, as indicated by the much smaller values of $\phi_{min}$. In these \emph{surfing} phases (\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}} in Fig.~\ref{fig:phaseportrait}), liquid dissipation and the finite capillary energy stored in the curved meniscus are the rate-limiting factors.
As the imposed $\overline{v}$ is increased further, the amplitude of the oscillation $\Delta\phi$ shrinks, reaching virtually zero (indicated by the fading red region). In this regime, the reduced viscoelastic braking force ($\phi_{max}$) limits the build-up of capillary energy in the meniscus, while liquid dissipation prevents its fast release. Thus the oscillatory motion is effectively damped out by liquid dissipation, while the overall motion is still governed by viscoelastic braking: $\overline{\phi}\sim \overline{v}^{-1}$ closely follows the result from the analytical model. The increased mean liquid rotation for the largest velocity $\sim 0.28 v_{\ell}$, relative to the prediction of the analytical model, is caused by liquid dissipation. This can be rationalized by a comparison with the Cox-Voinov law for moving contact lines on rigid surfaces, which, given the capillary number $\sim 10^{-2}$, predicts rotations on this order of magnitude~\cite{Cox:JFM1986,Voinov:FD1977,Snoeijer:ARFM2013}. In this hydrodynamic regime, one returns to the classical wetting physics on rigid surfaces. \begin{figure}\begin{center}% \includegraphics{20220107_PhaseDiagrams_Merged.pdf} \caption{Phase diagrams for stick-slip behavior vs. contact line speed. Blue: steady contact line motion; red: stick-slip; yellow: high-speed continuous motion. (a) Tuning the magnitude of capillary forces $\gamma = \gamma_s$ on the vertical axis. (b) Varying substrate viscosity on the vertical axis.}% \label{fig:simulation_Phasediagram}% \end{center}\end{figure}% In Figure~\ref{fig:simulation_Phasediagram}, we summarize the dynamical wetting behavior in terms of these three modes, as a function of the imposed mean speed and the solid parameters. Steady small-speed, stick-slip, and steady high-speed modes are indicated by blue, red, and yellow discs, respectively. 
In panel (a), we vary on the vertical axis the solid and liquid surface tensions, and thus the elastocapillary number $\alpha_s = \ell / h_s$, while keeping the Neumann angles of static wetting constant. The onset of stick-slip is located near $\overline{v}=v_{\ell}$, given by the maximum of $\phi$ vs. $v$. This maximum is independent of $\alpha_s$, up to a small correction due to the finite thickness $\epsilon$ of the fluid interface, as can be shown by the analytical model (see Fig.~1 of the supplementary material~\cite{supplementary}). In physical units, however, the onset of stick-slip is proportional to $\gamma_s$ since $v_{\ell} \sim \gamma_s$. The transition to the fast continuous mode is, in scaled units, nearly independent of the surface tension. Consequently, the stick-slip mode disappears at very low $\gamma$. Similarly, the solid viscosity $\eta_s$ (panel (b)) has no measurable impact on the critical $v/v_{\ell}$ for the transition to stick-slip, but the physical critical velocity is inversely proportional to $\eta_s$ since $v_{\ell}\sim\eta_s^{-1}$. Thus, for small solid viscosities, the damping effect of liquid dissipation, and thus the transition back to steady motion, becomes noticeable already at smaller $\overline{v}/v_{\ell}$, such that the stick-slip region ultimately disappears at very low $\eta_s$. In any case, at very large speeds, liquid dissipation will take over, leading to wetting dynamics equivalent to those on rigid surfaces. \section{Discussion}% In this Letter, we provide a comprehensive numerical analysis of dynamical soft wetting, including the physics of all relevant elements: the liquid, the solid, and the interfaces.
For each element, we used the minimal required level of complexity, to keep the physics intact and conceivable: the Stokes limit for the fluid, a Kelvin-Voigt constitutive relation for the soft solid, regularized at a constant scale $\epsilon\ll\ell$, and constant and equal solid surface tensions on all three interfaces. This simple model already requires a complex, strongly coupled multi-physics modelling approach, and exhibits rich behaviors. Our numerical experiments cover a wide range of system parameters, and we reveal three regimes in which the dominant physical mechanisms differ: (i) a slow regime, in which the contact line motion is entirely dominated by the dissipation in the solid. This regime is observed as long as the viscoelastic braking force increases with speed. (ii) an intermediate regime, in which the dominant rate-limiting mechanism periodically switches from solid to liquid dissipation. This regime starts where the viscoelastic braking force exhibits a maximum with respect to the contact line velocity. This maximum is caused by a resonance effect, due to the regularization of a singular dissipation at some finite (constant) length scale. Other mechanisms, like dynamic solid surface tensions (surface constitutive relations~\cite{Xu:SM2018,Xu:NC2017,Gorcum:PRL2018,Liu:SM2020,Heyden:PRSA2021,Zhao:SM2021}) or a constitutive relation that exhibits resonance (e.g., a standard linear solid~\cite{Karpitschka:NC2015}) would lead to the same phenomenology. (iii) a large-$\overline{v}$ regime with continuous motion, yet governed by viscoelastic braking, in which liquid dissipation prevents strong oscillations of the meniscus. Since the viscoelastic braking force, in contrast to liquid dissipation, does not increase with velocity, we ultimately find liquid dissipation dominating the contact line motion again, and one recovers the wetting physics of rigid surfaces.
With this first comprehensive overview of soft wetting physics scenarios, we provide a strong basis for interpreting the different phenomenology observed in experiments, ranging from paraffins~\cite{Kajiya:SM2013} and microelectronic sealants~\cite{Gorcum:PRL2018,Style:PRL2013} to biology~\cite{Holly:EER1971,Prakash:S2008,PerezGonzalez:NP2018}, and motivate experiments in the so-far little-explored large-$\overline{v}$ regimes. \acknowledgments{SK and SA acknowledge funding by the German Research Foundation (DFG project no. KA4747/2-1 to SK and AL1705/5-1 to SA). Simulations were performed at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.}
\section{Introduction} \label{sec:intro} Personalized recommendations, tailored to individual characteristics, are becoming increasingly popular in real life, for example in healthcare, education, and e-commerce. An optimal \textit{individualized treatment rule} (ITR) is defined as the rule that maximizes \textit{the mean value function} of an outcome of interest over the target population, were all individuals in the whole population to follow the given rule. Many machine learning approaches to estimating the optimal ITR are available, such as outcome-weighted learning with support vector machines \citep{zhao2012estimating,zhou2017residual}, and regression-based methods with Adaboost \citep{kang2014combining}, regularized linear basis functions \citep{qian2011performance}, K-nearest neighbors \citep{zhou2017causal}, and generalized additive models \citep{moodie2014q}. However, the ITRs derived by machine learning methods can be too complex to extract practically meaningful insights for policymakers. For example, in healthcare, clinicians need to scrutinize the estimated ITRs for scientific validity, but black-box rules conceal the relationship between patient characteristics and treatment recommendations. In these cases, parsimonious and interpretable ITRs are desirable. It is well known that randomized experiments are the gold standard for learning optimal ITRs because randomization of treatments in the design stage ensures no unmeasured confounding \citep{greenland1990randomization}. However, randomized experiments might lack external validity or representativeness \citep{rothwell2005external} because of the selective criteria for individual eligibility in the experiments. Consequently, the distribution of the covariates in the randomized experiments might differ from that in the target population. Thus, a parsimonious ITR constructed from randomized experiments cannot directly generalize to a target population \citep{zhao2019robustifying}.
Alternatively, real-world data (RWD) such as population survey data usually involve a large sample that is representative of the target population. A large body of literature in statistics has focused on transporting or generalizing average treatment effects from experiments to a target population or a different study population \citep[e.g.,][]{cole2010generalizing,hartman2015sample}. One exception for ITRs is \citet{zhao2019robustifying,mo2020learning}, which estimate the optimal linear ITR from the experimental data by optimizing the worst-case quality assessment among all covariate distributions in the target population satisfying certain moment conditions or distributional closeness; in contrast, we address the problem in a different situation, where a representative sample is already at hand. In this article, we propose a general framework for estimating a generalizable, interpretable optimal ITR from randomized experiments for a target population represented by the RWD. We propose \textit{transfer weighting}, which shifts the covariate distribution in the randomized experiments to that in the RWD. To correctly estimate the population optimal ITR, transfer weights are used in two places. First, the empirical estimator of the population value function is biased when considering only the experimental sample, and hence the transfer weights are used to correct the bias of the estimator of the value function. Second, the nuisance functions in the value function, such as the propensity score and the outcome mean, also require transfer weighting in order to estimate the population counterparts. Then we can obtain the weighted interpretable optimal ITR by reformulating the problem of maximizing the weighted value as that of minimizing the weighted classification error, with flexible loss function choices. We consider different parametric and nonparametric approaches to learning transfer weights. Moreover, a practical issue arises regarding the choice of transfer weights.
Weighting corrects for the bias but also reduces the effective sample size and increases the variance of the weighted value estimator. To address this bias-variance tradeoff, we propose a cross-validation procedure that maximizes the estimated population value function to choose among the weighting schemes. Lastly, we provide a theoretical guarantee of the risk consistency of the proposed transfer-weighted ITRs. The remainder of the article is organized as follows. We introduce our proposed method in Section \ref{sec:meth}. In Section \ref{sec:asy}, we study the asymptotic properties of the proposed estimator to theoretically guarantee its performance in the target population. In Section \ref{sec:sim}, we use various simulation settings to illustrate how the approach works. In Section \ref{sec:application}, we apply the proposed method to the National Supported Work program \citep{lalonde1986evaluating} to improve the decisions of whether to recommend the job training program based on individual characteristics over the target population. We conclude the article in Section \ref{sec:Discussion}. \section{Basic Setup} \subsection{Notation} Let $\boldsymbol{X}\in\mathcal{X}\subseteq\mathbb{R}^{p}$ be a vector of pre-treatment covariates, $A\in\mathcal{A}=\{0,1\}$ the binary treatment, and $Y\in\mathbb{R}$ the outcome of interest. Under the potential outcomes framework \citep{rubin1974estimating}, let $Y^{*}(a)$ denote the potential outcome had the individual received treatment $a\in\mathcal{A}$. Define a treatment regime as a mapping $d:\mathcal{X}\rightarrow\mathcal{A}$ from the space of individual characteristics to the treatment space.
Then we define the potential outcome under regime $d$ as $Y^{*}(d)=\sum_{a\in\mathcal{A}}Y^{*}(a)\bone\{d(\boldsymbol{X})=a\}$, the value of regime $d$ as $V(d)=\mathbb{E}\{Y^{*}(d)\}$, where the expectation is taken over the distribution in the target population, and the optimal regime $d^{\text{opt}}$ as the one satisfying $V(d^{\text{opt}})\geq V(d),\:\forall d.$ As discussed in the introduction, in many applications such as healthcare, interpretable treatment regimes are desirable, since they might bring more domain insights and be more trustworthy to clinicians than complex black-box regimes. Thus, it is valuable to learn an optimal interpretable regime. Specifically, we focus on regimes with linear forms, i.e., $d(\boldsymbol{X};\eta)=\bone(\eta_{0}+\eta_{1}^{\intercal}\boldsymbol{X}>0),$ denoted by $d_{\eta}$ for simplicity. We aim at learning ITRs that lead to the highest benefits for the target population of size $N$, $\{\boldsymbol{X}_{i},Y_{i}^{*}(0),Y_{i}^{*}(1)\}_{i=1}^{N}$. Suppose we have access to two independent data sources: the experimental data $\{(\boldsymbol{X}_{i},A_{i},Y_{i})\}_{i\in\mathcal{I}_n}$ and the RWD $\{\boldsymbol{X}_{i}\}_{i\in\mathcal{I}_m}$, where $\mathcal{I}_n,\mathcal{I}_m\subseteq\{1,\dots,N\}$ are the samples' index sets, with sizes $n$ and $m$, respectively. In the experimental data, individuals are selected based on certain criteria and then are completely randomized to receive treatments. The experimental data might thus involve selection bias, which leads to a different distribution of $\boldsymbol{X}$ in the experiments from that in the target population. On the other hand, we assume that individuals in the RWD are a random sample from the target population, so that the distribution of the covariates in the RWD characterizes that of the target population.
Let $S_{i}$ be the binary indicator of whether the $i$th individual participates in the randomized experiment: $S_{i}=1$ if $i\in\mathcal{I}_{n}$ and $0$ if $i\in\mathcal{I}_{m}$. We denote $\pi_{A}(\boldsymbol{X})$ as the propensity score of receiving the active treatment $P(A=1\mid\boldsymbol{X})$, $Q(\boldsymbol{X},A)$ as the conditional expectation of the outcome given covariates and treatment $\mathbb{E}(Y\mid\boldsymbol{X},A)$, and $\tau(\boldsymbol{X})$ as the contrast function $Q(\boldsymbol{X},1)-Q(\boldsymbol{X},0)$. We make the following assumptions, which are standard and well-studied in the causal inference and treatment regime literature \citep{chakraborty2013statistical}. \begin{assumption} \begin{enumerate*}[label={\alph*:}, ref={Assumption~\theassumption.\alph*}] \item \label{asu: nuc} $A\protect\mathpalette{\protect\independenT}{\perp}{\{Y^{*}(0),Y^{*}(1)\}}\mid(\boldsymbol{X},S=1)$. \item \label{asu: consistency}$Y=Y^{*}(A)$. \item \label{asu: positivity}There exist $c_{1},c_{2}\in(0,1)$ such that $\pi_{A}(\boldsymbol{X})\in[c_{1},c_{2}]$ for all $\boldsymbol{X}\in\mathcal{X}.$ \end{enumerate*} \end{assumption} \subsection{Existing learning methods for ITRs \label{subsec:Existing-learning-methods}} Identifying the optimal individualized treatment rule has been studied in a large body of statistical literature. One category is regression-based, directly estimating the conditional expectation of the outcome given patients' characteristics and treatments received \citep{murphy2005generalization}. Both parametric and nonparametric models can be used to approximate $Q(\boldsymbol{X},A)$. To encourage interpretability, one strategy is to posit a parametric model $Q(\boldsymbol{X},A;\beta)$, and the estimated optimal ITR within the class indexed by $\beta$ is defined as $\bone\{\tau(\boldsymbol{X};\hat{\beta})>0\}$.
Another category is value-based: first estimate the value of an ITR, then search for the ITR that renders the highest value. One strategy is to estimate $\pi_{A}(\boldsymbol{X})$ first, where one common way is to posit a parametric model $\pi_{A}(\boldsymbol{X}_{i};\gamma)$ such as a logistic regression or a constant, and then use inverse probability weighting (IPW; \citealp{horvitz1952generalization}) to estimate the value of an ITR indexed by $\eta$. This estimator is consistent for the true value only if $\pi_{A}(\boldsymbol{X}_{i};\gamma)$ is correctly specified, and it can be unstable if the estimate $\pi_{A}(\boldsymbol{X}_{i};\hat{\gamma})$ is close to zero. The augmented inverse probability weighting (AIPW; \citealp{zhang2012robust}) approach combines the IPW and outcome regression estimators $\widehat{Q}(\boldsymbol{X}_i, A)$ to estimate the value \begin{equation} \begin{split} \widehat{V}_{\text{aw}}(d_{\eta};\mathcal{I}_{n})=\frac{1}{n}\sum_{i\in\mathcal{I}_n}\Bigg(&\left[\frac{A_{i}d(\boldsymbol{X}_{i};\eta)}{\pi_{A}(\boldsymbol{X}_{i};\hat{\gamma})}+\frac{(1-A_{i})\left\{ 1-d(\boldsymbol{X}_{i};\eta)\right\} }{1-\pi_{A}(\boldsymbol{X}_{i};\hat{\gamma})}\right]\left[Y_{i}-\widehat{Q}\left\{ \boldsymbol{X}_{i},d(\boldsymbol{X}_{i};\eta)\right\} \right]\\ &+\widehat{Q}\left\{ \boldsymbol{X}_{i},d(\boldsymbol{X}_{i};\eta)\right\}\Bigg). \label{eq:awV} \end{split} \end{equation} The AIPW estimator is doubly robust in the sense that it is consistent for the value of the ITR $d_{\eta}$ if either the model for $\pi_{A}(\boldsymbol{X})$ or that for $Q(\boldsymbol{X},A)$ is correctly specified, but not necessarily both. Learning the optimal ITR by optimizing the value can also be formulated as a classification problem \citep{zhang2012estimating}. The classification perspective has forms equivalent to all the regression-based and value-based methods above, and can also bring additional optimization advantages over direct value searching.
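As a concrete illustration, the following Python sketch (simulated data with a known randomization probability and, for simplicity, the true outcome regression plugged in; all names are illustrative, not the authors' code) computes the doubly robust value estimate $n^{-1}\sum_i[\text{IPW}_i\,(Y_i-\widehat{Q}_{d,i})+\widehat{Q}_{d,i}]$ for two candidate rules:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)                  # randomized: pi_A = 0.5
tau_true = 0.5 - X                                # true treatment contrast
Y = 1.0 + X + A * tau_true + rng.normal(scale=0.1, size=n)

pi_A = np.full(n, 0.5)                            # known propensity score
Q1 = 1.0 + X + tau_true                           # E[Y | X, A=1]
Q0 = 1.0 + X                                      # E[Y | X, A=0]

def value_aipw(d):
    """AIPW estimate of the value of a 0/1 rule d evaluated at X."""
    Qd = np.where(d == 1, Q1, Q0)
    ipw = np.where(d == 1, A / pi_A, (1 - A) / (1 - pi_A))
    return np.mean(ipw * (Y - Qd) + Qd)

v_opt = value_aipw((tau_true > 0).astype(int))    # treat iff contrast positive
v_none = value_aipw(np.zeros(n, dtype=int))       # never treat
print(v_opt, v_none)
```

The rule that treats exactly when the contrast is positive attains a visibly higher estimated value than never treating.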
\begin{lemma}\label{lemma:1} The optimal treatment rule obtained by maximizing $V(d_{\eta})$ in $\eta$ is equivalent to that obtained by minimizing the risk \begin{equation} \mathbb{E}\left(\big|\tau(\boldsymbol{X})\big|\Big[\bone\big\{\tau(\boldsymbol{X})>0\big\}-d(\boldsymbol{X};\eta)\Big]^{2}\right)\label{eq:classificationObj} \end{equation} in $\eta$ \citep{zhang2012estimating}. \end{lemma} Lemma \ref{lemma:1} reformulates the problem of estimating an optimal treatment rule as a weighted classification problem, where $|\tau(\boldsymbol{X})|$ is regarded as the weight, $\bone\{\tau(\boldsymbol{X})>0\}$ the binary label, and $d(\boldsymbol{X};\eta)$ the classification rule of interest. Let $l_{0-1}$ be the 0-1 loss, i.e., $l_{0-1}(u)=\bone(u\leq0)$. The objective function (\ref{eq:classificationObj}) can also be rewritten as \begin{equation} \resizebox{.94\hsize}{!}{$\mathcal{R}_{\mathcal{F}}(\tau;\eta,l_{0-1})=\mathbb{E}\left\{ \mathcal{L}(\tau;\eta,l_{0-1})\right\} ,\ \ \mathcal{L}(\tau;\eta,l_{0-1})=|\tau(\boldsymbol{X})|\, l_{0-1}\left\{ \left[2\bone\{\tau(\boldsymbol{X})>0\}-1\right]f(\boldsymbol{X};\eta)\right\},\label{eq:obj_0_1_loss}$} \end{equation} where $f(\boldsymbol{x};\eta)=\eta^{\intercal}\boldsymbol{x}$, $d(\boldsymbol{x};\eta)=\bone\left\{f(\boldsymbol{x};\eta)>0\right\}$, and $\mathcal{F}$ is a certain class covering the parameters of the ITRs, i.e., $\eta\in\mathcal{F}$. Let $\hat{\tau}_{i}$ be an estimator of $\tau(\boldsymbol{X}_{i})$.
From Lemma \ref{lemma:1}, to estimate the optimal ITR, one can minimize the following empirical objective function: \begin{eqnarray} n^{-1}\sum_{i\in\mathcal{I}_n}|\hat{\tau}_{i}|\{\bone{(\hat{\tau}_{i}>0)}-d(\boldsymbol{X}_{i};\eta)\}^{2} & = & n^{-1}\sum_{i\in\mathcal{I}_n}|\hat{\tau}_{i}|\bone\left\{ \bone\left(\hat{\tau}_{i}>0\right)\neq d(\boldsymbol{X}_{i};\eta)\right\} \nonumber \\ & = & n^{-1}\sum_{i\in\mathcal{I}_n}|\hat{\tau}_{i}|\, l_{0-1}\left[\left\{ 2\bone(\hat{\tau}_{i}>0)-1\right\} f(\boldsymbol{X}_{i};\eta)\right]\nonumber \\ & = & n^{-1}\sum_{i\in\mathcal{I}_n}\mathcal{L}(\hat{\tau}_{i};\eta,l_{0-1}).\label{eq:empObj} \end{eqnarray} In this classification framework, we can also consider the three different approaches to estimating $\tau(\boldsymbol{X}_{i})$, which have equivalent forms and enjoy the same properties as the regression- and value-based estimators. For example, the AIPW estimator of $\tau(\boldsymbol{X}_{i})$ is \[ \hat{\tau}_{\mathrm{aw},i}=\left[\frac{\bone(A_{i}=1)\big\{ Y_{i}-\widehat{Q}(\boldsymbol{X}_{i},1)\big\}}{\hat{\pi}_{A}(\boldsymbol{X}_{i})}+\widehat{Q}(\boldsymbol{X}_{i},1)\right]-\left[\frac{\bone(A_{i}=0)\big\{ Y_{i}-\widehat{Q}(\boldsymbol{X}_{i},0)\big\}}{1-\hat{\pi}_{A}(\boldsymbol{X}_{i})}+\widehat{Q}(\boldsymbol{X}_{i},0)\right]. \] The solution obtained by minimizing (\ref{eq:empObj}) in $\eta$ with $\hat{\tau}_{i}$ replaced by $\hat{\tau}_{\text{aw},i}$ is equivalent to that obtained by maximizing the AIPW value estimator $\widehat{V}_{\text{aw}}(d_{\eta};\mathcal{I}_n)$ in (\ref{eq:awV}) and enjoys the double robustness property. When the experimental sample is representative of the target population, the above ITR learning methods can estimate the population optimal ITR that leads to the highest value.
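The weighted classification objective in (\ref{eq:empObj}) can be evaluated directly; the sketch below (simulated contrasts, illustrative only, not an optimizer for $\eta$) shows that a linear rule aligned with the sign of the contrast incurs a much smaller weighted 0-1 risk than the sign-flipped rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=n)
tau_hat = 0.5 - X + rng.normal(scale=0.05, size=n)   # estimated contrast

def weighted_01_risk(eta0, eta1):
    """Empirical weighted 0-1 risk: mean of |tau_hat| * 1{label != d(X; eta)}."""
    d = (eta0 + eta1 * X > 0).astype(int)            # linear rule d(X; eta)
    label = (tau_hat > 0).astype(int)                # pseudo-label 1{tau_hat > 0}
    return np.mean(np.abs(tau_hat) * (label != d))

r_aligned = weighted_01_risk(0.5, -1.0)   # matches the sign of 0.5 - X
r_flipped = weighted_01_risk(-0.5, 1.0)   # opposite rule
print(r_aligned, r_flipped)
```

Misclassifications near the decision boundary receive small weights $|\hat{\tau}_i|$, which is why the aligned rule's risk is close to zero despite the noise in $\hat{\tau}_i$.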
To correct for the selection bias of the experimental sample, we propose an integrative strategy that leverages the randomization in the experimental sample and the representativeness of the RWD. As pointed out by \citet{zhao2019robustifying}, the optimal rule obtained from the experimental sample is also optimal for the target population if no restriction is imposed on the class of $d$; however, such unrestricted rules are often too complex. Therefore, we focus on learning interpretable and parsimonious ITRs, for which integrative methods are required to correct for the selection bias of the experimental sample. \section{Integrative transfer learning of ITRs\label{sec:meth}} \subsection{Assumptions for identifying population optimal ITRs \label{sec:rule} } To identify population optimal ITRs, we assume that the covariates $\boldsymbol{X}$ fully explain the selection mechanism for inclusion into the randomized experiment and that the covariate distribution in the RWD is representative of the target population. \begin{assumption} \begin{enumerate*}[label={\alph*:}, ref={Assumption~\theassumption.\alph*}] \item \label{asu:cond indep} $S\protect\mathpalette{\protect\independenT}{\perp}\left\{ Y^{*}(0),Y^{*}(1),A\right\} \mid\boldsymbol{X}$. \item \label{asu:unbias}$f(\boldsymbol{X}\mid S=0)=f(\boldsymbol{X})$. \item \label{asu:positivity_pi_S} There exist $c_{3},c_{4}\in(0,1)$ such that $\pi_{S}(\boldsymbol{X})\in[c_{3},c_{4}]$ for all $\boldsymbol{X}\in\mathcal{X}.$ \end{enumerate*} \end{assumption} Combining \ref{asu:cond indep} and \ref{asu: consistency} implies that the experimental sample is non-informative, i.e., the sampling score satisfies $P(S=1\mid A,\boldsymbol{X},Y)=P(S=1\mid\boldsymbol{X})=\pi_{S}(\boldsymbol{X})$. This requires that the observed covariates $\boldsymbol{X}$ are rich enough to determine whether a subject is selected into the experimental data.
\ref{asu:unbias} requires that the covariate distribution in the RWD is the same as that in the target population so that we can leverage the representativeness of the RWD. As a result, $m^{-1}\sum_{i\in\mathcal{I}_{m}}g(\boldsymbol{X}_{i})$ is an unbiased estimator of $\mathbb{E}{\{g(\boldsymbol{X})\}}$, the covariate expectation in the target population, for any $g$. \ref{asu:cond indep} and \ref{asu:unbias} are also adopted in the covariate shift problem in machine learning, a sub-area of domain adaptation, where standard classifiers do not perform well when the covariate distribution in the training data differs from that in the testing data. Re-weighting individuals to correct the over- or under-representation is one way to cope with covariate shift in classification problems \citep{kouw2018introduction}. \ref{asu:positivity_pi_S} requires that each subject has a positive probability of being selected into the experimental data, which guarantees the existence of $\pi_{S}^{-1}(\boldsymbol{X})$. The following proposition identifies the population value function using the inverse of the sampling score as the transfer weight, and leads to an equivalent weighted classification loss function that identifies the population optimal ITR. \begin{proposition} \label{prop:1} Let the transfer weight be $w=\pi_{S}^{-1}(\boldsymbol{X})$. Under \ref{asu: consistency}, \ref{asu:cond indep} and \ref{asu:positivity_pi_S}, the population value of an ITR $d_{\eta}$ is identified by $V(d_{\eta})=\mathbb{E}\left[wSY^{*}\left\{ d_{\eta}(\boldsymbol{X})\right\} \right]$.
Then the population optimal ITR can be obtained by minimizing \[ \mathbb{E}\left(wS\big|\tau(\boldsymbol{X})\big|\Big[\bone\big\{\tau(\boldsymbol{X})>0\big\}-d_\eta(\boldsymbol{X})\Big]^{2}\right) \text{ in $\eta$.} \] \end{proposition} Two challenges arise in using Proposition \ref{prop:1} to estimate the optimal ITR: (i) the weights $w$ are unknown, and (ii) the estimators of $\tau(\boldsymbol{X})$ discussed so far, such as $\hat{\tau}_{\text{aw}}$, are biased for the target population because they are based only on the experimental sample. In the following subsections, we present the estimation of the transfer weights and unbiased estimators of $\tau(\boldsymbol{X})$. \subsection{Estimation of transfer weights} We consider both parametric and nonparametric methods to estimate the transfer weights $w$. Each has its own benefits: parametric methods are easy to implement and efficient if the models are correctly specified, while nonparametric methods are more robust to model misspecification. \subsubsection{Parametric approach \label{subsec:Parametric-approach}} For parametric approaches, we assume the target population size $N$ is known. Similar to modeling the propensity score $\pi_{A}(\boldsymbol{X})$, we can posit a logistic regression model for $\pi_{S}(\boldsymbol{X})$; i.e., $\text{logit}\{\pi_{S}(\boldsymbol{X};\alpha)\}=\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}$, where $\alpha=(\alpha_{0},\alpha_{1}^{\intercal})^{\intercal}\in\mathbb{R}^{p+1}$.
The standard maximum likelihood estimator of $\alpha$ is \begin{align} \Hat{\alpha}= & \underset{\alpha}{\mathrm{argmax}}\frac{1}{N}\sum_{i=1}^{N}\left[S_{i}\log\pi_{S}(\boldsymbol{X}_{i};\alpha)+(1-S_{i})\log\left\{ 1-\pi_{S}(\boldsymbol{X}_{i};\alpha)\right\} \right]\nonumber \\ = & \underset{\alpha}{\mathrm{argmax}}\frac{1}{N}\sum_{i=1}^{N}\left[S_{i}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})-\mathrm{log}\left\{ 1+\mathrm{exp}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\right\} \right].\label{eq:mle} \end{align} However, the second term $N^{-1}\sum_{i=1}^{N}\mathrm{log}\left\{ 1+\mathrm{exp}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\right\} $ in (\ref{eq:mle}) cannot be computed because we do not observe $\boldsymbol{X}$ for all individuals in the population. The key insight is that this term can be estimated by $m^{-1}\sum_{i\in\mathcal{I}_m}\mathrm{log}\left\{ 1+\mathrm{exp}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\right\} $ based on the RWD. This strategy leads to our modified maximum likelihood estimator \[ \Hat{\alpha}=\underset{\alpha}{\mathrm{argmax}}\left[\frac{1}{N}\sum_{i=1}^{N}S_{i}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})-\frac{1}{m}\sum_{i\in\mathcal{I}_m}\mathrm{log}\left\{ 1+\mathrm{exp}(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\right\} \right]. \] Alternatively, \ref{asu:unbias} leads to unbiased estimating equations for $\alpha$: \[ \mathbb{E}\left\{ \frac{Sg(\boldsymbol{X})}{\pi_{S}(\boldsymbol{X};\alpha)}\right\} =\mathbb{E}\{g(\boldsymbol{X})\}, \] for any $g$. Therefore, we can solve the following estimating equations \[ \frac{1}{N}\sum_{i=1}^{N}\frac{S_{i}g(\boldsymbol{X}_{i})}{\pi_{S}(\boldsymbol{X}_{i};\alpha)}=\frac{1}{m}\sum_{i\in\mathcal{I}_m}g(\boldsymbol{X}_{i}) \] for $\alpha$, where $g(\boldsymbol{X})\in\mathbb{R}^{p+1}$. For simplicity, one can take $g(\boldsymbol{X})$ to be $\partial\text{logit}\{\pi_{S}(\boldsymbol{X};\alpha)\}/\partial\alpha$.
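The modified maximum likelihood estimator can be implemented with a generic smooth optimizer. Below is a hedged sketch (our own implementation with \texttt{numpy} and \texttt{scipy}; the function name \texttt{fit\_sampling\_score} and the toy data are ours): the infeasible population average of $\log\{1+\exp(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\}$ is replaced by its average over the representative RWD sample.

```python
import numpy as np
from scipy.optimize import minimize

def fit_sampling_score(X_trial, X_rwd, N):
    """Modified MLE for logit{pi_S(x; alpha)} = alpha0 + alpha1'x.

    The population term N^{-1} sum_i log{1 + exp(alpha0 + alpha1'X_i)}
    is replaced by its average over the representative RWD sample.
    """
    Zt = np.column_stack([np.ones(len(X_trial)), X_trial])
    Zr = np.column_stack([np.ones(len(X_rwd)), X_rwd])

    def neg_loglik(alpha):
        # minus the modified log-likelihood; logaddexp is overflow-safe
        return -(Zt @ alpha).sum() / N + np.logaddexp(0.0, Zr @ alpha).mean()

    return minimize(neg_loglik, np.zeros(Zt.shape[1]), method="L-BFGS-B").x

# Toy example: selection into the trial with logit P(S=1|X) = -4 + X.
rng = np.random.default_rng(1)
N = 50000
X = rng.normal(size=(N, 1))
S = rng.binomial(1, 1 / (1 + np.exp(-(-4 + X[:, 0]))))
alpha_hat = fit_sampling_score(X[S == 1], X[rng.choice(N, 5000, replace=False)], N)
```

With the selection model above, $\hat{\alpha}$ recovers $(\alpha_{0},\alpha_{1})=(-4,1)$ up to sampling error, even though the full population of covariates never enters the likelihood.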
\subsubsection{Nonparametric approach\label{subsec:Nonparametric-approach}} The above parametric approaches require $\pi_{S}(\boldsymbol{X};\alpha)$ to be correctly specified and the population size $N$ to be known. These requirements may be stringent, because the sampling mechanism into the randomized experiment is unknown and the population size is difficult to obtain. We now consider the constrained optimization algorithm of \citet{wang2020minimal} to estimate the transfer weights by \begin{align} & \underset{w}{\text{min}}\ \sum_{i=1}^{N}S_{i}w_{i}\log w_{i}\nonumber \\ & \text{subject to}\ \Big|\sum_{i=1}^{N}w_{i}S_{i}g_{k}(\boldsymbol{X}_{i})-\frac{1}{m}\sum_{i\in\mathcal{I}_m}g_{k}(\boldsymbol{X}_{i})\Big|\leq\sigma_{k}\ \:\:(k=1,\dots,K),\label{eq: minimial weight} \end{align} where $\sigma_{k}\geq0$; $w_{i}\in[0,1]$ for $i\in\mathcal{I}_{n}$; $\sum_{i\in\mathcal{I}_{n}}w_{i}=1$; and $\{g_{1},\ldots,g_{K}\}$ can be chosen as the first-, second-, and higher-order moments of the covariate distribution. The constants $\sigma_{k}$ are the tolerance limits for the imbalance in $g_{k}$. When all the $\sigma_{k}$'s are set to 0, (\ref{eq: minimial weight}) enforces exact balance, and the solution $w$ becomes the entropy balancing weights \citep{hainmueller2012entropy}. The choice of $\sigma_{k}$ is related to the standard bias-variance tradeoff. On the one hand, if $\sigma_{k}$ is too small, the weight distribution can have a large variability in order to satisfy the stringent constraints, and in some extreme scenarios the weights may not exist. On the other hand, if $\sigma_{k}$ is too large, the covariate imbalances remain and therefore the resulting weights are not sufficient to correct for the selection bias.
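For intuition, consider the exact-balance special case with all $\sigma_{k}=0$, where the solution reduces to entropy balancing. Below is a minimal sketch of its convex dual (our own implementation with \texttt{numpy} and \texttt{scipy}; it is not the tuning algorithm of \citet{wang2020minimal}, which additionally selects the tolerances $\sigma_{k}$):

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(G_trial, g_target):
    """Entropy balancing weights: exact balance, i.e., all sigma_k = 0.

    Minimizes sum_i w_i log w_i over weights summing to one, subject to
    sum_i w_i g_k(X_i) = g_target_k, via the dual w_i ~ exp(lambda'g_i).
    """
    Gc = G_trial - g_target  # center trial moments at the target

    def dual(lam):
        # convex log-sum-exp dual; its minimizer balances the moments
        z = Gc @ lam
        zmax = z.max()
        return zmax + np.log(np.exp(z - zmax).sum())

    lam = minimize(dual, np.zeros(G_trial.shape[1]), method="BFGS").x
    z = Gc @ lam
    w = np.exp(z - z.max())
    return w / w.sum()

# Toy check: trial covariate moments are shifted relative to the target.
rng = np.random.default_rng(2)
G = rng.normal(loc=0.5, size=(500, 2))   # g_1, g_2: first moments
w = entropy_balance(G, np.zeros(2))
```

The gradient of the dual objective equals the weighted moment imbalance, so driving it to zero yields weights that exactly balance $\{g_{k}\}$; relaxing to $\sigma_{k}>0$ trades some balance for less variable weights.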
Following \citet{wang2020minimal}, we choose $\sigma_{k}$ from a pre-specified set by the tuning algorithm described in the Supplementary Materials. \subsection{Estimation of $\tau(\boldsymbol{X})$} The transfer weights are also required to estimate $\tau(\boldsymbol{X})$ from the experimental data. In parallel to Section \ref{subsec:Existing-learning-methods}, we can obtain the weighted estimators of $\tau(\boldsymbol{X}).$ For example, the weighted AIPW estimator of $\tau(\boldsymbol{X}_{i})$ is \[ \resizebox{0.98\hsize}{!}{$ \hat{\tau}_{\mathrm{aw},i}^{w}=\Big[\frac{\bone(A_{i}=1)\big\{ Y_{i}-\widehat{Q}_{w}(\boldsymbol{X}_{i},1)\big\}}{\hat{\pi}_{A}^{w}(\boldsymbol{X}_{i})}+\widehat{Q}_{w}(\boldsymbol{X}_{i},1)\Big]-\Big[\frac{\bone(A_{i}=0)\big\{ Y_{i}-\widehat{Q}_{w}(\boldsymbol{X}_{i},0)\big\}}{1-\hat{\pi}_{A}^{w}(\boldsymbol{X}_{i})}+\widehat{Q}_{w}(\boldsymbol{X}_{i},0)\Big]$}, \] where $\widehat{Q}_{w}(\cdot,a)$ can be estimated by weighted least squares for linear $Q$ functions with the estimated transfer weights, and $\hat{\pi}_{A}^{w}(\boldsymbol{X}_{i})=\widehat{P}_{w}(A_{i}=1\mid\boldsymbol{X}_{i})$ is the weighted propensity score, which can be obtained by fitting a weighted regression model with the estimated transfer weights to the experimental data. Let $\hat{\tau}_{i}^{w}$ be a generic weighted estimator of $\tau(\boldsymbol{X}_{i})$. Proposition \ref{prop:1} implies an empirical risk function \begin{equation} \begin{split} & \frac{1}{n}\sum_{i\in\mathcal{I}_n}\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid\{\bone{(\hat{\tau}_{i}^{w}>0)}-d(\boldsymbol{X}_{i};\eta)\}^{2} \\ = & \frac{1}{n} \sum_{i\in\mathcal{I}_n}\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid l_{0-1}\left[\left\{ 2\bone(\hat{\tau}_{i}^{w}>0)-1\right\} f(\boldsymbol{X}_{i};\eta)\right] \label{eq:empObj-w} \end{split} \end{equation} to estimate $\eta$. Optimizing the objective function (\ref{eq:empObj-w}) is challenging because it is non-smooth and non-convex in $\eta$.
One way is to optimize it directly using grid search or the genetic algorithm in \citet{zhang2012robust} for learning linear decision rules (implemented with \texttt{rgenoud} in R), but grid search becomes untenable in higher dimensions of $\eta$, and the genetic algorithm cannot guarantee a unique solution. Alternatively, one can replace $l_{0-1}$ with a surrogate loss function, such as the hinge loss used in the support vector machine \citep{Vapnik1998} and outcome weighted learning \citep{zhao2012estimating}, or a ramp loss \citep{collobert2006trading}, which truncates the unbounded hinge loss by trading convexity for robustness to outliers \citep{wu2007robust}. We adopt the smoothed ramp loss function $l(u)$ proposed by \citet{zhou2017residual}, defined in \eqref{eq:sramp loss}, which retains this robustness and gains computational advantages by being smooth everywhere. $l(u)$ can be decomposed into the difference of two smooth convex functions, $l(u)=l_{1}(u)-l_{0}(u)$, where $l_{s}(u)$ is defined in \eqref{eq:formula:decom}. \begin{minipage}{0.49\textwidth} \begin{align} l(u) & =\begin{cases} 0 & \text{if }u\geq1,\\ (1-u)^{2} & \text{if }0\leq u<1,\\ 2-(1+u)^{2} & \text{if }-1\leq u<0,\\ 2 & \text{if \ensuremath{u\leq-1}.} \end{cases}\label{eq:sramp loss} \end{align} \end{minipage}\hspace{0.5cm} \begin{minipage}{0.38\textwidth} \begin{align} l_{s}(u) & =\begin{cases} 0 & \text{if }u\geq s,\\ (s-u)^{2} & \text{if }s-1\leq u<s,\\ 2s-2u-1 & \text{if }u<s-1. \end{cases}\label{eq:formula:decom} \end{align} \end{minipage} Similar to \citet{zhou2017residual}, we can apply the d.c. algorithm \citep{le1997solving} to solve the non-convex minimization problem by first decomposing the original objective function into the sum of a convex component and a concave component and then minimizing a sequence of convex subproblems.
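The loss \eqref{eq:sramp loss} and the decomposition \eqref{eq:formula:decom} are straightforward to transcribe and check numerically; a brief sketch (our code, using \texttt{numpy}):

```python
import numpy as np

def smoothed_ramp(u):
    """Smoothed ramp loss l(u), transcribed from the piecewise definition."""
    u = np.asarray(u, dtype=float)
    return np.where(u >= 1, 0.0,
           np.where(u >= 0, (1 - u) ** 2,
           np.where(u >= -1, 2 - (1 + u) ** 2, 2.0)))

def l_s(u, s):
    """Convex piece l_s(u); the decomposition is l(u) = l_1(u) - l_0(u)."""
    u = np.asarray(u, dtype=float)
    return np.where(u >= s, 0.0,
           np.where(u >= s - 1, (s - u) ** 2, 2 * s - 2 * u - 1))

# Verify the difference-of-convex decomposition on a grid.
u = np.linspace(-3, 3, 601)
decomposition_ok = np.allclose(smoothed_ramp(u), l_s(u, 1) - l_s(u, 0))
```

For instance, at $u=-0.5$ both sides equal $1.75$: $l(-0.5)=2-(0.5)^{2}$ and $l_{1}(-0.5)-l_{0}(-0.5)=2-0.25$.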
Applying (\ref{eq:formula:decom}) to (\ref{eq:empObj-w}), our empirical risk objective becomes \begin{equation} \underbrace{\frac{1}{n}\sum_{i\in\mathcal{I}_n}\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid l_{1}(u_{i})}_{\text{a convex function of \ensuremath{\eta}}}+\underbrace{\frac{1}{n}\sum_{i\in\mathcal{I}_n}\left\{-\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid l_{0}(u_{i})\right\}}_{\text{a concave function of \ensuremath{\eta} }},\label{formula:sramp obj} \end{equation} where $u_{i}=\left\{2\bone(\hat{\tau}_{i}^{w}>0)-1\right\}f(\boldsymbol{X}_{i};\eta)$. Let $C_{i}^{\text{cav}}=-\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid l_{0}(u_{i})$ and \begin{equation} \xi_{i}=\frac{\partial C_{i}^{\text{cav}}}{\partial u_{i}}=-\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid\frac{\partial l_{0}(u_{i})}{\partial u_{i}},\label{formula:derivative} \end{equation} for $i\in\mathcal{I}_n.$ Therefore, the convex subproblem at the $(t+1)$th iteration of the d.c. algorithm is \begin{equation} \underset{\eta}{\text{min}}\;\;\;\frac{1}{n}\sum_{i\in\mathcal{I}_n}\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid l_{1}(u_{i})+\frac{1}{n}\sum_{i\in\mathcal{I}_n}\xi_{i}^{(t)}u_{i},\label{formula:min gamma} \end{equation} which is a smooth unconstrained optimization problem and can be solved with many efficient algorithms such as L-BFGS \citep{nocedal1980updating}. Algorithm \ref{alg:dc} summarizes the proposed procedure to obtain the minimizer $\hat{\eta}$. \begin{algorithm}[ht!] \caption{The d.c. Algorithm for Finding Linear Decision Rule Parameters} \label{alg:dc} \begin{algorithmic} \State Set a small error tolerance, say $\epsilon=10^{-6}$; initialize $\xi_{i}^{(0)}=2\hat{w}_{i}\mid\hat{\tau}_{i}^{w}\mid$. \Repeat \State Obtain $\hat{\eta}$ by solving (\ref{formula:min gamma}) with L-BFGS; \State Update $u_{i}=\left\{2\bone(\hat{\tau}_{i}^{w}>0)-1\right\}f(\boldsymbol{X}_{i};\hat{\eta})$; \State Update $\xi_{i}^{(t)}$ by (\ref{formula:derivative}).
\Until {$\|\bm{\xi}^{(t+1)}-\bm{\xi}^{(t)}\|_{\infty}\leq\epsilon$} \State \textbf{Return} $\hat{\eta}$. \end{algorithmic} \end{algorithm} \subsection{Cross-validation\label{sec:cv} } We have discussed various approaches for estimating the transfer weights and the individual contrast function. Although transfer weights can correct the selection bias of the experimental data, they also increase the variance of the estimator compared to its unweighted counterpart. The effect of weighting on the precision of estimators can be quantified by the effective sample size, which originated in survey sampling \citep{kish1992weighting}. If the true contrast function can be approximated well by linear models, the estimated ITR with transfer weights may not outperform the one without weighting. As a result, for a given application, it is critical to have a data-adaptive procedure that chooses the best method among the aforementioned weighted estimators and decides whether or not to use transfer weights. Toward this end, we propose a cross-validation procedure. To proceed, we split the sample into two parts, one for learning the ITRs and the other for evaluating the estimated ITRs. We also adopt multiple sample splits to increase robustness, since a single split can yield highly variable results due to randomness.
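The precision cost of weighting mentioned above can be computed directly; a small sketch of Kish's effective sample size (our code, using \texttt{numpy}):

```python
import numpy as np

def effective_sample_size(w):
    """Kish's effective sample size: (sum w)^2 / sum(w^2).

    Equals n for uniform weights and shrinks as the weights become
    more variable, quantifying the precision cost of weighting.
    """
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Uniform weights keep the full sample size; skewed weights shrink it.
ess_uniform = effective_sample_size(np.ones(100))        # 100.0
ess_skewed = effective_sample_size([1.0] * 50 + [3.0] * 50)
```

A large gap between $n$ and the effective sample size signals that the transfer weights are highly variable, which is exactly the situation in which the unweighted candidate in the cross-validation set may win.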
For the evaluation of an ITR $d_{\eta}$, we estimate its value by the weighted AIPW estimator in (\ref{eq:w_aipw}) with the estimated nonparametric transfer weights $\hat{w}$, \begin{equation} \begin{split} \widehat{V}(d_{\eta};\hat{w},\mathcal{I}_n)=\sum_{i\in\mathcal{I}_n}\hat{w}_{i} \Bigg(& \Big[\frac{A_{i}d(\boldsymbol{X}_{i};\eta)}{\hat{\pi}_{A}^{w}(\boldsymbol{X}_{i})}+\frac{(1-A_{i})\left\{ 1-d(\boldsymbol{X}_{i};\eta)\right\} }{1-\hat{\pi}_{A}^{w}(\boldsymbol{X}_{i})}\Big]\left[Y_{i} -\widehat{Q}_{w}\left\{ \boldsymbol{X}_{i},d(\boldsymbol{X}_{i};\eta)\right\} \right]\\ &-\widehat{Q}_{w}\left\{ \boldsymbol{X}_{i},d(\boldsymbol{X}_{i};\eta)\right\}\Bigg).\label{eq:w_aipw} \end{split} \end{equation} Since we are more interested in comparing the weighted and unweighted strategies than in comparing contrast-function estimators, we first fix an estimation method for the contrast function and then use cross-validation to choose among the set of candidate methods $\mathcal{G}$. The set $\mathcal{G}$ contains the proposed learning method with nonparametric transfer weights described in Section \ref{subsec:Nonparametric-approach} and the unweighted estimation method. If the target population size $N$ is available, we can also include in $\mathcal{G}$ the weighted estimators with the two parametric approaches introduced in Section \ref{subsec:Parametric-approach}. We then average the value estimates over all the sample splits for each learning method in $\mathcal{G}$ and return the method with the highest average value estimate. A detailed description of the procedure is presented in Algorithm \ref{alg2}. \begin{algorithm}[h!]
\caption{Cross-validation with Multi-Sample Splits} \label{alg2} \begin{algorithmic} \State \textbf{Input} The experimental data $\mathcal{D}_{n}=\{(\boldsymbol{X}_{i},A_{i},Y_{i})\}_{i\in\mathcal{I}_{n}}$, the RWD $\mathcal{D}_{m}=\{\boldsymbol{X}_{i}\}_{i\in\mathcal{I}_{m}}$, the number of sample splits $B$, the set of candidate methods to compare $\mathcal{G}$, and optionally the target population size $N$ (needed if $\mathcal{G}$ contains parametric weighting methods). \For{b = 1, \dots , $B$} \State Randomly split $\mathcal{D}_{n}$ into two equal-size parts as the experimental training and testing data: $\mathcal{D}_{n}^{\text{train}}$ and $\mathcal{D}_{n}^{\text{test}}$; let $\mathcal{I}_{n}^{\text{test}}$ be the index set of $\mathcal{D}_{n}^{\text{test}}$. Similarly, randomly split $\mathcal{D}_{m}$ into two equal-size parts $\mathcal{D}_{m}^{\text{train}}$ and $\mathcal{D}_{m}^{\text{test}}$; \State Estimate ITRs with all methods in $\mathcal{G}$ based on $\mathcal{D}_{n}^{\text{train}}$ and $\mathcal{D}_{m}^{\text{train}}$ (use $N/2$ as the target population size if $\mathcal{G}$ contains parametric weighting methods). Denote the estimated ITRs as $\{\hat{d}_{g}^{\text{train}}\}_{g\in\mathcal{G}}$; \State Obtain the nonparametric transfer weights $\{\widetilde{w}_{i}\}_{i\in\mathcal{I}_{n}^{\text{test}}}$ based on the test data $\mathcal{D}_{n}^{\text{test}}$ and $\mathcal{D}_{m}^{\text{test}}$; \State Estimate the value of the estimated ITRs with the weighted AIPW estimator $\widehat{V}(\hat{d}_{g}^{\text{train}};\widetilde{w},\mathcal{I}_{n}^{\text{test}})$, denoted as $\widehat{V}_{g}^{(b)}, g\in\mathcal{G}$. \EndFor \State Average the estimated values over the $B$ splits for each estimated ITR, denoted as $\overline{V}_{g}=B^{-1}\sum_{b=1}^{B}\widehat{V}_{g}^{(b)}$, $g\in\mathcal{G}$. \State \textbf{Return} $\underset{g\in\mathcal{G}}{\arg\max}\,\overline{V}_{g}$.
\end{algorithmic} \end{algorithm} \section{Asymptotic Properties \label{sec:asy} } In this section, we provide a theoretical guarantee that the estimated ITR within a certain class $\mathcal{F}$ is consistent for the true population optimal ITR within $\mathcal{F}$. Toward this end, we show that the expected risk of the estimated ITR is consistent for that of the true optimal ITR within $\mathcal{F}$. In particular, we consider the ITRs learned with the nonparametric transfer weights and the weighted AIPW estimator $\hat{\tau}_{\text{aw}}^{w}$, with the logistic regression model $\pi_{A}(\boldsymbol{X};\gamma)$ and a linear $Q$ function. The extensions to other cases with parametric transfer weights and regression and IPW estimators are straightforward. We first introduce additional notation. Let $\theta=(\gamma^{\intercal},\beta^{\intercal})^{\intercal}$ and let the contrast function be \begin{equation} \resizebox{0.92\hsize}{!}{$ \tau_{\text{aw}}^{\theta}(\boldsymbol{X})=\frac{\bone(A=1)\left\{ Y-Q(\boldsymbol{X},1;\beta)\right\} }{\pi_{A}(\boldsymbol{X};\gamma)}+Q(\boldsymbol{X},1;\beta)-\frac{\bone(A=0)\{Y-Q(\boldsymbol{X},0;\beta)\}}{1-\pi_{A}(\boldsymbol{X};\gamma)}-Q(\boldsymbol{X},0;\beta)$}.\label{eq:tau_theta} \end{equation} Define the general smoothed ramp loss function $l^{\zeta}(u)=l(\zeta_{1}u)/\zeta_{2}$, where $\zeta_{1},\zeta_{2}>0$ and $l$ is defined in \eqref{eq:sramp loss}. When $\zeta=(\zeta_{1},\zeta_{2})\rightarrow\zeta^{*}\overset{\mathrm{\Delta}}{=}(+\infty,2)$, the general smoothed ramp loss converges to the 0-1 loss, i.e., $l^{\zeta}(u)\rightarrow l_{0-1}(u)$ for every $u\neq0$. Note that the general smoothed ramp loss can also be written as the difference of two convex functions, $l^{\zeta}(u)=l_{1}(\zeta_{1}u)/\zeta_{2}-l_{0}(\zeta_{1}u)/\zeta_{2}$, so the d.c. algorithm in Algorithm \ref{alg:dc} can be easily generalized to $l^{\zeta}(u)$.
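The convergence of $l^{\zeta}$ to the 0-1 loss away from $u=0$ can be illustrated numerically (our sketch, reusing a transcription of \eqref{eq:sramp loss}):

```python
import numpy as np

def smoothed_ramp(u):
    """Smoothed ramp loss l(u), transcribed from the piecewise definition."""
    u = np.asarray(u, dtype=float)
    return np.where(u >= 1, 0.0,
           np.where(u >= 0, (1 - u) ** 2,
           np.where(u >= -1, 2 - (1 + u) ** 2, 2.0)))

def general_ramp(u, zeta1, zeta2):
    """General smoothed ramp loss l^zeta(u) = l(zeta1 * u) / zeta2."""
    return smoothed_ramp(zeta1 * np.asarray(u, dtype=float)) / zeta2

# For large zeta1 and zeta2 = 2, l^zeta matches the 0-1 loss off u = 0.
u = np.array([-0.5, -0.01, 0.01, 0.5])
close_to_01 = np.allclose(general_ramp(u, 1e4, 2.0), (u <= 0).astype(float))
```

At $u=0$ itself, $l^{\zeta}(0)=l(0)/2=1/2$ for every $\zeta$, which is why the convergence statement excludes the origin.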
Recalling the loss function $\mathcal{\mathcal{L}}(\tau;\eta,l_{0-1})$ and the corresponding $\mathcal{F}$-risk $\mathcal{R}_{\mathcal{F}}(\tau;\eta,l_{0-1})$ defined in \eqref{eq:obj_0_1_loss}, we replace $\tau$ with $\tau_{\text{aw}}^{\theta}$ and obtain: \[ \resizebox{0.98\hsize}{!}{$ \mathcal{R}_{\mathcal{F}}(\tau_{\text{aw}}^{\theta};\eta,l_{0-1})=\mathbb{E}\left\{ \mathcal{\mathcal{L}}\left(\tau_{\text{aw}}^{\theta};\eta,l_{0-1}\right)\right\} ,\:\mathcal{\mathcal{L}}\left(\tau_{\text{aw}}^{\theta};\eta,l_{0-1}\right)=\mid\tau_{\text{aw}}^{\theta}(\boldsymbol{X})\mid l_{0-1}\left([2\bone\{\tau_{\text{aw}}^{\theta}(\boldsymbol{X})>0\}-1]f(\boldsymbol{X};\eta)\right),\:\forall\eta\in\mathcal{F}$}, \] denoted as $\mathcal{R}_{\mathcal{F}}(\eta)$ and $\mathcal{L}(\eta,\theta)$, respectively, for simplicity. We define the minimal $\mathcal{F}$-risk as $\mathcal{R}_{\mathcal{F}}^{*}=\inf_{\eta\in\mathcal{F}}\mathcal{R}_{\mathcal{F}}(\eta)$. Similarly, replacing $l_{0-1}$ in $\mathcal{R}_{\mathcal{F}}(\eta)$ and $\mathcal{L}(\eta,\theta)$ with the general smoothed ramp loss $l^{\zeta}$, we can obtain $\mathcal{R}_{\mathcal{F}}(\tau_{\text{aw}}^{\theta};\eta,l^{\zeta})$ and $\mathcal{\mathcal{L}}\left(\tau_{\text{aw}}^{\theta};\eta,l^{\zeta}\right)$, denoted as $\mathcal{R}_{\zeta\mathcal{F}}(\eta)$ and $L^{\zeta}(\eta,\theta)$, respectively. Likewise, the minimal $\mathcal{F}$-risk with $l^{\zeta}$ is defined as $\mathcal{R}_{\zeta\mathcal{F}}^{*}=\inf_{\eta\in\mathcal{F}}\mathcal{R}_{\zeta\mathcal{F}}(\eta)$.
The goal is to show that the decision rule is $\mathcal{F}$-consistent \citep{bartlett2006convexity}, i.e., \[ \lim_{N\rightarrow\infty}\mathcal{R}_{\mathcal{F}}(\hat{\eta})=\mathcal{R}_{\mathcal{F}}^{*}, \] where $\hat{\eta}=\arg\min_{\eta\in\mathcal{F}}\sum_{i=1}^{N}S_{i}\hat{w}_{i}L^{\zeta}(\eta,\hat{\theta}_{w})$ and $\hat{\theta}_{w}=(\hat{\gamma}_{w}^{\intercal},\hat{\beta}_{w}^{\intercal})^{\intercal}$. The estimated logistic regression coefficients are $\hat{\gamma}_{w}=\arg\max_{\gamma}\sum_{i=1}^{N}\hat{w}_{i}S_{i}[A_{i}\log\pi_{A}(\boldsymbol{X}_{i};\gamma)+(1-A_{i})\log\left\{ 1-\pi_{A}(\boldsymbol{X}_{i};\gamma)\right\}]$, which converges in probability to $\arg\max_{\gamma}\mathbb{E}[A\log\pi_{A}(\boldsymbol{X};\gamma)+(1-A)\log\left\{ 1-\pi_{A}(\boldsymbol{X};\gamma)\right\}]$ $\overset{\mathrm{\Delta}}{=}\gamma^{*}$ as $N\rightarrow\infty$ under regularity conditions. The estimated outcome regression coefficients are $\hat{\beta}_{w}=\arg\min_{\beta}\sum_{i=1}^{N}\hat{w}_{i}S_{i}(Y_{i}-\beta^{\intercal}L_{i})^{2}$, where $L_{i}=(1,\boldsymbol{X}_{i}^{\intercal},A_{i},A_{i}\boldsymbol{X}_{i}^{\intercal})^{\intercal}$; thus we have \[ \begin{split}\hat{\beta}_{w} & =\left(\sum_{i=1}^{N}N\hat{w}_{i}S_{i}L_{i}L_{i}^{\intercal}\right)^{-1}\sum_{i=1}^{N}N\hat{w}_{i}S_{i}L_{i}Y_{i}\rightarrow\{\mathbb{E}(LL^{\intercal})\}^{-1}\mathbb{E}(LY)\overset{\mathrm{\Delta}}{=}\beta^{*},\end{split} \] in probability as $N\rightarrow\infty$. The convergences of $\hat{\gamma}_{w}$ and $\hat{\beta}_{w}$ follow from the weak law of large numbers and the result $\max_{i}\mid N\hat{w}_{i}-1/\pi_{S}(\boldsymbol{X}_{i})\mid=o_{p}(1)$ proved in Theorem 2 of \citet{wang2020minimal}.
Let $\theta^{*}=(\gamma^{*\intercal},\beta^{*\intercal})^{\intercal}$; thus we have \begin{equation} \hat{\theta}_{w}\xrightarrow{p}\theta^{*}.\label{eq: thetahat} \end{equation} Since $\mathcal{R}_{\zeta\mathcal{F}}(\eta)$ converges to $\mathcal{R}_{\mathcal{F}}(\eta)$ as $\zeta\rightarrow\zeta^{*}$, to obtain $\mathcal{F}$-consistency it suffices to show that $\lim_{N\rightarrow\infty}\mathcal{R}_{\zeta\mathcal{F}}(\hat{\eta})=\mathcal{R}_{\zeta\mathcal{F}}^{*}$, which is stated in the theorem below. \begin{theorem} \label{thm1} Under Assumptions S1--S10, we have $\lim_{N\rightarrow\infty}\mathcal{R}_{\zeta\mathcal{F}}(\hat{\eta})=\mathcal{R}_{\zeta\mathcal{F}}^{*}.$ \end{theorem} \section{Simulation study\label{sec:sim} } We evaluate the finite-sample performance of the proposed estimators with a set of simulation studies. We first generate a target population of size $N=10^{6}$. The covariates are generated by $\boldsymbol{X}_{i}=(X_{i1},X_{i2})^{\intercal}\sim N(\mathbf{1}_{2\times1},I_{2\times2})$, and the potential outcomes are generated by $Y_{i}^{*}(a)\mid\boldsymbol{X}_{i}=1+2X_{i1}+3X_{i2}+a\tau(\boldsymbol{X}_{i})+\epsilon_{i}(a),$ where the $\epsilon_{i}(a)$ are i.i.d. $N(0,0.5^{2})$ for $a=0,1.$ We consider three contrast functions: (I) $\tau(\boldsymbol{X}_{i})=\arctan\{\exp(1+X_{i1})-3X_{i2}-5\};$ (II) $\tau(\boldsymbol{X}_{i})=\cos(1)+\cos(X_{i1})+\cos(X_{i2})-3/2;$ (III) $\tau(\boldsymbol{X}_{i}) = 1+2X_{i1}+3X_{i2}$. To evaluate the necessity and performance of the weighting methods, we consider two categories of data-generating models: in settings (I) and (II) the true optimal policies are not linear, whereas in setting (III) the true optimal policy is linear. We start with the first two scenarios.
Visualizing the data, we observed that the covariates shift between the experimental data and the RWD to a varying extent under the two generative models: the covariates generated by (I) have similar distributions in the experimental data and the RWD, while those generated by (II) have very different ones. We then generate the RWD by randomly sampling $m=5000$ subjects from the target population $\{\boldsymbol{X}_{i}\}_{i=1}^{N}$, denoted as $\{\boldsymbol{X}_{i}\}_{i\in\mathcal{I}_{m}}$. To form the experimental sample, we generate the indicator of selection according to $S_{i}\mid\boldsymbol{X}_{i}\sim\text{Bernoulli}\left[\exp(\alpha_{0}+\alpha_{1}^{^{\intercal}}\boldsymbol{X}_{i})/\{1+\exp(\alpha_{0}+\alpha_{1}^{\intercal}\boldsymbol{X}_{i})\}\right]$, where $\alpha_{1}=(1,-2)^{\intercal}$ and $\alpha_{0}=-8$, which yields an average experimental sample size of around 1386 over the simulation replicates. In the experimental data, indexed by $\mathcal{I}_n$, the treatments are randomly assigned according to $A_{i}\sim\mathrm{Bernoulli}(0.5)$, and the observed outcomes are $Y_{i}=Y_{i}^{*}(A_{i})$, $i\in\mathcal{I}_{n}$. For each setting, we compare the following estimators: 1. \emph{``$w_{1}$'':} our proposed weighted estimator with parametric weights learned by the modified maximum likelihood introduced in Section \ref{subsec:Parametric-approach}. 2. \emph{``$w_{2}$'':} our proposed weighted estimator with parametric weights learned by the estimating equations introduced in Section \ref{subsec:Parametric-approach}. 3. \emph{``cv''}: our proposed weighted estimator using the cross-validation procedure introduced in Section \ref{sec:cv}, with the number of sample splits set to 10 in all the simulation studies. 4. \emph{``np''}: our proposed weighted estimator with nonparametric weights. 5. \emph{``unweight'':} using only the experimental data to learn the optimal linear ITRs.
6. \emph{``bm''}: the weighted estimator in which we replace the estimated transfer weights $\hat{w}_{i}$ with the normalized true weights $w_{i}=\{\pi_{S}(\boldsymbol{X}_{i})\}^{-1}/\sum_{i\in\mathcal{I}_n}\{\pi_{S}(\boldsymbol{X}_{i})\}^{-1}$, and replace the estimated contrast functions $\hat{\tau}_{i}$ with the true contrast values $Y_i^*(1)-Y_i^*(0)$. 7. \emph{``Imai''}: the method proposed by \citet{imai2013estimating}, which constructs the transfer weights by fitting a Bayesian additive regression trees (BART) model using the covariates as predictors and the indicator of inclusion in the experimental data as the outcome, and then uses a variant of the support vector machine to estimate the conditional treatment effect (implemented in the R package \texttt{FindIt}). Given $\boldsymbol{X}$, treatment $1$ is assigned if the estimated conditional treatment effect is positive, and $0$ otherwise. 8. \emph{``DBN''}: the method proposed by \citet{mo2020learning}, which learns distributionally robust ITRs: the value function is evaluated under all testing covariate distributions that are ``close'' to the training distribution, the worst case is the minimum over these distributions, and the ITRs are learned by maximizing this worst-case value function. For estimators 1--5 above, we estimate the contrast function using $\tau_\text{aw}$ with linear $Q$ functions and constant propensity scores estimated via the proportion of $A=1$ in the experimental data, $n^{-1}\sum_{i\in\mathcal{I}_n}A_{i}$. To study the impact of model misspecification of $\pi_{S}(\boldsymbol{X})$, we consider estimating $\alpha$ by fitting a misspecified logistic regression: $\pi_{S}(\boldsymbol{X};\alpha)=\exp(\alpha_{0}+\alpha_{1}^{\intercal}\widetilde{\boldsymbol{X}})/\{1+\exp(\alpha_{0}+\alpha_{1}^{\intercal}\widetilde{\boldsymbol{X}})\}$, where $\widetilde{\boldsymbol{X}}=(X_{1}^{2},X_{2}^{2})^{\intercal}$. As described above, only estimators 1--3 require specification of the sampling model.
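For reproducibility, the data-generating mechanism above can be condensed for setting (II) as follows (a sketch with \texttt{numpy}; the seed and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2023)
N = 10 ** 6                                      # target population size

# Covariates and potential outcomes for setting (II).
X = rng.normal(loc=1.0, scale=1.0, size=(N, 2))
tau = np.cos(1.0) + np.cos(X[:, 0]) + np.cos(X[:, 1]) - 1.5
base = 1 + 2 * X[:, 0] + 3 * X[:, 1]
Y0 = base + rng.normal(scale=0.5, size=N)        # Y*(0)
Y1 = base + tau + rng.normal(scale=0.5, size=N)  # Y*(1)

# RWD: a simple random sample of m covariate vectors (representative).
m = 5000
rwd_idx = rng.choice(N, size=m, replace=False)

# Experimental sample: biased logistic selection with alpha = (-8, 1, -2).
p_S = 1 / (1 + np.exp(-(-8 + X[:, 0] - 2 * X[:, 1])))
trial_idx = np.flatnonzero(rng.binomial(1, p_S))

# Randomized treatment and observed outcomes within the trial.
A = rng.binomial(1, 0.5, size=trial_idx.size)
Y = np.where(A == 1, Y1[trial_idx], Y0[trial_idx])
```

The strongly negative intercept makes selection into the trial rare and covariate-dependent, producing the covariate shift between the trial and the RWD that the transfer weights are designed to correct.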
To measure the performance of the different approaches, we use the mean value MSE defined as $N^{-1}\sum_{i=1}^N \left(Y_i^*\{d_{\eta}(\boldsymbol{X}_i)\} - Y_i^*[\bone\{\tau(\boldsymbol{X}_i)>0\}]\right)^2$; the lower the MSE, the better the approach performs. The results are summarized in Figure \ref{fig:similar & diff}; the dark red point in each boxplot is the value MSE averaged over 200 Monte Carlo replicates. For the generative model (I) shown in Figure \ref{fig:similar & diff}, the unweighted and DBN rules perform worst in all cases. The second parametric weighting method is relatively robust compared with the first when the sampling model is misspecified; this is reasonable, since the first uses maximum likelihood estimation, which relies more heavily on correct parametric model specification. When the training data and testing data are similarly distributed, the performances of the parametric weighting methods, the nonparametric weighting method, and the cross-validation procedure are similar; in this case, sampling model misspecification does not have much influence on the rule learning. \begin{figure}[ht!] \includegraphics[width=1\textwidth]{jcgs-template/linear.VSsetting_I_II_alpha0_-8.pdf} \caption{Value MSE boxplots of different methods for the generative model (I) and (II). The red point in each box is the average of the value MSE.} \label{fig:similar & diff} \end{figure} For the generative model (II) shown in Figure \ref{fig:similar & diff}, comparing the weighted and unweighted estimators, we see that the weighted estimators outperform the unweighted estimators when the sampling model is correctly specified; the nonparametric weighted estimator outperforms the other estimators, and the performance of the parametric weighted estimators deteriorates dramatically when the sampling model is misspecified. These findings make sense: the more the training and testing data differ, the more crucial the bias correction is.
Thus sampling model misspecification can have a large influence on the learned rules. From both generative models, we can see that the nonparametric weighting and the cross-validation procedure are more robust when the sampling model is unknown, and we recommend them in practice. The DBN estimator performs poorly in terms of both MSE magnitude and variability; one possible reason is that the tuning parameters measuring the ``closeness'' of two distributions need to be chosen more carefully in practice \cite{mo2020learning}. The Imai method has relatively good performance among all estimators and relatively small variability. For setting (III), we consider the correctly specified sampling score model. As shown in Figure \ref{fig:linear}, the unweighted method performs best, close to the benchmark. This demonstrates that when the true optimal policies are linear, learning with only the experimental data performs best; weighting is then not necessary and may even inflate the variance of the estimators. The other competitive estimator is the cross-validation procedure. Therefore, when the true data generating models are unknown, using CV is a good choice in practice. \begin{figure}[ht!] \includegraphics[width=0.9\textwidth]{jcgs-template/LinearModel_linear.VS_diff_alpha0_-8.pdf} \caption{Value MSE boxplots of different methods for the generative model (III).} \label{fig:linear} \end{figure} \section{Real Data Application\label{sec:application} } We apply the proposed methods to personalized recommendations of a job training program on post-market earnings. Our analysis is based on two data sources (downloaded from \url{https://users.nber.org/~rdehejia/data/.nswdata2.html}): an experimental dataset from the National Supported Work (NSW) program \citep{lalonde1986evaluating} and a non-experimental dataset from the Current Population Survey (CPS; \citet{lalonde1986evaluating}).
The NSW dataset includes $185$ individuals who received the job training program ($A=1$) and $260$ individuals who did not ($A=0$), and the CPS dataset includes $15992$ individuals without the job training program. Let $\boldsymbol{X}$ be a vector including an intercept $1$ and the pre-intervention covariates: age, education, Black (1 if Black, 0 otherwise), Hispanic (1 if Hispanic, 0 otherwise), married (1 if married, 0 otherwise), no high school degree (1 if no degree, 0 otherwise), earnings in 1974, and earnings in 1975. The outcome variable is the earnings in 1978. We transform the earnings in 1974, 1975, and 1978 by taking the logarithm of the earnings plus one, denoted ``log.Re74'', ``log.Re75'' and ``log.Re78''. We aim to learn a linear rule $d(\boldsymbol{X};\eta)=\bone(\boldsymbol{X}^{\intercal}\eta>0)$ tailoring the recommendation of the job training program to an individual with characteristics $\boldsymbol{X}$. The CPS sample is generally representative of the target population, while the participants in the NSW program may not represent the general population. Table \ref{tab: real data} contrasts sample averages of each pre-intervention covariate in the NSW and CPS samples and shows that the covariates are highly imbalanced between the two samples. On average, the NSW sample is younger and less educated, with a higher proportion of Black individuals, fewer married individuals, more individuals without a high school degree, and lower earnings in 1974 and 1975. Because of this imbalance, ITRs for the target population cannot be learned from the NSW sample alone, which motivates the proposed weighted transfer learning approaches to learn optimal linear ITRs that benefit the target population most.
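A minimal sketch of the pre-processing and decision rule just described: the $\log(\text{earnings}+1)$ transform and the linear rule $d(\boldsymbol{X};\eta)=\bone(\boldsymbol{X}^{\intercal}\eta>0)$. Variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def log_earning(re):
    """Transform raw earnings as log(earnings + 1), as for log.Re74 etc."""
    return np.log(np.asarray(re, dtype=float) + 1.0)

def linear_rule(X, eta):
    """Linear ITR d(X; eta) = 1(X'eta > 0).
    X includes an intercept column of ones; returns 0/1 recommendations."""
    return (X @ eta > 0).astype(int)
```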
\begin{table}[ht] \caption{Sample averages of pre-intervention variables in the NSW and CPS samples and results from the weighted transfer and unweighted learning methods: The first two rows are sample averages of each pre-intervention covariate in the NSW and CPS samples, respectively. The last two rows are estimated linear coefficients of each corresponding variable in the linear decision rule from the weighted transfer and unweighted learning methods, respectively.} \label{tab: real data} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline & intercept & Age & Educ & Black & Hisp & Married & Nodegr & log.Re74 & log.Re75\tabularnewline \hline experiment NSW & 1 & 25.37 & 10.20 & 0.83 & 0.09 & 0.17 & 0.78 & 2.27 & 2.70\tabularnewline Non-experiment CPS & 1 & 33.23 & 12.03 & 0.07 & 0.07 & 0.71 & 0.30 & 8.23 & 8.29\tabularnewline \hline & $\eta_{0}$ & $\eta_{1}$ & $\eta_{2}$ & $\eta_{3}$ & $\eta_{4}$ & $\eta_{5}$ & $\eta_{6}$ & $\eta_{7}$ & $\eta_{8}$\tabularnewline \hline Unweighted learning $\hat{\eta}_{{\rm u}}$ & -1 & 0.03 & 0.07 & 0.81 & 2.31 & 1.89 & -0.62 & -0.60 & 0.47\tabularnewline Weighted transfer learning $\hat{\eta}_{{\rm w}}$ & -1 & 0.02 & 0.05 & 0.66 & 0.36 & 0.14 & 0.28 & -0.27 & 0.15\tabularnewline \hline \end{tabular}} \end{table} We consider the unweighted and weighted transfer learning methods. For the weighted transfer learning method, we consider the nonparametric weighting approach, because the parametric approach requires the target population size $N$, which is unknown in this application. To estimate the weighted and unweighted decision rules, we use the AIPW estimator $\hat{\tau}_{\text{aw}}$ with linear $Q$ functions, since it outperforms other competitors in the simulation study in Section \ref{sec:sim}. We also implement the cross-validation procedure to choose between the unweighted and weighted transfer learning methods.
Table \ref{tab: real data} presents the estimated linear coefficients of each corresponding variable in the linear decision rule from the weighted transfer and unweighted learning methods, respectively. Moreover, in the cross-validation procedure, the weighted transfer learning method produces a smaller population risk than the unweighted learning method and is therefore chosen as the final result. Compared with the decision rule learned from the NSW experimental data alone, which oversamples low-income individuals, individuals without high school degrees (i.e., Nodegr = 1) are more likely to have higher outcomes if offered the job training program in the target population. To assess the performance of the learned rules for the target population, following \citet{kallus2017recursive}, we create a test sample that is representative of the target population. We sample $100$ individuals at random from the CPS sample to form a control group and then find their closest matches in the NSW treated sample to form the treatment group (using \texttt{pairmatch} in the R package \texttt{optmatch}). The test sample thus contains the $100$ matched pairs, mimicking an experimental sample in which binary actions are randomly assigned with equal probabilities. We use a random forest on the test data to estimate the $Q$ function $\mathbb{E}(Y\mid\boldsymbol{X},a)$, denote the estimator by $\widehat{Q}_{\text{test}}(\cdot,a)$, $a=0,1$, and then use $\sum_{i\in\text{CPS}}\widehat{Q}_{\text{test}}\left\{ \boldsymbol{X}_{i},d(\boldsymbol{X}_{i};\eta)\right\} /|\text{CPS}|$ to estimate the value of an ITR $d(\cdot;\eta)$, where $|\text{CPS}|$ is the sample size of the CPS.
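The plug-in value estimate above can be sketched as follows. The paper fits $\widehat{Q}_{\text{test}}$ with a random forest; here a plain least-squares fit stands in so the sketch needs only numpy, and all names are illustrative.

```python
import numpy as np

def estimate_value(X_test, A_test, Y_test, X_cps, rule):
    """Plug-in value of an ITR: fit Q(X, a) on the matched test sample,
    then average Q{X_i, d(X_i)} over the CPS sample.
    A linear Q stands in for the paper's random forest."""
    # fit Q(X, a) as linear in (1, X, a) on the matched test sample
    Z = np.column_stack([np.ones(len(X_test)), X_test, A_test])
    beta, *_ = np.linalg.lstsq(Z, Y_test, rcond=None)
    # evaluate Q at the rule's recommendations on the CPS covariates
    d = rule(X_cps)
    Z_cps = np.column_stack([np.ones(len(X_cps)), X_cps, d])
    return float((Z_cps @ beta).mean())
```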
Besides the weighted linear rule $d(\cdot;\hat{\eta}_{w})$ and the unweighted linear rule $d(\cdot;\hat{\eta}_{u})$ learned above, we also compare with the Imai method introduced in \citet{imai2013estimating}: first, the conditional probability of being selected into the NSW given $\boldsymbol{X}$ is estimated from the combined NSW and CPS sample using a BART model; then, the conditional treatment effect is estimated with weights equal to the inverse of the sampling probability. The Imai rule recommends the job training program if the conditional treatment effect is positive and does not recommend it otherwise. We replicate this test-data sampling procedure 100 times, estimate the values of the three ITRs for each replicate, and obtain the averaged estimated values (standard errors): 8.395 (0.009), 6.106 (0.011), and 8.049 (0.006) for $d(\cdot;\hat{\eta}_{w})$, $d(\cdot;\hat{\eta}_{u})$, and Imai, respectively; thus the transfer weighted rule has the highest value estimate. Although this evaluation method requires the assumption of no unmeasured confounders for the non-experimental CPS data, which is not needed in our learning procedure, it lends some confidence that the weighted linear rule generalizes better to the target population. \section{Discussion\label{sec:Discussion}} We develop a general framework to learn optimal interpretable ITRs for the target population by leveraging the absence of unmeasured confounding in the experimental data and the representativeness of covariates in the RWD. The proposed procedure is easy to implement. By constructing transfer weights to correct the covariate distribution of the experimental data, we can learn the optimal interpretable ITRs for the target population. Besides, we provide a data-adaptive procedure to choose among different weighting methods by modified cross-validation. Moreover, we establish a theoretical guarantee for the consistency of the proposed estimator of the ITRs.
Several future directions are worth considering. One is to quantify the uncertainty of the estimated ITRs and thereby conduct statistical inference. Moreover, in some scenarios, besides the covariate information, comparable treatment and outcome information is also available in the RWD. Unlike in a randomized experiment, the treatment assignment in the RWD is determined by the preferences of physicians and patients. In this case, an interesting topic is to develop integrative methods that can leverage the strengths of both data sources for efficient estimation of the ITRs. \bibliographystyle{dcu}
\section{Introduction} The formation and growth of supermassive black holes (SMBHs) and their host-galaxies are related processes. This is supported by various observational signatures: the SMBH mass correlates with the mass of the bulge of the host-galaxy (\citealt{1998AJ....115.2285M,2003ApJ...589L..21M}), with the velocity dispersion of the bulge (\citealt{2000ApJ...539L...9F,2002ApJ...574..740T}), and with the luminosity of the bulge (\citealt{1995ARA&A..33..581K}). From theoretical models, AGN seem to be able to switch off cooling flows in clusters (\citealt{2004ApJ...617..896H}) and star formation in galaxies (\citealt{2008MNRAS.391..481S}), with the result that the SMBH mass is related to the host-galaxy bulge mass (or vice-versa). Feedback between an accreting SMBH and the host-galaxy may play an important role in galaxy formation and evolution. Understanding the role of feedback is a demanding problem for both observers and theorists. Semi-analytical models and hydrodynamical simulations have been developed to attempt to link the formation and evolution of SMBHs to structure formation over cosmic time. These models invoke different mechanisms to fuel the central SMBHs and to build the host-galaxy bulges, such as major/minor mergers of galaxies (e.g., \citealt{2000ApJ...536L..73C,2000MNRAS.311..576K,2005MNRAS.361..776S,2006ApJS..163....1H}), smooth accretion of cold gas from filamentary structures (e.g., \citealt{2009MNRAS.395..160K,2009ApJ...703..785D}), or accretion of recycled gas from dying stars (e.g., \citealt{2010ApJ...717..708C}). Several works also consider radiative feedback, which can reproduce two important phases of galaxy evolution, namely an obscured cold phase, when the bulk of star formation and black hole accretion occurs, and the following quiescent hot phase, in which accretion remains highly sub-Eddington and unobscured (e.g., \citealt{2005MNRAS.358..168S,2011A&A...525A.115L}).
In some of these models, the obscured/unobscured AGN dichotomy is more related to two different phases of galaxy evolution (\citealt{2008ApJS..175..356H}), rather than to an orientation effect (i.e., the unified model scheme). \par The obscured/unobscured time-dependent AGN dichotomy could be related to the bimodality in the rest-frame color distribution of host-galaxies (\citealt{2007A&A...475..115R,2007ApJ...660L..11N,2009A&A...507.1277B,2009ApJ...696..396S}), namely the red-sequence (or ``red-cloud'') and blue-cloud galaxies. Broad-line AGN (if the morphology of the host-galaxy is available) are likely to be associated with galaxies belonging to the blue cloud, while obscured objects are likely associated with red passive galaxies. The green valley should be populated by transition objects. The picture above is probably too crude an approximation. Moreover, one should note that red-sequence galaxies may well be passively evolving galaxies without significant star formation (e.g., \citealt{2007A&A...475..115R,2007ApJ...660L..11N}), rather than dusty star-forming objects (e.g., \citealt{2009A&A...507.1277B}). \par Disentangling the contribution of the nuclear AGN from the host-galaxy properties in the broad-band SED is fundamental to constrain the physical evolution of AGN and to place them into the context of galaxy evolution. In the standard picture the AGN energy output is powered by accretion onto SMBHs. The disk accretion emission is visible in the optical-UV as the blue-bump feature. The X--ray emission is believed to be due to a hot-electron corona that surrounds the accretion disk, while the infrared emission is likely due to the presence of a dusty torus around the disk at a few parsec from the center, which reprocesses the nuclear radiation. According to the unified model of AGN (e.g., \citealt{1993ARA&A..31..473A,1995PASP..107..803U}), hot dust is located at the inner edge of the torus. However, recent studies predict and observe exceptions to the unified model.
From the theoretical point of view, an alternative solution to the torus is the disk-wind scenario (e.g., \citealt{1992ApJ...385..460E,2006ApJ...648L.101E}). From the observational side, AGN without any detectable hot dust emission (e.g., \citealt{2010Natur.464..380J}) and with weak infrared emission (e.g., \citealt{2010ApJ...724L..59H}) are predicted and observed. The vast majority of studies performed so far concern unobscured (Type-1) AGN, whose SED is well known from low-$z$ ($\langle z \rangle\sim0.206$, see \citealt{1994ApJS...95....1E}) to high-$z$ ($\langle z \rangle\sim1.525$, see \citealt{2006ApJS..166..470R}). An obvious complication in the study of their host-galaxy properties is that the emission of the central AGN outshines the galaxy light in the UV, optical and infrared bands; therefore it is extremely difficult to derive constraints on the colors, stellar populations, and morphologies of the host. On the other hand, for obscured (Type-2) AGN the host-galaxy light is the dominant component in the optical/near-infrared SED, while it is difficult to recover the AGN intrinsic nuclear emission. The lack of a proper characterization of the nuclear component of the SED of obscured Type-2 AGN is a major limitation. As a consequence, the relations between stellar masses, SFR, morphologies and accretion luminosity remain poorly known. Since the relative contribution in the SED of the different components (AGN/host-galaxy) varies with wavelength, a proper decomposition can be obtained by an SED-fitting approach, complemented by a morphological analysis. This will provide a robust estimate of the nuclear emission (bolometric luminosities and bolometric corrections, absorption column density distributions, etc.) and its relation with the host-galaxy properties (mass, star formation rates, morphological classification).
The AGN structure is reflected in the shape of the SED: specifically, the big-blue bump and the infrared bump are related to the accretion disk and the surrounding torus, respectively. Therefore, a densely sampled SED over a broad wavelength interval is mandatory to extract useful information from SED fitting procedures, allowing us to tightly constrain physical parameters from multi-component modeling and, in particular, to properly disentangle the emission associated with stellar light from that due to accretion. \par The combination of sensitive X--ray and mid-IR observatories allows us to model the obscuring gas that in Type-2 AGN hides the nuclear region from the near-IR to the UV. As supported by previous investigations, the reprocessed IR emission could be a good proxy of the intrinsic disk emission. \citet{2009A&A...502..457G} confirm the correlation between the X--ray luminosity at [2-10] keV and the IR emission for a sample of Seyfert galaxies (see also \citealt{2004A&A...418..465L}). Their data are the best estimate of the nuclear (non-stellar) IR flux in AGN to date. A highly significant correlation between $L_{[2-10]{\rm keV}}$ and the intrinsic nuclear IR luminosity at 12.3$\mu$m is observed in the high quality near-IR and X--ray data discussed by \citet{2009A&A...502..457G}. This reinforces the idea that the ``uncontaminated'' mid-IR continuum is an accurate proxy for the intrinsic AGN emission. \par In this work we present the largest study of the multi-wavelength properties of an X--ray selected sample of obscured AGN using the XMM-Newton wide field survey in the COSMOS field (XMM-COSMOS). Following a similar approach to that of \citet{2007A&A...468..603P} and \citet{2010MNRAS.402.1081V}, we use the infrared emission to evaluate the nuclear bolometric luminosity from a multi-component fit. The paper is aimed at a detailed characterization of a large sample of obscured AGN over a wide range of frequencies.
The SEDs, morphology of the host-galaxies, stellar masses, colors, bolometric luminosities and bolometric corrections for the sample of obscured AGN are presented. \par This paper is organized as follows. In Sect.~\ref{The Data Set} we report the selection criteria for the sample used in this work. Section~\ref{Rest-frame monochromatic fluxes and Spectral Energy Distributions} presents the multi-wavelength data-set, while in Sect.~\ref{Average SED} the method used to compute the average SED is described. Section~\ref{SED-fitting} concerns the multi-component modeling used to disentangle the nuclear emission from the stellar light. In Section~\ref{Bolometric luminosities and bolometric corrections} the method used to compute intrinsic bolometric luminosities and bolometric corrections for Type-2 AGN is described, while in Sect.~\ref{Calibrating the method} we apply the same method to a sample of Type-1 AGN. The discussion of our findings is given in Sect.~\ref{Results and discussion}, while in Sect.~\ref{Summary and Conclusions} we summarize the most important results. \par We adopted a flat model of the universe with a Hubble constant $H_{0}=70\, \rm{km \,s^{-1}\, Mpc^{-1}}$, $\Omega_{M}=0.27$, $\Omega_{\Lambda}=1-\Omega_{M}$ (\citealt{komatsu09}). \section{The Data Set} \label{The Data Set} The XMM-COSMOS catalog comprises $1822$ point-like X--ray sources detected by XMM-\textit{Newton} over an area of $\sim 2\,\rm deg^2$. The total exposure time was $\sim 1.5$ Ms with a fairly homogeneous depth of $\sim 50$ ks over a large fraction of the area (\citealt{hasinger07}, \citealt{cappelluti09}). Following \citet{2010ApJ...716..348B}, we excluded from our analysis 25 sources which turned out to be a blend of two {\it Chandra} sources. This leads to a total of 1797 X--ray selected point-like sources.
We restricted the analysis to 1078 X--ray sources detected in the [2-10]~keV band at a flux larger than $2.5\times10^{-15}{\rm erg}\;{\rm s}^{-1}{\rm cm}^{-2}$ (see Table 2 in \citealt{cappelluti09}). The objects for which no secure optical counterpart could be assigned are often affected by severe blending problems, so that we consider in this analysis the 971 sources (hereafter $\text{971-{\rm XMM}}$) for which a secure optical counterpart can be associated (see discussion in \citealt{2010ApJ...716..348B})\footnote{The multi-wavelength XMM-COSMOS catalog can be retrieved from: http://www.mpe.mpg.de/XMMCosmos/xmm53\_release/, version $1^{\rm st}$ April 2010.}. \par From the $\text{971-{\rm XMM}}$ catalog we have selected $255$ sources which do not show broad (FWHM$>2000$ km s$^{-1}$) emission lines in their optical spectra\footnote{The origin of the spectroscopic redshifts for the $255$ sources is as follows: $11$ objects from the SDSS archive, $2$ from MMT observations (\citealt{prescott06}), $70$ from the IMACS observation campaign (\citealt{trump07}), $156$ from the zCOSMOS bright $20$k sample (see \citealt{lilly07}), $7$ from the zCOSMOS faint catalog and 9 from the Keck/DEIMOS campaign.} (hereafter we will refer to them as the Type-2 AGN sample): 223 are classified as not-broad-line AGN, while 32 are absorption-line galaxies. Not-broad-line AGN are objects with unresolved, high-ionization emission lines exhibiting line ratios indicating AGN activity and, when high-ionization lines are not detected or the observed spectral range does not allow the construction of line diagnostics, objects without broad lines in their optical spectra. Absorption-line galaxies are sources consistent with a typical galaxy spectrum showing only absorption lines. \begin{figure} \includegraphics[width=8cm]{17175fig1} \caption{Plot of the $[2-10]$keV flux versus the total $i^*$ CFHT magnitude for our sample of 255 Type-2 AGN.
The red circles represent sources with a de-absorbed 2--10 keV luminosity lower than $10^{42}$ erg s$^{-1}$. The dashed lines represent a constant X--ray to optical flux ratio of ${\rm Log}~(X/O)=\pm 1$.} \label{fluxMHi} \end{figure} In Figure \ref{fluxMHi} we plot the 2--10 keV X--ray flux as a function of the $i^*$ CFHT magnitude. The dashed lines limit the region typically occupied by AGN along the X--ray to optical flux ratio ${\rm Log}~(X/O)=\pm1$\footnote{${\rm Log}~(X/O)={\rm Log}~ f_x+i^*/2.5+5.6$.}. Nine sources have a de-absorbed 2--10 keV luminosity lower than $10^{42}$ erg s$^{-1}$, the conventional threshold below which X--ray sources can plausibly be explained by moderate-strength starbursts, hot gas in elliptical galaxies, or other sources besides accretion onto a nuclear SMBH (\citealt{2001ApJ...554..742H}). The three sources inside the dashed lines have X--ray luminosities close to $10^{42}$ erg s$^{-1}$, while six AGN (6/255, 2\%) lie in the part of the diagram usually occupied by star-forming galaxies and have X--ray luminosities $<10^{42}$ erg s$^{-1}$. Their inclusion in the analysis does not affect the main results. The Type-2 AGN sample used in our analysis comprises 255 X--ray selected AGN, all of them with spectroscopic redshifts, spanning a wide range of redshifts ($0.045<z<3.524$) and X--ray luminosities ($41.06 \leq {\rm Log}~ L_{[2-10]{\rm keV}} \leq 45.0$). The redshift distribution of the total sample and the distribution of the de-absorbed hard X--ray luminosities are presented in Figure \ref{histredshift}. The mean redshift is $\langle z\rangle=0.76$, while the mean ${\rm Log}~ L_{[2-10]{\rm keV}}$ is 43.34 with a dispersion of 0.64.
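As a quick numerical check, the X--ray-to-optical flux ratio used above is straightforward to evaluate; this minimal sketch implements the footnote formula ${\rm Log}~(X/O)={\rm Log}~ f_x+i^*/2.5+5.6$, with $f_x$ in erg s$^{-1}$ cm$^{-2}$.

```python
import math

def log_xo(fx, i_mag):
    """Log(X/O) = Log f_x + i*/2.5 + 5.6 (f_x in erg s^-1 cm^-2)."""
    return math.log10(fx) + i_mag / 2.5 + 5.6
```

For example, a source with $f_x=10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and $i^*=21$ has ${\rm Log}~(X/O)=0$, i.e., it sits in the middle of the AGN locus $|{\rm Log}~(X/O)|\leq1$.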
\begin{figure*} \centering {\label{histredshift}\includegraphics[width=0.3\textwidth]{17175fig2}} {\label{histlx}\includegraphics[width=0.3\textwidth]{17175fig3}} {\label{histnh}\includegraphics[width=0.3\textwidth]{17175fig4}} \caption{\textit{Left panel:} Redshift distribution of the $255$ Type-2 AGN considered in this work. \textit{Center panel:} Intrinsic hard X--ray luminosity distribution of the $255$ Type-2 AGN considered in this work. \textit{Right Panel:} Column density distribution of the 255 Type-2 AGN (\textit{black histogram}), of the 111 Type-2 AGN with an $N_{\rm H}$ estimate from spectral analysis (\textit{grey filled histogram}), and of the 144 Type-2 AGN with an $N_{\rm H}$ estimate from hardness ratios (\textit{orange hatched histogram}).} \label{fig:zlxnh} \end{figure*} \subsection{Correction for absorption for the X--ray luminosity} \label{Correction for absorption for the X--ray 2--10 keV luminosity} For a sub-sample of 111 AGN we have an estimate of the column density $N_{\rm H}$ from spectral analysis (see \citealt{2007ApJS..172..368M}), while for 144 AGN absorption is estimated from hardness ratios (HR; see \citealt{2010ApJ...716..348B}). For 25 sources for which a column density estimate is not available from HR we consider the Galactic column density. Therefore, we can compute the de-absorbed X--ray luminosity at 0.5--2 keV (soft band) and 2--10 keV (hard band) for all sources in our sample. In Figure \ref{histnh} (right panel) we show the distribution of column densities that ranges from $3\times10^{20}$ cm$^{-2}$ to $1.5\times10^{24}$ cm$^{-2}$. The mean $N_{\rm H}$ value is $8.5\times10^{21}$ cm$^{-2}$ with a dispersion of 0.72 dex. The integrated intrinsic un-absorbed luminosity is computed assuming a power-law spectrum with slope, $\Gamma=2$ and $\Gamma=1.7$ for the 0.5--2 keV and 2--10 keV bands, respectively. 
The average shift induced by the correction for absorption in the Type-2 sample is $\langle\Delta {\rm Log}~ L_{[2-10]{\rm keV}}\rangle=0.04\pm0.01$. \section{Rest-frame monochromatic fluxes and \\Spectral Energy Distributions} \label{Rest-frame monochromatic fluxes and Spectral Energy Distributions} We used the catalog by \citet{2010ApJ...716..348B}, which includes multi-wavelength data from the mid-infrared to hard X--rays: MIPS 160 $\mu$m, 70 $\mu$m and 24 $\mu$m GO3 data (\citealt{2009ApJ...703..222L}), IRAC flux densities (\citealt{sanders07}), $u^*$ and $i^*$ CFHT bands and near-infrared $K_S$-band data (\citealt{mccraken08}), $J$ UKIRT (\citealt{2008yCat.2284....0C}), HST/ACS F814W imaging of the COSMOS field (\citealt{koekemoer07}), optical multiband photometry (SDSS, Subaru, \citealt{capak07}) and near- and far-ultraviolet bands with GALEX (\citealt{2007ApJS..172..468Z}). The number of X--ray sources detected at $160\,\mu m$ and $70\,\mu m$ is 18 and 42, respectively. For the sources undetected in these bands we consider 5$\sigma$ upper limits of $65\,\rm mJy$ and $8.5\,\rm mJy$ at $160\,\mu m$ and $70\,\mu m$, respectively. At $24\,\mu m$ the number of detected sources is 237. For the 18 sources undetected at $24\,\mu m$, we consider a 5$\sigma$ upper limit of $80\,\rm \mu Jy$. All 255 sources are detected in all IRAC bands, and only a few objects were not detected in the optical and near-IR bands: we have only 8 upper limits in the $z^+$ band; 1 upper limit in the $B_{J}$ and $i^*$ bands; 2 upper limits in the $u^{*}$ band; 4 upper limits in the $K_{S}$ CFHT band and 2 in the $J$ UKIRT band. The observations in the various bands are not simultaneous, as they span a time interval of about 5 years: 2001 (SDSS), 2004 (Subaru and CFHT) and 2006 (IRAC).
Variability for absorbed sources is likely to be a negligible effect but, in order to further reduce it, we selected the bands closest in time to the IRAC observations (i.e., we excluded SDSS data, which in any case are less deep than other data available in similar bands). GALEX bands are not taken into account because, given the large aperture, they can include light from close companions. All the data for the SED computation were shifted to the rest frame, so that no K-corrections were needed. Galactic reddening has been taken into account: we used the selective attenuation of the stellar continuum $k(\lambda)$ taken from Table 11 of \citet{capak07}. Galactic extinction is estimated from \citet{schlegel98} for each object in the $\text{971-{\rm XMM}}$ catalog. Count rates in the 0.5--2 keV and 2--10 keV bands are converted into monochromatic X--ray fluxes in the observed frame at 1 and 4 keV, respectively, using a Galactic column density $N_{\rm H} = 2.5 \times 10^{20}\,cm^{-2}$ (see \citealt{cappelluti09}), and assuming a photon index $\Gamma_x=2$ and $\Gamma_x=1.7$ for the soft and hard bands, respectively. We do not correct these X--ray fluxes for the intrinsic column density. All sources are detected in the 2--10 keV band by definition of the sample, while in the soft band we have 70 upper limits. \section{Average SED} \label{Average SED} \begin{figure} \includegraphics[width=8cm]{17175fig5} \caption{Mean (\textit{orange crosses}) and median (\textit{red points}) SED from the total sample of 255 Type-2 AGN. The blue points represent the rest-frame data, from infrared to X--ray, used to construct the average SED, while the black lines represent the interpolated SED.} \label{meansed257} \end{figure} We have computed the individual rest-frame SEDs for all sources in the sample, following the same approach as in L10, and we have normalized all of them at 1$\mu$m.
After this normalization we divided the frequency interval from ${\rm Log}~ \nu_{\rm min}$ to ${\rm Log}~ \nu_{\rm max}$ using a fixed step $\Delta {\rm Log}~ \nu$. The minimum and maximum frequencies depend on both the data and the redshift of the source considered to compute the SED; in our case we have used ${\rm Log}~ \nu_{\rm min}=12$ Hz and ${\rm Log}~ \nu_{\rm max}=20$ Hz, with $\Delta {\rm Log}~ \nu=0.02$. We averaged the data in each interval ${\rm Log}~ \nu_{l} \leq {\rm Log}~ \nu \leq {\rm Log}~ \nu_{ l+1}$. The mean and median SEDs are obtained by taking the arithmetic mean and the median of the logarithmic luminosities, ${\rm Log}~ L$, in each bin. It is important to note that sources at different redshifts contribute to the same bin. Because of the relatively wide range of redshifts, the lowest and highest frequency bins are populated by a variable number of points. This effect may introduce relatively high fluctuations in the average luminosity in those bins. In order to minimize these effects we require a minimum number of 200 SEDs in each bin (this number depends on the total number of sources in the sample). Then, we select the mean reference frequency, $\overline{{\rm Log}~ \nu}$, of the bin and use a \texttt{binary-search} algorithm to find all luminosities that correspond to $\overline{{\rm Log}~ \nu}$ (if a source does not have a frequency corresponding to $\overline{{\rm Log}~ \nu}$, we choose the luminosity at the frequency closest to $\overline{{\rm Log}~ \nu}$). Finally, all adjacent luminosities in each bin are connected to compute the final mean and median SEDs. In Figure \ref{meansed257} the resulting mean and median SEDs are shown with orange crosses and red points, respectively. The data points are reported in order to show the dispersion with respect to 1$\mu$m.
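The binning and averaging procedure described above can be sketched as follows. This is a simplified illustration (it omits the minimum-count requirement and the binary-search step), where each SED is given as a pair of arrays of normalized $({\rm Log}~\nu, {\rm Log}~ L)$ points.

```python
import numpy as np

def average_sed(seds, lognu_min=12.0, lognu_max=20.0, step=0.02):
    """Mean and median of Log L over a fixed Log nu grid.
    `seds` is a list of (lognu, logL) array pairs, one pair per source."""
    edges = np.arange(lognu_min, lognu_max + step, step)
    mean, med = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # pool points from all sources falling in this frequency bin
        vals = np.concatenate([logL[(lognu >= lo) & (lognu < hi)]
                               for lognu, logL in seds])
        mean.append(vals.mean() if vals.size else np.nan)
        med.append(np.median(vals) if vals.size else np.nan)
    return edges, np.array(mean), np.array(med)
```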
The average SED is characterized by a flat X--ray slope, $\langle\Gamma\rangle=1.12$, while in the optical-UV the observed emission appears to be consistent with the host-galaxy. The flattening of the X--ray slope is likely due to the fact that we do not correct the fluxes at 1 and 4 keV for intrinsic absorption. The average SED in the mid-infrared is probably a combination of dust emission from star-forming regions and AGN emission reprocessed by the dust. \begin{figure*} \centering {\label{meanbinx}\includegraphics[width=0.3\textwidth]{17175fig6}} {\label{meanbinir}\includegraphics[width=0.3\textwidth]{17175fig7}} {\label{meanbinz}\includegraphics[width=0.3\textwidth]{17175fig8}} \caption{Average SEDs in the rest-frame ${\rm Log}~(\nu L_\nu)-{\rm Log}~ \nu$ plane. \textit{Right panel:} Mean SEDs computed binning in X--ray luminosity at 4 keV. \textit{Center panel:} Mean SEDs computed binning in infrared luminosity at 8$\mu$m. \textit{Left panel:} Mean SEDs computed binning in redshift. The color code refers to the different bins as labeled.} \label{fig:averagesed} \end{figure*} Before trying to deconvolve each source using an SED-fitting code, we binned the total sample in X--ray and infrared luminosities and redshift. We used the luminosities at 4 keV and 8$\mu$m to divide the total sample into 6 bins with the same number of sources in each bin. The wavelength of the luminosity used to bin the sample is chosen to minimize the host-galaxy contribution. In the three panels of Figure \ref{fig:averagesed} the resulting mean SEDs are shown. The redshift trend is directly connected to the luminosity trend, since at higher redshifts we are looking at the more luminous sources. The shapes of the average SEDs in the optical bands are approximately the same in all luminosity and redshift bins. As expected, there is a stronger host-galaxy contribution in the lower luminosity/redshift bins, where the average SEDs have a typical galaxy shape.
Moreover, there is a trend between X--ray and mid-infrared luminosity: the contribution in the infrared is higher at higher X--ray luminosities. This effect is already known for both Type-1 and Type-2 AGN using the intrinsic (non-stellar) emission from the AGN (e.g., \citealt{2004A&A...418..465L}, \citealt{2009A&A...502..457G}). The observed average SED for our sample is a combination of both host-galaxy and AGN light, but the change in the average shape in the mid-infrared as a function of the X--ray luminosity suggests that the most luminous sources are probably dominated by the AGN emission at those wavelengths. \section{SED-fitting} \label{SED-fitting} The purpose of performing SED fitting is to properly disentangle the emission associated with stellar light from that due to accretion, and to constrain physical parameters. Since the relative contribution of the different components varies with wavelength, a proper decomposition can be obtained through an SED-fitting approach, providing a robust estimate of the nuclear emission (bolometric luminosities and bolometric corrections) and of its relation with the host-galaxy properties (mass, star formation rate, morphological classification). A well-sampled SED is mandatory; in particular, far-infrared observations are fundamental to sample the star-formation activity, while mid-infrared observations are necessary to sample the region of the SED where most of the bolometric luminosity of obscured AGN is expected to be re-emitted. In Fig.~\ref{sed} the broad-band SEDs of four XMM-Newton Type-2 AGN are plotted as examples. The lower two panels are representative of a full SED with all detections from the far-infrared to the optical. Unfortunately, there are very few detections at 160 and 70$\mu$m, so that the most representative situation is shown in the upper left panel of Fig.~\ref{sed}.
The three components adopted in the SED-fitting code, \textit{starburst}, \textit{AGN torus} and \textit{host-galaxy} templates, are shown as a long-dashed line, solid line and dotted line, respectively. All the templates used in the SED-fitting code will be described in the following Sections. The red line represents the best-fit, while the black points represent the photometric data used in the code, from low to high frequency: MIPS-Spitzer ($160\mu m$, $70\mu m$ and $24\mu m$ if available), 4 IRAC bands, K CFHT, J UKIRT, optical Subaru and CFHT bands. \par The observed data points from the infrared to the optical are fitted with a combination of various SED templates (see Sect.~\ref{Template libraries}) using a standard $\chi^2$ minimization procedure \begin{equation} \label{chisquare} \chi^2=\sum_{i=1}^{\rm{n_{filters}}}\left[\frac{F_{obs,i}-A\times F_{gal,i}-B\times F_{agn,i}-C\times F_{ir,i}}{\sigma_i}\right]^2 \end{equation} where $F_{obs,i}$ and $\sigma_i$ are the monochromatic observed flux and its error in the band $i$; $F_{gal,i}$, $F_{agn,i}$ and $F_{ir,i}$ are the monochromatic template fluxes for the host-galaxy, the AGN and the starburst component, respectively; $A$, $B$ and $C$ are the normalization constants for the host-galaxy, AGN and starburst component, respectively. The starburst component is used only when the source is detected at $160\mu m$ and $70\mu m$. Otherwise, a two-component SED-fit is used. Sixteen is the maximum number of bands adopted in the SED-fitting (only detections are considered), namely: $160\mu m$, $70\mu m$, $24\mu m$, $8.0\mu m$, $5.8\mu m$, $4.5\mu m$, $3.6\mu m$, $K_S$, $J$, $z^+$, $i^*$, $r^+$, $g^+$, $V_J$, $B_J$ and $u^*$.
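For a fixed triple of templates, the minimization of Eq.~(\ref{chisquare}) over the normalizations $A$, $B$ and $C$ is a weighted linear least-squares problem. A minimal sketch of this step (our own illustration, not the actual code; a full implementation would also loop over all template combinations, keep the lowest-$\chi^2$ triple, and enforce non-negative normalizations):

```python
import numpy as np

# For one fixed choice of host-galaxy, AGN and starburst templates, the
# normalizations A, B and C enter the chi^2 linearly, so the best-fit values
# follow from weighted linear least squares (weights 1/sigma per band).

def fit_normalizations(f_obs, sigma, f_gal, f_agn, f_ir=None):
    """Return the best-fit (A, B[, C]) and the chi^2 for fixed templates."""
    components = [f_gal, f_agn] + ([f_ir] if f_ir is not None else [])
    design = np.column_stack(components) / sigma[:, None]  # weight by 1/sigma
    coeffs, *_ = np.linalg.lstsq(design, f_obs / sigma, rcond=None)
    model = np.column_stack(components) @ coeffs
    chi2 = float(np.sum(((f_obs - model) / sigma) ** 2))
    return coeffs, chi2
```

When the source lacks 70 and 160$\mu$m detections, `f_ir` is simply omitted, reproducing the two-component fit described above.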
\subsection{Template libraries} \label{Template libraries} \subsubsection{Optical template library} We used a set of 75 galaxy templates built from the \citet[BC03 hereafter]{2003MNRAS.344.1000B} code for spectral synthesis models, using the version with the \textquotedblleft{Padova 1994}\textquotedblright~tracks, solar metallicity and the Chabrier IMF (\citealt{2003ApJ...586L.133C}). For the purposes of this analysis a set of galaxy templates representative of the entire galaxy population, from ellipticals to starbursts, is selected. To this aim, 10 exponentially decaying star formation histories with characteristic times ranging from $\tau = 0.1$ to $30$\,Gyr and a model with constant star formation are included. For each SFH, a subsample of the ages available in the BC03 models is selected, both to avoid degeneracy among the parameters and to speed up the computation. In particular, early-type galaxies, characterised by a small amount of ongoing star formation, are represented by models with values of $\tau$ smaller than $1$ Gyr and ages larger than $2$\,Gyr, whereas more actively star-forming galaxies are represented by models with longer values of $\tau$ and a wider range of ages, from $0.1$ to $10$\,Gyr. An additional constraint on the age is that, for each source, the age has to be smaller than the age of the Universe at the redshift of the source. Each template is reddened according to the \citet{2000ApJ...533..682C} reddening law. The input value is $E(B-V)$, corresponding to a dust-screen model, with $F_o(\lambda)=F_i(\lambda)10^{-0.4 E(B-V) k(\lambda)}$, where $F_o$ and $F_i$ are the observed and the intrinsic fluxes, respectively. The extinction at a wavelength $\lambda$ is related to the colour excess $E(B-V)$ and to the reddening curve $k(\lambda)$ by $A_\lambda=k(\lambda)E(B-V)=k(\lambda)A_V /R_V$, with $R_V=4.05$ for the Calzetti law. The $E(B-V)$ values range between 0 and 1 with a step of $0.05$.
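For illustration, the dust-screen attenuation above can be applied as follows. This is a sketch (function names are ours) using the two-branch polynomial form of the Calzetti et al. (2000) curve with $R_V=4.05$; the coefficients below are our transcription of the published ones and should be checked against the original paper before any quantitative use:

```python
import numpy as np

# Calzetti et al. (2000) reddening curve k(lambda), lambda in micron, and the
# dust-screen attenuation F_obs = F_int * 10**(-0.4 * E(B-V) * k(lambda)).
# Two branches: 0.12-0.63 micron (UV/blue) and 0.63-2.20 micron (red/near-IR).

R_V = 4.05

def k_calzetti(wl_um):
    wl = np.atleast_1d(np.asarray(wl_um, dtype=float))
    x = 1.0 / wl  # inverse wavelength in micron^-1
    return np.where(
        wl >= 0.63,
        2.659 * (-1.857 + 1.040 * x) + R_V,             # 0.63-2.20 micron
        2.659 * (-2.156 + 1.509 * x - 0.198 * x**2
                 + 0.011 * x**3) + R_V,                 # 0.12-0.63 micron
    )

def redden(flux_int, wl_um, ebv):
    """Apply the dust screen: F_obs = F_int * 10**(-0.4 * E(B-V) * k)."""
    return flux_int * 10.0 ** (-0.4 * ebv * k_calzetti(wl_um))
```

In the fitting code each galaxy template is evaluated on the $E(B-V)$ grid from 0 to 1 in steps of 0.05, so `redden` would be called once per grid value.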
\subsubsection{AGN template library} The nuclear SED templates are taken from \citet{2004MNRAS.355..973S}. They were constructed from a large sample of Seyfert galaxies selected from the literature for which clear signatures of non-stellar nuclear emission were detected in the near-IR and mid-IR. After a proper subtraction of the stellar contribution, the nuclear infrared data were interpolated with a radiative transfer code for dust heated by a nuclear source with a typical AGN spectrum, including different geometries, dust distributions, variations of the radii, density and dust grain sizes to account for possible deviations from a standard ISM extinction curve (see \citealt{1994MNRAS.268..235G,2001A&A...365...37M} for more details). \par The infrared SEDs were then normalized by the intrinsic, unabsorbed X-ray flux in the 2--10 keV band, and divided into 4 intervals of absorption: $N_{\rm H}<10^{22}$ cm$^{-2}$ for Sy1, while $10^{22}<N_{\rm H}<10^{23}$ cm$^{-2}$, $10^{23}<N_{\rm H}<10^{24}$ cm$^{-2}$ and $N_{\rm H}>10^{24}$ cm$^{-2}$ for Sy2 (see Fig.~1 in \citealt{2004MNRAS.355..973S}). The main differences between the SEDs of Sy1s and Sy2s with $10^{22}<N_{\rm H}<10^{23}$ cm$^{-2}$ are the absorption in the near-IR at about $\lambda<2\mu$m and the silicate absorption at $\lambda=9.7\mu$m, which are present in the Sy2 template. The shape of the SED in the mid-infrared with $10^{23}<N_{\rm H}<10^{24}$ cm$^{-2}$ is quite similar to that with $10^{22}<N_{\rm H}<10^{23}$ cm$^{-2}$. The Compton-thick SED ($N_{\rm H}>10^{24}$ cm$^{-2}$) also shows conspicuous absorption at $\lambda\sim1.3\mu$m. If the $N_{\rm H}$ value is available for a source, it is used as a prior in the selection of the best-fit AGN template. \subsubsection{Starburst template library} We used two different starburst template libraries for the SED-fitting: \citet{2001ApJ...556..562C} and \citet{2002ApJ...576..159D}.
These template libraries represent a wide range of SED shapes and luminosities and are widely used in the literature. Here, we briefly describe how each of these libraries was derived and discuss the main differences between them. \par The \citet{2001ApJ...556..562C} template library consists of 105 templates based on the SEDs of four prototypical starburst galaxies (Arp220 (ULIRG); NGC 6090 (LIRG); M82 (starburst); and M51 (normal star-forming galaxy)). They were derived using the \citet{1998ApJ...509..103S} models with the mid-infrared region replaced with ISOCAM observations between 3 and 18$\mu$m (verifying that the observed values of these four galaxies were reproduced by the templates). These templates were then divided into two portions (4--20$\mu$m and 20--1000$\mu$m) and interpolated between the four prototypes to generate a set of libraries of varying shapes and luminosities. The \citet{2001ApJ...549..215D} templates are also included in this set to extend the range of shapes. \par The \citet{2002ApJ...576..159D} templates are updated versions of the \citet{2001ApJ...549..215D} templates. This model involves three components: large dust grains in thermal equilibrium, semistochastically heated small grains, and stochastically heated PAHs. They are based on IRAS/ISO observations of 69 normal star-forming galaxies in the wavelength range 3--100$\mu$m. \citet{2002ApJ...576..159D} improved upon these models at longer wavelengths using SCUBA observations of 114 galaxies from the Bright Galaxy Sample (BGS, see \citealt{1989AJ.....98..766S}), 228 galaxies observed with ISOLWS (52--170$\mu$m; Brauher 2002), and 170$\mu$m observations for 115 galaxies from the ISOPHOT Serendipity Survey (\citealt{2000A&A...359..865S}). Altogether, these 64 templates span the IR luminosity range $10^8-10^{12}L_\odot$. The total infrared template sample used in our analysis is composed of 168 templates.
\begin{figure*} \centering \includegraphics[width=13cm,height=13cm]{17175fig9} \caption{Examples of SED decompositions. Black circles are the observed photometry in the rest-frame (from the far-infrared to the optical-UV). The long-dashed, solid and dotted lines correspond respectively to the starburst, AGN and host-galaxy templates found as the best fit solution. The red line represents the best-fit SED. The stellar mass and the SFR derived from the galaxy template are reported.} \label{sed} \end{figure*} \section{Bolometric luminosities and bolometric corrections} \label{Bolometric luminosities and bolometric corrections} The nuclear bolometric luminosities and bolometric corrections are estimated using an approach similar to Pozzi et al. (2007; see also \citealt{2010MNRAS.402.1081V,2010A&A...517A..11P}), where the infrared luminosity is used as a proxy of the intrinsic nuclear luminosity. The appropriate nuclear template from \citet{2004MNRAS.355..973S} is selected based on the absorbing column density $N_{\rm H}$, when available, or from the best-fit nuclear infrared template. In order to compute the hard X--ray bolometric correction we used the standard definition \begin{equation} k_{\rm bol}=\frac{L_{\rm bol}}{L_{[2-10]{\rm keV}}} \end{equation} where $L_{[2-10]{\rm keV}}$ is the intrinsic X--ray luminosity and the bolometric luminosity is computed as the sum of the total infrared and X--ray luminosity \begin{equation} L_{\rm bol}=L_{\rm IR}+L_{\rm X}. \end{equation} After performing the SED-fitting, only the nuclear component of the best-fit is integrated. Hence, the total IR luminosity $L_{\rm IR}$ is obtained by integrating the nuclear template between 1 and 1000$\mu$m. To convert this IR luminosity into the nuclear accretion disk luminosity, we applied correction factors to account for the torus geometry and the anisotropy (see \citealt{2007A&A...468..603P}). The first correction is parameterized by the covering factor $f$.
The covering factor is related to the geometry of the torus that obscures the accretion disk emission in the optical-UV along the line of sight, and its value is estimated from the ratio of obscured/unobscured quasars found by the X--ray background synthesis models (\citealt{2007A&A...463...79G}). This correction factor is $\sim1.5$, corresponding to a typical covering factor of $f\sim0.67$, consistent with the results based on clumpy torus models (\citealt{2008ApJ...685..160N}). \par The anisotropy factor is defined as the ratio of the luminosity of face-on versus edge-on AGN, where the obscuration is a function of the column density $N_{\rm H}$. Therefore, the SEDs in \citet{2004MNRAS.355..973S} have been integrated in the 1--30$\mu$m range, after normalizing them to the same luminosity in the 30--100$\mu$m range. The derived anisotropy values are 1.2--1.3 for $10^{22}<N_{\rm H}<10^{24}$ cm$^{-2}$ and 3--4 for $N_{\rm H}>10^{24}$ cm$^{-2}$. The same values as in \citet{2010MNRAS.402.1081V} are adopted: 1.3 for $10^{22}<N_{\rm H}<10^{24}$ cm$^{-2}$ and 3.5 for $N_{\rm H}>10^{24}$ cm$^{-2}$. \par The total X--ray luminosity $L_{\rm X}$ is estimated by integrating the X--ray SED over the 0.5--100 keV range. We have interpolated the de-absorbed soft and hard X--ray luminosities. Since we are integrating at the rest-frame frequencies, the X--ray SED is extrapolated to higher and lower energies using the estimated X--ray slope, introducing an exponential cut-off at 200 keV (\citealt{2007A&A...463...79G}, see also Sect.~3.1 in L10).
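The bookkeeping described in this section can be summarized in a short numerical sketch (illustrative only: the default correction factors are the values quoted in the text, the power-law shape stands in for the interpolated X--ray SED, and all function and variable names are ours):

```python
import numpy as np

# L_IR: nuclear template integrated over 1-1000 micron, scaled by the
# covering-factor correction (~1.5) and the anisotropy factor (1.3 for
# 10^22 < N_H < 10^24 cm^-2, 3.5 above). L_X: power law with photon index
# Gamma and an exponential cut-off at 200 keV, normalized to the intrinsic
# 2-10 keV luminosity and integrated over 0.5-100 keV. Then
# k_bol = (L_IR + L_X) / L_{2-10 keV}.

C_UM_HZ = 2.998e14  # speed of light in micron * Hz

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def l_ir_nuclear(wl_um, l_nu, covering=1.5, anisotropy=1.3):
    nu = C_UM_HZ / np.asarray(wl_um, dtype=float)   # micron -> Hz
    order = np.argsort(nu)
    return covering * anisotropy * _trapz(np.asarray(l_nu)[order], nu[order])

def l_x_total(l_2_10, gamma, e_cut=200.0, e_lo=0.5, e_hi=100.0, n=100000):
    def shape(e):  # energy flux density of the cut-off power law
        return e ** (1.0 - gamma) * np.exp(-e / e_cut)
    e_band = np.linspace(2.0, 10.0, n)
    e_full = np.linspace(e_lo, e_hi, n)
    return l_2_10 * _trapz(shape(e_full), e_full) / _trapz(shape(e_band), e_band)

def k_bol(l_ir, l_x, l_2_10):
    return (l_ir + l_x) / l_2_10
```

All luminosities are in the same units (e.g., erg s$^{-1}$); the actual analysis integrates the interpolated rest-frame SED rather than an analytic power law.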
\section{Robustness of the method} \label{Calibrating the method} \begin{figure*} \centering {\label{fig:lbolcheck}\includegraphics[width=0.3\textwidth]{17175fig10}} {\label{fig:kbolcheck}\includegraphics[width=0.3\textwidth]{17175fig11}} \\ {\label{fig:histlbolcheck}\includegraphics[width=0.3\textwidth]{17175fig12}} {\label{fig:histkbolcheck}\includegraphics[width=0.3\textwidth]{17175fig13}} \caption{Upper panels: Comparison between the values of bolometric luminosity and bolometric correction from data presented in L10 and from this work. The three red triangles mark the outliers discussed in Sect.~\ref{Calibrating the method}. Lower panels: Distribution of the differences between the values of bolometric luminosity and bolometric correction from data presented in L10 and from this work.} \label{fig:param} \end{figure*} The robustness of the method used to estimate nuclear bolometric luminosities and bolometric corrections from SED-fitting for the sample of Type-2 AGN has been tested against the updated soft X--ray selected sample of Type-1 AGN discussed in L10. The Type-1 AGN sample in the L10 work was composed of 361 spectroscopically classified broad-line AGN. The recent work by Brusa et al. (2010) has updated the spectroscopic classification and increased the number of Type-1 AGN with spectroscopic redshift, so that the final sample is composed of 395 Type-1 AGN in the redshift range $0.103\leq z \leq4.255$ with X--ray luminosities $42.20\leq {\rm Log}~ L_{[2-10]{\rm keV}}\leq 45.23$. We have computed bolometric and X--ray luminosities, and bolometric corrections using the same approach as in L10 for the Type-1 sample: bolometric luminosities are computed by integrating the rest-frame SEDs from 1$\mu$m up to the UV-bump.
In order to compare these estimates with the results from the SED-fitting code, we have applied to the same sample the method described in Sects.~\ref{SED-fitting} and \ref{Bolometric luminosities and bolometric corrections} to estimate the bolometric parameters. To be consistent with the selection criteria of the sample discussed in this paper, we have considered only AGN with X--ray detection in the hard band, removing from the main sample 87 Type-1 AGN with an upper limit at 2--10 keV. Moreover, for 2 sources the best-fit does not include an AGN component, so we cannot compute the bolometric luminosities for them. The final test sample is composed of 306 Type-1 AGN in the redshift range $0.103\leq z \leq3.626$ and X--ray luminosities $42.20\leq {\rm Log}~ L_{[2-10]{\rm keV}}\leq 45.04$. \par In order to select the appropriate nuclear template from \citet{2004MNRAS.355..973S}, we consider the SED for Sy1 AGN (no correction for anisotropy is necessary in this case) and the Sy2 SED with $10^{22}<N_{\rm H}<10^{23}$ for the 20 AGN that have $N_{\rm H}$ in this range. \par We present the comparison between the values of $L_{\rm bol}$ and $k_{\rm bol}$ from L10 and this work in Fig.~\ref{fig:param}. The outlier at the bottom of the plot, XID=357 at redshift 2.151, has ${\rm Log}~ k_{\rm bol}=1.95$ from L10 and ${\rm Log}~ k_{\rm bol}=1.04$ using the new approach, and presents large error bars in the 24$\mu m$ detection, so that the total bolometric luminosity, computed using the infrared luminosity, is probably underestimated. The outlier at the right end of the distribution, XID=5114 at redshift 0.212 (${\rm Log}~ k_{\rm bol}=2.82$ from L10 and ${\rm Log}~ k_{\rm bol}=1.95$ using the new approach), has detections at 160, 70 and 24$\mu$m, ${\rm Log}~ N_{\rm H} = 22.68$ and ${\rm Log}~ L_{[2-10]{\rm keV}}=42.89$.
This source is probably a star-forming galaxy, so that with the L10 approach we included stellar emission in the estimate of the nuclear bolometric luminosity, thus overestimating it and, therefore, the bolometric correction. The last notable outlier, at the top left of the distribution, XID=2152 at redshift 0.627 (${\rm Log}~ k_{\rm bol}=1.35$ from L10 and ${\rm Log}~ k_{\rm bol}=2.27$ using the new approach), presents a significant host-galaxy contribution in the optical-UV and, therefore, the bolometric luminosity is likely to be underestimated in the L10 approach. \par Although the two methods are very different, the bolometric luminosity estimates agree remarkably well, with a $1\sigma$ dispersion of 0.20 dex after applying a 3.5$\sigma$ clipping in order to exclude outliers. Bolometric luminosities from SED-fitting are on average slightly larger than those computed by integrating the rest-frame SED from 1$\mu m$ to the X--ray (see the lower left side in Fig.~\ref{fig:param}). This effect is also present in the Vasudevan et al. (2010) work. A possible explanation is that the SED-fitting underestimates the host-galaxy contribution, or that the anisotropy and geometry corrections are too large for some objects. The agreement between the two methods is overall quite satisfactory, and in the following we will discuss our findings for the Type-2 sample. \section{Results and discussion} \label{Results and discussion} \subsection{Bolometric correction and luminosities for Type-2 AGN} Bolometric luminosities and bolometric corrections have been computed for the Type-2 AGN sample. Intrinsic soft and hard X--ray luminosities are estimated as described in Sect.~\ref{Correction for absorption for the X--ray 2--10 keV luminosity}. For 15 sources we do not have an estimate of the AGN component from the SED-fitting, and we cannot compute the bolometric luminosity for them.
In Fig.~\ref{fig:histkbol12} the distributions of the bolometric correction for the 240 Type-2 AGN sample and for the 306 Type-1 AGN ($k_{\rm bol}$ for Type-1 AGN are computed using the SED-fitting code) are presented. \begin{figure} \includegraphics[width=8cm]{17175fig14} \caption{Distribution of the bolometric correction for the 240 Type-2 AGN sample (\textit{red hatched histogram}) and for the 306 Type-1 AGN (\textit{blue hatched histogram}).} \label{fig:histkbol12} \end{figure} \begin{figure} \includegraphics[width=8cm]{17175fig15} \caption{Hard X--ray bolometric correction against the intrinsic 2--10 keV luminosity for 240 Type-2 AGN with AGN best-fit (\textit{red data}). The crosses represent the bolometric correction for 306 Type-1 AGN, computed with the approach described in Sect.~\ref{Bolometric luminosities and bolometric corrections}. The green and orange lines represent the bolometric correction and the $1 \sigma$ dispersion obtained by \citet{hopkins07} and \citet{marconi04}, respectively. The red points and open squares represent the 111 Type-2 AGN with $N_{\rm H}$ from spectral analyses and the 144 Type-2 AGN with $N_{\rm H}$ from HR, respectively.} \label{fig:l210kbol} \end{figure} Figure~\ref{fig:l210kbol} shows the bolometric corrections for both the Type-1 and the Type-2 AGN samples as a function of the hard X--ray luminosity. For both samples, the bolometric parameters are estimated from the SED-fitting as discussed in Sect.~\ref{SED-fitting}. The green and orange curves represent the bolometric corrections and their $1~\sigma$ dispersion as derived by \citet{hopkins07} and \citet{marconi04}, respectively. Type-2 AGN have, on average, smaller bolometric corrections than Type-1 AGN at comparable hard X--ray luminosity.
For example, at $43.30\leq\LogL_{[2-10]{\rm keV}} \leq44.30$ (vertical lines in Fig.~\ref{fig:l210kbol}), where both AGN types are well represented, the median bolometric correction for the Type-2 AGN (134 objects) is $\langle k_{\rm bol}\rangle\sim13\pm1$, to be compared with a median bolometric correction $\langle k_{\rm bol}\rangle\sim23\pm1$ for the Type-1 AGN (167 objects). The two averages are statistically different at the $\sim7~\sigma$ level, and this is consistent with the results in \citet{2010MNRAS.402.1081V}. The mean $L_{[2-10]{\rm keV}}$ for the Type-1 and Type-2 AGN within this luminosity range differs by a factor of 1.8, and this could in principle explain at least part of the difference in the average bolometric corrections for the two samples of AGN. However, the difference remains significant if we split this luminosity range in two equal ${\rm Log}~ L_{[2-10]{\rm keV}}$ bins and perform a Kolmogorov--Smirnov test for the Type-1 and Type-2 AGN luminosity distributions in each bin. \par \citet{vasudevanfabian09} and \citet{2010A&A...512A..34L} have shown that hard X--ray bolometric corrections are correlated with the Eddington ratio ($\lambda_{\rm Edd}=L_{\rm bol}/L_{\rm Edd}$) for Type-1 AGN (see also \citealt{marconi04,kelly08}). The $k_{\rm bol}-\lambda_{\rm Edd}$ relation suggests that there is a connection between the broad-band emission, mostly in the optical-UV, and the Eddington ratio, which is directly linked to the ratio between the mass accretion rate and the Eddington accretion rate. A high $\lambda_{\rm Edd}$ corresponds to an enhanced optical-UV emission, which means a prominent big-blue bump and therefore a higher $k_{\rm bol}$. The difference between the average bolometric corrections for Type-1 and Type-2 AGN could be due to lower mass accretion rates in Type-2 AGN, assuming the same black hole mass distribution for the two AGN populations (see \citealt{2011arXiv1103.0276T}).
The current theoretical framework of AGN/host-galaxy co-evolution predicts that obscured AGN are highly accreting objects whose black holes are rapidly growing. However, we note that this is true at $z=1-3$ (see \citealt{2008A&A...490..905H}), while the majority of Type-2 AGN in the present sample are relatively luminous ($42.37\leq{\rm Log}~ L_{\rm bol}\leq46.80$) and at moderately low redshift ($0<z<1$). Therefore, the Type-2 AGN sample is likely to represent a later stage in AGN evolution history. \subsection{Infrared emission: indication of AGN activity} \begin{figure*} \centering \includegraphics[width=13cm,height=13cm]{17175fig16} \caption{Examples of SED decompositions. Black circles are the observed photometry in the rest-frame (from the far-infrared to the optical-UV). The long-dashed, solid and dotted lines correspond respectively to the starburst, AGN and host-galaxy templates found as the best fit solution. The red line represents the best-fit SED. The stellar mass and the SFR derived from the galaxy template are reported. The green point represents the nuclear mid-infrared luminosity using Eq.~(\ref{gandhieq}), while the cross represents the total observed luminosity at 12.3~$\mu$m computed from the rest-frame SED. XID=19 and 81 are examples of low-$r$ AGN, while XID=172 and 117 represent high-$r$ AGN.} \label{sedgandhi} \end{figure*} The re-processed infrared emission can be used as a proxy of the average disc emission, since, in the standard AGN picture, the timescale for the transfer of energy from the disk to the outer edge of the torus, where it is re-emitted in the infrared, is of the order of several years, whereas optical, UV and X--ray variability in AGN is known to occur on shorter timescales. The correlation between the 2--10 keV X--ray emission and the IR emission at 12.3$\mu$m for a sample of Seyfert nuclei has been discussed in Gandhi et al. (2009), and it can be used to estimate the intrinsic AGN power.
Using X--ray data from the literature and new IR data from the Very Large Telescope's Imager and Spectrometer for mid-Infrared (VISIR), taken specifically to address the issue of nuclear emission in local Seyferts, they found a tight correlation between the intrinsic, uncontaminated IR luminosity and the X--ray luminosity in the 2--10 keV range \begin{equation} \label{gandhieq} {\rm Log}~ \frac{L_{12.3~\mu m}}{10^{43}}=(0.19\pm0.05)+(1.11\pm0.07){\rm Log}~\frac{L_{[2-10]\rm{keV}}}{10^{43}}. \end{equation} The relation is characterized by a small scatter, with a standard deviation of 0.23 dex. The expected nuclear mid-infrared luminosity is computed from Eq.~(\ref{gandhieq}) using the estimate of the intrinsic unabsorbed X--ray luminosity. From the observed rest-frame SED (AGN+host-galaxy) the luminosity at 12.3~$\mu$m is computed. A comparison of the total observed luminosity at 12.3~$\mu$m and that predicted by Eq.~(\ref{gandhieq}) is plotted in Figure~\ref{sedgandhi} for four representative sources. In Figure~\ref{fig:hist_l12.3micron_obspredicted} the distribution of the ratio $r={\rm Log}~\left(L_{12.3~\mu m,{\rm obs}}/L_{12.3~\mu m,{\rm predicted}}\right)$ is plotted for the Type-2 AGN sample. The distribution of the ratio $r$ has a mean which is shifted from zero by $\sim0.2$. However, if we consider a Gaussian distribution centered at $r=0$ with $\sigma=0.23$, i.e. the same dispersion observed by \citet{2009A&A...502..457G} in their local sample, the majority of the objects are found within $2\sigma$ of the $r$ distribution. The tail extending beyond $2\sigma$ to high $r$ includes 73 sources (with $r\gtrsim0.5$) for which the predicted mid-infrared luminosity is significantly lower than observed. The hard X--ray luminosities of these 73 AGN are mainly in the range ${\rm Log}~ L_{[2-10]{\rm keV}}\sim42-44$, where the local correlation is well-sampled.
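For reference, the comparison described above reduces to a few lines of code (an illustrative sketch; luminosities are in erg s$^{-1}$ and the function names are ours):

```python
import numpy as np

# Eq. (gandhieq) predicts the nuclear 12.3 micron luminosity from the
# intrinsic 2-10 keV luminosity, and r = Log(L_obs / L_pred) measures the
# excess of the observed mid-infrared emission over that prediction.

def l12_predicted(l_x_2_10):
    """Nuclear 12.3 micron luminosity expected from the Gandhi relation."""
    return 1e43 * 10.0 ** (0.19 + 1.11 * np.log10(l_x_2_10 / 1e43))

def r_ratio(l12_observed, l_x_2_10):
    """r = Log(L_12.3,obs / L_12.3,predicted)."""
    return float(np.log10(l12_observed / l12_predicted(l_x_2_10)))
```

With the 0.23 dex scatter of the local relation, sources with $r\gtrsim0.5$ (i.e., beyond $2\sigma$) are the high-$r$ outliers discussed below.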
There are two possible explanations for a significant ($\sim30\%$ of the objects) tail toward high-$r$ values: either the Gandhi relation, which was derived for a sample of local Seyfert galaxies, cannot be extended to all the sources in our sample, or the SED-fitting procedure overestimates, in a fraction of these objects, the nuclear contribution. In order to study the properties of these outliers, bolometric corrections, morphologies, stellar masses and SFRs are discussed in the following. We call ``low-$r$'' AGN all sources within $2\sigma$ of the $r$ distribution, while the ``high-$r$'' AGN sample is populated by the sources deviating by more than $2\sigma$ (see Fig.~\ref{sedgandhi} for some examples). \par A clear separation in bolometric corrections for these two sub-samples is found. This is shown in Figure~\ref{fig:lxkbol_gdad}, in which the bolometric corrections are plotted as a function of the 2--10 keV luminosity. At a given hard X--ray luminosity ($43\leq\LogL_{[2-10]{\rm keV}} \leq44$) the low-$r$ sample has a median bolometric correction of $\langle k_{\rm bol}\rangle\sim11\pm1$ (110 objects), to be compared with a median bolometric correction for the high-$r$ sample of $\langle k_{\rm bol}\rangle\sim26\pm3$ (44 objects). The two median values of $k_{\rm bol}$ are statistically different at the $\sim5\sigma$ level. \par Furthermore, in the high-$r$ sample 24 Type-2 AGN out of 73 have a detection at 70$\mu$m ($\sim33\%$, significantly higher than for the low-$r$ sample, $\sim4\%$), and 9 of these 24 also have a detection at 160$\mu$m ($\sim12\%$ considering the total high-$r$ sample, and only $\sim1\%$ for the low-$r$ sample).
This indicates that the difference in the average bolometric corrections between the low-$r$ and high-$r$ samples is probably due to the fact that a significant fraction of the infrared emission is attributable to an incorrect modeling of the star-formation process, or that the AGN contribution is somehow overestimated by the SED-fitting procedure. \par There is no significant difference in the average nuclear absorption between the low-$r$ and the high-$r$ sample, while there is a possibly significant difference in SFR and stellar masses. The median stellar mass in the high-$r$ sample is $\langle {\rm Log}~ M_\ast \rangle\sim 10.93 M_\odot$ with a dispersion of 0.30, while for the low-$r$ sample it is $\langle {\rm Log}~ M_\ast \rangle\sim 10.76 M_\odot$ with $\sigma=0.30$. The two averages are statistically different at the $\sim3~\sigma$ level. The median SFR, as derived from the SED-fit, for the high-$r$ sample is $\langle SFR \rangle\sim 17\pm3 M_\odot$/yrs with $\sigma=0.30$, while for the low-$r$ sample it is $\langle SFR \rangle\sim 3\pm1 M_\odot$/yrs with $\sigma=0.30$, and the two averages are statistically different at the $4.4~\sigma$ level. \par Overall, the SED-fitting for the 73 Type-2 AGN is likely to overestimate the AGN emission in the infrared, which is probably due to the infrared emission from star-forming regions. The average bolometric correction for Type-2 AGN, excluding these sources, would be even lower than what we have computed in the previous Section. This reinforces the idea of lower bolometric corrections for Type-2 AGN with respect to Type-1 AGN. The low bolometric corrections for Type-2 AGN could also be explained if a fraction of the accretion disk bolometric output is not re-emitted in the mid-infrared, but rather dissipated (e.g., by AGN feedback). This would not be accounted for in the bolometric luminosity, and could provide a plausible explanation for the low $k_{\rm bol}$, especially if the low-$r$ sample is considered.
At this stage, this is just speculation, and more work is needed to verify this possibility. \begin{figure} \includegraphics[width=8cm]{17175fig17} \caption{Histogram of the ratio between the total observed luminosity at 12.3~$\mu$m and the mid-infrared luminosity predicted by Eq.~(\ref{gandhieq}). The red curve represents a Gaussian with mean equal to zero and standard deviation 0.23. The $1\sigma$ and $2\sigma$ standard deviations of the correlation are also reported.} \label{fig:hist_l12.3micron_obspredicted} \end{figure} \begin{figure} \includegraphics[width=8cm]{17175fig18} \caption{Hard X--ray bolometric correction against 2--10 keV luminosity for 240 Type-2 AGN with AGN best-fit. The 240 Type-2 sample is divided into two subsamples: the low-$r$ AGN sample (\textit{black data}) and the high-$r$ AGN sample (\textit{yellow data}). The green and orange lines represent the bolometric correction and $1\sigma$ dispersion obtained by \citet{hopkins07} and \citet{marconi04}, respectively. In the de-absorbed hard X--ray luminosity range highlighted by the solid lines, we have 167 low-$r$ and 73 high-$r$ sources.} \label{fig:lxkbol_gdad} \end{figure} \begin{table*} \caption{Properties of the Type-2 AGN sample.
\label{tbl_ch4-2}} \begin{center} \begin{tabular}{lccclllcccc} \hline \hline XID & Redshift & ${\rm Log}~ L_{[2-10]{\rm keV}}$ & $\LogL_{\rm bol}$ & $k_{\rm bol}$ & ${\rm Log}~ M_\ast$ & SFR & $M_{\rm U}$ & $M_{\rm V}$ & $M_{\rm J}$ & Morphological class$^{\mathrm{a}}$ \\ {} & {} & $\rm{[erg\,s^{-1}]}$ & $\rm{[erg\,s^{-1}]}$ & {} & $\rm{[M_\odot]}$ & $[M_\odot/{\rm yrs}]$ & {} & {} & {} & {} \\ \hline\noalign{\smallskip} 67 & 0.367 & 42.79 & 43.80 & 10.24 & 9.68 & 1.86 & -18.63 & -19.80 & -20.72 & 0 \\ 65 & 0.979 & 43.83 & 44.85 & 10.62 & 10.19 & 38.60 & -20.50 & -21.63 & -22.71 & 7 \\ 64 & 0.686 & 43.54 & 44.52 & 9.57 & 10.45 & 32.82 & -20.17 & -21.56 & -23.05 & 10 \\ 63 & 0.355 & 42.98 & 44.28 & 19.88 & 10.79 & 4.94 & -20.61 & -22.13 & -23.26 & 2 \\ 54 & 0.350 & 42.58 & 43.38 & 6.32 & 11.18 & 0.11 & -20.57 & -22.62 & -23.78 & 12 \\ 45 & 0.121 & 41.90 & 42.91 & 10.31 & 9.39 & 0.00 & -16.76 & -18.70 & -19.77 & 1 \\ 43 & 1.162 & 44.22 & 45.28 & 11.49 & 11.30 & 1.47 & -21.55 & -23.48 & -24.61 & 2 \\ 19 & 0.659 & 43.67 & 44.76 & 12.44 & 11.07 & 35.88 & -21.57 & -22.88 & -24.02 & 1 \\ 117 & 0.936 & 43.47 & 45.59 & 132.11 & 11.24 & 199.40 & -21.81 & -23.34 & -24.80 & 23 \\ 116 & 0.874 & 43.49 & 44.50 & 10.17 & 10.56 & 3.77 & -20.19 & -21.81 & -22.90 & 0 \\ 112 & 0.762 & 43.65 & 44.65 & 9.97 & 10.93 & 0.62 & -20.00 & -22.13 & -23.52 & 3 \\ 104 & 0.623 & 44.08 & 45.11 & 10.75 & 10.79 & 0.45 & -20.25 & -22.18 & -23.30 & 1 \\ 101 & 0.927 & 43.70 & 44.68 & 9.56 & 10.92 & 0.61 & -21.19 & -22.93 & -23.78 & 0 \\ 100 & 0.270 & 42.61 & 43.54 & 8.59 & 10.21 & 0.00 & -19.14 & -20.98 & -21.92 & 0 \\ 99 & 0.730 & 43.54 & 44.78 & 17.47 & 9.97 & 116.26 & -21.61 & -22.26 & -23.15 & 11 \\ 85 & 1.001 & 43.46 & 44.84 & 23.99 & 10.13 & 38.52 & -20.15 & -21.34 & -22.56 & 8 \\ 81 & 0.915 & 44.11 & 45.05 & 8.65 & 11.18 & 0.00 & -20.35 & -22.59 & -24.07 & 2 \\ 70 & 0.688 & 44.00 & 45.60 & 39.74 & 10.65 & 542.60 & -20.87 & -22.32 & -24.31 & 3 \\ 152 & 0.895 & 43.75 & 44.82 & 11.72 & 9.87 & 92.88 
& -21.06 & -21.82 & -22.85 & 7 \\ 150 & 0.740 & 43.29 & 44.25 & 9.27 & 10.64 & 0.55 & -19.38 & -21.28 & -22.50 & 3 \\ \hline \end{tabular} \flushleft \begin{list}{}{Notes---This table is presented entirely in the electronic edition; a portion is shown here for guidance. } \item[$^{\mathrm{a}}$]{The morphological classification of the Type-2 AGN hosts is coded from 0 to 23: 0 = elliptical; 1 = S0; 2 = bulge-dominated; 3 = intermediate-bulge; 4 = disk-dominated; 5 = irregular; 6 = compact/irregular; 7 = compact; 8 = unresolved/compact; 9 = blended; 10 = bulge-dominated/close-companion; 11 = intermediate-bulge/close-companion; 12 = S0/close-companion; 23 = possible mergers.} \end{list} \end{center} \end{table*} \subsection{Host-galaxy properties: $M_\ast$, SFR, colors and morphologies} \begin{figure} \includegraphics[width=8cm]{17175fig19} \caption{The morphology distribution (using the ZEST+ code) of the 233 AGN host-galaxies on the $(U-V)$ colour-mass diagram. We also plot the 22 sources without morphological information. The $(U-V)$ color and stellar masses are computed using the SED-fitting code. We overplot the contours of about 8700 galaxies in $z$COSMOS (colours and stellar masses from \textit{Hyperz}). The morphology classification is labeled as follows: elliptical (Ell), S0, bulge-dominated galaxy (BD), intermediate-bulge galaxy (IB), disk-dominated galaxy (DD), irregular (Irr), Compact, possible mergers (PM) and unresolved compact (UC). The red dashed line represents the red sequence cut defined by \citet{2006A&A...453..869B}, while the green short dashed line defines an approximate green valley region; both lines are calculated at redshift $\sim 0.76$, which is the average redshift of the main Type-2 sample.} \label{contour_morp_uv_zest+} \end{figure} Galaxies show a colour bi-modality both in the local Universe and at higher redshift (up to $z\sim2$; e.g., \citealt{2001AJ....122.1861S,2004ApJ...608..752B}).
This bi-modality (red-sequence and blue-cloud galaxies) has been interpreted as evidence for a dichotomy in their star formation and merging histories (e.g., \citealt{2005ApJ...632...49M}, but see also \citealt{2010ApJ...721L..38C} for an alternative explanation). Color-magnitude and color-mass diagrams (e.g., rest-frame $(U-V)$ versus stellar mass) have been used as tools in galaxy evolution studies, and since many models invoke AGN feedback as an important player in such evolution, it is interesting to locate the hosts of Type-2 AGN in those diagrams. Using the galaxy component obtained from the best fit of the Type-2 AGN, it is possible to derive rest-frame colors for the host that, linked to the stellar mass and the morphology, can provide hints on AGN feedback. Several studies have found that the hosts of obscured AGN tend to be redder than the overall galaxy population in the rest-frame $(U-V)$ color (e.g., \citealt{2007ApJ...660L..11N}). There are at least two possible and significantly different interpretations of this observational result: the observed red colors are mainly due to dust extinction, so that a significant fraction of obscured AGN would live in massive, dusty star-forming galaxies with red optical colors (e.g., \citealt{2009A&A...507.1277B}); or red sources are linked with passive systems (e.g., \citealt{2007A&A...475..115R,2009ApJ...692L..19S,2010ApJ...721L..38C}). Therefore, accurate stellar mass and SFR estimates, together with detailed galaxy morphologies, are of particular importance to discriminate between the two alternative possibilities. \par The very high resolution and sensitivity of ACS-HST imaging in the COSMOS survey provides resolved morphologies for several hundred thousand galaxies with $i_{\rm acs}\leq 24$ (see Scarlata et al. 2007 for details). Galaxy morphologies were obtained with an upgraded version of the Zurich Estimator of Structural Types (ZEST; \citealt{2007ApJS..172..406S}), known as ZEST+ (Carollo et al.
2011, in prep). Relative to its predecessor, ZEST+ includes additional measurements of non-parametric morphological indices for characterising both structures and substructures. For consistency with the earlier versions, ZEST+ uses a Principal Component Analysis (PCA) scheme in the 6-dimensional space of concentration, asymmetry, clumpiness, M$_{20}$ (second-order moment of the brightest 20\% of galaxy pixels), Gini coefficient, and ellipticity. ZEST+ classifies galaxies into seven morphological types located in specific regions of the 6-dimensional space: elliptical, S0, bulge-dominated disk, intermediate-bulge disk, disk-dominated, irregular, compact. The different types were then visually inspected. For 19 objects ZEST+ is unable to give any information on morphology because these sources lie off the edge of the ACS tiles, and 4 sources are blended. As a result of the ZEST+ procedure and visual inspection of the other 233 galaxies in our sample, we find that 16 are ellipticals (Ell), 53 are S0s, 74 are bulge-dominated (BD) disks, 27 are intermediate-bulge (IB) disks, just 1 is disk-dominated (DD), 19 are irregular galaxies (Irr), 15 are compact galaxies (i.e. the structural parameters computed for these galaxies from the HST-ACS images are highly affected by the instrumental PSF) and 18 are unresolved compact galaxies (UC, i.e. essentially point-like sources). Ten galaxies show distortions and potential signatures of ongoing or recent mergers (PM). \rev{At the typical magnitudes of the objects in our sample, the ZEST+ classification is highly reliable for galaxies with redshift $\lesssim1$. At higher redshifts, morphological k-correction and, to a lesser extent, resolution effects can adversely affect measurements of ZEST+ parameters (note that only 4\% of the main Type-2 AGN sample have $z>1.5$). However, broad morphological bins (e.g., early/late type galaxies) should be relatively robust.
For high-$z$ galaxies, resolution might also have an impact on the classification of mergers, and the ACS images may not be deep enough to distinguish merger features (see also \citealt{2011arXiv1105.5395M}). Moreover, inclination might also affect the morphological classification (e.g., \citealt{2010ApJS..186..427N}). However, a detailed study of systematics and biases in the morphological classification is beyond the scope of the present paper.} In Table \ref{tbl_ch4-2} we list the main properties of the sample. \par The rest-frame $(U-V)$ color encompasses the 4000$\text{\AA}$ break, and it is particularly sensitive to age and metallicity variations of the stellar populations in galaxies (e.g., \citealt{1978ApJ...225..742S,2004ApJ...608..752B,2006A&A...453..869B,2010ApJ...721L..38C}). In Fig.~\ref{contour_morp_uv_zest+} the distribution of the rest-frame $(U-V)$ colors, which are computed directly from the best-fit galaxy template, and stellar masses (from the SED-fitting code) are reported for the entire Type-2 AGN sample. In the same figure, the background contours for a sample of $\sim8700$ galaxies in zCOSMOS ($i_{\rm acs}<22.5$; 240 Type-2 AGN are detected in the $i_{\rm acs}$ band, of which 183 (76\%) have $i_{\rm acs}<22.5$) are also plotted, where colours and stellar masses are computed using the Hyperz code (\citealt{2000A&A...363..476B}). \par AGN are known to reside in massive galaxies (e.g., \citealt{2009ApJ...696..396S,2009A&A...507.1277B}) and this is fully confirmed by the present analysis. The morphologies of the host-galaxies and the stellar masses indicate that there is a preference for these Type-2 AGN to be hosted in bulge-dominated and S0 galaxies ($\sim50\%$) with stellar masses greater than $10^{10}M_\odot$. This result is consistent with previous studies on Type-2 AGN by \citet[see also \citealt{2003MNRAS.346.1055K,2008ApJ...681..931B,2011ApJ...727L..31S}]{2008ApJ...675.1025S}.
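The integer morphology codes of Table \ref{tbl_ch4-2} (note a) amount to a simple lookup table; a minimal sketch (codes 13--22 are unused in the table note):

```python
# Morphological classes coded in Table 2, note (a); the integer codes
# and labels are taken directly from the table note.
MORPH_CLASS = {
    0: "elliptical",
    1: "S0",
    2: "bulge-dominated",
    3: "intermediate-bulge",
    4: "disk-dominated",
    5: "irregular",
    6: "compact/irregular",
    7: "compact",
    8: "unresolved/compact",
    9: "blended",
    10: "bulge-dominated/close-companion",
    11: "intermediate-bulge/close-companion",
    12: "S0/close-companion",
    23: "possible mergers",
}
```

For example, XID 67 carries code 0 and is therefore an elliptical host.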
\par It should be noted that no correction for internal extinction has been applied to the $(U-V)$ colors of either the background zCOSMOS galaxies or the Type-2 AGN hosts. This correction could be important, as shown in \citet{2008ApJ...686...72C} (see also \citealt{2010ApJ...721L..38C}). In that work star formation and galactic stellar mass assembly are analyzed using a very large and highly spectroscopically complete sample selected on the rest-frame NIR bolometric flux in the GOODS-N. They found that applying extinction corrections is critical when analyzing galaxy colors; nearly all of the galaxies in the green valley are 24$\mu m$ sources, but after correcting for extinction, the bulk of the 24$\mu m$ sources lie in the blue cloud. This correction introduces an average shift in color of $\sim0.2$~mag for the most extincted/star-forming galaxies. However, to be consistent with the colors of the background galaxies, no correction for intrinsic extinction is applied here. \par AGN host-galaxies belong to the red-sequence if their $(U-V)$ color is above the threshold (\citealt{2006A&A...453..869B}): \begin{equation} (U-V)_{\rm AB, rest-frame}>0.277\, {\rm Log}~ M_\ast -0.352\, z -0.39\,. \end{equation} Sources in the green-valley are approximately defined by shifting this relation 0.25~mag downward, towards bluer colors. With these definitions, $\sim42\%$ (108/255) and $\sim25\%$ (63/255) of the total sample are included in the red-sequence and the green-valley, respectively. For all sources the Specific Star-Formation Rate (SSFR) is estimated, defined as the SFR per unit of galaxy stellar mass (SSFR=SFR/$M_\ast$). The inverse of the SSFR, $SSFR^{-1}$, is called the ``growth time'' \begin{equation} \label{ssfr} SSFR^{-1}=M_\ast/\dot M_\ast, \end{equation} and corresponds to the time required for the galaxy to double its stellar mass, assuming its SFR remains constant.
Actively star-forming galaxies are defined as sources with growth time smaller than the age of the Universe at their redshift ($SSFR^{-1} < t_{\rm Hubble}$), while sources with $SSFR^{-1}$ larger than the age of the Universe can be considered passive galaxies (see also \citealt{2009A&A...501...15F,2009A&A...507.1277B}). Figure~\ref{ssfrmstar} shows $SSFR^{-1}$ as a function of the stellar mass in three different redshift bins for the AGN host-galaxies in the red-sequence, in the green-valley and in the blue-cloud, and for the zCOSMOS galaxies in the same redshift ranges. The horizontal lines mark the age of the Universe at the two redshift boundaries of the chosen intervals. At face value, almost all the sources in the red-sequence have $SSFR^{-1}$ larger than the age of the Universe at their redshift, which is consistent with passive galaxies. However, the value of $SSFR^{-1}$ has to be considered only as an approximate indication of the star-formation activity; in fact, there is possible evidence of residual star formation in red-sequence AGN host-galaxies, as witnessed by their morphologies. In the red-sequence 8 and 28 sources are classified as ellipticals and S0s, respectively; altogether they represent 34\% of the host-galaxy population in the red-sequence. About 42\% are disk galaxies (both bulge-dominated and intermediate-bulge), which are probably still forming stars but not at high rates. In fact, 15 out of 108 sources ($\sim14\%$) are detected at 70$\mu$m and 5 are also detected at 160$\mu$m ($\sim6\%$). \par For these objects, the SFR inferred from the far-infrared detections is significantly higher than the SFR derived from the SED-fitting procedure. Indeed, SED-fitting over the UV, optical and near-infrared bands is not always able to discriminate between the red continua of passive galaxies and those of dusty star-forming galaxies.
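The two selections above (the \citealt{2006A&A...453..869B} color cuts and the growth-time criterion) can be sketched numerically. The cosmological parameters below ($H_0\simeq70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, flat $\Lambda$CDM) are illustrative assumptions, not necessarily those adopted in the paper; the closed-form age is the standard flat-$\Lambda$CDM expression.

```python
import math

HUBBLE_TIME_GYR = 13.97          # 1/H0 in Gyr for H0 ~ 70 km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7      # assumed flat LCDM parameters

def color_class(u_minus_v, log_mstar, z):
    """Red-sequence cut quoted in the text (Borch et al. 2006), with the
    green valley defined by shifting the cut 0.25 mag bluewards."""
    red_cut = 0.277 * log_mstar - 0.352 * z - 0.39
    if u_minus_v > red_cut:
        return "red-sequence"
    if u_minus_v > red_cut - 0.25:
        return "green-valley"
    return "blue-cloud"

def age_of_universe(z):
    """Age of a flat LCDM universe at redshift z in Gyr (closed form)."""
    return (2.0 * HUBBLE_TIME_GYR / (3.0 * math.sqrt(OMEGA_L))) * \
        math.asinh(math.sqrt(OMEGA_L / OMEGA_M) * (1.0 + z) ** -1.5)

def is_passive(log_mstar, sfr, z):
    """Passive if the growth time SSFR^-1 = M*/SFR exceeds t_Hubble(z)."""
    if sfr <= 0.0:
        return True              # zero SFR: formally infinite growth time
    growth_time_gyr = 10.0 ** log_mstar / sfr / 1e9
    return growth_time_gyr > age_of_universe(z)

# Two rows of Table 2: XID 54 (log M*=11.18, SFR=0.11, z=0.350) comes out
# passive, XID 99 (log M*=9.97, SFR=116.26, z=0.730) actively star-forming.
```

As the text cautions, this growth-time label is only approximate: SED-based SFRs can miss heavily obscured star formation that far-infrared detections reveal.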
Therefore, we decided to include another indicator in the present analysis, broadly following the procedure described in \citet[i.e., the $(U-V)-(V-J)$ color diagram]{2010ApJ...721L..38C}. Near-infrared emission can distinguish between red passive and dust-obscured galaxies: given a similar $0.5\mu$m flux, a star-forming galaxy has more emission near $\sim1\mu$m than a passive galaxy. Selecting a sub-sample of galaxies in the same redshift range explored by \citet{2010ApJ...721L..38C}, we find 92 AGN host-galaxies with $0.8\leq z\leq 1.2$. Fig.~\ref{mstaruvredshiftsel} shows both inactive galaxies and AGN host-galaxies in the same redshift range and the thresholds considered to divide galaxies in the red-sequence and in the green-valley (we consider an average redshift of 1 to define the threshold for the red-sequence and the green-valley). Thirty-five out of 92 AGN hosts are found to lie in the red-sequence ($\sim38\%$) and 23 in the green-valley ($\sim25\%$), while for inactive galaxies about 32\% and 21\% lie in the red-sequence and green-valley, respectively. In Figure~\ref{uvvj} the $(U-V)-(V-J)$ color diagram for the 92 Type-2 AGN hosts is presented. From a preliminary analysis of the rest-frame $(U-V)$ against the rest-frame $(V-J)$ color (see Fig.~\ref{uvvj}, but see also Fig.~2 in \citealt{2010ApJ...721L..38C}), only $\sim9\%$ of the AGN host-galaxies in the red-sequence and $\sim30\%$ of AGN host-galaxies in the green-valley move into the region populated by dusty star-forming galaxies in the color-color diagram. This is to be compared with the $20\%$ (red-sequence) and $75\%$ (green-valley) found by Cardamone and collaborators. The fractions of dust-obscured galaxies among the red-sequence and green-valley AGN in our sample, at $0.8\leq z\leq1.2$, are lower than those in the \citet{2010ApJ...721L..38C} sample.
However, the global fractions of AGN hosts tentatively associated with passive galaxies are very similar ($\sim50\%$) in the two samples. The fractions for both AGN host-galaxies and inactive galaxies are reported in Table~\ref{tbl-2}. \begin{figure} \includegraphics[width=8cm]{17175fig20} \caption{Inverse of the SSFR as a function of the stellar mass of the AGN host-galaxies in three different redshift bins for the zCOSMOS galaxies and for the Type-2 AGN sample in the red-sequence (red crosses), in the green-valley (green triangles) and in the blue-cloud (blue open circles). The horizontal lines mark the age of the Universe at the two redshift boundaries of the chosen intervals.} \label{ssfrmstar} \end{figure} \begin{figure} \includegraphics[width=8cm]{17175fig21} \caption{Distribution of the stellar masses as a function of the rest-frame $(U-V)$ colors in the redshift range $0.8\leq z\leq 1.2$. The red dashed line represents the red sequence cut defined by \citet{2006A&A...453..869B}, while the green short dashed line defines an approximate green valley region; both lines are calculated at redshift $\sim 1$. The points are color-coded as in Fig.~\ref{contour_morp_uv_zest+}. } \label{mstaruvredshiftsel} \end{figure} \begin{figure} \includegraphics[width=8cm]{17175fig22} \caption{Distribution of Type-2 AGN hosts in the rest-frame $(U-V)$ against the rest-frame $(V-J)$ color. Color-coded as in Fig.~\ref{ssfrmstar}. Sources with the same best-fit galaxy template and the same extinction lie in the same position in the color-color diagram. Point size is keyed to the number of objects. } \label{uvvj} \end{figure} \begin{table*}[t] \caption{AGN host and galaxy properties.
\label{tbl-2}} \centering \begin{tabular}{l c| cl|cl|c} Sample & N & Red-sequence & & Green-valley & & Blue-cloud \\ [0.5ex] \hline\hline\noalign{\smallskip} \multicolumn{7}{c}{$0.045\leq z \leq 3.452$} \\ \hline\noalign{\smallskip} && &96 (89\%) P && 52 (82\%) P \\[-1ex] \raisebox{1.5ex}{Type-2 AGN} & \raisebox{1.5ex}{255}& \raisebox{1.5ex}{108 (42\%)} & 12 (11\%) D & \raisebox{1.5ex}{63 (25\%)} & 11 (18\%) D & \raisebox{1.5ex}{84 (33\%)} \\[1ex] && & 2306 (91\%) P && 1596 (83\%) P \\[-1ex] \raisebox{1.5ex}{Galaxies} & \raisebox{1.5ex}{8742}& \raisebox{1.5ex}{2535 (29\%)} & 229 (9\%) D & \raisebox{1.5ex}{1923 (22\%)} & 327 (17\%) D & \raisebox{1.5ex}{4284 (49\%)} \\[1ex] \hline\hline\noalign{\smallskip} \multicolumn{7}{c}{$0.8\leq z \leq 1.2$} \\ \hline\noalign{\smallskip} && &32 (91\%) P && 16 (70\%) P \\[-1ex] \raisebox{1.5ex}{Type-2 AGN} & \raisebox{1.5ex}{92}& \raisebox{1.5ex}{35 (38\%)} & 3 (9\%) D & \raisebox{1.5ex}{23 (25\%)} & 6 (30\%) D & \raisebox{1.5ex}{34 (37\%)} \\[1ex] && &569 (97\%) P && 269 (70\%) P \\[-1ex] \raisebox{1.5ex}{Galaxies} & \raisebox{1.5ex}{1836}& \raisebox{1.5ex}{587 (32\%)} & 18 (3\%) D & \raisebox{1.5ex}{385 (21\%)} & 116 (30\%) D & \raisebox{1.5ex}{864 (47\%)} \\[1ex] \hline\hline \end{tabular} \flushleft\begin{list}{}{Note -- P=Passive, D=Dusty.} \item \end{list} \end{table*} \section{Summary and Conclusions} \label{Summary and Conclusions} A detailed analysis of the SEDs of 255 spectroscopically identified hard X--ray selected Type-2 AGN from the XMM-COSMOS survey is presented. In obscured AGN, the optical-UV nuclear luminosity is intercepted along the line of sight by the dusty torus and reprocessed in the infrared, so what we see in the optical-UV is mostly the light from the host-galaxy. On the one hand, this allows us to study the galaxy properties; on the other hand, it makes it difficult to estimate the nuclear bolometric power.
An SED-fitting code has been developed with the main purpose of disentangling the various contributions (starburst, AGN, host-galaxy emission) in the observed SEDs using a standard $\chi^2$ minimization procedure (the starburst component is only used in the case of detection at 70$\mu$m). The code is based on a large set of starburst templates from \citet{2001ApJ...556..562C} and \citet{2001ApJ...549..215D}, and galaxy templates from the \citet{2003MNRAS.344.1000B} code for spectral synthesis models, while AGN templates are taken from \citet{2004MNRAS.355..973S}. These templates represent a wide range of SED shapes and luminosities and are widely used in the literature. The total (nuclear) AGN bolometric luminosities are then estimated by adding the X--ray luminosities integrated over the 0.5-100 keV energy range to the infrared luminosity between 1 and 1000$\mu$m. The total X--ray luminosity is computed by integrating the X--ray SED using the de-absorbed soft and hard X--ray luminosities. The SED is extrapolated to higher energies using the observed X--ray slope, and introducing an exponential cut-off at 200 keV. The total infrared luminosity is evaluated by integrating the infrared AGN best-fit and then converted into the nuclear accretion disk luminosity by applying the appropriate correction factors to account for the geometry and the anisotropy of the torus emission. The reprocessed IR emission is considered to be a good proxy of the intrinsic disk emission and this is supported by previous investigations (\citealt{2007A&A...468..603P}; Gandhi et al. 2009; \citealt{2010MNRAS.402.1081V}). In the distribution of the ratio $r={\rm Log}\left(L_{12.3~\mu m,{\rm obs}}/L_{12.3~\mu m,{\rm predicted}}\right)$ (see Eq.~\ref{gandhieq}) the majority of the objects are within $2\sigma$ of the $r$ distribution.
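The X--ray part of the bolometric-luminosity assembly can be sketched numerically. Only the 200 keV cutoff and the band edges below come from the text; the photon index, normalization and integration grid are illustrative assumptions. The resulting bolometric correction $k_{\rm bol}=L_{\rm bol}/L_{[2-10]{\rm keV}}$ can be checked directly against the entries of Table \ref{tbl_ch4-2}.

```python
import math

def band_luminosity(gamma, e_lo, e_hi, e_cut=200.0, n=4000):
    """Integrate an E^(1-gamma) * exp(-E/e_cut) energy spectrum (arbitrary
    normalization) between e_lo and e_hi keV with the trapezoidal rule.
    The exponential cutoff at 200 keV follows the text; the photon index
    gamma is a free parameter."""
    f = lambda e: e ** (1.0 - gamma) * math.exp(-e / e_cut)
    step = (e_hi - e_lo) / n
    return sum(0.5 * (f(e_lo + i * step) + f(e_lo + (i + 1) * step)) * step
               for i in range(n))

def xray_conversion(gamma=1.9):
    """Ratio L(0.5-100 keV) / L(2-10 keV) for an assumed photon index."""
    return band_luminosity(gamma, 0.5, 100.0) / band_luminosity(gamma, 2.0, 10.0)

def k_bol(log_lbol, log_lx_2_10):
    """Bolometric correction k_bol = L_bol / L_[2-10 keV]."""
    return 10.0 ** (log_lbol - log_lx_2_10)

# Consistency check against Table 2: XID 67 has Log L_X = 42.79 and
# Log L_bol = 43.80, with a tabulated k_bol of 10.24.
```

For a typical intrinsic photon index the 0.5-100 keV band carries a few times the 2-10 keV luminosity, which is why the extrapolation step matters for $L_{\rm bol}$.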
The tail outside $2\sigma$ and extending to high $r$ includes 73 sources (with $r\gtrsim0.5$) for which the predicted mid-infrared luminosity is significantly lower than observed. We call ``low-$r$'' AGN all sources within $2\sigma$ of the $r$ distribution, while the ``high-$r$'' AGN sample is represented by the sources deviating by more than $2\sigma$. \par Our main results are the following: \begin{enumerate} \item The average observed SED is characterized by a flat X--ray slope, $\langle\Gamma\rangle=1.12$, as expected for obscured AGN (not corrected for absorption), while in the optical-UV the observed light appears to be consistent with the host-galaxy emission. The average SED in the mid-infrared is more likely a combination of dust emission from star-forming regions and AGN emission reprocessed by the dust. \item The full sample is split into four bins of different X--ray and infrared luminosities and redshift. The shapes of the average SEDs in the optical bands are approximately the same in all luminosity and redshift bins. There is a stronger host-galaxy contribution in the lower luminosity/redshift bins, where the average SEDs have a typical galaxy shape. Moreover, there is a trend between X--ray and mid-infrared luminosity: the contribution of the AGN in the infrared (around $8-15\mu$m) is higher at higher X--ray luminosities. \item Type-2 AGN appear to have smaller bolometric corrections than Type-1 AGN. At the same hard X--ray luminosity, $43.30\leq\LogL_{[2-10]{\rm keV}} \leq44.30$, where both samples are well represented, we find that the median bolometric correction for Type-2 AGN (134 objects) is $\langle k_{\rm bol}\rangle\sim13\pm1$, to be compared with a median bolometric correction $\langle k_{\rm bol}\rangle\sim23\pm1$ for Type-1 AGN (167 objects). The two medians are statistically different at the $\sim7\sigma$ level. \item A clear separation in bolometric corrections for the low-$r$ and the high-$r$ samples is found.
The relation provided by Gandhi and collaborators is valid for the majority of objects, while for 30\% of the sample the SED-fitting procedure may underestimate the non-nuclear contribution. At a given hard X--ray luminosity ($43\leq\LogL_{[2-10]{\rm keV}} \leq44$) the low-$r$ sample has a median bolometric correction of $\langle k_{\rm bol}\rangle\sim11\pm1$ (110 objects), to be compared with a median bolometric correction for the high-$r$ sample of $\langle k_{\rm bol}\rangle\sim26\pm3$ (44 objects). The two median values for $k_{\rm bol}$ are statistically different at the $\sim5\sigma$ level. \item Host-galaxy morphologies and stellar masses indicate that Type-2 AGN are preferentially hosted in galaxies which have a bulge, irrespective of the strength of the bulge or of whether the galaxy is on the red sequence or in the blue cloud, and with stellar masses greater than $10^{10}M_\odot$. \item Almost all the sources in the red-sequence have $SSFR^{-1}$ larger than the age of the Universe at their redshift, which is consistent with passive galaxies. Following the same approach as in Cardamone and collaborators (i.e., combining the rest-frame $(U-V)$ vs ${\rm Log}~ M_\ast$ and the rest-frame $(U-V)$ vs $(V-J)$ color diagrams), we find that, consistent with their results, $\sim50\%$ of AGN hosts lie in the passive region of this diagram. In contrast to Cardamone et al. (2010), only $\sim30\%$ of the AGN host-galaxies in the green-valley in our sample are consistent with dust-obscured sources at $0.8\leq z\leq 1.2$. \end{enumerate} It is clear that the mid- and far-infrared parts of the SED are under-sampled with respect to the optical part. The ongoing Herschel survey over various fields at different depths ($100\mu m$ and $160\mu m$ in the COSMOS field) and the upcoming ALMA surveys will allow us to achieve optimal multiwavelength coverage also in the far-infrared. \begin{acknowledgements} We gratefully thank B. Simmons for her useful comments and suggestions.
In Italy, the XMM-COSMOS project is supported by ASI-INAF grants I/009/10/0, I/088/06 and ASI/COFIS/WP3110 I/026/07/0. Elisabetta Lusso gratefully acknowledges financial support from the Marco Polo program, University of Bologna. In Germany the XMM-\textit{Newton} project is supported by the Bundesministerium f\"{u}r Wirtschaft und Technologie/Deutsches Zentrum f\"{u}r Luft- und Raumfahrt and the Max Planck Society. Support for the work of E.T. was provided by NASA through Chandra Postdoctoral Fellowship Award grant number PF8-90055, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. The entire COSMOS collaboration is gratefully acknowledged. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} String/M theory has led to a rich web of non-perturbative dualities between supersymmetric field theories. Checking/exploiting/extending these dualities requires exact computations in field theories. In recent years, using methods based on localization, several exact quantities in supersymmetric gauge theories have been computed. Two such quantities, the superconformal index of $4d$ gauge theories \cite{Kinney:2005ej,Romelsberger:2005eg} and the partition function of supersymmetric gauge theories on $S^3$ \cite{Kapustin:2009kz,Kapustin:2010xq}, are the main focus of this note. The superconformal index of ${\cal N}=1$ IR fixed points was computed in \cite{Dolan:2008qi, Spiridonov:2008zr, Spiridonov:2009za}, where it served as a check of Seiberg duality. The indices of ${\cal N}=4$ SYM and type IIB supergravity in $AdS_5$ were computed and matched in \cite{Kinney:2005ej}. The superconformal index of ${\cal N}=2$ supersymmetric gauge theories was used to check ${\cal N}=2$ S-dualities conjectured by Gaiotto and to define a $2d$ topological field theory in the process \cite{Gadde:2009kb,Gadde:2010te}. Recently the partition function of supersymmetric gauge theories on $S^3$ has been used to check a variety of $3d$ dualities including mirror symmetry \cite{Kapustin:2010xq} and Seiberg-like dualities \cite{Kapustin:2010mh}. Remarkably, the exact partition function has also allowed for a direct field theory computation of the $N^{3/2}$ degrees of freedom of ABJM theory \cite{Drukker:2010nc,Herzog:2010hf}. The $S^3$ partition function of ${\cal N}=2$ theories is extremized by the exact superconformal R-symmetry \cite{Jafferis:2010un,Martelli:2011qj,Jafferis:2011zi}, so, just like $a$-maximization in $4d$, the $3d$ partition function can be used to determine the exact R-charges at interacting fixed points. The purpose of this note is to relate these two interesting and useful exactly calculable quantities in $3$ and $4$ dimensions.
The superconformal index of a $4d$ gauge theory can be computed as a path integral on $S^3 \times S^1$ with supersymmetric boundary conditions along $S^1$. All the modes on the $S^1$ contribute to this path integral. In the limit in which the radius of the circle shrinks to zero, the higher modes become very heavy and decouple. The index is then given by a path integral over just the constant modes on the circle. In other words, the superconformal index of the $4d$ theory reduces to a partition function of the dimensionally reduced $3d$ gauge theory on $S^3$. The $3d$ theory preserves all the supersymmetries of the ``parent'' $4d$ theory on $S^3 \times S^1$. More generally, for any $d$-dimensional manifold $M^d$, one would expect the index of a supersymmetric theory on $M^d\times S^1$ to reduce to the exact partition function of the dimensionally reduced theory on $M^d$. This idea was applied by Nekrasov to obtain the partition function of $4d$ gauge theory on the $\Omega$-deformed background as a limit of the index of a $5d$ gauge theory~\cite{Nekrasov:2002qd}. A crucial property of the four-dimensional index that facilitates its computation is the fact that it can be computed exactly by a saddle point integral. We show that in the limit of vanishing circle radius, this matrix integral reduces to the one that computes the partition function of $3d$ gauge theories on $S^3$~\cite{Kapustin:2009kz,Kapustin:2010xq}. This is not surprising, since the path integral of the ${\cal N}=2$ supersymmetric gauge theory on $S^3$ was also shown to localize on saddle points of the action. The note is organized as follows. In section \ref{pathint} we write the superconformal index of the $4d$ theory as a saddle point integral and describe the limit in which this integral reduces to the $S^3$ partition function. The limit is performed in section \ref{mmlimit}.
In particular, we show that the building blocks of the matrix model that computes the superconformal index in $4d$ map separately to the building blocks of the $3d$ partition function matrix model. In section \ref{dualities}, we comment on the connections between $4d$ and $3d$ dualities. We conclude with an appendix that generalizes the Kapustin et al. matrix model for ${\cal N}=4$ gauge theories with two supersymmetric deformations. One such deformation, involving a squashed $S^3$, was studied in \cite{Hama:2011ea}. \ \noindent{\bf Note added}: While this note was in preparation we received \cite{Dolan:2011rp}, in which the authors find a mathematical relation between the $4d$ index and the $S^3$ partition function which is equivalent to the relation that we derive here from physical considerations. \section{\label{pathint}$4d$ Index as a path integral on $S^3\times S^1$} The superconformal index is a Witten index with respect to one of the supercharges. For concreteness, let us restrict ourselves to the supercharge \footnote{The supercharges of $ \mathcal{N}=2$ gauge theory are denoted as ${\cal Q}^I_\alpha$ and $\bar{\cal Q}_{I\dot{\alpha}}$, where $I=1,2$ is an $SU(2)_R$ index and $\alpha=\pm,\,\dot{\alpha}=\dot{\pm}$ are Lorentz indices.} ${\cal Q}\equiv\bar{\cal Q}_{2+}$ $\in$ ${\cal N}=2$ superconformal algebra, although the index can be defined more generally. In radial quantization the superconformal index is defined as \begin{eqnarray} {\cal I}= {\rm Tr}_{\cal H} (-1)^F t^{2(E+j_2)} y^{2j_1} v^{-(r+R)}\,. \end{eqnarray} The fugacities $t,y$ and $v$ couple to all possible $SU(2,2|2)$ charges that commute with ${\cal Q}$. $E$ is the conformal dimension, $(j_1,j_2)$ are the $SU(2)_1\otimes SU(2)_2$ Lorentz spins, and $(R,r)$ are the charges of the $SU(2)_R\times U(1)_r$ R-symmetry. The superconformal index doesn't depend on the couplings of the theory and hence it can be calculated in the weak coupling limit.
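Schematically, only states annihilated by ${\cal Q}$ contribute to the trace: non-BPS states come in boson-fermion pairs carrying equal charges and cancel against each other. A toy illustration with a hypothetical three-state spectrum and a single fugacity $t$ (setting $y=v=1$; this is purely illustrative, not the gauge-theory computation):

```python
def superconformal_index(states, t):
    """Toy trace sum((-1)^F * t^(2(E + j2))) over a list of states
    (E, j2, fermion_number), illustrating the (-1)^F cancellation."""
    return sum((-1) ** f * t ** (2.0 * (E + j2)) for (E, j2, f) in states)

# One Q-closed (BPS) boson plus a Q-paired boson/fermion doublet with
# identical charges: the pair cancels and only the BPS state survives,
# so the result is t^2 for any value of t.
toy_states = [(1.0, 0.0, 0), (2.5, 0.5, 0), (2.5, 0.5, 1)]
```

The cancellation is exact state by state, which is why the index is independent of continuous couplings.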
The entire contribution to the supersymmetric partition function on $S^3\times S^1$ thus comes from the saddle point approximation. The one-loop partition function of a $4d$ gauge theory on $S^3\times S^1$ was computed in~\cite{Aharony:2003sx} in the presence of fugacities associated with various conserved charges. To compute the superconformal index, we only allow fugacities for charges which commute with ${\cal Q}$, i.e. $t,\,y$ and $v$. For the one-loop computation in $SU(N)$ gauge theory, it is convenient to use the Coulomb gauge $\partial_i A^i=0$, where $i,j,k$ are $S^3$ coordinates and $\partial_i$ are covariant derivatives. The residual gauge freedom is fixed by imposing $\partial_0 \alpha=0$, where $\alpha=\frac{1}{V}\int_{S^3} A_0$ and $V$ is the volume of $S^3$. The partition function is then written as \begin{eqnarray} Z=\int d\alpha \Delta_2 \int {\cal D}A \Delta_1 e^{-S(A,\alpha)}\,, \end{eqnarray} where $\Delta_1$ and $\Delta_2$ are the Faddeev-Popov determinants associated with the first and second gauge fixing conditions respectively. For a charge $s$ that commutes with ${\cal Q}$, we can add a supersymmetric coupling with a constant background gauge field as \begin{eqnarray} S\to S+\int d^4 x\, s^\mu \chi_\mu, \end{eqnarray} where $s^\mu$ is the associated conserved current. $\chi_\mu$ is taken to be $(\chi,0,0,0)$ and $\chi$ is identified with the chemical potential for the charge $s$. The chemical potential is related to the fugacity, say $x$, of the Hamiltonian formalism as $x=e^{-\beta \chi}$. In our case, $x$ can be any of $t$, $y$ and $v$. After performing $\int {\cal D}A$, one gets an $SU(N)$ unitary matrix model \begin{eqnarray} Z=\int [dU] e^{-S_{eff}[U]}\,, \end{eqnarray} where $U=e^{i\beta \alpha}$, $\beta$ is the circumference of the circle, and $[dU]$ is the invariant Haar measure on the group $SU(N)$.
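For the simplest case, $SU(2)$, the Haar integral $\int[dU]$ of a class function reduces via the Weyl integration formula to a one-dimensional integral over the eigenvalue angle with measure $(2/\pi)\sin^2\theta\,d\theta$. Character orthonormality, the property that makes such unitary matrix models tractable, can be checked numerically (a standalone sketch, not tied to the specific $SU(N)$ model above):

```python
import math

def su2_haar_average(f, n=20000):
    """Average of a class function f(theta) over SU(2): the Weyl
    integration formula gives [dU] -> (2/pi) sin^2(theta) d(theta)
    for 0 < theta < pi (midpoint rule)."""
    step = math.pi / n
    return sum((2.0 / math.pi) * math.sin(th) ** 2 * f(th) * step
               for th in ((i + 0.5) * step for i in range(n)))

chi_fund = lambda th: 2.0 * math.cos(th)               # spin-1/2 character
chi_adj = lambda th: 1.0 + 2.0 * math.cos(2.0 * th)    # spin-1 character
```

The measure is normalized (average of 1 is 1), each character has unit norm, and characters of distinct irreducible representations are orthogonal.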
We can write $S_{eff}$ concisely as follows \begin{eqnarray}\label{indexmodel} S_{eff}[U]= \sum_{m=1}^\infty \frac{1}{m} \sum_j i_{{\cal R}_j}(t^m,y^m,v^m)\chi_{{\cal R}_j} (U^m, V^m)\,. \end{eqnarray} Here, $V$ denotes the chemical potential that couples to the Cartan of the flavor group; ${\cal R}_j$ labels the representation of the fields under the gauge and flavor groups and $i_{{\cal R}_j}$ is the single letter index of the fields in representation ${\cal R}_j$. The circumference $\beta$ of the circle is related to the fugacity $t$ as $t=e^{-\beta/3}$. To produce the partition function of the dimensionally reduced gauge theory on $S^3$~\cite{Kapustin:2009kz,Kapustin:2010xq} we also scale $v=e^{-\beta/3}$, $y=1$, and take the limit $\beta\to 0$. In appendix \ref{app} we restore the additional deformations by defining $v=e^{-\beta(1/3+u)}$ and setting $y=e^{-\beta\eta}$, where $u$ and $\eta$ are the chemical potentials for the fugacities $v$ and $y$ respectively. The partition function of $3d$ gauge theories on the squashed $S^3$ was computed in~\cite{Hama:2011ea}; the $\eta$ deformation is related to the squashing parameter of $S^3$. \section{\label{mmlimit}$4d$ Index to $3d$ Partition function on $S^{3}$} A matrix model for computing the partition function of $3d$ gauge theories on $S^3$ ($S^3$ matrix model) was obtained in \cite{Kapustin:2009kz,Kapustin:2010xq}. In this section, we will derive this matrix model as a $\beta\to0$ limit of the matrix model that computes the superconformal index (\ref{indexmodel}) (index matrix model) of the $4d$ gauge theories. Both matrix models involve integrals over gauge group parameters, and their integrands contain one-loop contributions from vector- and hyper-multiplets. We will show that the gauge group integral together with the contribution from the vector multiplet maps nicely from the index model to the $S^3$ model. The contributions of the hypermultiplets match up separately.
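The sum over $m$ in Eq.~(\ref{indexmodel}) has the structure of a plethystic exponential: exponentiating the single-letter index generates the full multi-letter counting. For the simplest case of a single bosonic letter with index $i(t)=t$ (and trivial gauge charge), $\exp\sum_{m\geq1} t^m/m = 1/(1-t)$, which a truncated sum confirms numerically (the function names below are illustrative):

```python
import math

def plethystic_exp(single_letter, t, m_max=200):
    """Multi-letter partition function exp(sum_{m>=1} i(t^m)/m),
    truncated at m_max; the sum converges for |t| < 1."""
    return math.exp(sum(single_letter(t ** m) / m
                        for m in range(1, m_max + 1)))

# A single bosonic letter, i(t) = t, resums to the geometric-series
# generating function 1/(1 - t), counting all symmetrized words.
```

The same exponentiation underlies both matrix models; the difference between them lies entirely in the single-letter indices and the measure.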
We also show that the superconformal index is the $q$-deformation of the $S^3$ partition function of the daughter theory. \subsection{Building blocks of the matrix models} For concreteness, let us consider $4d$ $ \mathcal{N}=2$ $SU(N)$ gauge theory. It is constructed using two basic building blocks: hyper-multiplets and vector multiplets. \subsubsection*{Hyper-multiplet} As was first observed in \cite{Dolan:2008qi}, the index of the hypermultiplet can be written elegantly in terms of a special function \cite{Gadde:2009kb} \begin{equation} {\cal I}^{hyp}=\prod_{i}\Gamma\left(\frac{t^{2}}{\sqrt{v}}a_i;t^{3}y,t^{3}y^{-1}\right), \end{equation} where $\Gamma$ is the elliptic gamma function \cite{Spiridonov5} defined to be \begin{equation} \Gamma(z;r,s)=\prod_{j,k\geq0}\frac{1-z^{-1}r^{j+1}s^{k+1}}{1-zr^{j}s^{k}}\,, \end{equation} and $a_i$ are eigenvalues of the maximal torus of the gauge/flavor group satisfying $\prod_{i=1}^N a_i=1$. In this section, for the sake of simplicity, we set $v=t$ and $y=1$ and will discuss the general assignment of chemical potentials in appendix~\ref{app}. We choose a convenient variable $q\equiv e^{-\beta}$ to parametrize the chemical potentials of the Cartan of the flavor group as $a_i=q^{-i\alpha_i}$, and the chemical potential $t$ as $t=q^{\frac{1}{3}}$. The index of the hyper-multiplet then becomes \begin{eqnarray} {\cal I}^{hyp} = \prod_{i}\prod_{j,k\geq0}\frac{1-q^{-\frac{1}{2}+i\alpha_i}q^{j+1}q^{k+1}}{1-q^{\frac{1}{2}-i\alpha_i}q^{j}q^{k}} = \prod_{i}\prod_{n\geq1}\left(\frac{[n+\frac{1}{2}+i\alpha_i]_{q}}{[n-\frac{1}{2}-i\alpha_i]_{q}}\right)^{n}\,, \end{eqnarray} where $[n]_{q}\equiv\frac{1-q^{n}}{1-q}$ is the \textit{$q$-number}. It has the property $[n]_{q}\stackrel{q\to1}{\longrightarrow}n$. So far we have fixed the chemical potentials $v$ and $y$ that couple to $-(R+r)$ and $j_{1}$ respectively. 
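The $q$-number is elementary to evaluate, and its defining limit $[z]_q\to z$ as $q\to1$ (i.e. $\beta\to 0$) can be checked directly. The short Python sketch below is our own illustration; it also handles the shifted complex arguments $n\pm\frac{1}{2}\pm i\alpha_i$ appearing above:

```python
import math

def q_number(z, q):
    """The q-number [z]_q = (1 - q**z)/(1 - q); works for complex z as well."""
    return (1.0 - q ** z) / (1.0 - q)

# defining property: [z]_q -> z as q -> 1 (here q = e^{-beta} with beta -> 0)
for beta in (0.1, 0.01, 0.001):
    q = math.exp(-beta)
    print(beta, q_number(5, q), q_number(0.5 + 0.25j, q))
```

For finite $\beta$ the deviation from the classical value is of order $\beta$, which is why every building block below degenerates smoothly to its $3d$ counterpart.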
To recover the $3d$ partition function on $S^{3}$ we should take the radius of $S^{1}$ to be very small, which corresponds to the limit $q\to1$: \begin{eqnarray} {\cal I}^{hyp} = \prod_{i}\prod_{n\geq1}\left(\frac{n+\frac{1}{2}+i\alpha_i}{n-\frac{1}{2}-i\alpha_i}\right)^{n} = \prod_{i}(\cosh\pi\alpha_i)^{-\frac{1}{2}}\,. \end{eqnarray} One can find a proof of the second equality in~\cite{Kapustin:2009kz}. From the limiting procedure, it is clear that the superconformal index of the hypermultiplet is the $q$-deformation of the $3d$ hypermultiplet partition function. \subsubsection*{Vector multiplet} The index of an ${\mathcal N}=2$ vector multiplet is given by \begin{equation} {\cal I}^{vector}=\prod_{i< j}\frac{1}{(1-a_i/a_j)(1-a_j/a_i)}\frac{\Gamma(t^{2}v(a_i/a_j)^{\pm};t^{3}y,t^{3}y^{-1})}{\Gamma((a_i/a_j)^\pm;t^{3}y,t^{3}y^{-1})}\,. \end{equation} Here we have dropped an overall $a_i$-independent factor and used the condensed notation $\Gamma(z^{\pm1};r,s)=\Gamma(z^{-1};r,s)\Gamma(z;r,s)$. With the same change of variables as above we get \begin{eqnarray} {\cal I}^{vector} & = & \prod_{i<j}\frac{1}{1-q^{i(\alpha_i-\alpha_j)}}\frac{1}{1-q^{-i(\alpha_i-\alpha_j)}}\frac{1}{\Gamma(q^{\pm i(\alpha_i-\alpha_j)};q,q)}\nonumber\\ & = & \prod_{i<j}\frac{1}{1-q^{i(\alpha_i-\alpha_j)}}\frac{1}{1-q^{-i(\alpha_i-\alpha_j)}}\prod_{n\geq1}\left(\frac{1-q^{n+i(\alpha_i-\alpha_j)+1}}{1-q^{n-i(\alpha_i-\alpha_j)-1}}\frac{1-q^{n-i(\alpha_i-\alpha_j)+1}}{1-q^{n+i(\alpha_i-\alpha_j)-1}}\right)^{-n}\\ & \stackrel{reg}{=} & \prod_{i<j}\prod_{n\geq1}\left(\frac{[n-i(\alpha_i-\alpha_j)]_{q}}{[n]_{q}}\frac{[n+i(\alpha_i-\alpha_j)]_{q}}{[n]_{q}}\right)^{2}\,.\nonumber \end{eqnarray} The last line involves regulating the infinite product in a way that does not depend on $\alpha$. In the limit $q\to1$, i.e.
the radius of the circle goes to zero, we get\begin{equation} {\cal I}^{vector}=\prod_{i<j}\prod_{n\geq1}\left(1+\frac{(\alpha_i-\alpha_j)^{2}}{n^{2}}\right)^{2}= \prod_{i<j}\left(\frac{\sinh\pi(\alpha_i-\alpha_j)}{\pi(\alpha_i-\alpha_j)}\right)^{2}\,.\end{equation} The last equality is again explained in~\cite{Kapustin:2009kz}. Again, we see that the index of the vector multiplet is the $q$-deformation of the $3d$ vector partition function. The most general expression for the one-loop contribution of the vector multiplet with $u$ and $\eta$ turned on is obtained in appendix \ref{app}. \subsubsection*{Gauge group integral} The gauge group integral in the $4d$ index matrix model is done with the invariant Haar measure \begin{eqnarray} [dU]=\prod_i d\alpha_i \prod_{i<j} \sin^2\(\frac{\beta(\alpha_i-\alpha_j)}{2}\)\,\,\,\stackrel{\beta\to0}{\longrightarrow}\,\,\,\prod_i d\alpha_i \prod_{i<j}\(\frac{\beta(\alpha_i-\alpha_j)}{2}\)^2. \end{eqnarray} After appropriate regularization, the measure factor precisely cancels the weight factor in the denominator of the vector multiplet one-loop determinant. The unitary gauge group integral in the index matrix model can be done as a contour integral over the $a$ variables parametrizing the Cartan subgroup, i.e. $a\in\mathbb{T}$. After the change of variables $a=q^{-i\alpha}=e^{i\beta\alpha}$ the contour integral around the unit circle becomes a line integral, \begin{eqnarray} \oint_{\mathbb{T}}\frac{da}{a}\dots=\int^{\pi/\beta}_{-\pi/\beta}d\alpha\dots\,\,\,\stackrel{\beta\to0}{\longrightarrow}\,\,\,\oint_{\mathbb{T}}\frac{da}{a}\dots=\int_{-\infty}^{\infty}d\alpha\dots\,. \end{eqnarray} \section{\label{dualities}$4d\leftrightarrow3d$ dualities} \subsubsection*{S duality} Let us illustrate the reduction of a four dimensional index to a three dimensional partition function with a simple example. Consider ${\mathcal N}=2$ $SU(2)$ gauge theory with four hypermultiplets in four dimensions.
The index of this theory is given by the following expression (up to overall normalization constants) \begin{eqnarray} \oint \frac{dz}{z}\,\frac{\Gamma(t^{3/2}a^{\pm1}b^{\pm1}z^{\pm1};t^3,t^3) \;\Gamma(t^{3/2}c^{\pm1}d^{\pm1}z^{\pm1};t^3,t^3)} {\Gamma(z^{\pm2};t^3,t^3)}\,. \end{eqnarray} Here, $a,\,b,\,c$ and $d$ label the Cartans of the $SU(2)^4\subset SO(8)$ flavor group. The Gamma functions in the numerator come from the four hyper-multiplets; the Gamma functions in the denominator come from the ${\mathcal N}=2$ vector multiplet. By the results of the previous section, this expression for the index gives rise to the partition function of ${\mathcal N}=2$ $SU(2)$ gauge theory in three dimensions. Taking the limit $t\to 1$, we rewrite it as \begin{equation} {\cal Z}(\alpha,\beta,\gamma,\delta)=\int d\sigma\frac{\sinh^{2}2\pi\sigma}{\cosh\pi(\sigma\pm\alpha\pm\beta)\cosh\pi(\sigma\pm\gamma\pm\delta)}\,,\end{equation} where each $\cosh$ denotes a product of four factors with all sign combinations. The flavor (now mass) parameters $\alpha,\,\beta,\,\gamma$ and $\delta$ are related to the flavor parameters in $4d$ as before. The superconformal index of the ${\mathcal N}=2$ $SU(2)$ gauge theory with four hypermultiplets in four dimensions is expected to be invariant under the action of the S-duality group, which permutes the four hypermultiplets. The expression above can be explicitly shown to exhibit this property~\cite{Gadde:2009kb}. The four dimensional S-duality then implies that the three dimensional partition function is invariant under permuting $\alpha$, $\beta$, $\gamma$, and $\delta$. One can show (e.g. numerically or by an order-by-order expansion in $\alpha$) that this is indeed true. Note that this implies a new kind of Seiberg-like duality in three dimensions. This computation can be generalized to any of the theories recently discussed by Gaiotto~\cite{Gaiotto:2009we} in four dimensions.
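The permutation invariance of ${\cal Z}(\alpha,\beta,\gamma,\delta)$ is easy to probe numerically. The sketch below is our own check (parameter values and tolerances are arbitrary): it evaluates the $\sigma$ integral with a trapezoidal rule on a truncated interval, which is accurate here because the integrand decays like $e^{-4\pi|\sigma|}$, and compares the original ordering with the nontrivial exchange $\beta\leftrightarrow\gamma$ that mixes the two pairs of hypermultiplets:

```python
import math

def cosh4(s, x, y):
    # product of cosh(pi(s + sx*x + sy*y)) over all four sign choices (sx, sy)
    p = 1.0
    for sx in (1.0, -1.0):
        for sy in (1.0, -1.0):
            p *= math.cosh(math.pi * (s + sx * x + sy * y))
    return p

def z_su2_nf4(a, b, c, d, L=6.0, n=24000):
    # trapezoidal rule for int dsigma sinh^2(2 pi sigma) / [cosh4(sigma;a,b) cosh4(sigma;c,d)]
    h = 2.0 * L / n
    total = 0.0
    for i in range(n + 1):
        s = -L + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * h * math.sinh(2.0 * math.pi * s) ** 2 / (cosh4(s, a, b) * cosh4(s, c, d))
    return total

z_orig = z_su2_nf4(0.31, 0.17, 0.23, 0.11)  # (alpha, beta, gamma, delta)
z_swap = z_su2_nf4(0.31, 0.23, 0.17, 0.11)  # beta <-> gamma: an S-dual frame
print(abs(z_orig - z_swap) / z_orig)        # tiny if the permutation symmetry holds
```

In the $SO(8)$ mass variables the exchange $\beta\leftrightarrow\gamma$ is a triality rotation, so this is a genuinely nontrivial check rather than a manifest symmetry of the integrand.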
In particular, the index of these theories was claimed to possess a TQFT structure~\cite{Gadde:2009kb}, and this structure is inherited by the three dimensional partition functions after the dimensional reduction. The reasoning in four and in three dimensions is, however, different. In four dimensions one can associate a punctured Riemann surface to each of the superconformal theories, with the modular parameters of the surface related to the gauge coupling constants. The index does not depend on the coupling constants and thus is independent of the moduli, giving rise to a topological quantity associated to the Riemann surface. After dimensionally reducing to three dimensions the theories cease to be conformally invariant and flow to a fixed point in the IR. The statement is then that at the IR fixed point the information about the original coupling constants is ``washed away'' and theories originally associated to punctured Riemann surfaces of the same topology flow to equivalent fixed points in the IR. \subsubsection*{Mirror symmetry} In principle one can try to use relations special to field theories in three dimensions to gain information about the four dimensional theories. Let us comment on how this can come about. In three dimensions certain classes of theories are related by mirror symmetry. For example, in~\cite{Benini:2010uu} it is claimed that the mirror duals of the $T_{N}$~\cite{Gaiotto:2009we} theories have a Lagrangian description and are certain star-shaped quiver gauge theories.
Let us see if the partition function of $T_{2}$ (a free hyper-multiplet in the trifundamental of $SU(2)^{3}$) matches the partition function of its mirror dual: \begin{eqnarray} {\cal Z}_{T_{2}} & = & \frac{1}{\cosh\pi(\alpha\pm\beta\pm\gamma)},\\ {\cal Z}_{\tilde{T}_{2}} & = & \int d\sigma d\mu d\nu d\rho\frac{\sinh^{2}2\pi\sigma\, e^{2\pi i(\mu\alpha+\nu\beta+\rho\gamma)}}{\cosh\pi(\sigma\pm\mu)\cosh\pi(\sigma\pm\nu)\cosh\pi(\sigma\pm\rho)}.\nonumber \end{eqnarray} In ${\cal Z}_{T_{2}}$, the parameters $\alpha,\beta,\gamma$ appear as masses, while in ${\cal Z}_{\tilde{T}_{2}}$ they appear as FI terms. Let us compute ${\cal Z}_{\tilde{T}_{2}}$ by performing the integrations. First we work out \begin{equation*} \int d\mu\frac{e^{2\pi i\alpha\mu}}{\cosh\pi(\mu\pm\sigma)}=\frac{2\sin2\pi\alpha\sigma}{\sinh\pi\alpha\sinh2\pi\sigma}\,. \end{equation*} Then we find that \begin{equation} {\cal Z}_{\tilde{T}_{2}}=\int d\sigma\frac{8\sin2\pi\alpha\sigma\sin2\pi\beta\sigma\sin2\pi\gamma\sigma} {\sinh\pi\alpha\sinh\pi\beta\sinh\pi\gamma\sinh2\pi\sigma} =\frac{1}{\cosh\pi(\alpha/2\pm\beta/2\pm\gamma/2)}\,. \end{equation} ${\cal Z}_{\tilde{T}_{2}}$ is thus precisely ${\cal Z}_{T_{2}}$ if we rescale $\alpha$, $\beta$ and $\gamma$ in ${\cal Z}_{\tilde{T}_{2}}$ by a factor of $2$. This fact can in principle be used to investigate the index of strongly coupled SCFTs in four dimensions which do not have a Lagrangian description. One can dimensionally reduce these theories to three dimensions, consider their mirror duals and compute the corresponding $3d$ partition functions; finally, one can try to uplift this result to $4d$ and thus obtain the superconformal index of the original four dimensional theory. The feasibility of this approach is currently under investigation. \section*{Acknowledgements} We thank Leonardo Rastelli and Shlomo Razamat for very useful discussions and guidance. We also thank Chris Beem for useful conversations.
This work was supported in part by DOE grant DEFG-0292-ER40697 and by NSF grant PHY-0969739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\section{Introduction} Turbulence in compressible fluids is not well understood. One of the open questions is the existence of the so-called inertial range, where the kinetic energy cascades (usually from large to small scales) without any losses. In the incompressible approximation hydrodynamic (HD) turbulence typically exhibits such an inertial range provided that a large separation exists between the driving/energy-containing scales and the dissipation ones. These properties are well described by the K\'arm\'an-Howarth-Monin (KHM) equation \citep{kaho38,moya75} for statistically homogeneous turbulence. This equation represents a scale-dependent energy conservation and relates the driving/decay of kinetic energy, its cascade and its dissipation. The inertial range can be formally defined as the region where the driving/decay and dissipation are negligible, so that the dominant process is the cascade; in the infinite Reynolds number limit this leads to so-called exact (scaling) laws for isotropic media \citep{kolm41b,fris95}. In the case of compressible HD turbulence, the kinetic energy and the internal energy are coupled via the dissipation as well as through compressible (pressure dilatation) effects. In this case it is not clear if there can be an inertial range of the kinetic energy. One may consider the total (kinetic+internal) energy, which is strictly conserved, but it is unclear if there is a cascade of the total energy \cite[cf.,][]{eydr18}. \cite{gaba11} derived the KHM equation for the total (kinetic and internal) energy assuming that the internal energy is governed by the isothermal closure. This closure, however, partly decouples the internal and kinetic energies and does not conserve the total energy. It is unclear if all or only a part of the pressure dilatation effects are present in such a system. The cascade of the kinetic energy and pressure dilatation effects have not yet been studied in detail within the KHM approach.
On the other hand, the filtering/coarse-graining approach \citep{germ92,eyal09} has been applied to compressible turbulence \citep{alui11,alui13} to derive relations equivalent to the KHM equation. In particular, \cite{aluial12} show that the energy exchanges between the kinetic and internal energies appear (at least for some parameters) on large scales and that there may exist a range of scales where the kinetic energy cascades in a conservative way, forming an inertial range similar to that in the incompressible HD approximation. Here we reexamine the KHM equation for the cascade of kinetic energy in compressible HD following \cite{gaba11}, test it on results of numerical simulations, and compare these results with the coarse-graining approach. The paper is organized as follows: in section~\ref{simulation} we present an overview of the direct 3D HD simulation with the initial Mach number $M=1$. In section~\ref{cascade} we present the KHM equation for the kinetic energy for incompressible and compressible HD and we test these two versions of the KHM equation on the results of the simulation. In section~\ref{aluie} we compare these results with the coarse-graining approach assuming both incompressible and compressible approximations. Finally, in section~\ref{discussion} we discuss the results. \section{Numerical simulation} \label{simulation} Here we use a 3D pseudo-spectral compressible hydrodynamic code derived from the compressible MHD code \citep{verdal15} based on the P3DFFT library \citep{peku12} and FFTW3 \citep{frjo05}.
The code resolves the compressible Navier-Stokes equations for the fluid density $\rho$, velocity $\boldsymbol{u}$, and the pressure $p$: \begin{align} \frac{\partial \rho}{\partial t}+ \boldsymbol{\nabla} \cdot (\rho \boldsymbol{u}) &= 0, \label{density}\\ \frac{\partial (\rho\boldsymbol{u})}{\partial t}+ \boldsymbol{\nabla}\cdot (\rho \boldsymbol{u}\boldsymbol{u}) &=-\boldsymbol{\nabla}p +\boldsymbol{\nabla}\cdot\boldsymbol{\tau}, \label{velocity} \end{align} completed with an equation for the temperature $T=p/\rho$ \begin{align} \frac{\partial T}{\partial t}+ (\boldsymbol{u} \cdot \boldsymbol{\nabla}) T =& \alpha \Delta T + (\gamma-1) \left (-T \boldsymbol{\nabla}\cdot \boldsymbol{u} + \frac{1}{\rho}\boldsymbol{\nabla}\boldsymbol{u}:\boldsymbol{\tau}\right) \label{temperature} \end{align} where $\boldsymbol{\tau}$ is the viscous stress tensor ($\tau_{ij}=\mu\left(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x_{i}-2/3\delta_{ij}\partial u_{k}/\partial x_{k}\right)$; here the dynamic viscosity $\mu$ is assumed to be constant) and $\alpha$ is the thermal diffusivity (we set $\alpha=\mu$ and $\gamma=5/3$); the colon operator denotes the double contraction of second-order tensors, $\boldsymbol{\mathrm{A}}:\boldsymbol{\mathrm{B}}=\sum_{ij}A_{ij} B_{ij}$. The box size is $(2\pi)^3$ (with a grid of $1024^3$ points) and periodic boundary conditions are assumed. The simulation is initialized with isotropic, random-phase, solenoidal fluctuations ($\boldsymbol{\nabla}\cdot\boldsymbol{u}=0$) on large scales (with wave vectors $k=|\boldsymbol{k}|\leq 4$) having the rms Mach number $M=1$ and a $k^{-1}$ 1-D power spectrum profile. We set the (constant) dynamic viscosity $\mu=2.8\times 10^{-3}$. The evolution of the simulation is shown in Figure~\ref{evol}. In the simulation the total energy $E_t=E_k+E_i$ is well conserved.
Here $E_k=\langle \rho u^2 \rangle/2$ is the kinetic energy and $E_i=\langle \rho T \rangle/(\gamma-1)$ is the internal one (here $\langle \bullet \rangle$ denotes averaging over the simulation box). The top panel of Figure~\ref{evol} displays the evolution of the relative changes in these energies, $\Delta E_{k,i,t}=(E_{k,i,t}(t)-E_{k,i,t}(0))/E_t(0)$. The relative decrease of the total energy is negligible, $\Delta E_t(t=8)\sim -4\times 10^{-6}$. The middle panel of Figure~\ref{evol} shows the evolution of the rms of the vorticity $\boldsymbol{\omega}=\boldsymbol{\nabla}\times \boldsymbol{u}$. The vorticity reaches a maximum at $t\simeq 6.2$; this is a signature of a fully developed turbulent cascade. The bottom panel of Figure~\ref{evol} displays the evolution of the average Mach number $M$ (i.e., the ratio between the rms of the velocity and the mean sound speed). $M$ slowly decreases during the evolution due to the decay of the level of fluctuations as well as due to the turbulent heating that leads to an increasing sound speed. \begin{figure} \centerline{\includegraphics[width=10cm]{evolPHD13a}} \caption{Evolution of (top) the relative changes in the kinetic energy $\Delta E_k$ (solid line), the total energy $\Delta E_t$ (dotted line), and the internal energy $\Delta E_i$ (dashed), (middle) the vorticity $\omega$, and (bottom) the Mach number $M$ as functions of time. \label{evol} } \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{specPHD13a}} \caption{Power spectral density of $\boldsymbol{u}$ as a function of the wave vector $k$. The dotted line denotes a dependence $\propto k^{-5/3}$. \label{spec} } \end{figure} Figure~\ref{spec} shows the power spectral density (PSD) of the velocity fluctuation at time $t=6.3$, around the maximum activity of the vorticity, when turbulence is expected to be fully developed. The PSD does not exhibit a clear, Kolmogorov-like spectrum, thus suggesting that there is no inertial range in the simulation.
This is likely due to the small system size (small Reynolds number) and/or due to the compressible effects. In the following sections we will quantify these effects using the KHM and coarse-graining approaches. \section{KHM equation} \label{cascade} \subsection{Incompressible HD} We start from the incompressible Navier-Stokes equation \begin{align} \frac{\partial\boldsymbol{u}}{\partial t}+ \boldsymbol{\nabla}\cdot (\boldsymbol{u}\boldsymbol{u}) &=-\frac{\boldsymbol{\nabla}p}{\rho} + \nu \Delta \boldsymbol{u}, \label{ivelocity} \end{align} where $\boldsymbol{u}$ is the velocity field, $\rho$ the density, $p$ the pressure, and $\nu$ the kinematic viscosity. For statistically homogeneous decaying turbulence one can get from Equation~(\ref{ivelocity}) the following form of the KHM equation \citep{kaho38,moya75} in terms of structure functions of the increments of the velocity field $\delta\boldsymbol{u}=\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l})-\boldsymbol{u}(\boldsymbol{x})$ \begin{equation} \frac{\partial S^{(i)}}{\partial t}+ \boldsymbol{\nabla}_{\boldsymbol{l}}\cdot \boldsymbol{Y}^{(i)} = 2 \nu \mathrm{\Delta}_{\boldsymbol{l}} S^{(i)} - 4 \epsilon, \label{KHM} \end{equation} where $S^{(i)}=\langle|\delta\boldsymbol{u}|^{2}\rangle$, $\boldsymbol{Y}^{(i)}=\left\langle \delta\boldsymbol{u}|\delta\boldsymbol{u}|^{2}\right\rangle$ and $\langle \bullet \rangle$ denotes statistical/spatial averaging ($S^{(i)}$ and $\boldsymbol{Y}^{(i)}$ are functions of $\boldsymbol{l}$).
Equation~(\ref{KHM}) is simply related to the original form of the KHM equation that involves the cross-correlation $\left\langle\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l}) \cdot\boldsymbol{u}(\boldsymbol{x})\right\rangle$ \cite[cf.,][]{fris95} \begin{align} 2\frac{\partial}{\partial t}\left\langle\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l}) \cdot\boldsymbol{u}(\boldsymbol{x})\right\rangle -\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}^{(i)} =4\nu\Delta_{\boldsymbol{l}}\left\langle \boldsymbol{u}(\boldsymbol{x})\cdot\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l})\right\rangle \end{align} since $S^{(i)}=2\langle |\boldsymbol{u}|^2\rangle -2\langle\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l})\cdot\boldsymbol{u}(\boldsymbol{x})\rangle$ and $\partial \langle |\boldsymbol{u}|^2\rangle/\partial t=-2\epsilon$. Note that here the superscript $(i)$ denotes the incompressible approximation. Equation~(\ref{KHM}) represents a scale-dependent energy-like conservation and relates the decay of kinetic energy $\partial S^{(i)}/{\partial t}$, the (incompressible) dissipation rate (per unit mass) \begin{equation} \epsilon= \nu \langle \boldsymbol{\nabla} \boldsymbol{u} : \boldsymbol{\nabla}\boldsymbol{u} \rangle , \end{equation} the cascade rate $\boldsymbol{\nabla}_{\boldsymbol{l}} \cdot \boldsymbol{Y}^{(i)} $, and the dissipation term $\nu \mathrm{\Delta}_{\boldsymbol{l}} \langle |\delta\boldsymbol{u}|^{2}\rangle$. The inertial range can be formally defined as the region where the decay and dissipation terms are negligible so that \begin{equation} \boldsymbol{\nabla}_{\boldsymbol{l}} \cdot \boldsymbol{Y}^{(i)} = - 4 \epsilon. \label{inertial} \end{equation} For isotropic media, in the infinite Reynolds number limit, Equation (\ref{inertial}) leads to the exact (scaling) laws \citep{kolm41b,fris95}.
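The relation $S^{(i)}=2\langle |\boldsymbol{u}|^2\rangle-2\langle\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{l})\cdot\boldsymbol{u}(\boldsymbol{x})\rangle$ used above is an exact identity for any periodic record (homogeneity in the box-average sense). A minimal 1-D scalar sketch in Python (our illustration, not part of the paper's analysis):

```python
import random

random.seed(2)
N = 4096
u = [random.gauss(0.0, 1.0) for _ in range(N)]  # toy periodic 1-D "velocity" record

def struct2(u, lag):
    # S(l) = < (u(x+l) - u(x))^2 > with periodic boundary conditions
    n = len(u)
    return sum((u[(i + lag) % n] - u[i]) ** 2 for i in range(n)) / n

def corr(u, lag):
    # C(l) = < u(x+l) u(x) >
    n = len(u)
    return sum(u[(i + lag) % n] * u[i] for i in range(n)) / n

mean_sq = sum(v * v for v in u) / N
for lag in (1, 7, 100):
    print(lag, struct2(u, lag), 2.0 * mean_sq - 2.0 * corr(u, lag))  # the two columns agree
```

The identity follows from expanding the square and using the fact that the periodic shift preserves the mean square, which is exactly the homogeneity assumption entering the KHM derivation.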
Equation~(\ref{KHM}) is more general and may be directly tested in numerical simulations \cite[e.g.,][]{gotoal02}, since the large Reynolds numbers needed for the existence of the inertial range are computationally challenging \cite[cf.,][]{ishial09}. \subsection{Compressible HD} Here we consider the compressible Navier-Stokes equations, Equations~(\ref{density},\ref{velocity}), and investigate the structure function $S=\left\langle \delta\boldsymbol{u}\cdot\delta\left(\rho\boldsymbol{u}\right)\right\rangle$ assuming a statistically homogeneous system, following \cite{gaba11}. After some manipulations (see appendix~\ref{apKHM} for details) we get \begin{align} \frac{\partial S}{\partial t}+\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}+R =C_p-C_{\tau} +2\left\langle \delta p\delta\theta\right\rangle - 2 \left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}}\right\rangle, \label{KHMc} \end{align} where $\boldsymbol{Y}=\left\langle \delta\boldsymbol{u}\left[\delta\left(\rho\boldsymbol{u}\right)\cdot\delta\boldsymbol{u}\right]\right\rangle$ is a third-order structure function, $\theta=\boldsymbol{\nabla}\cdot \boldsymbol{u}$ is the dilatation, $\boldsymbol{\Sigma}=\boldsymbol{\nabla}\boldsymbol{u}$ is the strain tensor, and $R=\left\langle \delta\boldsymbol{u} \cdot \left( \theta^\prime \rho \boldsymbol{u} -\theta \rho^{\prime}\boldsymbol{u}^{\prime}\right)\right\rangle $.
Here $C_p$ and $C_{\tau}$ are `correction' terms to $\left\langle \delta p\delta\theta\right\rangle$ and $\left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}}\right\rangle$, respectively, \begin{align} C_p= \mathcal{C}\left[\boldsymbol{u},\boldsymbol{\nabla}p \right], \ \ \ C_{{\tau}}=\mathcal{C}\left[\boldsymbol{u},\boldsymbol{\nabla}\cdot\boldsymbol{\tau}\right], \label{correction} \end{align} where \begin{align} \mathcal{C}\left[\boldsymbol{a},\boldsymbol{b}\right]&=\left\langle \delta\boldsymbol{a}\cdot\delta\boldsymbol{b}-\delta\left(\rho\boldsymbol{a}\right)\cdot\delta\left(\frac{\boldsymbol{b}}{\rho}\right)\right\rangle =\left\langle \left(\frac{\rho^{\prime}}{\rho}-1\right)\boldsymbol{a}^{\prime}\cdot\boldsymbol{b}+\left(\frac{\rho}{\rho^{\prime}}-1\right)\boldsymbol{a}\cdot\boldsymbol{b}^{\prime}\right\rangle.\nonumber \end{align} The $C_p$ and $C_{\tau}$ terms depend on the level of density fluctuations in the system. The two terms, $S$ and $\boldsymbol{Y}$, are natural compressible generalizations of $S^{(i)}$ and $\boldsymbol{Y}^{(i)}$, respectively. The $R$ term represents an additional compressible energy-transfer channel \cite[cf.,][]{gaba11}; we do not see an obvious way to cast this term in a divergence form similar to $\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}$. The term $\left\langle \delta p\delta\theta\right\rangle$ is a structure-function formulation of the pressure dilatation effect $p \theta$. The viscous term $\left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\Sigma}\right\rangle$ corresponds to a combination of the two dissipation terms in the incompressible case, $2\epsilon - \nu \Delta S^{(i)}$ in Equation~(\ref{KHM}).
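Before averaging, the second equality in the definition of $\mathcal{C}[\boldsymbol{a},\boldsymbol{b}]$ is a pointwise algebraic identity, which is easy to check numerically with scalar stand-ins for the fields (a sketch of our own, not from \cite{gaba11}):

```python
import random

def correction_identity_residual(rho, rho_p, a, a_p, b, b_p):
    # residual of:  delta(a) delta(b) - delta(rho a) delta(b / rho)
    #             = (rho'/rho - 1) a' b + (rho/rho' - 1) a b'
    # (primed quantities are evaluated at x' = x + l, unprimed at x)
    lhs = (a_p - a) * (b_p - b) - (rho_p * a_p - rho * a) * (b_p / rho_p - b / rho)
    rhs = (rho_p / rho - 1.0) * a_p * b + (rho / rho_p - 1.0) * a * b_p
    return lhs - rhs

random.seed(0)
residuals = [abs(correction_identity_residual(random.uniform(0.5, 2.0),
                                              random.uniform(0.5, 2.0),
                                              random.uniform(-1.0, 1.0),
                                              random.uniform(-1.0, 1.0),
                                              random.uniform(-1.0, 1.0),
                                              random.uniform(-1.0, 1.0)))
             for _ in range(100)]
print(max(residuals))  # at machine-precision level: the identity holds pointwise
```

In particular, the residual vanishes identically when $\rho=\rho^{\prime}$, which makes explicit the statement that $C_p$ and $C_{\tau}$ are controlled by the level of density fluctuations.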
On large scales, $l\rightarrow \infty$, where the correlations $\left\langle {\boldsymbol{\tau}(\boldsymbol{x}^\prime)}:\boldsymbol{\Sigma}(\boldsymbol{x}) \right\rangle\rightarrow 0$, the viscous term becomes twice the viscous heating rate $Q_{\mu}$, \begin{equation} \left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\Sigma}\right\rangle \rightarrow 2 \left\langle \boldsymbol{\tau}:\boldsymbol{\Sigma}\right\rangle = 2 Q_{\mu}. \end{equation} Equation~(\ref{KHMc}) is analogous to Equation~(10) of \cite{gaba11} but it does not include the isothermal internal energy assumed there (i.e., $p=c_s^2 \rho$, $e=c_s^2\ln{\rho/\rho_0}$, $c_s$: sound speed; see also appendix~\ref{apIE}). Also, in contrast with \cite{gaba11}, we do not consider forcing since we investigate decaying turbulence here. Now we can test Equation (\ref{KHMc}) using the simulation results of section~\ref{simulation}. We define the departure from zero of this equation as \begin{align} O(l) = \frac{1}{4} \left( -\frac{\partial S}{\partial t}- \boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y} - R + 2\left\langle \delta p\delta\theta\right\rangle +C_p - 2\left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}} \right\rangle -C_\tau \right). \label{KHMO} \end{align} \begin{figure} \centerline{\includegraphics[width=12cm]{cyagPHD13arC}} \caption{(black) The departure $O$ (given by Equation~(\ref{KHMO})) as a function of the scale $l$ along with the different contributions, the decaying term (blue) $-{\partial S}/{\partial t}/4$, the cascade term (green) $-\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}/4-R/4$, the compressible coupling term (orange) $\left\langle \delta p\delta\theta\right\rangle/2+C_p/4$, and (red) the scale-dependent dissipation term $-\left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}}\right\rangle/2 -C_\tau/4$.
Dashed lines show the incompressible equivalents, (black) the departure $O^{(i)}$ (given by Equation~(\ref{KHMiO})), the decaying term (blue) $-\rho_0{\partial S^{(i)}}/{\partial t}/4$, the cascade term (green) $-\rho_0\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}^{(i)}/4$, and (red) the dissipation term $\rho_0 \nu \mathrm{\Delta} S^{(i)}/2 - \rho_0\epsilon$. $O$, $O^{(i)}$ and all their contributions are normalized to $Q_{\mu}$. \label{yag} } \end{figure} Figure~\ref{yag} shows (black) the departure $O$ as a function of the scale $l$ (isotropized/averaged over spherical angles) along with the different contributions, the decaying term (blue) $-{\partial S}/{\partial t}/4$, the cascade term (green) $-\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}/4-R/4$, the pressure dilatation term (orange) $\left\langle \delta p\delta\theta\right\rangle/2+C_p/4$, and the scale-dependent dissipation term $ - \left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}}\right\rangle/2 -C_\tau/4 $. This calculation is done for times $6.2$ and $6.3$ (see Figure~\ref{evol}) over a reduced box $512^3$ (taking every second point in all directions); the structure functions are calculated over the full separation space and isotropized/averaged over the spherical angles; the partial time derivative is approximated by the finite difference between the two times. Figure~\ref{yag} demonstrates that the departure $O$ is small as predicted by Equation~(\ref{KHMc}); quantitatively we get $|O|/Q_\mu<0.006$.
The decay, dissipation, and pressure dilatation terms approach zero as $l\rightarrow 0$ and reach their maximum absolute values on large scales: the compressible dissipation term $\left\langle \delta\boldsymbol{\tau}:\delta\boldsymbol{\mathrm{\Sigma}}\right\rangle/2 \rightarrow Q_\mu $ as expected, and, similarly, $\partial S/\partial t /4 \sim \partial \langle \rho |\boldsymbol{u}|^2 \rangle/\partial t /2\simeq 0.91 Q_\mu $ and $\langle \delta p \delta \theta \rangle/2 \sim \langle p \theta \rangle \simeq 0.12 Q_\mu$. On large scales we recover the energy conservation $\partial \langle \rho |\boldsymbol{u}|^2 \rangle/\partial t /2=-Q_\mu+ \langle p \theta \rangle$; the small error is likely due to the estimation of the time derivative by the finite difference and other numerical effects. The correction terms are small but not negligible, $|C_p|/(4Q_\mu)<0.06$ and $|C_\tau|/(4Q_\mu)<0.02$, and they tend to zero on small and large scales. The cascade term is important on medium scales but there is no true inertial range, since the decay, dissipation, and pressure dilatation terms are not negligible there. For larger Reynolds numbers one may expect that the decay and pressure dilatation terms become negligible on medium scales and that there is a range of scales where the cascade term is compensated by the constant dissipation term \begin{align} \boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}+R=-4 Q_\mu, \label{exactKHM} \end{align} i.e., the inertial range. Figure~\ref{yag} also displays by dashed lines the results of the corresponding incompressible version of the KHM equation, with the departure from zero (renormalized by the background density $\rho_0$) given by \begin{equation} O^{(i)}(l)= \frac{\rho_0}{4} \left( -\frac{\partial S^{(i)}}{\partial t}- \boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}^{(i)} + 2 \nu \mathrm{\Delta} S^{(i)} - 4\epsilon \right). \label{KHMiO} \end{equation} The incompressible terms are comparable to their compressible counterparts.
In particular, the dissipation terms are close to each other. This indicates that most of the dissipation is incompressible. On the other hand, $\rho_0\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}^{(i)}$ and $\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}$ are almost identical; the decrease of the cascade rate in the compressible KHM equation is due to the compressible $R$ term. \section{Coarse-graining approach} \label{aluie} Let us now compare the structure-function approach with the coarse-graining one. This method is based on a scale-dependent filtering of the compressible Navier-Stokes equations \cite[cf.,][]{alui13}. For any field $a(\boldsymbol{x})$ one defines a coarse-grained (low-pass filtered) field \begin{equation} \overline{a}_{\ell}(\boldsymbol{x})= \int_V G_{\ell}(\boldsymbol{r}) a(\boldsymbol{x}+\boldsymbol{r})\mathrm{d}^3\boldsymbol{r}, \end{equation} where $G_{\ell}(\boldsymbol{r})$ is a convolution kernel normalized so that $\int_V G_{\ell}(\boldsymbol{r})\mathrm{d}^3\boldsymbol{r}=1$. Here we use a filter $G_{\ell}(\boldsymbol{r})=\ell^{-3} \mathcal{G}(\boldsymbol{r}/\ell)$ based on the kernel $\mathcal{G}(\boldsymbol{r})$ which has the following Fourier transform \begin{equation} \hat{\mathcal{G}}(\boldsymbol{k})\propto \begin{cases} \mathrm{exp}\left(- \frac{k^2}{1/4-k^2} \right) & k < 1/2\\ 0 & k\geq 1/2 \end{cases} \end{equation} where $k=|\boldsymbol{k}|$ \cite[see][for details]{eyal09}. To include the density variations one also defines, for each field $a(\boldsymbol{x})$, a density-weighted (Favre) filtered field \begin{equation} \tilde{a}_{\ell}(\boldsymbol{x})= \frac{ \overline{\rho a}_{\ell} (\boldsymbol{x}) }{\overline{\rho}_{\ell}(\boldsymbol{x})}.
\end{equation} By applying the filtering to Equations~(\ref{density},\ref{velocity}) one gets \begin{align} \frac{\partial \overline{\rho}_{\ell}}{\partial t}+ \boldsymbol{\nabla} \cdot(\overline{\rho}_{\ell}\tilde{\boldsymbol{u}}_{\ell})&= 0, \label{cgdensity}\\ \frac{\partial (\overline{\rho}_{\ell}\tilde{\boldsymbol{u}}_{\ell})}{\partial t}+ \boldsymbol{\nabla}\cdot (\overline{\rho}_{\ell} \tilde{\boldsymbol{u}}_{\ell} \tilde{\boldsymbol{u}}_{\ell}) &= - \boldsymbol{\nabla}\cdot\left[ \overline{\rho}_{\ell} ( \widetilde{\boldsymbol{u}\boldsymbol{u}}_{\ell}-\tilde{\boldsymbol{u}}_{\ell} \tilde{\boldsymbol{u}}_{\ell} )\right] -\boldsymbol{\nabla}\overline{p}_{\ell} +\boldsymbol{\nabla}\cdot\overline{\boldsymbol{\tau}}_{\ell}. \label{cgvelocity} \end{align} One can derive a filtered energy budget and obtain the following spatially averaged energy conservation equation (assuming a closed system), in which the spatial transport of energy drops out: \begin{equation} \frac{ \partial \langle \mathcal{E}_{\ell} \rangle}{\partial t} + \langle \Pi_{\ell} +\Lambda_{\ell} - \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} + D_{\ell} \rangle=0 \label{aluiec} \end{equation} where $\langle \cdot \rangle$ denotes spatial averaging ($\langle a(\boldsymbol{x}) \rangle =\int_V a(\boldsymbol{x}) \mathrm{d}^3\boldsymbol{x}/V$) and \begin{align} \mathcal{E}_{\ell}&=\frac{1}{2} \overline{\rho}_{\ell}|\tilde{\boldsymbol{u}}_{\ell}|^2, \\ \Pi_{\ell} &= - \overline{\rho}_{\ell} \boldsymbol{\nabla}\tilde{\boldsymbol{u}}_{\ell}:(\widetilde{\boldsymbol{u}\boldsymbol{u}}_{\ell}-\tilde{\boldsymbol{u}}_{\ell}\tilde{\boldsymbol{u}}_{\ell}),\\ \Lambda_{\ell} &= ( \tilde{\boldsymbol{u}}_{\ell} -\overline{\boldsymbol{u}}_{\ell} ) \cdot \boldsymbol{\nabla} \overline{p}_{\ell}, \\ D_{\ell} &= \boldsymbol{\nabla}\tilde{\boldsymbol{u}}_{\ell}: \overline{\boldsymbol{\tau}}_{\ell}.
\end{align} Equation~(\ref{aluiec}) represents a coarse-graining equivalent to the KHM equation~(\ref{KHMc}); $\partial \mathcal{E}_{\ell}/{\partial t}$ describes the (scale-dependent) kinetic energy decay, $\langle \Pi_{\ell}+\Lambda_{\ell}\rangle $ represents the energy transfer across scales, $ \langle \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} \rangle$ is the (scale-dependent) pressure dilatation term, and $ \langle D_{\ell} \rangle$ is the dissipation term. Similarly one can get the incompressible version of Equation~(\ref{aluiec}) starting from Equation (\ref{ivelocity}) \cite[cf.,][]{eyal09} as \begin{equation} \frac{ \partial \langle \mathcal{E}_{\ell}^{(i)} \rangle}{\partial t} + \langle \Pi_{\ell}^{(i)} + D_{\ell}^{(i)} \rangle=0, \label{aluiei} \end{equation} where \begin{align} \mathcal{E}_{\ell}^{(i)}&=\frac{1}{2} \rho_0|\overline{\boldsymbol{u}}_{\ell}|^2, \\ \Pi_{\ell}^{(i)}&= - \rho_0\boldsymbol{\nabla}\overline{\boldsymbol{u}}_{\ell}:(\overline{\boldsymbol{u}\boldsymbol{u}}_{\ell}-\overline{\boldsymbol{u}}_{\ell}\overline{\boldsymbol{u}}_{\ell}),\\ D_{\ell}^{(i)} &= \mu \boldsymbol{\nabla}\overline{\boldsymbol{u}}_{\ell}:\boldsymbol{\nabla}\overline{\boldsymbol{u}}_{\ell}, \end{align} and $\rho_0$ is the background density. To test the validity of Equation~(\ref{aluiec}), we define the departure from zero as \begin{equation} O_{\ell} = -\frac{ \partial \langle \mathcal{E}_{\ell} \rangle}{\partial t} - \langle \Pi_{\ell} +\Lambda_{\ell} - \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} + D_{\ell} \rangle. 
\label{testo} \end{equation} Figure~\ref{ccg} displays the results of the simulation of section~\ref{simulation} (solid lines): $O_{\ell}$ (normalized to $Q_{\mu}$) as a function of $\ell$, along with the different contributions, the decaying term (blue) $-\partial \mathcal{E}_{\ell}/{\partial t}$, the energy transfer term (green) $\langle \Pi_{\ell}+\Lambda_{\ell}\rangle $, the large-scale pressure dilatation term (orange) $ \langle \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} \rangle$, and the dissipation term $ \langle D_{\ell} \rangle$. As in the KHM approach, the calculation is done for times $6.2$ and $6.3$ over a reduced box $512^3$. Equation~(\ref{aluiec}) is well satisfied in the simulation; the departure $O_{\ell}$ is small, $|O_{\ell}|/Q_{\mu}\sim 10^{-2}$. Figure~\ref{ccg} shows that, similarly to the KHM results, the decay, dissipation, and pressure dilatation terms go to zero on small scales, while on large scales they reach their unfiltered counterparts: $\langle D_{\ell}\rangle \rightarrow Q_\mu$, $\partial \mathcal{E}_{\ell}/\partial t \rightarrow \partial \langle \rho|\boldsymbol{u}|^2 \rangle/\partial t /2$, and $\langle\overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell}\rangle \rightarrow \langle p \boldsymbol{\nabla}\cdot \boldsymbol{u} \rangle$. The behaviors of the decay and dissipation terms are similar to their KHM counterparts (see Figure~\ref{yag}) but the characteristic scales differ. The pressure dilatation term is small but nonnegligible on all scales and overall decreases from large to small scales; this is also in agreement with the KHM results.
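The filtering operations underlying these diagnostics can be sketched in a few lines. The example below is illustrative only: it is one-dimensional, uses a sharp spectral cutoff in place of the smooth kernel $\mathcal{G}$, and the density and velocity fields are toy assumptions.

```python
import numpy as np

# Illustrative 1-D sketch of the coarse-graining operations: a sharp
# spectral cutoff stands in for the smooth kernel G_ell used in the text,
# and the fields rho and u are toy assumptions.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rho = 1.0 + 0.2 * np.cos(3 * x)          # toy density
u = np.sin(x) + 0.3 * np.sin(20 * x)     # toy velocity with small scales

def lowpass(a, ell):
    """Low-pass filter keeping Fourier modes with |k| < 1/(2 ell)."""
    k = np.fft.fftfreq(a.size, d=1.0 / a.size)  # integer wavenumbers on [0, 2 pi)
    ak = np.fft.fft(a)
    ak[np.abs(k) >= 1.0 / (2.0 * ell)] = 0.0
    return np.real(np.fft.ifft(ak))

ell = 0.05
rho_bar = lowpass(rho, ell)                # coarse-grained bar(rho)_ell
u_tilde = lowpass(rho * u, ell) / rho_bar  # Favre (density-weighted) tilde(u)_ell
```

With $\ell = 0.05$ the cutoff sits at $k=10$, so the $k=20$ component of the toy velocity is removed while the density, which lives at $k\le 3$, passes through unchanged; the Favre field then follows from the ratio $\overline{\rho u}_\ell/\overline{\rho}_\ell$.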
The energy transfer (cascade rate) $\langle \Pi_{\ell}+\Lambda_{\ell}\rangle $ is important on medium scales and reaches a value comparable to that of the KHM cascade rate $-(\boldsymbol{\nabla}_{\boldsymbol{l}}\cdot\boldsymbol{Y}+R)/4$, about $0.7 Q_{\mu}$; the main difference between the coarse-graining and KHM results is the sign, due to the different formulation of the scale-dependent energy conservation. A question is how the situation looks for large Reynolds numbers, where there may be an inertial range. The present results suggest that in this case the cascade rate will be compensated by the (constant) decay term in the inertial range \begin{align} \langle \Pi_{\ell} +\Lambda_{\ell}\rangle = - \frac{1}{2}\frac{ \partial \langle \rho|\boldsymbol{u}|^2 \rangle}{\partial t}. \label{exactCG} \end{align} Figure~\ref{ccg} also shows (dashed lines) the results of the incompressible equivalent (see Equation~(\ref{aluiei})) \begin{equation} O_{\ell}^{(i)} = -\frac{ \partial \langle \mathcal{E}_{\ell}^{(i)} \rangle}{\partial t} - \langle \Pi_{\ell}^{(i)} + D_{\ell}^{(i)} \rangle. \label{testoi} \end{equation} As in the KHM case, the incompressible decay and dissipation terms are close to their compressible equivalents. The incompressible cascade rate $\langle \Pi_{\ell}^{(i)}\rangle$ is similar to that obtained for the incompressible KHM cascade. This supports the interpretation of $R$ as an additional compressible cascade term in the KHM equation.
\begin{figure} \centerline{\includegraphics[width=12cm]{ccg_PHD13ar}} \caption{(solid) Departures from coarse-grained energy conservation (black) $O_{\ell}$ (given by Equation~(\ref{testo})) as a function of the filtering scale $\ell$, along with the different contributions: the decaying term (blue) $-\partial \mathcal{E}_{\ell}/{\partial t}$, the energy transfer term (green) $\langle \Pi_{\ell}+\Lambda_{\ell}\rangle $, the pressure dilatation term (orange) $ \langle \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} \rangle$, and the dissipation term $ \langle D_{\ell} \rangle$. Dashed lines give the incompressible equivalents (black) $O_{\ell}^{(i)}$ (given by Equation~(\ref{testoi})), along with the different contributions: the decaying term (blue) $-\partial \mathcal{E}_{\ell}^{(i)}/{\partial t}$, the energy transfer term (green) $\langle \Pi_{\ell}^{(i)}\rangle$, and the dissipation term $ \langle D_{\ell}^{(i)} \rangle$. $O_{\ell}$, $O_{\ell}^{(i)}$ and all their contributions are normalized to $Q_{\mu}$. \label{ccg} } \end{figure} \section{Discussion} \label{discussion} In this paper we investigated the existence of a conservative cascade (inertial range) of the kinetic energy in compressible hydrodynamic turbulence. We compared the K\'arm\'an-Howarth-Monin and coarse-grained energy conservation approaches (in compressible and incompressible forms) for the kinetic energy, using data from a 3D HD decaying turbulence simulation with a moderate Reynolds number and the initial Mach number $M=1$. In this simulation the two scale-dependent energy conservation equations are well satisfied. The pressure dilatation coupling between the kinetic and internal energies is strongest on large spatial scales and decreases towards smaller scales, in agreement with the results of \cite{aluial12}.
Consistently with the PSD of the kinetic energy, which does not show a Kolmogorov spectrum, we do not observe a region where the kinetic-energy cascade dominates, the effects of decay, pressure dilatation, and dissipation not being negligible there. The KHM and coarse-graining approaches give rates of the cascade, decay, dissipation, and pressure dilatation processes that are in semi-quantitative agreement; the localization of these different processes is, however, different when expressed in the scale-separation or filtering spatial scales. This is not surprising, since calculations of structure functions and low-pass filtering are very different procedures. The kinetic energy decay and dissipation rates estimated from the incompressible approximations are close to the compressible predictions. In the simulation the observed kinetic-energy cascade is weaker than that predicted by the incompressible KHM equation, showing that the compressible term $R$ is not negligible. In both approaches the pressure dilatation terms $2\langle\delta p\delta\theta\rangle+C_p$ and $\langle \overline{p}_{\ell} \boldsymbol{\nabla}\cdot \overline{\boldsymbol{u}}_{\ell} \rangle$ seem to be weak in the region where the kinetic energy cascade term dominates: we then expect that, depending on the level of compressibility, for a large enough Reynolds number \cite[cf.,][]{ishial09} an inertial range for the kinetic energy may exist. The pressure dilatation effects typically weaken from large to small scales \citep{aluial12}, and the inclusion of forcing also extends the region where the cascade dominates. The KHM and coarse-graining approaches could be used to determine the heating/cascade rate.
The compressible equivalents of the incompressible ``exact'' laws, Equations~(\ref{exactKHM}) and (\ref{exactCG}), have different meanings: the KHM approach gives the (viscous) heating rate, whereas the coarse-graining approach relates to the kinetic-energy decay rate \cite[or to the energy injection rate in forced turbulence, cf.,][]{alui13}. In both the KHM and coarse-graining approaches only the cascade of kinetic energy is investigated. The effect of also including the isothermal internal energy as used by \cite{gaba11} is questionable; the structure function $\langle \delta \rho \delta e \rangle$ proposed there does not represent well the internal energy (see appendix~\ref{apIE}). It is also questionable whether a conservative cascade of the kinetic energy exists for strongly compressible (high Mach number) turbulence \citep{eydr18,drey18}; an extension of this work to more compressible cases and/or larger Reynolds numbers is needed. The KHM structure function as well as coarse-graining approaches may be further extended to (Hall) magnetohydrodynamics (MHD) \citep{yangal17b,andral18,campal18,hellal18,ferral19} and even combined \citep{eyin03,kuzzal19} to look at the localization of energy transfer processes. One limitation of the usual coarse-graining approach is that the filter is assumed to be isotropic; in anisotropic cases (such as in rotating HD or magnetized MHD) an anisotropic filter may be more appropriate; the KHM approach resolves this anisotropy rather naturally \citep{verdal15}.
\section{Introduction} By now, holography has established itself as one of the main tools used to gain insights into the out-of-equilibrium dynamics of strongly coupled field theories. Mapping the process of thermalization into black hole formation in asymptotically Anti-de Sitter (AdS) spacetime, gauge/gravity \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} methods have already solved several outstanding problems motivated by both heavy ion and condensed matter physics that have long eluded solutions using traditional field theory techniques (for reviews, see e.g.~\cite{Gubser:2009md,CasalderreySolana:2011us,DeWolfe:2013cua,Brambilla:2014jmp}). This can be largely attributed to the absence of competition: Perturbative methods typically fail already at moderate couplings, while time-dependent quantum phenomena are outside the realm of lattice Monte-Carlo simulations. Important recent advances in applied holography include a fully dynamical description of shock wave collisions in strongly coupled ${\mathcal N}=4$ Super Yang-Mills (SYM) theory \cite{Chesler:2010bi, Wu:2011yd, Casalderrey-Solana:2013aba, vanderSchee:2012qj,vanderSchee:2013pia, Chesler:2015wra} as well as extensive work on the evolution of entropy-like quantities such as the holographic entanglement entropy (HEE) \cite{AbajoArrastia:2010yt,Albash:2010mv,Baron:2012fv,Liu:2013iza,Liu:2013qca,Abajo-Arrastia:2014fma,Balasubramanian:2011ur,Pedraza:2014moa, Auzzi:2013pca, Alishahiha:2014cwa}. 
At the same time, technical leaps have been taken in the incorporation of inhomogeneities and anisotropies in thermalization dynamics \cite{Chesler:2008hg,Heller:2012km,Chesler:2012zk,Heller:2013oxa,Balasubramanian:2013rva}, the development of a formalism to evaluate out-of-equilibrium Green's functions \cite{CaronHuot:2011dr,Mukhopadhyay:2012hv,Balasubramanian:2012tu,Keranen:2014lna}, as well as the first studies of thermalization dynamics away from the infinite coupling limit \cite{Steineder:2012si,Steineder:2013ana,Stricker:2013lma,Baron:2013cya} and in non-conformal backgrounds \cite{Craps:2013iaa}. The above list of references clearly reflects an ongoing pursuit to take the holographic description of equilibration dynamics closer to the physical systems realized in nature, which are typically characterized by complicated initial states, finite coupling strength and $N_c$, as well as broken conformal invariance. In this approach, one is typically confined to determining rather simple observables such as the temporal and spatial evolution of energy density or pressure. A different line of research concentrates on the simplest thermalization models available, but attempts to compute more complicated quantities, such as various off-equilibrium Green's functions and other non-local observables. One prominent example of such models involves the gravitational collapse of an infinitesimally thin but massive shell in AdS space \cite{Danielsson:1999zt,Danielsson:1999fa}; following these papers, several works have addressed a variety of physical phenomena including particle production rates \cite{Baier:2012ax, Baier:2012tc}, the chiral magnetic effect \cite{Lin:2013sga}, jet quenching \cite{Caceres:2012px} and even elliptic flow \cite{Muller:2013ila}. 
Most of these calculations, however, apply the so-called quasistatic approximation and assume the time scale related to the collapse to be parametrically larger than the other scales of interest, thus effectively considering the shell a static object \cite{Lin:2008rw}. In a preceding paper \cite{Keranen:2014zoa}, we reported results from a set of calculations inspecting the falling shell model in a fully dynamical setup, where the shell follows a physical trajectory solved from the Einstein equations. The quantities considered in this context were the HEE and the Causal Holographic Information (CHI), which are both examples of geometric probes whose determination reduces to finding the area of some bulk hypersurface. As this involved rather complicated calculations requiring finding and matching extremal surfaces and geodesics in a time-dependent background, one of the aims of our current paper is to walk the reader through the technical details of this work. In addition, we will present a considerably more thorough analysis of the HEE, comparing in particular its time evolution to results obtained in the quasistatic approximation and in the Vaidya metric. Here, we will find that at all times at least one of these approximation schemes is in good quantitative agreement with the full results. We will also analyze the dynamics of the collapsing shell itself, and provide the full details of the construction of a coordinate system continuous at the location of the shell, briefly introduced already in \cite{Keranen:2014zoa}. In references \cite{Liu:2013iza,Liu:2013qca}, it was noticed that in the Vaidya spacetime the entanglement entropy of large boundary regions exhibits a linear increase in time for an extended period. The coefficient of this increase, $v_\text{E}$, quantifies the rate at which the time evolution entangles the subsystem with its surroundings.
The authors of \cite{Liu:2013iza,Liu:2013qca} proposed an interesting conjecture that the value of $v_\text{E}$ computed for a collapse from AdS to the AdS-Schwarzschild spacetime might provide an upper bound for the rate of entanglement production in any relativistic quantum field theory. Furthermore, it was argued there that the rate $v_\text{E}$ is a property of the final equilibrium state only, as it is only affected by the metric of the final black hole. One way of testing this proposal is to consider different initial states that evolve towards the same thermal state at late times --- an exercise straightforwardly implementable in the collapsing shell model. As we will see, in all of our results the rate $v_\text{E}$ is indeed seen to be independent of the details of the shell trajectory, i.e.~of the way the non-equilibrium initial state is prepared. Thus, we find evidence supporting the picture that $v_\text{E}$ is a property of the final equilibrium state only. Our paper is organized as follows: First, in section 2 and the corresponding appendices A, B and C, we provide technical details of our calculations, including solving for the shell dynamics, constructing a coordinate system that is continuous across the shell, and deriving continuity conditions for geodesics and extremal surfaces at the shell. In section 3, we then analyze the solutions to the shell equation of motion (EoM), while section 4 as well as appendix D are devoted to deriving the HEE and analyzing the corresponding results. Finally, in section 5 we compare our numerical findings to the quasistatic and Vaidya limits, analyzing the regions of validity of these approximation schemes, and in section 6 we draw our conclusions. \section{Details of the calculation} In this section, we introduce the machinery needed to obtain the time evolution of the HEE we are after.
To this end, we first introduce our collapsing shell setup and derive the EoM of a shell falling in AdS$_5$ spacetime in section \ref{sec:dynamics}. Then, we derive a coordinate system continuous at the shell in section \ref{sec:junctionconditions}, which we use to write down junction conditions for extremal surfaces and more generic geometric probes intersecting the shell. Several details of the calculations are left to appendices A--C. \subsection{Setup and shell dynamics} \label{sec:dynamics} Just as in \cite{Keranen:2014zoa}, we work in a spacetime characterized by a negative cosmological constant, into which we immerse a thin massive shell, whose energy momentum tensor is proportional to a delta function in the radial coordinate.\footnote{Similarities of the thin shell setup and the fully back-reacted numerical solution of the Einstein-Klein-Gordon system are discussed in \cite{Garfinkle:2011tc}.} Since the space both inside and outside the shell is a solution to the vacuum Einstein equations, we choose the inside metric to be that of an empty AdS Poincar\'{e} patch and the outside metric the AdS-Schwarzschild solution with Schwarzschild radius $r_h$, \begin{eqnarray} \label{eq:metric} ds^2 &=& -f_\pm(r)\,dt^2 + \frac{dr^2}{f_\pm(r)} + r^2 d\mathbf{x}^2\,, \\ \label{eq:fpm} f_\pm(r) &=& \begin{cases} \hfill r^2 - \frac{r_h^4}{r^2}, \hfill & \text{ if $r > r_s$} \\ \hfill r^2, \hfill & \text{ if $r<r_s$}\end{cases} \, . \end{eqnarray} Here we have introduced a notation that we will be using throughout the calculation, where the subscripts $+$ and $-$ refer to quantities evaluated outside and inside the shell, respectively. It is important to note that although the metric functions $f_\pm$ themselves are time independent, the location of the shell $r_s$, and thus the location of the discontinuity, are time dependent. The radial coordinate $r$ and the spatial coordinates $\mathbf{x}$ are in addition assumed to be continuous at the shell.
This means that there are two different and \emph{a priori} unrelated time coordinates $t_+$ and $t_-$, which we will later relate to each other. The coordinates on the shell are chosen to be the proper time of the shell and the spatial coordinates $\mathbf{x}$, denoted by \begin{equation} \left[ \xi^i \right] = \left( \tau, \mathbf{x} \right) \, . \end{equation} The embedding of the shell in the five-dimensional space is then given by \begin{equation} \left[ y^\mu \right] = \left( t_{s\pm}(\tau),r_s(\tau),\mathbf{x}\right) \, , \end{equation} where $\mu$ is an index running over the five coordinates of the $\mathrm{AdS}_5$ space. Requiring that $\tau$ is the proper time of the shell, we can further relate $t_{s\pm}$ and $r_s$ to each other by writing \begin{equation} ds^2 = -d\tau^2 = -f_\pm \,\dot{t}_{s\pm}^2\,d\tau^2 + \frac{\dot{r}_s^2}{f_\pm}\,d\tau^2 \, . \end{equation} Thus, the derivatives of $t_{s\pm}$ and $r_s$ with respect to the proper time of the shell --- denoted here by dots --- are related by \begin{equation} f_\pm \,\dot{t}_{s\pm} = \sqrt{f_\pm + \dot{r}_s^2}\;. \end{equation} In appendix \ref{sec:eom}, we derive the EoM of the shell, given by eq.~(\ref{eq:Israeli}). To evaluate its right-hand side, we need to specify the energy momentum content of the shell in the appropriate coordinate system. To this end, we employ the perfect fluid form, \begin{equation} S^{ij} = (\rho + p ) u^i u^j +p\,\gamma^{ij}, \label{eq:fluidem} \end{equation} where $u^i$ is the four-velocity of the fluid and $\gamma^{ij}$ is the induced metric on the shell. This is in fact the most general possible energy momentum tensor when imposing translational and rotational symmetry in the $\textbf{x}$ directions. Since the time coordinate in the $\xi$ coordinate system is the proper time of the shell, the coordinate system is in the rest frame of the fluid, and thus $u = (1,\mathbf{0})$.
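The rest-frame content of the perfect-fluid form can be made concrete with a few lines of linear algebra. The numbers below are toy assumptions, and the induced metric is taken as $ds^2=-d\tau^2+r_s^2\,d\mathbf{x}^2$, consistent with the embedding above.

```python
import numpy as np

# Sketch (toy numbers): S^{ij} = (rho + p) u^i u^j + p gamma^{ij}
# in the shell rest frame, with induced metric ds^2 = -dtau^2 + r_s^2 dx^2,
# so the inverse metric is gamma^{ij} = diag(-1, 1/r_s^2, 1/r_s^2, 1/r_s^2).
rho, p, r_s = 2.0, 0.5, 3.0
gamma_inv = np.diag([-1.0, 1.0 / r_s ** 2, 1.0 / r_s ** 2, 1.0 / r_s ** 2])
u = np.array([1.0, 0.0, 0.0, 0.0])  # rest-frame four-velocity u = (1, 0)

S = (rho + p) * np.outer(u, u) + p * gamma_inv
assert np.isclose(S[0, 0], rho)           # S^{tau tau} = energy density
assert np.isclose(S[1, 1], p / r_s ** 2)  # isotropic pressure components
```

In the rest frame the mixed pressure contribution cancels in the $\tau\tau$ component, leaving just $\rho$, while the spatial components carry the isotropic pressure, as used in deriving the two independent components of eq.~(\ref{eq:Israeli}) below.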
The two independent non-zero components of equation (\ref{eq:Israeli}) then become \begin{align} \label{eq:betas}-\frac{3}{r_s} \left( \sqrt{f_-+\dot{r}_s^2} -\sqrt{f_+ +\dot{r}_s^2} \right) & = -8\pi g_5\,\rho\, ,\\ \frac{1}{\dot{r}_s} \frac{d}{d\tau}\left[r_s^2\left( \sqrt{f_-+\dot{r}_s^2}-\sqrt{f_++\dot{r}_s^2}\right)\right] & = -8\pi g_5\,p\,r_s^2 \, , \end{align} from which we can derive a scaling law for the energy density, \begin{equation} \frac{d}{dr_s} \left( \rho \,r_s^3\right) = -3\,r_s^2\,p \, . \end{equation} Considering the simple equation of state (EoS) $p = c\,\rho$, we finally obtain \begin{equation} \rho \propto r_s^{-3(1+c)} \, , \end{equation} so that we can define a constant of motion $M$ satisfying \begin{equation} \label{eq:rhoscale} \frac{8}{3}\pi g_5 \, \rho = \frac{M}{r_s^{3(1+c)}}\, . \end{equation} When the shell is pressureless ($c=0$), the constant $M$ is directly related to the conserved mass of the shell. After inserting eq.~(\ref{eq:rhoscale}) to (\ref{eq:betas}), we get as the final EoM of the shell \begin{equation} \dot{r}_s^2 = \frac{M^2}{4\,r_s^{4+6c}}-\frac{f_-+f_+}{2}+\frac{(f_--f_+)^2r_s^{4+6c}}{4\,M^2}\, , \end{equation} where $f_-$ and $f_+$ are evaluated at the shell, $r=r_s$. It is noteworthy that the functional forms of $f_\pm$ are at this point still arbitrary, and that this equation is first order in time derivatives. The latter fact implies that solving it requires only one initial condition, e.g.~the value of $r_s$ at some known time $\tau$, while the initial velocity is encoded in the constant $M$. This equation can be interpreted as the non-linear generalization of the conservation of kinetic and potential energy in Newtonian mechanics. If we now insert the explicit forms of $f_\pm$ from equation (\ref{eq:fpm}), we obtain from the above \begin{equation} \dot{r}_s^2 =- r_s^2 +\frac{r_h^4}{2\,r_s^2}+\frac{M^2}{4\,r_s^{4+6c}} + \frac{r_h^8 r_s^{6c}}{4\,M^2} \, . 
\label{shelleom1} \end{equation} Using as the initial conditions $r_s(\tau=0) = r_0$, $\dot{r}_s(\tau=0) = 0$, this allows us to solve the value of $M$ as \begin{equation} M^2 = r_0^{4+6c}\left[ \sqrt{f_+(r_0)}-\sqrt{f_-(r_0)}\right]^2 \, , \end{equation} or using the explicit form of $f_\pm$, \begin{equation} M^2 = 2\,r_0^{6(1+c)} \left( 1- \frac{r_h^4}{2\,r_0^4} -\sqrt{1-\frac{r_h^4}{r_0^4}}\right) \, . \end{equation} Together with the equation of motion (\ref{shelleom1}), this determines how the shell falls as a function of its proper time. If, on the other hand, one wants the EoM of the shell in terms of the coordinate time, or wishes to relate the discontinuous time coordinates on the two sides of the shell to each other, one has to further use the relation \begin{equation} \frac{dt_-}{dt_+} = \frac{\dot{t}_{s-}}{\dot{t}_{s+}} = \frac{f_+}{f_-}\sqrt{\frac{f_-+\dot{r}_s^2}{f_+ +\dot{r}_s^2}} \, , \end{equation} which applies at the shell. \subsection{The junction conditions} \label{sec:junctionconditions} In order to eventually determine the time evolution of the entanglement entropy in the boundary field theory, we must be able to solve minimal surfaces in the spacetime containing a moving shell. In particular, we need to know how to join the minimal surfaces across the shell, i.e.~how they refract at the shell. As we will review in section \ref{sec:entropy}, the determination of a minimal surface can be phrased as a variational problem, where one extremizes a functional of the generic form \begin{equation} S=\int d^n \sigma\,\mathcal{L}\left[x^{\mu}(\sigma),\partial_{a}x^{\mu}(\sigma),g_{\mu\nu}\right],\label{eq:genericfunctional} \end{equation} where $ \partial_a x^{\mu}(\sigma)=\partial x^{\mu}(\sigma)/\partial\sigma^{a}$, with $\sigma$ denoting some set of coordinates on the minimal surface and $x^{\mu}(\sigma)$ encoding the embedding of the surface in the spacetime.
In this section, we will work out the refraction conditions following from extremizing a generic functional of the form (\ref{eq:genericfunctional}). Thus, the results we obtain can be applied to any geometric probes in the spacetime, such as geodesics, string worldsheets and minimal area surfaces. Varying the action of eq.~(\ref{eq:genericfunctional}) leads to equations of motion for $x^{\mu}(\sigma)$, the Euler-Lagrange equations, that involve first derivatives of the metric. As we are dealing with a metric that is discontinuous, these equations will have delta function contributions from the derivatives. One way to derive junction conditions for $x^{\mu}(\sigma)$ would be to integrate the EoMs across these singularities; in our case, this is, however, difficult to apply in practice, so we will use a different method. Namely, we will in the following explicitly construct a coordinate system, where the metric is continuous at the position of the shell. Working within it, the EoMs will have no delta function singularities, and therefore the solution $x^{\mu}(\sigma)$ and all its first derivatives $\partial x^{\mu}(\sigma)/\partial \sigma^a$ will be continuous across the shell. Then, to obtain the junction conditions in the original coordinate system, we simply perform a coordinate transformation back to the original coordinates, where the discontinuities in the derivatives reappear from discontinuities in the coordinate transformation. To explicitly construct the coordinate system described above, we choose the timelike coordinate to be the proper time of the shell, $\tau$. Correspondingly, the required spatial coordinate is chosen to be the proper physical distance from the shell normal to it, which we denote by $\lambda$ and use to define our time slicing. 
Thus, our coordinate transformation has the form \begin{equation} (t_{\pm},r,\mathbf{x})\rightarrow (\tau,\lambda,\mathbf{x})\,, \end{equation} where a complication, however, arises from the fact that the normal vector of the shell is only defined at its location. This implies that we need to parallel transport this vector to cover the other parts of the spacetime. Intuitively, we start from the shell and then head out in the direction of the normal vector, parallel transporting it according to \begin{equation} \nabla_n n=0 \; . \end{equation} This requirement is clearly nothing but the geodesic equation, meaning that our new spatial coordinate is simply the physical distance from the shell along a spacelike geodesic normal to the shell at its location. In order to determine the metric in this continuous coordinate system as well as to obtain the desired junction conditions, we need to know how the coordinates in the different coordinate systems are related to each other. Instead of obtaining explicit expressions for the coordinate transformation, it is, however, sufficient to merely calculate the values of the partial derivatives\footnote{Here we have introduced the notation $\PD{a,b,c}$ familiar from thermodynamics to keep in mind which parameter is held constant as the other one is varied. To make our expressions somewhat more compact, we have also suppressed the arguments of our functions: When using the coordinates $\tau$ and $\lambda$, $r$ and $t$ are functions of both of these variables, whereas the time and position of the shell, $t_s$ and $r_s$ are functions of $\tau$ only. Furthermore, $f$ is a function of $r$ and thus of both $\tau$ and $\lambda$ while $f_s = f(r_s(\tau))$.} \begin{equation} \PD{t,\tau,\lambda} , \; \PD{t,\lambda,\tau} , \; \PD{r,\tau,\lambda} \; \text{and} \;\; \PD{r,\lambda,\tau} \end{equation} at the shell. This exercise is performed in appendix \ref{sec:cont}. 
We will now proceed to compute the first total derivatives $d x^{\mu}/d \sigma^a$ in the outside patch, transform them to the new coordinate system, and then transform them further to the inside patch. Using the chain rule, we can write the necessary derivatives in the form \begin{align} \frac{dt_+}{d\sigma^a} &= \frac{d}{d\sigma^a} t_+\left(\tau(\sigma),\lambda(\sigma)\right) = \PD{t_+,\tau,\lambda}\frac{d\tau}{d\sigma^a} + \PD{t_+,\lambda,\tau}\frac{d\lambda}{d\sigma^a}\,,\\ \frac{dr_+}{d\sigma^a} &= \frac{d}{d\sigma^a} r_+\left(\tau(\sigma),\lambda(\sigma)\right) = \PD{r_+,\tau,\lambda}\frac{d\tau}{d\sigma^a} + \PD{r_+,\lambda,\tau}\frac{d\lambda}{d\sigma^a}\,, \end{align} where we will now drop the index $a$ from $\sigma^a$ to simplify our notation. From these expressions, we then solve \begin{align} \label{eq:dlambdadx}\frac{d\lambda}{d\sigma} & = \frac{\PD{r_+,\tau,\lambda}\frac{dt_+}{d\sigma}-\PD{t_+,\tau,\lambda}\frac{dr_+}{d\sigma}}{\PD{r_+,\tau,\lambda}\PD{t_+,\lambda,\tau}-\PD{r_+,\lambda,\tau}\PD{t_+,\tau,\lambda}}\,, \\ \label{eq:dsdx}\frac{d\tau}{d\sigma} & = \frac{\PD{r_+,\lambda,\tau}\frac{dt_+}{d\sigma}-\PD{t_+,\lambda,\tau}\frac{dr_+}{d\sigma}}{\PD{r_+,\lambda,\tau}\PD{t_+,\tau,\lambda}-\PD{r_+,\tau,\lambda}\PD{t_+,\lambda,\tau}} \, , \end{align} which, when evaluated at the shell using the partial derivatives calculated in appendix \ref{sec:cont}, gives further \begin{align} \label{eq:dlambdadx+} \frac{d\lambda}{d\sigma} &= -\dot{r}_s \frac{dt_+}{d\sigma} + \frac{\sqrt{f_{s+}+\dot{r}_s^2}}{f_{s+}} \frac{dr_+}{d\sigma}\,, \\ \label{eq:dsdx+} \frac{d \tau}{d\sigma} & = \sqrt{f_{s+}+\dot{r}_s^2}\frac{dt_+}{d\sigma}-\frac{\dot{r}_s}{f_{s+}}\frac{dr_+}{d\sigma} \; . 
\end{align} Next, we use the chain rule to express $dt/d\sigma$ and $dr/d\sigma$ in the inside patch, \begin{align} \frac{dt_-}{d\sigma} &= \PD{t_-,\tau,\lambda}\frac{d\tau}{d\sigma} + \PD{t_-,\lambda,\tau}\frac{d\lambda}{d\sigma}\,,\\ \frac{dr_-}{d\sigma} &= \PD{r_-,\tau,\lambda}\frac{d\tau}{d\sigma} + \PD{r_-,\lambda,\tau}\frac{d\lambda}{d\sigma}\,, \end{align} which, evaluated again at the shell, produces \begin{align} \frac{dt_-}{d\sigma} &= \frac{\sqrt{f_{s-} + \dot{r}_s^2}}{f_{s-}}\frac{d\tau}{d\sigma} + \frac{\dot{r}_s}{f_{s-}}\frac{d\lambda}{d\sigma}\,,\\ \frac{dr_-}{d\sigma} &= \dot{r}_s\frac{d\tau}{d\sigma} + \sqrt{f_{s-}+\dot{r}_s^2}\frac{d\lambda}{d\sigma} \, . \end{align} Finally, we insert eqs.~(\ref{eq:dlambdadx+}) and (\ref{eq:dsdx+}) into the above equations to get the junction conditions \begin{align} \label{eq:match1} \left.\frac{dt_-}{d\sigma}\right|_{r=r_s} &= \left.\frac{dt_+}{d\sigma}\right|_{r=r_s} \frac{\beta_{s-}\beta_{s+}-\dot{r}_s^2}{f_-} + \left.\frac{dr_+}{d\sigma}\right|_{r=r_s} \frac{\dot{r}_s}{f_- f_+}\left(\beta_{s+}-\beta_{s-}\right)\,,\\ \label{eq:match2} \left.\frac{dr_-}{d\sigma}\right|_{r=r_s} &= \left.\frac{dt_+}{d\sigma}\right|_{r=r_s} \dot{r}_s\left( \beta_{s+}-\beta_{s-}\right) + \left.\frac{dr_+}{d\sigma}\right|_{r=r_s} \frac{1}{f_+}\left( \beta_{s+}\beta_{s-}-\dot{r}_s^2\right)\, , \end{align} where $\beta_{s\pm} = \sqrt{f_{s\pm}+\dot{r}_s^2}$. As a consistency check, we verify that in the limit where the shell vanishes, $f_- \to f_+$, both of these relations become identities. Also, in the limit where the velocity of the shell approaches the speed of light $\dot{r}_s\rightarrow\infty$, the junction conditions reduce to the ones previously found in the Vaidya spacetime, cf.~e.g.~\cite{Liu:2013qca}. Interestingly, the above matching conditions are valid in a space with an arbitrary dimensionality, and one only needs to modify the metric functions $f_+$ and $f_-$ in eq.~(\ref{eq:fpm}). 
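Because the junction conditions descend from a coordinate chart that is continuous across the shell, they must preserve the norm of the tangent vector of any probe crossing it. This offers a quick numerical check; the helper `refract` and the parameter values below are illustrative assumptions, not code from this work.

```python
import numpy as np

# Illustrative check: the junction conditions (match1)-(match2) preserve
# the tangent-vector norm, since they come from a chart continuous at the shell.
def refract(dt_p, dr_p, f_m, f_p, v):
    """Map (dt/dsigma, dr/dsigma) from outside (+) to inside (-) the shell."""
    b_m = np.sqrt(f_m + v ** 2)  # beta_{s-}
    b_p = np.sqrt(f_p + v ** 2)  # beta_{s+}
    dt_m = dt_p * (b_m * b_p - v ** 2) / f_m \
        + dr_p * v * (b_p - b_m) / (f_m * f_p)
    dr_m = dt_p * v * (b_p - b_m) + dr_p * (b_p * b_m - v ** 2) / f_p
    return dt_m, dr_m

rs, rh = 2.0, 1.0
f_m, f_p = rs ** 2, rs ** 2 - rh ** 4 / rs ** 2  # AdS inside, AdS-Sch outside
v = -0.7                                         # toy shell velocity dr_s/dtau
dt_m, dr_m = refract(1.0, 0.3, f_m, f_p, v)

norm_out = -f_p * 1.0 ** 2 + 0.3 ** 2 / f_p
norm_in = -f_m * dt_m ** 2 + dr_m ** 2 / f_m
assert np.isclose(norm_in, norm_out)
# with no shell (f_- = f_+) the map reduces to the identity:
assert np.allclose(refract(1.0, 0.3, f_p, f_p, v), (1.0, 0.3))
```

One can verify analytically that $-f_-\,(dt_-/d\sigma)^2+(dr_-/d\sigma)^2/f_-$ equals its outside counterpart for arbitrary $\dot{r}_s$; the sketch confirms this and the $f_-\to f_+$ identity limit noted in the text.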
Also, in appendix \ref{genjun} we show how the conditions are modified if one takes as the starting point of the calculation a more general metric, in which the $dt^2$ and $dr^2$ components are a priori not related to each other. \section{Properties of the shell motion \label{sec:shellmot}} In this section, we perform a systematic study of the solutions of the shell EoM for different values of the EoS parameter $c$, defined through $p=c\, \rho$. For brevity, we will here denote $t_{s\pm} (\tau)$ by simply $t_\pm$. \subsection{Simple example: $c=-1/3$} Let us start by considering in detail the case of $c=-1/3$, which exhibits the same qualitative features as the more general cases studied later, but is computationally somewhat simpler. For this value of $c$, the equations of motion reduce to \begin{align} \dot{r}_s^2&=-r_s^2+r_0^2\Big(\frac{r_0}{r_s}\Big)^2\,,\label{eq:rs} \\ \dot{t}_+&=\frac{r_s\sqrt{r_0^4-r_h^4}}{r_s^4-r_h^4}\,,\label{eq:tp} \end{align} of which the first can be solved by direct integration, producing \begin{equation} \tau=\frac{1}{r_0^2}\int_{r_s}^{r_0}\frac{dr\, r}{\sqrt{1-\Big(\frac{r}{r_0}\Big)^4}}=\frac{1}{2}\arccos\!\Big[\Big(\frac{r_s}{r_0}\Big)^2\Big] \, , \end{equation} or equivalently \begin{equation} r_s(\tau)=r_0\sqrt{\cos(2\tau)}\,. \end{equation} From here, we see that for small and negative $\tau$ the shell heads towards the boundary, while at $r=r_0$ or $\tau=0$ it turns around and collapses. At the proper time $\tau=\pi/4$, the shell reaches the singularity at $r=0$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.55,trim={0cm 0cm 4cm 19cm},clip]{shelltrajectories1.pdf}\hspace{-6.5em}\includegraphics[scale=0.38]{trajectories1.pdf} \caption{\label{fig:shelltrajectories1} Left: Shell trajectories for $c=-1/3$ and for $r_0=(4,6,10,25,50)$ (bottom to top). The dashed curve represents here an ingoing null geodesic starting from the boundary at $t_+=0$.
Right: Shell trajectories for $r_0=3$ and 5 and for $c=0,$ 0.3, 0.33 and $1/3$ (from left to right). The units in both figures are chosen such that $r_h=1$.} \end{center} \end{figure} Many of the interesting features of the shell trajectories become apparent only once the trajectory is expressed in terms of the time coordinate $t_{+}$. Solving eq.~(\ref{eq:tp}) leads to an expression for $t_{+}$ in terms of elliptic integrals, which is not particularly illuminating. We will thus rather take a step back and solve the EoM for $dr_s/dt_+$, obtained by taking the ratio of eqs.~(\ref{eq:rs}) and (\ref{eq:tp}), \begin{equation} \frac{dr_s}{dt_+}=\frac{\sqrt{r_0^4-r_s^4}\left(r_h^4-r_s^4\right)}{r_s^2\sqrt{r_0^4-r_h^4}}\, . \end{equation} Solving for $t_+$ from here, we obtain \begin{equation} t_+=\frac{\sqrt{r_0^4-r_h^4}}{r_0^3}\int_{r_s/r_0}^1\frac{du\,u^2}{\sqrt{1-u^4}\left[u^4-\left(\frac{r_h}{r_0}\right)^4\right]}\, ,\label{eq:tintegral1} \end{equation} where we have defined the integration variable $u=r/r_0$. A numerical integration of eq.~(\ref{eq:tintegral1}) is shown in figure \ref{fig:shelltrajectories1} (left). It is clearly seen from here that all trajectories asymptotically approach the horizon at $r=r_h=1$ with the same exponential rate as a null geodesic, but that the approach towards the null geodesic becomes faster when $r_0$ is increased. Both of the above features can be understood from the integral of eq.~(\ref{eq:tintegral1}). Near the horizon, it is dominated by its lower limit, where we can approximate \begin{equation} t_+= \frac{1}{4r_h}\int_{r_s/r_0}\frac{du}{u-r_h/r_0}+...=-\frac{1}{4r_h}\log(r_s-r_h)+... \,, \end{equation} leading to the relation \begin{equation} r_s\approx r_h+C e^{-4 r_h t_+}\,, \end{equation} i.e.~a null geodesic near the horizon. 
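Both the closed-form solution and the near-horizon behavior are straightforward to verify numerically. The sketch below (an illustrative check we add, in units $r_h=1$ and with a sample $r_0$) confirms that $r_s(\tau)=r_0\sqrt{\cos(2\tau)}$ solves eq.~(\ref{eq:rs}), and that the trajectory obtained from eq.~(\ref{eq:tintegral1}) approaches the horizon at the exponential rate $e^{-4r_ht_+}$ of a null geodesic:

```python
import numpy as np
from scipy.integrate import quad

rh, r0 = 1.0, 4.0                      # horizon radius and turning point (sample)

# --- The closed-form solution satisfies eq. (eq:rs) ---
tau = np.linspace(0.05, 0.7, 200)      # proper times before the singularity at pi/4
rs = r0 * np.sqrt(np.cos(2 * tau))
drs = -r0 * np.sin(2 * tau) / np.sqrt(np.cos(2 * tau))   # analytic dr_s/dtau
assert np.allclose(drs**2, -rs**2 + r0**2 * (r0 / rs)**2)

# --- Boundary time from eq. (eq:tintegral1) ---
def t_plus(r):
    pref = np.sqrt(r0**4 - rh**4) / r0**3
    integrand = lambda u: u**2 / (np.sqrt(1 - u**4) * (u**4 - (rh / r0)**4))
    val, _ = quad(integrand, r / r0, 1.0, limit=400)
    return pref * val

# Logarithmic slope d log(r_s - r_h)/dt_+ tends to -4 r_h near the horizon
r1, r2 = rh + 1e-2, rh + 1e-3
slope = np.log((r2 - rh) / (r1 - rh)) / (t_plus(r2) - t_plus(r1))
assert abs(slope + 4 * rh) < 0.1
```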
The exact same thing happens when $r_0$ is taken towards the boundary with $r_s/r_0$ fixed to a small number: The integral is again dominated by the lower limit of integration, and we can approximate \begin{equation} t_+=\frac{1}{r_0}\int_{r_s/r_0}\frac{du}{u^2}+...=\frac{1}{r_s}+... \, , \end{equation} which gives \begin{equation} r_s\approx\frac{1}{t_+}\, , \end{equation} identified as a null geodesic near the boundary. As $r_0\rightarrow\infty$, a boundary observer thus sees the shell approaching a null geodesic, implying that the whole spacetime for $r\ll r_0$ is well approximated by the Vaidya limit. \subsection{Generic EoS: $-1<c<1/3$} For the range $-1<c<1/3$, the shell trajectories share the same qualitative features as the above example $c=-1/3$; in particular, they are always seen to approach a null geodesic when either $r_0/r_h\rightarrow\infty$ or $r_s\rightarrow r_h$. To demonstrate this, we work at the level of the EoM and show that it approaches the equation of a null geodesic, \begin{equation} \frac{dr}{dt_+}=-f_+(r)\, ,\label{eq:lightliketraj} \end{equation} in these limits. For a general $c$, the shell equations of motion are given by \begin{align} \dot{r}_s^2 &=- r_s^2 +\frac{r_h^4}{2\,r_s^2}+\frac{M^2}{4\,r_s^{4+6c}} + \frac{r_h^8 r_s^{6c}}{4\,M^2}\,,\label{eq:generalc1} \\ \dot{t}_{\pm}&=\frac{\sqrt{f_{\pm}+\dot{r}_s^2}}{f_{\pm}} \,, \label{eq:generalc2} \end{align} where we will first consider the limit of the shell approaching the horizon, $r_s\rightarrow r_h$. In this case, $f_+$ approaches zero, so we can approximate eq.~(\ref{eq:generalc2}) as \begin{equation} \dot{t}_+\approx\frac{|\dot{r}_s|}{f_+}\,. \end{equation} Using this, we obtain \begin{equation} \frac{dr_s}{dt_+}=\frac{\dot{r}_s}{\dot{t}_+}\approx -f_+(r_s), \end{equation} which clearly implies that for all initial data with $r_0 > r_h$ the shell approaches the speed of light as it approaches the horizon.
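As a cross-check of this argument (an illustration we add here), one can fix $M$ numerically from the turning-point condition in eq.~(\ref{eq:generalc1}) and verify that the coordinate velocity approaches $-f_+$ at the horizon; the sketch below uses the sample values $c=0$, $r_0=3$, $r_h=1$, and the outside blackening factor $f_+(r)=r^2-r_h^4/r^2$:

```python
import numpy as np
from scipy.optimize import brentq

rh, r0, c = 1.0, 3.0, 0.0            # horizon, turning point, EoS parameter (sample)

def f_plus(r):
    # Blackening factor of the outside AdS-Schwarzschild metric
    return r**2 - rh**4 / r**2

def rdot2(r, M):
    # Right-hand side of eq. (generalc1)
    return -r**2 + rh**4/(2*r**2) + M**2/(4*r**(4+6*c)) + rh**8 * r**(6*c)/(4*M**2)

# Fix the integration constant M by demanding a turning point at r = r0
M = brentq(lambda m: rdot2(r0, m), 0.05, 1.0)

def drdt(r):
    # Coordinate velocity dr_s/dt_+ from eqs. (generalc1)-(generalc2), infalling branch
    F = rdot2(r, M)
    return -np.sqrt(F) * f_plus(r) / np.sqrt(f_plus(r) + F)

# The ratio to the null-geodesic velocity -f_+ tends to one at the horizon
ratios = [drdt(r) / (-f_plus(r)) for r in (1.5, 1.1, 1.001)]
assert ratios[0] < ratios[1] < ratios[2] < 1.0
assert abs(ratios[2] - 1.0) < 0.01
```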
Moving next to the case of $r_0/r_h\rightarrow\infty$, we substitute the integration constant \begin{equation} M\approx \frac{1}{2}r_h^4 r_0^{3c-1} \end{equation} into eq.~(\ref{eq:generalc1}). This leads to \begin{equation} \dot{r}_s^2 \approx - r_s^2 +\frac{r_h^4}{2\,r_s^2}+\frac{r_h^8 \, r_0^{6c-2}}{16 \, r_s^{6c+4}} + \frac{r_s^{6c}}{r_0^{6c-2}}\,, \end{equation} which shows that as long as $c<1/3$ and the shell location $r_s$ is kept fixed as $r_0/r_h\rightarrow\infty$, $\dot{r}_s\rightarrow\infty$ in this limit. Thus, in eq.~(\ref{eq:generalc2}) we can use $\dot{r}_s^2\gg f_+(r_s)$ and approximate \begin{equation} \dot{t}_{\pm}\approx \frac{|\dot{r}_s|}{f_{\pm}}, \end{equation} which again leads to a null geodesic solution for $r_s$. One difference between the two limits considered above is that as the starting point of the shell is sent to infinity, the shell trajectory approaches a null geodesic when viewed either from the outside of the shell, using the time coordinate $t_+$, or from the interior of the shell, using the time coordinate $t_-$. This is not the case when $r_0$ is kept fixed and the shell approaches the horizon: here, only the trajectory as viewed from the exterior approaches a null geodesic, while from the interior point of view it usually does not. \subsection{Generic EoS: $c\ge 1/3$} Finally, we take a look at values of $c$ greater than or equal to the conformal value $c=1/3$. Again, the analysis of the trajectories near the horizon goes through unchanged, as the argument presented in the previous subsection is independent of the value of $c$. Thus, the shell is seen to approach the speed of light also for $c\ge 1/3$. The case of large $r_0$ is, however, very different. Here, the AdS spacetime can be seen to provide a harmonic potential that pulls the shell towards the center, as seen from the first term on the right-hand side of eq.~(\ref{eq:generalc1}).
For $c>1/3$, the last term in this equation overcomes the pull of AdS, and the shell gets repelled from the center, accelerating towards the boundary. Physically, this means that the pressure of the shell overcomes the gravitational attraction towards the center of AdS. In the special case of $c=1/3$, the last term of eq.~(\ref{eq:generalc1}) also scales as $r_s^2$, which leads to the possibility that the ``forces'' cancel at large $r_s$. This enables the shell to approach the center of AdS with a very small acceleration even when it starts from near the boundary, as the gravitational pull and pressure almost cancel each other. Quantitatively, this can be seen by determining the acceleration of the shell, \begin{equation} \ddot{r}_s = \frac{1}{2\dot{r}_s}\frac{d}{d\tau} \dot{r}_s^2 \, , \end{equation} and then expanding it near the turning point. Differentiating the expression in eq.~(\ref{shelleom1}) and expanding it yields \begin{equation} \ddot{r}_s = -4r_0 + 2\,(1+3c)\,r_0\,\sqrt{1-\frac{r_h^4}{r_0^4}} + \mathcal{O}(r-r_0) \; . \end{equation} From this expression, we see that the $c=\frac{1}{3}$ case is special, as for sufficiently large $r_0$ the acceleration vanishes to first order and the motion of the shell can be arbitrarily slow. In a spacetime of arbitrary dimension $d+1$ (in our case $d=4$), the special value of $c$ reads $c=\frac{1}{d-1}$. \section{Entanglement entropy \label{sec:entropy}} Next, we move on to consider the covariant holographic entanglement entropy (HEE) \cite{Hubeny:2007xt}, which is obtained by extremizing the area functional \begin{equation} A=\int d^3\sigma\sqrt{\det_{ab}g_{\mu\nu}\frac{\partial x^{\mu}}{\partial\sigma^a}\frac{\partial x^{\nu}}{\partial\sigma^b}} \end{equation} with the condition that the bulk hypersurface ends on a predefined surface $\mathcal{A}$, which resides on a constant time slice on the boundary.
In the dual CFT, the (geometric) entanglement entropy of the region $\mathcal{V}$ bounded by $\mathcal{A}$ is then conjectured to be given by \begin{equation} S_\text{EE}=\frac{A}{4G_N}\, . \end{equation} The time evolution of this quantity has been extensively analyzed in various equilibration scenarios since the original work of \cite{Hubeny:2007xt}; in particular, for studies in the Vaidya spacetime, see \cite{AbajoArrastia:2010yt,Albash:2010mv,Balasubramanian:2011ur}. In the current section, we will consider the evolution of the HEE in the collapsing shell model, using the physical shell trajectories obtained in the previous section. First, we study a simple example shape for the boundary region, a strip of width $L$, and then derive some more generic results for arbitrary shapes. To supplement this discussion, the relevant equations of motion for the case of a spherical boundary region are derived in some detail in appendix \ref{sec:spherical}. In all of the calculations presented in this section, we work in the Eddington-Finkelstein coordinates, where instead of using the time coordinate $t$ we employ the lightcone coordinates $v_{\pm}$, defined by \begin{equation} dv_{\pm}=dt_{\pm}+\frac{dr}{f_{\pm}(r)}\,. \end{equation} In addition, we will switch to the bulk radial coordinate $z=1/r$, so that our bulk metric will be given by \begin{equation} ds^2=\frac{1}{z^2}\left[-h(z,v)dv^2-2dv dz+d\textbf{x}^2\right]\, . \end{equation} Here, we have further defined \begin{equation} h(z,v)=1-\theta(v-v_s(z))z^4\,, \end{equation} where $v_s(z)$ is the trajectory of the shell parametrized as a function of $z$. Throughout this section, we set the Schwarzschild radius to unity, i.e.~$r_h=1/z_h=1$. \subsection{Strip boundary region} The interior of a strip on the boundary is defined as the region of space with $x^1\in (-L/2,L/2)$, $x^2\in (-L_2/2,L_2/2)$, and $x^3\in (-L_3/2,L_3/2)$, where $L_2$ and $L_3$ will be sent to infinity at the end.
In this case, we can clearly assume that the bulk extremal surface is invariant under translations in the $x^2$ and $x^3$ directions. Thus, we can parametrize the extremal surface using the coordinates $z=z(x)$ and $v=v(x)$, where $x\equiv x^1$, while the surface is spread homogeneously along the $x^2$ and $x^3$ directions. For a more thorough explanation of the setup, we refer the reader to \cite{Albash:2010mv,Balasubramanian:2011ur}. With the above definitions, the area functional under consideration becomes \begin{equation} A=L_2L_3\int dx\frac{\sqrt{B}}{z(x)^3}=L_2L_3\int dx\,\mathcal{L},\quad B=1-h(z(x),v(x))\,v'(x)^2-2\,v'(x)z'(x)\,. \end{equation} Due to the translational invariance of the system, there is a conserved quantity \begin{equation} \frac{\partial \mathcal{L}}{\partial z'}z'+\frac{\partial \mathcal{L}}{\partial v'}v'-\mathcal{L}=-\frac{1}{z^3\sqrt{B}}\, , \end{equation} which is indeed constant along the entire extremal surface. Its value can be fixed by evaluating it at the point $(z_*,v_*,x_*)$ where the surface turns around,\footnote{Here we assume that the extremal surface is reflection symmetric around $x=0$.} i.e.~$z'(x_*)=v'(x_*)=0$, which quickly leads to the result \begin{equation} \sqrt{B}=\left(\frac{z_*}{z}\right)^3\,.\label{eq:xconstant} \end{equation} The metric is clearly independent of $v$ everywhere except at the position of the shell, implying that there is also a second constant of motion, \begin{equation} \frac{\partial\mathcal{L}}{\partial v'}=-\frac{h v'+z'}{z^3\sqrt{B}}\equiv -\tilde{E}\,, \end{equation} which takes different values on the two sides of the shell. Using eq.~(\ref{eq:xconstant}), we obtain \begin{equation} h_{\pm}v'+z'=E_{\pm}\,,\label{eq:energy} \end{equation} where we have redefined the constant as $E_{\pm}\equiv z_*^3\tilde{E}_{\pm}$, and denoted $h_-=1$ and $h_+=1-z^4$.
Solving this equation for $v'$ and plugging the result into eq.~(\ref{eq:xconstant}) finally leads us to \begin{equation} z'^2=E_{\pm}^2+h_{\pm}\left[\left(\frac{z_*}{z}\right)^6-1\right]\equiv H_{\pm}(z)\,.\label{eq:zdiffeq} \end{equation} If the boundary separation $L$ is sufficiently small, the extremal surface never reaches the shell and always stays in the black hole region, implying that the entanglement entropy stays thermal at all times. The precise value of $L$, above which the surface crosses the shell, clearly depends both on the trajectory of the shell and on the boundary time. In the rest of this section, we will only consider the interesting case where $L$ is large enough that the surface crosses the shell at the beginning of the time evolution. We start by studying the equations of motion in the pure AdS region inside the shell, where the extremal surfaces always have a turning point with $z'=v'=0$. It is easy to see that $E_-$ vanishes there, which implies that everywhere inside the shell we have \begin{equation} v'=-z'\,.\label{eq:constanttime} \end{equation} Written in terms of the $t_-$ coordinate, this means that $t'_-=0$, i.e.~that the surface lies in a constant time slice inside the shell. Integrating eq.~(\ref{eq:zdiffeq}) is now a straightforward exercise, and one finds a two-parameter family of solutions parameterized by the turning point $z_*$ and the value of the time coordinate there, $v(z_*)$. The latter of these parameters can, however, be further traded for the point $z_c$ where the extremal surface crosses the shell, so that the interior surface is parameterized by the pair $(z_*,z_c)$. In the following, we will need the value of the derivative at the interior shell position $z'_-\equiv z'(x_c)$, which is given by (cf.~eq.~(\ref{eq:zdiffeq})) \begin{equation}\label{inside} z_-'=-\frac{1}{z_c^3}\sqrt{z_*^6-z_c^6} \,.
\end{equation} Next, we continue the extremal surfaces across the shell using the junction conditions of eqs.~(\ref{eq:match1}) and (\ref{eq:match2}), with $\sigma=x$. In our current coordinate system, these read \begin{align} z'_+&=\frac{1}{z_c^2}\left[\alpha_+\alpha_-+\dot{z}_s(\alpha_--\alpha_+)-\dot{z}_s^2\right]z'_- +\frac{\dot{z}_s}{z_c^2}(\alpha_--\alpha_+)v'_- \,, \\ v'_+&=\frac{1}{z_c^2h_+(z_c) }\left[\alpha_+\alpha_--\dot{z}_s(\alpha_--\alpha_+)-\dot{z}_s^2\right]v'_-\,, \end{align} where $z'_{\pm}$ and $v'_{\pm}$ are the corresponding derivatives evaluated on the outside and inside of the shell, $\alpha_{\pm}\equiv\sqrt{h_{\pm}z_c^2+\dot{z}_s^2}$, and $\dot{z}_s$ is the proper velocity of the shell. Inside the shell, we can further use eq.~(\ref{eq:constanttime}) to write $v'_-=-z'_-$, which reduces the junction conditions to \begin{align} \frac{z'_+}{z'_-}&\equiv Z(\dot{z}_s)=\frac{1}{z_c^2}\left(\alpha_+\alpha_--\dot{z}_s^2\right) \,,\nonumber \\ \frac{v'_+}{z'_-}&\equiv V(\dot{z}_s)=-\frac{1}{z_c^2h_+(z_c)}\left[\alpha_+\alpha_--\dot{z}_s(\alpha_--\alpha_+)-\dot{z}_s^2\right]\,.\label{eq:matching} \end{align} As a side note, we have here introduced a notation in which the junction conditions are considered functions of the derivative terms; this is done in anticipation of the following section, where we will consider the effects of the quasistatic approximation, in which these derivatives are altogether ignored. From eq.~(\ref{eq:matching}), we see that the quantity $z'_-$, which depends on $z_*$ and $z_c$, determines the values of the derivatives $z'_+$ and $v'_+$ outside the shell. Thus, the integration constant $E_+$ is determined by $z_*$ and $z_c$ through eq.~(\ref{eq:energy}), and we can therefore denote $E_+=E_+(z_*,z_c)$.
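As a consistency check on eq.~(\ref{eq:matching}) (one we add here for illustration), the area element $B=1-h\,v'^2-2v'z'$ must be continuous across the shell; with $v'_-=-z'_-$ this requires $h_+V^2+2VZ=-1$ for every shell velocity. A short numerical sketch, at a sample crossing point $z_c$ in units $z_h=1$:

```python
import numpy as np

zc = 0.8                         # sample crossing point, in units where z_h = 1
h_minus, h_plus = 1.0, 1.0 - zc**4

def junction(zs_dot):
    # Z and V of eq. (matching) as functions of the shell velocity
    a_plus = np.sqrt(h_plus * zc**2 + zs_dot**2)
    a_minus = np.sqrt(h_minus * zc**2 + zs_dot**2)
    Z = (a_plus * a_minus - zs_dot**2) / zc**2
    V = -(a_plus * a_minus - zs_dot * (a_minus - a_plus) - zs_dot**2) / (zc**2 * h_plus)
    return Z, V

# Continuity of B across the shell, from static to ultrarelativistic shells
for zs_dot in (0.0, 0.3, 1.0, 5.0, 50.0):
    Z, V = junction(zs_dot)
    assert abs(h_plus * V**2 + 2 * V * Z + 1.0) < 1e-10

# In the static limit the conditions reduce to Z = sqrt(h_+ h_-), V = -sqrt(h_-/h_+)
Z0, V0 = junction(0.0)
assert np.isclose(Z0, np.sqrt(h_plus * h_minus))
assert np.isclose(V0, -np.sqrt(h_minus / h_plus))
```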
Now the boundary quantities can also be straightforwardly calculated using eqs.~(\ref{eq:energy}) and (\ref{eq:zdiffeq}), and in particular the length of the boundary interval becomes \begin{equation} L/2=\int_{z_c}^{z_*}\frac{dz}{\sqrt{H_-(z)}}+\int_{z_c}^{z_{max}}\frac{dz}{\sqrt{H_+(z)}}+\int_{0}^{z_{max}}\frac{dz}{\sqrt{H_+(z)}}\,.\label{eq:L} \end{equation} Here, we have denoted by $z_{max}$ the maximal value that the coordinate $z$ obtains along our extremal surface within the outside region. If $z'_+<0$, then $z_{max}=z_c$ as the surface climbs monotonically up towards the boundary, while if $z'_+>0$, then $z_{max}$ is the point at which the surface turns around outside the shell, to be determined from the condition \begin{equation} H_+(z_{max})=0. \end{equation} The time at which the surface reaches the boundary is, on the other hand, given by \begin{equation} t=v_s(z_c)+\int_{z_c}^{z_{max}}\frac{dz}{h_+}\Bigg[\frac{E_+}{\sqrt{H_+(z)}}-1\Bigg]+\int_{0}^{z_{max}}\frac{dz}{h_+}\Bigg[\frac{E_+}{\sqrt{H_+(z)}}+1\Bigg]\,,\label{eq:t} \end{equation} while the area of the extremal surface reads \begin{equation} A=2L_2L_3z_*^3\Bigg[\int_{z_c}^{z_*}\frac{dz}{z^6\sqrt{H_-(z)}}+\int_{z_c}^{z_{max}}\frac{dz}{z^6\sqrt{H_+(z)}}+\int_0^{z_{max}}\frac{dz}{z^6\sqrt{H_+(z)}}\Bigg]\,.\label{eq:a} \end{equation} As we can see from here, all boundary quantities have now been given implicitly in terms of two parameters: the turning point $z_*$ and the crossing location $z_c$. \subsubsection{Early time behavior} At early times, right after the shell is released from rest, the geometry is close to being static, and we can work in an expansion around a static shell. The relevant extremal surfaces then lie close to constant $t$ surfaces, making it appropriate to use the $(z,t)$ coordinate system.
In particular, the shell trajectory near the turning point can be written in terms of a proper acceleration $a=\ddot{z}_s(0)$ as \begin{equation} z_s(\tau)=z_0+\frac{1}{2}a\tau^2+O(\tau^3)\,, \end{equation} where $a$ can be determined from the equation of motion of the shell. Expanding now the junction conditions to first order in powers of $\dot{z}_s=a\tau+O(\tau^2)$ gives \begin{equation} z'_+=\sqrt{h_+}z'_-,\quad t'_+=\frac{1-\sqrt{h_+}}{z_ch_+}\dot{z}_s z'_-\,. \end{equation} The boundary length and the area of the extremal surface are again given by eqs.~(\ref{eq:L}) and (\ref{eq:a}), while the boundary time becomes \begin{equation} t=t_s(z_c)+\int_0^{z_c}dz\frac{E_+}{h_+(z)\sqrt{H_+(z)}}\,.\label{eq:t2} \end{equation} Here, $t_s(z_c)$ is once again the shell trajectory, now parametrized in terms of $z$ and evaluated at the point where the extremal surface crosses the shell, $z=z_c$. In the following, we will for simplicity denote $t_s(z_c)\equiv t_c$. Finally, the proper time $\tau$ can at early times be approximated by the proper time measured by an observer at rest at $z=z_c$, \begin{equation} \tau\approx \sqrt{h_+(z_c)}t_c/z_c\,. \end{equation} At this point, an important observation is that $t'_+$ is proportional to $\dot{z}_s\approx a\tau$, which is small at early times. Being proportional to $t'_+$, $E_+$ is therefore also small, and we can expand eqs.~(\ref{eq:L}), (\ref{eq:a}) and (\ref{eq:t2}) in powers of $E_+$ and $\delta z\equiv z_c-z_0=a\tau^2/2$. Assuming $L$ to be large, so that $z_*$ has to be sizable as well, we obtain for $L$ \begin{equation} \frac{L}{2}=z_* \sqrt{\pi}\frac{\Gamma\left(\frac{2}{3}\right)}{\Gamma\left(\frac{1}{6}\right)}+O(z_*^{-3})+O(z_*^{-3}\tau^2)\,. \label{eq:largezstar} \end{equation} We see from here that $z_*$ varies in time only very slowly, as the time dependence is suppressed by an overall factor $z_*^{-3}$. Thus, to a good approximation, $z_*$ is fixed in terms of $L$.
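The leading coefficient in eq.~(\ref{eq:largezstar}) is just the pure-AdS strip integral, and can be checked numerically (an illustrative check we add here):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

z_star = 1.0   # turning point; the result below scales linearly with z_star

# Half-width of a strip whose extremal surface stays in pure AdS:
# L/2 = int_0^{z_*} dz / sqrt((z_*/z)^6 - 1)
integrand = lambda z: z**3 / np.sqrt(z_star**6 - z**6)
half_L, _ = quad(integrand, 0.0, z_star)

# Leading term of eq. (largezstar)
analytic = z_star * np.sqrt(np.pi) * gamma(2.0/3.0) / gamma(1.0/6.0)
assert abs(half_L - analytic) < 1e-6
```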
\begin{figure}[t] \begin{center} \includegraphics[scale=.7]{earlytimeanalytic.pdf} \caption{\label{fig:earlytimeanalytic} The time dependence of the HEE (solid blue curves), obtained through a numerical integration of the extremal surface equations of motion, compared to the early time analytic solution of eq.~(\ref{analyticres}) (dashed red curves). Here $z_0=0.5$ and $L=8$, while $c=(-1,0,0.1,0.3,1/3)$ (left to right). } \end{center} \end{figure} To first order in $E_+$, the boundary time now becomes \begin{equation} t=t_c+\int_0^{z_0}dz\frac{E_+}{h_+\sqrt{H_0(z)}}+O(\tau^3)\,, \end{equation} where $H_0(z)=H_+(z)-E_+^2$. For large $z_*$, the integral can be easily evaluated, leading to \begin{equation} t\approx t_c\left[1-\frac{(1-\sqrt{h_+(z_0)})^2}{2 z_0^5}a\right]\,,\label{eq:boundaryt} \end{equation} where we have used \begin{equation} E_+=-\frac{\sqrt{h_+(z_c)}(1-\sqrt{h_+(z_c)})}{z_c^5}z_*^3 a t_c \end{equation} and replaced $z_c$ by $z_0$, which is allowed to leading order in $\tau$. Similarly, we can expand the area in eq.~(\ref{eq:a}) in powers of $E_+$ and $\delta z$, which leads to a change in the area of \begin{equation} \Delta A=A(t)-A(t=0)= L_2 L_3\frac{\sqrt{h_+(z_0)}(1-\sqrt{h_+(z_0)})}{z_0^{5}} at_c^2\left[1-\frac{(1-\sqrt{h_+(z_0)})^2}{2z_0^{5}}a\right]\,.\nonumber \end{equation} Finally, combining the above results produces \begin{equation} \Delta A= \frac{1}{2}A_{\partial A}\frac{\sqrt{h_+(z_0)}\left(1-\sqrt{h_+(z_0)}\right)}{z_0^{5}}\frac{a t^2}{\left[1-\frac{(1-\sqrt{h_+(z_0)})^2}{2z_0^{5}}a\right]}\,, \label{analyticres} \end{equation} where we denote the area of the boundary entangling surface as $A_{\partial A}=2L_2L_3$. This formula nicely demonstrates the relation between the entanglement growth and acceleration at early times.
The early time behavior obtained here is compared to a full numerical integration of eqs.~(\ref{eq:L}), (\ref{eq:t}) and (\ref{eq:a}) in fig.~\ref{fig:earlytimeanalytic}, which shows an impressive agreement up to relatively large time scales. \subsubsection{Linear scaling} For large values of $L$, the quadratic early time behavior of the entanglement entropy is followed by a long regime of linear increase, where $\Delta S_\text{EE}\sim \Delta t$. In the case of Vaidya collapse, the existence of this region was demonstrated in \cite{Liu:2013qca}, where it was seen to emerge from extremal surfaces inside the horizon of the black hole. Our analysis below closely parallels the Vaidya case, and we refer the interested reader to \cite{Liu:2013qca} for more details. Our goal will be to provide a simple and hopefully intuitive picture of where and why the linear region appears in our setup, highlighting the main differences that arise due to the slower motion of the shell. The precise details of the shell motion are unimportant for what follows, so we will only use the fact that $z_s(v)$ is a monotonically increasing function of $v$ and that the shell does not move faster than the speed of light. A key assumption in our calculation is that the relevant extremal surfaces at late times are those that pass through the black hole horizon, which can indeed be shown to be true by numerically constructing the relevant surfaces. The equation of motion of extremal surfaces in the black hole region, cf.~eq.~(\ref{eq:zdiffeq}), can be written in a suggestive form \begin{equation} z'^2+V(z)=\mathcal{E},\quad V(z)=z_*^6\left(\frac{1}{z^2}-\frac{1}{z^6}\right),\quad \mathcal{E}=E_+^2, \end{equation} where we have neglected two terms subleading at large $z_*$. In the following, we will think of this equation as an EoM for a non-relativistic particle moving in the potential $V(z)$, with $x$ interpreted as a fictitious time coordinate.
This potential is plotted in fig.~\ref{fig:potential}. \begin{figure}[t] \begin{center} \includegraphics[scale=1.2]{potential.pdf} \caption{\label{fig:potential} The potential in the effective particle problem. Note that in this figure the boundary resides at $z=0$, while the black hole singularity lives at $z=\infty$.} \end{center} \end{figure} The rule for constructing the extremal surface is as follows. As we saw previously, just outside the shell the derivatives $z'$ and $v'$ are determined by the two parameters $z_*$ and $z_c$ through the junction conditions of eq.~(\ref{eq:matching}). This setup is clearly equivalent to a classical mechanics problem with a particle starting from $z=z_c$ with some fixed initial velocity $z'=v_0$ and with the requirement of having to end up at the boundary. As can be seen from figure \ref{fig:potential}, the potential has a maximum at some $z=z_m$, and for $z>z_m$ the force felt by the particle tries to pull it towards the singularity. The location of this point is given by \begin{equation} V'(z_m)=0 \quad \Rightarrow \quad z_m=3^{1/4}. \end{equation} There are clearly two ways to prevent the particle from falling into the black hole singularity. The first one is to have it start from $z_c > z_m$ and give it enough negative initial velocity to get over the potential barrier at $z_m$. This is, however, not allowed by the junction conditions, which determine the initial velocity through \begin{equation} v_0=-\frac{z_*^3}{z_c^5}A(z_c),\quad A(z_c)=\sqrt{z_c^2h(z_c)+\dot{z}_s^2}\sqrt{z_c^2+\dot{z}_s^2}-\dot{z}_s^2\,.\label{eq:v0} \end{equation} For general real values of $\dot{z}_s$, the function $A(z_c)$ takes negative values in the region $z_c>\bar{z}_c$, where $1<\bar{z}_c<2^{1/4}$ and where the upper bound is approached when we approach the Vaidya spacetime as $\dot{z}_s\rightarrow \infty$.
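Both of these statements are easy to confirm numerically; the sketch below (our own illustration, with $h(z)=1-z^4$ in units $z_h=1$ and a sample shell velocity) locates the maximum of $V$ at $z_m=3^{1/4}$ and the sign change of $A(z_c)$ inside the interval $(1,2^{1/4})$:

```python
import numpy as np
from scipy.optimize import brentq

z_star = 10.0                     # overall factor of V; drops out of z_m

def V(z):
    # Effective potential of the particle problem
    return z_star**6 * (1.0 / z**2 - 1.0 / z**6)

z_m = 3.0**0.25
assert V(z_m) > V(z_m - 1e-6) and V(z_m) > V(z_m + 1e-6)   # local maximum

def A(z_c, zs_dot):
    # Coefficient A(z_c) of eq. (v0), with h(z) = 1 - z^4
    h = 1.0 - z_c**4
    return np.sqrt(z_c**2 * h + zs_dot**2) * np.sqrt(z_c**2 + zs_dot**2) - zs_dot**2

# For a sample velocity, A changes sign at some 1 < zbar_c < 2^(1/4)
zs_dot = 1.0
zbar = brentq(lambda z: A(z, zs_dot), 1.0, 1.15)
assert 1.0 < zbar < 2.0**0.25
```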
This means that $A(z_c)$ is negative and $v_0$ positive for $z_c\ge z_m$, which implies that the particle will unavoidably fall into the singularity. A second way to reach the boundary is to start from $z<z_m$ and choose the initial velocity to be either positive but sufficiently small so that the particle will not get over the potential barrier, or alternatively even negative. For reasons that will become clear in a moment, it is the first case that turns out to be the relevant one for us. Then, the maximum value for the energy of the particle is \begin{equation} \mathcal{E}_{max}=V(z_m)=-\frac{z_*^6h(z_m)}{z_m^6}\,, \end{equation} so that $E_+$ should be bounded by $\sqrt{\mathcal{E}_{max}}$. In order to make the initial velocity $v_0$ given by eq.~(\ref{eq:v0}) small, we must obviously make the coefficient $A(z_c)$ very small. As $A(z_c)$ changes sign at $\bar{z}_c$, which falls in the interval $(1,z_m)$, we can do this by choosing $z_c$ to be sufficiently close to $\bar{z}_c$. Since we want the value of $v$ at which the particle reaches the boundary to be large, the particle should spend a long fictitious time $x$ in the bulk. After the above considerations, it should now be clear how to arrange this: We should choose the initial velocity to be such that the particle almost reaches $z_m$ and then turns around.
To quantify this statement, we expand the potential around $z=z_m$, obtaining \begin{equation} z'^2-\omega^2(z-z_m)^2 =-\omega^2\delta\mathcal{E},\quad \omega^2=-\frac{1}{2}V''(z_m),\quad \delta \mathcal{E}=(\mathcal{E}_{max}-\mathcal{E})/\omega^2\,, \label{eq:expandedeom} \end{equation} and then integrate this expression to get \begin{equation} \Delta x\approx \frac{2}{\omega}\int_{\sqrt{\delta\mathcal{E}}}\frac{dy}{\sqrt{y^2-\delta\mathcal{E}}}=\frac{1}{\omega}\log\left(\frac{\omega^2}{\mathcal{E}_{max}-\mathcal{E}}\right)\,.\label{eq:harmonicintegral} \end{equation} Here, we have used the integration variable $y\equiv z_m-z$ and neglected the contribution of the upper integration limit, as it is subleading in the limit of interest, $\delta\mathcal{E}\rightarrow 0$. The factor of 2 in front of the integral is due to the symmetry of the trajectory. The value of the boundary time coordinate of the extremal surface is determined by \begin{equation} v'=\frac{\sqrt{\mathcal{E}}-z'}{h}\,. \end{equation} As $z'$ is very small for most of the time, we can treat the right hand side of this equation as a constant, producing \begin{equation} t\approx -\frac{\sqrt{\mathcal{E}_{max}}}{h(z_m)}\Delta x. \end{equation} The area of the extremal surface is then given by \begin{equation} A\approx \frac{4 L_2L_3z_*^3}{z_m^6\omega}\int_{\sqrt{\delta\mathcal{E}}}\frac{dy}{\sqrt{y^2-\delta\mathcal{E}}}= \frac{2 L_2L_3z_*^3}{z_m^6}\Delta x\,, \end{equation} from which we obtain, using $\Delta x=-t\frac{h(z_m)}{\sqrt{\mathcal{E}_{max}}}$ and $\mathcal{E}_{max}=-\frac{z_*^6h(z_m)}{z_m^6}$, \begin{equation} A\approx 2 L_2L_3 \,\frac{\sqrt{-h(z_m)}}{z_m^3}\,t\,. \end{equation} Inspecting the obtained result, we clearly observe that the area increases linearly in time.
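The coefficient $\sqrt{-h(z_m)}/z_m^3$ of the linear growth just derived can be evaluated directly (an illustrative check we add, with $h(z)=1-z^4$ and $z_m=3^{1/4}$):

```python
import numpy as np

h = lambda z: 1.0 - z**4          # blackening factor in units z_h = 1
z_m = 3.0**0.25

v_E = np.sqrt(-h(z_m)) / z_m**3   # slope of the linear entanglement growth
assert np.isclose(v_E, np.sqrt(2.0) / 3.0**0.75)
assert abs(v_E - 0.620403) < 1e-6

# z_m maximizes sqrt(-h(z))/z^3 between the horizon and the singularity
zs = np.linspace(1.0, 2.0, 4001)
g = np.sqrt(-h(zs)) / zs**3
assert abs(zs[np.argmax(g)] - z_m) < 1e-3
```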
Following \cite{Liu:2013qca} in defining $v_\text{E}=\sqrt{-h(z_m)}/z_m^3$ as well as in denoting the thermal entropy density by $s_{th}=1/(4 G_N)$ and the area of the boundary entangling region by $A_{\partial A}=2 L_2 L_3$, we can now write \begin{equation} S_\text{EE}\approx A_{\partial A}s_{th}v_\text{E} t\,, \end{equation} where the numerical value of $v_\text{E}$ is found to be \begin{equation} v_\text{E}=\frac{\sqrt{2}}{3^{3/4}}\approx 0.620403\,. \end{equation} This is exactly the result obtained for the Vaidya case in \cite{Liu:2013qca}. The above discussion should make it clear that the result is independent of the shell trajectory and of the precise form of the junction conditions. \subsection{Linear scaling for general shapes} In ref.~\cite{Liu:2013qca}, the authors argue that in order to isolate the linear regime in the time evolution of the HEE, one can take a limit in which the turning point of the extremal surface is sent to infinity before considering the time to be large. In this limit, the time evolution of the extremal surface is mostly due to the region inside the black hole horizon but outside the shell (i.e.~within the AdS Schwarzschild metric), and the surface furthermore moves very little in the $x$ direction in comparison with the part in the pure AdS region. To obtain the leading contribution to the area of the surface, it is therefore sufficient to approximate it as moving only in the $v$ direction. This way, the area functional becomes \begin{equation} A=A_{\partial A}\int dz\frac{1}{z^3}\sqrt{-h \Big(\frac{dv}{dz}\Big)^2-2\frac{dv}{dz}}\,, \end{equation} where $A_{\partial A}$ is again the area of the boundary theory entangling region and we have parameterized the surface with $v=v(z)$, denoting $v'\equiv dv/dz$.
Within the black hole region, there is again a conserved ``Hamiltonian'' \begin{equation} \frac{\partial \mathcal{L}}{\partial v'}=-\frac{hv'+1}{z^3\sqrt{-h (v')^2-2v'}}=-E\,, \end{equation} from which we can easily solve \begin{equation} v'=\frac{1}{h}\left(\pm\frac{E}{\sqrt{E^2+\frac{h}{z^{6}}}}-1\right)\, , \end{equation} where the $\pm$ correspond to two branches of solutions. Upon integration, this finally gives \begin{equation} t=v_s(z_c)-\int_{z_c}^{z_{max}}\frac{dz}{h}\left(\frac{E}{\sqrt{E^2+\frac{h}{z^{6}}}}+1\right)-\int_{0}^{z_{max}}\frac{dz}{h}\left(\frac{E}{\sqrt{E^2+\frac{h}{z^{6}}}}-1\right)\,. \end{equation} Here, the trajectory of the shell appears only via $v_s(z_c)$, which is obtained by inverting the function $z_s(v)$. Next, let us concentrate on the dominant contribution to the above $z$ integral that originates from the region near $z=z_{max}$. As the first two terms of the Taylor expansion of $\sqrt{E^2+\frac{h}{z^{6}}}$ around this point vanish, the value of $t$ diverges logarithmically, signaling that these specific values of $E$ and $z_{max}$ correspond to a critical surface. This way, we are led to the two conditions \begin{equation} E^2+\frac{h(z_{max})}{z_{max}^{6}}=0,\quad \frac{\partial}{\partial z_{\max}}\frac{h(z_{max})}{z_{max}^{6}}=0\,, \end{equation} of which the second one can be solved for the value of $z_{max}$ and the first one for $E$, giving \begin{equation} z_{max}=z_m=3^{1/4},\quad E=\sqrt{-\frac{h(z_m)}{z_m^{6}}}\,. \end{equation} Using these results, the area becomes \begin{equation} A=A_{\partial A}\int_{z_c}^{z_{max}}\frac{dz}{z^{6}}\frac{1}{\sqrt{E^2+\frac{h}{z^{6}}}}+A_{\partial A}\int_{0}^{z_{max}}\frac{dz}{z^{6}}\frac{1}{\sqrt{E^2+\frac{h}{z^{6}}}}\,, \end{equation} and matching the main logarithmic contributions of the two integrals produces \begin{equation} A\approx A_{\partial A}\frac{-h(z_m)}{E z_m^{6}}t\,.
\end{equation} Finally, we approximate $E$ by its value at the critical surface and thereby obtain \begin{equation} A\approx A_{\partial A} v_\text{E} t\, , \quad v_\text{E}=\sqrt{-\frac{h(z_m)}{z_m^{6}}} \, , \end{equation} a result in full agreement with that of \cite{Liu:2013qca}. \section{Comparison with common approximation schemes} The collapsing shell model of gauge theory thermalization has been extensively used not only in the context of studying entropies, but also to evaluate various correlation functions, corresponding e.g.~to the electromagnetic current operator or the energy momentum tensor on the field theory side \cite{Lin:2008rw,Baier:2012tc,Baier:2012ax,Steineder:2012si,Steineder:2013ana,Stricker:2013lma}. While entanglement entropy calculations such as \cite{AbajoArrastia:2010yt,Albash:2010mv,Liu:2013iza,Liu:2013qca,Balasubramanian:2011ur} are typically performed in the Vaidya limit of a lightlike shell, the Green's functions are usually determined in the opposite `quasistatic' approximation, in which the shell is taken to be a static object when formulating the junction conditions for the corresponding bulk fields. Physically, this approximation amounts to assuming that the time scale associated with the collapse of the shell is considerably larger than any other time scales relevant for the system, such as the inverse energy scale $1/\omega$ of the two-point function considered. The calculations we have performed in this paper for the HEE allow us to make an interesting comparison between our `exact' results, derived for shells following realistic trajectories and employing the full junction conditions, and the corresponding quasistatic and Vaidya limits thereof. 
To obtain the former limit, we simply set $\dot{z}_s=0$ in the junction conditions of eq.~(\ref{eq:matching}), reducing them to the simple forms \begin{eqnarray} \frac{z'_+}{z'_-}&=& Z(0)=\frac{\alpha_+\alpha_-}{z_c^2} \,,\nonumber \\ \frac{v'_+}{z'_-}&=& V(0)=-\frac{\alpha_+\alpha_-}{z_c^2h_+(z_c)}\,,\label{qs} \end{eqnarray} where $z_-'$ is given by eq.~(\ref{inside}).\footnote{The physical timelike trajectories are, however, used elsewhere in the calculation.} At the same time, the Vaidya result is available by merely replacing the shell trajectory by an ingoing lightlike geodesic. Naively, we expect the quasistatic approximation to be valid only at the earliest times, i.e.~near the turning point of the shell, while the Vaidya limit should be approached at late times. \begin{figure}[t] \begin{center} \includegraphics[scale=.4]{4a.pdf}$\;\;\;\;\;$\includegraphics[scale=.4]{4b.pdf} \caption{\label{fig:staticapprox} Left: The HEE evaluated for a sphere of radius $R=2$ for $z_0=0.2$ and $c=0,\;0.33,\;1/3$ (from left to right). Shown here are the full results (solid red lines) as well as the quasistatic (black dashed lines) approximation thereof. Right: A zoom-in on the $c=0$ case, with the Vaidya limit (blue dotted line) added in the plot. } \end{center} \end{figure} In figure \ref{fig:staticapprox}, we display the result of the above comparison for the HEE evaluated for a sphere of radius $R=2$. Somewhat to our surprise, we observe from the left figure that, independent of the value of $c$, the quasistatic approximation appears to work rather well until fairly late times and only breaks down when the shell is very close to the horizon; to aid the comparison, we have marked with dashed vertical lines the boundary times, at which those extremal surfaces that intersect the shell very close to the horizon, at $z_s=0.99z_h$, anchor to the boundary. At this point, the deviation of the quasistatic result from the full one is still only at the $10\%$ level.
In the right figure, we take a closer look at the $c=0$ case and include in the figure also the Vaidya limit. We observe that the Vaidya result in turn gives a very good approximation of the full one already at relatively early times, and in particular that a combination of the quasistatic and Vaidya curves approximates the physical behavior of the entropy to a few percent level at all times. It is tempting to speculate whether this observation would generalize to other, more complicated observables as well. To understand the observed behavior, it is instructive to inspect how the full matching conditions of eq.~(\ref{eq:matching}) relate to the static approximation of eq.~(\ref{qs}). The result of this comparison is displayed in fig.~\ref{fig:relmc}, where we plot $Z(\dot{z}_s)/Z(0)$ in the upper and $V(\dot{z}_s)/V(0)$ in the lower part of the figure, both given as functions of the boundary time. Shown are three curves corresponding to the three different values of the shell initial data already inspected in fig.~\ref{fig:staticapprox}: $c=0$, 0.33 and 1/3, with $z_0$ set to 0.2 in each case. Comparing to fig.~\ref{fig:staticapprox}, we observe that in all three cases the deviation of the quasistatic HEE from the full result can to a good accuracy be attributed to the growth of the derivative terms in the junction conditions. In particular, the onset of the rapid growth of $Z(\dot{z}_s)/Z(0)$ in fig.~\ref{fig:relmc} coincides very accurately with the point in time when the quasistatic entropy starts to visibly deviate from the full result in fig.~\ref{fig:staticapprox}. \begin{figure}[t] \begin{center} \includegraphics[scale=.55]{matching2.pdf} \caption{\label{fig:relmc} A plot of the ratios $Z(\dot{z}_s)/Z(0)$ (upper part) and $V(\dot{z}_s)/V(0)$ (lower part) as functions of the boundary time. In both cases, the three curves correspond to the three cases displayed in fig.~\ref{fig:staticapprox}, i.e.~$z_0=0.2$ and $c=0,\;0.33,\;1/3$ (from left to right).
} \end{center} \end{figure} Having seen that the success of the quasistatic approximation can be traced back to the junction conditions for extremal surfaces, it is natural to ask to what extent one can understand the reason for the slow turning on of their derivative terms. To this end, we now switch to the $r$, $t$ coordinate system, in which the matching conditions can be shown to take the forms \begin{align} \frac{t'_+}{r'_-}\equiv T(\dot{r}_s)=\frac{\dot{r}_s}{f_+ f_-}(\beta_--\beta_+)\,, \\ \frac{r'_+}{r'_-}\equiv R(\dot{r}_s)=\frac{1}{f_-}(\beta_+\beta_--\dot{r}_s^2)\,, \end{align} with the quasistatic limit corresponding to \begin{equation} T(0)=0,\quad R(0)=\sqrt{\frac{f_+}{f_-}}\;, \end{equation} and the Vaidya one to \begin{equation} T(\infty)=-\frac{f_+-f_-}{2f_+f_-},\quad R(\infty)=\frac{f_++f_-}{2f_-}\,. \end{equation} Expanding these functions around $r=\infty$ in the case of a $d$-dimensional field theory, we obtain in the quasistatic approximation \begin{equation} T(0)=0,\quad R(0)=1-\frac{1}{2r^d}+O(r^{-2d}), \end{equation} while the Vaidya counterparts of these results become \begin{equation} T(\infty)=\frac{1}{2 r^{d+2}},\quad R(\infty)=1-\frac{1}{2r^d}\,. \end{equation} From here, we see that for $r\gg 1$ the two matching conditions quickly approach each other, and that in particular the asymptotic behavior of $R$ is independent of $\dot{r}_s$. This explains why the quasistatic entropy approximates the full results, and even the Vaidya limit, so well when the shell is released from close to the boundary. \begin{figure}[t] \includegraphics[scale=0.67]{quasid2.pdf}$\;\; $\includegraphics[scale=0.67]{quasid4.pdf}$\;\; $\includegraphics[scale=0.67]{quasid6.pdf} \caption{\label{fig:quasivaidya2} Comparison of the Vaidya (blue curve) and quasistatic (purple curve) limits of the HEE at $d=2$, 4 and 6, evaluated for a strip with $L=8$.
} \end{figure} Finally, we remark that it is clear from the above results that the quasistatic approximation should successively improve as the number of spatial dimensions in the system is increased --- a direct consequence of the $d$-dependence of the blackening factor in the AdS Schwarzschild metric, $h(z)=1-z^d$. This effect is indeed clearly seen in the three plots displayed in fig.~\ref{fig:quasivaidya2}, where we compare the quasistatic and Vaidya limits of the HEE at $d=2$, 4 and 6. In the quasistatic results, the trajectory of the shell is taken to be lightlike and to start from the boundary. \section{Conclusions} Many of the existing studies of holographic equilibration have been performed within highly simplified models, which however have the virtue of allowing the determination of rather complicated physical quantities. One prominent example of this is the description of black hole formation via the gravitational collapse of a thin shell of matter in AdS spacetime, typically motivated as resulting from a rapid quench in the dual field theory system. Until very recently, the determination of physical observables in this model has, however, only been possible with further simplifications, such as approximating the spacetime by its Vaidya limit, corresponding to a lightlike shell, or in the quasistatic approximation where the shell is taken to move arbitrarily slowly when formulating the so-called junction conditions. In our previous paper \cite{Keranen:2014zoa}, we took the first steps towards overcoming these limitations in the case of the Holographic Entanglement Entropy (HEE). In particular, we considered there the time evolution of the HEE in the background of a shell following its physical trajectory, varying both the equation of state and the turning point of the shell.
Doing so, we were able to verify the earlier conjecture of \cite{Liu:2013iza} concerning the existence of a linear regime in the time evolution of the HEE that only depends on the properties of the final state of the system. In the paper at hand, we have continued work in the direction of \cite{Keranen:2014zoa}. In particular, we have studied the universality of the early and late time behaviors of the evolution of the HEE, and provided the first-ever comparison of a fully dynamical thermalization calculation with its Vaidya and quasistatic limits. As expected, we observed that the quasistatic results provide a good approximation of the full ones at early times, while the Vaidya limit is approached at late times. Similarly, the quality of the quasistatic approximation improves when the shell EoS is chosen to minimize the rate of the collapse, while the Vaidya limit works better when the shell is allowed to accelerate more quickly. What we, however, found surprising was the fact that for many shell EoSs one can identify a brief period of overlap between the regions of validity of the quasistatic and Vaidya approximations, which seems to be at odds with the opposite nature of the two limits. Motivated by this, we performed a detailed investigation of the reason for this behavior, tracing it back to the form of the junction conditions for extremal surfaces at the shell and to the slow onset of derivative terms in them. This observation suggests that the behavior may well be of a somewhat universal nature, extending beyond the HEE, and raises hopes that the finding may eventually be used to simplify the holographic determination of many other dynamical quantities. Apart from improving the general understanding of thermalization dynamics, the most important outcome of our present work is the introduction of new technical tools to aid dynamical calculations within the collapsing shell model.
In particular, the construction of a coordinate system continuous at the location of the shell and the derivation of junction conditions for extremal surfaces penetrating the shell are results that we hope will find applications in many forthcoming works. \section*{Acknowledgements} We thank Rudolf Baier, Jan de Boer, Niko Jokela, Esko Keski-Vakkuri, and Javier Mas for useful discussions. V.K.~was supported by the European Research Council under the European Union's Seventh Framework Programme (ERC Grant agreement 307955), H.N.~by the Bielefeld Young Researcher's Fund, S.S.~by the FWF project P26328, O.T.~by a research program of the ``Stichting voor Fundamenteel Onderzoek der Materie (FOM)'', financially supported by the ``Nederlandse organisatie voor Wetenschappelijke Onderzoek (NWO)'', and A.V.~by the Academy of Finland, grant $\#$266185. \begin{appendix} \section{Equation of motion for the shell\label{sec:eom}} In this first appendix, we derive the equation of motion of a thin shell undergoing gravitational collapse in AdS$_5$ spacetime. To simplify our expressions, we suppress here the indices $\pm$ indicating whether we are inside or outside of the shell. All identities involving $f$ or $t$ can be seen to hold both inside and outside of the shell. The unit normal vector of the shell is easily seen to read \begin{equation} n^r = \sqrt{f+\dot{r}_s^2} \quad \text{and} \quad n^t = \frac{\dot{r}_s}{f} \, , \end{equation} where we have chosen the vector to point towards increasing $r$, and $f$ is evaluated at the location of the shell, $r=r_s$.
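As a quick sanity check of this normal vector (a numerical sketch added here, not part of the paper), one can verify that $n^\mu$ has unit norm and is orthogonal to the shell velocity $u^\mu=(\dot{t}_s,\dot{r}_s,\vec{0})$ in the metric $ds^2=-f\,dt^2+dr^2/f+r^2\,d\mathbf{x}^2$, with $\dot{t}_s=\sqrt{f+\dot{r}_s^2}/f$ following from the normalization $u\cdot u=-1$:

```python
# Numerical check: n = (n^t, n^r) = (rdot/f, sqrt(f + rdot^2)) is a unit
# spacelike vector orthogonal to the shell velocity u = (tdot, rdot, 0)
# in the metric ds^2 = -f dt^2 + dr^2/f + r^2 dx^2 (components ordered (t, r)).
import math

def norm_and_ortho(f, rdot):
    tdot = math.sqrt(f + rdot**2) / f          # from u.u = -1
    n_t, n_r = rdot / f, math.sqrt(f + rdot**2)
    norm = -f * n_t**2 + n_r**2 / f            # expect +1 (unit spacelike)
    ortho = -f * n_t * tdot + n_r * rdot / f   # expect 0 (orthogonality)
    return norm, ortho

for f, rdot in [(0.5, -0.3), (2.0, 1.7), (3.7, 0.0)]:
    n, o = norm_and_ortho(f, rdot)
    assert abs(n - 1.0) < 1e-12 and abs(o) < 1e-12
print("unit norm and orthogonality confirmed")
```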
Using this, we see that the only nonzero components of the shell's extrinsic curvature, \begin{equation} K_{ij} = n_\alpha\left( \frac{\partial^2 y^\alpha}{\partial \xi^i \, \partial \xi^j} + \Gamma^\alpha_{\beta\gamma} \frac{\partial y^\beta}{\partial \xi^i}\frac{\partial y^\gamma}{\partial \xi^j}\right) \, , \end{equation} are \begin{equation} K_{\tau\tau} = \frac{f'+2\ddot{r}_s}{2\sqrt{f+\dot{r}_s^2}} \quad \text{and} \quad K_{xx} = K_{yy} = K_{zz} = -r_s\sqrt{f+\dot{r}_s^2} \; . \end{equation} The induced metric on the shell is on the other hand given by \begin{equation} \gamma_{ij} = \partial_i y^\mu\,\partial_j y^\nu\,g_{\mu\nu}\, , \end{equation} so that \begin{equation} \gamma_{\tau\tau} = -1 \quad \text{and} \quad \gamma_{xx} = \gamma_{yy} = \gamma_{zz} = r^2_s \, . \end{equation} Thus, the trace of the curvature tensor reads \begin{equation} K = \gamma^{ij} K_{ij} = -\frac{1}{\sqrt{f+\dot{r}_s^2}}\left( \ddot{r}_s+\frac{f'}{2}+\frac{3}{r_s}\left( f+\dot{r}_s^2\right)\right)\, . \end{equation} To derive the EoM of the shell, we use the Israel junction condition \cite{Israel:1966rt}. It states that the difference of the extrinsic curvature between the inside and outside of the shell is related to the energy-momentum content of the object through \begin{equation} \label{eq:Israeli} \left[ K_{ij}-\gamma_{ij}K\right] = -8\pi g_5 S_{ij} \, , \end{equation} where the square brackets denote the difference between the inside and outside, \begin{equation} \left[ \mathcal{O} \right] = \mathcal{O}_\mathrm{inside} - \mathcal{O}_\mathrm{outside}\, . \end{equation} The LHS of eq.~(\ref{eq:Israeli}) is zero for the non-diagonal terms, while for the diagonal terms the expression inside the square bracket is given by \begin{align} K_{\tau\tau} - \gamma_{\tau\tau} K & = -\frac{3}{r_s} \sqrt{f+\dot{r}_s^2} \, , \\ K_{xx}-\gamma_{xx} K & = \frac{1}{\dot{r}_s} \frac{d}{d\tau}\left( r_s^2 \sqrt{f+\dot{r}_s^2}\right) \, .
\end{align} \section{Determining the metric in the continuous coordinate system \label{sec:cont}} In this appendix, we determine the values of the partial derivatives $\PD{r,\tau,\lambda}$, $\PD{t,\tau,\lambda}$, etc.~needed to construct a coordinate system continuous at the shell, cf.~section \ref{sec:junctionconditions}. The equations determining spacelike geodesics are given by \begin{align} \label{eq:geodesic1} \frac{f'}{f} \frac{dr}{d\sigma}\frac{dt}{d\sigma} + \frac{d^2t}{d\sigma^2} & \;=\; 0\,,\\ \label{eq:geodesic2}-\frac{f'}{2f}\left( \frac{dr}{d\sigma} \right)^2 +\frac{1}{2} f\,f'\left( \frac{dt}{d\sigma} \right)^2 - r\,f\left( \frac{dx}{d\sigma} \right)^2 + \frac{d^2r}{d\sigma^2} &\;=\;0\,,\\ \label{eq:geodesic3}\frac{2}{r}\frac{dr}{d\sigma}\frac{dx}{d\sigma} + \frac{d^2x}{d\sigma^2} &\;=\;0\,, \end{align} where $'$ denotes a derivative with respect to $r$, and $\sigma$ is the affine parameter of the geodesic, identified as the proper length $\sigma = \lambda$. The trajectories we are interested in are defined at a constant $\mathbf{x}$, so that the equations can be integrated to give \begin{align} \label{eq:ftdot} f\frac{dt}{d\lambda} & = A\, ,\\ \label{eq:rdotsquared} \left(\frac{dr}{d\lambda}\right)^2 & = f + A^2 \, . \end{align} The value of the integration constant $A$ can be fixed by requiring that the geodesic points in the direction of the normal vector of the shell at its location, \begin{equation} \left.\label{eq:normal} \frac{dx^\mu}{d\lambda} \right|_{\lambda=0}= n^\mu \, , \end{equation} where $n^\mu$ has the form \begin{equation} \left[ n^\mu \right] = \left( \dot{r}_s/f, \sqrt{f+\dot{r}_s^2},\vec{0}\right) \, . \end{equation} Evaluating eq.~(\ref{eq:normal}) at the shell, we easily get \begin{align} \frac{dt}{d\lambda} &= \frac{A}{f(r_s)} = \frac{\dot{r}_s}{f(r_s)}\,,\\ \frac{dr}{d\lambda} &= \sqrt{f(r_s) + A^2} = \sqrt{f(r_s) + \dot{r}_s^2} \, , \end{align} from which we see that $A = \dot{r}_s$. 
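As a consistency check of these first integrals (an illustrative sketch, not part of the text), note that with $f\,dt/d\lambda=A$ and $(dr/d\lambda)^2=f+A^2$, the tangent vector of the geodesic automatically has unit norm, $-f\,(dt/d\lambda)^2+(1/f)(dr/d\lambda)^2=1$, confirming that the affine parameter is indeed the proper length:

```python
# Verify numerically that the first integrals f dt/dlambda = A and
# (dr/dlambda)^2 = f + A^2 give a unit-norm tangent vector:
# -f (dt/dlambda)^2 + (1/f)(dr/dlambda)^2 = -A^2/f + (f + A^2)/f = 1.
import math

def tangent_norm(f, A):
    tdot = A / f
    rdot = math.sqrt(f + A**2)
    return -f * tdot**2 + rdot**2 / f

for f, A in [(0.2, -1.5), (1.0, 0.0), (3.3, 0.7)]:
    assert abs(tangent_norm(f, A) - 1.0) < 1e-12
print("proper-length normalization confirmed")
```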
Since eqs.~(\ref{eq:ftdot}) - (\ref{eq:rdotsquared}) apply to spacelike geodesics, which by definition have constant $\tau$, these relations immediately produce the two partial derivatives \begin{equation} \PD{t,\lambda,\tau} = \frac{\dot{r}_s}{f(r)} \quad \text{and} \quad \PD{r,\lambda,\tau} = \sqrt{f(r) + \dot{r}_s^2} \; . \end{equation} To get the derivatives $\PD{r,\tau,\lambda}$ and $\PD{t,\tau,\lambda}$, we on the other hand need to differentiate the integrals of the equations of motion. Equation (\ref{eq:rdotsquared}) can be integrated to yield \begin{equation} \label{eq:lambdar} \int_0^\lambda d\lambda = \int_{r_s(\tau)}^{r(\tau,\lambda)} \frac{dr}{\sqrt{f(r)+\dot{r}^2_s(\tau)}} \,, \end{equation} which upon differentiation w.r.t.~$\tau$ leads to the expression \begin{equation} \PD{\lambda,\tau,\lambda} = \PD{r,\tau,\lambda} \frac{1}{\sqrt{f+\dot{r}_s^2}} -\frac{\dot{r}_s}{\sqrt{f_s+\dot{r}_s^2(\tau)}} - \int_{r_s(\tau)}^{r(\lambda,\tau)} dr \frac{\ddot{r}_s\dot{r}_s}{(f(r) + \dot{r}_s^2)^{\frac{3}{2}}} \, . \end{equation} Changing the integration variable in this equation to $\lambda$ and requiring that the expression vanish ($\lambda$ and $\tau$ are by definition independent variables) then gives \begin{equation} \PD{r,\tau,\lambda} = \dot{r}_s\sqrt{f+\dot{r}_s^2}\left(\frac{1}{\sqrt{f_s+\dot{r}_s^2}} + \int_0^\lambda d\lambda \frac{\ddot{r}_s}{f+\dot{r}_s^2} \right) \label{eq:drds} \, .
\end{equation} Finally, to get $\PD{t,\tau,\lambda}$, we solve $t$ as an integral of $\lambda$ from eq.~(\ref{eq:ftdot}) and then differentiate it w.r.t.~$\tau$ to get \begin{equation} \label{eq:dtds1} \PD{t,\tau,\lambda} - \dot{t}_s = \int_0^\lambda d\lambda \left[ \frac{\ddot{r}_s}{f} - \dot{r}_s\frac{f'}{f^2}\PD{r,\tau,\lambda}\, \right] \, , \end{equation} which quickly leads us to \begin{equation} \label{eq:dtds} \PD{t,\tau,\lambda} = \frac{f+\dot{r}_s^2}{f}\left(\frac{1}{\sqrt{f_s+\dot{r}_s^2}} + \int_0^\lambda d\lambda \frac{\ddot{r}_s}{f+\dot{r}_s^2} \right) = \frac{\sqrt{f+\dot{r}_s^2}}{f \, \dot{r}_s} \PD{r,\tau,\lambda} \, . \end{equation} Now that we have all the necessary partial derivatives at hand, we can see what our desired metric looks like. Setting $dx=0$, we get \begin{align*} ds^2 & = -f\,dt^2 + \frac{dr^2}{f}\\ & = d\tau^2 \left[-f \PD{t,\tau,\lambda}^2 + \frac{1}{f} \PD{r,\tau,\lambda}^2\right] + d\lambda^2 \left[ -f \PD{t,\lambda,\tau}^2+ \frac{1}{f} \PD{r,\lambda,\tau}^2\right]\\ & \qquad + 2\,d\lambda\,d\tau\,\left[ -f\PD{t,\tau,\lambda}\PD{t,\lambda,\tau}+\frac{1}{f}\PD{r,\tau,\lambda}\PD{r,\lambda,\tau} \right]\,, \end{align*} where the $\lambda\lambda$-component can be simplified using the fact that $\lambda$ was defined as a proper length, \begin{equation} g_{\lambda\lambda} = -f \PD{t,\lambda,\tau}^2 + \frac{1}{f} \PD{r,\lambda,\tau}^2 = -f \frac{\dot{r}_s^2}{f^2}+ \frac{1}{f}(f+\dot{r}_s^2) = 1 \, . \end{equation} The non-diagonal part of the metric is on the other hand given by \begin{align} \nonumber g_{\tau\lambda} = & -f \PD{t,\tau,\lambda}\PD{t,\lambda,\tau} + \frac{1}{f} \PD{r,\tau,\lambda}\PD{r,\lambda,\tau}\\ = & -f \frac{\dot{r}_s}{f} \frac{\sqrt{f+\dot{r}_s^2}}{f \, \dot{r}_s} \PD{r,\tau,\lambda} + \frac{1}{f} \sqrt{f+\dot{r}_s^2} \PD{r,\tau,\lambda}\\ \nonumber = & \, 0 \, , \end{align} implying that the metric is everywhere diagonal.
Finally, the $\tau\tau$-component is given by the (by construction continuous) function \begin{align} \nonumber g_{\tau\tau} = & -f \PD{t,\tau,\lambda}^2 + \frac{1}{f} \PD{r,\tau,\lambda}^2\\ = & -f \frac{f+\dot{r}_s^2}{f^2 \dot{r}_s^2}\PD{r,\tau,\lambda}^2 + \frac{1}{f} \PD{r,\tau,\lambda}^2\\ \nonumber = & -\left(\sqrt{\frac{f+\dot{r}_s^2}{f_s+\dot{r}_s^2}} + \sqrt{f+\dot{r}_s^2} \int_0^\lambda d\lambda \frac{\ddot{r}_s}{f+\dot{r}_s^2} \right)^2 \, . \end{align} It is important to note that although the result for the $\tau\tau$ component of the metric looks complicated and depends on the function $f$, on the shell it is equal to just $-1$, independent of the functional form of $f(r)$. This reflects the fact that $\tau$ was defined as the proper time of the shell. \section{Generalized junction conditions \label{genjun}} In this appendix, our goal is to generalize the junction conditions to the case where there are two unknown functions in the metric, \begin{equation} ds^2 = -f(r) \,dt^2 + \frac{dr^2}{g(r)} + r^2\,d\mathbf{x}^2 \, . \end{equation} Following the above treatment, we again first construct the continuous coordinate system, and then proceed to derive the junction conditions. We will again take the location of the shell to be parameterized by $(t_s(\tau),r_s(\tau))$; this time the relation between $r_s$ and $t_s$, however, reads \begin{equation} \dot{t}_s^2 = \frac{1}{f}\left( \frac{\dot{r}_s^2}{g}+1\right) \, , \end{equation} while the normal vector of the shell is given by \begin{equation} \left[ n^\mu \right] = \left( \frac{\dot{r}_s}{\sqrt{fg}}, \sqrt{g+\dot{r}_s^2},0,\ldots\right) \, .
\end{equation} The geodesic equations are now seen to take the forms \begin{align} \frac{f'\dot{r}\dot{t}}{f}+\ddot{t} & = 0,\,\\ -\frac{g'}{2g}\dot{r}^2 + \frac{1}{2} g f' \dot{t}^2 + \ddot{r} & =0,\, \end{align} which --- taking into account that the affine parameter is the proper length of a space-like geodesic --- can be integrated to \begin{align} f\,\dot{t} & =A\, ,\\ \dot{r}^2 &= g\left( 1+\frac{A^2}{f}\right)\, . \end{align} Requiring finally that the geodesic is normal to the shell, \begin{equation} \frac{dx^\mu}{d\lambda}\bigg|_{\lambda=0} = n^\mu \, , \end{equation} we obtain for the constant $A$ \begin{equation} A = \sqrt{\frac{f_s}{g_s}} \dot{r}_s\, . \end{equation} At this point, we can again read off the necessary partial derivatives at the shell, obtaining \begin{equation} \PD{t,\lambda,\tau} = \frac{\dot{r}_s}{\sqrt{f_sg_s}}\;, \quad \PD{r,\lambda,\tau} = \sqrt{g_s+\dot{r}_s^2}\;,\quad \PD{t,\tau,\lambda} = \sqrt{\frac{g_s+\dot{r}_s^2}{f_s g_s}}\;,\quad\PD{r,\tau,\lambda}=\dot{r}_s \, . \end{equation} Inserting these into the relations (cf.~eqs.~(\ref{eq:dlambdadx})--(\ref{eq:dsdx})) \begin{align} \frac{d\lambda}{d\sigma} & = \frac{\PD{r_+,\tau,\lambda}\frac{dt_+}{d\sigma}-\PD{t_+,\tau,\lambda}\frac{dr_+}{d\sigma}}{\PD{r_+,\tau,\lambda}\PD{t_+,\lambda,\tau}-\PD{r_+,\lambda,\tau}\PD{t_+,\tau,\lambda}}\,,\\ \frac{d\tau}{d\sigma} & = \frac{\PD{r_+,\lambda,\tau}\frac{dt_+}{d\sigma}-\PD{t_+,\lambda,\tau}\frac{dr_+}{d\sigma}}{\PD{r_+,\lambda,\tau}\PD{t_+,\tau,\lambda}-\PD{r_+,\tau,\lambda}\PD{t_+,\lambda,\tau}} \, , \end{align} we get at the location of the shell \begin{align} \frac{d\lambda}{d\sigma} &= -\sqrt{\frac{f_+}{g_+}}\dot{r}_s \frac{dt_+}{d\sigma}+\frac{\sqrt{g_++\dot{r}_s^2}}{g_+}\frac{dr_+}{d\sigma}\,, \\ \frac{d \tau}{d\sigma} & = \sqrt{\frac{f_+}{g_+}}\sqrt{g_++\dot{r}_s^2}\frac{dt_+}{d\sigma} -\frac{\dot{r}_s}{g_+}\frac{dr_+}{d\sigma} \, . 
\end{align} Using the chain rule, we next express $dt/d\sigma$ and $dr/d\sigma$ inside of the shell as \begin{align} \frac{dt_-}{d\sigma} &= \PD{t_-,\tau,\lambda}\frac{d\tau}{d\sigma} + \PD{t_-,\lambda,\tau}\frac{d\lambda}{d\sigma}\,,\\ \frac{dr_-}{d\sigma} &= \PD{r_-,\tau,\lambda}\frac{d\tau}{d\sigma} + \PD{r_-,\lambda,\tau}\frac{d\lambda}{d\sigma}\,, \end{align} which, when evaluated at the shell, produce \begin{align} \frac{dt_-}{d\sigma} &= \sqrt{\frac{g_-+\dot{r}_s^2}{f_-g_-}}\frac{d\tau}{d\sigma}+\frac{\dot{r}_s}{\sqrt{f_-g_-}}\frac{d\lambda}{d\sigma}\,,\\ \frac{dr_-}{d\sigma} &= \dot{r}_s \frac{d\tau}{d\sigma} +\sqrt{g+\dot{r}_s^2}\frac{d\lambda}{d\sigma} \, . \end{align} Combining finally all the above results, we obtain as the generalized junction conditions \begin{align} \frac{dt_-}{d\sigma} & = \frac{dt_+}{d\sigma}\sqrt{\frac{f_+}{f_-g_-g_+}}\left( \beta_+\beta_- -\dot{r}_s^2\right)+\frac{dr_+}{d\sigma} \frac{\dot{r}_s}{g_+\sqrt{f_-g_-}}\left( \beta_+ - \beta_-\right)\, ,\\ \frac{dr_-}{d\sigma} & = \frac{dt_+}{d\sigma} \dot{r}_s\sqrt{\frac{f_+}{g_+}}\left(\beta_+-\beta_-\right)+\frac{dr_+}{d\sigma} \frac{1}{g_+}\left( \beta_+\beta_--\dot{r}_s^2\right)\,, \end{align} where $\beta_\pm$ are defined as before and the expression is evaluated at the location of the shell. \section{Spherical boundary region}\label{sec:spherical} In this last appendix, we provide some details for the computation of the HEE in the case where the boundary surface has the form of a sphere of radius $R$. In this case, the extremal surface is independent of the angular coordinates due to rotational symmetry, and we can parametrize it as $z=z(\rho)$, $v=v(\rho)$, with $\rho$ being the radial coordinate on the field theory side (i.e.~on the boundary).
The area functional then becomes \begin{equation} A=4\pi \int d\rho\frac{\rho^2}{z(\rho)^3}\sqrt{B}=4\pi \int d\rho \, \mathcal{L}\,, \end{equation} where $B$ is the same quantity as in the strip case except that the derivatives $z'$ and $v'$ are derivatives with respect to $\rho$. Due to the explicit appearance of $\rho$ in the area functional, there are fewer conserved quantities this time, making the spherical case slightly more complicated than the strip one. There is, however, still a partial time translational invariance away from the shell, which gives rise to the conservation law \begin{equation} \frac{\partial\mathcal{L}}{\partial v'}=-\frac{\rho^2(h v'+z')}{z^3\sqrt{B}}=E\,,\label{eq:energy2} \end{equation} with $E$ again taking different values on the two sides of the shell. In the interior region (assuming the boundary radius to be large enough so that the extremal surface passes through the shell) there is a turning point at $\rho=0$, where $z'$ and $v'$ vanish, which immediately tells us that $E=0$ in the interior, or \begin{equation} v'=-z'\,. \end{equation} Applying this identity in the Euler-Lagrange equation for $z(\rho)$, we obtain \begin{equation} z(\rho z''+2 z'^3+2 z')+3 \rho (1+z'^2)=0\,. \end{equation} This equation has a one-parameter family of solutions \begin{equation} z(\rho)=\sqrt{z_*^2-\rho^2}\,, \end{equation} labeled by the turning point $z_*$, identified as the most general regular solution with a turning point at $z=z_*$. For a second-order equation, one should specify two initial conditions, which in our case are chosen as $z(0)=z_*$ and $z'(0)=0$. Outside the shell, we use eq.~(\ref{eq:energy2}) to solve for $v'$ and plug this into the Euler-Lagrange equation for $z$, giving \begin{equation} 2(\rho^4 z h+E_+^2z^7)z''+2\rho^3 h (3\rho z'^2+2zz')+z(\rho^3 z'^2(-\rho h'+4 z')+E_+^2z^6 h')+6\rho^4 h^2=0\,. \end{equation} This equation needs to be solved numerically.
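In contrast, the interior solution quoted above can be verified directly. The following sketch (added here, not part of the paper) checks by finite differences that $z(\rho)=\sqrt{z_*^2-\rho^2}$ satisfies $z(\rho z''+2z'^3+2z')+3\rho(1+z'^2)=0$ for a sample value of $z_*$:

```python
# Finite-difference check that z(rho) = sqrt(z_*^2 - rho^2) solves the
# interior equation z (rho z'' + 2 z'^3 + 2 z') + 3 rho (1 + z'^2) = 0.
import math

def z_exact(rho, zstar=2.0):
    return math.sqrt(zstar**2 - rho**2)

def residual(rho, h=1e-5):
    # central finite differences for z'(rho) and z''(rho)
    z = z_exact(rho)
    zp = (z_exact(rho + h) - z_exact(rho - h)) / (2 * h)
    zpp = (z_exact(rho + h) - 2 * z + z_exact(rho - h)) / h**2
    return z * (rho * zpp + 2 * zp**3 + 2 * zp) + 3 * rho * (1 + zp**2)

print(max(abs(residual(r)) for r in (0.2, 0.7, 1.2, 1.7)))  # ≈ 0 up to FD error
```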
Noting that the interior surface satisfies $v'=-z'$, the junction condition for the derivatives at the position of the shell is again given by eq.~(\ref{eq:matching}), with the derivatives now understood as derivatives with respect to $\rho$. In this way, the value of the constant $E_+$ is fixed to be \begin{equation} E_+=-\frac{\rho_c^2(h_+ v'_++z'_+)}{z_c^3\sqrt{1-h_+ (v_+')^2-2v_+'z_+'}}\,,\label{eq:energy3} \end{equation} which completes our exercise. \end{appendix} \bibliographystyle{JHEP-2}
\section{Introduction} Web applications constitute valuable, up-to-date tools in a number of different settings. One such setting is the management of environmental problems, where they help protect civilians from any unfortunate consequences these problems can cause. Their evolution has therefore been especially important in many cases, one of them being the development of air quality management systems \cite{Triant2004}. The right of access to environmental information has been enacted at the European level through appropriate legislation, which has been incorporated into the relevant Greek legislation; see \cite {Council1}-\cite {Council5}. Nowadays, the combination of telecommunications and new technologies creates a framework for developing increasingly sophisticated systems of this kind \cite {Karatzas}-\cite {Triant_book}. It is precisely this diffusion of, and public access to, environmental information that was effectively pursued through the system codenamed EAP (Laboratory of Atmospheric Pollution and Environmental Physics) in Western Macedonia. It was first developed in 2002 \cite {anakoinosi}, providing direct information to the public about air quality, as recorded by the four atmospheric measurement stations established in the capitals of the prefectures of Kozani, Florina, Kastoria and Grevena, through an appropriate website as well as via SMS, with provisions for adding further stations and for accessing historical measurements \cite {triantEvazoras2006}. For every station, a previous and a current pollution index is displayed (on a scale of 1-10) with an appropriate colour scale \cite {Comeap}. The system was expanded and upgraded in May 2010; the upgrade concerned the transfer of data, the way of presentation, as well as the amount of information provided.
Specifically, the upgrade introduced: a) the combined use of different methods for transferring the terminal stations' measurement data to a central base station in real or near-real time, and b) the publication of the environmental information on the Internet through a properly designed dynamic website with Google Maps navigation \cite {TriantSkordas}, \cite {Skordas_Fragulis_Triant2011}, \cite {airlab}. In this paper the novel features of the EAP information system are presented in a dynamic, easily accessible and user-friendly manner. It consists of a structured system that users can access and manipulate thoroughly, as well as a system for accessing and managing measurement results in a direct and dynamic way. It provides updates about the weather and the pollution forecast for the next few days (based on current-day information) in Western Macedonia. These forecasts are displayed through dynamic, interactive web charts, together with a visual illustration of the atmospheric pollution of the region on a map using images and animations. Moreover, there is the option to view historical data. An additional new function is the use of online reports to monitor, analyze, check and process measurements, historical data and statistics of each station in real time over the Internet. This function focuses on providing an effective and user-friendly process. Finally, through the management system of the measurement stations, the administrator has the ability to dynamically create, modify and delete objects, points and information of each station on the Google Map. In this way the processing (update, delete, add) of points becomes easier. The A.Q.M.E.I.S. application has been developed using open-source software tools such as HTML, Javascript, PHP and MySQL. HTML is the language for the Internet interface design; its goal was to create a platform-independent language for constructing hypertext documents, so as to communicate multimedia information easily over the Internet \cite {web2008}.
Javascript is a client-side scripting language that provides powerful extensions to the HTML used to compose web pages and is mainly utilised for checking and validating web form input values, to make sure that no invalid data are submitted to the server \cite {Java1}. PHP is the most popular server-side programming language for use on Web servers. Any PHP code in a requested file is executed by the PHP runtime, usually to create dynamic web page content. It can also be used for command-line scripting and client-side GUI applications. PHP is a cross-platform programming language and can be used with many relational database management systems (RDBMS) \cite {php1}. MySQL is a high-performance, multi-threaded, multi-user RDBMS built around a client-server architecture. MySQL also uses the Structured Query Language standard, giving a great deal of control over this relational database system \cite {mysql}. Finally, the Apache server is responsible for awaiting requests from various programs (users) and then serving the pages, according to the standards set by the HTTP (Hypertext Transfer Protocol) \cite {apache}. \section{User Interface} In this section the user interface and the functions of this application are described. There are three (3) levels of user access (groups of users). On the first level, the user has the ability to be informed in real time about the weather conditions, the air pollution and the air pollution indices in an area of interest using the Google Map. The second level is for authorized users only, who can be informed analytically, through reports, about the measurements of a specific time period. The third level is for the administrator, who has access to all information and who also inserts, updates or deletes data from the database. The administrator can also intervene dynamically and manage all the information on the Google Map.
\subsection{Online Web Station Reports} The 'Online Web Station Reports' is a new online web feature which offers the approved members of the application the ability to monitor, analyze, check and process the measurements using statistics of each station in real time. Furthermore, it allows the retrieval of previous measurements; a function that did not exist in the web application until a short time ago. Login is achieved through a personal password that is given to the members by the support group of the web application. The users input the password and, after validation, they can perform a number of available functions in a safe and user-friendly online web environment. More specifically, the feature offers the following functions to its members: presentation of daily, weekly, monthly or user-defined values, either for all measured data of a chosen station or for specifically chosen measured parameters (sensors), with simultaneous calculation and presentation of the maximum, minimum, average, sum, number and percentage of the measurements in a table, and the ability to export the data to MS Excel. The functions of the new web features are described in detail below. There are four categories of Online Web station reports, i.e. daily, weekly, monthly and custom. Each report is displayed in three parts (forms). In the first form, by choosing a station, its image is displayed as well as various information about the specific station (Fig. 1). \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{sxima3.jpg} \caption{} \end{figure} If the user does not choose a station, an error report appears. By clicking on 'Next', the second form of the report appears, in which the user can choose which measure fields are to be shown, as well as the measurement time interval (5min or 60min) and the specific date on which those measurements were taken (Fig. 2).
\begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima4.jpg} \caption{} \end{figure} The dates differ according to the report category that the user chooses; more specifically: a) Daily report: the current date appears. b) Weekly: the first day of the current week is set as the starting date. c) Monthly: the first day of the current month is set as the starting date. d) Periodical: the current dates are set as the interval (From - To), with the user being able to change it. If the user does not choose any measure fields, or chooses a date for which no measurements are recorded, the system displays an error message. By clicking on 'Report' the algorithm moves to the last tab of the report, where a table appears dynamically with information about the hour, the measure fields, the measurement results and other statistical data (Fig. 3). \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima5.jpg} \caption{} \end{figure} Also, the algorithm calculates and displays the number of measurements (e.g. '100 measurements were found'), the current page and the total number of pages (e.g. 'page 1 from 12'). Depending on the number of records, an equal number of pages is created; the application displays 25 measurements per page. The users can also move to any page they wish so as to access any measurement of interest. Every measurement record also has a status field (numbers 0, 1 or 2), which is used to check the validity of the measurements of a field. If there are no results for a specific date in one field, the indication 'NO DATA' is displayed. If the measurements in a field are wrong for a specific date due to various factors, the indication 'Offscan' is displayed. Both checks are made based on the status field.
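The pagination and status handling just described can be sketched in Javascript, which the application already uses client-side. The concrete mapping of the status codes 0, 1 and 2 to missing, valid and offscan records is our assumption for illustration; the paper only states that the checks rely on this field:

```javascript
// Sketch of the report pagination and status handling (assumed semantics:
// status 1 = valid measurement, 0 = no data stored, 2 = offscan/invalid).
const PAGE_SIZE = 25; // the application displays 25 measurements per page

function totalPages(recordCount) {
  return Math.max(1, Math.ceil(recordCount / PAGE_SIZE));
}

function pageSlice(records, page) {
  const start = (page - 1) * PAGE_SIZE; // pages are numbered from 1
  return records.slice(start, start + PAGE_SIZE);
}

function displayValue(record) {
  if (record.status === 0) return 'NO DATA'; // no result for this date
  if (record.status === 2) return 'Offscan'; // invalid measurement
  return String(record.value);               // valid measurement
}
```

With 100 records this yields four pages of 25 entries each, in line with the per-page limit stated above.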
For every measure field at a specific moment the following statistics are computed: a) the average, b) the minimum value, c) the date and time at which the minimum value was found, d) the number of records of the minimum value, e) the maximum value, f) the date and time of record of the maximum value, g) the total number of measurements, h) the \% percentage. By pressing 'Excel' the measurements of the station can be exported in Excel form, which the user is able to open or save for later use (Fig. 4). \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima6.jpg} \caption{} \end{figure} \subsection{Manage Stations using GoogleMaps} Another innovation of the web application is the dynamic management of the measurements from each station in a simple way, using an online geographical web interface. Using this specific feature (Management of the Measurement Stations using GoogleMaps), the administrator can insert, delete and modify data easily and simply, so that the GoogleMap is updated dynamically. This gives the administrator the advantage of using the feature as a visualization platform without having to write a single line of code for this purpose. Moreover, an important element of the feature is the easy expansion and integration of N (N = count) measurement stations on the interactive GoogleMap. Station management is performed through the interactive interface of GoogleMap; the administrator of the application can dynamically insert, delete and modify a certain point (station) in an area (according to geographical latitude and longitude). To insert a certain station in the map, the following actions are required: a) the insertion of the Municipality of choice: the user chooses from a list the one to which the station belongs; b) the insertion of the type of measurement station: the user chooses whether the station is meteorological, one that measures pollution, or both.
All the data are stored in the application's MySQL database. As a last step, the administrator sets the name of the station, the longitude and latitude, the municipality, the address, the description, the type of station and the image of the station (Fig. 5). All this information is then stored in the database and retrieved from there to be displayed dynamically (both the points and the information) on the GoogleMap. On the map users can see meteorological information as well as information about pollution from the various stations and areas. For every station a previous and a current index of pollution appears (on a scale 1-10) with an appropriate corresponding colour scale. By clicking on each station point, the corresponding information (i.e. online measurements, air pollution indices for the previous and current day, general information about the station) is displayed. The user may also activate or deactivate one or more points on the map \cite {TriantSkordas}. To achieve the dynamic update of the measurement stations on the GoogleMap, the file airlab\_markers.php is called; it is responsible for the creation and update of the XML file. More specifically, the data of the application, i.e. the name and the measurements of each station, its geographical position, the general information with a representative photograph, and even the representation symbol, are retrieved in an XML structure by submitting the appropriate preset SQL query to the database via the corresponding code of the php page. The XML file has one root (top-level) element, namely '<markers> </markers>'; the remaining elements are nested within it. The appropriate structure of the XML file is produced by additional code in the airlab\_markers.php file, and all necessary checks of the validity of the data then take place.
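A minimal sketch of the serialisation performed by airlab\_markers.php, transcribed to Javascript for brevity; apart from the '<markers>' root element, the nested element and attribute names and the shape of the station rows are illustrative assumptions, not taken from the actual code base:

```javascript
// Builds the markers XML served to the GoogleMap front end.
// Only the <markers> root element is documented in the paper; the nested
// <marker> elements and their attributes are hypothetical ('log' mirrors
// the longitude field name used in the database tables).
function escapeXml(s) {
  return String(s)
    .replace(/&/g, '&amp;').replace(/</g, '&lt;')
    .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

function buildMarkersXml(stations) {
  const items = stations.map(s =>
    `  <marker title="${escapeXml(s.title)}" lat="${s.lat}" log="${s.log}"/>`);
  return ['<markers>', ...items, '</markers>'].join('\n');
}
```

The real page additionally nests per-station measurements, pollution indices and photographs; the sketch only shows the serialisation pattern and the validity escaping.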
The algorithm was realized in PHP scripts, and the feature is supported by the Internet Explorer, Firefox, Opera and Google Chrome browsers. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima7_old.jpg} \caption{} \end{figure} \subsection{Weather Forecast in West Macedonia with Dynamic Web Charts} An additional new feature is the weather forecast with the aid of dynamic web charts, which aims to deliver reliable and accurate weather information. It provides free, real-time and online weather information to web users, monitoring conditions and forecasts in the area of Western Macedonia for the next few days (Fig. 6). The information is produced on a high-end server in EAP / WMAQIS \cite {triantkrestou2011}, is read and stored in a database, and appears on the internet in the form of dynamic web graphs (Fig. 7). The meteorological parameters are temperature, humidity, wind speed, wind direction, accumulated precipitation, mixing height and total solar radiation. PHP scripts retrieve from the MySQL database the 24 average hourly values of each meteorological parameter (except for the accumulated precipitation, for which 6-hour totals are taken into account) for each location in Western Macedonia. Next, the information is displayed in a graph. The user can then choose a location to see the weather forecast. By choosing 'History', the previous meteorological measurements and figures are displayed using graphs. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima8.jpg} \caption{} \end{figure} \subsection{Air Pollution Forecast in West Macedonia} Another important part of the A.Q.M.E.I.S. application is the forecast of atmospheric pollution by pm10 (particulate matter) for the next few days in Western Macedonia (Fig. 8).
Our application dynamically displays these regions on a map using images; according to the pollution percentages in a certain region, the corresponding colour scale is shown, denoting the levels of pollution. By choosing 'region', 'pollutant agent', 'source of emission' and 'date', the pollution for the previous and current dates, as well as for the next three days, is displayed. This part of the application uses javascript, while a very small part of the code was written in PHP (dates management). The air pollution model produces image files (xxx.jpg) on the hard disc of the server, which javascript locates and then displays. In cases where not enough environmental data exist for a certain date, an image appears entitled 'Pollution Image Display Unavailable'. The necessary validation and integrity checks of the dates (from-to) are also made. Finally, by choosing 'history' the user can see older images of pollution rates in a certain region of Western Macedonia. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima7.jpg} \caption{} \end{figure} By choosing 'movement', a javascript algorithm is executed, animating the pollution illustrations in the area of interest (Fig. 9). \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima9.jpg} \caption{} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{sxima10.jpg} \caption{} \end{figure} \section{MySQL and Database Architecture} MySQL is a very fast and powerful database management system that allows the user to store, search, sort and recall data efficiently. The application stores all information in the MySQL database, so that it can be retrieved dynamically every time it is needed. The architecture of this database consists of a total of 23 tables.
Eight tables (s001t05, s001t60, s002t05, s002t60, s003t05, s003t60, s004t05, s004t60) contain the measurements collected from the stations; they are used in the reports and in the dynamic system for monitoring the air pollution via an interactive chart. 's00x' refers to station x, and t05 or t60 refers to the measurement time interval (i.e. an average 5- or 60-minute measurement). The primary key is Date\_Time, while the rest of the fields (value1 up to value32) store the meteorological and environmental data. The tables 'city\_info', 'points\_categories' and 'points' are used for the management of the stations on the Google Map. In particular, 'city\_info' stores the town in which each point (station) is located, along with additional information; its fields are 'id' (primary key), 'title', 'en\_title', 'lat' and 'log'. The table 'points\_categories' stores the station's category (fields: 'id' as primary key, 'title' and 'en\_title'). In the table 'points' the points are set up, along with additional information; its fields are 'id' (primary key), 'category', 'city', 'en\_city', 'address', 'title', 'description', 'lat', 'log', 'thumb' and 'image'. Finally, 12 tables (mamyntaio, mflorina, mgrevena, mkastoria, mkkomi, mkozani, mnpedio, mpetrana, mpontokomi, mptl, mservia, msiatista), which store the weather forecast information, are used; each table represents a certain location in Western Macedonia. The fields DATE and HOUR form the primary key, uniquely identifying each record. The rest (WDIR, TEMP, RHUM, TEMPSCR, RHUMSCR, TSR, NETR, SENS, EVAP, WSTAR, ZMIX, USTAR, LSTAR, RAIN and SNOW) are meteorological parameters. Tables for the pollution forecasts do not exist; those results are stored on the hard disc of the server as image files. A sample weather forecast table has the following form: mkozani (DATE, HOUR, WDIR, TEMP, RHUM, TEMPSCR, RHUMSCR, TSR, NETR, SENS, EVAP, WSTAR, ZMIX, USTAR, LSTAR, RAIN, SNOW). The proposed A.Q.M.E.I.S.
application is part of a system - an air quality monitoring network - which was developed in the Laboratory of Atmospheric Pollution and Environmental Physics of the Technological Education Institute of Western Macedonia to monitor the air quality in the Western Macedonia area, with industrial focus on the Ptolemais - Kozani basin. This system was co-financed by the TEIWM, the Regional Operational Programme 2000 - 2006 of Western Macedonia and recently by the municipality of Kozani. The architecture of this system comprises five terminal stations, which collect the environmental information, the central station and a web server. Different technologies (ADSL, GPRS, ETHERNET) are used to transfer the data to the central station. The data are sent every half an hour to the main station, which collects the complete set of data and transfers it to the web server every sixty minutes, where the application proposed in this paper provides meteorological, environmental, weather and air pollution forecast data for the West Macedonia area. Further details on the design of the above mentioned air quality monitoring network can be obtained from \cite {Triant2004}, \cite {TriantSkordas}, \cite {Skordas_Fragulis_Triant2011}. \section{Conclusion} An operational monitoring, as well as high resolution local-scale meteorological and air quality forecasting, information system (A.Q.M.E.I.S.) for Western Macedonia, Hellas, has been developed and is operated by the Laboratory of Atmospheric Pollution and Environmental Physics / TEI of Western Macedonia. In this paper this novel information system has been presented in a dynamic, easily accessible and user-friendly manner. The application was developed using state of the art web technologies (Ajax, Google maps etc.) and under the philosophy of open source software, which gives users/authors the ability to update and enrich the code so that their growing needs are met.
\section{Introduction} \label{s1} The Hamiltonian or canonical approach to quantum gravity \cite{1} aims at implementing the constraints as operators on a Hilbert space. In the classical theory, the constraints generate the Einstein equations via the Hamiltonian equations of motion \cite{2}. They underlie the numerical implementation of the initial value formulation of Einstein's equations, e.g. in black hole merger and gravitational wave template codes \cite{3}. The mathematically sound construction of canonical quantum gravity is a hard problem because the constraints are non-polynomial expressions in the elementary fields and in that sense much more non-linear than even the most complicated interacting QFT on Minkowski space such as QCD, whose Hamiltonian is still polynomial in gluon and quark fields. As the theory is non-renormalisable and thus believed to exist only non-perturbatively, the Loop Quantum Gravity (LQG) approach has systematically developed such a non-perturbative programme \cite{4}. LQG derives its name from the fact that it uses a connection rather than metric based formulation; hence it is phrased in the language of Yang-Mills type gauge fields and thus benefits from the non-perturbative technology introduced for such theories, specifically gauge invariant Wilson loop variables \cite{5}. The current status of LQG can be described as follows: While the quantum constraints can indeed be implemented in a Hilbert space representation \cite{6} of the canonical (anti-) commutation and adjointness relations as densely defined operators \cite{7}, and while their commutator algebra is mathematically consistent in the sense that it closes, it closes with the wrong structure ``functions''. The inverted commas refer to the fact that the classical constraints do not form a Lie Poisson algebra because for a Lie algebra it is required that one has structure constants.
By contrast, here we have non-trivial structure functions in the classical theory which are dictated by the fundamental hypersurface deformation algebra \cite{8}, and in the quantum theory they become operators themselves and are not simply constant multiples of the identity operator. We therefore call them structure operators. The most important missing step in LQG is therefore to correct those structure operators. It is for this reason that more recently Hamiltonian renormalisation techniques were considered \cite{9}. There one actually works with a 1-parameter family of gauge fixed versions of the theory \cite{10} so that the constraints no longer appear and are traded for a Hamiltonian which drives that one-parameter evolution. The reasons for doing so are twofold: On the one hand, working with the gauge fixed version means solving the constraints classically and saves the work of determining the quantum kernel and the Hilbert space structure on it. On the other hand, the techniques of \cite{9} were derived from Osterwalder-Schrader reconstruction \cite{11} which deals with theories whose dynamics is driven by an actual Hamiltonian rather than constraints (see however \cite{12}). Still, that Hamiltonian uniquely descends from the constraints and therefore its quantisation implicitly depends on the quantisation of the constraints. Therefore, the quantum constraints and their structure operators are implicitly also present in the gauge fixed version. In addition, in \cite{13} we have shown that the techniques of \cite{9} can be ``abused'' also for constrained quantum theories in the sense that the renormalisation steps to be carried out can be performed independently for all constraints ``as if they were actual Hamiltonians'', even if the corresponding operators are not bounded from below.
In that sense the methods of \cite{9} complement those of \cite{14} where the correction of the structure operators is approached by exploiting the spatial diffeomorphism invariance of the classical theory in an even more non-linear fashion than was already done in \cite{7}. The programme of \cite{9} rests on the following observation: In quantising an interacting classical field theory one cannot proceed directly but rather has to introduce at least a UV cut-off $M$, where we may think of $M^{-1}$ as a spatial resolution. Introducing $M$ produces quantisation ambiguities which are encoded in a set of parameters depending on $M$. Almost all points in that set do not define consistent theories, where a consistent theory is defined to be one in which the theory at resolution $M$ is the same as the theory at higher resolution $M'>M$ after ``integrating out'' the extra degrees of freedom. Renormalisation introduces a flow on these parameters whose fixed or critical points define consistent theories. In this way, the correct structure operators or algebra of constraints referred to above are also believed to be found, either explicitly or implicitly. In \cite{13} we have shown that this is what actually happens for the much simpler case of 2d parametrised field theory \cite{13a} whose quantum hypersurface deformation algebra coincides with the Virasoro algebra. The lesson learnt from this is that the quantum constraint algebra {\it must not even close} at any finite resolution even if {\it the continuum algebra closes with the correct structure operators}. In other words, it is physically correct that the finite resolution constraints are anomalous while the actual continuum theory is anomaly free.\\ \\ In \cite{15} we have further tested \cite{9} for free bosons (scalars and vector fields). Theories with fermions were not considered so far.
In this paper we close this gap, see also \cite{15a} for a closely related formulation.\\ \\ The architecture of this paper is as follows:\\ \\ In section \ref{s2} we briefly recall the bosonic theory from \cite{9}. In section \ref{s3} we adapt the bosonic theory to the fermionic setting. In section \ref{s4} we test the fermionic Hamiltonian renormalisation theory for free Dirac-Weyl fermions, both with and without IR cut-off, using the Dirichlet-Weyl kernel and confirm a manifestly doubler free spectrum at each resolution $M$ at the fixed point. The Nielsen-Ninomiya theorem \cite{16} is evaded because the finite resolution Hamiltonians are spatially non-local, as is usually the case when one ``blocks from the continuum'' i.e. computes the ``perfect Hamiltonian''. A similar observation was made in the context of QCD in the Euclidean action approach \cite{17}. In section \ref{s5} we summarise and conclude. \section{Review of Hamiltonian renormalisation for bosons} \label{s2} To be specific we will consider the theory either with IR cut-off so that space is a d-torus $T^d$ or without IR cut-off so that space is d-dimensional Euclidean space $\mathbb{R}^d$, and it will be sufficient to consider one coordinate direction as both spaces are Cartesian products. Thus $X=[0,1)$ or $X=\mathbb{R}$ in what follows.
Thus for simplicity we consider a bosonic, scalar quantum field $\Phi$ (operator valued distribution) with conjugate momentum $\Pi$ on $X$ with canonical commutation and adjointness relations (in natural units $\hbar=1$) \begin{equation} \label{3.1} [\Pi(x),\Phi(y)]=i\; \delta(x,y),\;\;\Phi(x)^\ast=\Phi(x),\;\Pi^\ast(x)=\Pi(x) \end{equation} where \begin{equation} \label{3.1a} \delta(x,y)=\sum_{n\in \mathbb{Z}}\; e_n(x)\; e_n(y)^\ast,\; e_n(x)=e^{2\pi\;i\;n\;x} \end{equation} is the periodic $\delta$ distribution on the torus or \begin{equation} \label{3.1b} \delta(x,y)=\int_{\mathbb{R}}\;\frac{dk}{2\pi} e_k(x)\; e_k(y)^\ast,\; e_k(x)=e^{i\;k\;x} \end{equation} on the real line respectively. It is customary to work with the bounded Weyl operators \begin{equation} \label{3.2} w[f,g]=\exp(i[\Phi(f)+\Pi(g)]),\;\; \Phi(f)=\int_{X}\; dx\; f(x)\; \Phi(x),\; \Pi(g)=\int_{X}\; dx\; g(x)\; \Pi(x) \end{equation} with $f,g\in L=L_2(X,dx)$ test functions or smearing functions, usually with some additional properties such as differentiability or even smoothness. For tensor fields of higher degree a similar procedure can be followed (see \cite{9}). Since the space $L$ enters the stage naturally, we use multi resolution analysis (MRA) language \cite{18} familiar from wavelet theory \cite{19} to define a renormalisation group flow. MRA's serve as a powerful organising principle to define renormalisation flows in terms of coarse graining kernels, and while the choice of the kernel should intuitively not have much influence on the fixed point or continuum theory (at least in presence of universality), the examples of \cite{13,18b} show that generic features such as smoothness can have an impact. In the most general sense an MRA is a nested sequence of Hilbert subspaces $V_M\subset L$ indexed by $M\in {\cal M}$ where $\cal M$ is partially ordered and directed by $\le$. That is, one has $V_M\subset V_{M'}$ for $M\le M'$ and $\cup_{M\in {\cal M}}\; V_M$ is dense in $L$.
Pick an ONB $d(M)^{1/2}\;\chi^M_m$ for $V_M$ where $m$ runs through a finite (countably infinite) index set $Z_M$ for $X=[0,1)$ ($X=\mathbb{R}$) respectively and $d(M)$ is a finite number. In case that $X=[0,1)$, typically $Z_M$ labels the lattice of points $x^M_m=m/d(M)$ and $d(M)=\dim(V_M)$ is the number of points in it. Let $L_M=l_2(Z_M)$ be the Hilbert space of square summable sequences indexed by $Z_M$ with inner product \begin{equation} \label{3.3} <f_M,g_M>:=d(M)^{-1}\;\sum_{m\in Z_M} \; f^\ast_M(m)\; g_M(m) \end{equation} This scalar product offers the interpretation of $f_M(m):=f(x^M_m),\; x^M_m:=\frac{m}{d(M)}$ and similarly for $g_M$ as the discretised values of some functions $f,g\in L$, in which case (\ref{3.3}) is the Riemann sum approximant of $<f,g>_L$. It is for this reason that we did not normalise the $\chi^M_m$. What follows works for any such choice of ONB indexed by $M$. However, to reduce the amount of arbitrariness and to give additional structure to MRA's one requires, both in wavelet theory and renormalisation, in addition that the ONB's descend from a few mother scaling functions $\phi$ by dilatations depending on $M$ and translations depending on $m$. In wavelet theory on the real line one is rather specific about the concrete descendance. In particular, there is only one mother scaling function, the $\chi^M_m$ and $\phi$ are linearly related, $\cal M$ just consists of the powers $M=2^N,\; N\in \mathbb{Z}$ and $\chi^M_m=\phi(M\;x-m)$. As advertised in \cite{18} we allow a more general descendance and thus accept a finite, fixed number of mother scaling functions and that the $\chi^M_m$ are dilatations and translations of a rational function of those mother scaling functions. This keeps the central idea of providing minimal structure to an MRA while increasing flexibility.
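As a concrete illustration (our example; later sections rather use Dirichlet and Shannon type kernels) consider $X=[0,1)$, ${\cal M}=\{2^N,\;N\in\mathbb{N}_0\}$, $d(M)=M$ and the Haar mother scaling function $\phi=\chi_{[0,1)}$, i.e. \begin{equation} \label{3.2c} \chi^M_m(x)=\phi(d(M)\;x-m)=\left\{ \begin{array}{cc} 1 & x\in [x^M_m,x^M_{m+1})\\ 0 & \mbox{else} \end{array}\right.,\;\; <\chi^M_m,\chi^M_{m'}>_L=\frac{\delta_{m,m'}}{d(M)} \end{equation} so that the $d(M)^{1/2}\chi^M_m,\;m\in Z_M=\{0,..,M-1\}$ indeed form an ONB of $V_M$, and the nesting $V_M\subset V_{2M}$ follows from the refinement relation $\chi^M_m=\chi^{2M}_{2m}+\chi^{2M}_{2m+1}$.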
The spaces $V_M, L_M$ are in bijection via \begin{equation} \label{3.3a} I_M:\; L_M\to L,\;f_M\mapsto \sum_m\; f_M(m)\; \chi^M_m \end{equation} Note that (\ref{3.3a}) has range in $V_M\subset L$ only. Its adjoint $I_M^\dagger:\; L\to L_M$ is defined by \begin{equation} \label{3.4} <I_M^\dagger f,f_M>_{L_M}:=<f,\; I_M\; f_M>_{L} \end{equation} so that \begin{equation} \label{3.5} (I_M^\dagger f)(m)=d(M)\; <\chi^M_m,f>_L \end{equation} Clearly \begin{equation} \label{3.6} (I_M^\dagger I_M f_M)(m)=d(M)\; <\chi^M_m, I_M f_M>_L =f_M(m) \end{equation} i.e. $I_M^\dagger I_M=1_{L_M}$ while \begin{equation} \label{3.7} (I_M I_M^\dagger f)(x)=d(M)\; \sum_m\; \chi^M_m(x) <\chi^M_m,f>_L =(p_M f)(x) \end{equation} is the projection $p_M:\; L\to V_M$. Given $M\le M'$ we define the coarse graining map \begin{equation} \label{3.8} I_{MM'}:=I_{M'}^\dagger \; I_M:\; L_M\to L_{M'} \end{equation} It obeys \begin{equation} \label{3.8a} I_{M'}\; I_{MM'}=p_{M'}\; I_M=I_M \end{equation} because $I_M$ has range in $V_M\subset V_{M'}$ for $M\le M'$. This is the place where the MRA property of the nested set of subspaces $V_M$ was important. Next for $M_1\le M_2\le M_3$ we have \begin{equation} \label{3.9} I_{M_2 M_3}\; I_{M_1 M_2}= I_{M_3}^\dagger\; p_{M_2}\; I_{M_1}= I_{M_3}^\dagger\; I_{M_1}=I_{M_1 M_3} \end{equation} for the same reason. This is called the condition of cylindrical consistency, which is crucial for the renormalisation group flow. To see the importance of (\ref{3.9}) we consider a probability measure $\nu$ on the space $\cal F$ of field configurations $\Phi$ which defines a Hilbert space ${\cal H}=L_2({\cal F},d\nu)$ and a representation space for the Weyl algebra $\mathfrak{A}$ generated from the Weyl elements (\ref{3.2}).
We set $w[f]:=w[f,g=0]$ and define the generating functional of moments of $\nu$ by \begin{equation} \label{3.10} \nu(f):=\nu(w[f]) \end{equation} If we restrict $f$ to $V_M$ we obtain an effective measure on the space of discretised quantum fields $\Phi_M=I_M^\dagger \Phi$ via \begin{equation} \label{3.11} w[I_M f_M]=w_M[f_M]=e^{i\Phi_M(f_M)},\; \Phi_M(f_M)=<f_M,\Phi_M>_{L_M} \end{equation} and \begin{equation} \label{3.12} \nu_M(f_M):=\nu(w[I_M f_M])=\nu_M(w_M[f_M]) \end{equation} The measures $\nu_M$ on the spaces ${\cal F}_M$ of fields $\Phi_M$ are consistently defined by construction \begin{equation} \label{3.13} \nu_{M'}(I_{MM'} f_M)=\nu_M(f_M) \end{equation} for any $M<M'$ since the $\nu_M$ descend from a continuum measure. Conversely, given a family of measures $\nu_M$ satisfying (\ref{3.13}), a continuum measure $\nu$ can be constructed, known as the projective limit of the $\nu_M$, under mild technical assumptions \cite{20}. To see the importance of (\ref{3.9}) for this to be the case, suppose we write $f\in L$ in two equivalent ways $f=I_{M_1} f_{M_1}=I_{M_2} g_{M_2}$; then we should have $\nu_{M_1}(f_{M_1})=\nu_{M_2}(g_{M_2})$. Now while $M_1,M_2$ may not be in relation, as $\cal M$ is directed we find $M_1,M_2\le M_3$. Applying $I_{M_3}^\dagger$ we conclude $I_{M_1 M_3} f_{M_1}=I_{M_2 M_3} g_{M_2}$, thus due to (\ref{3.13}) indeed \begin{equation} \label{3.14} \nu_{M_1}(f_{M_1})=\nu_{M_3}(I_{M_1 M_3} f_{M_1})= \nu_{M_3}(I_{M_2 M_3} g_{M_2})= \nu_{M_2}(g_{M_2}) \end{equation} In CQFT the task is to construct a representation of the Weyl algebra $\mathfrak{A}$ with additional properties such as allowing for the implementation of a Hamiltonian operator $H=H[\Phi,\Pi]$, which imposes severe restrictions on the Hilbert space representation.
One may start with discretised Hamiltonians \begin{equation} \label{3.15} H^{(0)}_M[\Phi_M,\Pi_M]:=H[p_M\Phi,p_M\Pi] \end{equation} on ${\cal H}^{(0)}_M:=L_2({\cal F}_M,\nu^{(0)}_M)$ where $\nu^{(0)}_M$ is any probability measure to begin with, for instance a Gaussian measure or a measure constructed from the ground state $\Omega^{(0)}_M$ of the Hamiltonian $H^{(0)}_M$. The point of using the IR cut-off is that there are only finitely many, namely $d(M)$, degrees of freedom $\Phi_M,\Pi_M$ which are conjugate \begin{equation} \label{3.16} [\Pi_M(m),\Phi_M(m')]=i\; d(M)\; \delta(m,m'),\;\;\Phi_M(m)^\ast= \Phi_M(m),\;\Pi_M^\ast(m)=\Pi_M(m) \end{equation} so that the construction of $\nu^{(0)}_M$ does not pose any problems. In case that there is no IR cut-off it is significantly harder to show that the theories exist even at finite UV cut-off. Assuming this to be the case, one fixes for each $M\in {\cal M}$ an element $M\le M'(M)\in {\cal M}$ and defines isometric injections \begin{equation} \label{3.17} J^{(n+1)}_{MM'(M)}:\; {\cal H}^{(n+1)}_M\to {\cal H}^{(n)}_{M'(M)},\;\; {\cal H}^{(n)}_M:=L_2({\cal F}_M,d\nu^{(n)}_M) \end{equation} via \begin{equation} \label{3.18} \nu^{(n+1)}_M(f_M):=\nu^{(n)}_{M'(M)}(I_{MM'(M)} f_M) \end{equation} and with these the flow of Hamiltonians \begin{equation} \label{3.19} H^{(n+1)}_M:=J_{MM'(M)}^\dagger\; H^{(n)}_{M'(M)}\; J_{MM'(M)} \end{equation} The isometry of the injections relies on the assumption that the span of the $w_M[f_M]$ is dense in ${\cal H}^{(0)}_M$, which is typically the case. This defines a sequence or flow (indexed by $n$) of families (indexed by $M$) of theories ${\cal H}^{(n)}_M,H^{(n)}_M$. At a critical or fixed point of this flow the consistency condition (\ref{3.13}) is satisfied (at first in the linearly ordered sets ${\cal M}(M):=\{(M')^N(M),\; N\in \mathbb{N}_0\}$ and then usually for all of $\cal M$ by universality) and one obtains a consistent family $({\cal H}_M,\;H_M)$.
This family defines a continuum theory $({\cal H}, H)$ as one obtains inductive limit isometric injections $J_M:\; {\cal H}_M \to {\cal H}$ such that $J_{M'} J_{MM'}=J_M,\;M\le M'$ thanks to the fixed point identity $J_{M_2 M_3}\;J_{M_1 M_2}=J_{M_1 M_3},\;M_1\le M_2\le M_3$ and such that \begin{equation} \label{3.20} H_M=J_M^\dagger \; H\; J_M \end{equation} is a consistent family of quadratic forms $H_M=J_{MM'}^\dagger\; H_{M'}\; J_{MM'},\; M\le M'$.\\ \\ We conclude this section by noting that wavelet theory actually also seeks to decompose the spaces as $V_{M'}=V_M\oplus W_M$ where $W_M$ is the orthogonal complement of $V_M$ in $V_{M'},\;M\le M'$, to provide an ONB for the $W_M$ and to require that this basis descends from a mother wavelet $\psi$ related to the scaling function in the same specific way as outlined above. For the purpose of renormalisation this additional structure is not essential, thus we will not go into further detail. We remark however that in \cite{18} we also generalised the notion of wavelets in the same way as for the scaling function, which again keeps the central idea of structuring the MRA, and showed that the Dirichlet and Shannon kernels are non-trivial realisations of that more general definition. \section{Hamiltonian renormalisation for Fermions} \label{s3} To distinguish the bosonic field $\Phi$ of the previous section from the present fermionic field we use the notation $\xi_B$ for a chiral (or Weyl) fermion where $B=1,2$ transforming in one of the two fundamental representations of $SL(2,\mathbb{C})$. Its Majorana conjugate $\epsilon\xi^\ast$ with $\epsilon=i\sigma_2$ (Pauli matrix) of opposite chirality then transforms in the dual fundamental representation.
We have the fundamental simultaneous canonical anti-commutation relations (CAR) \begin{equation} \label{6.31a} [\xi_B(x),\xi_C(y)^\ast]_+ := \xi_B(x)\;\xi_C(y)^\ast + \xi_C(y)^\ast \; \xi_B(x) =\delta_{BC}\; \delta(x,y) \end{equation} with all other anti-commutators vanishing. Dirac fermions and Majorana fermions can be considered as usual by using direct sum $SL(2,\mathbb{C})$ representations of independent Weyl fermions of opposite chirality or of the direct sum of a fermion with its Majorana conjugate. It will be sufficient to consider a single Weyl fermion species $\xi_B$ for what follows. The measure theoretic language used for bosons in the previous section does not apply, for several reasons: i. the ``Weyl elements'' $w[f]:=\exp(i\;<f,\xi>_L^\dagger)$ are not mutually commuting, ii. the $w[f]\Omega$ are not dense in the Fock space $\cal H$ defined by $<f,\xi>\Omega=0$ because in fact $w[f]=1_{{\cal H}}+i<f,\xi>^\dagger$ due to nilpotency and iii. $w[f]$ is not unitary. To avoid this one can formally work with Berezin ``integrals'' \cite{21} and anti-commuting smearing fields $f$, but then we cannot immediately transfer the functional analytic properties of the commuting test functions from the bosonic theory and, apart from serving as a compact organising tool, anti-commuting smearing functions do not have any advantage over what we say below. One of the motivations to work with Weyl elements rather than, say, $\Phi(f),\Pi(f)$ in the bosonic case is that the Weyl elements are bounded operators.
However, the operators $\xi(f):=<f,\xi>_L,\;\xi(f)^\dagger$ are already bounded by $||f||_L$ as follows from the CAR \begin{equation} \label{6.31} [\xi(f),\xi(f)^\ast]_+=||f||_L^2\; 1_{{\cal H}}\;\;\Rightarrow\;\; ||\xi(f)\;\psi||_{{\cal H}}^2,\;||\xi(f)^\ast\;\psi||_{{\cal H}}^2 \; \le\; ||f||_L^2\; ||\psi||_{{\cal H}}^2 \end{equation} The derivation of the renormalisation scheme given in \cite{9} in fact covers both the bosonic and fermionic case but the practical implementation for bosons used measures \cite{15}. We thus adapt the bosonic renormalisation scheme by reformulating it in an equivalent way which then extends to the fermionic case:\\ Given cyclic vectors $\Omega^{(n)}_M$ for the algebra generated by the \begin{equation} \label{6.31b} \xi_M(f_M):=<I_M f_M,\xi>_{L}=<f_M,\;I_M^\dagger \xi>_{L_M}= \frac{1}{d(M)}\;\sum_{B,m\in Z_M}\; [f^B_M(m)]^\ast\; \xi_{M,B}(m) \end{equation} and their adjoints (perhaps the vacua of the Hamiltonians $H^{(n)}_M$) we define the flow of isometric injections (e.g. for $M'=M'(M)$) \begin{equation} \label{6.32} J^{(n+1)}_{MM'}\;\Omega^{(n+1)}_M:=\Omega^{(n)}_{M'},\;\; J^{(n+1)}_{MM'}\; \Xi_M(F_{M,1})..\Xi_M(F_{M,N})\;\Omega^{(n+1)}_M := \Xi_{M'}(I_{MM'} F_{M,1}).. \Xi_{M'}(I_{MM'} F_{M,N})\;\Omega^{(n)}_{M'} \end{equation} Note that $\xi_M=d(M)\; I_M^\dagger\xi$ preserves the CAR in the sense that \begin{equation} \label{6.32a} [\xi_M(m),\;[\xi_M(m')]^\ast]_+=d(M)\; \delta_{mm'} \end{equation} and $\Xi(F)=\sum_B\; [<f_B,\xi_B>_L + <\tilde{f}_B,\xi_B>^\ast]$ where we have collected four independent smearing functions $f_B,\tilde{f}_B,\;B=1,2$ into one symbol $F$. The same notation is used in (\ref{6.32}) for the $M$ dependent quantities.
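The boundedness expressed in (\ref{6.31}) can be illustrated in a finite-dimensional toy model. The following sketch (our own illustration, not part of the construction above; the Jordan-Wigner representation and the mode number are arbitrary choices) realises three fermionic modes on $\mathbb{C}^8$ and verifies both the anticommutator identity $[\xi(f),\xi(f)^\ast]_+=||f||^2\,1$ and the resulting operator norm bound $||\xi(f)||\le ||f||$.

```python
import numpy as np

# Jordan-Wigner representation of n fermionic modes (finite-dimensional toy
# model illustrating the CAR bound; choices of n and f are arbitrary).
def jw_annihilators(n):
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sm = np.array([[0, 1], [0, 0]], dtype=complex)  # lowers the occupation number
    eye = np.eye(2, dtype=complex)
    ops = []
    for j in range(n):
        factors = [sz] * j + [sm] + [eye] * (n - j - 1)
        a = factors[0]
        for fac in factors[1:]:
            a = np.kron(a, fac)
        ops.append(a)
    return ops

n = 3
a = jw_annihilators(n)
f = np.array([0.3 - 0.1j, 0.7j, -0.5])
xi_f = sum(np.conj(fj) * aj for fj, aj in zip(f, a))   # xi(f) = <f, xi>

# CAR: [xi(f), xi(f)*]_+ = ||f||^2 * identity
anti = xi_f @ xi_f.conj().T + xi_f.conj().T @ xi_f
# operator (spectral) norm of xi(f), bounded by ||f||
op_norm = np.linalg.norm(xi_f, 2)
```

Since both $\xi(f)^\ast\xi(f)$ and $\xi(f)\xi(f)^\ast$ are positive and sum to $||f||^2\,1$, the spectral norm of `xi_f` in fact equals $||f||$ here.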
With these we define the flow of Hamiltonian quadratic forms as \begin{equation} \label{6.33} H^{(n+1)}_M:= [J^{(n+1)}_{MM'}]^\dagger\; H^{(n)}_{M'}\; J^{(n+1)}_{MM'} \end{equation} These formulas are even simpler than in the bosonic case because there is no fermionic Gaussian measure and corresponding covariance to consider. However, as in the bosonic case, one has to give initial data for this flow. This can be done, e.g., by defining \begin{equation} \label{6.33a} H^{(0)}_M[\xi_M,\xi_M^\ast]:= \;:\; H[p_M \xi,\;(p_M\xi)^\ast] \; : \; \end{equation} where $(p_M \xi)_B:=I_M\;I_M^\dagger \xi_B$, $H$ is the classical Hamiltonian and $:.:$ denotes normal ordering with respect to a Fock space ${\cal H}^{(0)}_M$ with cyclic Fock vacuum $\Omega^{(0)}_M$ annihilated by $A^{(0)}_{M,B}$ assembled from $\xi_{M,B},\;\xi_{M,B}^\ast$ as suggested by the form of $H[p_M \xi,\;(p_M\xi)^\ast]$. As in the bosonic case, the fields $\xi_{M,B}$ do not depend on the sequence label $n$ while the annihilators $A^{(n)}_{M,B}$ do, as one obtains them from the $\xi_{M,B}$ using extra discretised structure that depends on $M$, typically lattice derivatives and more complicated aggregates made from those (Dirac-Weyl operators, Laplacians, ..). \section{Hamiltonian renormalisation of free fermions and fermion doubling} \label{s4} In this section we will concretely choose the renormalisation structure as follows (see \cite{18} for more details): $Z_M$ will be the lattice of points $x^M_m$ with $m\in \mathbb{Z}$ if $X=\mathbb{R}$ and $m\in \mathbb{Z}_M:=\{0,1,2,..,M-1\}$ if $X=[0,1)$ respectively, and $d(M)=M$. The set $\cal M$ consists of the odd naturals with partial order $M\le M'$ iff $M'/M\in \mathbb{N}$. The renormalisation sequence will be constructed using $M'(M)=3M$ for simplicity.
The MRAs are based on the Shannon \cite{21} and Dirichlet \cite{22} kernels respectively, that is, \begin{equation} \label{6.0} \chi^M_m(x)= \left\{ \begin{array}{cc} \frac{\sin(M\;\pi\;(x-x^M_m))}{M\;\pi\; (x-x^M_m)} & X=\mathbb{R} \\ \frac{\sin(M\;\pi\;(x-x^M_m))}{M\;\sin(\pi (x-x^M_m))} & X=[0,1) \end{array} \right. \end{equation} Their span is dense in $V_M$ and they are mutually orthogonal with norm $M^{-1}$. The Dirichlet kernel is 1-periodic as it should be. Both have maximal value 1 at $x=x^M_m$, are symmetric about this point and (slowly) decay away from it, thus display some position space locality. They are real valued and smooth and have compact momentum support $k\in [-\pi M,\pi M]$ and $k=2\pi n,\; n\in \hat{\mathbb{Z}}_M=\{-\frac{M-1}{2}, -\frac{M-1}{2}+1,..,\frac{M-1}{2}\}$ respectively. Recall the following facts about the topologies of position space and momentum space via the Fourier transform where we denote by $M$ the spatial resolution of the lattice $x^M_m$ with either $m\in \mathbb{Z}$ or $m\in\mathbb{Z}_M=\{0,1,2,..,M-1\}$ where for $M$ odd we set $\hat{\mathbb{Z}}_M=\{-\frac{M-1}{2},..,\frac{M-1}{2}\}$ (c: compact, nc: non-compact, d: discrete, nd: non-discrete (continuous)): \begin{equation} \label{6.1} \begin{array}{ccc} {\sf space-topology} & {\sf momentum-topology} & {\sf Fourier-function}\\ & \\ {\sf nc,~~ nd:}\;\;\mathbb{R} & {\sf nc,~~ nd:}\;\;\mathbb{R} & e_k(x)=e^{i\;k\;x} \\ {\sf nc,~~ d:}\;\;\frac{1}{M}\cdot\mathbb{Z} & {\sf c,~~ nd:}\;\;[-M\pi,\;M\pi) & e^M_k(m)=e^{i\;k\; x^M_m} \\ {\sf c,~~ nd:}\;\;[0,1) & {\sf nc,~~ d:}\;\;\mathbb{Z} & e_n(x)=e^{2\pi\; i\; n\;x} \\ {\sf c,~~ d:}\;\;\frac{1}{M}\cdot \mathbb{Z}_M & {\sf c,~~ d:}\;\;\;\hat{\mathbb{Z}}_M & e^M_n(m)=e^{2\;\pi\;i\;n\;x^M_m} \end{array} \end{equation} Accordingly, in the non-compact and compact case respectively, the space of Schwartz test functions is a suitable subspace of $L=L_2(\mathbb{R},dx)$ and $L=L_2([0,1),dx)$ respectively which have momentum support in
$2\pi\mathbb{R}$ and $2\pi\cdot \mathbb{Z}$ respectively. Upon discretising space into cells of width $1/M$ the momentum support $\mathbb{R}$ and $\mathbb{Z}$ respectively gets confined to the Brillouin zones $[-\pi\;M,\pi M)$ and $\hat{\mathbb{Z}}_M$ respectively. The corresponding completeness relations or resolutions of the identity read \begin{eqnarray} \label{6.2} \delta_{\mathbb{R}}(x,x') &=& \int_{\mathbb{R}}\;\frac{dk}{2\pi}\; e_k(x-x') \nonumber\\ M\;\delta_{\mathbb{Z}}(m,m') &=& \int_{-\pi \;M}^{\pi M}\; \frac{dk}{2\pi}\; e^M_k(m-m') \nonumber\\ \delta_{[0,1)}(x,x') &=& \sum_{n\in \mathbb{Z}}\; e_n(x-x') \nonumber\\ M\;\delta_{\mathbb{Z}_M}(m,m') &=& \sum_{n\in \hat{\mathbb{Z}}_M}\; e^M_n(m-m') \end{eqnarray} While the first and third relation in (\ref{6.2}) define the $\delta$ distribution on $\mathbb{R}$ and $[0,1)$ respectively, the second and fourth relation in (\ref{6.2}) are the restrictions to the lattice of the regular functions \begin{eqnarray} \label{6.3} \delta_{\mathbb{R},M}(x) &=& \int_{-\pi \;M}^{\pi M}\; \frac{dk}{2\pi}\; e_k(x)=\frac{\sin(\pi\;M\;x)}{\pi\; x} \nonumber\\ \delta_{[0,1),M}(x) &=& \sum_{n\in \hat{\mathbb{Z}}_M}\; e_n(x) =\frac{\sin(\pi\;M\;x)}{\sin(\pi\; x)} \end{eqnarray} which we recognise as the Shannon (sinc) and Dirichlet kernel respectively. After dividing and dilating them by $M$ and translating them by $m$ we obtain precisely the functions (\ref{6.0}). These kernels can be considered as regularisations of the aforementioned $\delta$ distributions in the sense that the momentum integral $k\in \mathbb{R}$ or momentum sum $n\in \mathbb{Z}$ has been confined to $|k|<\pi M$ and $|n|\le\frac{M-1}{2}$ respectively. Both are real valued, smooth, strongly peaked at $x=0$ and have compact momentum support.
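The closed forms in (\ref{6.3}) are easy to confirm numerically. The following sketch (an illustration with an arbitrarily chosen odd $M$; all names are ours) checks the truncated Fourier sum against the Dirichlet expression and the band-limited momentum integral against the Shannon (sinc) expression.

```python
import numpy as np

M = 7                                  # odd resolution, as assumed in the text
xs = np.linspace(0.01, 0.49, 25)       # avoid x = 0, where both kernels -> M

# Dirichlet: sum over |n| <= (M-1)/2 of e^{2 pi i n x} = sin(pi M x)/sin(pi x)
ns = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)
dirichlet_sum = np.array([np.exp(2j * np.pi * ns * x).sum() for x in xs])
dirichlet_closed = np.sin(np.pi * M * xs) / np.sin(np.pi * xs)

# Shannon: int_{-pi M}^{pi M} dk/(2 pi) e^{i k x} = sin(pi M x)/(pi x),
# evaluated with a midpoint quadrature rule
edges = np.linspace(-np.pi * M, np.pi * M, 100001)
mid, dk = 0.5 * (edges[:-1] + edges[1:]), edges[1] - edges[0]
shannon_num = np.array([np.exp(1j * mid * x).sum().real * dk / (2 * np.pi) for x in xs])
shannon_closed = np.sin(np.pi * M * xs) / (np.pi * xs)
```

The imaginary part of the Fourier sum cancels mode by mode ($n$ and $-n$), consistent with the kernels being real valued as stated above.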
The Shannon kernel, like the Dirichlet kernel, is an $L_2$ function but it is not of rapid decay with respect to position.\\ \\ The simplest possible action for fermions is the massless, chiral theory in 2d Minkowski space \begin{equation} \label{6.11} S=i\int_{\mathbb{R}}\; dt\; \int_X\; dx\; \bar{\xi}\slashed{\partial}\xi \end{equation} Here $X=\mathbb{R}$ or $X=[0,1)$. The 2d Clifford algebra with signature $(-1,+1)$ is generated by $\gamma^0=\epsilon=i\sigma_2,\; \gamma^1=\sigma_1$ where $\sigma_1, \sigma_2,\sigma_3=\epsilon\sigma_1$ are the Pauli matrices. Then $\slashed{\partial}=\gamma^\mu\partial_\mu,\; x^0=t, x^1=x$ and $\bar{\xi}=(\xi^\ast)^T\; \gamma^0$. Due to $([\gamma^0 \gamma^\mu]^\ast)^T=\gamma^0\gamma^\mu$ the action is real valued. Generalisations to higher dimensions, massive theories, with more species or higher spin are immediate and just require the corresponding Clifford algebras. Then $i[\xi^A]^\ast,\; A=1,2$ is canonically conjugate to $\xi^A$ which results in the non-vanishing canonical anti-commutation relations (CAR) \begin{equation} \label{6.12} [\xi^A(x),(\xi^B)^\ast(y)]_+=\delta^{AB}\;\delta(x,y) \end{equation} and the Hamiltonian is \begin{equation} \label{6.13} H=-i\;\int_X\;dx\; \{[\xi^\ast]^T\sigma_3\; \xi'\}(x) \end{equation} with $\xi'=\partial\xi/\partial x$ which is linear in spatial derivatives. Indeed the Dirac-Weyl equation $\slashed{\partial}\xi=0$ is reproduced by the Heisenberg equation of (\ref{6.13}) \begin{equation} \label{6.13a} i\dot{\xi}=[H,\xi]=i\sigma_3 \xi'\;\;\Leftrightarrow\;\; \epsilon\dot{\xi}-\epsilon\sigma_3\xi'=\slashed{\partial}\xi=0 \end{equation} As (\ref{6.13}) is indefinite as it stands we introduce the self-adjoint projections on $L=L_2(X,dx)$ with $s=\pm 1$ \begin{equation} \label{6.14} Q_s=\frac{1}{2}[1_L+i\;s\frac{\partial}{\omega}]\;Q,\; Q=1_L-1\;<1,.>_L/||1||_L^2,\;\omega=\sqrt{-\partial^2},\;\; i\partial Q_s=s\; \omega Q_s \end{equation} Note $Q=1_L$ for $X=\mathbb{R}$.
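The algebraic identities underlying the action above can be checked directly. The following sketch (an illustration in numpy; nothing here is beyond the definitions in the text) verifies that $\gamma^0=i\sigma_2$, $\gamma^1=\sigma_1$ generate the Clifford algebra with signature $(-1,+1)$, that $\sigma_3=\epsilon\sigma_1$, and that the matrices $\gamma^0\gamma^\mu$ are hermitian, which is the matrix statement behind the reality of the action.

```python
import numpy as np

# Pauli matrices and the 2d gamma matrices used in the text
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
eps = 1j * s2
g = [eps, s1]                       # gamma^0 = i sigma_2, gamma^1 = sigma_1
eta = np.diag([-1.0, 1.0])          # signature (-1, +1)

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1_2
clifford_ok = all(
    np.allclose(g[mu] @ g[nu] + g[nu] @ g[mu], 2 * eta[mu, nu] * np.eye(2))
    for mu in range(2) for nu in range(2)
)
# sigma_3 = eps sigma_1, and hermiticity of gamma^0 gamma^mu
sigma3_ok = np.allclose(eps @ s1, s3)
hermitian_ok = all(np.allclose(g[0] @ g[mu], (g[0] @ g[mu]).conj().T) for mu in range(2))
```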
We then rewrite the Hamiltonian as \begin{eqnarray} \label{6.15} -H &=& <\xi_1,[Q_+ -\;Q_-]\omega\;\xi_1>_L -<\xi_2,[Q_+ -\;Q_-]\omega\;\xi_2>_L \\ &=& <Q_+ \xi_1,\;\omega\;Q_+\xi_1>_L -<Q_- \xi_1,\;\omega\;Q_-\xi_1>_L -<Q_+ \xi_2,\;\omega\;Q_+\xi_2>_L +<Q_- \xi_2,\;\omega\;Q_-\xi_2>_L \nonumber \end{eqnarray} Thus we declare \begin{equation} \label{6.14a} A_{1,+}:=(Q_+ \xi_1)^\ast,\; A_{1,-}:=Q_- \xi_1,\; A_{2,-}:=(Q_- \xi_2)^\ast,\; A_{2,+}:=Q_+ \xi_2 \end{equation} as annihilators and obtain the normal ordered, positive semi-definite Hamiltonian \begin{equation} \label{6.16} :H:=\sum_{B=1,2;\sigma=\pm}\; \int_X\; dx\; A_{B,\sigma}^\ast \;\omega\; A_{B,\sigma} \end{equation} where the $A_{B,\sigma}$ obey the CAR \begin{equation} \label{6.17} [A_{B,\sigma}(x),[A_{B',\sigma'}(x')]^\ast]_+= \delta_{BB'}\;\delta_{\sigma\sigma'}\;Q_\sigma(x,x') \end{equation} where $Q_s(x,x')$ is the integral kernel $(Q_s f)(x)=\int_X\; dx'\; Q_s(x,x') \; f(x')$. Note that the zero modes of $\xi_B$ do not contribute to $H$ so we have to quantise them without guidance from the form of the Hamiltonian. With $Q^\perp=1_L-Q$ we define $A_{B,0}:=Q^\perp \xi_B$ as the annihilation operator which is non-vanishing only for $X=[0,1)$. From this perspective, the problem of the fermion doublers on the lattice $\frac{1}{M}\mathbb{Z}$ or $\frac{1}{M}\mathbb{Z}_M$ for $X=\mathbb{R}$ and $X=[0,1)$ respectively is encoded in the way one discretises the partial derivative $\partial$ that appears in the projections $Q_s$ (in Hamiltonian renormalisation the time variable and time derivatives are kept continuous). For scalar theories, $\partial$ appears only quadratically in the Laplacian $\Delta=-\partial^2$ while for fermions it appears linearly. This problem is therefore not only present for fermions but for all theories in which, besides the Laplacian, the partial derivatives themselves are involved in the quantisation process.
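The action of the projections $Q_s$ of (\ref{6.14}) is most transparent on Fourier modes. The following sketch (our own mode-by-mode check, restricted to non-zero modes where $Q$ acts as the identity) verifies that on $e_k$ the symbol of $Q_s$ is $\frac{1}{2}(1-s\,\mathrm{sign}(k))$, that it is a projection, and that the eigen-relation $i\partial Q_s=s\,\omega Q_s$ holds.

```python
import numpy as np

# On a non-zero Fourier mode e_k: partial -> i k, omega -> |k|, so the symbol
# of Q_s = (1 + i s partial/omega)/2 is (1 - s sign(k))/2. Checks are mode by mode.
checks = []
for k in [2 * np.pi * n for n in (-3, -1, 1, 2, 5)]:
    for s in (+1, -1):
        q_s = 0.5 * (1 + 1j * s * (1j * k) / abs(k))    # symbol of Q_s on e_k
        checks.append(np.isclose(q_s, 0.5 * (1 - s * np.sign(k))))
        checks.append(np.isclose(q_s * q_s, q_s))       # projection property
        # i partial Q_s = s omega Q_s:  i(ik) q_s  vs  s |k| q_s
        checks.append(np.isclose(1j * (1j * k) * q_s, s * abs(k) * q_s))
```

In words: $Q_+$ keeps the negative-momentum modes and $Q_-$ the positive-momentum modes, which is exactly the split used to select the annihilators in (\ref{6.14a}).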
One such example is parametrised field theory which shares many features with string theory \cite{11}. Alternatively, this problem shows up in the discretisation of the 2-point functions of the theory (as the theory is free, the two point function determines all higher N-point functions). To compute them from the current Hamiltonian setting we use the CAR to compute the Heisenberg time evolution of the annihilators (from now on normal ordering is understood) \begin{equation} \label{6.17a} A_{B,\sigma}(t,x)=e^{-it H}\; A_{B,\sigma}(x)\; e^{it H}= [e^{it \omega} A_{B,\sigma}](x) \end{equation} where $Q_\sigma A_{B,\sigma}=A_{B,\sigma}$ was used. Then the non-vanishing two point functions are \begin{eqnarray} \label{6.17b} && <\Omega, \xi_B(s,x)\; \xi_C(t,y)^\ast\;\Omega> =<\Omega, ([Q_+ + Q_- + Q^\perp]\xi_B)(s,x)\; ([Q_+ + Q_- + Q^\perp]\xi_C)(t,y)^\ast\;\Omega> \nonumber\\ &=& <\Omega, \{\delta_{B,1}[A_{1,+}^\ast + A_{1,-} + A_{1,0}] +\delta_{B,2}[A_{2,+} + A_{2,-}^\ast + A_{2,0}]\}(s,x)\;\times \nonumber\\ && \{\delta_{C,1}[A_{1,+} + A_{1,-}^\ast + A_{1,0}^\ast] +\delta_{C,2}[A_{2,+}^\ast + A_{2,-}+A_{2,0}^\ast]\}(t,y)\;\Omega> \nonumber\\ &=& <\Omega, \{\delta_{B,1}\; [A_{1,-}+A_{1,0}] + \delta_{B,2} [A_{2,+}+A_{2,0}]\}(s,x)\; \{\delta_{C,1}\;[A_{1,-}^\ast+A_{1,0}^\ast] + \delta_{C,2}\; [A_{2,+}^\ast + A_{2,0}^\ast]\}(t,y)\;\Omega> \nonumber\\ &=& e^{is\omega_x-it \omega_y}\; \; \{ \delta_{1,B}\delta_{1,C}\;[Q_- +Q^\perp](x,y) +\delta_{2,B}\delta_{2,C}\;[Q_+ +Q^\perp](x,y) \} \nonumber\\ &=& \frac{1}{2} \;e^{is\omega_x-it \omega_y}\; \; \{\delta_{BC}(1+Q^\perp)-i\;[\sigma_3]_{BC}\frac{\partial_x}{\omega_x}\} \delta(x,y) \nonumber\\ &=& \frac{\delta_{BC}}{2\;||1||^2} +\int\; \frac{dk}{2\pi\;2\omega(k)}\; e^{i[\omega(k)(s-t)-k(x-y)]} [\omega(k)\; 1_2-k\sigma_3]_{BC} \nonumber\\ &=& \frac{\delta_{BC}}{2\;||1||^2} +\int\; \frac{dk}{2\pi\;2\omega(k)}\; e^{-i\; K\cdot(X-Y)} [K^0\; 1_2-K^1\sigma_3]_{BC} \nonumber\\ &=& \frac{\delta_{BC}}{2\;||1||^2}
-i\; [1_2 \partial_{X^0}+\sigma_3\;\partial_{X^1}]_{BC} \;\int\; \frac{dk}{2\pi\;2\omega(k)}\; e^{-i\; K\cdot(X-Y)} \nonumber\\ &=& \frac{\delta_{BC}}{2\;||1||^2} + i\;([\epsilon \partial_{X^0}+\sigma_1\;\partial_{X^1}]\;\epsilon)_{BC} \Delta_+(X-Y) \nonumber\\ &=& \frac{\delta_{BC}}{2\;||1||^2} +i\; [\slashed{\partial}_X \; \epsilon]_{BC} \Delta_+(X-Y) \end{eqnarray} with $K^0:=\omega(k)=|k|,\; K^1=k$ and $X^0=s,\; X^1=x, \;Y^0=t,\; Y^1=y$ and $K\cdot X=-K^0 X^0 +K^1 X^1$. Here $\Delta_+$ is the Wightman two point function of a free massless Klein-Gordon field in 2d Minkowski space \begin{equation} \label{6.17c} \Delta_+(X-Y)= \int\; \frac{dk}{2\pi\;2\omega(k)}\; e^{-i\; K\cdot(X-Y)} \end{equation} A similar computation yields ($X,Y$ and $B,C$ and $Q_+, Q_-$ switch and the contribution from $A_{B,0}$ is missing, leading to $-\delta_{BC}$ in the final result) \begin{equation} \label{6.17d} <\Omega, \; \xi_C(t,y)^\ast\;\xi_B(s,x)\;\Omega> =-\frac{\delta_{BC}}{2\;||1||^2} +i\epsilon[\epsilon \partial_{Y^0}-\sigma_3 \partial_{Y^1}]_{CB} \;\Delta_+(Y-X) =-\frac{\delta_{BC}}{2\;||1||^2} +i[\epsilon\; \slashed{\partial}_Y]_{CB}\; \Delta_+(Y-X) \end{equation} Using the conjugate spinor $\overline{\xi}=[\xi^\ast]^T\epsilon$ we may rewrite (\ref{6.17b}), (\ref{6.17d}) as \begin{eqnarray} \label{6.17e} <\Omega,\;\xi(X)\otimes \overline{\xi}(Y)\;\Omega> &=& \frac{\epsilon}{2\;||1||^2} +i\;\slashed{\partial}_X\; \Delta_+(X-Y),\;\; \nonumber\\ <\Omega,\;\overline{\xi}(Y)\otimes \xi(X)\;\Omega> &=& -\frac{\epsilon}{2\;||1||^2} +i\;\slashed{\partial}_Y\; \Delta_+(Y-X) \end{eqnarray} which gives the time ordered 2 point function or Feynman propagator \begin{eqnarray} \label{6.17f} && D_F(X-Y):=<\Omega,\;T[\xi(X)\otimes \overline{\xi}(Y)]\;\Omega> \nonumber\\ & :=& \theta(X^0-Y^0)\; <\Omega,\;\xi(X)\otimes \overline{\xi}(Y)\;\Omega> -\theta(Y^0-X^0)\; <\Omega,\;\overline{\xi}(Y)\otimes \xi(X)\;\Omega> \nonumber\\ &=&\slashed{\partial}_X \Delta_F(X-Y)
\end{eqnarray} where \begin{equation} \label{6.17g} \Delta_F(X-Y)=-i\;\lim_{\epsilon\to 0+}\;\int\; \frac{d^2K}{(2\pi)^2}\; \frac{e^{-iK\cdot(X-Y)}}{-K\cdot K-i\epsilon} \end{equation} is the Feynman propagator of the 2d massless Klein-Gordon field. We see that $\slashed{\partial}_X \;D_F(X-Y)=i\delta^{(2)}(X-Y)$ due to $\slashed{\partial}^2=\Box$, i.e. $D_F=i\slashed{\partial}^{-1}$. In Hamiltonian renormalisation one discretises only $x,\partial_x$ and confines only $|K^1|<\pi M$ while in the Euclidean approach one discretises also $t,\partial_t$ and confines $|K^0|<\pi M$. In any case we see that it is the projections $Q_s$ that directly translate into $\slashed{\partial}$ which is linear in the derivatives. If the propagator is to retain the property of inverting the Dirac-Weyl operator $\slashed{\partial}$ then we are forced to write the momentum expression of (\ref{6.17f}), say in the Hamiltonian approach, as \begin{equation} \label{6.17h} \frac{\epsilon\; K_0+\sigma_1\;\lambda_M(K_1)} {K_0^2-\lambda_M(K_1)^2-i\epsilon} \end{equation} where $[\partial_M e_{K_1}](X^1)=i\lambda_M(K_1)\;e_{K_1}(X^1),\; X^1\in\mathbb{Z}/M$ defines the eigenvalues of the discrete derivative and indices are moved with the Minkowski metric. The case $X=[0,1)$ is literally the same, just that we must sum over $k=K^1=2\pi n,\;n\in \mathbb{Z}$ rather than integrating over $K^1\in \mathbb{R}$ with measure $dK^1/(2\pi)$. Also the $Q^\perp$ contribution is now non-trivial but cancels in the Feynman propagator.
That is, all expressions remain the same except that we must replace $\Delta_+,\Delta_F$ by \begin{eqnarray} \label{6.17i} \Delta_+(X-Y) &=& \sum_{n\in \mathbb{Z}\setminus\{0\}}\; \frac{1}{2\omega(n)}\; e^{-i\; K\cdot(X-Y)},\;\omega(n)=2\pi |n|,\;K_1=2\pi n \nonumber\\ \Delta_F(X-Y) &=& -i\;\int\;\frac{dK^0}{2\pi}\;\sum_{n\in \mathbb{Z}}\; \frac{e^{-i\; K\cdot(X-Y)}}{-K\cdot K-i\epsilon},\; \;K_1=2\pi n \end{eqnarray} In the so-called ``naive'' discretisation one writes \begin{equation} \label{6.18} (\partial_M f_M)(m):=\frac{M}{2}[f_M(m+1)-f_M(m-1)] \end{equation} for $f_M\in L_M$, the Hilbert space of square summable sequences on the lattice. Using the Fourier functions $f_M(m)=e^M_k(m)=e_k(x^M_m)$ with $|k|<\pi M$ for $X=\mathbb{R}$ and $f_M(m)=e^M_n(m)=e_{2\pi n}(x^M_m)$ with $|n|\le\frac{M-1}{2}$ and $x^M_m=\frac{m}{M}$ with $m\in \mathbb{Z}$ or $m\in \mathbb{Z}_M$ respectively we find the eigenvalues $\lambda_M(k)$ given by $i\;M\sin(\frac{k}{M})$ and $iM\sin(\frac{2\pi n}{M})$ respectively. These vanish in the allowed domain of $k$ and $n$ respectively at $k=0,\; k=\pm \pi M$ and at $n=0$, $n=\frac{M}{2}$ if $M$ is even, otherwise only at $n=0$, with a corresponding doubler pole in the propagator when $K^0=0$. We see that there are no doublers in the compact case for lattices with odd numbers of points even with respect to the naive discretisation of the derivative. Still, even in the compact case and for odd $M$, the eigenvalue $i\;M\sin(\pi\frac{M-1}{M})=i\;M\;\sin(\pi/M)$ for $n=\frac{M-1}{2}$ approaches $i\pi$ for large $M$ while most other eigenvalues are large, of order $M$, and thus $n=\pm(M-1)/2$ can be considered as an ``almost'' doubler mode. We now show that the spectrum of $\partial_M$ is {\it doubler free} if we do not pick the naive discretisation but rather the {\it natural discretisation} provided by the maps $I_M,\;I_M^\dagger$ in terms of which the renormalisation flow is defined.
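The doubler structure of the naive derivative (\ref{6.18}) can be exhibited directly. The following sketch (an illustration; the values of $M$ are arbitrary) checks the eigenvalue formula $iM\sin(2\pi n/M)$ on the compact lattice and counts the zeros of the spectrum: one for odd $M$ (only $n=0$) versus two for even $M$ ($n=0$ and the doubler at $n=M/2$).

```python
import numpy as np

def naive_eigs(M):
    """Eigenvalues i M sin(2 pi n / M) of the naive derivative on Z_M."""
    n = (np.arange(-(M - 1) // 2, (M - 1) // 2 + 1) if M % 2
         else np.arange(-M // 2, M // 2))
    return 1j * M * np.sin(2 * np.pi * n / M), n

# direct check that e^M_n are eigenfunctions of (M/2)[f(m+1) - f(m-1)]
M = 9
modes = np.arange(M)
eigen_ok = True
for n in range(-(M - 1) // 2, (M - 1) // 2 + 1):
    f = np.exp(2j * np.pi * n * modes / M)
    df = 0.5 * M * (np.roll(f, -1) - np.roll(f, 1))   # periodic naive derivative
    eigen_ok &= np.allclose(df, 1j * M * np.sin(2 * np.pi * n / M) * f)

lam_odd, _ = naive_eigs(9)    # odd lattice: no exact doubler
lam_even, _ = naive_eigs(8)   # even lattice: doubler zero at n = M/2
```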
This discretisation is defined by \begin{equation} \label{6.19} \partial_M:=I_M^\dagger\; \partial\; I_M \end{equation} for both $X=\mathbb{R}$ and $X=[0,1)$ and is well defined whenever the MRA functions $\chi^M_m$ are at least $C^1$. Note that with this definition $\partial_M$ is automatically anti-symmetric since $\partial$ is. In fact, for the Haar flow, which is not $C^1$, we formally find \begin{equation} \label{6.20} \partial_M f_M(m)= M\;\sum_{\tilde{m}}\;<\chi^M_m,[\chi^M_{\tilde{m}}]'>_L\; f_M(\tilde{m})= -M\;\sum_{\tilde{m}}\;<[\chi^M_m]',\chi^M_{\tilde{m}}>_L\; f_M(\tilde{m}) =\frac{M}{2}(f_M(m+1)-f_M(m-1)) \end{equation} i.e. precisely the naive derivative, where we have formally integrated by parts in between and used that $\chi^M_m$ is of compact support for $X=\mathbb{R}$ and periodic for $X=[0,1)$ respectively. Thus the Haar flow results in the naive discretisation which yields the doubler troubled spectrum. Note that the map $I_M:\; L_M\to L$ has range in $V_M$ and in fact $I_M^\dagger:\; L\to L_M$ restricts to the inverse as $I_M^\dagger I_M=1_{L_M}$, i.e. $L_M, V_M$ are in bijection. Thus, if in fact $\partial$ preserves $V_M$ then the spectrum of $\partial_M$ will simply coincide with that of $\partial$ except that $k$ will be restricted from $\mathbb{R}$ to $[-\pi M,\pi M]$ and $n$ from $\mathbb{Z}$ to $\mathbb{Z}_M$. This is precisely what happens for both the Shannon and Dirichlet kernel as we will now confirm.
For the Shannon kernel in the case $X=\mathbb{R}$ we compute \begin{eqnarray} \label{6.21} (\partial_M \; f_M)(m) &=& M\; \sum_{\tilde{m}\in \mathbb{Z}}\; f_M(\tilde{m})\; <\chi^M_m,\partial\;\chi^M_{\tilde{m}}>_L \nonumber\\ &=& M\; \sum_{\tilde{m}\in \mathbb{Z}}\; f_M(\tilde{m})\; \int_{-\pi M}^{\pi M}\;\frac{dk}{2\pi}\; (ik)\; <\chi^M_m,e_k>_L\; <e_k,\chi^M_{\tilde{m}}>_L \nonumber\\ &=& \frac{1}{M}\; \sum_{\tilde{m}\in \mathbb{Z}}\; f_M(\tilde{m})\; \int_{-\pi M}^{\pi M}\;\frac{dk}{2\pi}\; (ik)\; e_{k}(x^M_m-x^M_{\tilde{m}}) \nonumber\\ &=& \sum_{\tilde{m}\in \mathbb{Z}}\; f_M(\tilde{m})\; [\partial_x\; \chi^M_{\tilde{m}}(x)]_{x=x^M_m} \nonumber\\ &=& \sum_{\tilde{m}\in \mathbb{Z}}\; f_M(\tilde{m})\; [\frac{y\cos(M\pi y)-(M\pi)^{-1}\sin(\pi M y)}{y^2}]_{y=x^M_m-x^M_{\tilde{m}}} \end{eqnarray} which displays the non-local nature of the discrete derivative as all points $\tilde{m}\in \mathbb{Z}$ contribute. However, (\ref{6.21}) vanishes at $m=\tilde{m}$ and takes the maximal value $\mp M$ at $m-\tilde{m}=\pm 1$ which shows that it approximates the naive derivative in the vicinity of $m$. On the other hand, for $f_M=e^M_k$ we find the exact eigenfunctions \begin{equation} \label{6.22} (\partial_M \; e^M_k)(m) = \frac{1}{M}\; \int_{-\pi M}^{\pi M}\;\frac{dq}{2\pi}\; (iq)\; e_{q}(x^M_m)\; \sum_{\tilde{m}\in \mathbb{Z}}\; e_{k-q}(x^M_{\tilde{m}})\; =ik\; e^M_k(m) \end{equation} with manifestly doubler free spectrum.
For the Dirichlet kernel in the case $X=[0,1)$ the computations are completely analogous \begin{eqnarray} \label{6.23} (\partial_M \; f_M)(m) &=& M\; \sum_{\tilde{m}\in \mathbb{Z}_M}\; f_M(\tilde{m})\; <\chi^M_m,\partial\;\chi^M_{\tilde{m}}>_L \nonumber\\ &=& M\; \sum_{\tilde{m}\in \mathbb{Z}_M}\; f_M(\tilde{m})\; \sum_{|n|\le \frac{M-1}{2}}\;(2\pi\;i n)\; <\chi^M_m,e_{2\pi n}>_L\; <e_{2\pi n},\chi^M_{\tilde{m}}>_L \nonumber\\ &=& \frac{1}{M}\; \sum_{\tilde{m}\in \mathbb{Z}_M}\; f_M(\tilde{m})\; \sum_{|n|\le \frac{M-1}{2}}\;(2\pi\;i n)\; e_{2\pi \;n}(x^M_m-x^M_{\tilde{m}}) \nonumber\\ &=& \sum_{\tilde{m}\in \mathbb{Z}_M}\; f_M(\tilde{m})\; [\partial_x\; \chi^M_{\tilde{m}}(x)]_{x=x^M_m} \nonumber\\ &=& \sum_{\tilde{m}\in \mathbb{Z}_M}\; f_M(\tilde{m})\;\pi\; [\frac{\sin(\pi y)\;\cos(M\pi y)-M^{-1}\sin(\pi M y) \cos(\pi y)}{\sin^2(\pi y)}]_{y=x^M_m-x^M_{\tilde{m}}} \end{eqnarray} which displays the non-local nature of the discrete derivative as all points $\tilde{m}\in \mathbb{Z}_M$ contribute. However, (\ref{6.23}) vanishes at $m=\tilde{m}$ and takes the maximal value $\mp M$ at $m-\tilde{m}=\pm 1$ which shows that it approximates the naive derivative in the vicinity of $m$. On the other hand, for $f_M=e^M_n$ we find the exact eigenfunctions \begin{equation} \label{6.24} (\partial_M \; e^M_n)(m) = \frac{1}{M}\; \sum_{\tilde{m}\in \mathbb{Z}_M}\; e^M_n(\tilde{m})\; \sum_{|\tilde{n}|\le \frac{M-1}{2}}\;(2\pi\;i \tilde{n})\; e^M_{\tilde{n}}(m-\tilde{m}) =2\pi\;i\; n \; e^M_n(m) \end{equation} with manifestly doubler free spectrum.\\ \\ We now study the Shannon or Dirichlet flow of the (non-)compact theory.
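The compact-case claim can be checked by assembling the matrix of the natural derivative from the Fourier modes of $V_M$ and diagonalising it. The following sketch (our own numerical illustration for an arbitrarily chosen odd $M$) confirms that each $e^M_n$ is an exact eigenfunction with eigenvalue $2\pi i n$ and that the full spectrum is exactly $\{2\pi i n,\;|n|\le\frac{M-1}{2}\}$, with a single zero at $n=0$, i.e. doubler free.

```python
import numpy as np

# Natural (Dirichlet) discrete derivative on Z_M in position space:
# D[m, m'] = (1/M) sum_{|n| <= (M-1)/2} (2 pi i n) e^{2 pi i n (m - m')/M}
M = 9
m = np.arange(M)
ns = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)
D = np.zeros((M, M), dtype=complex)
for n in ns:
    phase = np.exp(2j * np.pi * n * (m[:, None] - m[None, :]) / M)
    D += (2j * np.pi * n / M) * phase

# each lattice Fourier mode is an exact eigenfunction with eigenvalue 2 pi i n
eigenfn_ok = all(
    np.allclose(D @ np.exp(2j * np.pi * n * m / M),
                2j * np.pi * n * np.exp(2j * np.pi * n * m / M))
    for n in ns
)
eigs = np.linalg.eigvals(D)
```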
We start with some initial discretisation $\partial^{(0)}_M,\; \omega_M^{(0)}=\sqrt{-[\partial^{(0)}_M]^2},\;Q^{(0)}_{M,s}= \frac{1}{2}[1_{L_M}+i\; s\; \frac{\partial^{(0)}_M}{\omega^{(0)}_M}]$ which determines the annihilators in analogy to (\ref{6.14a}) \begin{equation} \label{6.34} A^{(0)}_{M,1,+}:=(Q^{(0)}_{M,+} \xi_{M,1})^\ast,\; A^{(0)}_{M,1,-}:=Q^{(0)}_{M,-} \xi_{M,1},\; A^{(0)}_{M,2,-}:=(Q^{(0)}_{M,-} \xi_{M,2})^\ast,\; A^{(0)}_{M,2,+}:=Q^{(0)}_{M,+} \xi_{M,2} \end{equation} the vacuum $\Omega^{(0)}_M$, the Fock space ${\cal H}^{(0)}_M$ and the initial Hamiltonian family \begin{equation} \label{6.35} H^{(0)}_M=\sum_{m\in \mathbb{Z}}\; \sum_{B,\sigma}\; [A^{(0)}_{M,B,\sigma}]^\ast\;\omega^{(0)}_M\; A^{(0)}_{M,B,\sigma} \end{equation} and similarly for the compact case with the restriction $m\in \mathbb{Z}_M$. We can encode the flow (\ref{6.32}), (\ref{6.33}) into a single quantity $\partial^{(n)}_M$ in terms of which we define analogously $\omega_M^{(n)}=\sqrt{-[\partial^{(n)}_M]^2},\;Q^{(n)}_{M,s}= \frac{1}{2}[1_{L_M}+i\; s\; \frac{\partial^{(n)}_M}{\omega^{(n)}_M}]$ as well as \begin{equation} \label{6.36} A^{(n)}_{M,1,+}:=(Q^{(n)}_{M,+} \xi_{M,1})^\ast,\; A^{(n)}_{M,1,-}:=Q^{(n)}_{M,-} \xi_{M,1},\; A^{(n)}_{M,2,-}:=(Q^{(n)}_{M,-} \xi_{M,2})^\ast,\; A^{(n)}_{M,2,+}:=Q^{(n)}_{M,+} \xi_{M,2} \end{equation} and the Hamiltonian family \begin{equation} \label{6.37} H^{(n)}_M=\sum_{m\in \mathbb{Z}}\; \sum_{B,\sigma}\; [A^{(n)}_{M,B,\sigma}]^\ast\;\omega^{(n)}_M\; A^{(n)}_{M,B,\sigma} \end{equation} and again for the compact case we just restrict to $m\in \mathbb{Z}_M$. To see that this is indeed possible we note that in the corresponding Fock spaces it is sufficient to check isometry on vectors of the form \begin{eqnarray} \label{6.38} && \Psi^{(n)}_{M'}(I_{MM'}\;F_{M,1},..,I_{MM'} \;F_{M,N}):= A^{(n)}_{M'}(I_{MM'} F_{M,1})^\ast ..
A^{(n)}_{M'}(I_{MM'} F_{M,N})^\ast\Omega^{(n)}_{M'},\;A^{(n)}_M(F_M) \nonumber\\ &:=& \sum_{B,\sigma} <F_{M,B,\sigma},A^{(n)}_{M,B,\sigma}>_{L_M} \end{eqnarray} These give the inner products \begin{eqnarray} \label{6.39} &&<\Psi^{(n)}_{M'}(I_{MM'}\; F_{M,1},..,I_{MM'}\;F_{M,N}),\; \Psi^{(n)}_{M'}(I_{MM'}\; G_{M,1},..,I_{MM'} G_{M,\tilde{N}})>_{{\cal H}^{(n)}_{M'}} \nonumber\\ &=& \delta_{N,\tilde{N}} \det([<Q^{(n)}_{M'}\; I_{MM'} F_{M,k},\; Q^{(n)}_{M'}\; I_{MM'} G_{M,l}>_{L_{M'}^4}]_{k,l=1}^N) \end{eqnarray} where \begin{eqnarray} \label{6.40} && <Q^{(n)}_{M'}\; I_{MM'} F_M,\; Q^{(n)}_{M'}\; I_{MM'} G_M>_{L_{M'}^4} =\sum_{B,\sigma}\; <I_{MM'} F_{M,B,\sigma},\;Q^{(n)}_{M',\sigma} I_{MM'} G_{M,B,\sigma}>_{L_{M'}} \nonumber\\ &=& \sum_{B,\sigma}\; <F_{M,B,\sigma},\;[I_{MM'}^\dagger\; Q^{(n)}_{M',\sigma} I_{MM'}] G_{M,B,\sigma}>_{L_M} \end{eqnarray} We used that, whatever $\partial^{(n)}_M$ is, the corresponding operators $Q^{(n)}_{M,s}$, satisfying $Q^{(n)}_{M,s}\;Q^{(n)}_{M,s'}=\delta_{s,s'}\; Q^{(n)}_{M,s}$, are orthogonal projections and that the $B=1,2$ species anti-commute. Comparing with \begin{equation} \label{6.41} <\Psi^{(n+1)}_M(F_{M,1},..,F_{M,N}),\; \Psi^{(n+1)}_M(G_{M,1},.., G_{M,\tilde{N}})>_{{\cal H}^{(n+1)}_M} \end{equation} we obtain isometry iff \begin{equation} \label{6.42} Q^{(n+1)}_{M,\sigma} =I_{MM'}^\dagger\; Q^{(n)}_{M',\sigma}\; I_{MM'} \end{equation} Similarly, since \begin{equation} \label{6.43} [H^{(n)}_{M'},\; [A^{(n)}_{M'}(I_{MM'} F_M)]^\ast]= -[A^{(n)}_{M'}(\omega^{(n)}_{M'}\; Q^{(n)}_{M'} I_{MM'} F_M)]^\ast \end{equation} we get a match between the matrix elements of the Hamiltonians iff \begin{equation} \label{6.44} \omega^{(n+1)}_M =I_{MM'}^\dagger\; \omega^{(n)}_{M'}I_{MM'} \end{equation} where we used that by construction $[\omega^{(n)}_M,Q^{(n)}_{M,s}]=0$.
We now ask under what conditions on the coarse graining kernel $I_M$ both (\ref{6.42}) and (\ref{6.44}) are implied by \begin{equation} \label{6.45} \partial^{(n+1)}_M:=I_{MM'}^\dagger\; \partial^{(n)}_{M'}\;I_{MM'} \end{equation} \begin{Theorem} \label{th6.1} ~\\ Suppose that $\partial_M^{(0)}:=I_M^\dagger \;\partial\; I_M$ is the natural discrete derivative w.r.t. a coarse graining kernel $I_M:\;L_M\to L$ and such that $[\partial,I_M\;I_M^\dagger]=0$. Then (\ref{6.45}) implies both (\ref{6.42}) and (\ref{6.44}). \end{Theorem} Proof:\\ By (\ref{6.45}) we have \begin{equation} \label{6.46} \partial^{(1)}_M= [I_{MM'}]^\dagger\;\partial^{(0)}_{M'}\;I_{MM'} =I_M^\dagger \;\partial\; I_M=\partial^{(0)}_M \end{equation} since by construction $I_M=I_{M'}\; I_{MM'}$. Thus by iteration $\partial^{(n)}_M=\partial^{(0)}_M=\partial_M$ is already at its fixed point, no matter what the coarse graining maps $I_M$ are as long as they descend from an MRA. It follows that \begin{equation} \label{6.47} \partial_M^N=I_M^\dagger\; (\partial\; [I_M \; I_M^\dagger])^{N-1} \; \partial\; I_M \end{equation} While $I_M^\dagger I_M=1_{L_M}$ by isometry, $p_M:=I_M \;I_M^\dagger$ is a projection in $L$ (onto the subspace $V_M$ of the MRA). Thus, if $[\partial,p_M]=0$ we find $\partial_M^N=I_M^\dagger \partial^N I_M$. The claim then follows from the spectral theorem (functional calculus).\\ $\Box$\\ \\ To see that both the Shannon and Dirichlet kernel satisfy the assumption of the theorem it suffices to remark that they only depend on the difference $x-y$, i.e. they are translation invariant.
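The mechanism of the proof can be checked in a small momentum-space model. The following sketch (our own finite truncation; the mode cut-offs are arbitrary) represents $\partial$ as a diagonal matrix on a large set of Fourier modes, $I_M$ as the isometric embedding of the low modes (which is what the Dirichlet kernel does in momentum space), and verifies that $p_M=I_M I_M^\dagger$ is a projection commuting with $\partial$, so that $\partial_M^N=I_M^\dagger\,\partial^N\,I_M$.

```python
import numpy as np

N_big, M = 15, 7
ns_big = np.arange(-N_big, N_big + 1)
ns_M = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)
partial = np.diag(2j * np.pi * ns_big)       # continuum derivative on Fourier modes

# isometry I_M : L_M -> L embedding the modes |n| <= (M-1)/2
I_M = np.zeros((ns_big.size, ns_M.size))
for col, n in enumerate(ns_M):
    I_M[np.where(ns_big == n)[0][0], col] = 1.0

p_M = I_M @ I_M.T                            # projection onto V_M
D_M = I_M.T @ partial @ I_M                  # natural discrete derivative

proj_ok = np.allclose(p_M @ p_M, p_M)
commute_ok = np.allclose(partial @ p_M, p_M @ partial)
power_ok = all(
    np.allclose(np.linalg.matrix_power(D_M, N),
                I_M.T @ np.linalg.matrix_power(partial, N) @ I_M)
    for N in (2, 3)
)
```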
Explicitly, since the $\chi^M_m$ with $m\in \mathbb{Z}$ and $m\in \mathbb{Z}_M$ respectively are an ONB of $V_M$ just as are the $e_k,\; |k|\le \pi M$ and $e_{2\pi n},\;|n|\le \frac{M-1}{2}$ respectively \begin{equation} \label{6.48} (p_M f)(x) = \sum_m \chi^M_m(x)\; <\chi^M_m,f> =\left\{ \begin{array}{cc} \int_X\; dy\; [\int_{-\pi M}^{\pi M}\;\frac{dk}{2\pi}\; e_k(x-y)]\; f(y) & X=\mathbb{R}\\ \int_X\; dy\; [\sum_{|n|\le \frac{M-1}{2}}\; e_{2\pi n}(x-y)]\; f(y) & X=[0,1) \end{array} \right. \end{equation} and integration by parts does not lead to boundary terms due to the support properties of $f$ or by periodicity respectively.\\ \\ It follows that using the natural discretisation the theory is already at its fixed point and the fixed point family member at resolution $M$ coincides with the continuum theory blocked from the continuum to resolution $M$, that is, by simply dropping the superscript $^{(n)}$ we have \begin{equation} \label{6.50} J_M\Omega_M=\Omega,\; J_M\;A_M(F_{M,1})^\ast..A_M(F_{M,N})^\ast\;\Omega_M =A(I_M\; F_{M,1})^\ast..A(I_M F_{M,N})^\ast\;\Omega,\; H_M=J_M^\dagger\; H\; J_M \end{equation} Remark:\\ Thus translation invariance of the Shannon and Dirichlet kernel respectively is, besides smoothness, another important difference with the Haar kernel \cite{23} \begin{equation} \label{6.49} \sum_m\; \chi^M_m(x)\chi^M_m(y) =\sum_m\; \chi_{[\frac{m}{M},\frac{m+1}{M})}(x) \chi_{[\frac{m}{M},\frac{m+1}{M})}(y) \end{equation} which is not translation invariant. Therefore in this case the flows of $\omega_M$ or $\omega_M^{-1}$ are not simply related by $\omega_M = I_M^\dagger \omega I_M,\; \omega_M^{-1} = I_M^\dagger \omega^{-1} I_M$ and thus one must define $\omega_M$ as the inverse of the covariance $\omega_M^{-1}$. As $M\to\infty$ this difference disappears but at finite $M$ it is present and makes the study of the flow with respect to a non-translation invariant kernel much more involved than necessary.
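The contrast drawn in the remark is easy to see numerically. The following sketch (our own illustration; the sample points, shift and $M$ values are arbitrary) evaluates the Haar kernel (\ref{6.49}), which is 1 iff $x$ and $y$ lie in the same bin, and the Dirichlet projection kernel of (\ref{6.48}), and checks that only the latter is invariant under a common translation of its arguments.

```python
import numpy as np

def haar_kernel(x, y, M=4):
    # sum_m chi_[m/M,(m+1)/M)(x) chi_[m/M,(m+1)/M)(y): 1 iff x, y share a bin
    return float(np.floor(M * x) == np.floor(M * y))

def dirichlet_proj_kernel(x, y, M=5):
    # sum over |n| <= (M-1)/2 of e^{2 pi i n (x - y)}: a function of x - y only
    n = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)
    return np.exp(2j * np.pi * n * (x - y)).sum().real

x, y, a = 0.1, 0.2, 0.05   # a shift that moves y across a Haar bin boundary
haar_before, haar_after = haar_kernel(x, y), haar_kernel(x + a, y + a)
dir_before, dir_after = dirichlet_proj_kernel(x, y), dirichlet_proj_kernel(x + a, y + a)
```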
\section{Conclusion and Outlook} \label{s5} In this paper we have extended the definition of Hamiltonian renormalisation in the sense of \cite{9} from the bosonic to the fermionic case. The definition given in \cite{9} in fact covers both cases but the practical implementation for bosons was in terms of measures \cite{15} which cannot be used for fermions. We have tested the scheme for massless 2d chiral fermion theories, the extension to the massive and higher dimensional case being immediate, just requiring the higher dimensional Clifford algebra. In particular we showed that using the smooth, local Shannon and Dirichlet kernels for renormalisation and discretisation results in a simple flow and an easily computable fixed point theory which coincides with the known continuum theory and has a manifestly doubler free spectrum even at finite resolution, due to the inherent non-locality of the finite resolution ``microscopes'' based on those kernels. An immediate extension of the current paper that suggests itself is to apply the current framework to the known solvable 2d interacting fermion theories \cite{25}.
\section{Introduction} \noindent Spectroscopy is a fundamental exploratory tool in astronomy, used to investigate plasma composition, density, and temperature in stars, planets, interstellar material, and other sources. A diffraction grating is a key spectrometer component: a periodic structure that diffracts light into its constituent wavelengths. Diffraction gratings come in two types, transmission and reflection; reflection gratings typically offer higher diffraction efficiency and wider spectral coverage. Usually, a reflection grating is patterned with parallel grooves of constant period. However, to achieve a higher spectral resolving power, our group is developing a fabrication method for gratings with non-parallel, radial grooves \citep{randall2013,drew2018,randall2019,jake2020}. This novel grating requires a customized groove pattern whose period varies by as little as a few nanometers over centimeters of groove length. To qualify the fabrication method for this special reflection grating, we require a mapping method that is accurate on this spatial scale. Accurate groove period measurement, practical scanning time, and the ability to scan non-parallel grooves are the three requirements for mapping the period distribution of a grating with a customized groove pattern. Scanning electron microscopy (SEM), scanning probe microscopy (SPM), and optical interferometry are three traditional methods for measuring the groove period of gratings with parallel grooves \citep{tatsuo1987, misumi2003, voronov2017}. However, these methods fall short of achieving the three requirements. SEM is a mature microscopy method, used for several decades to directly image the surface topography of a sample. It offers magnifications ranging from 100$\times$ to 100,000$\times$ \citep{hansen2006}, and the resolution can reach 2 nm at the highest magnification \citep{hansen2006}.
However, the actual resolution will be much lower when scanning a large feature at lower magnification. Typical tool accuracy is $\sim$5\% of a feature size. Thus, the SEM method has limited accuracy when the feature size is large, and its small image area limits the scanning speed. SPM, including scanning tunneling microscopes and atomic force microscopes, can offer topography data on very fine surfaces. This method uses a sharp probe to scan over the sample with an area up to 100 $\mu$m $\times$ 100 $\mu$m \citep{hansen2006}. SPMs apply the piezoelectric effect to maintain a small distance between the probe and the sample. Thus, this technology can measure the depth variation on the surface of the sample, and the vertical resolution can reach the sub-nanometer level \citep{hansen2006}. However, SPMs do not have the same accuracy in measuring the lateral size of a feature (typically $\sim$2\%), which is the critical dimension for groove period measurements. SPM also typically scans an even smaller area than SEM, making it impractical to scan gratings with areas of square centimeters. Optical interferometry works very well for quickly mapping constant groove density over large areas with high accuracy \citep{hutley1982, palmer2002, voronov2017}. This method can quickly map a sample with a large area (such as 100 mm $\times$ 100 mm), well beyond the ability of SEM and SPM. It is based on measuring the phase change between two reflected light beams when the diffraction grating satisfies the Littrow condition \citep{samuel2017}. The variation of the phase can be converted into the variation of the groove period of the grating. The resolution of this method can reach sub-nm \citep{hansen2006}; however, the period variation can be resolved over only a very narrow period range, which is related to the wavelength of the laser.
A custom grating with large period variation or non-parallel grooves cannot be adequately measured with this method. For example, a grating with a 3 $\mu$m groove period and a period variation of 20 nm will have too large an angular variation to be measured interferometrically. In addition, the horizontal scanning resolution will be significantly reduced when the Littrow angle is large, as required to measure gratings with fine periods ($<$1000 nm) with commercial interferometers ($\lambda$ = 632.8 nm, typically). Other studies show that a laser reflection tool can demonstrate high accuracy in measuring grating period variations \citep{dewey1994, jungki2017}. \citet{dewey1994} designed a laser reflection (LR) period mapping tool to verify the High-Energy Transmission Grating for the {\it{Chandra X-ray Observatory}}. They measured the spot motions of a reflected beam and a diffracted beam to calculate the grating period variation. \citet{jungki2017} improved the LR tool by adding a normal-incidence beam to account for surface height variations. Their system has higher accuracy and better repeatability compared to the LR tool. However, their design requires a well-calibrated grating with a similar groove period as a reference. This design is impractical for measuring the groove period of a grating with an unknown period or a customized groove pattern. Thus, a new method capable of measuring custom-period gratings is still desirable. In this paper, we present a new, inexpensive, bench-top method for measuring groove period variations over large areas in a reasonable time: the grating mapper for accurate period (GMAP). The details of using the GMAP to map the groove period are described in Section 2. The GMAP accurately measures groove periods, has the ability to identify non-parallel grooves, and can be used to scan a large area. To achieve these qualities, the GMAP needs to be well-calibrated. A walkthrough of this process is explained in Section 3.
We employed the GMAP to measure three gratings with different designed patterns. The test configuration of these three measurements is discussed in Section 4. Finally, a discussion and summary of the test results are provided in Section 5. \section{Test Methodology} \noindent The GMAP, similar to the LR tool, calculates the groove period by localizing the direction of diffracted orders, but the GMAP can directly measure the diffraction angle without using an extra grating as a reference. The general geometry of a reflection grating is shown in Figure \ref{fig1}, and the generalized grating equation is: \begin{equation} \sin \alpha + \sin \beta = \frac{n \lambda}{d \sin \gamma}\label{eq:1} \end{equation} where $\gamma$ is the angle between the incident beam (AO in Figure 1) and the groove direction (axis z in Figure 1), $d$ is the period, $\lambda$ is the wavelength of the incident beam, $\alpha$ is the polar incidence angle, and $\beta$ is the polar angle of the diffracted light (OB in Figure 1) for nth order \citep{cash1982, randall2013, frassetto2017}. In Figure 1, the pitch, yaw, and roll axes of the test grating are x, y, and z, respectively. Based on this geometry, we can calculate the grating period ($d$) by measuring the angles ($\alpha$, $\beta$, and $\gamma$) and using a stable laser beam ($\lambda$). \begin{figure}[htp!] \centering \includegraphics[width=0.6\textwidth]{Fig1} \caption{Geometry of a reflection grating. A grating is set on xz-plane and the groove direction is the same as the z-axis. Light from point A is incident on the grating at point O. The incident beam is offset from the groove direction by an angle $\Psi$ (yaw) and strikes the grating at an angle $\eta$ (90$^o$ - angle of incidence). $\alpha$ and $\beta$ are azimuthal angles of incident light and diffracted light, respectively. 
$\gamma$ is the angle between the incident light and the groove direction.} \label{fig1} \end{figure} According to the generalized grating equation (\ref{eq:1}), we need to measure angles $\alpha$, $\beta$, and $\gamma$ to calculate the groove period $d$. To simplify the measurement, we set the grating in an in-plane mounting such that $\gamma = 90^o$ and $\alpha = 0^o$. Then, the grating equation is: \begin{equation} d = \frac{n \lambda}{ \sin \beta }. \label{eq:2} \end{equation} In this case, we only need to measure $\beta$ to calculate the groove period. In the GMAP, we use a rotation stage to measure this diffraction angle $\beta$ and a camera to record the spot of the diffracted beam at nth order. In the initial GMAP setup (left panel of Figure 2), the laser beam is set parallel to the grating normal while $\beta$ is the azimuthal angle of the diffracted light in nth order. The reflected light beam (0th order) returns to the laser. Then we rotate the rotation stage $\theta$ degrees, such that the 0th order is placed in the same spot on the camera as nth order (right panel of Figure 2). In this case, we have $\beta= 2 \theta$, as the angles of incidence and reflection are equal. Substituting $\beta$ into equation (\ref{eq:2}), we obtain the groove period on the grating where the laser beam is incident. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig2} \caption{Left - The geometry of the setup of the GMAP, where the camera is set at the nth order diffracted light beam and the 0th order light beam is reflected back to the laser. Right - After rotating the rotation stage by an angle $\theta$, 0th order lands at the same camera pixel as nth order in the left panel. The dashed line is the grating normal.} \label{fig2} \end{figure} With this process, we can make an absolute measurement of the period at any spot on the grating. However, in our implementation of the GMAP, it takes approximately 20 seconds to repeat this process.
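The arithmetic behind this absolute measurement is compact enough to sketch; the snippet below applies equation (\ref{eq:2}) with $\beta = 2\theta$ and the 402.33 nm laser wavelength quoted in Section 3 (the rotation angle used here is a hypothetical reading, not recorded data):

```python
import math

LAMBDA_NM = 402.33  # laser center wavelength (Section 3)

def groove_period_nm(theta_deg, n):
    """Absolute period from the measured stage rotation theta.

    Equation (2) with beta = 2*theta: d = n*lambda / sin(2*theta).
    """
    return n * LAMBDA_NM / math.sin(math.radians(2.0 * theta_deg))

# Hypothetical reading: overlapping the 0th and 1st orders after a
# 3.854 deg rotation implies a period near 3 microns.
print(f"{groove_period_nm(3.854, n=1):.0f} nm")  # ~3000 nm
```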
At this speed, measuring a 20 mm $\times$ 20 mm grating at 0.1 mm resolution would require roughly 9 days, which is impractical. In practice, we conduct this measurement once and employ this spot as a reference to deduce changes in the groove period relative to this reference point. According to equation (\ref{eq:2}), a change in the groove period will change the angle of the diffracted light beam ($\beta$). Thus, we only need to measure how much the azimuthal angle of the diffracted light in nth order changes, because the distance between the grating and the camera is fixed in this process. The change in angle can be calculated by measuring the number of pixels the diffraction spot shifts. This is a fast way to map the groove period of a test grating without rotating the stage, and the entire process requires only half the time for the same mapping resolution. This process assumes the grating surface is flat; however, in reality, the grating surface is not ideally flat. The surface figure of the grating will cause additional shifts of the diffracted light beam in both the horizontal and vertical directions. The diffracted light beams at the -nth and nth orders shift in the same direction due to the surface curvature but in opposite directions due to a change in the groove period. Thus, we set a second camera on the -nth order as shown in Figure 3. Subtracting the angular shift at nth order from that at -nth order allows us to account for the effect of surface curvature. \begin{figure}[htp!] \centering \includegraphics[width=0.7\textwidth]{Fig3} \caption{The geometry of all components in the GMAP. The grating was set in an in-plane mounting on a rotation stage.
Two cameras are set at nth and -nth orders and $\beta_{n}$ and $\beta_{-n}$ are the azimuthal angles of the diffracted light in the nth and -nth orders, respectively.} \label{fig3} \end{figure} \section{The GMAP Setup} The GMAP includes seven main components: a laser source, a rotation stage, two linear stages, a kinematic platform mount, and two cameras. The grating is held by a kinematic platform mounted on two linear stages, which are set in a horizontal/vertical configuration for 2D scanning. The linear stages are set on top of a rotation stage (left panel of Figure 4). Two cameras are set at the nth and -nth diffraction orders (n is chosen depending on the required accuracy). An image of all components installed on a precision-tuned, damped optical table is presented in the right panel of Figure 4. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig4} \caption{Left - A grating mount includes a rotation stage and two linear stages. Right - An image of all the components of the GMAP.} \label{fig4} \end{figure} The GMAP is set on a vibration-damped optical table from Newport to reduce the structural vibrations of the tabletop. We set the rotation stage as the reference of the GMAP. The motorized rotation stage is a URS100BPP model from Newport. The stage offers a 360$^o$ angular range, $\pm$4.4 mdeg bi-directional repeatability, and $\pm$8 mdeg accuracy. The machining accuracy of the alignment between the rotation axis and the center axis of the stage (eccentricity) is 0.4 $\mu$m. After placement of the rotation stage, there are three steps: installing the laser, initializing the grating, and setting up the cameras. Rotational degrees of freedom for these three components are shown in Figure 5. The rest of this section will describe the setup sequence and the uncertainty estimation at each step. \begin{figure}[htp!]
\centering \includegraphics[width=0.8\textwidth]{Fig5} \caption{Rotational degrees of freedom for the laser, the grating and the camera in the left, middle, and right panel, respectively. } \label{fig5} \end{figure} \subsection{Installing the laser} The laser system consists of a single-mode fiber-coupled laser, a single-mode patch cable with FC/PC connectors, a collimator, and a kinematic mount. The laser source has a center wavelength of $\lambda$ = 402.33$\pm$0.06 nm, where this uncertainty of the laser source is derived from a Gaussian fit to the laser spectrum (left panel of Figure 6). The resolution of the spectral data (scanned from a laser test report offered by Thorlabs) is not adequate for unambiguous determination of the center wavelength. Thus, we used a 3 $\mu$m period grating fabricated through electron-beam lithography to identify the center wavelength. The laser source system requires 1 hr to warm up at 25 $^o$C to be stable at 6 mW. According to the theoretical beam diameter graph offered from Thorlabs, the full-angle divergence of 0.027$^o$ makes the beam diameter 1 mm, 1.5 mm, and 1.75 mm at 1 m, 2 m, and 3 m, respectively, from the collimator. The collimator is mounted on a kinematic mount, which is designed to provide high-resolution adjustment for roll and pitch. To ensure that the laser hits the same spot on the grating while rotating the stage, we need to set the laser light beam coincident with the rotation axis. To roughly align the laser source with the rotation axis of the rotation stage, we first set the laser beam to shoot horizontally along the grid of mounting holes on the table 1 m from the rotation axis (Figure 7). A camera was set at two positions, which are 0.4 m apart, on the optical table for fine adjustment. We adjust the roll and pitch of the laser beam through the kinematic mount such that the beam spot hits the same spot on the camera at these two distances. The camera will be removed after this step. 
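The angular residual of this two-position alignment can be estimated from the camera pixel pitch and the 0.4 m baseline; a back-of-the-envelope sketch (a 1-pixel centroid repeatability, quantified below, and the 5.2 $\mu$m pixel pitch of our CMOS camera):

```python
import math

PIXEL_UM = 5.2       # CMOS camera pixel pitch
BASELINE_M = 0.4     # separation between the two camera positions
CENTROID_ERR_PX = 1  # spot centroid repeatability, in pixels

# Smallest detectable tilt: a one-pixel spot shift over the 0.4 m baseline.
tilt_deg = math.degrees(math.atan(CENTROID_ERR_PX * PIXEL_UM * 1e-6 / BASELINE_M))
print(f"laser roll/pitch residual ~ {tilt_deg:.4f} deg")  # ~0.0007 deg
```

This single-pixel contribution is consistent with the 0.001$^o$ roll and pitch uncertainty quoted for the laser.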
The right panel of Figure 6 shows the laser spot on a Thorlabs CMOS camera, which has 1280 $\times$ 1024 pixels, each a 5.2 $\mu$m $\times$ 5.2 $\mu$m square. The center coordinate of the spot is identified as the centroid of a Gaussian fit to the x and y intensity distributions as recorded on the camera. To estimate the uncertainty of this localization, we performed a stability test, recording the centroid of the laser in a fixed position 1000 times. This test yields a repeatability of $\pm$ 1 pixel, which we interpret as the uncertainty in our centroid measurement. This uncertainty is equivalent to an uncertainty of 0.001$^o$ for the roll and pitch of the laser. The position tolerance of the optical table holes is 0.048''$\pm$0.017'', which has a negligible effect on the alignment. This indicates the laser beam is offset from the rotation axis by $\sim$20 $\mu$m at a 1 m distance. During operation of the GMAP, rotations are limited to less than 5 degrees, which would cause a maximum spot shift on the grating of 0.1 $\mu$m. We assume that the groove period can be approximated as constant in this small area, and hence this beam displacement has a negligible effect on determining the reference period of the grating. \begin{figure}[htp!] \centering \includegraphics[width=0.8\textwidth]{Fig6} \caption{Left - The spectrum of the laser source. The red dashed line is a Gaussian fit with a standard deviation of 0.06 nm. Right - An image of a laser spot on the camera at a 1 m distance from the collimator. The x and y units are camera pixels. The top and right panels show the integrated intensity of each column and row fit by Gaussians. The spot center is marked by the red cross whose coordinates are the peaks of the two fitted Gaussians.} \label{fig6} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width=0.6\textwidth]{Fig7} \caption{The geometry of aligning the laser source with the rotation stage.
The rotation axis is set 1 m from the collimator. The camera is set at two distances, which are 0.4 m apart, along the direction of the grid of the mounting holes of the table.} \label{fig7} \end{figure} \subsection{Initializing the grating} The grating mount includes two linear stages and a kinematic platform mount. To achieve 2D scanning, the two linear stages are set in a horizontal/vertical configuration (left panel of Figure 4) such that the grating normal is parallel to the laser beam. Each has a travel range of 150 mm and a bi-directional repeatability of 1 $\mu$m. The chosen step size of the linear stages depends on the laser beam size on the sample. A small beam size and a small step size can offer high spatial resolution. However, there is a trade-off between spatial resolution and scanning time. Our spatial resolution is set based on the limitation of our laser beam size of 1 mm. In this case, a 0.1 mm step size ensures that two consecutive measurements share less than 90\% of their area. This 90\% threshold can be adjusted based on a scanning time requirement. The grating is mounted to the linear stage, which is then mounted to a kinematic platform through a slot hole. The slot hole provides $\sim$25 mm of adjustment in the horizontal direction while the kinematic platform mount provides $\pm3^o$ tip/tilt adjustment for correcting yaw and pitch (left panel of Figure 4). To make sure the laser is incident on the same spot on the grating while rotating, we need to set the grating so that the central groove is coincident with the axis of the rotation stage. In the previous step, we already set the laser through the axis of the rotation stage; therefore, setting the laser beam coincident with the surface of the grating is equivalent to setting the axis of the rotation stage on the grating surface (left panel of Figure 8).
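The overlap figure behind the 0.1 mm step choice can be checked with the standard circle-intersection (lens) area formula; a sketch assuming the 1 mm beam diameter:

```python
import math

def overlap_fraction(beam_diameter_mm, step_mm):
    """Fraction of the beam footprint shared by two consecutive measurements
    whose centers are separated by step_mm (circle-circle lens area)."""
    r, s = beam_diameter_mm / 2.0, step_mm
    lens = 2 * r**2 * math.acos(s / (2 * r)) - (s / 2) * math.sqrt(4 * r**2 - s**2)
    return lens / (math.pi * r**2)

# A 1 mm beam stepped by 0.1 mm: consecutive spots share just under 90% of area.
print(f"{overlap_fraction(1.0, 0.1):.1%}")  # ~87.3%
```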
We first set the grating surface roughly on the path of the laser beam such that half of the beam passes through directly to the camera and the other half is reflected by a small degree. We use a camera to record the location of these two spots and we adjust the grating surface until these two spots overlap each other. This indicates that the grating surface is parallel to the laser beam within 0.03$^o$ (assuming 0.5 mm half beam size at 1 m distance) and coincident with the axis of the rotation stage. Then we adjust pitch and yaw of the grating (the roll correction of the grating will be discussed during the camera setup). To adjust the pitch, we rotate the grating 90$^o$ such that the laser beam is parallel to the grating normal. Slight misalignments of pitch will cause the reflected laser spot to be offset from the collimator aperture (4.5 mm) in the vertical direction. We adjust the kinematic mount such that the laser beam reflects back into the aperture of the collimator. We assume that we can distinguish the offset of the laser beam by half of the beam size (assuming 1.5 mm beam size at 2 m distance), which gives a pitch uncertainty of 0.1$^o$. Next, we need to calibrate the yaw of the grating. We ultimately need to place the grating at $\alpha = 3^o$ in our setup to ensure that the camera does not block the laser (the geometry of this initial position is shown in the right panel of Figure 8). To correct the yaw of the grating, we temporarily set a camera (details are in the camera setup section) 1 m from the grating at 0th order and record its X and Y pixel coordinates (left panel of Figure 9). Then we rotate the rotation stage such that the 1st order diffraction spot hits the camera with the same X pixel as we recorded for 0th order (right panel of Figure 9). Any residual Y pixel offset between the diffracted order and 0th order is due to yaw misalignment. 
We therefore adjust the kinematic platform mount until the 1st order spot has the same coordinates as the 0th order. Due to the uncertainty of the center of the spot of 1 pixel, the uncertainty of the yaw of the grating is 0.0003$^o$. Setting the pitch and yaw of the grating in this way will ensure that the angle $\gamma$ is as close to $90^o$ as possible. Based on this setup, we calculate the uncertainty of the angle $\gamma$ from the equation: \begin{equation} \cos \gamma = \cos \Psi \cos \eta \label{eq:3} \end{equation} where $\Psi$ is the yaw of the grating and $\eta$ is the pitch of the grating (shown in Figure 1). According to the uncertainties calculated above, we have $\Psi = 0 \pm 0.0003^o$ and $\eta = 90 \pm 0.1^o$. This results in $\gamma = 90 \pm 0.1^o$, where the error is calculated from a Monte Carlo simulation. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig8} \caption{Left - The geometry of aligning the grating with the laser beam. The surface of the grating is roughly parallel to the laser beam. The red line shows the light path of the reflected spot on the camera and the blue line represents the light path of the direct beam. Right - The initial geometry when we correct pitch of the grating. } \label{fig8} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width=0.6\textwidth]{Fig9} \caption{Left - The initial geometry of setting grating yaw. The 0th order light beam is incident on the camera. Right - After rotating the rotation stage, the 1st order light beam is incident on the camera with the same X pixel coordinate, but slight misalignments of yaw will cause a different Y pixel coordinate.} \label{fig9} \end{figure} \subsection{Camera setup} The diffraction spots are recorded using two of the CMOS cameras described in Section 3.1. Each camera is fixed on a high precision 6-axis kinematic mount for the adjustment of roll, yaw, and pitch.
The kinematic mount is used to align the camera normal to the direction of the incident beam. Before installing the camera, we use a mirror to adjust the roll and pitch of the kinematic mount (left panel of Figure 10), such that the beam reflected by the mirror returns to the same spot as the incident laser beam on the grating. This correction makes sure the mirror normal is parallel to the diffracted laser beam. We assume that our eye can easily distinguish two spots separated by a distance of the beam size. Thus, the uncertainty of the pitch and roll of the mirror is 0.1$^o$ with a 1.75 mm beam size, given a 2 m optical path of the laser beam. Motivated by the repeatability and stability specified by Thorlabs, we assume that the mount holds the camera plane within one degree of the aligned mirror plane, which will cause a maximum shift on the camera of 1.6 $\mu$m. This is a negligible effect (less than 1 pixel) given the 5.2 $\mu$m pixel size. The mirror is then replaced by the camera (right panel of Figure 10). A camera yaw offset will cause the spot to drift in the X and Y directions. To adjust the yaw of the camera, we rotate the rotation stage such that the reflected spot moves horizontally across the camera (left panel of Figure 11). We adjust the yaw until the spot always has the same Y pixel value (right panel of Figure 11). Due to the uncertainty of the spot centroid (1 pixel), the uncertainty of the yaw of the camera is 0.07$^o$ given the 800 pixel distance over which the spot is measured. During the measurement, we also need to calibrate the number of pixels the spot moves on the camera as a function of the number of degrees of rotation of the grating stage. Thus, we record the pixel coordinate and the angle of the rotation stage for 10 spots after yaw correction.
Based on a simple linear regression model, we can convert the shift of the center pixel value into the actual angular shift of the spot on each camera. This method has the benefit of not needing to measure the distance between each camera plane and the grating surface. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig10} \caption{Left - The geometry of setting roll and pitch of the kinematic mount which is used to hold the camera. The blue line represents the diffracted light beam and the green line shows the reflected light beam from the mirror. Right - After correcting the roll and pitch of the mount, the mirror is replaced by the camera.} \label{fig10} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig11} \caption{The camera before and after yaw correction. The dashed line represents the x direction of the camera. The spots A and B are separated by a distance of 800 pixels in the x direction.} \label{fig11} \end{figure} Using the above procedure, we place and calibrate the nth and -nth order cameras. We can then perform the final alignment and set the roll of the grating such that $\alpha = 0^o$. First, we record the coordinates of the nth and -nth diffracted orders on the two cameras. Then, we rotate the grating such that the 0th order spot hits the same coordinate as nth order and record this angle as $\theta_{+n}$. We then adjust roll in the opposite direction such that 0th order hits the same coordinate as -nth order and record this angle as $\theta_{-n}$. Finally, we rotate back $(\theta_{+n}+\theta_{-n})/2$ degrees to remove any roll offset. The uncertainty of the center of the spot is 1 pixel at 1 m distance, thus giving a grating roll uncertainty of 0.0002$^o$.
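The pixel-to-angle calibration described above is an ordinary least-squares fit; a sketch with synthetic calibration points (the slope and noise values here are hypothetical, not our recorded data):

```python
import numpy as np

# Ten hypothetical calibration points: stage angle (deg) vs. spot centroid (px).
rng = np.random.default_rng(0)
angles_deg = np.linspace(0.0, 0.45, 10)
pixels = 5000.0 * angles_deg + 120.0 + rng.normal(0.0, 0.5, 10)

# Fit pixels = slope*angle + intercept; the inverse slope converts an observed
# spot shift directly to degrees, with no camera-to-grating distance needed.
slope_px_per_deg, intercept_px = np.polyfit(angles_deg, pixels, 1)
deg_per_px = 1.0 / slope_px_per_deg

shift_px = 37.0                    # an observed spot shift during mapping
dbeta_deg = shift_px * deg_per_px  # corresponding diffraction-angle change
print(f"{dbeta_deg:.4f} deg")
```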
\subsection{System uncertainties} Based on the accuracy of the equipment used, the uncertainty of angle $\theta$ is determined by the roll of the grating (0.0002$^o$), the uncertainty from the rotation stage (0.0044$^o$), and the uncertainty of the spot centroid (0.0003$^o$). This gives a $\theta$ ($\beta=2\theta$) uncertainty of 0.0044$^o$. Based on equation (\ref{eq:2}), this results in a groove period uncertainty of $\pm$ 3.4 nm/$n$ when measuring a 3 $\mu$m groove period with our laser. All the uncertainties of the GMAP are shown in Table 1. \begin{table} \begin{center} \begin{tabular}{ |p{5cm}|p{2.5cm}| } \hline Error Terms & Uncertainties\\ \hline Rotation Stage Repeatability & $\pm0.0044^o$ \\ Rotation Stage Eccentricity & $\pm0.4\mu$m \\ Laser Wavelength & $\pm0.06$nm \\ Centroid of the Laser Spot & $\pm1$ pixel \\ Laser Source Pitch & $\pm0.001^o$ \\ Laser Source Roll & $\pm0.001^o$ \\ Laser Beam Offset & $\pm20\mu$m \\ Grating Pitch & $\pm0.1^o$ \\ Grating Yaw & $\pm0.0003^o$ \\ Grating Roll & $\pm0.0002^o$ \\ Camera Pitch & $\pm0.1^o$ \\ Camera Yaw & $\pm0.07^o$ \\ Camera Roll & $\pm0.1^o$ \\ $\alpha$ & $\pm0.0002^o$ \\ $\beta$ &$\pm0.0088^o$ \\ $\gamma$ & $\pm0.1^o$ \\ \hline \end{tabular} \caption{A summary of the systematic uncertainties present in the GMAP set-up process. The resulting uncertainty in determining a 3 micron groove period is 3.4 nm/$n$, where $n$ is the working diffraction order of the system.} \end{center} \end{table} \section{Measurements} We have tested the efficacy of the GMAP with three gratings. These gratings, shown in Figure 12, are designed with different patterns to test different mapping abilities of the GMAP. First, we test it with a grating of parallel grooves and constant groove period (left panel of Figure 12). Second, we test it with a grating of parallel grooves and five different groove period sections (middle panel of Figure 12).
Third, we test it with a grating of non-parallel, radial grooves (right panel of Figure 12). Cameras were set at the +2nd and -2nd orders for each of these measurements. The first grating (Grating 1, left panel of Figure 12) is a Thorlabs grating with a constant designed groove period. The measurement of Grating 1 is designed to demonstrate that the result from the GMAP is consistent with the traditional optical interferometry method \citep{hutley1982}. The optical interferometry method is designed for mapping a grating with parallel grooves and constant groove period, which matches the design of Grating 1. This method requires an assumption of the average groove period, since an interferometer cannot measure the absolute groove period accurately. Thus, we apply the mean groove period measured by the GMAP as the average groove period for the optical interferometry method. Both mapping results of the GMAP and the optical interferometry method are shown in Figure 13. We mark Grating 1 with three marks on the top right, bottom left, and bottom right, respectively, to define the orientation. We extract groove period values from the red rectangle region of Grating 1 in each method. The histograms in Figure 14 show the groove period distributions from measurements of the GMAP and the optical interferometry method, respectively. The groove period is 3334.3$\pm$0.1 nm from the GMAP and 3334.3$\pm$0.9 nm from the optical interferometry method. Given a $\sim$1.7 nm system uncertainty, this result is consistent with the grating's designed period of 3333.3 nm. Both methods show a constant groove period over most of Grating 1 and similar patterns on the left and right edges of the grating. However, the result from the optical interferometry method has more spatial detail (right panel of Figure 13) and larger groove period variation (right panel of Figure 14).
Those details are smoothed out in the GMAP's result (left panel of Figure 13 and Figure 14) because the GMAP uses a $\sim$1 mm laser beam and each measurement is an average of the groove period within this 1 mm circle. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig12} \caption{Left - A Thorlabs grating with a constant groove period of 3333 nm (Grating 1). Middle - The customized grating array with four square sections with groove periods of 2990 nm, 2980 nm, 2960 nm, and 2970 nm. On each of these sections, there is an ``L"-shaped pattern with a constant groove period of 3000 nm (Grating 2). Right - The customized grating with converging grooves. The groove period changes smoothly from 3000 nm to 2950 nm (Grating 3).} \label{fig12} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig13} \caption{Left - Mapping result of a Thorlabs grating with constant groove period measured by the GMAP. Right - Mapping result of the same grating measured by the optical interferometry method. The red rectangles in both images mark the regions from which we extract the groove period values for the histograms in Figure 14.} \label{fig13} \end{figure} \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig14} \caption{Left - A histogram of the groove period values of Grating 1 measured by the GMAP. Right - A histogram of the groove period values of Grating 1 measured by the optical interferometry method. The dashed lines in both histograms represent the assumed average groove period of 3334.3 nm.} \label{fig14} \end{figure} The second grating (Grating 2, middle panel of Figure 12) is a custom-period grating with five different groove period sections. The four 10 mm $\times$ 10 mm squares have periods of 2990 nm, 2980 nm, 2960 nm, and 2970 nm, respectively. On each square, there is an ``L"-shaped pattern with a 3000 nm groove period.
Based on our setup and the limitation of the camera size, the measurement of Grating 2 is designed to demonstrate that the GMAP can identify groove period changes from a minimum of 10 nm to a maximum of 40 nm (a 40 nm groove period change is twice the limit of the optical interferometry method) and distinguish features with sizes of 1 mm, 3 mm, and 5 mm in our setup. The mapping result of the GMAP is shown in the top panel of Figure 15. The straight edges between sections indicate that the image resolution is as small as the smallest feature size, 1 $\times$ 1 mm$^2$ per pixel. Moreover, we also extract groove period values from the rectangular regions in each section of Grating 2, depicted by the black rectangles in the top panel of Figure 15. The histograms in the bottom panel of Figure 15 show the groove period distributions from measurements within these five black rectangles. The groove periods are 2989.9 $\pm$ 0.2 nm, 2980.2 $\pm$ 0.2 nm, 2969.8 $\pm$ 0.2 nm, 2960.2 $\pm$ 0.1 nm, and 3000.0 $\pm$ 0.2 nm in the red, yellow, blue, green and purple regions of Figure 15, respectively. This result is consistent with the designed periods of 2990 nm, 2980 nm, 2970 nm, 2960 nm, and 3000 nm given the 1.7 nm system uncertainty. In addition to the accurate mapping result in each region, this test also indicates that the GMAP can identify groove period changes in 10 nm increments and distinguish features as small as 1 mm. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig15} \caption{Top - Mapping result of a customized grating with 4 square sections of different groove periods measured by the GMAP. Bottom - Histograms of the measured groove period in each of the black rectangular regions. The dashed lines represent the mean measured groove periods of 2989.9 nm, 2980.2 nm, 2969.8 nm, 2960.2 nm, and 3000.0 nm in the red, yellow, blue, green and purple regions, respectively.
} \label{fig15} \end{figure} The third grating (Grating 3, right panel of Figure 12) is a custom-period grating with variable line spacing along the groove dimension, which results in a continuously variable groove period. This grating is 10 mm wide and 20 mm long in the groove direction. Its groove period changes continuously from 3000 nm to 2950 nm. The measurement of Grating 3 is designed to demonstrate that the GMAP can map gratings with non-parallel grooves. The mapping result of the GMAP is shown in the right panel of Figure 16. The rainbow pattern on the grating shows the continuously changing groove period from top to bottom. Furthermore, we average the groove period over every 10 rows (1 mm) of Grating 3 and plot the averages in the left panel of Figure 16. The x-axis represents the row's distance from the center of the grating and the y-axis represents the average groove period, while the vertical error bar on each data point represents the GMAP's systematic uncertainty of 1.7 nm in 2nd order. The red solid line displays the designed groove period of this grating, with a slope of 2.50 nm/mm. The black dashed line displays the fitted groove period, with a slope of 2.48$\pm$0.01 nm/mm, which is within two standard deviations of the designed slope. This test indicates that the GMAP can measure continuous groove period change. \begin{figure}[htp!] \centering \includegraphics[width=0.9\textwidth]{Fig16} \caption{Left - The measured groove period of Grating 3 in each row. The blue crosses represent the measured results. The red solid line shows the designed groove period and the black dashed line represents the fitted groove period. Right - Mapping result of a customized grating with converging grooves measured by the GMAP.} \label{fig16} \end{figure} \section{Discussion and Summary} Our work shows that the GMAP is a versatile tool for measuring gratings and agrees with standard techniques of mapping groove period errors over a grating surface.
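The two computational steps underlying these measurements, recovering the groove period from a measured diffraction angle via the grating equation, and fitting a linear slope to the row-averaged periods of Grating 3, can be sketched as follows. This is a minimal illustration assuming normal incidence ($n\lambda = d\sin\theta_n$); the wavelength, angle, and synthetic row data below are illustrative placeholders, not values from our actual setup.

```python
import numpy as np

# Grating equation at normal incidence: n * wavelength = d * sin(theta_n),
# so a measured diffraction angle at order n gives the groove period d.
# The wavelength and angle below are illustrative, not our setup values.
def groove_period_nm(theta_n_deg, order, wavelength_nm):
    """Groove period d (nm) from the diffraction angle of order n."""
    return order * wavelength_nm / np.sin(np.deg2rad(theta_n_deg))

d = groove_period_nm(22.3, order=2, wavelength_nm=632.8)  # ~3.3 um grating

# Grating 3 analysis: linear least-squares fit of row-averaged periods
# against row position (synthetic rows with a built-in 2.50 nm/mm slope).
rows_mm = np.arange(-9.5, 10.0, 1.0)          # row centers relative to grating center
periods_nm = 2975.0 + 2.50 * rows_mm          # synthetic row-averaged periods
slope_nm_per_mm, intercept_nm = np.polyfit(rows_mm, periods_nm, 1)
```

In practice the fitted slope would be compared against the designed chirp, as done for Grating 3 above, and the fit residuals give a cross-check on the quoted systematic uncertainty.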
As a proof of the methodology, we tested the GMAP on three gratings with different groove periods, different patterns, and different groove orientations. A summary of these results is shown in Table 2. Compared to previous laser reflection tools, the GMAP offers the ability to measure the absolute groove period, simplifies the experimental setup and calibration, and improves uncertainty estimation. The GMAP also works well for gratings with large groove period changes and non-parallel grooves, for which the optical interferometry method does not work. \begin{table} \begin{center} \caption{Summary of GMAP measurements on three gratings.} \begin{tabular}{ |p{7cm}||p{4cm}|p{4cm}| } \hline Tests & Design & Measured value\\ \hline Grating 1 & 3333.3 nm & 3334.3 $\pm$ 1.7 nm \\ Red section on Grating 2 & 3000 nm & 3000.0 $\pm$ 1.7 nm \\ Yellow section on Grating 2 & 2990 nm & 2989.9 $\pm$ 1.7 nm \\ Green section on Grating 2 & 2980 nm & 2980.2 $\pm$ 1.7 nm \\ Blue section on Grating 2 & 2970 nm & 2969.8 $\pm$ 1.7 nm \\ Purple section on Grating 2 & 2960 nm & 2960.2 $\pm$ 1.7 nm \\ Groove period variation on Grating 3 & 2.50 nm/mm & 2.48 $\pm$ 0.01 nm/mm \\ \hline \end{tabular} \end{center} \end{table} The GMAP can be adapted for other applications. First, the GMAP could be used to map the groove period distribution of ultraviolet or even x-ray gratings, which have much smaller groove periods. To measure such small groove periods (of order 200 nm), we can replace the optical laser source with an ultraviolet laser source. Second, the GMAP can be further modified by amplifying the angle of diffraction, setting the cameras at positions corresponding to higher orders. This will enhance the ability of the GMAP to distinguish small groove period changes. For example, the GMAP gives an uncertainty of 3.4 nm/$n$ for a 3 $\mu$m grating at order $n$. Third, the GMAP can cover a wide range of groove period changes.
This is useful for measuring aberration-correcting gratings or gratings enabling unique optical designs for spectrometers. In this paper, our setup has a 40 nm limit on measurable groove period change due to the camera size. Switching to a bigger camera will allow the GMAP to cover a wider range of groove period change, as will moving the camera closer to the grating. In the latter case, however, there is a trade-off between the range of coverage and the accuracy of the measurement. \section{Acknowledgments} This work was supported by NASA grant 80NSSC19K0661. We would also like to acknowledge internal funding from The Pennsylvania State University. \bibliographystyle{ws-jai}
{\section{Introduction}} Heavy ion collisions have been used to study the Quark Gluon Plasma (QGP) phase transition into a hadronic gas [1--4] for more than a decade. The phase transition was initially assumed to be either first or second order [5--8], but quantum chromodynamics (QCD) calculations proved that it is a cross-over [9--11]. In a cross-over phase transition there is no single critical temperature at which the transition occurs and all the degrees of freedom switch between the phases. A characteristic temperature ($T_c$) cannot be defined unambiguously (indeed, in the case of QCD several may exist), and its value depends on which order parameter is observed. The chiral condensate, the Polyakov loop and the strangeness susceptibility [12] were the focus of early lattice QCD calculations, which found a wide range of characteristic temperatures. The most recent lattice QCD calculations have shown an approximate difference of 15--20 MeV in the hadronization temperatures between the light and strange particles [13]. Temperature is one of the most important concepts in high energy heavy-ion collisions. Several temperatures, namely the initial temperature, the chemical freeze-out temperature, the thermal or kinetic freeze-out temperature and the effective temperature, are frequently used in the physics of high energy collisions, and they correspond to different stages of the collision. The initial temperature describes the degree of excitation of the interacting system at the initial stage of the collision. Chemical freeze-out is an intermediate stage in high energy collisions at which the inelastic collisions among the particles cease and the abundances of stable particles become fixed; the temperature of the system at this stage is known as the chemical freeze-out temperature.
Correspondingly, the thermal freeze-out temperature ($T_0$) describes the excitation degree of the interacting system at the final stage of the collision, where the final state particles' $p_T$ spectra are determined (frozen). The effective temperature is not a real temperature; it combines the excitation degree of the interacting system and the effect of transverse flow at the stage of kinetic freeze-out. Further details of these temperatures can be found in [14]. The behavior of $T_0$ is rather complex, depending on energy [15--17] and centrality [15--19]. Because freeze-out is a complex process, it shows a hierarchy, in which the production of different kinds of particles and different reactions cease at different time scales. From the kinetic theory perspective, reactions with lower interaction cross sections decouple from the system earlier than reactions with higher cross sections. Besides, a particle's decoupling may also depend on its rest mass, which suggests a multiple kinetic freeze-out scenario. Furthermore, single and double kinetic freeze-out scenarios also appear in the literature: in the single freeze-out scenario, one set of parameters is used for the spectra of both strange and non-strange particles, while in the double freeze-out scenario one set of parameters is used for strange (or multi-strange) particles and another for non-strange (or non-multi-strange) particles. In the multiple kinetic freeze-out scenario, different sets of parameters are used for different particles. It is very important to find out the correct kinetic freeze-out scenario.
In addition, $\beta_T$ is an important parameter which reflects the collective expansion of the emission source, and it is believed that both $T_0$ and $\beta_T$ may depend on the size of the interacting nuclei (A-A, p-A and p-p collisions); heavier interacting systems give a better chance for the formation of QGP. The study of various nucleus-nucleus, hadron-nucleus and p-p collisions is very useful in understanding the microscopic features of the degrees of equilibration and their dependence on the number of participants in the system. Besides, such studies may also provide useful information about the formation of super-dense hadronic matter, which is not the focus of this work. In this paper, our main interest is the extraction of the kinetic freeze-out temperature, transverse flow velocity and freeze-out volume, from which we can determine the correct kinetic freeze-out scenario. We extract the relevant parameters by fitting the transverse momentum (mass) spectra of $\pi^+$, $K^+$, $p$, $K^0_S$, $\Lambda$, $\Xi$ or $\bar\Xi^+$ and $\Omega+\bar \Omega$ or $\bar \Omega^+$ or $\Omega^-+\Omega^+$ produced in central and peripheral copper-copper (Cu-Cu), gold-gold (Au-Au) and lead-lead (Pb-Pb) collisions at 200 GeV, 62.4 GeV and 158 GeV respectively with the blast-wave model with Boltzmann-Gibbs statistics. The remainder of the paper consists of the method and formalism in section 2, followed by the results and discussion in section 3. In section 4, we summarize our main observations and conclusions. \\ {\section{The model and method}} The high transverse momentum region of the spectra is generally contributed by the hard scattering process, which is described by quantum chromodynamics (QCD) calculations [20--23]. The Hagedorn function [23,24], which is an inverse power law, can also be used to describe the hard scattering process.
The inverse power law has at least three revisions, which can be found in [25--31]. The low transverse momentum region is contributed by the soft excitation process, from which the kinetic freeze-out temperature and transverse flow velocity can be extracted; the hard scattering process makes no contribution to their extraction. Therefore we will not discuss the Hagedorn function and its revisions. The kinetic freeze-out temperature and transverse flow velocity can be extracted by analyzing the spectra in the low $p_T$ region with various distributions such as the Erlang distribution [32--34], the Tsallis distribution [35--37], the Tsallis+standard distribution [38--43] and others. We will use the blast-wave model with Boltzmann-Gibbs statistics [44--46], which is the most direct distribution and has the fewest parameters. Due to the contribution of resonance production in some cases, or other effects such as statistical fluctuations, a single-component blast-wave model is not always sufficient to describe the spectra in the low transverse momentum region; a two-component blast-wave model is then required in these special cases. According to references [44--46], the first component, with kinetic freeze-out temperature $T_{01}$ and transverse flow velocity $\beta_{T1}$, of the blast-wave model with Boltzmann-Gibbs (BGBW) statistics results in the probability density function of transverse momentum ($p_T$) \begin{align} f(p_T,T_{01}, \beta_{T1})=&\frac{1}{N}\frac{dN}{dp_T} =C_1 \frac{gV}{(2\pi)^2} p_T m_T \int_0^R rdr \nonumber\\ & \times I_0 \bigg[\frac{p_T \sinh(\rho_1)}{T_{01}} \bigg] K_1 \bigg[\frac{m_T \cosh(\rho_1)}{T_{01}} \bigg], \end{align} where C$_1$ stands for the normalization constant that leads the integral in Eq.
(1) to be normalized to 1, $N$ is the number of particles, $g$ is the degeneracy factor of the particle (which is different for different particles, based on $g_n$=2$S_n$+1, where $S_n$ is the spin of the particle), $m_T=\sqrt{p_T^2+m_0^2}$ is the transverse mass, $I_0$ and $K_1$ are the modified Bessel functions of the first and second kinds respectively, $\rho= \tanh^{-1} [\beta(r)]$ is the boost angle, $\beta(r)= \beta_S(r/R)^{n_0}$ is a self-similar flow profile with $\beta_S$ being the flow velocity on the surface, and $r/R$ is the relative radial position in the thermal source. The second component of the blast-wave model has the same form as the first one, but with kinetic freeze-out temperature $T_{02}$ and transverse flow velocity $\beta_{T2}$. Thus, the two-component transverse momentum distribution function in the blast-wave model can be written as \begin{align} f_0(p_T)=kf(p_T,T_{01},\beta_{T1}) +(1-k)f(p_T, T_{02},\beta_{T2}), \end{align} where $k$ is the contribution fraction of the first component (soft excitation) and $(1-k)$ is the contribution fraction of the second component (hard scattering) in Eq. 2. According to the Hagedorn thermal model [23], the two-component Boltzmann-Gibbs blast-wave distribution function can also be structured by using the usual step function, \begin{align} f_0(p_T)=\frac{1}{N}\frac{dN}{dp_T}=A_1\theta(p_1-p_T)f(p_T,T_{01},\beta_{T1}) \nonumber\\ + A_2 \theta(p_T-p_1) f(p_T,T_{02},\beta_{T2}), \end{align} where $A_1$ and $A_2$ are constants which make the two components equal to each other at $p_T = p_1$, $\theta(p_1-p_T)=1$ (or 0) if $p_T < p_1$ (or $p_T > p_1$), and $\theta(p_T-p_1)=1$ (or 0) if $p_T > p_1$ (or $p_T < p_1$). Both Eq.
(2) and (3) can be used for the extraction of $T_0$ and $\beta_T$ in the two-component blast-wave model, i.e. \begin{align} T_0=kT_{01}+(1-k)T_{02}, \end{align} and \begin{align} \beta_T=k\beta_{T1}+(1-k)\beta_{T2}. \end{align} If Eq. (2) is used to get the parameter values of the two components, $k$ is directly given by Eq. (2). However, if the parameter values of the two components are obtained by using Eq. (3), $k$ is expressed by \begin{align} k=\int_{0}^{p_1} A_1f(p_T,T_{01},\beta_{T1})dp_T. \end{align} Because Eq.~(3) is a probability density function, it is naturally normalized. The second component in Eq. (2) or (3) is not necessary in the fitting procedure if the spectra do not cover a very wide $p_T$ range, and then only the first component of Eq. (2) and Eq. (3), that is, Eq. (1), is used to fit the spectra. Although Eq. (2) and Eq. (3) are not used in this work, they are presented in order to show the complete treatment in the methodology. In some cases, the spectra are given not in terms of $p_T$ but of $m_T$. Then the $p_T$ distribution $f_S$($p_T$) is converted into the $m_T$ distribution $f_S$($m_T$) by $f_S$($m_T$)$|dm_T|$=$f_S$($p_T$)$|dp_T|$ through $p_T$$|dp_T|$=$m_T$$|dm_T|$ due to the invariant cross section. In fact, Eq. (1) appears in the form of $f_S$($m_T$), so we convert it into $f_S$($p_T$) where needed. In the present work, we have analyzed the $p_T$ spectra of the particles ($\pi^+$, $K^+$, $p$, $K^0_S$, $\Lambda$, $\Xi$ or ($\bar\Xi^+$) and $\Omega+\bar \Omega$ or ($\bar \Omega^+$) or ($\Omega^-+\Omega^+$)) by using the single-component blast-wave model with Boltzmann-Gibbs statistics and extracted $T_0$ and $\beta_T$. Spectra over a very wide $p_T$ range are not needed for the extraction of $T_0$ and $\beta_T$, since the high-$p_T$ region contributes only a small fraction.
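As an illustration of how the single-component spectrum of Eq. (1) can be evaluated numerically, the following Python sketch computes the BGBW shape for a given $T_0$ and surface velocity $\beta_S$. The parameter values here are illustrative, roughly in the range of the pion fits reported below, and $R$ is set to 1 since only $r/R$ enters the flow profile; note $\beta_S$ is the surface velocity, not the average $\beta_T$.

```python
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import trapezoid

def bgbw_spectrum(pT, m0, T0, betaS, n0=1.0, nr=400):
    """Unnormalized single-component BGBW dN/dpT of Eq. (1).

    pT, m0, T0 in GeV; betaS is the surface flow velocity
    (illustrative, not the average beta_T); n0 is the flow-profile exponent."""
    mT = np.sqrt(pT**2 + m0**2)
    r = np.linspace(0.0, 1.0, nr)          # relative radial position r/R
    rho = np.arctanh(betaS * r**n0)        # boost angle rho(r)
    integrand = r * i0(pT * np.sinh(rho) / T0) * k1(mT * np.cosh(rho) / T0)
    return pT * mT * trapezoid(integrand, r)

# Pion-like example: m0 = 0.139 GeV with illustrative T0 and betaS values
pT_grid = np.linspace(0.05, 2.0, 60)
spec = np.array([bgbw_spectrum(p, 0.139, 0.112, 0.6) for p in pT_grid])
spec /= trapezoid(spec, pT_grid)           # normalize to 1, as C1 does in Eq. (1)
```

In a fit, $T_0$ and $\beta_S$ (or equivalently the average $\beta_T$) would be varied to minimize $\chi^2$ between this curve and the measured spectra.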
\\ {\section{Results and discussion}} Figures 1-3 demonstrate the $p_T$ or $m_T-m_0$ or $m_T$ spectra [$(1/N_{ev}) (1/2\pi m_T)d^2N/dm_Tdy$, $(1/2\pi p_T)d^2N/dp_Tdy$, $(1/m_T)d^2N/dm_Tdy$, or $(1/m_T)dN/dm_T$] of $\pi^+$, $K^+$, $p$, $K^0_S$, $\Lambda$, $\Xi$ or ($\bar\Xi^+$) and $\Omega+\bar \Omega$ or [$\bar \Omega^+$ or $\Omega^-+\Omega^+$] produced in central and peripheral Cu-Cu, Au-Au and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 200 GeV, 62.4 GeV and 158 GeV respectively, where $N$ denotes the number of particles. The types of particle and collision along with their energies are marked in the panels. The symbols in the figures represent the experimental data measured by the BRAHMS [47], STAR [48, 45, 49], SPS [50], NA 49 [51] and WA 97 [52] Collaborations. The curves are our fitted results using Eq. (1). The values of the free parameters ($T_0$, $\beta_T$, $V$ and $n_0$), the normalization constant ($N_0$), $\chi^2$ and the number of degrees of freedom (ndof), together with the concrete collisions, energies, particles, spectra and the scaling factors used for plotting, are given in Table 1. The spectra in the very low-$p_T$ region are not treated carefully in the fit process due to resonance production, and the fit there is not very good. One can see that the blast-wave model with Boltzmann-Gibbs statistics fits the experimental data approximately over a wide energy range. The trends of the kinetic freeze-out temperature $(T_0)$ and transverse flow velocity $(\beta_T)$ with the rest mass of the particles $(m_0)$ are demonstrated in figure 4. Figure 4(a) shows the dependence of $T_0$ on $m_0$ in the central and peripheral Cu-Cu, Au-Au and Pb-Pb collisions at 200 GeV, 62.4 GeV and 158 GeV respectively for the non-strange, strange and multi-strange particles, while the dependence of $\beta_T$ on $m_0$ is shown in figure 4(b). Different collisions are represented by different symbols.
The solid and open symbols represent the central and peripheral collisions respectively. \begin{figure*}[htbp] \begin{center} \includegraphics[width=14.cm]{fig1} \end{center} Fig. 1. Transverse momentum spectra of $\pi^+$, $K^+$ and $p$ at rapidity $|y|=0$ [47], and $K^0_S$, $\Lambda$, $\Xi$ and $\Omega+\bar \Omega$ at rapidity $|y|<0.5$ [48], produced in central (0--10\% centrality) and peripheral (50--70\% centrality for $\pi^+$, $K^+$ and $p$, and 40--60\% centrality for $K^0_S$, $\Lambda$, $\Xi$ and $\Omega+\bar \Omega$) Cu-Cu collisions at 200 GeV. The symbols represent the experimental data measured by the BRAHMS and STAR collaborations [47, 48], while the curves are our fitted results using the blast-wave model with Boltzmann-Gibbs statistics, Eq. (1). The corresponding data/fit ratios are presented in each panel. \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=14.cm]{fig2} \end{center} Fig. 2. Transverse momentum spectra of $\pi^+$, $K^+$ and $p$ at mid-rapidity $|y|<0.1$ [45], and $K^0_S$, $\Lambda$, $\bar\Xi^+$ and $\bar \Omega^+$ at pseudo-rapidity $|\eta|<1.8$ [49], produced in central (0--5\% centrality) and peripheral (70--80\% centrality for $\pi^+$, $K^+$ and $p$, and 60--80\% centrality for $K^0_S$, $\Lambda$, $\bar\Xi^+$ and $\bar \Omega^+$) Au-Au collisions at 62.4 GeV. The symbols represent the experimental data measured by the STAR collaboration [45, 49], while the curves are our fitted results using the blast-wave model with Boltzmann-Gibbs statistics, Eq. (1). The corresponding data/fit ratios are presented in each panel. \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=14.cm]{fig3} \end{center} Fig. 3.
Transverse momentum spectra of $\pi^+$, $K^+$ [50] and $p$ [51] at rapidity interval $2.4<y<2.8$, $K^0_S$ and $\Lambda$ at $2 <\eta< 3$, and $\bar\Xi^+$ and $\Omega^-+\bar \Omega^+$ at $3 <\eta< 4$ [52] produced in central, and $p$ [51], $K^0_S$, $\Lambda$, $\bar\Xi^+$ and $\Omega^-+\bar \Omega^+$ [52] in peripheral, Pb-Pb collisions at 158 GeV. The symbols represent the experimental data measured by the SPS, NA 49 and WA 97 collaborations [50--52], while the curves are our fitted results using the blast-wave model with Boltzmann-Gibbs statistics, Eq. (1). The corresponding data/fit ratios are presented in each panel. \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=14.cm]{fig4} \end{center} Fig. 4. Dependence of (a) $T_0$ and (b) $\beta_T$ on $m_0$ and centrality. \end{figure*} \begin{table*} {\scriptsize Table 1. Values of the free parameters ($T_0$, $\beta_T$, $V$ and $n_0$), normalization constant ($N_0$), $\chi^2$, and degrees of freedom (dof) corresponding to the curves in Figs. 1--3. \vspace{-.50cm} \begin{center} \begin{tabular}{cccccccccccccc}\\ \hline\hline Collisions & Centrality & Particle & Spectrum & Scaled by & $T_0$ & $\beta_T$ & $V (fm^3)$& $n_0$ & $N_0$ & $\chi^2$ & dof \\ \hline Fig.
1 & 0--10\% & $\pi^+$ & $1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$& -- & $0.112\pm0.008$ & $0.434\pm0.011$& $3490\pm190$ &$1$ & $0.074\pm0.007$ &1 & 9\\ Cu-Cu & -- &$K^+$&$1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$& 100& $0.130\pm0.006$ & $0.400\pm0.009$ &$3000\pm200$&$2$&$1\times10^{-4}\pm3\times10^{-5}$&3& 8\\ 200 GeV & -- & $p$ & $1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$ & 2000 &$0.114\pm0.007$ & $0.388\pm0.012$ & $2730\pm155$ &$2$ & $2.7\times10^{-7}\pm4\times10^{-8}$ & 3 & 9\\ & -- & $K^0_S$ &$(1/2\pi p_T)d^2N/dp_Tdy$& 1/100 &$0.131\pm0.005$ & $0.397\pm0.011$ & $2950\pm163$ &$2$& $0.038\pm0.003$ & 42 & 12\\ & --& $\Lambda$&$(1/2\pi p_T)d^2N/dp_Tdy$& 1/260 & $0.132\pm0.007$ & $0.372\pm0.013$ & $2500\pm172$ &$2$& $0.0047\pm0.0008$ & 8 & 11\\ & -- & $\Xi$&$(1/2\pi p_T)d^2N/dp_Tdy$& 1/100 & $0.148\pm0.005$ & $0.358\pm0.009$ & $2170\pm186$ & $2$& $0.0001\pm0.00004$ & 1 & 6\\ & --& $\Omega+\bar \Omega$& $(1/2\pi p_T)d^2N/dp_Tdy$& 1/100 &$0.149\pm0.006$ & $0.350\pm0.011$ & $1900\pm167$ &$2$ & $1.5\times10^{-4}\pm4\times10^{-5}$&1& 1\\ \cline{2-8} & 50--70\% & $\pi^+$ & $1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$ & --& $0.102\pm0.005$ & $0.425\pm0.009$ & $2570\pm160$ &$2$& $0.012\pm0.003$ & 2 & 9\\ & -- & $K^+$ & $1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$& 100 &$0.120\pm0.005$ & $0.396\pm0.011$ & $2300\pm150$ & $2$& $9\times10^{-6}\pm7\times10^{-7}$& 6 & 8\\ & -- & $p$ &$1/N_{ev}(1/2\pi m_T)d^2N/dm_Tdy$& 700 & $0.106\pm0.005$ & $0.324\pm0.013$ & $2000\pm240$ &$2$& $2\times10^{-8}\pm4\times10^{-9}$ & 20 & 9\\ & 40--60\% & $K^0_S$ & $(1/2\pi p_T)d^2N/dp_Tdy$ &10& $0.121\pm0.007$ & $0.393\pm0.012$ & $2226\pm191$ &$2.3$& $6.5\times10^{-7}\pm5\times10^{-8}$ &32& 12\\ & -- & $\Lambda$&$(1/2\pi p_T)d^2N/dp_Tdy$& 3& $0.122\pm0.006$ & $0.316\pm0.011$ & $1800\pm140$ & $2.8$& $1.48\times10^{-7}\pm3\times10^{-8}$ & 10 & 11\\ & -- & $\Xi$&$(1/2\pi p_T)d^2N/dp_Tdy$&7& $0.137\pm0.006$& $0.304\pm0.012$ & $1600\pm153$ &$3$& $1.7\times10^{-8}\pm4\times10^{-9}$ & 2 & 6\\ & --& $\Omega+\bar \Omega$&$(1/2\pi p_T)d^2N/dp_Tdy$& 10& 
$0.138\pm0.005$&$0.290\pm0.013$ & $1400\pm150$ &$3$& $2.12\times10^{-7}\pm2\times10^{-8}$ & 11& 1\\ \hline Fig. 2 & 0--5\% & $\pi^+$ &$(1/2\pi p_T)d^2N/dp_Tdy$& -- & $0.116\pm0.007$ & $0.454\pm0.013$ & $5900\pm220$ &$2$ &$0.3\pm0.004$ &11 & 6\\ Au-Au & -- & $K^+$ &$(1/2\pi p_T)d^2N/dp_Tdy$ & -- & $0.134\pm0.006$& $0.445\pm0.008$ & $5500\pm190$ &$2$ & $0.05\pm0.006$ & 3 & 6\\ 62.4 GeV &-- & $p$ & $(1/2\pi p_T)d^2N/dp_Tdy$& 0.5 & $0.117\pm0.006$ & $0.422\pm0.011$ & $5000\pm212$ &$1$ & $0.025\pm0.005$ & 147 & 10\\ & -- & $K^0_S$ & $(1/2\pi p_T)d^2N/dp_Tdy$& 1/8& $0.135\pm0.006$ & $0.432\pm0.013$ & $5487\pm200$ &$2$ & $0.006\pm0.0005$ & 4 & 10\\ & -- & $\Lambda$ &$(1/2\pi p_T)d^2N/dp_Tdy$& 1/7& $0.136\pm0.006$ & $0.337\pm0.012$ & $4700\pm200$ & $3$ & $0.0045\pm0.0003$ & 3 & 8\\ & -- & $\bar \Xi^+$&$(1/2\pi p_T)d^2N/dp_Tdy$& 1/5& $0.152\pm0.007$ & $0.317\pm0.011$ & $4321\pm213$ &$2$ & $8\times10^{-4}\pm4\times10^{-5}$ & 2 & 6\\ & -- & $\bar \Omega^+$&$(1/2\pi p_T)d^2N/dp_Tdy$& 1/5& $0.154\pm0.005$ & $0.297\pm0.011$ & $4000\pm189$ & $2$ & $0.032\pm0.005$ & 1 & 1\\ \cline{2-8} & 70--80\% & $\pi^+$ &$(1/2\pi p_T)d^2N/dp_Tdy$& -- & $0.102\pm0.006$ & $0.414\pm0.012$ & $5200\pm198$ &$2$ & $0.008\pm0.0003$ & 3 & 6\\ & -- & $K^+$ &$(1/2\pi p_T)d^2N/dp_Tdy$& -- & $0.117\pm0.006$ & $0.400\pm0.013$ & $5000\pm221$ &$2$ & $0.0012\pm0.0004$& 8 & 6\\ & -- & $p$ &$(1/2\pi p_T)d^2N/dp_Tdy$& 1/2& $0.102\pm0.007$ & $0.374\pm0.010$ & $4700\pm180$ &$2$& $5\times10^{-4}\pm4\times10^{-5}$ & 8 & 10\\ & 60--80\% & $K^0_S$ & $(1/2\pi p_T)d^2N/dp_Tdy$& $10^4$ &$0.118\pm0.005$ & $0.399\pm0.010$ & $4950\pm168$ &$2$ & $1.6\times10^{-9}\pm3\times10^{-10}$ & 7 & 10\\ & --& $\Lambda$&$(1/2\pi p_T)d^2N/dp_Tdy$& $10^4$ & $0.119\pm0.007$ & $0.327\pm0.09$ & $4340\pm165$ &$2$& $5\times10^{-10}\pm6\times10^{-11}$ & 3 & 7\\ & --& $\bar \Xi^+$&$(1/2\pi p_T)d^2N/dp_Tdy$& $10^3$& $0.138\pm0.006$& $0.301\pm0.012$ & $4000\pm186$ &$2$& $3\times10^{-10}\pm6\times10^{-11}$ & 2 & 5\\ & --& $\bar \Omega^+$&$(1/2\pi 
p_T)d^2N/dp_Tdy$& 80& $0.140\pm0.007$&$0.293\pm0.010$ & $3725\pm152$ &$2$& $9\times10^{-9}\pm4\times10^{-10}$ & 21 & 0\\ \hline \hline Fig. 3 & Central & $\pi^+$ &$(1/2\pi p_T)d^2N/dp_Tdy$ & -- & $0.121\pm0.005$ & $0.453\pm0.012$ & $7300\pm300$ & $1$ & $169.17\pm32$ & 19 & 18\\ Pb-Pb & -- & $K^+$ & $(1/2\pi p_T)d^2N/dp_Tdy$ & -- & $0.137\pm0.007$ & $0.435\pm0.010$ & $7000\pm220$ &$1$ & $23.17\pm6$ & 6 & 8\\ 158 GeV & 0--5\% & $p$ & $(1/m_T)d^2N/dm_Tdy$ & 60 & $0.122\pm0.005$ & $0.424\pm0.011$ & $6670\pm190$ &$-2$& $0.02\pm 0.004$ & 15 & 13\\ & Central& $K^0_S$&$(1/m_T)dN/dm_T$ & 50& $0.137\pm0.006$ & $0.430\pm0.013$ & $6960\pm196$ & $0$ &$0.0017\pm0.0005$ & 10 & 6\\ & --& $\Lambda$&$(1/m_T)dN/dm_T$& --&$0.138\pm0.007$ & $0.404\pm0.009$ & $6400\pm185$& $0$ & $0.0016\pm0.0007$ & 10 & 3\\ & --& $\bar \Xi^+$& $(1/m_T)dN/dm_T$& 1/27& $0.156\pm0.006$& $0.284\pm0.011$ & $6113\pm183$ &$2$& $0.014\pm0.004$ & 2 & 1\\ & -- & $\Omega^-+\bar \Omega^+$&$(1/m_T)dN/dm_T$& 1/180& $0.157\pm0.005$&$0.200\pm0.012$ & $5800\pm200$ &$2$& $0.022\pm 0.003$ & 5 & 1\\ \cline{2-8} &43--100\%& $p$ &$(1/m_T)d^2N/dm_Tdy$ & 1/10& $0.117\pm0.006$ & $0.414\pm0.008$ & $6000\pm210$ & $-2$ & $6\times10^{-5}\pm3\times10^{-6}$ & 117 & 11\\ & Peripheral & $K^0_S$ &$(1/m_T)dN/dm_T$& 20& $0.129\pm0.006$ & $0.426\pm0.010$ & $6280\pm200$ & $0$ & $7\times10^{-7}\pm5\times10^{-8}$ & 3 & 6\\ & --& $\Lambda$&$(1/m_T)dN/dm_T$& --&$0.130\pm0.005$ & $0.390\pm0.011$ & $5700\pm215$ &$0$ & $7\times10^{-7}\pm3\times10^{-8}$ & 4 & 3\\ & -- & $\bar \Xi^+$&$(1/m_T)dN/dm_T$& 1/20& $0.146\pm0.007$& $0.274\pm0.010$ & $5413\pm180$ & $2$& $5\times10^{-6}\pm4\times10^{-7}$ & 2 & 2\\ & -- & $\Omega^-+\bar \Omega^+$&$(1/m_T)dN/dm_T$&1/300& $0.148\pm0.007$& $0.184\pm0.012$ & $5000\pm210$ &$2$ & $4\times10^{-6}\pm3\times10^{-7}$ & 1 & 1\\ \hline \end{tabular} \end{center}} \end{table*} At present, one can see that $T_0$ has no clear dependence on $m_0$.
$\pi^+$ and $p$ have lower values of $T_0$ than all of the strange and multi-strange particles, and they decouple from the system at the same time. $K^+$ and $K^0_S$ are lighter than the proton, but they both decouple from the system, and freeze out, earlier than the proton. This result is inconsistent with [16, 35, 53] (which show an obvious mass dependence of $T_0$), but is consistent with [54]; although the main idea of [54] differs from our current work, a larger $T_0$ for $K^+$ and $K^0_S$ than for the proton can be seen there. \begin{figure*}[htbp] \begin{center} \includegraphics[width=16.cm]{fig5} \end{center} Fig. 5. Variation of $V$ with $m_0$ and centrality for non-strange, strange and multi-strange particles in Cu-Cu, Au-Au and Pb-Pb central and peripheral collisions. \end{figure*} In the present work, it is observed that the kinetic freeze-out temperature of the multi-strange particles is considerably larger than that of the strange particles, which in turn is larger than that of the non-strange particles, revealing a picture of separate freeze-out processes for the non-strange, strange and multi-strange particles. We believe that the reason behind the very large kinetic freeze-out temperature of the multi-strange particles, followed by the strange particles, may be the interaction cross section: multi-strange (and strange) hadrons interact less with other hadrons, i.e. their cross sections are small, and hence they decouple early from the system, which allows their early freeze-out. Furthermore, the kinetic freeze-out temperature in Cu-Cu, Au-Au and Pb-Pb central collisions is larger than in peripheral collisions, and $T_0$ for all the particles in Cu-Cu central and peripheral collisions is smaller than in Au-Au central and peripheral collisions, which in turn is smaller than in Pb-Pb central and peripheral collisions; this exhibits the dependence of $T_0$ on the size of the interacting system.
The larger $T_0$ in the heaviest nuclei and in central collisions is due to the fact that a larger number of nucleons is involved in heavy nuclei and in the most central interactions, which leads the system to a higher excitation degree; however, a greater variety of data (hadron-hadron, hadron-nucleus and different nucleus-nucleus collisions) may be needed to extend this analysis in the future. Figure 4(b) shows the variation of $\beta_T$ with $m_0$. One can see that $\beta_T$ is slightly larger in central collisions than in peripheral collisions. Furthermore, it is observed that $\beta_T$ decreases with increasing particle mass, which indicates that heavier particles decouple from the system, and hence freeze out, earlier. This result is consistent with [16, 18, 55]. No dependence of $\beta_T$ on energy or on the size of the interacting system is observed in this work. Figure 5 is analogous to Fig. 4 but demonstrates the variation of $V$ with $m_0$ and centrality for the non-strange, strange and multi-strange particles in Cu-Cu, Au-Au and Pb-Pb central and peripheral collisions at 200, 62.4 and 158 GeV respectively. Different symbols are used to represent different particles, while the solid and open symbols represent the central and peripheral collisions respectively. One can see a larger $V$ for light particles, which decreases for heavier particles; this leads to a volume differential freeze-out scenario. Compared to peripheral collisions, central collisions correspond to a larger $V$ due to the large number of participant nucleons, which indicates a quicker approach of the system to the equilibrium state. Furthermore, $V$ is observed to depend on the size of the interacting system, as it is larger in Pb-Pb collisions than in Au-Au collisions, and larger in Au-Au than in Cu-Cu collisions. \begin{figure*}[htbp] \begin{center} \includegraphics[width=16.cm]{fig6} \end{center} Fig. 6.
Variation of $T_0$ with $V$ for non-strange, strange and multi-strange particles in Cu-Cu, Au-Au and Pb-Pb central and peripheral collisions. \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=16.cm]{fig7} \end{center} Fig. 7. Variation of $N_0$ with $m_0$ and centrality for non-strange, strange and multi-strange particles in Cu-Cu, Au-Au and Pb-Pb central and peripheral collisions. \end{figure*} Figure 6 (a)-(c) shows the correlation of $T_0$ and $V$. The solid and open symbols are used for the central and peripheral collisions, respectively, and different particles are represented by different symbols. $T_0$ decreases with increasing $V$ in both the most central and the most peripheral heavy-ion (Cu-Cu, Au-Au and Pb-Pb) collisions. Figure 7 is the same as Fig. 4, but it demonstrates the dependence of $N_0$ on $m_0$ and centrality. $N_0$ is not just a normalization constant; it also reflects the multiplicity. One can see a larger $N_0$ in central collisions compared to peripheral collisions. No dependence of $N_0$ on $m_0$ is observed. \\ {\section{Conclusions}} We summarize here our main observations and conclusions. (a) The transverse momentum spectra of non-strange, strange and multi-strange particles produced in central and peripheral Cu-Cu, Au-Au and Pb-Pb collisions have been analyzed with the BGBW model. The model results are in agreement with the experimental data in the $p_T$ ranges measured by the BRAHMS and STAR collaborations at RHIC and the NA49 and WA97 collaborations at the SPS. (b) Separate kinetic freeze-out scenarios for non-strange, strange and multi-strange particles are found, which shows the dependence of the kinetic freeze-out temperature of the particles on their interaction cross-sections and reveals a triple kinetic freeze-out scenario. In contrast, $\beta_T$ and $V$ are mass dependent and decrease with increasing $m_0$.
(c) The kinetic freeze-out temperature ($T_0$), transverse flow velocity ($\beta_T$) and kinetic freeze-out volume ($V$) are extracted from fits of the transverse momentum spectra to the experimental data. It is observed that $T_0$, $\beta_T$ and $V$ are slightly larger in central collisions than in peripheral collisions, and $T_0$ and $V$ are also generally larger in heavier interacting systems, while $\beta_T$ shows no dependence on the size of the interacting system. (d) The normalization constant ($N_0$), which reflects the multiplicity, is larger in central collisions than in peripheral collisions. \\ {\bf Data availability} The data used to support the findings of this study are included within the article and are cited at relevant places within the text as references. \\ \\ {\bf Compliance with Ethical Standards} The authors declare that they are in compliance with ethical standards regarding the content of this paper. \\ \\ {\bf Acknowledgements} The authors would like to thank the National Natural Science Foundation of China for its support (Grant Nos. 11875052, 11575190, and 11135011).
\section{Conclusions} \label{sec:conclusions} In this paper, we demonstrate the spontaneous apsidal clustering of orbits of low-mass bodies in $N$-body simulations of a primordial scattered disk between $\sim$100--1000 AU. As in \citet{zderic2020a}, we find that apsidal clustering begins after the inclination instability has saturated, and that the inclination instability is key to the formation of the lopsided mode. In simulations where the orbit-averaged, gravitational influence of the giant planets is included, we find that apsidal clustering occurs provided that the inclination instability is not suppressed. We also find that apsidal clustering only forms near the inner edge of the disk in the 100--320 AU range, with the specific range depending on the model, but we caution that our simulations have low numbers of particles, particularly at large semi-major axes. The fast orbital precession caused by the giant planets pushes the location of apsidal clustering out to larger semi-major axis. Finally, we find that the resulting lopsided mode strength oscillates, but appears long-lasting. \citet{lyndenbell1979} proposed a mechanism to explain stellar bar formation in the centers of galaxies that we extend here to near-Keplerian systems to explain the apsidal clustering that occurs in our simulations. Orbit-averaged torques from a weak, lopsided mode encourage orbits to precess towards alignment with the mode.\footnote{We assume an initial small mode is seeded by random fluctuations within the disk.} In a Keplerian system, if $\nicefrac{\partial{\dot{\varpi}}}{\partial{e}} |_{a} < 0$ then orbits will tend to align with and reinforce the mode. We call regions of $e$-$a$ space where $\nicefrac{\partial{\dot{\varpi}}}{\partial{e}} |_{a} < 0$ clustering regions. We have created contour plots of $\dot{\varpi}$ as a function of eccentricity and semi-major axis within the disk at different times.
We find that a clustering region forms during the peak of the inclination instability when the disk has formed a bowl-shape. The clustering region appears just before apsidal clustering begins, and it appears at the inner edges of the disk. In simulations with the orbit-averaged gravitational influence of the giant planets, the added $J_2$ inhibits circularization of the inner edge of the disk during the instability and amplifies circularization at larger semi-major axis. As a result, the clustering region is populated by bodies with larger semi-major axis in the $J_2$ models and apsidal clustering correspondingly occurs at larger semi-major axes. \begin{figure*}[!htb] \centering \includegraphics{gantt-timescales-J2N400.png} \caption{Rayleigh test results for $\omega$ and $\varpi$ binned by initial semi-major axis as a function of time for the J2N400 model scaled such that the inclination instability saturates (ceases exponential growth) at 250 Myr. Rayleigh test $p$-values less than 0.05 indicate that the distribution of $\omega$ or $\varpi$ is not uniform (i.e. it is clustered) and smaller $p$-values indicate stronger clustering. $\omega$-clustering begins after $\sim$10 Myr and persists until the end of the simulation at $\sim$1300 Myr, except for bodies with $a_0\in[100,180]$~AU. $\varpi$-clustering occurs for bodies with $a_0\in[100,320]$~AU after the inclination instability has saturated at $\sim$250 Myr, and it disappears and reappears intermittently.} \label{fig:clustering-J2N400} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics{gantt-timescales-J2N800.png} \caption{Rayleigh test results for $\omega$ and $\varpi$ binned by initial semi-major axis as a function of time for the J2N800 model scaled such that the inclination instability saturates (ceases exponential growth) at 250 Myr. Rayleigh test $p$-values less than 0.05 indicate that the distribution of $\omega$ or $\varpi$ is not uniform (i.e. 
it is clustered) and smaller $p$-values indicate stronger clustering. $\omega$-clustering begins after $\sim$10 Myr and persists until the end of the simulation at $\sim$1300 Myr. $\varpi$-clustering occurs primarily in bodies with $a_0\in[100,180]$~AU after the inclination instability has saturated, and it is more intermittent than in the J2N400 model. The initial $\varpi$-clustering in the outer $a_0$ bin is a random quirk of our initial conditions, and it quickly disappears.} \label{fig:clustering-J2N800} \end{figure*} The clustering region is directly correlated with the unique bowl-shaped orbital distribution created by the inclination instability. Due to orbital precession, the bowl-shaped distribution oscillates back and forth across the original plane of the disk, causing the clustering region to repeatedly disappear and reappear, and eventually, the bowl-shape disappears. However, we find that the lopsided mode created by the clustering region persists. We hypothesize that the mode eventually becomes massive enough to trap orbits without the help of the background disk potential. Surface density plots of our disks during the inclination instability show edge-on wing-like structures reminiscent of some debris disks (e.g. HD61005), and a lopsided mode in face-on views after the instability has saturated. In line-of-sight velocity, we see concentric circles of alternating sign associated with the bowl-shaped orbital distribution post-instability. Later, the lopsided mode creates spiral arms in line-of-sight velocity. Observational signatures like this in exoplanet disks could be caused by the inclination instability provided there is something to pump up the orbital eccentricity of the bodies in the disk (e.g. a giant planet).
In \citet{Zderic2020b}, we found that $\sim$20 Earth masses is required for a primordial scattered disk to resist the orbit-averaged quadrupole potential of the giant planets \textit{at their current locations} and undergo the inclination instability\footnote{Twenty Earth masses is extreme for a primordial scattered disk (indeed perhaps too massive for the instability to have occurred in our solar system). While hundreds of Earth masses may have been present in the early planetesimal disk beyond $\sim5$ AU, only a fraction appears to pass through the scattered disk region \citep{Nesvorny2018}.}. The e-folding timescale for the inclination instability in a scattered disk configuration with $N\rightarrow\infty$ and without added $J_2$ is \citep{Zderic2020b}, \begin{equation} t_{\rm e-fold} \sim \frac{2.4}{\pi} \frac{M_\odot}{M_d} P. \end{equation} For a 20 Earth mass disk, $t_{\rm e-fold} = 1.3\times10^{4}\,P$. Based on \citet{Zderic2020b} Figure 3, we expect this timescale to be increased by a factor of $\sim$4 in simulations with added $J_2 \lesssim J_{2,{\rm crit}}$. The inner edge semi-major axis is $100\,{\rm AU}$, so $P = 1000\,{\rm yr}$, and the e-folding timescale for the inclination instability in a 20 Earth mass scattered disk in the outer solar system is then $\sim$50 Myr. It takes about five e-folding timescales for the inclination instability to saturate. Therefore, the inclination instability in this primordial scattered disk should saturate after $\sim$250 Myr. We can estimate the duration of $\omega$ and $\varpi$ clustering in the 20 Earth mass primordial scattered disk using the J2N400 and J2N800 simulations. We set the saturation time in these simulations to be 250 Myr and scale the subsequent evolution of the disk using the secular timescale. For example, $t_{\rm sec} = 2.6\,{\rm Myr}$ (see Equation~\ref{eq:tsec}) for a 20 Earth mass disk.
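These numbers follow from simple arithmetic; a quick back-of-the-envelope check (assuming $M_\oplus/M_\odot \approx 3.0\times10^{-6}$; all other inputs are quoted above):

```python
import math

M_d = 20 * 3.0e-6                    # 20 Earth-mass disk, in solar masses
P = 1.0e3                            # orbital period at the 100 AU inner edge, yr

t_efold = (2.4 / math.pi) / M_d * P  # e-folding time without added J2, yr
t_efold_J2 = 4.0 * t_efold           # ~4x slower with J2 near J2_crit
t_saturate = 5.0 * t_efold_J2        # ~5 e-folds to saturation
# t_efold/P ~ 1.3e4, t_efold_J2 ~ 50 Myr, t_saturate ~ 250 Myr
```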
The J2N800 simulation runs for $\sim$400 $t_{\rm sec} \approx 1050\,{\rm Myr}$ after the saturation of the instability. We stress that this scaling is approximate and meant to provide a qualitative, order-of-magnitude estimate for the duration of angular clustering in an unstable primordial scattered disk. In Figures~\ref{fig:clustering-J2N400} and \ref{fig:clustering-J2N800}, we show the $p$-values for the Rayleigh $z$ test for uniformity \citep{MardiaAndJupp} on $\omega$ and $\varpi$ for the J2N400 and J2N800 models as a function of time in Myr using this proposed scaling. Rayleigh $z$ test $p$-values less than 0.05 signify that the angular distribution is not consistent with a uniform distribution, i.e. that the distribution is clustered. We chose the Rayleigh test as it is sensitive to unimodal deviations from uniformity. In both models, $\omega$-clustering begins after just a few tens of Myr and persists for the duration of the simulation except in the inner semi-major axis bin (100--180 AU) where differential precession is the strongest. Intermittent clustering in $\varpi$ begins after the inclination instability has saturated in the inner two bins in both simulations. Note that binning the results by $a_0$ partially mitigates the effects of differential precession, prolonging the duration of angular clustering. Overall, we expect a 20 Earth mass primordial scattered disk to be able to sustain $\omega$-clustering for $a \gtrsim 180$~AU (if binned by semi-major axis) and intermittent periods of $\varpi$-clustering for $a\in[100,320]$~AU for Gyr timescales. The inclination instability can raise perihelia and inclinations of bodies in the outer solar system. As such, it can effectively trap planetesimal mass at semi-major axes of hundreds of AU as bodies are isolated from strong scattering encounters with the giant planets.
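The Rayleigh statistic used above is simple to compute. A minimal sketch (not our analysis code) using the standard finite-sample $p$-value series of Mardia \& Jupp; the clamp guards the series approximation, which misbehaves at large $z$ where $p$ is effectively zero anyway:

```python
import numpy as np

def rayleigh_test(angles):
    """Rayleigh z test for a unimodal departure from circular uniformity.

    Returns (z, p). Small p means the angles are clustered.
    """
    n = len(angles)
    C, S = np.cos(angles).sum(), np.sin(angles).sum()
    Rbar = np.hypot(C, S) / n            # mean resultant length
    z = n * Rbar**2
    # Finite-n series approximation for the p-value (Mardia & Jupp)
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n)
                      - (24*z - 132*z**2 + 76*z**3 - 9*z**4) / (288*n**2))
    return z, min(max(p, 0.0), 1.0)
```

Applied to the $\omega$ or $\varpi$ values in a semi-major-axis bin, this reproduces the kind of $p$-value tracks shown in the figures.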
In \citet{Zderic2020b} we show that orbits with semi-major axes between $\sim 200-500$ AU end up with a (rather extraordinary) median perihelion distance of 150 AU post-instability; see e.g., Figure 6. The observed Sednoids in this scenario mark the inner edge of a massive reservoir of extremely detached bodies originating from the primordial scattered disk. The mass remaining at hundreds of AU today is an open question that we are actively exploring. Our simulations show that, post-instability, inter-orbit torques induce eccentricity (but not necessarily inclination) oscillations on particles in this structure. In future work we will calculate the flux of particles back into the inner solar system through these oscillations (which can cause perihelia to drop below the orbit of Neptune) and ultimately the mass loss rate. \section{Finding the Clustering Region} \label{app:finding-clustering} \begin{figure*}[!htb] \centering \includegraphics[]{contour-method-compare.pdf} \caption{Comparison of the two methods used to calculate the precession rate in the frozen disk potential. The left panel shows $d\varpi/dt$ calculated from a test particle simulation on a 100 by 100 $e$-$a$ grid while the right panel shows $d\varpi/dt$ calculated with the torque method from Appendix~\ref{app:finding-clustering} on a 10 x 10 grid. The two methods agree qualitatively, though the torque calculation gives faster precession rates than the test particle simulations. Both plots were made using data from an $N=400$ compact configuration simulation without added $J_2$.} \label{fig:method-compare} \end{figure*} In Section~\ref{sec:clustering-region-emerges}, we show the clustering region within the disk using contour plots of $\dot{\varpi}$ in $e$-$a$ space. In these plots, $\dot{\varpi}$ is calculated using test particle simulations described in the same section. 
Alternatively, the instantaneous precession rate of a test orbit can be calculated directly from the torques and forces it experiences, without integrating the orbit. In particular, the orbital precession rate can be computed from the torque and the time derivative of the eccentricity vector, viz. \begin{align} \dot{\bm{e}}=\frac{\bm{f \times j}}{G M}+\frac{\bm{v \times \tau}}{G M}, \label{eq:ederiv} \end{align} where $\bm{f}$ and $\bm{\tau}$ are the specific force and torque on the test orbit; $\bm{v}$ is the velocity; $M$ is the central mass (see equation 1 in \citealt{madigan2017} and surrounding discussion). In order to validate the results of \S~\ref{sec:clustering-region-emerges}, we use equation~\eqref{eq:ederiv} to calculate the precession rates of test orbits injected into $N$-body simulations. Specifically, we \begin{enumerate} \item Discretize each orbit into one thousand equal-mass points, evenly spaced in mean anomaly. \item Compute the total force and torque ($\bm{\tau}$) on each point along the test orbit from all of the disk orbits. \item Use equation~\eqref{eq:ederiv} to determine $\dot{\bm{e}}$ at each point along the test orbit. \item Average $\bm{\tau}$ and $\dot{\bm{e}}$ over the test orbit. \item Check for convergence by repeating the above steps with a new set of discretized points (evenly spaced in mean anomaly between the existing ones). With one thousand points, the results are always converged within 10\%. (And usually it is much better: for 90\% of test orbits we obtain convergence within 3\%.) \end{enumerate} Similar methods have previously been used in studies of resonant relaxation in the Galactic center (e.g. \citealt{gurkan&hopman2007}).
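The steps above can be sketched in a few dozen lines. This is a stripped-down planar version under assumed code units $GM=1$: the disk is represented by an arbitrary set of softened point masses rather than discretized orbits, and the convergence-refinement step is omitted.

```python
import numpy as np

GM = 1.0  # G * (central mass), code units

def solve_kepler(Manom, e, n_iter=50):
    """Newton iteration for the eccentric anomaly E: E - e sin E = M."""
    E = Manom + e * np.sin(Manom)
    for _ in range(n_iter):
        E -= (E - e*np.sin(E) - Manom) / (1.0 - e*np.cos(E))
    return E

def orbit_points(a, e, n_pts=1000):
    """Planar Kepler orbit sampled uniformly in mean anomaly (step 1)."""
    Manom = np.linspace(0.0, 2.0*np.pi, n_pts, endpoint=False)
    E = solve_kepler(Manom, e)
    b = a * np.sqrt(1.0 - e*e)                 # semi-minor axis
    n = np.sqrt(GM / a**3)                     # mean motion
    Edot = n / (1.0 - e*np.cos(E))
    r = np.stack([a*(np.cos(E) - e), b*np.sin(E), np.zeros(n_pts)], axis=1)
    v = np.stack([-a*np.sin(E)*Edot, b*np.cos(E)*Edot, np.zeros(n_pts)], axis=1)
    return r, v

def edot_orbit_averaged(a, e, pert_pos, pert_mass, soft=1e-3):
    """Steps 2-4: orbit-averaged d(e-vector)/dt from equation (A1),
    edot = (f x j)/GM + (v x tau)/GM, with tau = r x f."""
    r, v = orbit_points(a, e)
    j = np.cross(r, v)                                    # specific ang. mom.
    d = pert_pos[None, :, :] - r[:, None, :]              # point -> perturber
    inv_d3 = ((d**2).sum(axis=2) + soft**2)**-1.5
    f = (pert_mass[None, :, None] * d * inv_d3[..., None]).sum(axis=1)
    tau = np.cross(r, f)
    edot = (np.cross(f, j) + np.cross(v, tau)) / GM
    return edot.mean(axis=0)                              # uniform-M average
```

Because the points are uniform in mean anomaly, the simple mean over points is the time average over the orbit; the sampling can be validated by reconstructing the eccentricity vector $\bm{e}=(\bm{v}\times\bm{j})/GM-\hat{\bm{r}}$ at every point.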
To compare to the precession rate in the preceding section it is necessary to convert from $\dot{\bm{e}}$ and $\bm{\tau}$ to $\dot{\varpi}=\dot{\Omega}+\dot{\omega}$.\footnote{Note that $\dot{\varpi}=\dot{\Omega}-\dot{\omega}$ for retrograde orbits, which are not considered here.} The Kepler orbital angles, $\Omega$ and $\omega$, are related to the eccentricity and angular momentum vectors as follows \begin{align} &\Omega=\arctan\left(\frac{\hat{n}_y}{\hat{n}_x}\right)\nonumber,\\ &\omega=\arccos\left(\hat{n}\cdot \hat{e}\right)\nonumber,\\ &\bm{n}=\hat{\textbf{z}} \bm{\times} \hat{\textbf{\j}}. \end{align} Then $\dot{\Omega}$ and $\dot{\omega}$ can be approximated as \begin{align} &\dot{\Omega} \approx \frac{\Omega(\bm{j} (t+\delta t), \bm{e} (t+\delta t))-\Omega(\bm{j} (t), \bm{e} (t))}{\delta t}\nonumber,\\ &\dot{\omega} \approx \frac{\omega(\bm{j} (t+\delta t), \bm{e} (t+\delta t))-\omega(\bm{j} (t), \bm{e} (t))}{\delta t}\nonumber,\\ &\bm{j} (t+\delta t)\approx \bm{j}(t)+ \bm{\tau}\delta t\nonumber,\\ &\bm{e} (t+\delta t)\approx \bm{e}(t)+ \bm{e'}\delta t, \end{align} where $\delta t$ is a small time interval. Here we use $\delta t=10^{-6} |\bm{j}|/|\bm{\tau}|$. We have verified that the results do not depend on $\delta t$. The precession rate calculated using this torque method is compared to the precession rate from the test particle simulations in Figure~\ref{fig:method-compare}. Note that this plot shows results from a compact orbital configuration not a scattered disk orbital configuration. The compact configuration is an axisymmetric, nearly-flat disk of Keplerian orbits in which all bodies have identical eccentricities and nearly identical semi-major axes. This limited radial structure simplifies analysis. The two methods qualitatively agree, however, the test particle simulation method gives slower precession rates than the torque calculation. 
This is because two-body interactions in the test particle simulations (not accounted for in the torque calculation) weaken secular torques. \section{Introduction} \label{intro} Something odd is going on in the outer solar system: distant bodies in orbit beyond Neptune appear clustered in argument of perihelion \citep[$\omega$;][]{Trujillo2014} and longitude of perihelion \citep[$\varpi$;][]{Batygin2016}. Some have extreme inclinations that cannot be generated in the standard model of solar system evolution \citep{Gladman2009, Chen2016, Becker2018, Kaib2019}, and others are ``detached'' in the sense that they have perihelia that lie far beyond the gravitational reach of the giant planets \citep[e.g.,][]{Brown04}. Observational biases have been carefully demonstrated in outer solar system surveys \citep{Shankman2017,Lawler2017,Kavelaars2020,Napier2021}, but whether they can fully explain the anomalous orbital structure of Trans-Neptunian objects (TNOs) remains a contentious issue \citep{Brown2017, Brown2019}. If they do not, the outer solar system requires a new source of gravitational perturbation. One such source could be a planet far beyond the orbit of Neptune (for reviews see \citet{Batygin2019} and \citet{Trujillo2020}). We propose a different, internal source: the self-gravity of the bodies themselves. The collective gravity of bodies on eccentric orbits in an axisymmetric near-Keplerian disk drives a dynamical instability \citep{Madigan2016,Madigan2018b}. This ``inclination instability'' exponentially grows the inclinations of orbits while decreasing their eccentricities, raising their perihelia and clustering their arguments of perihelion ($\omega$). In a recent paper, \citet{Zderic2020b}, we showed that $\mathcal{O}(20)$ Earth masses are required for the instability to occur in a primordial scattered disk between $\sim10^2-10^3$ AU in the solar system under the orbit-averaged, gravitational influence of the giant planets at their current locations.
The instability can also generate a gap in perihelion at $\sim50 - 75$ AU, as observed in the outer solar system \citep{Kavelaars2020, Oldroyd2021}. The saturation timescale, that is, the time at which inclinations cease exponential growth, for the instability in a 20 Earth mass disk is far less than the age of the solar system. Therefore, to connect to the present-day outer solar system we need to understand the non-linear, saturated state of the instability. We are further motivated by the results of \citet{zderic2020a} where we discovered late-time apsidal clustering of orbits in the disk plane, albeit in simulations with highly idealized initial conditions. Here we show that the same late-time clustering occurs in a primordial scattered disk between $\sim10^2-10^3$ AU in the solar system under the gravitational influence of the giant planets. We essentially take the more realistic simulation conditions of \citet{Zderic2020b} and extend them past saturation to look for in-plane clustering. We show that the apsidal clustering can be explained using \citet{lyndenbell1979}'s mechanism for bar formation in disk galaxies. Our paper proceeds as follows: in \S\ref{sec:LB} we describe the Lynden-Bell mechanism for bar formation and show how it may be applied to near-Keplerian systems. In \S\ref{sec:methods} we describe our numerical methods and in \S\ref{sec:results} present our results. In \S\ref{sec:observables} we show surface density and line of sight velocity plots of our simulations at different times, and we conclude in \S\ref{sec:conclusions}. \section{The Lynden-Bell Mechanism in near-Keplerian Systems} \label{sec:LB} In \citeyear{lyndenbell1979}, Donald Lynden-Bell described a mechanism by which bars may be formed in the centers of galaxies. We reproduce the basic argument here. In a general galactic potential, a typical orbit is a rosette with an angle between $\pi$ and $2\pi$ linking consecutive apocenters. 
If we view an orbit from rotating axes, we may choose the rotation speed $\nu_i$ such that the angle between apocenters will be $\pi$. The orbit will then be bisymmetric, like a centered oval or ellipse. If $\nu$ is the mean angular speed of a star about the galaxy and $\kappa$ is its radial angular frequency, then we should choose $\nu_i = \nu - \nicefrac{\kappa}{2} > 0$. For near-circular orbits, $\nu_i$ will not vary much over a large region of a galaxy \citep{BinneyTremaine1987}. We now introduce a weak, bar-like potential, rotating with pattern speed $\nu_p \approx \nu_i$, and consider its interaction with an orbit. In the frame co-rotating with $\nu_p$, the star's orbit is an almost closed oval which rotates at a slow rate $\nu_i - \nu_p \ll \nu$. There is no time for the weak perturbing potential to affect the star's fast motion around the oval, so the orbit has an adiabatic invariant, $\frac{1}{2\pi} \oint \bm{p} \cdot d\bm{q} = 2J_f \sim {\rm const}$, where $\bm{q}$ is a vector of polar coordinates ($R, \phi$), $\bm{p}$ is the polar conjugate momentum, and the integral is taken over one closed, bi-symmetric orbit. However, the potential will exert a persistent weak torque on the oval as a whole because they move slowly with respect to one another. Hence the oval will change to another oval with the same $J_f$ but different angular momentum $j$. If the orbit is ahead of the bar in its rotation, its angular momentum will decrease due to the gravitational torque from the bar. Normally, $\nu_i$ will increase in response such that the orbit is repelled by the bar. In other words, the bar repels the orbit because $\nicefrac{\partial{\nu_i}}{\partial j} |_{J_f}$ (the Lynden-Bell derivative; \citealt{Poly2004}) is negative. In the abnormal case in which the Lynden-Bell derivative is positive, however, the orbit will oscillate about the bar-like potential. In such cases, the orbit adds to the strength of the potential which will then be able to capture more and more orbits.
To discover what regions of a galaxy lead to the barring of near-resonant orbits, Lynden-Bell calculated $\nu_i (J_f, j)$ for an isochrone galactic potential which permits analytic expressions for the angular frequencies $\kappa$ and $\nu$. He showed that an abnormal region is associated with central regions in this model where circular velocity rises with radius. \citet{Poly2020} recently expanded upon Lynden-Bell's work by mapping the equilibria of orbits as a function of $\nu_i$, the Lynden-Bell derivative, and the orbit's responsiveness to the bar potential. We now extend this argument to a near-Keplerian system, where the gravitational potential is dominated by a central mass. A typical orbit is an almost-closed ellipse. As in \citet{lyndenbell1979} we focus on the idealized planar problem, though we note that our simulations in the next sections are three-dimensional. The orientation of the ellipse in the orbital plane is given by the longitude of pericenter, $\varpi$, and its rate of change $\dot{\varpi}=d\varpi/dt=\nu - \kappa$ indicates its precession rate. If we view the orbit from rotating axes, we may choose the rotation speed such that the angle between apocenters is zero, $\nu_i = \dot{\varpi} = 0$. The orbit will then be a closed ellipse with the central body occupying one focus. Following Lynden-Bell's argument, we now introduce a weak, {\it lopsided} potential rotating with pattern speed $\nu_p \approx \dot{\varpi}$ and consider its gravitational influence on an orbit. Here the precession rate of the orbit is by definition much less than the orbital period even in an inertial frame. In this case, $\dot{\varpi} \ll \nu$ and $\nu_p \ll \nu$, thus $\dot{\varpi} - \nu_p \ll \nu$. Therefore, the secular average over mean anomaly is equivalent to Lynden-Bell’s average over fast orbital motion, and $J_f \rightarrow I$ where $I = \sqrt{G M a}$, $M$ is the central mass and $a$ is the semi-major axis (see \citealt{merritt2013, fouvry+2021}). 
The lopsided potential exerts a persistent torque on the orbit, changing the orbit's angular momentum at fixed semi-major axis. The specific angular momentum of a Kepler orbit is given by $j = \sqrt{G M a (1 -e^2)}$, where $e$ is the magnitude of the orbital eccentricity. At fixed semi-major axis, angular momentum is a monotonically decreasing function of eccentricity. In Kepler elements, the Lynden-Bell derivative $\left(\nicefrac{\partial{\nu_i}}{\partial j} |_{J_f}\right)$ is $\propto - \, \nicefrac{\partial{\dot{\varpi}}}{\partial e} |_{a}$. Lynden-Bell's `abnormal region' specifically refers to prograde precession with magnitude increasing with increasing angular momentum. In Kepler elements, this corresponds to a region where precession is prograde with magnitude decreasing with increasing eccentricity. The interpretation of `normal' and `abnormal' regions changes with context, e.g. the abnormal region described above is actually typical in lopsided eccentric disks \citep{Madigan2018}. Therefore, we will refer to regions where apsidal clustering is supported as {\it clustering regions}, and regions where clustering is not supported as {\it anti-clustering regions}. We note that it is also possible to have a clustering region with retrograde-precessing orbits: if precession is retrograde and the magnitude of the precession rate increases with increasing eccentricity then orbits will be attracted to a perturbing potential. Orbits can be trapped in modes in near-Keplerian systems provided that $\nicefrac{\partial{\dot{\varpi}}}{\partial e} |_{a} < 0$ regardless of the sign of $\dot{\varpi}$; we define the clustering region to be any region in the disk where $\nicefrac{\partial{\dot{\varpi}}}{\partial e} |_{a} < 0$.
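Given a precession-rate map $\dot{\varpi}(e,a)$ sampled on a grid (as in our contour plots), the clustering-region criterion reduces to the sign of a single partial derivative. A minimal sketch, assuming eccentricity varies along the first array axis:

```python
import numpy as np

def clustering_mask(e_grid, pomega_dot):
    """Boolean map of the clustering region: d(pomega_dot)/de at fixed a < 0.

    pomega_dot has shape (len(e_grid), n_a): precession rate on an e-a grid,
    with eccentricity varying along axis 0 and semi-major axis along axis 1.
    """
    dpom_de = np.gradient(pomega_dot, e_grid, axis=0)
    return dpom_de < 0.0
```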
This work was supported by a NASA Solar System Workings grant (80NSSC17K0720) and a NASA Earth and Space Science Fellowship (80NSSC18K1264). This work utilized resources from the University of Colorado Boulder Research Computing Group, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. \software{\texttt{REBOUND} \citep{Rein2012}, \texttt{REBOUNDX} \citep{Tamayo2019}, \texttt{GNU Parallel} \citep{Tange2011a}} \section{Measuring apsidal clustering} \label{app:measure-apsidal} \begin{figure*}[!thb] \centering \includegraphics[width=0.75\textwidth]{compact-apsidal-clustering.pdf} \caption{Post-instability apsidal clustering in a compact orbital configuration. The top row shows $e_z$ on the left and a histogram of each particle's $\omega$ on the right at the two times marked by colored vertical lines in the $e_z$ plot. The bottom row shows $e_R$ on the left and a histogram of each particle's $\varpi$ on the right at the two times marked in the $e_R$ plot. This shows that the peaks in $e_R$ correspond to $\varpi$ clustering, and that zero $e_z$ can still be $\omega$-clustered. The inclination instability saturates at around $10\,t_{\rm sec}$ in this simulation. This data comes from a 400 particle compact configuration simulation without added $J_2$.} \label{fig:compact-clustering} \end{figure*} \begin{figure*}[!thb] \centering \includegraphics[width=0.75\textwidth]{aop-lan-lop.pdf} \caption{Orbital angles of bodies from the compact configuration simulation shown in Figure~\ref{fig:compact-clustering} at $t=11\,t_{\rm sec}$. Arguments of pericenter $\omega$ are highly clustered, while $\Omega$ and $\varpi$ show no clustering and are statistically consistent with a uniform distribution (Kuiper's test). This is what the bowl-shape driven by the inclination instability looks like in Kepler angles. 
} \label{fig:aop-clustering} \end{figure*} Here we demonstrate the connection between the components of the mean, normed eccentricity vector, $e_R$ and $e_z$ (see equation~\ref{eq:eReTez}), and the Kepler elements argument of pericenter, $\omega$, and longitude of pericenter, $\varpi$. In Figure~\ref{fig:compact-clustering}, we reproduce a result from \citet{zderic2020a} in which we demonstrate the appearance of apsidal clustering in a simulation of particles in a compact orbital configuration. Initially, the disk is axisymmetric; $e_R$ is below the noise floor. As the top panel shows, the inclination instability begins at $t \lesssim t_{\rm sec}$ and saturates at $\sim\!10\,t_{\rm sec}$. After the instability saturates and orbits apsidally precess back through the mid-plane ($e_z \approx 0$), we begin to see statistically significant $e_R$, indicating in-plane apsidal clustering. The right panels show histograms of $\omega$ and $\varpi$ of all bodies at two times which are marked with color-matched dashed lines. An $e_R$ above the noise floor corresponds to $\varpi$-clustering, but an $e_z$ below the noise floor can still be $\omega$-clustered. Using the mean unit eccentricity vector instead of $\varpi$ to measure apsidal clustering has two advantages: the mean unit eccentricity vector is 3D and can capture out-of-plane clustering, and statistical analyses on compound angles like $\varpi$ can be misleading. We demonstrate the first point in Figure~\ref{fig:aop-clustering}. At $t\sim10\,t_{\rm sec}$, the bodies in the disk have a uniform $\varpi$ distribution suggesting that there is no apsidal clustering. However, the $z$ component of the mean unit eccentricity vector is large (see top left panel of Figure~\ref{fig:compact-clustering}), indicating that the orbits' apses are strongly clustered perpendicular to the plane. Statistics on a compound angle can be misleading.
If either $\omega$ or $\Omega$ is uniformly distributed, and $\omega$ and $\Omega$ are independent and have continuous distributions, then $\varpi$ will also be uniformly distributed. In essence, $\omega$ or $\Omega$, whichever is uniformly distributed, has the capacity to erase the other's distribution in $\varpi$. This can be seen in Figure~\ref{fig:aop-clustering}. At $11\,t_{\rm sec}$, $\omega$ is highly clustered, nearly a delta function, while both $\Omega$ and $\varpi$ are uniformly distributed. We now prove this. We define the normalized distributions of $\varpi$, $\omega$, and $\Omega$ as $f(\varpi)$, $g(\omega)$, and $h(\Omega)$, and recall that $\varpi = \omega + \Omega$. These distributions are periodic, e.g., $f(\varpi) = f(\varpi+2\pi)$. The distribution of the sum of two continuous, independent random variables is given by the convolution of the two distributions, \begin{equation} f(\varpi) = \int_0^{2\pi} g\left(\varpi-\Omega\right) h\left(\Omega\right) d\Omega. \end{equation} If the distribution of $\Omega$ is uniform, $h(\Omega) = \nicefrac{1}{2\pi}$, then, \begin{equation} f(\varpi) = \frac{1}{2\pi} \int_0^{2\pi} g\left(\varpi-\Omega\right)d\Omega. \end{equation} Switching back to $\omega$, \begin{align} f(\varpi) &= \frac{1}{2\pi} \int_{\varpi-2\pi}^{\varpi} g(\omega)d\omega, \\ &= \frac{1}{2\pi} \int_{\varpi}^{\varpi + 2\pi} g(\omega - 2\pi)d\omega, \\ &= \frac{1}{2\pi} \int_{\varpi}^{\varpi + 2\pi} g(\omega)d\omega, \\ &= \frac{1}{2\pi} \int_{0}^{2\pi} g(\omega)d\omega, \\ &= \frac{1}{2\pi}, \end{align} where we've used the normalization of $g(\omega)$, \begin{equation*} 1 = \int_{0}^{2\pi} g(\omega) d\omega, \end{equation*} and the identity, \begin{equation*} \int_{y}^{y + 2\pi} F(x)dx = \int_{0}^{2\pi} F(x)dx, \end{equation*} which holds for any $y\in\mathbb{R}$ and function $F(x)$ periodic with period $2\pi$. This proof holds in the case where $\omega$ and/or $\Omega$ is uniformly distributed.
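The result is also easy to check numerically; a quick Monte Carlo sketch (the distribution parameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
omega = rng.normal(1.0, 0.05, n) % (2.0*np.pi)   # tightly clustered omega
Omega = rng.uniform(0.0, 2.0*np.pi, n)           # uniformly distributed node
pomega = (omega + Omega) % (2.0*np.pi)           # longitude of pericenter

om_counts, _ = np.histogram(omega,  bins=20, range=(0.0, 2.0*np.pi))
po_counts, _ = np.histogram(pomega, bins=20, range=(0.0, 2.0*np.pi))
# omega piles up in one bin; pomega stays consistent with n/20 per bin.
```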
\section{$N$-Body Simulations} \label{sec:methods} Our $N$-body simulations use the open-source framework REBOUND with the IAS15 adaptive timestep integrator \citep{Rein2012, Rein2015}\footnote{The fixed timestep WHFast integrator, while faster than IAS15, doesn't conserve energy and angular momentum well in this high eccentricity problem \citep[see also][]{Rauch1999}. The performance of the MERCURIUS integrator is similar to IAS15's due to frequent close encounters between particles.}. Additionally, we use REBOUNDx \citep{Tamayo2019} to add a zonal harmonic, $J_2$, to the central body to emulate the orbit-averaged effects of the giant planets. All particles in our simulations are massive and fully-interacting. In this paper, the Kepler elements semi-major axis ($a$), eccentricity ($e$), inclination ($i$), argument of pericenter ($\omega$), longitude of the ascending node ($\Omega$), and mean anomaly ($\mathcal{M}$) are used to describe the orbits. \begin{deluxetable}{ccc} \tablecaption{Model names and parameters.} \tablehead{ \colhead{Model ID} & \colhead{$J_2$} & \colhead{$N$} } \startdata N400 & No & 400\\ N800 & No & 800\\ J2N400 & Yes & 400\\ J2N800 & Yes & 800 \enddata \end{deluxetable} The total disk mass used in the simulations is $M_d = 10^{-3}\,M_\odot$ and the number of particles is 400 or 800 (see Table 1). This unrealistically large disk mass is chosen to accelerate secular dynamics (see Equation~\ref{eq:tsec}) within the disk, reducing the number of orbits we need to simulate. In addition, the low $N$ is required to reduce the simulation walltime per orbit. The orbital distribution of our disks is initialized to approximately model a primordial scattered disk in the outer solar system \citep{Duncan1987}.
The model is axisymmetric with an order of magnitude spread in semi-major axis, $a_0$, the values of which are drawn from a 1D log-uniform distribution between $[10^2,10^3]$~AU (this is equivalent to a surface density distribution of $a^{-2}$)\footnote{We have simulated other 1D semi-major axis distributions, for example $a^{-0.7}$ \citep{Napier2021} and $a^{-2.5}$ \citep{Duncan1987}. The instability proceeds similarly but its timescale decreases with increasing distribution steepness.}. All bodies have the same initial pericenter distance, $p_0= 30$~AU. Inclination $i_0$ is drawn from a Rayleigh distribution with a mean of $5^\circ$, and $\omega$, $\Omega$, and $\mathcal{M}$ are chosen uniformly from 0 to $2\pi$ radians\footnote{Disks with larger initial inclinations also undergo the instability provided that mean $i_0$ is less than $\sim20^\circ$.}. We add a $J_2$ potential to the central body in half of our simulations (see Table 1), and pick the $J_2$ moment to lie in the ``transition region'' where the $J_2$ potential alters the inclination instability without suppressing it \citep{Zderic2020b}. Our chosen disk mass and number of particles, choices forced by numerical limitations, are unrealistic. The instability timescale and the maximum $J_2$ that the disk can resist (that is, still undergo the instability) both depend on these key parameters. The low $N$ in our simulations leads to artificially strong self-stirring that weakens the secular torques that cause the inclination instability and increases differential precession by excessively spreading out the disk \citep{Madigan2018b}. For the same total mass, a disk with more particles will be able to resist a larger $J_2$. We determined how the inclination instability timescale scales with $M_d$ and $N$ in \citet{Madigan2018b}.
Then in \citet{Zderic2020b}, we used that timescale scaling along with simulations of these disks with added $J_2$ to find that a $\sim20\,M_\oplus$ primordial scattered disk could resist the $J_2$ of the giant planets. We found that this realistic system would be in the transition region. The J2N400 and J2N800 models in this paper are in the transition region too. Therefore, these models, which have unrealistic $J_2$, $M_d$, and $N$, are dynamically similar to a 20 Earth mass primordial scattered disk, at least with regard to $J_2$. For the sake of reproducibility, the $J_2 R^2$ used in these simulations is $0.3\,{\rm AU}^2$. We use the same $J_2 R^2$ value for J2N400 and J2N800 even though these simulations have different critical $J_2$ values, because this $J_2 R^2$ is sufficient to put both models in the transition region. The $J_2 R^2$ for the solar system is $0.06\,{\rm AU}^2$; using solely a secular scaling ($10^{-3}\,M_\odot / 20\,M_\oplus \approx 16$), $J_2 R^2$ would be $0.96\,{\rm AU}^2$ for a $10^{-3}\,M_\odot$ mass disk. This $J_2 R^2$ is about 3 times larger than the actual $J_2 R^2$ used in our simulations. An $N \rightarrow \infty$ disk can resist about 3 times more $J_2 R^2$ than an $N = 400$ disk. Simulation times are given in units of the secular timescale: \begin{equation} \label{eq:tsec} t_{\rm sec} = \frac{1}{2\pi}\frac{M_\odot}{M_{\rm d}}P \end{equation} where $P$ is the orbital period at the innermost part of the disk. For $M_d = 10^{-3}\,M_{\odot}$, $t_{\rm sec} \approx 160\,P \approx 0.16\,{\rm Myr}$, where $P(a = 100 \,{\rm AU}) = 10^3 \,{\rm yr}$. We give timescales for a more realistic 20 Earth mass primordial scattered disk in Section~\ref{sec:conclusions}.
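The initial conditions and the secular timescale above can be summarized in a short script. This is a minimal sketch for reproducibility, not the actual simulation setup code; the variable names and the random-number generator are our own choices:

```python
# Sketch of the disk initial conditions and secular timescale described in
# Section "N-Body Simulations". Numbers follow the text (model N400).
import numpy as np

rng = np.random.default_rng(0)
N = 400                       # number of bodies (model N400)
M_disk = 1.0e-3               # disk mass in units of M_sun

# Semi-major axes: log-uniform on [10^2, 10^3] AU (surface density ~ a^-2).
a0 = 10.0 ** rng.uniform(2.0, 3.0, size=N)

# Common initial pericenter distance p0 = 30 AU fixes the eccentricity.
e0 = 1.0 - 30.0 / a0

# Rayleigh inclinations with mean 5 deg (Rayleigh mean = sigma*sqrt(pi/2)).
sigma_i = np.radians(5.0) / np.sqrt(np.pi / 2.0)
i0 = rng.rayleigh(sigma_i, size=N)

# omega, Omega, and mean anomaly: uniform on [0, 2*pi).
omega0, Omega0, M0 = rng.uniform(0.0, 2 * np.pi, size=(3, N))

# Secular timescale, t_sec = (1/2pi)(M_sun/M_disk) P, with the innermost
# orbital period P(a = 100 AU) = 10^3 yr:
P_inner = 1.0e3               # yr
t_sec = P_inner / (2.0 * np.pi * M_disk)
print(t_sec / P_inner)        # ~160 inner orbits, i.e. t_sec ~ 0.16 Myr
```

These samples could then be passed to an $N$-body integrator; the point here is only to make the stated distributions and the $t_{\rm sec} \approx 160\,P$ normalization concrete.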
\section{Simulated Observations} \label{sec:observables} While the inclination instability appears to be promising in explaining the clustered, detached orbits of extreme Trans-Neptunian Objects in the outer Solar System, it should also occur in exoplanet systems with at least one giant planet that can form a massive scattered disk. In particular, it provides a mechanism for creating asymmetric debris disk structures such as the wing-like features in HD 61005 \citep{MacGregor2019}. In Figure~\ref{fig:observables}, we plot snapshots of the J2N400 simulation\textemdash a primordial scattered disk with the orbit-averaged gravitational influence of the giant planets. We first populate each orbit of the simulation with 100 particles spaced uniformly in mean anomaly to increase the effective resolution. We then make maps of surface density and velocity along the line of sight with a pixel resolution of 20 AU. The surface density, $\Sigma$, of the disk in face-on (top frames) and edge-on (bottom frames) orientations is plotted in the left-hand columns. Time increases from left to right and down the columns. In the $x/y$-plane the particles orbit in the counter-clockwise direction. Except for the innermost edge of the disk, the orbits precess in the clockwise direction. The initially thin, axisymmetric disk undergoes the inclination instability, buckling above and below the plane ($t \approx 196$--$303\,t_{\rm sec}$). The lopsided mode develops in the $x/y$-plane as differential precession disperses the asymmetric distribution of orbits in the $x/z$-plane. At early times, a spiral arm links the inner disk to the most over-dense region of the mode in the outer disk. \begin{figure*} \centering \includegraphics[trim=2.5cm 4.5cm 3.25cm 4.25cm, clip=true, width=\textwidth]{surf-den-and-los-velo.png} \caption{Snapshots in time of the J2N400 simulation\textemdash a primordial scattered disk with the orbit-averaged gravitational influence of the giant planets.
Surface density, $\Sigma$, and velocity along the line of sight, $v_{\rm los}$, are plotted for face-on and edge-on orientations. The inclination instability occurs around 196~$t_{\rm sec}$. At 303~$t_{\rm sec}$, we observe a spiral in the line of sight velocity when the disk is viewed face-on.} \label{fig:observables} \end{figure*} To the right of the surface density plots, we plot the corresponding velocity along the line of sight, $v_{\rm los}$. Red and blue colors illustrate red-shifted and blue-shifted velocities with respect to the observer. The initial velocity distribution is dominated by rotation around the Sun, as shown in the $x/z$-plane. The collective rolling and pitching of the orbits about their major and minor axes (captured by the angles $i_a$ and $i_b$ in Figure~\ref{fig:sd-J2N400}) is apparent in the velocity map at $196\,t_{\rm sec}$, which shows the resulting concentric circles of red-shifted and blue-shifted velocities. We note that this is equivalent to the clustering of the orbits in argument of pericenter ($\omega$). At $t \approx 303 \, t_{\rm sec}$, a spiral arm in velocity space appears in the $x/y$-plane. This occurs as the amplitude of $i_a$ for the inner orbits ($a \lesssim 500$ AU) passes through zero but their $i_b$ values are significantly non-zero, as seen in Figure~\ref{fig:sd-J2N400} at $t \approx 303 \, t_{\rm sec}$. The lopsided mode in the $x/y$-plane is apparent in surface density before we see the spiral arm in velocity space. The over-dense cluster of orbits leads to positive line of sight velocities on one side of the clump and negative on the other, leading to the appearance of a spiral. Another spiral arm appears in velocity space when the orbits in the disk pass through $i_a = 0$ again at $t \approx 480 \, t_{\rm sec}$, as seen in Figure~\ref{fig:sd-J2N400}. At all other times we see the concentric circles of red-shifted and blue-shifted velocities, alternating as the orbits coherently precess above and below the mid-plane.
Velocities in the $x/z$-plane continue to show rotation in an increasingly thick disk. \section{Results} \label{sec:results} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scatdisk-N400.pdf} \caption{Model N400: Apsidal clustering occurring in the inner edge of a scattered disk model without added $J_2$ after the inclination instability has saturated. The plotted quantities are binned by initial semi-major axis. Two noise floors are shown for $e_R$ and $e_z$ (see Section~\ref{sec:results} for noise floor definition). The inclination instability is shown by exponential growth in $i$, $i_a$, and $i_b$ and a corresponding decrease in $e$ to conserve total angular momentum of the disk. The instability saturates at $t\sim125\,t_{\rm sec}$. About $25\,t_{\rm sec}$ later, $e_R$ for the inner bin ($a_0 \in [100,180]$ AU) increases above the noise floor indicating in-plane apsidal clustering. About $50\,t_{\rm sec}$ later, slight in-plane apsidal clustering appears in the next bin ($a_0 \in [180,320]$ AU).} \label{fig:sd-N400} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scatdisk-N800.pdf} \caption{Model N800: Apsidal clustering occurring in the inner edge of an 800 particle scattered disk model without added $J_2$ after the inclination instability has saturated. Same panels as in Figure~\ref{fig:sd-N400}. The plotted quantities are binned by initial semi-major axis, and $e_R$ and $e_z$ noise floors are shown. Compared to N400, the instability saturates at an earlier time, $t\sim100\,t_{\rm sec}$, and, in-plane apsidal clustering (shown by $e_R$) in the inner $a_0$ bin begins immediately after the instability saturates. Like N400, in-plane apsidal clustering propagates into the next $a_0$ bin ($a_0 \in [180,320]$ AU) about $75\,t_{\rm sec}$ later. 
Apsidal clustering in the $a_0 \in [180,320]$ AU bin is stronger and more consistent in N800 than in N400.} \label{fig:sd-N800} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scatdisk-J2N400.pdf} \caption{Model J2N400: Apsidal clustering occurring in the inner edge of a scattered disk model with added $J_2$ after the inclination instability has saturated. Same panels as in Figure~\ref{fig:sd-N400}. Compared to N400 and N800, the inclination instability is delayed by the added $J_2$, saturating at $\sim200\,t_{\rm sec}$, and the innermost part of the disk barely undergoes the instability. Like in N400 and N800, a lopsided mode develops on the inner edge of the disk shortly after the inclination instability has saturated. Unlike N400 and N800, apsidal clustering appears in both inner-disk bins ($a_0 < 320$~AU) simultaneously. In addition, apsidal clustering in the $a_0 \in [180,320]$~AU bin is as strong as in the $a_0 \in [100,180]$~AU bin.} \label{fig:sd-J2N400} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scatdisk-J2N800.pdf} \caption{Model J2N800: Apsidal clustering occurring in the inner edge of an 800 particle scattered disk model with added $J_2$ after the inclination instability has saturated. Same panels as in Figure~\ref{fig:sd-N400}. Compared to J2N400, apsidal clustering is weakened in this simulation. Statistically significant clustering only occurs in the $a_0 \in [100,180]$ AU bin and this clustering is weaker than in J2N400.
This differs from the simulations without added $J_2$, where we saw similar or slightly stronger in-plane apsidal clustering as we increased the particle number.} \label{fig:sd-J2N800} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics{scatdisk-contour.pdf} \caption{Model N400: Contours of the time derivative of the longitude of pericenter, $\dot{\varpi}$, for the inner edge ($a \in [80,200]\,{\rm AU}$) of the scattered disk simulation without $J_2$ from Figure~\ref{fig:sd-N400} at four times. The $e$ and $a$ of the disk particles are shown with black points, and the mean $i_b$ of the disk at the time is shown in the top left. There is a clustering region at $t=104\,t_{\rm sec}$ and $t = 151\,t_{\rm sec}$ for $e\in[0.25,0.60]$. This clustering region is associated with the bowl-shaped orbital configuration created by the inclination instability. Note that the region of prograde precession, $e\in[0.05,0.25]$, also supports apsidal clustering as described in Section~\ref{sec:LB}. Apsidal clustering doesn't appear until after $151\,t_{\rm sec}$. At $104\,t_{\rm sec}$ the region $e\in[0.05,0.60]$ is only occupied by $\sim20$ particles; by $151\,t_{\rm sec}$ this has increased to $\sim65$ particles. There aren't enough particles in $e\in[0.05,0.60]$ at $104\,t_{\rm sec}$ for apsidal clustering to begin earlier.} \label{fig:scatdisk-contour} \end{figure*} We measure apsidal clustering using the mean, normed eccentricity vector, \begin{equation} \bm{\mu}_{\hat{\bm{e}}} = \sum_{i=1}^N \frac{\hat{\bm{e}}_i}{N}, \end{equation} where \begin{equation} \bm{e}_i = \frac{(\bm{v}_i \times \bm{j}_i)}{ GM_\odot} - \hat{\bm{r}}_i \end{equation} is the eccentricity vector of the $i$th orbit, and $\bm{r}_i$, $\bm{v}_i$, and $\bm{j}_i$ are the position, velocity, and specific angular momentum of the $i$th particle. The eccentricity vector points from the apocenter to the pericenter of the orbit.
We use the cylindrical coordinates of $\bm{\mu}_{\hat{\bm{e}}}$ to look for apsidal clustering, \begin{subequations} \begin{align} e_R &= \sqrt{\mu_{\hat{\bm{e}},x}^2 + \mu_{\hat{\bm{e}},y}^2}, \\ e_\theta &= \arctan{\left[ \frac{\mu_{\hat{\bm{e}},y}}{\mu_{\hat{\bm{e}},x}} \right]} , \\ e_z &= \mu_{\hat{\bm{e}},z}. \end{align} \label{eq:eReTez} \end{subequations} The radial component, $e_R$, quantifies in-plane apsidal clustering, the azimuthal component, $e_\theta$, is used to calculate the pattern speed and direction of in-plane apsidal clustering, and the $z$ component, $e_z$, quantifies out-of-plane apsidal clustering. See Appendix~\ref{app:measure-apsidal} for a comparison to standard measures of apsidal clustering. We calculate the noise floor for $e_R$ and $e_z$ by creating one thousand $N=400$ or $N=800$ axisymmetric disks. For each disk, we draw argument of perihelia and longitude of the ascending node from a uniform distribution and inclination from a Rayleigh distribution with mean inclination equal to the mean inclination of the post-instability disk (e.g. $\sim 50 \, {\rm deg}$). We calculate $e_R$ and $e_z$ for each disk to obtain an empirical distribution for $e_R$ and for $e_z$ with $N=1000$ samples. We calculate the noise floors from these distributions (68th and 95th percentile centered on the mean, corresponding to one and two standard deviations of the Gaussian distribution). The noise floor is a function of $N$ with lower $N$ simulations having higher noise floors. We show the noise floors in Figures~\ref{fig:sd-N400}, \ref{fig:sd-N800}, \ref{fig:sd-J2N400} and \ref{fig:sd-J2N800} with grey bands. $e_R$ and $e_z$ values above the noise floor indicate statistically significant apsidal clustering. 
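The clustering diagnostics of equation~\ref{eq:eReTez} and the Monte Carlo noise floor can be sketched as follows. This is an illustration under our own naming conventions, not the paper's analysis code; in particular, we summarize the floor with a single 95th percentile rather than the centered percentiles described above, and we build unit eccentricity vectors directly from the Kepler angles using the standard pericenter-direction rotation:

```python
# Sketch of the apsidal-clustering diagnostics: mean unit eccentricity
# vector, its cylindrical components, and a Monte Carlo noise floor.
import numpy as np

def unit_ecc_vectors(r, v, GM=1.0):
    """Unit eccentricity vectors e_i = (v x j)/GM - r_hat (rows = particles)."""
    j = np.cross(r, v)                          # specific angular momenta
    e = np.cross(v, j) / GM - r / np.linalg.norm(r, axis=1, keepdims=True)
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def mean_ecc_components(e_hat):
    """Cylindrical components (e_R, e_theta, e_z) of the mean unit e-vector."""
    mu = e_hat.mean(axis=0)
    return np.hypot(mu[0], mu[1]), np.arctan2(mu[1], mu[0]), mu[2]

def e_hat_from_angles(omega, Omega, inc):
    """Unit eccentricity (pericenter-direction) vector from Kepler angles."""
    ex = np.cos(Omega) * np.cos(omega) - np.sin(Omega) * np.sin(omega) * np.cos(inc)
    ey = np.sin(Omega) * np.cos(omega) + np.cos(Omega) * np.sin(omega) * np.cos(inc)
    ez = np.sin(omega) * np.sin(inc)
    return np.stack([ex, ey, ez], axis=-1)

# Noise floor: many axisymmetric N = 400 disks with uniform omega and Omega
# and Rayleigh inclinations (mean ~50 deg, as in the post-instability disk).
rng = np.random.default_rng(1)
n_trials, N = 1000, 400
sig = np.radians(50.0) / np.sqrt(np.pi / 2.0)   # Rayleigh scale from the mean
eR_samples = np.empty(n_trials)
for k in range(n_trials):
    om, Om = rng.uniform(0.0, 2 * np.pi, size=(2, N))
    inc = rng.rayleigh(sig, size=N)
    eR_samples[k], _, _ = mean_ecc_components(e_hat_from_angles(om, Om, inc))
noise_floor_95 = np.percentile(eR_samples, 95)  # e_R above this is significant
```

A measured $e_R$ from the simulation would then be compared against `noise_floor_95`; as stated in the text, the floor rises as $N$ decreases because the mean of fewer random unit vectors fluctuates more.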
As first described in \citet{Madigan2016}, the spatial orientation of orbits can be quantified with the angles, \c{i}{a}\xspace, \c{i}{b}\xspace, and \c{i}{e}\xspace representing rotations of an orbit about its semi-major (${\hat a}$) axis, semi-minor (${\hat b} \equiv \hat{j} \times \hat{a}$) axis, and angular momentum vector ($\hat{j}$), respectively, such that \begin{subequations} \begin{align} \c{i}{a}\xspace &= \arctan\left[\unitvectorslope{b}\right], \\ \c{i}{b}\xspace &= \arctan\left[-\unitvectorslope{a}\right], \\ \c{i}{e}\xspace &= \arctan\left[\oldhat{a}_{\text{y}}, \oldhat{a}_{\text{x}}\right]. \end{align} \label{eq:iaibie} \end{subequations} \\ The subscripts $x$, $y$, and $z$ denote an inertial Cartesian reference frame with unit vectors, $\hat{x}$, $\hat{y}$, and $\hat{z}$. The angles \c{i}{a}\xspace, \c{i}{b}\xspace, and \c{i}{e}\xspace are equivalent to the roll, pitch and yaw of a boat or plane, and are useful for understanding the net gravitational torque acting on an orbit. The inclination instability is characterized by the mean $i_a$ (roll) and $i_b$ (pitch) of all the orbits in the disk growing exponentially with opposite signs. We use these angles in upcoming plots to see how the inclination instability proceeds in simulations with different parameters and how that affects the subsequent growth of a lopsided mode. \subsection{Inclination Instability} \label{sec:inc-ins} The axisymmetric disks of eccentric orbits in our simulations undergo a dynamical instability called the inclination instability due to the secular gravitational torques between orbits. The instability is characterized by exponential growth in inclination and a corresponding decrease in eccentricity. The initially thin disk expands into a cone or bowl shape\footnote{For a visualization of the bowl shape, the reader can jump ahead to the second row, third panel from the left of Figure~\ref{fig:observables}.}. 
As the orbits' inclinations grow, they tilt in the same way with respect to the disk plane and oscillate coherently in $i_a$ and $i_b$. We describe the physical mechanism behind the inclination instability in \citet{Madigan2018b}. In Figures~\ref{fig:sd-N400}, \ref{fig:sd-N800}, \ref{fig:sd-J2N400}, and \ref{fig:sd-J2N800}, we show the inclination instability and its aftermath for models N400, N800, J2N400, and J2N800, respectively. Particles are binned by their initial semi-major axis, with the bin boundaries chosen such that the number of particles per bin is approximately equal.\footnote{We've verified that particles do not drift far from their initial semi-major axis during integration.} Note that the figures have different $x$-axes (time) but identical $y$-axes. In Figures~\ref{fig:sd-N400} and \ref{fig:sd-N800} (models N400 and N800), the largest growth in inclination occurs in the two innermost semi-major axis bins. In Figures~\ref{fig:sd-J2N400} and \ref{fig:sd-J2N800} (models J2N400 and J2N800) however, the innermost bin ($a < 180$ AU) flattens in inclination after a shorter exponential phase. This difference is seen again in the eccentricity evolution; the innermost semi-major axis bin drops to the lowest eccentricity values in the simulations without added $J_2$ whereas the drop is suppressed in the simulations with added $J_2$. In addition, the inclination instability saturates at later times in the J2N400 and J2N800 models than in N400 and N800 models ($\sim200\,t_{\rm sec}$ vs. $\sim 100\,t_{\rm sec}$). We attribute the difference between the models with and without added $J_2$ to the strong differential apsidal precession in the innermost bin induced by the gravitational influence of the giant planets. This effect decreases the coherence time over which inter-orbit torques can act. The inclination instability produces \textit{out-of-plane} apsidal clustering, captured by both $e_z$ and $i_b$.
We note that the longitude of pericenter, $\varpi = \omega + \Omega$, fails to find this clustering because it is sensitive only to in-plane clustering. \subsection{Apsidal Clustering in the Scattered Disk} \label{sec:scat-disk-cluster} In \citet{zderic2020a}, we found \textit{in-plane} apsidal clustering after the inclination instability had saturated in a simple, unrealistic orbital configuration. This ``compact configuration'' is characterized by an axisymmetric, nearly-flat disk of Keplerian orbits in which all bodies have identical eccentricities and nearly identical semi-major axes. Here, we report the same findings for our scattered disk model with and without added $J_2$. In Figures~\ref{fig:sd-N400} and \ref{fig:sd-N800}, $e_R$ traces the development of apsidal clustering in the $x/y$-plane at the inner edge ($a_0\in[100,180]$ AU) of Models N400 and N800\textemdash a massive scattered disk without giant planets. Values of $e_R$ above the noise floor indicate statistically significant in-plane apsidal clustering. As in \citet{zderic2020a}, apsidal clustering appears in the disk after the inclination instability has saturated. Note that apsidal clustering only appears for bodies with $a \lesssim 320$~AU with clustering first appearing in the $a_0 \in [100,180]$~AU bin and then travelling out into the $a_0 \in [180,320]$~AU bin at later times. Comparing the two models, apsidal clustering begins earlier, is more consistent (fewer oscillations), and is stronger in the $a_0 \in [180,320]$~AU bin in the higher $N$ model, N800, than in the N400 model. Finally, note that the mode strength regularly oscillates below the noise floor, particularly in N400. Figures~\ref{fig:sd-J2N400} and \ref{fig:sd-J2N800} show the development of apsidal clustering in the inner edge of our $J_2$ models, J2N400 and J2N800\textemdash a massive scattered disk with giant planets. 
We get apsidal clustering in both models (starting after $\sim200\,t_{\rm sec}$), though this clustering is weaker than it is for the models without added $J_2$. Apsidal clustering appears at later times in the $J_2$ models because the inclination instability is slowed by the added $J_2$, and clustering does not appear until after the instability has saturated. In J2N400, apsidal clustering in the $a_0\in[180,320]$~AU bin is stronger than it is in the N400 and N800 models. This reflects a general trend of our $J_2$ results. The $J_2$ potential disrupts the instability for the lowest $a$ bodies (compare the $a_0 \in [100, 180]$~AU bin in Figures~\ref{fig:sd-N400} and \ref{fig:sd-J2N400}), but it strengthens the instability in the outer $a_0$ bins. Bodies with $a_0 \gtrsim 180$~AU attain higher mean $i$, lower mean $e$, and stronger apsidal clustering post-instability in the J2N400 model than in the N400 and N800 models. In J2N800, statistically significant apsidal clustering only appears in the $a_0 \in [100,180]$~AU bin, and it's weaker and shorter-lived than in all the other models. This is unexpected\textemdash clustering was generally stronger in N800 than in N400, so we expected J2N800 to show apsidal clustering similar to or stronger than J2N400. We have multiple simulations of the $N=400$ models, all showing apsidal clustering. The N800 and J2N800 simulations we show here are the only ones of that particle number that were run long enough to show apsidal clustering. In the $N=400$ simulations, we find that the strength of clustering varies from simulation to simulation (being as weak as $e_R \approx 0.20$ at peak). Thus, it is possible that the weak apsidal clustering seen in J2N800 is just a peculiarity of that simulation's specific initial conditions. In all models, in-plane apsidal clustering appears in the inner semi-major axis bin(s) after the instability has saturated.
The occurrence of apsidal clustering shortly after the inclination instability in all models suggests that this instability is responsible for the in-plane apsidal clustering. \begin{figure*}[!htb] \centering \includegraphics{scatdisk-contour-bin.pdf} \caption{Model N400: $\dot{\varpi}$ contour plots for the innermost two $a_0$ bins at $t=151\,t_{\rm sec}$. The left panel is the same as the bottom right panel in Figure~\ref{fig:scatdisk-contour}, and it shows a large clustering region ($e \lesssim 0.6$). The right panel shows a small, underpopulated clustering region at the lowest $a$ (note that the $e$ axes are different in the two panels). This explains why we only see strong apsidal clustering for $a \lesssim 180\,{\rm AU}$. The contour plots differ at their boundary, 180 AU. This is due to the different mean test particle $\omega$ in these two bins, and it demonstrates the importance of $\omega$ in forming the clustering region.} \label{fig:scatdisk-contour-bin} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scatdisk-contour-J2.pdf} \caption{Model J2N400: The emergence of a clustering region post-instability in a scattered disk model including $J_2$ (the same simulation as shown in Figure~\ref{fig:sd-J2N400}). We show contours of $\dot{\varpi}$ from three different $a$ bins ($a_0 \in [100, 180]$ AU, $a_0 \in [180, 320]$ AU and $a_0 \in [320,560]$ AU) at three different times. Mean $i_b$ for the disk is shown in the top left of the top panel. Peak mean $i_b$ is attained at $190\,t_{\rm sec}$, and a corresponding, populated clustering region is shown for $e \in [0.05, 0.55]$ and $a \in [125,300]\,{\rm AU}$.
The deep trough of prograde precession at high $e$ and low $a$ in the bottom row of panels is due to the $J_2$ potential.} \label{fig:scatdisk-contour-J2} \end{figure*} \subsection{Emergence of the clustering region} \label{sec:clustering-region-emerges} Apsidal clustering in our simulations occurs after a Lynden-Bell clustering region appears. Here we show that the clustering region appears during the saturation of the inclination instability and is associated with the unique, `bowl'-shape of the mass distribution post-instability. We simulate test particles in the frozen gravitational potential of the disk to find $\dot{\varpi}$ as a function of $a$ and $e$. The disk particles from a fully interacting simulation are frozen onto their orbits at a specific time and a test particle is integrated in this background disk potential. We use these test particles to make contour plots of $\dot{\varpi}$ in $e$-$a$ space at specific instances to find clustering regions, if any exist. In these test particle simulations we have a test particle, a background disk, and a central body (with/without added $J_2$). The background disk particles are given the REBOUND/MERCURY `small particle' designation meaning they do not interact with each other; they only interact with the test particle and the central body (they do actually interact with each other indirectly through the central body as mentioned in \citet{Peng2020}; this effect is small). Paradoxically, we must give the test particle mass in order for it to interact with the background disk. The mass of the test particle is set to be so small that it negligibly affects the background disk bodies. The test particle simulations are integrated for 10 orbits, and the test particles are initialized with $a$ and $e$ drawn from a grid ($96 \times 96$), $\omega$ and $i$ calculated from the mean values of the local disk (same $a$ bin), and an $\Omega$ of 0, $\nicefrac{\pi}{4}$, $\nicefrac{\pi}{2}$, or $\nicefrac{3\pi}{4}$. 
The median $\dot{\varpi}$ is calculated from each set of four test particle simulations. This is the method used to create the contour plots shown in Figures~\ref{fig:scatdisk-contour}, \ref{fig:scatdisk-contour-bin}, and \ref{fig:scatdisk-contour-J2}. We have checked the accuracy of our test particle simulations with an alternative method which calculates the instantaneous precession rate of the test orbit directly from the torques and forces it experiences; the results are consistent, as shown in Appendix~\ref{app:finding-clustering}. We only show $\dot{\varpi}$ for the $N=400$ simulations as the $N=800$ results are the same. In Keplerian elements, a clustering region (see Section~\ref{sec:LB}) is defined by $\nicefrac{\partial{\dot{\varpi}}}{\partial{e}} |_{a} < 0$. Apsidal precession within our disks is initially retrograde ($\dot{\varpi} < 0$) and with {\it magnitude} decreasing with increasing eccentricity ($\nicefrac{\partial{\dot{\varpi}}}{\partial{e}} |_{a} > 0$). When clustering regions appear within our disks, we see regions where the precession is retrograde with increasing magnitude and regions where precession is prograde with decreasing magnitude. In the contour plots, clustering regions are regions where, at fixed semi-major axis ($a$), the contours go from warmer to cooler colors with increasing eccentricity ($e$). In Figure~\ref{fig:scatdisk-contour}, we show the development of a clustering region in the inner edge of N400, the scattered disk model without added $J_2$. This figure shows the time derivative of the longitude of pericenter, $\dot{\varpi}$, in the scattered disk as a function of semi-major axis $a$ and orbital eccentricity $e$ for the inner edge of the disk. We show mean $i_b$ for the disk in the top left of the panels. Initially, all bodies in the scattered disk are on the line $a\,(1-e) = 30\,{\rm AU}$ (top left).
Later, the inclination instability reduces the disk orbits' eccentricity, $e$, at roughly fixed semi-major axis, $a$ (top right), and causes the disk to buckle into a bowl shape. Notably, the $\dot{\varpi}$ contours have changed to admit a clustering region $\left(\nicefrac{\partial{\dot{\varpi}}}{\partial{e}} |_{a} < 0\right)$ covering $e \in [0.25, 0.60]$ and $a \in [80, 200]\,{\rm AU}$. This retrograde clustering region smoothly blends into a prograde region ($e \in [0.05, 0.25]$) which also facilitates apsidal clustering. Thus the whole region $e \in [0.05, 0.60]$ supports apsidal clustering. Immediately after the instability saturates, the apsidal clustering region is lightly populated. The eccentricity continues to drop after the inclination instability leaves the linear regime (bottom left). However, the clustering region has disappeared. This is because the disk has precessed out of the bowl-shape (mean $i_b \sim 0^\circ$). Finally, the orbits at the inner edge have precessed through the ecliptic and inverted the bowl-shaped mass distribution (mean $i_b > 0$) (bottom right). Again, the clustering region appears, but now it is sufficiently populated for in-plane apsidal clustering to take hold. Two things are apparent from this sequence. First, the clustering region coincides with the bowl-shaped orbital distribution (large mean $i_b$). Second, in-plane apsidal clustering only appears once the clustering region is sufficiently populated. Once apsidal clustering has been established and the lopsided mode has grown, it is no longer reliant on the clustering region produced by the bowl-shape to exist. The bowl-shape is not actively maintained after the inclination instability saturates. In Figure~\ref{fig:sd-N400}, differential precession slowly erodes mean $i_b$ in the disk, and will eventually erase the bowl-shape altogether.
However, $e_R$ appears unaffected by this, and apsidal clustering actually reaches peak strength by the end of the simulation even though the mean $i_b$ has dropped quite low. The bowl-shape seeds the lopsided mode, but, once seeded, the mode is self-sustaining even though the strength of the mode oscillates. The clustering region appears towards the inner edge of the disk in N400. In Figure~\ref{fig:scatdisk-contour-bin}, we show the inner two semi-major axis bins of the disk ($a \in [100, 320]\,{\rm AU}$) at $151\,t_{\rm sec}$. The clustering region extends to $a>200$~AU, but it is largest at lower $a$ and it's unpopulated for $a \gtrsim 250$~AU. This explains why apsidal clustering is primarily only seen in the inner two bins of these simulations, and why apsidal clustering is slightly weaker for $a_0 > 180$~AU. The precession rates are discontinuous at 180 AU (top of the left panel and bottom of the right panel), and the two panels have different $x$ axes, exacerbating the apparent discontinuity. The discontinuity is due to the test particles in these two panels having different bin-averaged $\omega_0$ and $i_0$. The general features found in the N400 model are repeated in the J2N400 model: the clustering region appears around peak mean $i_b$ (in the `bowl'-shape), the semi-major axis location of the clustering region traces apsidal clustering, and the clustering region precedes apsidal clustering. In Figure~\ref{fig:scatdisk-contour-J2}, we show contours of $\dot{\varpi}$ across the disk at three distinct times for model J2N400, the same simulation depicted in Figure~\ref{fig:sd-J2N400}. A deep trough of prograde $\varpi$ precession from the added $J_2$ is seen at large eccentricity and small semi-major axis. Bodies near this trough do not undergo the inclination instability and their eccentricities are stable. 
The clustering region still forms (middle column), but at slightly larger semi-major axis because the lower semi-major axis portion of the disk is too disrupted by the $J_2$ potential. This is reflected in Figure~\ref{fig:sd-J2N400} by apsidal clustering in $a_0 \in [100,180]\,{\rm AU}$ and $a_0 \in [180,320]\,{\rm AU}$. The mean $i_a$ of the inner edge of the disk is $\sim0$ (due to the added $J_2$ precession), but mean $i_b > 0$. \begin{figure*}[!htb] \centering \includegraphics[scale=1.0]{pomegahist.pdf} \caption{Model N800: Clustering in $\varpi$ as it evolves over time in the inner $a_0$ bin ($a_0 \in [100, 180]$ AU). This figure shows a 2d-histogram of $\varpi$ as a function of time for all particles in this bin, and the colors represent the density of points (normalized to the densest area). We identify the mode as the highly dense region in this figure that generally precesses prograde, opposite to the individual orbits that precess retrograde. Retrograde precessing orbits can be captured in the prograde precessing mode, and orbits actually caught in the mode librate within the mode. The pattern speed of the mode is $\sim1.5^\circ\,t_{\rm sec}^{-1}$ which is of the same order of magnitude as the precession rate of the orbits in the disk.} \label{fig:mode-dynamics} \end{figure*} In our simulations, the formation of the clustering region coincides with peak mean $i_b$, signifying that the unique out-of-plane orbital distribution resulting from the inclination instability is responsible for the in-plane apsidal clustering. The post-instability bowl-shaped potential creates a clustering region at low eccentricity, and this region is simultaneously populated by the circularizing effect of the instability. A particle's precession rate depends on the forces it experiences throughout its orbit. 
From equation~\eqref{eq:ederiv} the orbital precession rate is \begin{equation} \dot{\varpi} \approx \left<f_r(r)\right> \frac{\sqrt{1-e^2}}{e} \sqrt{\frac{a}{G M}}, \label{eq:precApprox} \end{equation} where $\left<f_r(r)\right>$ is the orbit-averaged specific radial force. This is negative if the force is radially inwards, such that precession will be retrograde. For retrograde precession, the magnitude of $\dot{\varpi}$ must increase with eccentricity in a clustering region. In contrast, the second term on the right-hand side of equation~\eqref{eq:precApprox} decreases monotonically with eccentricity. Therefore, the orbit-averaged force must increase with eccentricity in a retrograde clustering region. This is somewhat unusual considering that typically in a Keplerian potential (i) precession is dominated by the forces near apocenter and (ii) these forces will be smaller at larger apocenters. However, this is not the case in the post-instability bowl-shaped orbital distribution. In Appendix~\ref{app:finding-clustering}, we measure the precession rates of orbits in the clustering region by calculating the forces and torques at many points along the orbit. We find that changes in the precession rate (with eccentricity) can be dominated by points near pericenter. Additionally, forces can increase with apocenter. This behavior allows a clustering region to appear. For prograde precession, clustering will occur if the magnitude of $\dot{\varpi}$ decreases with eccentricity, which is typical for a Keplerian potential. In fact, this will occur for any external force that decreases with radius (see equation~\ref{eq:precApprox}). \subsection{Mode direction} \label{subsec:mode-dir} In galaxies, the slowness condition, $\nu_i - \nu_p \ll \nu$, will only be satisfied if the mode and the orbits precess in the same direction because $\nu_i \lesssim \nu$ (galactic orbits are generally rosettes in the inertial frame). 
If ${\rm sgn}(\nu_i) = -{\rm sgn}(\nu_p)$ then $\nu_i - \nu_p \sim \nu_i$ and the orbits are not nearly closed in the rotating frame of the bar perturbation. However, in near-Keplerian systems, both $\dot{\varpi}_i$ and $\nu_p$ are much less than $\nu$ and $\dot{\varpi}_i - \nu_p \ll \nu$ is true even if ${\rm sgn}(\dot{\varpi}_i) = -{\rm sgn}(\nu_p)$. If the mode and the orbits precess in opposing directions, the relative orbital precession rate in the frame rotating with the mode will be greater than if the mode and orbit precess in the same direction. However, the relative orbital precession rate will still be $\mathcal{O}(t_{\rm sec}^{-1})$, and secular torques between the orbit and the mode can still be dynamically important. Therefore, in near-Keplerian systems, it is not dynamically forbidden for a mode to form via the Lynden-Bell mechanism with a precession direction opposite to the orbital precession direction. Indeed, we generally see the mode precess opposite to that of the orbits in our simulations. In Figure~\ref{fig:mode-dynamics}, we show a 2d-histogram of $\varpi$ as a function of time for all particles in the inner $a_0$ bin ($a_0 \in [100, 180]$ AU) of model N800. We see from this figure that the individual orbits precess retrograde, and cluster together to form a mode starting around 100~$t_{\rm sec}$. The figure shows that the mode generally precesses prograde with a pattern speed of $\sim1.5^\circ\,t_{\rm sec}^{-1}$. The mode is capable of capturing orbits that precess counter to it. The captured orbits then librate within the mode, precessing prograde then retrograde within the mode. Eventually, orbits leave the mode and precess retrograde again within the disk.
\section{\label{sec:introduction} Introduction} In a quantum wire (quasi-one-dimensional geometry), the quantities of interest, such as conductance, are not self-averaging. \cite{dittrich97} Thus, a statistical description of a variety of quantum effects, such as universal conductance fluctuations, requires the conductance cumulants $\langle\!\langle g^n \rangle\!\rangle$ or the conductance distribution $P(g)$ to be studied. Cumulants of current $C_j$, such as $C_2$, which is directly related to shot noise, also yield information which is not contained in the averaged conductance. Higher-order mesoscopic fluctuations, though smaller, provide important additional information about the transport process. For example, the recent experiments \cite{mohanty02} where an anomalously asymmetric $P(g)$ was detected have spurred dispute \cite{deba} and interest in higher conductance cumulants in mesoscopic wires. This is due to the fact that noninteracting one-parameter scaling models suggest that near the diffusive region the conductance distribution $P(g)$ is close, but not identical, to Gaussian shape. \cite{altshuler86} So far, current-cumulant measurements of the third cumulant $C_3$ have been carried out \cite{reulet03} and a scheme for a detection of $C_4$ has recently been put forward. \cite{ankerhold05} The fundamental symmetries of the Hamiltonian of a disordered conductor manifest themselves in the statistical properties of the energy levels and particle states. The implications of symmetry are conveniently analyzed within the framework of random matrix theory. \cite{rmt} There are altogether ten different random matrix theories for the different symmetric spaces \cite{caselle04} of the Hamiltonian. \cite{altland97} The symmetry classes are customarily referred to by the Cartan symbol for the symmetric space of the corresponding Hamiltonian. \cite{altland97} In this paper we consider seven symmetry classes: A(I,II), C(I), and D(III). 
Disordered normal metals may be studied through the standard or Wigner-Dyson (WD) universality classes A(I,II). \cite{mehta91} The BdG classes C(I) and D(III) are appropriate, e.g., for ``disorder-facilitated'' quasiparticle transport \cite{senthil98} in unconventional superconductors. Compared to the WD ensembles, an extra degree of freedom arises for the BdG classes, since a distinction between particlelike and holelike quasiparticles of the BdG formalism is made. Further, the systems are classified according to the presence or absence of time-reversal (TR) and spin-rotation (SR) symmetry. The corresponding symmetric spaces are characterized by the multiplicities of the ordinary and long roots \cite{helgason78} denoted $m_0$ and $m_l$ (see, e.g., Table 1 of Ref.~\onlinecite{titov01}). In addition to the two symmetry parameters, the degeneracy $d$ of transmission and reflection eigenvalues has to be taken into account. Besides having definite values for certain symmetries, $m_0$ and $m_l$ may also be considered as interpolation parameters between different symmetry classes. In the absence of electron-electron interactions, phase randomization and local maximum entropy principles imply, in a quasi-one-dimensional geometry, a scaling equation, the Dorokhov-Mello-Pereyra-Kumar (DMPK) equation. \cite{dmpk} The DMPK equation yields the evolution of $P(g)$ as a function of the dimensionless length $s$. We set $s=L/N\ell$, where $L$ is the length of the wire, $N$ is the number of channels, and $\ell$ is the elastic mean free path. In a wire geometry, the DMPK equation has also been generalized for the BdG structures \cite{brouwer00} and for the so-called chiral classes. \cite{brouwer98} Near the metallic regime, the most important mechanisms inducing deviations from a Gaussian $P(g)$ are weak localization (WL) and weak antilocalization (WAL). WL and WAL result from the interference of the closed trajectories of the electron and their time-reversed conjugates. 
Constructive interference (WL) leads to a decrease of $g$ while destructive interference (WAL) enhances conductance. In a conductor with length shorter than or comparable to the coherence length, in the absence of a magnetic field, destructive interference may be realized in a material with strong spin-orbit scattering whereas spin-rotation symmetry leads to constructive interference. The W(A)L corrections to conductivity have been used to study experimentally the quantum coherence in mesoscopic structures (see, e.g., Ref.~\onlinecite{trionfi04} for recent measurements). It is very difficult to evaluate the exact influence of W(A)L on the values of $\langle\!\langle g^n \rangle\!\rangle$ with $n>3$ by such field theoretic approaches as diagrammatic techniques \cite{altshuler86, rossum97} or by the nonlinear sigma model. \cite{efetov97} A nonperturbative treatment of the first and second cumulants based on Fourier analysis on the supersymmetric manifold was given in Ref.~\onlinecite{mirlin94} for the WD classes. The behavior of the lowest conductance cumulants $\langle\!\langle g^{1,2} \rangle\!\rangle$ has recently been computed intensively for the WD ensembles ($m_l=1$), especially in the metal-insulator crossover region (see, e.g., Refs.~\onlinecite{perez02} and \onlinecite{muttalib03}). In Ref.~\onlinecite{altshuler86}, based on a $2+\epsilon$ expansion, the expression \begin{equation} \label{eq:abrik} \langle\!\langle g^n \rangle\!\rangle\sim \langle g\rangle^{2-n}, \qquad n<1/s \end{equation} was presented for the standard unitary ensemble \mbox{($m_0=1$, $m_l=1$)}. Likewise, for the symmetry class AI, but for a quasi-one-dimensional conductor (where one has \mbox{$\langle g\rangle\approx 1/s$}), the same equation was put forward in Ref.~\onlinecite{tartakovski95}. This formula has been widely accepted until now. For the BdG wires near the metallic region, the dependence of $\langle g \rangle$ on $m_0$ and $m_l$ may be found in Ref.~\onlinecite{brouwer00}. 
Leading terms of $\langle\!\langle g^2 \rangle\!\rangle$ (universal conductance fluctuations) and some correction terms for $\langle\!\langle g^2 \rangle\!\rangle$ and for $C_2$ may be found in Ref.~\onlinecite{imamura01}. An essential singularity has been shown to occur for the first and second cumulants in the small $s$ expansion in the chiral unitary class. \cite{mudry99} Mac\^edo considered in Ref.~\onlinecite{macedo02} the cases with $m_0=2$ and $0\le m_l\le 2$ and found that the third cumulant contains only a component which is nonanalytic in $s$ at $s=0$. Localization in superconducting wires has been discussed in Refs.~\onlinecite{brouwer00} and \onlinecite{gruzberg05}. The conductance cumulants with $n\le 4$ were computed in Ref.~\onlinecite{perez02} for the WD classes A and AI by a Monte Carlo method. In this paper we present a recursion equation unifying the description of the WD and BdG symmetry classes and yielding the cumulants of the order $n<1/s$. For the two BdG classes with TR symmetry we find that the higher-order cumulants ($n\ge3$) contain no contributions that are analytic in $s$ at $s=0$. We elucidate the dependence of the higher cumulants on the universality class and give values for $\langle\langle g^n\rangle\rangle$ in terms of $m_0$ and $m_l$. We emphasize that even though Eq.~(\ref{eq:abrik}) is correct for $n=1,2$, for \mbox{$2<n<1/s$} there exists a more appropriate expression, our Eq.~(\ref{eq:sc}). Furthermore, we calculate the weak-localization corrections to the current cumulants $C_j$ with $j\le 10$. We consider a disordered quantum wire, i.e., a quasi-one-dimensional geometry. For the BdG universality classes we study heat transport whereas for the WD classes our results apply also for electrical transport. 
We calculate the cumulants at zero temperature, zero frequency, and at low voltage in the limit \mbox{$N\to\infty$},\mbox{$\ L/\ell\to\infty$}, \mbox{$s={\rm constant}$}, near the diffusive region, where one has $1/N\ll s\ll 1$. \section{Cumulants} \subsection{Method} The starting point for our analysis is the generalized DMPK equation which reads \cite{dmpk, brouwer00} \begin{eqnarray} \label{eq:dmpk} \lefteqn{\partial_{s}w_{s}(\mbox{\boldmath{$\lambda$}})= \frac{2N}{m_0 N+1+m_l-m_0} \sum_{i=1}^{N}\frac{\partial}{\partial \lambda_i} \bigg\{ }\nonumber\\ && \times [\lambda_i(1+\lambda_i)]^{(m_l+1)/2} J_{m_0} (\mbox{\boldmath{$\lambda$}}) \frac{\partial}{\partial\lambda_i} [\lambda_i(1+\lambda_i)]^{(1-m_l)/2} \frac{w_s(\mbox{\boldmath{$\lambda$}})}{J_{m_0}(\mbox{\boldmath{$\lambda$}})} \bigg\}. \nonumber\\ \end{eqnarray} The variables $\{\lambda_i\}_{i=1}^{N}$ are related to the transmission probabilities $\{\tau_i\}_{i=1}^N$ of the channels $\{i\}_{i=1}^{N}$ through \mbox{$\tau_i=(1+\lambda_i)^{-1}$} while $w_{s}(\mbox{\boldmath{$\lambda$}})$ is the distribution function for \mbox{$\mbox{\boldmath{$\lambda$}}=(\lambda_1,\ldots,\lambda_N)$}. The Jacobian $J_{m_0}(\mbox{\boldmath{$\lambda$}})$ is given by \mbox{$J_{m_0}(\mbox{\boldmath{$\lambda$}}) =\prod_{i<j}|\lambda_i-\lambda_j|^{m_0}$}. In a long wire, for the symmetry classes D(III), additional terms enter the DMPK equation. \cite{gruzberg05} Near the diffusive regime such components, however, are irrelevant and we will ignore them. In order to calculate the cumulants we adopt a method reminiscent of the moment expansion method introduced in Refs.~\onlinecite{mello88} and \onlinecite{mello91}. 
It is convenient to introduce the moment generating function $Z_s(\mbox{\boldmath{$\mathrm{q}$}})$ and the cumulant generating function (CGF) $\varphi_{s}(\mbox{\boldmath{$\mathrm{q}$}})$, \begin{eqnarray} \label{eq:defcumul} \ln Z_s(\mbox{\boldmath{$\mathrm{q}$}})&=& \ln\langle\exp(-\mbox{\boldmath{$\mathrm{q}$}} \cdot\mbox{\boldmath{$\mathrm{T}$}})\rangle_s= \varphi_{s}(\mbox{\boldmath{$\mathrm{q}$}}) \end{eqnarray} in order to systematically solve the DMPK equation. Here one has $\mbox{\boldmath{$\mathrm{T}$}}= (T_1,\ldots,T_N)$, $T_k=\sum_{i}^{N}\tau_{i}^{k}$. The expectation value $\langle\cdots\rangle_s$ is taken with respect to the distribution of $\tau_i$s. The conductance cumulants may be obtained from \begin{equation} \langle\!\langle g^n\rangle\!\rangle\equiv (-d)^{n}\frac{\partial^{n} \varphi_{s}(\mbox{\boldmath{$\mathrm{q}$}})}{\partial q_1^{n}} \bigg{|}_{\mbox{\boldmath{$\mathrm{q}$}=\boldmath{$0$}}}. \label{ccdef} \end{equation} For large $N$ it is natural to seek the CGF in the form of the expansion \cite{gopar95} \begin{equation} \varphi_{s}(\mbox{\boldmath{$\mathrm{q}$}})= \sum_{j=-\infty}^{1}\varphi_{s}^{(j)}(\mbox{\boldmath{$\mathrm{q}$}})N^j \label{nexp}. \end{equation} In Ref.~\onlinecite{gopar95} it was found that in such an expansion, for the symmetry class A, $\varphi_{s}^{(j)}(\mbox{\boldmath{$\mathrm{q}$}})$ is an odd (even) polynomial of degree $2-j$ in $\mbox{\boldmath{$\mathrm{q}$}}$ with $j$ odd (even). For all the WD and BdG classes, in the region where $\varphi_{s}^{(j)}(\boldsymbol{\mathrm{q}})$ may be expanded in non-negative powers of $L/\ell$, it may be shown by induction from Eq.~(\ref{eq:dsvarphi}) in Appendix A and from the definition of the CGF, Eq.~(\ref{eq:defcumul}), that $\varphi_{s}^{(j)}(\boldsymbol{\mathrm{q}})$ is of the form \begin{equation} \varphi_{s}^{(j)}(\boldsymbol{\mathrm{q}}) = \sum_{n=1}^{2-j} \sum_{k_{1},\dots,k_{n}=1}^{\infty} A_{k_{1},\dots,k_{n}}^{(j)}(L/\ell) \prod_{i=1}^{n}q_{k_{i}}. 
\label{q2m} \end{equation} We seek $A_{k_{1},\dots,k_{n}}^{(j)}(L/\ell)$ in the form of a rational function of $L/\ell$. By using Eqs.~(\ref{eq:defcumul}) and (\ref{eq:dsvarphi}) it can be shown by induction that the further condition $1/N\ll s$ implies that $A_{k_{1},\dots,k_{n}}^{(j)}(L/\ell)$ takes the form \begin{equation} A_{k_{1},\dots,k_{n}}^{(j)}(L/\ell)= a_{k_{1},\dots,k_{n}}^{(j)}(L/\ell)^{-j}+ \mathcal{O}\left[(L/\ell)^{-j-1}\right]. \end{equation} Thus in the limit $N\to\infty,\ L/\ell\to\infty,\ s={\rm constant}$ we obtain for all the BdG and WD classes the expansion \begin{eqnarray} \label{eq:finalexp} \varphi_{s}(\mbox{\boldmath{$\mathrm{q}$}})= \sum_{j=-\infty}^{1}\sum_{n=1}^{2-j} \sum_{k_1,\ldots,k_{n}=1}^{\infty}a_{k_1,\dots,k_{n}}^{(j)}s^{-j} \prod_{i}^{n} q_{k_{i}}. \end{eqnarray} This is essentially an expansion in the inverse powers of large bare conductance and it is expected to converge in the metallic regime $1/N\ll s\ll 1$. We will show that, in the presence of the TR symmetry, $a_{k_1, \dots, k_{n}}^{(j)}$ with \mbox{$n>2$} actually vanish for the BdG classes CI and DIII. Thus for the classes CI and DIII, the cumulants higher than the second contain no components that are analytic in $s$ at $s=0$. From Eq.~(\ref{eq:dsvarphi}) in Appendix A one obtains the first of the coefficients $a_{1}^{(1)}=-1$. 
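Equation~(\ref{ccdef}) is the standard passage from a generating function to cumulants, i.e., the moment-to-cumulant recursion $\kappa_n = m_n - \sum_{j=1}^{n-1}\binom{n-1}{j-1}\kappa_j m_{n-j}$. As a toy check of this machinery, independent of the DMPK equation, the sketch below recovers the cumulants of a Poisson distribution (all of which equal the mean); the distribution and the chosen mean are illustrative assumptions only.

```python
from math import comb

def cumulants_from_moments(m):
    """m[n] is the n-th raw moment with m[0] = 1; returns [kappa_1, ..., kappa_N]."""
    kappa = [None, m[1]]
    for n in range(2, len(m)):
        kappa.append(m[n] - sum(comb(n - 1, j - 1) * kappa[j] * m[n - j]
                                for j in range(1, n)))
    return kappa[1:]

# Raw moments of a Poisson distribution with mean lam, via the recursion
# m_{n+1} = lam * sum_k C(n, k) m_k; every cumulant of a Poisson equals lam.
lam = 3
m = [1]
for n in range(6):
    m.append(lam * sum(comb(n, k) * m[k] for k in range(n + 1)))

print(cumulants_from_moments(m))  # [3, 3, 3, 3, 3, 3]
```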
For the other coefficients we obtain after a lengthy calculation an algebraic recursive equation (see Appendix A for more details), \begin{widetext} \begin{eqnarray} \label{eq:recua} \lefteqn{\bigg(-m+2\sum_{p=1}^{n} k_{p}\bigg)a_{k_1,\dots,k_n}^{(m)}= \bigg[\sum_{j=m}^{1} \sum_{n_{1}=\text{max}(1,n+m-j)}^{\text{min}(2-j,n)} \sum_{p_1\neq p_2\neq\cdots\neq p_n=1}^{n} k_{p_{1}}n_1(n-n_1+1)\frac{1}{n!}} \nonumber\\ &&\times\bigg( \sum_{l=0}^{k_{p_{1}}-1} \Delta_{n_{1},n,l,k_{p_{1}}}^{j,m} a_{k_{p_1}-l,k_{p_2},\ldots,k_{p_{n_{1}}}}^{(j)} a_{l+1,k_{p_{n_{1}+1}},\ldots,k_{p_n}}^{(m-j+1)} -\sum_{l=0}^{k_{p_{1}}-2} a_{k_{p_1}-l-1,k_{p_2},\ldots,k_{p_{n_{1}}}}^{(j)} a_{l+1,k_{p_{n_{1}+1}},\ldots,k_{p_n}}^{(m-j+1)}\bigg)\bigg]\nonumber\\&& +\sum_{p=1}^{n}k_{p}\left\{ \left(\theta_1(n+1)\sum_{l=0}^{k_{p}-1} a_{k_{p}-l,l+1,k_1,\ldots,k_{p-1},k_{p+1},\ldots,k_{n}}^{(m+1)} +\frac{2\theta_2}{m_0 n}\sum_{r=1,r\neq p}^{n} k_r a_{k_{p}+k_r+1,k_1,\ldots,k_{p-1},k_{p+1},\ldots, k_{r-1},k_{r+1},\ldots,k_{n}}^{(m+1)}\right)\right.\nonumber\\ && - \left(k_{p}\rightarrow k_{p}-1 \right) +\frac{\theta_3}{m_0}[(m_0-2)k_p+m_l-1] a_{k_{p}+1,k_1,\ldots,k_{p-1},k_{p+1},\ldots,k_n}^{(m+1)} -\frac{\theta_3}{m_0}[(m_0-2)k_p+2m_l-m_0]a_{k_1,\ldots,k_n}^{(m+1)}{\Bigg\}}. \end{eqnarray} \end{widetext} Here we have \begin{equation*} \begin{split} a_{k_{p_1}-l,k_{p_2},\ldots,k_{p_{n_{1}}}}^{(j)}&=a_{k_{p_1}-l}^{(j)} \quad{\rm for}\quad n_1=1,\\ a_{l+1,k_{p_{n_{1}+1}},\ldots,k_{p_n}}^{(m-j+1)}&=a_{l+1}^{(m-j+1)} \quad{\rm for}\quad n_1=n. \end{split} \end{equation*} Moreover, we denote \begin{equation*} \begin{split} &\Delta_{n_{1},n,l,k_{p_{1}}}^{j,m}= (1-\delta_{j,m}\delta_{n_{1},n}\delta_{l,0}) (1-\delta_{j,1}\delta_{n_{1},1}\delta_{l,k_{p_{1}}-1}),\\ &\theta_1=\theta(-n-m),\quad \theta_2=\theta(n-2)\theta(2-n-m),\\ &\theta_3=\theta(1-n-m) \end{split} \end{equation*} with \begin{eqnarray*} \theta(n) = \left\{ \begin{array}{ll} 1 &{\rm for}\quad n \ge 0,\\ 0 &{\rm otherwise} \end{array} \right.. 
\end{eqnarray*} Note also that $\sum_{l=0}^{k_{p_1}-2}\equiv 0$ with $k_{p_1}=1$. For the higher cumulants this recursion relation is conveniently evaluated by a computer. Because of Eq.~(\ref{eq:finalexp}), the coefficients $a_{k_1,\ldots,k_n}^{(m)}$ are invariant under the change of subindices. The coefficients are evaluated (1) in the order of decreasing $m=1,0,-1,\ldots$, (2) for a given $m$ in the order of increasing $n$, and (3) for given $m$ and $n$ they may be calculated, e.g., in the order of increasing subindices. Equation (\ref{ccdef}) implies \begin{equation} \langle\!\langle g^n\rangle\!\rangle =(-d)^n n!\sum_{m=-\infty}^{1} a_{\underbrace{1,\dots,1}_{n}}^{(m)} s^{-m}. \label{ga} \end{equation} It then follows that, instead of \begin{equation} \label{eq:scnaive} \langle\!\langle g^{n} \rangle\!\rangle \sim s^{n+\delta_{m_0,2}-2},\qquad 2 < n < 1/s, \end{equation} that generalizes Eq.~(\ref{eq:abrik}) to the remaining six symmetry classes, we obtain the scaling \begin{equation} \label{eq:sc} \langle\!\langle g^{n} \rangle\!\rangle \sim s^{n+\delta_{m_0,2}-1},\qquad 2 < n < 1/s, \end{equation} that is down by one power of $s$ relative to the result of Altshuler, Kravtsov, and Lerner. Calculations based on a nonperturbative microscopic approach \cite{tartakovski95} suggest that there exist additional nonperturbative contributions that become significant for \mbox{$n>1/s$} and which lead to log-normal tails of the conductance distribution but are not describable within the one-parameter scaling approach. Since we cannot produce such nonperturbative components by our method we restrict our considerations to conductance cumulants of the order \mbox{$n<1/s$}. References \onlinecite{altshuler86} and \onlinecite{tartakovski95} missed the cancellation of the terms of the order \mbox{$\mathcal{O}(s^{n-2})$} since the numerical prefactors corresponding to $a_{k_1,\ldots,k_n}^{(2-n)}$ were not evaluated. 
The cancellation of the leading contribution of Eq.~(\ref{eq:abrik}) has already been pointed out for $n=3$ in Refs.~\onlinecite{gopar95}, \onlinecite{macedo94}, and \onlinecite{rossum97}. This cancellation is specific to quasi-one-dimensional geometry. \cite{rossum97} For higher $n$ the cancellation of the leading terms has been proved in Appendix B. \subsection{Conductance cumulants} For the numerical values of $\langle\!\langle g^n\rangle\!\rangle$, the cases $n=1,2$ have been covered, e.g., in Ref.~\onlinecite{macedo94} for the WD classes and in Ref.~\onlinecite{brouwer00} ($n=1$) and Ref.~\onlinecite{imamura01} ($n=1,2$) for the BdG classes. Our Eq.~(\ref{eq:recua}) provides an efficient derivation of these results. As an application of Eq.~(\ref{eq:recua}) we give the results for $n\le 6$, \begin{widetext} \begin{equation} \begin{split} \label{eq:gcumulants} d^{-1}\langle\!\langle g \rangle\!\rangle =& \frac{1}{s}+\frac{(m_{0}-2m_{l})}{3m_{0}}+[(3m_{0}-8m_{l}) (m_{0}-2)+4m_{l}(m_{l}-2)]\frac{s}{45m_{0}^2}\\ &+\{[-21m_{0}+6m_{0}^2+2m_{l}(-5m_{0}-4m_{l}+16)] (m_{0}-2)+8m_{l}(m_{l}-1)(m_{l}-2)\}\frac{2s^2}{945m_{0}^3}+ \mathcal{O}(s^3),\\ d^{-2}\langle\!\langle g^{2} \rangle\!\rangle =& \frac{2}{15m_{0}}+\frac{8(m_{0}-2)s}{315m_{0}^2}- [(15+19m_{0}-69m_{l})(m_{0}-2)+32m_{l}(m_{l}-2)] \frac{4s^2}{4725m_{0}^3}+\mathcal{O}(s^3),\\ d^{-3}\langle\!\langle g^3 \rangle\!\rangle=& -\frac{8(m_0-2)s^2}{1485m_0^{3}} +[(172\ 635+200\ 793m_0-751\ 117m_l)(m_0-2)+353\ 792m_l(m_l-2)] \frac{32s^3}{638\ 512\ 875m_0^4}\\ &+\{[-291\ 960-1\ 371\ 774m_0+554\ 337m_0^2+(2\ 303\ 952- 1\ 429\ 012m_0+205\ 356m_l)m_l](m_0-2)\\ &+286\ 720m_l(m_l-1)(m_l-2)\}\frac{8s^{4}}{212\ 837\ 625m_0^5} +\mathcal{O}(s^5),\\ d^{-4}\langle\!\langle g^4\rangle\!\rangle=& \frac{512(m_0-2)s^{3}}{155\ 925m_0^{4}} -[(17\ 006\ 315+15\ 370\ 851m_0-62\ 563\ 249m_l)(m_0-2)+ 29\ 630\ 464m_l(m_l-2)]\\ &\times\frac{32s^4}{54\ 273\ 594\ 375m_0^5}+\mathcal{O}(s^5),\\ d^{-5}\langle\!\langle g^5\rangle\!\rangle=& 
-\frac{11\ 229\ 952(m_0-2)s^{4}}{3\ 919\ 486\ 725m_0^{5}}+ [(55\ 989\ 824\ 345+45\ 233\ 187\ 091m_0-192\ 229\ 424\ 511m_l)(m_0-2)\\ &+91\ 546\ 451\ 968m_l(m_l-2)] \frac{128s^{5}}{510\ 443\ 155\ 096\ 875m_0^6}+\mathcal{O}(s^6),\\ d^{-6}\langle\!\langle g^6\rangle\!\rangle=& \frac{6\ 045\ 601\ 792(m_0-2)s^{5}}{1\ 747\ 609\ 738\ 875m_0^{6}} +\mathcal{O}(s^6). \end{split} \end{equation} \end{widetext} For a normal-metal wire ($m_l=1$), the term of the order $\mathcal{O}(s^2)$ in the second cumulant and the first term of $\langle\!\langle g^3 \rangle\!\rangle$ agree with Ref.~\onlinecite{macedo94}. For the symmetry class A (\mbox{$m_0=2$}, \mbox{$m_l=1$}), the subleading term of $\langle\!\langle g^3 \rangle\!\rangle$ coincides with Ref.~\onlinecite{macedo02}. For the BdG classes CI and DIII (\mbox{$m_0=2,m_l=0$} and \mbox{$m_0=2,m_l=2$}) the coefficients of the form $a_{k_{1}}^{(m)},a_{k_{1},k_{2}}^{(m)}$ with $m\le -1$ equal zero. By a similar argument as in Appendix B one may show that this implies that all the factors of the form $a_{k_{1},\ldots,k_{n}}^{(m)}$ with $n\ge 3$ and any $m$ vanish. Thus in the case of the symmetry classes CI and DIII, the cumulants higher than the second do not contain components which are analytic in $s$ at $s=0$. For these symmetry classes, the three lowest cumulants contain a contribution that is nonanalytic at $s=0$. \cite{macedo02} We consider it likely that there exists a residual effect of this nonanalytic behavior in the higher-order cumulants too so that these cumulants do not vanish, but we cannot prove this by our perturbative method. If such a residual effect exists, the cumulants higher than the second are special for classes CI and DIII in that the nonperturbative components are actually the leading contributions in the diffusive regime. 
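As a sanity check on the expressions above, the following sketch evaluates the listed terms of $d^{-3}\langle\!\langle g^3 \rangle\!\rangle$ in exact rational arithmetic. The coefficients are transcribed directly from Eq.~(\ref{eq:gcumulants}); the check confirms that every listed term vanishes for CI ($m_0=2$, $m_l=0$) and DIII ($m_0=2$, $m_l=2$), and that the leading power of $s$ elsewhere matches Eq.~(\ref{eq:sc}).

```python
from fractions import Fraction as F

# Listed terms of d^{-3} <<g^3>>, transcribed from the series in the text;
# returned as {power of s: coefficient}.
def g3_terms(m0, ml):
    t2 = F(-8 * (m0 - 2), 1485 * m0**3)
    t3 = F(32, 638512875 * m0**4) * (
        (172635 + 200793 * m0 - 751117 * ml) * (m0 - 2)
        + 353792 * ml * (ml - 2))
    t4 = F(8, 212837625 * m0**5) * (
        (-291960 - 1371774 * m0 + 554337 * m0**2
         + (2303952 - 1429012 * m0 + 205356 * ml) * ml) * (m0 - 2)
        + 286720 * ml * (ml - 1) * (ml - 2))
    return {2: t2, 3: t3, 4: t4}

def leading_power(terms):
    nonzero = [p for p, c in sorted(terms.items()) if c != 0]
    return nonzero[0] if nonzero else None

print(leading_power(g3_terms(2, 0)))  # None  (class CI: all listed terms vanish)
print(leading_power(g3_terms(2, 2)))  # None  (class DIII)
print(leading_power(g3_terms(1, 1)))  # 2     (class AI: n + 0 - 1)
print(leading_power(g3_terms(2, 1)))  # 3     (class A:  n + 1 - 1)
```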
This absence of an analytical expression would indicate that the diagrammatic or semiclassical pictures \cite{altland96} do not lend themselves to the interpretation of these higher-order spectral correlations in the BdG wires with TR symmetry. The factor $(m_0-2)/m_0$ is a consequence of W(A)L. For the WD ensemble with broken TR symmetry (\mbox{$m_0=2$}, \mbox{$m_l=1$}) we obtain only even (odd) powers of $s$ in $\langle\!\langle g^n\rangle\!\rangle$ with $n$ even (odd), in agreement with Refs.~\onlinecite{gopar95} and \onlinecite{macedo94} whereas for the symmetry classes AI, AII, C, and D all the powers are present. \subsection{Current cumulants} As a second application of Eq.~(\ref{eq:recua}) we have calculated the current cumulants $C_j$ familiar from the context of full counting statistics and defined \cite{belzig02} in terms of the generating function $S(\chi)=-\ln[\sum_{N}P_{t_{0}}(N)\exp(iN\chi)]$, \begin{eqnarray} C_{j}&\equiv&-(-i)^{j}\frac{\partial^j}{\partial\chi^j}S(\chi)\bigg{|}_{\chi=0} . \end{eqnarray} Here $P_{t_{0}}(N)$ is the probability of $N$ electrons traversing through the sample in a time $t_{0}$ while $\chi$ is a so-called counting field. 
Since we have \begin{equation} \left\langle\sum_{i}\tau_i^j\right\rangle_s=\langle T_{j}\rangle_s =-\sum_{m=-\infty}^{1} a_j^{(m)}s^{-m}, \end{equation} the equation \cite{lee95} \begin{eqnarray} C_{j}&=&\frac{deVt_0}{h}\left\langle\sum_{i}\left[\tau(1-\tau)\frac{d}{d\tau} \right]^{j-1}\tau\bigg{|}_{\tau=\tau_i}\right\rangle_s \label{leeeq} \end{eqnarray} yields for the current cumulants \begin{equation} \begin{split} C_{1}=&Q_0+c_0+c_1,\\ C_{2}=&\frac{1}{3}Q_0+\frac{1}{15}c_{0}-\frac{3}{7}c_1, \ C_{3} = \frac{1}{15}Q_0+\frac{1}{315}c_0+\frac{23}{105}c_1,\\ C_{4}=&-\frac{1}{105}Q_0-\frac{11}{1575}c_0-\frac{401}{3465}c_1,\\ C_{5}=&-\frac{1}{105}Q_0-\frac{1}{1485}c_0+\frac{18\ 101}{315\ 315}c_1,\\ C_{6}=&\frac{1}{231}Q_0+\frac{47\ 221}{14\ 189\ 175}c_0-\frac{24\ 433}{945\ 945}c_1,\\ C_{7}=&\frac{27}{5005}Q_0+\frac{811}{2\ 027\ 025}c_0+\frac{62\ 993}{4\ 922\ 775}c_1,\\ C_{8}=&-\frac{3}{715}Q_0-\frac{1\ 790\ 851}{516\ 891\ 375}c_0-\frac{664\ 157}{70\ 509\ 285}c_1,\\ C_{9}=&-\frac{233}{36\ 465}Q_0-\frac{98\ 299\ 813}{206\ 239\ 658\ 625}c_0-\frac{1\ 095\ 799}{204\ 105\ 825}c_1,\\ C_{10}=&\frac{6823}{969\ 969}Q_0+\frac{23\ 610\ 799\ 591}{3\ 781\ 060\ 408\ 125}c_0\\ &+\frac{10\ 053\ 185\ 861}{3\ 478\ 575\ 575\ 475}c_1. \end{split} \end{equation} The next correction terms are of the order $\mathcal{O}(s^2)$. Here we have adopted the notations \begin{eqnarray} Q_0&=&\frac{I_0t_0}{e},\ I_0 =\frac{e^2}{h}\frac{d}{s}V,\ c_0=\frac{(m_0-2m_l)deVt_0}{3m_0 h},\nonumber\\ c_1&=&\frac{[(3m_0-8m_l)(m_0-2)+4m_l(m_l-2)]deVt_0s}{45m_0^{2}h}.\nonumber\\ \end{eqnarray} Equations ~(\ref{eq:recua}) and (\ref{leeeq}) provide a straightforward derivation for the leading contributions of $C_j$ since for these terms, in Eq.~(\ref{eq:recua}), only the sums in square brackets with \mbox{$n=m=j=n_1=p_1=1$} contribute. For the leading contributions of $C_j$, our results agree with those in Ref.~\onlinecite{lee95}. 
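The linear structure of these results can be packaged compactly: each $C_j$ is a fixed rational combination of $Q_0$, $c_0$, and $c_1$. The sketch below transcribes the coefficients for $C_1$ through $C_4$ and, switching off the correction terms ($c_0=c_1=0$), recovers the universal leading-order ratios of a diffusive wire, such as the $1/3$ shot-noise suppression.

```python
from fractions import Fraction as F

# Coefficients (of Q0, c0, c1) for the current cumulants C_1..C_4,
# transcribed from the expressions in the text (leading order in s).
COEFFS = {
    1: (F(1),       F(1),         F(1)),
    2: (F(1, 3),    F(1, 15),     F(-3, 7)),
    3: (F(1, 15),   F(1, 315),    F(23, 105)),
    4: (F(-1, 105), F(-11, 1575), F(-401, 3465)),
}

def current_cumulant(j, Q0, c0, c1):
    a, b, c = COEFFS[j]
    return a * Q0 + b * c0 + c * c1

# With the W(A)L corrections switched off, the ratios C_j/C_1 are universal:
Q0 = F(1)
print(current_cumulant(2, Q0, 0, 0))  # 1/3   (diffusive shot-noise suppression)
print(current_cumulant(3, Q0, 0, 0))  # 1/15
```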
It can be shown from Eq.~(\ref{eq:recua}) that the correction terms of a given order $\mathcal{O}(s^{-m})$ depend similarly on $m_0$ and $m_l$ for all $j$ (through $c_0, c_1,\ldots$) and differ only by the numerical coefficients. This was proved in Ref.~\onlinecite{imamura02} for the terms of the order $\mathcal{O}(s^0)$. While the weak-localization corrections to the current cumulants have been studied before, e.g., in Ref.~\onlinecite{lerner88}, to our knowledge, the exact numerical values for $j>2$ have not been reported. Except for the first two cumulants, the ratio of the numerical factors before the universal correction term $c_0$ and before the bare first cumulant $Q_0$ is for even cumulants of the order of, but smaller than, unity and for odd cumulants smaller by about a factor of ten. \section{Discussion} In conclusion, we have presented a recursion equation covering the asymptotic behavior of the higher-order mesoscopic fluctuations for seven universality classes. We evaluated the values of the conductance cumulants $\langle\!\langle g^n\rangle\!\rangle$ with $n\le 6$ and the weak-localization corrections to the current cumulants $C_j$ for $j\le 10$. We discovered two qualitative results: (1) conductance cumulants of order larger than two and smaller than $1/s$, where $s$ is the length of the wire in units of the number of channels times the mean free path, scale with one less power of $s$ than expected on the basis of a naive scaling analysis, and (2) the same cumulants all vanish in an expansion in powers of $s$ for the two BdG symmetry classes characterized by TR symmetry, from which we conjecture a purely nonanalytic dependence on $s$. As far as the noninteracting DMPK model is valid, $P(g)$ deviates from the Gaussian shape only slightly. These deviations may, however, in principle, be detected by generating a large number of uncorrelated disorder realizations by repeatedly heating and cooling the sample. 
\cite{mailly92} \begin{acknowledgments} M.P.V.S. acknowledges the financial support of Magnus Ehrnrooth Foundation and the Foundation of Technology (TES, Finland). We thank the Center for Scientific Computing for computing resources. \end{acknowledgments}
\section{Introduction} Topological insulators (TIs) are a class of materials that are currently the focus of considerable attention since they represent a new state of matter in which the bulk is an insulator with an ``inverted'' energy gap induced by a strong spin-orbit coupling (SOC), which leads to the emergence of unusual gapless edge or surface states protected by time-reversal symmetry\cite{Hasan2010, Qi2011, Ando2013, Kane2011}. First discovered in two-dimensional systems, one of the simplest topological insulators is the quantum spin Hall state, in which the SOC plays the same role as the magnetic field in the quantum Hall effect. In the quantum Hall effect, the bulk conductance is zero while the edge states are conducting, with current flowing in one direction around the edge of the system. Similarly, in the quantum-spin-Hall state, the bulk is still insulating while edge-state electrons with opposite spins propagate in opposite directions, consistent with time-reversal symmetry. Theoretical concepts were soon generalized to three dimensions and demonstrated experimentally in materials such as Bi$_{1-x}$Sb$_{x}$\cite{Fu2007}. As in the 2D case, the direction of an electron's motion along the surface of a 3D topological insulator is locked to the spin direction, which now changes continuously as a function of propagation direction, resulting in an unusual ``planar metal''. In the bulk of a TI, the electronic band structure resembles that of an ordinary band insulator, with the Fermi level falling between the conduction and valence bands. On the surface of a TI there are special states that fall within the bulk energy gap and allow surface metallic conduction. Although ordinary band insulators can also support conductive surface states, in a TI the locking of the spin and propagation directions eliminates the possibility of backscattering from nonmagnetic impurities. 
The first key experiment in this field was the observation of the 2D quantum spin Hall effect in a quantum-well structure made by sandwiching a thin layer of mercury telluride (HgTe) between layers of mercury cadmium telluride (Hg$_{x}$Cd$_{1-x}$Te), following a theoretical prediction\cite{Bernevig2006}. The first 3D TI to be probed using angle-resolved photoemission spectroscopy (ARPES) was the semiconducting alloy Bi$_{1-x}$Sb$_{x}$\cite{Hsieh2008}. Simpler versions of the 3D TI were theoretically predicted in the Bi$_{2}$Te$_{3}$, Sb$_{2}$Te$_{3}$\cite{Zhang2009} and Bi$_{2}$Se$_{3}$\cite{Zhang2009, Xia2009} compounds, with a large bulk gap and a gapless surface state consisting of a single Dirac cone. Later ARPES experiments indeed observed the linear dispersion relation of these surface states\cite{Xia2009, Chen2009}. These discoveries confirmed the ubiquitous existence in nature of this new topological state. In 2011, the notion of ``topological crystalline insulators (TCIs)'' was introduced to extend the topological classification of band structures to include certain crystal point group symmetries\cite{Fu2011}. This new state of matter features metallic surface states with quadratic band dispersion on high-symmetry crystal surfaces, and it was shown that such a situation is realized in an insulating crystal having the rocksalt structure. It has caused quite a sensation since the first example, SnTe, was theoretically\cite{Hsieh2012} and experimentally\cite{Tanaka2012} confirmed to exhibit topological surface states on the \{001\}, \{110\} and \{111\} surfaces. Soon after this discovery, the topological surface states in the related alloys Pb$_{1-x}$Sn$_{x}$Te\ and Pb$_{1-x}$Sn$_{x}$Se were verified by ARPES and by Landau level spectroscopy using scanning tunneling microscopy and spectroscopy\cite{Dziawa2012, Xu2012, Yan2014}, thus expanding the range of relevant materials.
Alongside SnTe and the related alloys Pb$_{1-x}$Sn$_{x}$Se/Te, other chalcogenides such as SnS and SnSe that incorporate lighter elements have also been predicted to be TCIs, even without the SOC\cite{Sun2013}. In theory, normal IV-VI rocksalt chalcogenides can be tuned into TCIs by applying external pressure\cite{Barone2013}. Besides the materials with the rocksalt crystal structure, Hsieh \emph{et al.} predicted that the antiperovskite family are also promising materials for exploring topological and other related properties\cite{Hsieh2014}. More recently, a new phase of Bi, stabilized by strain, has been found to be a TCI based on mirror symmetry, similar to SnTe\cite{Munoz2016}. The discovery of TIs and TCIs has also stimulated the search for topological superconductors (TSCs), whose surfaces should exhibit Majorana fermions\cite{Qi2011}. Superconductors derived from TIs by doping have been considered as TSC candidates, such as Cu$_{x}$Bi$_{2}$Se$_{3}$\cite{Fu2010odd, Hor2010sc}. Since the topological surface states are protected from backscattering by disorder, it should be safe to tune the chemical potential through chemical substitution. The ARPES studies performed on Sn$_{1-x}$In$_{x}$Te\ (SIT) at $x$ = 0.045 confirmed that the topological surface states remain intact after In doping\cite{Sato2013}. In fact, SnTe becomes a superconductor upon substituting 2\% or more of Sn with In, which introduces hole carriers\cite{Erickson2009, Balakrishnan2013, Zhong2013, Novak2013}. Similarly, doping the TCI Pb$_{0.5}$Sn$_{0.5}$Te\ with more than 10\% indium also induces superconductivity\cite{Zhong2014}. This spurs interest in searching for the superconducting analogue, a time-reversal-invariant TSC, in this system.
Point-contact spectroscopy experiments performed on SIT with various In concentrations found that a zero-bias conductance peak is observed only in the cleanest samples with $x \approx$ 0.04, suggesting that there is competition between topological and non-topological superconducting states, and that disorder may determine the outcome\cite{Novak2013}. A challenge for characterizing the transport properties of surface states in TI/TCI materials such as Bi$_{2}$Se$_{3}$ and SnTe is the dominance of a pronounced bulk conductance\cite{Skinner2012, Butch2010}. Despite considerable efforts to reduce the bulk carrier density, such as modifying the crystal growth method\cite{Jia2011}, reducing the sample thickness\cite{Peng2009} and chemical counterdoping\cite{Hor2009, Analytis2010, Checkelsky2011}, the bulk conduction has proved difficult to suppress. Inspired by the goal of finding truly bulk-insulating topological materials, we have found that indium doping of the TCI materials (Pb,Sn)Te can yield a huge bulk resistivity while maintaining topological surface states\cite{Zhong2015}. In this article, we present a review of the effects of indium substitution on the crystal structure, resistivity behavior, and electronic band structure in the TCI family (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te\ (PSIT). By varying the indium concentration, samples show an extreme range of low-temperature resistivities: with a few percent indium doping, the samples show weak-metallic behavior; with $\sim$6\% indium doping, samples have true bulk-insulating resistivity and present evidence for nontrivial topological surface states; with higher indium doping levels, superconductivity with a transition temperature $T_{c}$\ positively correlated with the indium concentration is observed. We consider this behavior from the standpoint of the localized impurity states associated with the indium dopants.
\section{Results} \subsection{Crystal structure} \begin{figure} \centering \includegraphics[width=13cm]{Crystal_Structure.pdf} \caption{\label{fig:crystal} (color online) (\textbf{a}) A sketch of the crystal structure of SnTe with Sn atoms (yellow) partially replaced by Pb (grey) and In (red). (\textbf{b}) X-ray powder diffraction (XRD) patterns for SnTe (black), Pb$_{0.5}$Sn$_{0.5}$Te\ (blue) and (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te (red), respectively. Each dashed line marks the position of an XRD peak of a compound with the same color. (\textbf{c-e}) Optical microscope photos of the pristine surface of SnTe (c), Pb$_{0.5}$Sn$_{0.5}$Te\ (d) and (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te (e).} \end{figure} SnTe is a IV-VI semiconductor that crystallizes in the cubic rocksalt structure at room temperature, and maintains this structure after a certain degree of substitution of Sn with Pb and/or In (Figure 1a). Due to the unchanged crystal structure, the crystal point group symmetries that are essential to maintain the topological surface states remain the same. Because of the difference in lattice constants of the end members (PbTe > SnTe > InTe)\cite{Springer}, the lattice parameters of (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te\ compounds vary with $x$ and $y$. Figure 1b shows the XRD patterns of SnTe ($x=1$, $y=0$, black), Pb$_{0.5}$Sn$_{0.5}$Te\ ($x=0.5$, $y=0$, blue) and (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te ($x=0.5$, $y=0.3$, red), respectively. Compared to the parent compound SnTe, with a lattice constant $a=6.32$~\AA, Pb-doping increases the lattice constant ($a=6.39$~\AA\ for Pb$_{0.5}$Sn$_{0.5}$Te). Subsequent In-doping can then decrease the lattice constant ($a=6.36$~\AA\ for (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te). Similarly, a systematic shrinking of the unit cell as a function of In content has been observed in previous studies of SIT\cite{Zhong2013, Haldo2016}. 
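As a rough consistency check on the lattice constants quoted above, a Vegard's-law (linear) interpolation can be sketched in a few lines of Python. This is only an illustrative sketch: the SnTe value is taken from the text, while the PbTe end-member value of 6.46~\AA\ is an assumed literature value, not a measurement from this work.

```python
# Illustrative Vegard's-law estimate of the Pb_{1-x}Sn_xTe lattice constant.
A_SNTE = 6.32  # Angstrom, quoted in the text for SnTe
A_PBTE = 6.46  # Angstrom, assumed literature value for PbTe (not from this work)

def lattice_constant(x):
    """Linear (Vegard's law) interpolation for the Sn fraction x in Pb_{1-x}Sn_xTe."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("Sn fraction x must lie in [0, 1]")
    return x * A_SNTE + (1.0 - x) * A_PBTE

print(round(lattice_constant(0.5), 2))  # ~6.39 A, matching the Pb0.5Sn0.5Te value above
```

Indium substitution then shrinks the cell further (toward the smaller InTe radius), consistent with the 6.36~\AA\ value quoted for (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te.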
The measured lattice parameters listed on the figure for the various compositions are qualitatively consistent with Vegard's law and the differences in radii of the ionic components. With more Sn or Pb atoms being replaced by In, the distortion of the crystal structure grows, and eventually the solubility limit is reached, indicated by the appearance of the secondary phase InTe. Common tools to characterize the surface states include angle-resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM). To apply these techniques, one typically needs atomically-flat and well-oriented surfaces. For the topological insulator Bi$_{2}$Se$_{3}$, this is not a problem, since it can be easily cleaved due to the weak coupling between its layers. In the case of SnTe-related compounds, however, the situation is more challenging due to their isotropic cubic structures. To illustrate, Figures 1c-1e show microscope photos of pristine surfaces. The flat, shiny planes are cleaved surfaces, and they become smaller with increasing Pb/In substitution. Thus, it appears that substitution of Pb or In atoms introduces lattice distortion and leads to smaller cleaved surfaces for surface-sensitive studies. STM studies of SIT single crystal samples have been successfully performed\cite{Sato2013}, as discussed in Sec.~2.5. Direct ARPES studies of PSIT single crystals are few, and it has proved more practical to perform measurements on thin films evaporated from previously characterized bulk samples\cite{Du2015, Zhong2015}. \subsection{Resistivity behaviors of In-doped Pb$_{1-x}$Sn$_{x}$Te} \begin{figure} \centering \includegraphics[width=9cm]{R_Summary.pdf} \caption{\label{fig:R} (color online) Temperature dependence of the resistivity for (Pb$_{0.5}$Sn$_{0.5}$)$_{1-y}$In$_{y}$Te single crystals with indium contents $0\leq y \leq0.30$.} \end{figure} The evolution of the electronic properties with composition has been investigated through transport measurements.
Here we take (Pb$_{0.5}$Sn$_{0.5}$)$_{1-y}$In$_{y}$Te as an example to illustrate the effect of indium substitution. As shown in Figure 2, pure Pb$_{0.5}$Sn$_{0.5}$Te\ shows a metallic-like behavior with a $p$-type carrier density similar to SnTe. As increasing amounts of indium are introduced into the Pb$_{0.5}$Sn$_{0.5}$Te\ system, single crystal samples show quite divergent, nonmonotonic variations in resistivity in the normal state. For the samples with one percent or less indium, the resistivity is weakly metallic, just like the resistivity behavior of pure SnTe\cite{Zhong2013} or Pb$_{1-x}$Sn$_{x}$Te\ without indium doping\cite{Dixon1968}. Increasing $y$ to 0.06, we observe that the resistivity at 10~K rises by five orders of magnitude. With further increases of $y$, the resistivity drops, but remains semiconducting, consistent with earlier studies\cite{Zhong2014, Vul1978, Kozub2006, Shamshur2008}. This resistivity behavior in the normal state is quite different from the case of In-doped SnTe\cite{Zhong2013}, where all samples are weakly metallic in the normal state. At low temperature, samples show true bulk-insulating resistivity and present evidence for nontrivial topological surface states\cite{Zhong2015}. With higher indium doping levels, superconductivity with a transition temperature $T_{c}$\ positively correlated with the indium concentration was observed, and the highest $T_{c}$, $\sim$4.7~K, was achieved for 45\% indium doped SnTe samples \cite{Zhong2013,Balakrishnan2013} and 30\% indium doped Pb$_{0.5}$Sn$_{0.5}$Te samples \cite{Zhong2015}. \begin{figure}[!b] \centering \includegraphics[width=15cm]{RH.pdf} \caption{\label{fig:RH} (color online) Weak antilocalization magnetoresistance of (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te. (\textbf{a}) MR for a (Pb$_{0.65}$Sn$_{0.35}$)$_{0.98}$In$_{0.02}$Te sample measured at temperatures of $T= 5$, 20, 30 and 50 K in perpendicular magnetic fields of $|B|\leq 7$ T.
The WAL effect is overwhelmed at high temperatures by the bulk conduction states. (\textbf{b,c}) MR for (Pb$_{0.65}$Sn$_{0.35}$)$_{0.98}$In$_{0.02}$Te (black), (Pb$_{0.5}$Sn$_{0.5}$)$_{0.94}$In$_{0.06}$Te (blue) and (Pb$_{0.8}$Sn$_{0.2}$)$_{0.94}$In$_{0.06}$Te (red) measured at 5 K in a full field range of $\pm7$ T and the enlarged low-field regime $|B|\leq 0.6$ T. (\textbf{d,e}) Magnetoconductance $\Delta G=\Delta (1/R)$ of the (Pb$_{0.65}$Sn$_{0.35}$)$_{0.98}$In$_{0.02}$Te sample measured at 5~K and 20~K, respectively. Lines represent the result of fitting using WAL formula with fixed $\alpha=0.37$ and variable $l_\phi=51$~nm (20~K) and 58~nm (5~K). (\textbf{f}) Magnetoconductance of the (Pb$_{0.5}$Sn$_{0.5}$)$_{0.94}$In$_{0.06}$Te sample measured at 5~K. Line is fitted by the WAL formula together with an additional $-B^2$ term. } \end{figure} The effect of indium substitution is similar for other (Pb,Sn)Te compositions. Nonmonotonic variation in the normal-state resistivity with $y$ is also found in transport measurements of PSIT for many series with different $x$ values. Specifically, $x$=0.5 is not the only system that shows large bulk resistance when doped with a low concentration of indium. In the whole family of (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te, maximum resistivities that surpass 10$^{6}\ \Omega$cm are observed for $x$=0.25-0.30. Even for $x$=0.35, doping with 6\% In results in a rise in resistivity by 6 orders of magnitude at low temperature. These phenomena can be well explained in a picture where the chemical potential is pinned within the band gap, which will be discussed in detail in a later section. A common test of the topological character of surface states involves measurements of magnetoresistance (MR) at low temperature\cite{Bansal2012}. The symmetry-protected coupling of spin and momentum for surface states makes them immune to weak localization effects. 
Application of a transverse magnetic field breaks time-reversal symmetry\cite{Serbyn2014}, thus removing the topological protection and leading to a field-induced increase in resistance. Figure 3a shows data for $\Delta R=R(B)-R(0)$ measured at several temperatures for a magnetic induction $|B|\leq 7$~T applied perpendicular to the $(001)$ surface of the (Pb$_{0.65}$Sn$_{0.35}$)$_{0.98}$In$_{0.02}$Te sample. At temperatures of 30~K and below, the field dependence of the induced resistance has a form qualitatively consistent with that expected for weak anti-localization (WAL) of two-dimensional electron states. The MR curve at 5~K (black) clearly shows a cusp near zero field, which is a sign of the WAL effect and suggests the dominance of topological surface states. At elevated temperature, the cusp disappears, and the curves in the low-field regime (not shown) are dominated by the parabolic $B$-dependence of the bulk states\cite{Akiyama2014, Kim2011WAL}, which is a reflection of the bulk carriers under a Lorentz force in a perpendicular field. The magnitude of the MR changes monotonically with temperature, a trend that requires further study to be fully understood. In order to clarify the nature of surface states in samples with different compositions, in Figs.~3b and 3c we compare the MR behavior at 5 K between (Pb$_{0.8}$Sn$_{0.2}$)$_{0.94}$In$_{0.06}$Te (red), (Pb$_{0.65}$Sn$_{0.35}$)$_{0.98}$In$_{0.02}$Te (black), and (Pb$_{0.5}$Sn$_{0.5}$)$_{0.94}$In$_{0.06}$Te (blue). In the low-field regime, the $x$=0.35 sample clearly shows a WAL effect even with a few percent indium, which is consistent with the ARPES evidence that the topological surface states of In-doped SnTe are maintained when the In doping concentration is roughly 4.5\%\cite{Sato2013}.
To be more quantitative, we convert the data to conductance, $G$, and compare with the theoretical formula for WAL\cite{Hikami1980}, \begin{equation} \Delta G = {\alpha\over\pi} {e^2\over h} [\ln(B_\phi/B) - \psi({\textstyle\frac12}+B_\phi/B)], \end{equation} where $\psi$ is the digamma function and $\alpha$ is a number equal to $1/2$ times the number of conduction channels; $B_\phi= \Phi_0/(8\pi l_\phi^2)$, with $\Phi_0=h/e$ and $l_\phi$ being the electronic phase coherence length. For our system, one expects four Dirac cones crossing the Fermi surface\cite{Serbyn2014, Assaf2014}, which would give $\alpha=2$. Figure 3e shows that we get a good fit to the 20-K data for the $x=0.35$, $y=0.02$ sample with $\alpha=0.37$ and $l_\phi=51$~nm. Moving to $T=5$~K in Fig.~3d, the low-field data can be described by keeping $\alpha$ fixed and increasing $l_\phi$ to 58 nm; however, the data also exhibit a large oscillation about the calculated curve for $|B|>0.2$~T. This may be due to a Landau level crossing the Fermi energy\cite{Serbyn2014}. Turning to the $x=0.5$, $y=0.06$ sample, the 5-K data in Fig.~3f are well described by the WAL formula for $|B|<1$~T, with $\alpha=2.25$ and $l_\phi=100$~nm, but at larger $|B|$ we need an additional component that varies as $-B^2$. The latter contribution might come from bulk states. \subsection{Phase diagram} To summarize the effect of indium substitution on PSIT materials, we present in Fig.~4 a ternary phase diagram of the system to illustrate trends for several properties: the character of the low-temperature resistivity (metallic, insulating, superconducting) and the solubility limit. Here, the end members are SnTe, PbTe, and InTe. The closer a composition lies to an end member, the higher the concentration of that component. Each of the six dashed lines emanating from the end member InTe represents a series of PSIT with a fixed Sn:Pb ratio, as labeled by $x$. For low indium doping (blue region), samples show weak metallic resistivity, as in SnTe.
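The WAL magnetoconductance of Eq.~(1) can be evaluated with a short numerical sketch. This is illustrative only, not the fitting code actually used: the digamma function is implemented by a simple recurrence plus asymptotic expansion to keep the example self-contained, and the parameters ($\alpha=0.37$, $l_\phi=58$~nm) are those of the 5-K fit quoted above.

```python
import math

E2_OVER_H = 3.874e-5   # conductance quantum e^2/h in siemens
PHI0 = 4.1357e-15      # flux quantum h/e in webers

def digamma(x):
    """Digamma function psi(x) via recurrence plus asymptotic expansion."""
    result = 0.0
    while x < 6.0:          # push the argument up to where the expansion is accurate
        result -= 1.0 / x
        x += 1.0
    return result + (math.log(x) - 1.0 / (2.0 * x)
                     - 1.0 / (12.0 * x**2) + 1.0 / (120.0 * x**4)
                     - 1.0 / (252.0 * x**6))

def wal_delta_g(b, alpha, l_phi):
    """HLN weak-antilocalization magnetoconductance Delta G(B), in siemens.

    b      -- perpendicular field in tesla (b > 0)
    alpha  -- prefactor (1/2 per conduction channel)
    l_phi  -- phase coherence length in meters
    """
    b_phi = PHI0 / (8.0 * math.pi * l_phi**2)   # B_phi = Phi_0 / (8 pi l_phi^2)
    return (alpha / math.pi) * E2_OVER_H * (
        math.log(b_phi / b) - digamma(0.5 + b_phi / b))

# Parameters from the 5-K fit quoted above: alpha = 0.37, l_phi = 58 nm.
# B_phi ~ 0.05 T, setting the scale of the low-field cusp.
for b in (0.05, 0.2, 1.0):
    print(b, wal_delta_g(b, 0.37, 58e-9))
```

The correction vanishes as $B\to 0$ and grows in magnitude (negative $\Delta G$) with field, reproducing the low-field cusp discussed above.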
A few percent indium doping turns the Pb$_{1-x}$Sn$_{x}$Te\ samples into true insulators (orange region). By increasing the In content further, superconductivity may be achieved (green region). When the indium content exceeds the solubility limit in the system (marked with white crosses), where additional In is no longer simply substituting for Pb/Sn, an impurity phase of InTe, with a tetragonal crystal structure, appears and the samples are no longer single crystals. The critical In concentrations that divide these various regions are illustrated with dashed lines. \begin{figure} \centering \includegraphics[width=10cm]{Phase_Diagram.pdf} \caption{\label{fig:diagram} (color online) A ternary phase diagram summarizing all the resistivity behaviors of (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te. Experimental results for SIT with In content up to 10\% are obtained from Ref. \cite{Erickson2009}. The solubility limit of In in PbTe (24\%) is obtained from Ref. \cite{Ravich2002}. Samples with weak metallic resistivity are shown in blue, those with insulating resistivity in orange, and those with superconductivity in green. White crosses represent the solubility limit of In, beyond which the sample no longer remains in a single phase and a secondary InTe phase shows up.} \end{figure} From the resistivity behavior in the phase diagram, it can be seen that the In substitution effect shows consistent trends. Superconductivity emerges almost immediately with indium doping in SnTe. In Pb$_{1-x}$Sn$_{x}$Te, though, with increasing Pb content the amount of In needed to induce superconductivity goes up, and the range of superconductivity with respect to In doping shrinks. Meanwhile, the bulk insulating region broadens with increased Pb, and the maximum bulk resistivity that can be achieved in the PSIT family is found in the $x$=0.30 and $x$=0.25 series\cite{Zhong2015}.
Those materials, along with the previously reported bulk-insulating TI Sn-doped Bi$_{1.1}$Sb$_{0.9}$Te$_{2}$S\cite{Kushwaha2016}, could provide good platforms to study true topological `insulators', in which bulk conduction would not dominate the transport behavior, assuming that their surface states remain topological. \subsection{Bulk band structure} \begin{figure}[!t] \centering \includegraphics[width=12cm]{Band_Structure.pdf} \caption{\label{fig:Band} (color online) Energy diagrams illustrating the relative locations of the conduction band, the valence band, and the indium-induced impurity band in the continuous series of Pb$_{1-x}$Sn$_{x}$Te\ alloys with low In doping level, where indium can be simply treated as a $p$-type dopant. In SnTe, the conduction band has a symmetry of $L_{6}^{+}$; the bands undergo an inversion at $x\sim$0.35 and the symmetry is inverted in PbTe. The band gap is illustrated with blue dashed lines, with the end member SnTe having 360 meV and PbTe having 190 meV \cite{Heremans2012}. The Fermi level, controlled by the indium impurity states, is indicated schematically by the red line. } \end{figure} To address the divergent resistivity behaviors, it is helpful to consider the bulk electronic structure. SnTe and other IV-VI materials with the rocksalt structure have long attracted attention as models for small-band-gap semiconductors. The topologically distinct band structures of SnTe (nontrivial, $x$ = 1) and PbTe (trivial, $x$ = 0) involve a change in the ordering of the conduction and valence bands at the $L$ points. This implies that the band gap of the alloy Pb$_{1-x}$Sn$_{x}$Te\ first closes and then re-opens as $x$ increases, as shown in Fig.~5\cite{Ravich2002, Kaidanov1985, Pankratov1987}.
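The gap closing and re-opening can be illustrated with a toy linear interpolation between the end-member gaps quoted above, assigning a negative sign to the inverted SnTe gap. The linear form and sign convention are illustrative assumptions, not a fitted model, but they place the gap closure near the experimentally reported inversion composition.

```python
# Toy linear model of the L-point gap in Pb_{1-x}Sn_xTe: the trivial PbTe gap
# (+190 meV) is interpolated to the inverted SnTe gap (-360 meV; the negative
# sign marking the band inversion is an illustrative convention, not data).
GAP_PBTE = 190.0   # meV, trivial ordering (x = 0)
GAP_SNTE = -360.0  # meV, inverted ordering (x = 1)

def gap_mev(x):
    """Signed L-point gap of Pb_{1-x}Sn_xTe in this toy model."""
    return (1.0 - x) * GAP_PBTE + x * GAP_SNTE

# Composition where the toy gap closes:
x_c = GAP_PBTE / (GAP_PBTE - GAP_SNTE)
print(round(x_c, 2))  # ~0.35, close to the experimental inversion point
```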
It follows that there must be a topological quantum phase transition upon varying the Pb/Sn ratio in Pb$_{1-x}$Sn$_{x}$Te, and experiments indicate that it occurs near $x_{c}\approx 0.35$ at low temperature\cite{Dimmock1966, Kaidanov1985, Pankratov1987, Gao2008, Xu2012, Tanaka2013}. Generally, it is believed that each In dopant will provide one less valence electron than Sn$^{2+}$ and Pb$^{2+}$, so that indium should be considered as a $p$-type dopant. In the case of SnTe, one begins with a $p$-type semiconductor due to Sn vacancies. With In doping, the number of cation vacancies decreases, which partially compensates the expected impact of the In; nevertheless, the $p$-type carrier density initially grows with increasing In concentration \cite{Zhang2013_SnTe,Novak2013}. The situation becomes more complicated for an indium concentration above 10\%, where the sign of the Hall resistivity changes \cite{Haldo2016}, suggesting the possibility that two types of carriers are simultaneously present. In fact, in indium-doped PbTe and Pb$_{1-x}$Sn$_{x}$Te, In doping results in far less than one electron per impurity atom, which suggests that In doping also introduces an impurity band that pins the Fermi level\cite{Bushmarina1991, Heremans2012}. Evidence for the quasi-localized character of indium-induced states has been provided by a recent nuclear magnetic resonance study on Sn$_{0.9}$In$_{0.1}$Te\cite{Maeda2017}. In this scenario, the large bulk resistivity in the series with $x=0.25-0.35$ is a consequence of indium sites introducing localized impurity states that pin the chemical potential\cite{Kaidanov1985, Ravich2002}; the electronic properties then depend on the position and width of the indium level.
In the region of compositions where the localized impurity band lies in the band gap, or very close to a band edge, the free carrier concentration is extremely low at low temperature, which is reflected in the very large bulk resistivities for $x\sim0.30$ that we observe in the transport measurements. \begin{figure}[!t] \centering \includegraphics[width=15cm]{Resistivity2.pdf} \caption{\label{fig:R2} (color online) Temperature dependence of the resistivity for (Pb$_{1-x}$Sn$_{x}$)$_{0.97}$In$_{0.03}$Te (a) and (Pb$_{1-x}$Sn$_{x}$)$_{0.94}$In$_{0.06}$Te (b) single crystals. The resistivity values are shown on a logarithmic scale. } \end{figure} According to the schematic evolution of the band structure of Pb$_{1-x}$Sn$_{x}$Te\ and the energy of the In impurity level in Fig.~5, the chemical potential sits in the valence band on the Sn-rich side, consistent with $p$-type metallic behavior, while it moves to the conduction band on the Pb-rich side. With a very small amount of indium doping on the Pb-rich side, the opposing trends of decreasing cation vacancies and increasing In substitution initially lower the carrier density, leaving the system weakly metallic. With further increases in In content, the Fermi level drops into the band gap, where it gets pinned by the impurity state level. The magnitude of the resistivity will then depend largely on the size of the band gap, which is determined by the Sn content, $x$, rather than the indium content, $y$\cite{Ravich2002}. Figure 6 gives a summary of the variation in resistivity as a function of $x$ in PSIT compounds with either 3\% or 6\% indium doping. The same trends are found as a function of $x$, although the low-temperature resistivities tend to be higher for $y=0.06$. It is worth mentioning that a long relaxation time was observed in the bulk resistance for several samples, especially those that are truly bulk insulating, i.e., $x=0.25$, 0.30, 0.35.
After a sample was quenched down to liquid-helium temperatures, its resistivity gradually decreased with time. This relaxation phenomenon can last for days until the resistivity reaches a stable value. Previous studies on Pb$_{1-x}$Sn$_{x}$Te\ doped with group-III elements revealed similar time-dependent behavior, which was explained in terms of the interaction between the crystal lattice and the non-equilibrium electron densities associated with the chemical potential pinned at the impurity level\cite{Kaidanov1985}. \subsection{Debate on topological superconductivity} At higher indium content ($>$10\%), superconductivity emerges in Pb$_{1-x}$Sn$_{x}$Te\ samples, with a typical superconducting transition temperature in the range of 3 to 5~K. There are intriguing questions about the nature of the superconductivity: is it conventional BCS superconductivity, or unconventional topological superconductivity? Topological superconductors are accompanied by gapless states at the edge or surface, which characterize the nontrivial topology of the bulk state and may be composed of Majorana fermions \cite{Qi2009, Qi2010}. The first plausible example of a TSC (associated with TI or TCI compounds) was Cu$_{x}$Bi$_{2}$Se$_{3}$\cite{Fu2010odd}. Experimental evidence from point-contact spectroscopy\cite{Sasaki2011, Kirzhner2012} showing zero-bias conductance peaks coexisting with a superconducting gap may be indicative of unconventional superconductivity, which is necessary (but not sufficient) for a TSC in inversion-symmetric, time-reversal-invariant superconductors. Similarly, results for In-doped SnTe from both point-contact spectroscopy and high-resolution ARPES studies have been interpreted as evidence for odd-parity pairing and topological superconductivity \cite{Sasaki2012, Sato2013}. A markedly different conclusion was drawn, however, in an STM study on Cu$_{0.2}$Bi$_{2}$Se$_{3}$\cite{Levy2013}, which reported a superconducting gap without any zero-bias anomalies.
Later studies on the optimally doped TCI system SIT using thermal conductivity\cite{He2013}, magnetization and muon-spin rotation ($\mu$SR) measurements\cite{Saghir2014} also supported the conclusion that SIT has a full superconducting gap in the bulk, and is more likely to be a conventional $s$-wave superconductor. Similarly, STM measurements\cite{Du2015} of the superconducting state as well as the superconducting energy gap in (Pb$_{0.5}$Sn$_{0.5}$)$_{0.7}$In$_{0.3}$Te\ on the high-symmetry (001) surface lead to the same conclusion, that the superconducting sample seems to be fully gapped without any in-gap states, contrary to the expectations for a topological superconductor. These controversies may be due to the complexity of the junctions in point-contact measurements, since the spectra that are indicative of an unconventional superconductor can also be interpreted via other mechanisms\cite{Kirzhner2012, Du2015}. On the other hand, the observed fully-gapped tunneling spectra in STM measurements on Cu$_{x}$Bi$_{2}$Se$_{3}$ and SIT can also be explained by exotic pairing states with additional parameters\cite{Levy2013}. In addition, in TCI compounds where the exotic surface states only exist on certain high-symmetry planes guaranteed by the mirror symmetry, the possibility of topological superconductivity cannot be ruled out from studies of the (001) plane alone \cite{Du2015}. Moreover, due to the poor cleavability of cubic (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te, it can be difficult to expose the desired surface for surface-sensitive measurements. The debate on topological superconductivity has recently been reinvigorated by a nuclear magnetic resonance study of Cu$_{0.3}$Bi$_{2}$Se$_{3}$ \cite{Matano2016}. There the authors find clear evidence for a breaking of the spin-rotation symmetry in the superconducting state, consistent with spin-triplet pairing. This will surely motivate further investigations.
\section{Discussion} In recent years, In-doped SnTe and Pb$_{1-x}$Sn$_{x}$Te\ have been studied extensively, both as examples of topological crystalline insulators and as potentially interesting superconductors. Our study of a broad range of compositions in the PSIT system shows that indium doping has a nonmonotonic effect on the electronic properties, which can be explained from the standpoint of the location of the indium-induced impurity band relative to the bulk band structure. In this article we have presented a summary of our findings and conclusions, which can be instructive for future work on this system. In the search for new topological superconductors, Tl$_{5}$Te$_{3}$ has been found to be tunable between superconducting and topological surface states by Sn substitution\cite{Arpino2014, Arpino2015}, which is quite similar to the In-substitution effect on the Pb$_{1-x}$Sn$_{x}$Te\ system. These facts may imply that the topological surface states and the bulk superconductivity are two competing properties. The goal of combining the superconducting and topological characters remains a challenge. A plausible strategy for finding Majorana fermions is to artificially construct topological insulator/conventional superconductor heterostructures and make use of the superconducting proximity effect\cite{Fu2008, Wang2012, Wang2013, Xu2014, Li2014, Xu2015}. Both SIT and PSIT would be perfect platforms for this purpose, since these systems undergo a continuous change from a TCI to a (likely conventional) superconductor. More specifically, the large bulk resistivity shows up in (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te\ with $x=0.25$--0.5, a $p$-type matrix can be realized in the series with $x=0.35$--1.0, and superconductivity can also be realized in the latter compounds, which makes this system quite promising for exploitation in heterostructures.
\section{Experimental Section} \subsection{Sample preparation.} In the Pb-Sn-In-Te alloy system, several elemental metals or compounds are found to be superconducting near liquid-helium temperatures, as indicated in Fig. 7. To investigate the indium substitution effect on the (Pb,Sn)Te system, compounds with compositions in the area of the red triangle have been grown and carefully studied in this work. Single crystal samples with nominal composition (Pb$_{1-x}$Sn$_{x}$)$_{1-y}$In$_{y}$Te\ ($x_{norm}$=0.2-0.5, 1.0, $y_{norm}$=0-0.5) were prepared via the vertical Bridgman method. Stoichiometric mixtures of high-purity (99.999\%) elements were sealed in double-walled evacuated quartz ampoules. The ampoules were set in a vertical position and heated at 950$^\circ$C\ in a three-zone vertical box furnace, with rocking to achieve homogeneous mixing of the ingredients. The crystal growth took place via slow cooling from 950 to 760 $^\circ$C\ at a rate of 1.5 $^\circ$C/hr, and then the samples were gradually cooled down to room temperature over another 3 days\cite{Zhong2013}. Figure 8a shows examples of the resulting crystals (before removal from the quartz tubes). For a few compositions, where we needed large crystals for ARPES measurements, we used a modified floating-zone method. In the normal traveling-solvent floating-zone (TSFZ) method, polycrystalline rods are prepared as the feed and seed rods, fixed at the upper and lower shafts of the furnace. During the growth, the molten zone is not in contact with any container. However, elements such as indium and tin easily evaporate and are reactive when heated to high temperature. Therefore, we used a modified TSFZ method, as explained in detail in our earlier publication\cite{Zhong2014}, in which the starting material, prepared by the vertical Bridgman method, was sealed in a long quartz tube ($\sim$15 cm) and mounted at the bottom shaft in the floating-zone furnace.
The space inside the large glass chamber surrounding the quartz tube was filled with high-purity Ar at 1 bar to avoid oxygen diffusion through the quartz. During the growth, the rotating shaft moved downwards at a velocity of 0.5--1 mm/hr, so that the new crystal gradually grew from the bottom of the starting ingot, resulting in a sample such as that shown in Fig.~8b. \begin{figure} \centering \includegraphics[width=10cm]{Phase_Diagram2.pdf} \caption{\label{fig:Phase2} (color online) Phase diagram of composition and superconductivity of the Pb-Sn-In-Te alloy system. The known superconducting metals and compounds Pb, Sn, In, InTe, and Sn$_{0.3}$In$_{0.7}$ are marked on the diagram. Based on this superconducting phase diagram, we carried out thorough studies in the region marked with red dashed lines. } \end{figure} For each sample used in the magnetization, transport and other measurements, the chemical composition was characterized using energy-dispersive X-ray spectroscopy (EDS). In this article, chemical composition values $x$ and $y$ correspond to the measured concentrations. \begin{figure} \centering \includegraphics[width=15cm]{Crystal_Ingot.pdf} \caption{\label{fig:Ingot} (color online) Single crystal rods of PSIT alloy grown by the vertical Bridgman method (\textbf{a}) and the modified floating zone method (\textbf{b}).} \end{figure} \subsection{Sample characterizations.} To identify the room-temperature crystal structure, each sample was characterized by X-ray powder diffraction (XRD) measured with Cu $K \alpha$ radiation on a Rigaku Miniflex II diffractometer. Microstructure and chemical composition of the samples were carefully investigated using an analytical high-resolution scanning electron microscope (SEM, JEOL 7600F) equipped for EDS, located at the Center for Functional Nanomaterials (CFN) at Brookhaven National Laboratory. For each crystal piece, EDS was measured at 10 positions and the mean value was taken to characterize the sample. 
These measured $x$ and $y$ values agreed within the measurement uncertainty ($\pm$0.02), and the measured values are used throughout this article. Typical microstructure pictures of the PSIT cleaved surfaces were taken with an optical microscope. To study the effect of indium substitution on the magnetic properties, dc magnetic susceptibility measurements were performed using a commercial superconducting quantum interference device (SQUID) magnetometer (MPMS, Quantum Design), for temperatures down to 1.75 K. The sample pieces were cut into an approximately cubic shape, typically weighing 0.1 g. For transport measurements, thin bar-like samples with typical dimensions of $4 \times1.5\times0.5$ mm$^3$ were cut from the bulk crystal and then polished. Electrical resistance was measured in the standard four-probe configuration, using gold wires and room-temperature-cured, fast-drying silver paint for the ohmic contacts on the top side, with a Keithley digital multimeter (model 2001) and the MPMS for temperature control. Measurement errors due to the contact geometry are estimated to be less than 10\%. \begin{acknowledgments} Work at Brookhaven National Laboratory is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract No. DE-SC0012704; use of facilities at the Center for Functional Nanomaterials was supported by the Office of Basic Energy Sciences, Division of Scientific User Facilities. W. K. acknowledges support from National Natural Science Foundation of China \#11674220 and 11447601, and Ministry of Science and Technology \#2016YFA0300500 and 2016YFA0300501. \end{acknowledgments} \renewcommand\bibname{References}
\section{Introduction} In this paper, we study symbiotic bright solitary wave solutions of a two-component system of time-dependent nonlinear Schr\"{o}dinger equations, called Gross-Pitaevskii equations, given by \begin{equation}\label{1.3}\left\{\begin{array}{l} i \hbar\partial_t\psi_1=-\frac{\hbar^2}{2m}\Delta\psi_1+\tilde{V}_1(x)\psi_1+U_{11}|\psi_1|^2\psi_1+U_{12}|\psi_2|^2\psi_1,\\ i \hbar\partial_t\psi_2=-\frac{\hbar^2}{2m}\Delta\psi_2+\tilde{V}_2(x)\psi_2+U_{22}|\psi_2|^2\psi_2+U_{12}|\psi_1|^2\psi_2,\ x\in\Omega,\ t>0.\ \ \ \ \ \end{array}\right. \end{equation} This system models a binary mixture of Bose-Einstein condensates with two different hyperfine states, called a double condensate. Here $\Omega\subseteq \mathbb{R}^N (N\leq 3)$ is the domain in which the condensates dwell, the $\psi_j$'s are the corresponding condensate wave functions, $\hbar$ is the Planck constant divided by $2\pi$ and $m$ is the atomic mass. The constants $U_{jj}\sim a_{jj}$, $j=1$,$2$, and $U_{12}\sim a_{12}$, where $a_{jj}$ is the intraspecies scattering length of the $j$-th hyperfine state and $a_{12}$ is the interspecies scattering length. In addition, $\tilde{V}_j$ is the trapping potential for the $j$-th hyperfine state. In physics, the usual trapping potential is given by $$ \tilde{V}_j(x)= \sum_{k=1}^N \tilde{a}_{j,k}(x_k-\tilde{z}_{j,k})^2\quad\hbox{ for }\:x=(x_1,\cdots,x_N)\in\Omega, j=1,2\,,$$ where $\tilde{a}_{j,k}\geq 0$ is the associated axial frequency, and $\tilde{z}_j=(\tilde{z}_{j,1},\cdots,\tilde{z}_{j,N})$ is the center of the trapping potential $\tilde{V}_j$. When the constant $U_{jj}$ is negative with sufficiently large magnitude, the self-interaction of the $j$-th hyperfine state is strongly attractive and the associated condensate tends to increase its density at the centre of the trap potential in order to lower the interaction energy (cf.~\cite{21}). This may result in spikes and bright solitons, which can be observed experimentally in three dimensional domains (cf.~\cite{CTW}). 
Conversely, when the constant $U_{jj}$ becomes positive, the self-interaction of the $j$-th hyperfine state becomes repulsive, which cannot support the existence of bright solitons. Although each self-repulsive state cannot support a soliton by itself, interspecies attraction may open a way to create two-component solitons, called symbiotic bright solitons. Recently, symbiotic bright solitons in one dimensional domains have been investigated when the interspecies scattering length $a_{12}$ is negative and sufficiently large in magnitude (cf.~\cite{PB}). However, in two and three dimensional domains, the existence of symbiotic bright solitons has not yet been proved. In this paper, we show the existence of such solitons by studying the least energy solutions of a two-component system of nonlinear Schr\"{o}dinger equations. To obtain symbiotic bright solitons in a double condensate, we may set $\psi_1(x,t)= u(x)\,e^{i\,\tilde{\lambda}_1\,t}$, $\psi_2(x,t)= v(x)\,e^{i\,\tilde{\lambda}_2\,t}$ and use Feshbach resonance to let the $U_{jj}$'s, $\tilde{\lambda}_j$'s and $\tilde{a}_{j,k}$'s be very large quantities. By rescaling and some simple assumptions, the system (\ref{1.3}) with very large $U_{jj}$'s, $\tilde{\lambda}_j$'s and $\tilde{a}_{j,k}$'s is equivalent to the following singularly perturbed problem: \begin{equation}{\label{eq:1-1}} \left\{\begin{array}{l} \varepsilon^2 \Delta u- V_1 (x) u+\mu_1 u^3 + \beta u v^2 =0\ \ \ \mbox{in}\ \ \Omega,\\ \varepsilon^2 \Delta v- V_2 (x) v+\mu_2 v^3 + \beta u^2 v =0\ \ \ \mbox{in}\ \ \Omega,\\ u,v >0\ \ \mbox{in}\ \ \Omega,\\ u=v=0\ \ \mbox{on}\ \ \partial\Omega, \end{array}\right. \end{equation} where $u$ and $v$ are the corresponding condensate amplitudes, $\varepsilon>0$ is a small parameter, and $\beta\sim -a_{12}\neq 0$ is a coupling constant. Here we impose the zero Dirichlet boundary condition, following~\cite{GP}. 
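The standing-wave reduction described above can be sketched explicitly; the following identification of constants is one convenient choice (before the spatial rescaling that makes $\varepsilon$ small), not the unique one:

```latex
% Standing-wave ansatz in the first equation of (1.3) (sketch):
\psi_1(x,t)=u(x)\,e^{i\tilde\lambda_1 t}
\;\Longrightarrow\;
-\hbar\tilde\lambda_1 u
  =-\frac{\hbar^2}{2m}\Delta u+\tilde V_1(x)\,u+U_{11}u^3+U_{12}v^2u,
\quad\text{i.e.}\quad
\frac{\hbar^2}{2m}\Delta u
  -\bigl(\tilde V_1(x)+\hbar\tilde\lambda_1\bigr)u
  -U_{11}u^3-U_{12}v^2u=0,
% which matches the first equation of the singularly perturbed problem under
\varepsilon^2\sim\frac{\hbar^2}{2m},\qquad
V_1=\tilde V_1+\hbar\tilde\lambda_1,\qquad
\mu_1=-U_{11},\qquad
\beta=-U_{12}.
```

The second equation is reduced in the same way, and the identifications are consistent with $\mu_j\sim -U_{jj}$ and $\beta\sim -a_{12}$; the smallness of $\varepsilon$ is then arranged by the largeness of the $U_{jj}$'s, $\tilde{\lambda}_j$'s and $\tilde{a}_{j,k}$'s after rescaling $x$.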
To study symbiotic bright solitons of double condensates, we consider two cases of the domain $\Omega$. One is to set $\Omega$ as the entire space ${\bf R}^N (N\leq 3)$. The other is to set $\Omega$ as a bounded smooth domain in ${\bf R}^N$. The constants $\mu_j\sim -U_{jj}\leq 0\,, j=1, 2\,,$ give repulsive self-interaction, and $\beta\sim -a_{12}>0$ means attractive interaction between the solutions $u$ and $v$. Moreover, $V_j>0\,, j=1,2$ are the associated trapping potentials. Another motivation for studying the problem (\ref{eq:1-1}) comes from the formation of bright solitons in a mixture of a degenerate Fermi gas with a Bose-Einstein condensate in the presence of a sufficiently attractive boson-fermion interaction. Recently, there have been successful observations and associated experimental and theoretical studies of mixtures of a degenerate Fermi gas and a Bose-Einstein condensate (cf.~\cite{DJ}, \cite{MRR} and \cite{M}). The corresponding model is given by \begin{equation}\label{bf1} \left\{\begin{array}{l} i \hbar\partial_t\varphi^B=-\frac{\hbar^2}{2m_B}\Delta\varphi^B+ V_B(x)\varphi^B +\,g_B N_B|\varphi^B|^2\varphi^B+ g_{BF}\,\displaystyle\sum_{j=1}^{N_F}\,|\varphi^F_j|^2\varphi^B\,,\\ i \hbar\partial_t\varphi^F_j=-\frac{\hbar^2}{2m_F}\Delta\varphi^F_j+ V_F(x)\varphi^F_j + g_{BF}\,N_B\,|\varphi^B|^2\varphi^F_j\,,\: x\in\Omega,\ t>0\,, j=1,\cdots, N_F\,, \end{array}\right. \end{equation} where $N_B$ and $N_F$ are the numbers of bosons and fermions, $m_B$ and $m_F$ are their masses, $V_B$ and $V_F$ are trap potentials, and $\varphi^B$ and the $\varphi^F_j$'s are wave functions of the Bose-Einstein condensate and individual fermions, respectively. When the constant $g_B$ is positive, i.e.~repulsive self-interaction, and the constant $g_{BF}$ is negative with sufficiently large magnitude, i.e.~strongly attractive interspecies interaction, bright solitons may appear in such a system. 
Based on the system~(\ref{bf1}), a novel scheme to realize bright solitons in one-dimensional atomic quantum gases (i.e.~the domain $\Omega$ is one dimensional) was proposed in~\cite{KB}. Here we want to study bright solitons in two and three-dimensional atomic quantum gases, i.e.~the domain $\Omega$ is two or three dimensional. As for the problem~(\ref{eq:1-1}), we may set $\varphi^B= u(x)\,e^{i\,\tilde{\lambda}_1\,t}/\sqrt{N_B}$, $\varphi^F_j= v_j(x)\,e^{i\,\tilde{\lambda}_2\,t}$ and choose suitable scales for $m_B, m_F, V_B, V_F, g_B, g_{BF}$ and the $\tilde{\lambda}_j$'s. Then the system~(\ref{bf1}) can be transformed into \begin{equation}{\label{bf2}} \left\{\begin{array}{l} \varepsilon^2 \Delta u- V_1 (x) u+\mu_1 u^3 + \beta u \displaystyle\sum_{j=1}^{N_F}\,v_j^2 =0\ \ \ \mbox{in}\ \ \Omega,\\ \varepsilon^2 \Delta v_j- V_2 (x) v_j + \beta u^2 v_j =0\ \ \ \mbox{in}\ \ \Omega,\quad j=1,\cdots,N_F\,,\\ u,v_j >0\ \ \mbox{in}\ \ \Omega,\\ u=v_j=0\ \ \mbox{on}\ \ \partial\Omega\,, \end{array}\right. \end{equation} which can be generalized as a singular perturbation problem given by \begin{equation}{\label{bf3}} \left\{\begin{array}{l} \varepsilon^2 \Delta u- V_1 (x) u+\mu_1 u^3 + \beta u \displaystyle\sum_{j=1}^{m}\,v_j^2 =0\ \ \ \mbox{in}\ \ \Omega,\\ \varepsilon^2 \Delta v_j- V_2 (x) v_j +\mu_2 v_j^3 + \beta u^2 v_j =0\ \ \ \mbox{in}\ \ \Omega,\quad j=1,\cdots,m\,,\\ u,v_j >0\ \ \mbox{in}\ \ \Omega,\\ u=v_j=0\ \ \mbox{on}\ \ \partial\Omega\,, \end{array}\right. \end{equation} where $\mu_j\leq 0, j=1, 2$ are constants and $m=N_F\in \mathbb{N}$. In particular, the problem~(\ref{bf3}) becomes the problem~(\ref{eq:1-1}) when $m=1$. In this paper, we study the asymptotic behavior of the so-called least-energy solutions of the problem~(\ref{eq:1-1}), which may give symbiotic bright solitons in two and three dimensional domains. 
By this, we mean \begin{enumerate} \item $(u_\varepsilon,v_\varepsilon)$ is a solution of (\ref{eq:1-1}), \item $E_{\varepsilon, \Omega, V_1, V_2} [u_\varepsilon,v_\varepsilon]\leq E_{\varepsilon, \Omega, V_1, V_2} [u,v]$ for any nontrivial solution $(u,v)$ of (\ref{eq:1-1}), \end{enumerate} where $E_{\varepsilon, \Omega, V_1, V_2} [u,v]$ is the energy functional defined as follows: \begin{eqnarray}{\label{eq:1-2}} E_{\varepsilon, \Omega, V_1, V_2} [u,v] & :=\ & \frac{\varepsilon^2}{2}\int_\Omega |\bigtriangledown u|^2 + \frac{V_1}{2} \int_\Omega u^2 -\frac{\mu_1}{4} \int_\Omega u^4\\ \nonumber & & +~~~\frac{\varepsilon^2}{2} \int_\Omega |\bigtriangledown v|^2 + \frac{V_2}{2} \int_\Omega v^2 - \frac{\mu_2}{4} \int_\Omega v^4 \\ \nonumber & & -~~~\frac{\beta}{2}\int_\Omega u^2 v^2\,, \end{eqnarray} for $u,v\in H_0^1(\Omega)$. Actually, it is easy to generalize our results to the problem~(\ref{bf3}) for $m\in\mathbb{N}$. In the case of $\Omega={\bf R}^N, N=2,3$, a least energy solution is also called a ground state. In our previous papers~\cite{lw1}, \cite{lw2} and \cite{lw3}, we studied the existence and asymptotics of least energy solutions when $ \mu_1$ and $\mu_2$ are positive constants. Hereafter, we study the case that both $\mu_1$ and $\mu_2$ are non-positive constants. When $\beta \leq \sqrt{\mu_1 \mu_2}$, multiplying the equations of (\ref{eq:1-1}) by $u$ and $v$ and integrating by parts gives \begin{equation} \label{ncon} \int_\Omega [ \varepsilon^2 |\nabla u|^2 + V_1 u^2 + \varepsilon^2 |\nabla v|^2 + V_2 v^2] = \int_{\Omega} [2 \beta u^2 v^2 +\mu_1 u^4 + \mu_2 v^4] \leq 0 \end{equation} for any $(u, v)$ satisfying the problem~(\ref{eq:1-1}), where the last inequality holds because $2\beta u^2 v^2+\mu_1 u^4+\mu_2 v^4 \leq 2\sqrt{\mu_1\mu_2}\, u^2 v^2+\mu_1 u^4+\mu_2 v^4=-\bigl(\sqrt{-\mu_1}\,u^2-\sqrt{-\mu_2}\,v^2\bigr)^2 \leq 0$ pointwise; hence $ u, v \equiv 0$. To get nontrivial solutions of the problem~(\ref{eq:1-1}), the assumption $ \beta> \sqrt{ \mu_1 \mu_2}$ is necessary. So throughout the paper, we assume that \begin{equation} \label{basic} \mu_1\leq 0,\quad \mu_2\leq 0,\quad \beta > \sqrt{\mu_1 \mu_2}\,. 
\end{equation} To study least energy solutions, we define a Nehari manifold \begin{equation} \label{Nehari} N(\varepsilon, \Omega, V_1, V_2)= \Biggl\{ (u,v)\in H_0^1 (\Omega) \times H_0^1(\Omega) \Biggl| \begin{array}{l} \int_\Omega [\varepsilon^2 |\bigtriangledown u|^2 +V_1 u^2 + \varepsilon^2 |\bigtriangledown v|^2 + V_2 v^2]\\ = \int_\Omega [ 2 \beta u^2 v^2 +\mu_1 u^4 +\mu_2 v^4] \end{array} \Biggl\}. \end{equation} Note that here, unlike \cite{lw1}-\cite{lw3}, the Nehari manifold $N(\varepsilon, \Omega, V_1, V_2)$ has only one constraint. On such a manifold, we consider the minimization problem given by \begin{equation}{\label{eq:2-2}} c_{\varepsilon, \Omega, V_1, V_2} :=\ \inf_{(u,v)\in N(\varepsilon,\Omega, V_1, V_2), \atop{u, v\geq 0, \atop{u, v\not\equiv 0}}} E_{\varepsilon, \Omega, V_1, V_2} [u,v]\,. \end{equation} When $\varepsilon=1$, $V_j\equiv \lambda_j>0, j=1,2$ i.e. constant trapping potentials and the domain $\Omega={\bf R}^N$, the Euler-Lagrange equations of the problem~(\ref{eq:2-2}) are \begin{equation}{\label{eq:1-4g}} \left\{ \begin{array}{l} \Delta u -\lambda_1 u +\mu_1 u^3 + \beta u v^2 =\ 0 \ \ \mbox{in}\ \ {\bf R}^N,\\ \Delta v -\lambda_2 v +\mu_2 v^3 + \beta u^2 v =\ 0 \ \ \mbox{in}\ \ {\bf R}^N,\\ u, v \to 0 \ \ \mbox{as}\ \ |y|\to +\infty. \end{array}\right. \end{equation} For such a problem, we have \begin{theorem} \label{t1.1} Assume that (\ref{basic}) holds. Then $ c_{1, {\bf R}^N, \lambda_1, \lambda_2}$ is attained and hence the problem~(\ref{eq:1-4g}) admits a ground state solution which is radially symmetric and strictly decreasing. \end{theorem} Now we consider the existence of ground state solutions for nonconstant trapping potentials. 
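The role of the threshold $\beta=\sqrt{\mu_1\mu_2}$ in (\ref{basic}), and of the sign of the quartic form appearing in (\ref{ncon}) and in the Nehari constraint, can be illustrated numerically at the pointwise level. The following is a minimal sketch, not part of any proof; the values of $\mu_1$, $\mu_2$ and $\beta$ are illustrative only:

```python
import numpy as np

# Pointwise check of the sign condition behind the nonexistence argument:
# with mu1, mu2 <= 0, the quartic form
#   f(u, v) = 2*beta*u^2*v^2 + mu1*u^4 + mu2*v^4
# is <= 0 everywhere when beta <= sqrt(mu1*mu2), by the AM-GM inequality
#   |mu1| u^4 + |mu2| v^4 >= 2 sqrt(mu1*mu2) u^2 v^2,
# while for beta > sqrt(mu1*mu2) it becomes positive along the AM-GM
# equality ray u^2 / v^2 = sqrt(mu2/mu1).

def quartic_form(u, v, beta, mu1, mu2):
    return 2 * beta * u**2 * v**2 + mu1 * u**4 + mu2 * v**4

mu1, mu2 = -1.0, -4.0          # repulsive self-interaction (illustrative values)
crit = np.sqrt(mu1 * mu2)      # threshold beta = sqrt(mu1*mu2) = 2.0

u = np.linspace(-3, 3, 201)
U, V = np.meshgrid(u, u)

# beta at the threshold: the form is nonpositive on the whole grid
assert quartic_form(U, V, crit, mu1, mu2).max() <= 1e-9

# beta above the threshold: the form is positive on the equality ray
beta = crit + 0.5
u0 = (mu2 / mu1) ** 0.25       # u0^2 / v0^2 = sqrt(mu2/mu1), with v0 = 1
assert quartic_form(u0, 1.0, beta, mu1, mu2) > 0
```

This is exactly why nontrivial pairs with $\int_\Omega[2\beta u^2v^2+\mu_1u^4+\mu_2v^4]>0$ exist only above the threshold.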
Namely, we consider the problem of coupled nonlinear Schr\"{o}dinger equations given by \begin{equation}{\label{eq:1-4s}} \left\{ \begin{array}{l} \varepsilon^2 \Delta u -V_1 (x) u +\mu_1 u^3 + \beta u v^2 =\ 0 \ \ \mbox{in}\ {\bf R}^N,\\ \varepsilon^2 \Delta v -V_2 (x) v +\mu_2 v^3 + \beta u^2 v =\ 0 \ \ \mbox{in}\ {\bf R}^N,\\ u, v \to 0 \ \ \mbox{as}\ \ |x|\to +\infty, \end{array}\right. \end{equation} where the $V_j$'s satisfy \begin{equation} 0<b_j^0= \inf_{x \in {\bf R}^N} V_j(x) \leq \lim_{|x| \to \infty} V_j(x)= b_j^\infty \leq +\infty, \quad j=1,2\,. \end{equation} Then we have the following theorem on the existence of ground state solutions of the problem~(\ref{eq:1-4s}). \begin{theorem} \label{t1.2} If either $b_1^\infty+ b_2^\infty = +\infty$ or \begin{equation} \label{c7} c_{\varepsilon, {\bf R}^N, V_1, V_2} < c_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty} \end{equation} holds, then $ c_{\varepsilon, {\bf R}^N, V_1, V_2}$ is attained and hence the problem~(\ref{eq:1-4s}) admits a ground state solution. \end{theorem} \noindent Our next theorem describes the asymptotic behavior of these ground state solutions: \begin{theorem}\label{t1.3} Assume (\ref{basic}) and \begin{equation}\label{rho1} \inf_{x\in\mathbb{R}^N} c_{1, {\bf R}^N, V_1(x),V_2(x) }< c_{1, {\bf R}^N, b_1^\infty, b_2^\infty} \end{equation} hold. Then \begin{itemize} \item[(i)]$ c_{\varepsilon, {\bf R}^N, V_1, V_2}$ is attained and the problem~(\ref{eq:1-4s}) admits a ground state solution $(u_\varepsilon, v_\varepsilon)$. \item[(ii)]Let $P^\varepsilon$ and $Q^\varepsilon$ be the unique local maximum points of $u_{\varepsilon}$ and $ v_\varepsilon$ respectively. Let $u_{\varepsilon}(P^\varepsilon+\varepsilon y):=U_{\varepsilon}(y), v_{\varepsilon} (Q^\varepsilon +\varepsilon y):=V_\varepsilon (y) $. Then as $\varepsilon\rightarrow0$, $(U_{\varepsilon},V_{\varepsilon})\rightarrow(U, V)$, where $(U, V)$ satisfies (\ref{eq:1-4g}). 
Furthermore, \begin{equation}\label{p2} \frac{|P^\varepsilon- Q^\varepsilon|}{\varepsilon}\rightarrow0\,,\quad c_{1, {\bf R}^N, V_1(P^\varepsilon), V_2 (Q^\varepsilon)} \to \inf_{x \in {\bf R}^N} c_{1, {\bf R}^N, V_1 (x), V_2 (x)}\,. \end{equation} \end{itemize} \end{theorem} \begin{remark}~~In general, the condition~(\ref{rho1}) is difficult to check. However, if $\inf\limits_{x\in{\bf R}^N}\,V_j (x)<\lim\limits_{|x|\rightarrow+\infty}\,V_j(x), j=1,2$, then (\ref{rho1}) is satisfied. \end{remark} Theorem~\ref{t1.3} can be extended to general bounded domains. Firstly, we set $\Omega$ as a bounded smooth domain and trapping potentials $V_j$'s as constants $\lambda_j$'s. Namely, we consider the following system \begin{equation}{\label{eq:1-1d}} \left\{\begin{array}{l} \varepsilon^2 \Delta u- \lambda_1 u+\mu_1 u^3 + \beta u v^2 =0\ \ \ \mbox{in}\ \ \Omega,\\ \varepsilon^2 \Delta v- \lambda_2 v+\mu_2 v^3 + \beta u^2 v =0\ \ \ \mbox{in}\ \ \Omega,\\ u,v >0\ \ \mbox{in}\ \ \Omega,\\ u=v=0\ \ \mbox{on}\ \ \partial\Omega\,. \end{array}\right. \end{equation} The asymptotic behavior of corresponding least energy solutions can be characterized by \begin{theorem}\label{t1.4}~~For any $\beta >\sqrt{\mu_1 \mu_2}$ and $\varepsilon$ sufficiently small, the problem~(\ref{eq:1-1d}) has a least energy solution $(u_\varepsilon,v_\varepsilon)$. Let $P_\varepsilon$ and $Q_\varepsilon$ be the local maximum points of $u_\varepsilon$ and $v_\varepsilon$, respectively. Then $|P_\varepsilon-Q_\varepsilon|/\varepsilon \to 0$, \begin{eqnarray}{\label{eq:1-3}} d(P_\varepsilon,\partial\Omega)\to \max_{P\in \Omega} d(P,\partial\Omega),\ d(Q_\varepsilon,\partial\Omega) \to \max_{P\in \Omega} d(P,\partial\Omega)\,, \end{eqnarray} and $u_\varepsilon(x), v_\varepsilon(x)\to 0$ in $C_{loc}^1 (\bar\Omega\backslash \{P_\varepsilon, Q_\varepsilon\})$. 
Furthermore, as $\varepsilon \to 0$, $(U_\varepsilon,V_\varepsilon)\to (U_0,V_0)$ which is a least-energy solution of~(\ref{eq:1-4g}), where $$U_\varepsilon(y):=\ u_\varepsilon(P_\varepsilon+\varepsilon y),\quad V_\varepsilon(y):=\ v_\varepsilon(P_\varepsilon+\varepsilon y)\,.$$ \end{theorem} \noindent By Theorem~\ref{t1.4}, we may generalize Theorem~\ref{t1.3} to bounded smooth domains. The main idea may follow the proof of Corollary~2.7 in~\cite{lw3}. Moreover, by the same arguments as in Theorems~\ref{t1.1}-\ref{t1.4}, one may obtain similar results for the problem~(\ref{bf3}). When $\mu_1, \mu_2>0$, the assumption $\beta<\beta_0$ is essential in our previous works (cf.~\cite{lw1}-\cite{lw3}) for the existence and the asymptotic behaviors of ground state (least energy) solutions, where $0<\beta_0<\sqrt{\mu_1\,\mu_2}$ is a small constant. For larger $\beta$'s, results on ground and bound state solutions can be found in \cite{ac1}, \cite{BWW}, \cite{P1} and \cite{S1}. On the other hand, when the $\mu_j$'s become non-positive, i.e.~$\mu_1, \mu_2\leq 0$, the assumption on $\beta$ can be changed to $\beta>\sqrt{\mu_1\,\mu_2}$, which is sufficient for proving the existence and the asymptotic behaviors of ground state solutions (see Theorems~\ref{t1.1}-\ref{t1.4}). These are new results on two and three dimensional bright solitary wave solutions for non-positive $\mu_j$'s. There is a vast literature on the study of concentration phenomena for single singularly perturbed nonlinear Schr\"{o}dinger equations with attractive self-interaction. See \cite{amn}, \cite{BDS}, \cite{bw1}, \cite{cl}, \cite{df}, \cite{df1}, \cite{df2}, \cite{DY}, \cite{GW2}, \cite{jt}, \cite{kw}, \cite{Li}, \cite{mm}, \cite{W1}, \cite{WW1}, \cite{zwang} and the references therein. In particular, good surveys can be found in \cite{n1} and \cite{n2}. 
However, until now, there are only a few papers on systems of coupled nonlinear Schr\"{o}dinger equations, especially for two and three dimensional Bose-Einstein condensates. This paper seems to be the first to show rigorously that strong interspecies attraction may produce symbiotic bright solitons in two and three dimensional Bose-Einstein condensates even though the self-interactions are repulsive. The organization of this paper is as follows: \noindent In Section~2, we extend the classical Nehari manifold approach to a system of semilinear elliptic equations in order to find a least energy solution to the problem~(\ref{eq:1-1}). Hereafter, we need the condition $\beta>\sqrt{\mu_1\,\mu_2}$ for strong interspecies attraction. Using an approximation argument and an energy upper bound, we prove Theorems~\ref{t1.1} and \ref{t1.2} in Section~3 and Theorem~\ref{t1.3} in Section~4. In Section~5, we follow the same ideas as in~\cite{lw1} to complete the proof of Theorem~\ref{t1.4}. Throughout this paper, unless otherwise stated, the letter $C$ will always denote various generic constants which are independent of $\varepsilon$, for $\varepsilon$ sufficiently small. The constant $\sigma \in (0, \frac{1}{100})$ is a fixed small constant. \vskip 0.5cm \noindent {\bf Acknowledgments:} The research of the first author is partially supported by a research grant from the NSC of Taiwan. The research of the second author is partially supported by an Earmarked Grant from the RGC of Hong Kong. The authors also thank the referee for helpful suggestions. \section{Nehari's Manifold Approach: Existence of a Least-Energy Solution to (\ref{eq:1-1})} \setcounter{equation}{0} In this section, we use the Nehari manifold approach to obtain a least energy solution to (\ref{eq:1-1}). The Nehari manifold approach has been used successfully in the study of single equations. 
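The scaling mechanism at the heart of the approach can be seen already in the scalar function $t\mapsto E[\sqrt{t}u,\sqrt{t}v]$, which is quadratic in $t$. A minimal numeric sketch, with $A$ and $B$ standing in for the quadratic and quartic integrals (their values are purely illustrative):

```python
# Scalar model of the Nehari scaling: writing
#   A for \int [eps^2|grad u|^2 + V_1 u^2 + eps^2|grad v|^2 + V_2 v^2],
#   B for \int [2*beta*u^2*v^2 + mu1*u^4 + mu2*v^4]  (both assumed > 0),
# the map g(t) = E[sqrt(t)u, sqrt(t)v] = t*A/2 - t^2*B/4 has its unique
# maximum at t0 = A/B, the scaling that places the pair on the manifold,
# with maximum value A^2 / (4*B).

A, B = 3.0, 2.0                            # illustrative positive values

g = lambda t: t * A / 2 - t ** 2 * B / 4
t0 = A / B

ts = [i * 1e-4 for i in range(1, 60001)]   # crude grid search over t in (0, 6]
t_best = max(ts, key=g)

assert abs(t_best - t0) < 1e-3             # grid argmax agrees with t0 = A/B
assert abs(g(t0) - A ** 2 / (4 * B)) < 1e-12
```

The single constraint of the manifold is precisely the statement $t_0=1$ for the pair itself.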
Conti et al~\cite{CTV} have used Nehari's manifold to study solutions of competing species systems which are related to an optimal partition problem in $N$-dimensional domains. In our previous paper \cite{lw1}, we also used Nehari's manifold approach to find least energy solutions and symbiotic bright solitons. We consider the following minimization problem \begin{equation}\label{eq:2-2n} c_{\varepsilon, \Omega, V_1, V_2} :=\ \inf_{(u,v)\in N (\varepsilon,\Omega, V_1, V_2), \atop{u, v\geq 0, \atop{u, v\not\equiv 0}}} E_{\varepsilon, \Omega, V_1, V_2} [u,v] \end{equation} where $N (\varepsilon, \Omega, V_1, V_2)$ and $E_{\varepsilon, \Omega, V_1, V_2}$ are defined in Section~1. Note that, for $N\leq 3$, by the compactness of Sobolev embedding $H_0^1(\Omega) \hookrightarrow L^4 (\Omega), N(\varepsilon,\Omega, V_1, V_2)$ and $c_{\varepsilon, \Omega, V_1, V_2}$ are well-defined. Now we want to show that \begin{theorem} \label{t2.1} Let $\Omega$ be a smooth and bounded domain in ${\bf R}^N, N\leq 3$. Suppose that $\beta >\sqrt{\mu_1\mu_2}.$ Then for $\varepsilon$ sufficiently small, $c_{\varepsilon, \Omega, V_1, V_2}$ can be attained by some $(u_\varepsilon,v_\varepsilon)\in N(\varepsilon,\Omega, V_1, V_2)$ satisfying \begin{equation} \label{uepvep} C_1 \varepsilon^N \leq \int_\Omega u_\varepsilon^4 \leq C_2 \varepsilon^N, \ C_1 \varepsilon^N \leq \int_\Omega v_\varepsilon^4 \leq C_2 \varepsilon^N, \end{equation} where $C_1, C_2$ are two positive constants independent of $\varepsilon $ and $\Omega$. \end{theorem} We first note that if $(u,v)\in N(\varepsilon,\Omega, V_1, V_2)$, then \begin{eqnarray}{\label{eq:2-3}} E_{\varepsilon, \Omega, V_1, V_2} [u,v] & =\ & \frac 1 4 \Biggl( \varepsilon^2 \int_\Omega |\bigtriangledown u|^2 + \int_\Omega V_1 u^2 + \varepsilon^2 \int_\Omega |\bigtriangledown v|^2 + \int_\Omega V_2 v^2\Biggl) \\ \nonumber & =\ & \frac 1 4 \Biggl[\mu_1 \int_\Omega u^4 + 2\beta \int_\Omega u^2 v^2 + \mu_2 \int_\Omega v^4 \Biggl]. 
\end{eqnarray} \noindent Let $(u_n,v_n)$ be a minimizing sequence. Then by Sobolev embedding $H_0^1 (\Omega)\hookrightarrow L^q (\Omega)$ for $1<q<\frac{2N}{N-2}$, we see that $u_n\to u_\varepsilon$, $v_n\to v_\varepsilon$(up to a subsequence) for some functions $u_\varepsilon\geq 0$, $v_\varepsilon\geq 0$ in $L^4(\Omega)$ and hence \begin{equation}{\label{eq:2-4}} E_{\varepsilon, \Omega, V_1, V_2} [u_n,v_n]\to \frac 1 4 \Biggl[ \mu_1\int_\Omega u_\varepsilon^4 + 2\beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2 +\mu_2 \int_\Omega v_\varepsilon^4\Biggl]=\ c_{\varepsilon, \Omega, V_1, V_2}. \end{equation} By (\ref{eq:2-4}) and the weak lower semicontinuity of the $H^1$ norm, we have \begin{equation}{\label{eq:2-5}} c_{\varepsilon, \Omega, V_1, V_2} \geq \frac 1 4 \Biggl( \varepsilon^2\int_\Omega |\bigtriangledown u_\varepsilon|^2 + \int_\Omega V_1 u_\varepsilon^2 +\varepsilon^2 \int_\Omega |\bigtriangledown v_\varepsilon|^2 + \int_\Omega V_2 v_\varepsilon^2\Biggl), \end{equation} and \begin{equation}{\label{eq:2-6}} \varepsilon^2 \int_\Omega |\bigtriangledown u_\varepsilon|^2 + \int_\Omega V_1 u_\varepsilon^2 +\varepsilon^2 \int_\Omega |\bigtriangledown v_\varepsilon|^2 + \int_\Omega V_2 v_\varepsilon^2 \leq \mu_1 \int_\Omega u_\varepsilon^4 +2 \beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2+ \mu_2 \int_\Omega v_\varepsilon^4\,. \end{equation} Next we consider for $t >0$, \begin{equation}{\label{eq:2-8}} \beta_{(u, v)} (t) =\ E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t} u, \sqrt{t} v]\,. \end{equation} Our first claim is \begin{claim} If $ 2 \beta \int_\Omega u^2 v^2 +\mu_1 \int_\Omega u^4 +\mu_2 \int_\Omega v^4 >0$, then $\beta_{(u, v)}(t)$ attains a unique maximum point $t_0$, where \begin{equation} \label{eq:2-25} t_0=\frac{ \int_\Omega [ \varepsilon^2 |\bigtriangledown u|^2 +V_1 u^2 +\varepsilon^2 |\bigtriangledown v|^2 + V_2 v^2]}{\int_\Omega [ 2\beta u^2 v^2 + \mu_1 u^4 +\mu_2 v^4] }\,. 
\end{equation} Furthermore, $ (\sqrt{t_0}u, \sqrt{t_0} v) \in N (\varepsilon, \Omega, V_1, V_2)$. \end{claim} \begin{proof}~~Since \begin{eqnarray*} \beta_{(u, v)} (t) & =\ & t \Biggl[\frac{\varepsilon^2}{2} \int_\Omega |\bigtriangledown u|^2 + \frac{1}{2} \int_\Omega V_1 u^2 + \frac{\varepsilon^2}{2} \int_\Omega |\bigtriangledown v|^2 + \frac{1}{2} \int_\Omega V_2 v^2\Biggl] \\ & & - t^2 \Biggl[ \frac{\mu_1}{4} \int_\Omega u^4 + \frac{\mu_2}{4} \int_\Omega v^4 +\frac 1 2 \beta \int_\Omega u^2 v^2 \Biggl]\,, \end{eqnarray*} the claim follows by simple calculations. We omit the details here. \end{proof} \noindent By Claim~1 and a proper choice of $(u,v)$, it is easy to check that the Nehari manifold $N (\varepsilon, \Omega, V_1, V_2)$ is nonempty. Our second claim is \begin{claim} The inequalities of (\ref{uepvep}) hold if $\beta >\sqrt{\mu_1 \mu_2}$. \end{claim} \vskip 0.5cm \begin{proof}~~ We first prove the upper bound of $ c_{\varepsilon, \Omega, V_1, V_2}$. Since $ \beta >\sqrt{\mu_1 \mu_2}$, there exists $ \alpha\neq 0$ such that $ 2 \beta \alpha^2 + \mu_1 \alpha^4 +\mu_2 >0$. In fact, we may set $\alpha=\bigl(\frac{\mu_2}{\mu_1}\bigr)^{1/4}$ if $\mu_j<0, j=1,2$, since then $2 \beta \alpha^2 + \mu_1 \alpha^4 +\mu_2 = 2\sqrt{\mu_2/\mu_1}\,\bigl(\beta-\sqrt{\mu_1\mu_2}\bigr)>0$. For $\varepsilon$ sufficiently small, we choose a test function $w$ such that $ \mbox{support} ( w) \subset B_{\varepsilon } (P)$ where $ P \in \Omega$. Let $ (u, v)= (\alpha w, w)$. Then $ \int_\Omega [ 2 \beta u^2 v^2 +\mu_1 u^4+\mu_2 v^4]>0$. By Claim~1, there exists $ t_0>0$ independent of $\varepsilon$ such that $ (\sqrt{t_0} u, \sqrt{t_0} v) \in N(\varepsilon, \Omega, V_1, V_2)$. Hence we obtain \begin{equation} \label{cep1} c_{\varepsilon, \Omega, V_1, V_2} \leq C \varepsilon^N\,, \end{equation} where $C$ is a positive constant independent of $\varepsilon$ and $\Omega$. 
Combining (\ref{cep1}) with (\ref{eq:2-3}), we obtain that \begin{equation}\label{2.9-1} \int_\Omega [ \varepsilon^2 |\nabla u_\varepsilon|^2 +V_1 u_\varepsilon^2 + \varepsilon^2 |\nabla v_\varepsilon|^2 + V_2 v_\varepsilon^2] \leq C_2 \varepsilon^N\,.\end{equation} For (\ref{2.9-1}), we may rescale spatial variables by $\varepsilon$ and apply the standard Gagliardo-Nirenberg-Sobolev inequality in ${\bf R}^N$ (cf.~\cite{E1}). Consequently, \begin{equation}\label{ub1}\int_\Omega u_\varepsilon^4 \leq C_2 \varepsilon^N,\quad \int_\Omega v_\varepsilon^4 \leq C_2 \varepsilon^N\,,\end{equation} where $C_2$ is a positive constant independent of $\varepsilon$ and $\Omega$. For lower bound estimates, the definition of the manifold $N(\varepsilon, \Omega, V_1, V_2)$ may give \[ \int_\Omega [ \varepsilon^2 |\nabla u|^2 +V_1 u^2 + \varepsilon^2 |\nabla v|^2 + V_2 v^2] \leq 2 \beta \int_\Omega u^2 v^2\,,\] for any $(u, v) \in N (\varepsilon, \Omega, V_1, V_2)$. On the other hand, as for (\ref{ub1}), we may rescale spatial variables by $\varepsilon$ and apply the standard Gagliardo-Nirenberg-Sobolev inequality in ${\bf R}^N$ (cf.~\cite{E1}) to derive \[ \int_\Omega [ \varepsilon^2 |\nabla u|^2 +V_1 u^2 + \varepsilon^2 |\nabla v|^2 + V_2 v^2] \geq C \varepsilon^{N/2} \left[(\int_\Omega u^4)^{1/2} +(\int_\Omega v^4)^{1/2}\right] \geq C \varepsilon^{N/2} (\int_\Omega u^2 v^2)^{1/2} \] for any $(u, v) \in N (\varepsilon,\Omega, V_1, V_2)$, and hence we obtain that for any $ (u, v) \in N(\varepsilon, \Omega, V_1, V_2)$, $(u, v) \not \equiv (0, 0)$, \begin{equation} \label{uv} \int_\Omega u^2 v^2 \geq C \varepsilon^N\,, \end{equation} where $C$ is a positive constant independent of $\varepsilon$ and $\Omega$. 
Due to $\int_\Omega u^2 v^2\leq \left(\int_\Omega u^4\right)^{1/2}\,\left(\int_\Omega v^4\right)^{1/2}$, (\ref{ub1}) and (\ref{uv}) may yield lower bound estimates $ \int_\Omega u_\varepsilon^4 \geq C_1 \varepsilon^N$ and $\int_\Omega v_\varepsilon^4 \geq C_1 \varepsilon^N$, where $C_1$ is a positive constant independent of $\varepsilon$ and $\Omega$. \end{proof} Finally we claim that \begin{lemma} $(u_\varepsilon, v_\varepsilon)$ is a least-energy solution of (\ref{eq:1-1}). \end{lemma} \begin{proof} By Claim~2 and (\ref{eq:2-6}), we have $ 2 \beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2 +\mu_1 \int_\Omega u_\varepsilon^4 +\mu_2 \int_\Omega v_\varepsilon^4 >0$. Moreover, by Claim~1, there exists $t_0>0$ such that $(\sqrt{t_0} u_\varepsilon, \sqrt{t_0} v_\varepsilon) \in N (\varepsilon,\Omega, V_1, V_2)$ i.e. \begin{equation}{\label{eq:2-12}} \varepsilon^2 \int_\Omega |\bigtriangledown u_\varepsilon|^2 + \int_\Omega V_1 u_\varepsilon^2 +\varepsilon^2 \int_\Omega |\bigtriangledown v_\varepsilon|^2 + \int_\Omega V_2 v_\varepsilon^2 = t_0 \left[ \mu_1 \int_\Omega u_\varepsilon^4 +2 \beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2 + \mu_2 \int_\Omega v_\varepsilon^4 \right]\,. \end{equation} Consequently, (\ref{eq:2-6}) and (\ref{eq:2-12}) may give \begin{equation} \label{t01} t_0 \leq 1\,. \end{equation} On the other hand, \begin{equation}{\label{eq:2-16}} E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t_0}u_\varepsilon, \sqrt{t_0}v_\varepsilon] \geq c_{\varepsilon, \Omega, V_1, V_2} =\ \frac 1 4 \Biggl[ \mu_1 \int_\Omega u_\varepsilon^4 + 2\beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2 + \mu_2 \int_\Omega v_\varepsilon^4 \Biggl], \end{equation} \begin{equation}{\label{eq:2-17}} E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t_0}u_\varepsilon, \sqrt{t_0}v_\varepsilon] = t_0^2 \frac 1 4 \Biggl[ \mu_1 \int_\Omega u_\varepsilon^4 + 2\beta \int_\Omega u_\varepsilon^2 v_\varepsilon^2 + \mu_2 \int_\Omega v_\varepsilon^4 \Biggl]. 
\end{equation} Since $t_0>0$, (\ref{eq:2-16}) and (\ref{eq:2-17}) imply that $ t_0 \geq 1$. Thus by (\ref{t01}), we obtain $ t_0=1$ and $(u_\varepsilon,v_\varepsilon)\in N(\varepsilon,\Omega, V_1, V_2)$. Therefore, $(u_\varepsilon,v_\varepsilon)$ attains the minimum $c_{\varepsilon, \Omega, V_1, V_2}$. Now we want to claim that $(u_\varepsilon, v_\varepsilon)$ is a nontrivial solution of (\ref{eq:1-1}). Since $(u_\varepsilon, v_\varepsilon)$ is an energy minimizer on the Nehari manifold $N(\varepsilon,\Omega, V_1, V_2)$, there exists a Lagrange multiplier $\alpha$ such that \begin{equation}{\label{eq:2-20}} \bigtriangledown E_{\varepsilon, \Omega, V_1, V_2} [u_\varepsilon,v_\varepsilon] + \alpha \bigtriangledown G[u_\varepsilon,v_\varepsilon] =\ 0\,, \end{equation} where \begin{equation}{\label{eq:2-21}} G[u,v] =\ \int_\Omega [ \varepsilon^2 |\bigtriangledown u|^2 +V_1 u^2 +\varepsilon^2 |\bigtriangledown v|^2 +V_2 v^2 ] - \int_\Omega[ \mu_1 u^4 +2\beta u^2 v^2+\mu_2 v^4]\,. \end{equation} Pairing (\ref{eq:2-20}) with $(u_\varepsilon, v_\varepsilon)$, and making use of the fact that $ (u_\varepsilon, v_\varepsilon) \in N(\varepsilon, \Omega, V_1, V_2)$, we see that \[ \alpha \int_\Omega 2 [ \varepsilon^2 |\bigtriangledown u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\bigtriangledown v_\varepsilon|^2 +V_2 v_\varepsilon^2 ] - 4 \alpha\int_\Omega[ \mu_1 u_\varepsilon^4 +2\beta u_\varepsilon^2 v_\varepsilon^2+\mu_2 v_\varepsilon^4]=0\,, \] and hence \[ \alpha \int_\Omega[ \mu_1 u_\varepsilon^4 +2\beta u_\varepsilon^2 v_\varepsilon^2+\mu_2 v_\varepsilon^4]=0\,.\] Since $ (u_\varepsilon, v_\varepsilon) \not \equiv (0, 0)$ and \[ \int_\Omega[ \mu_1 u_\varepsilon^4 +2\beta u_\varepsilon^2 v_\varepsilon^2+\mu_2 v_\varepsilon^4] = \int_\Omega [ \varepsilon^2 |\bigtriangledown u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\bigtriangledown v_\varepsilon|^2 +V_2 v_\varepsilon^2] >0\,,\] we conclude that $ \alpha=0$. 
This proves that $$\bigtriangledown E_{\varepsilon, \Omega, V_1, V_2} [u_\varepsilon,v_\varepsilon]=\ 0$$ and hence $(u_\varepsilon,v_\varepsilon)$ is a critical point of $E_{\varepsilon, \Omega, V_1, V_2} [u,v]$ and satisfies (\ref{eq:1-1}). By the Hopf boundary lemma, it is easy to show that $u_\varepsilon>0$ and $v_\varepsilon>0$. This completes the proof of this lemma and of Theorem~\ref{t2.1}. \end{proof} Another useful characterization of $c_{\varepsilon, \Omega, V_1, V_2}$ is given as follows: \begin{lemma} \label{cnew} If $\beta > \sqrt{\mu_1 \mu_2}$, then we have \begin{eqnarray}\label{eq:2-25s} c_{\varepsilon, \Omega, V_1, V_2} &= &\ \inf_{u,v\in H_0^1(\Omega), \ u\not\equiv 0, v\not\equiv 0, \atop{\int_\Omega [ 2\beta u^2 v^2 +\mu_1 u^4 + \mu_2 v^4]>0} } \sup_{t>0} E_{\varepsilon, \Omega, V_1, V_2}[\sqrt{t}u, \sqrt{t}v] \\ &= & \inf_{u,v\in H_0^1(\Omega), \ u\not\equiv 0, v\not\equiv 0, \atop{\int_\Omega [ 2\beta u^2 v^2 +\mu_1 u^4 + \mu_2 v^4]>0} } \frac{ \left(\int_\Omega [ \varepsilon^2 |\nabla u|^2 + V_1 u^2 + \varepsilon^2 |\nabla v|^2 +V_2 v^2]\right)^2}{ 4\int_\Omega [ 2\beta u^2 v^2 + \mu_1 u^4 +\mu_2 v^4]}. \notag \end{eqnarray} \end{lemma} \begin{proof} The last identity in (\ref{eq:2-25s}) follows from a direct computation: for fixed $(u,v)$, the function $t\mapsto E_{\varepsilon, \Omega, V_1, V_2}[\sqrt{t}u, \sqrt{t}v]$ is a concave quadratic in $t$, and its maximum over $t>0$ equals the quotient in the second line. To prove (\ref{eq:2-25s}), we denote the right hand side of (\ref{eq:2-25s}) by $m_\varepsilon$. From Theorem~\ref{t2.1}, $c_{\varepsilon, \Omega, V_1, V_2}$ is attained at $(u_\varepsilon,v_\varepsilon)\in N(\varepsilon,\Omega, V_1, V_2)$. Moreover, by Claim~1 in Theorem~\ref{t2.1}, $E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t} u_\varepsilon, \sqrt{t}v_\varepsilon]$ attains its maximum at $t=1$. Hence \begin{equation}{\label{eq:2-26}} m_\varepsilon \leq c_{\varepsilon, \Omega, V_1, V_2} =\ E_{\varepsilon, \Omega, V_1, V_2} [u_\varepsilon, v_\varepsilon] =\ \sup_{t>0} E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t} u_\varepsilon, \sqrt{t} v_\varepsilon].
\end{equation} On the other hand, fix $u,v\in H_0^1(\Omega)$ such that $u, v\geq 0$ and $\int_\Omega [ 2 \beta u^2 v^2 + \mu_1 u^4 + \mu_2 v^4] >0$. Let $t_0$ be a critical point of $\beta_{(u, v)} (t)$. Then $(\sqrt{t_0}u, \sqrt{t_0}v)\in N(\varepsilon,\Omega, V_1, V_2)$, $$c_{\varepsilon, \Omega, V_1, V_2}\leq E_{\varepsilon, \Omega, V_1, V_2} [\sqrt{t_0}u, \sqrt{t_0}v] \leq \sup_{t>0} E_{\varepsilon,\Omega, V_1, V_2} [\sqrt{t}u, \sqrt{t}v]$$ and hence $c_{\varepsilon, \Omega, V_1, V_2}\leq m_\varepsilon$. This completes the proof of this lemma. \end{proof} \section{Proofs of Theorem \ref{t1.1} and Theorem \ref{t1.2}} \setcounter{equation}{0} In this section, we prove Theorem \ref{t1.1} and Theorem \ref{t1.2} by an approximation argument. Consider balls $\Omega=B_k$, where $k$ is a large parameter which will tend to infinity. By Theorem~\ref{t2.1}, each $c_{\varepsilon, B_k, V_1, V_2}$ is attained by a least-energy solution $(u_k, v_k)$ of the following problem: \begin{equation}\label{eq2.1} \begin{cases} \varepsilon^2\triangle u(x)-V_1(x)u(x)+\mu_1 u^3 + \beta u v^2=0\ \mathrm{in} \ B_k,\\ \varepsilon^2\triangle v(x)-V_2(x)v(x)+ \mu_2 v^3 + \beta u^2 v=0\ \mathrm{in} \ B_k,\\ u, v>0 \ \ \mathrm{in} \ B_k, \ u=v=0\ \mathrm{on}\ \partial B_k. \end{cases} \end{equation} By examining the argument in the proof of Theorem~\ref{t2.1}, we obtain the following estimates: \begin{equation}\label{c8} C_1\varepsilon^N\leq\int_{B_k}u_{k}^4\leq C_2\varepsilon^N,\quad C_1\varepsilon^N\leq\int_{B_k}v_{k}^4\leq C_2\varepsilon^N\,, \end{equation} where $C_1$ and $C_2$ are positive constants independent of $0<\varepsilon\leq1$ and $k\geq1$. By the system~(\ref{eq2.1}) and (\ref{c8}), we derive that \begin{equation}\label{2.1-1} \int_{B_k} [ \varepsilon^2 |\nabla u_{k}|^2+V_1 u_k^2 + \varepsilon^2 |\nabla v_k|^2 + V_2 v_k^2]\leq C_3\varepsilon^N\,, \end{equation} where $C_3$ is a positive constant independent of $0<\varepsilon\leq1$ and $k\geq1$.
We extend $u_{k}$ and $v_k$ by zero outside $B_k$. Then (\ref{2.1-1}) gives \begin{equation}\label{he1}||u_{k}||_{H^1({\bf R}^N)} +||v_{k}||_{H^1({\bf R}^N)} \leq C_4\varepsilon^{N/2}\,,\end{equation} where $C_4$ is a positive constant independent of $0<\varepsilon\leq1$ and $k\geq1$. Now we study the asymptotic behavior of $u_{k}, v_k$ as $k\rightarrow\infty$. By (\ref{he1}), passing to a subsequence, we obtain that as $k\rightarrow\infty$, $u_{k}\rightharpoonup \bar{u}$ and $v_{k}\rightharpoonup \bar{v}$ weakly in $H^1({\bf R}^N)$, where $\bar{u}, \bar{v}\geq0$. Moreover, standard elliptic regularity theory shows that $(\bar{u}, \bar{v})$ is a solution of the system \begin{equation}\label{system} \left\{\begin{array}{llll} &\varepsilon^2 \Delta \bar{u}-V_1 \bar{u} + \mu_1 \bar{u}^3 + \beta\bar{v}^2 \bar{u} &=0 &\mbox{in} \ {\bf R}^N\,, \\ &\varepsilon^2 \Delta\bar{v}-V_2 \bar{v} + \mu_2 \bar{v}^3 + \beta\bar{u}^2 \bar{v}&=0 \ &\mbox{in} \ {\bf R}^N\,.\end{array}\right. \end{equation} Then we have the following lemma, whose proof is exactly the same as that of Theorem~3.3 in~\cite{lw3}. \begin{lemma}\label{l2.3} \hspace*{0.1cm} \begin{itemize} \item[(a)]As $k\rightarrow\infty$, $c_{\varepsilon,B_k, V_1, V_2}\rightarrow c_{\varepsilon, {\bf R}^N, V_1, V_2}\,,$ \item[(b)]If $\bar{u} \not \equiv 0, \bar{v} \not \equiv 0$, then $(\bar{u},\bar{v})$ is a solution of (\ref{eq:1-4s}) and attains $c_{\varepsilon, {\bf R}^N, V_1, V_2}$, i.e. $(\bar{u},\bar{v})$ is a ground state solution of (\ref{eq:1-4s}). \end{itemize} \end{lemma} It remains to show that $ \bar{u} \not \equiv 0, \bar{v} \not \equiv 0$. Note that if $\bar{u} \equiv 0$, then $\bar{v}$ satisfies \begin{equation} \varepsilon^2 \Delta \bar{v}- V_2 \bar{v} + \mu_2 \bar{v}^3=0\,. \end{equation} Since $\mu_2 \leq 0$, it follows that $\bar{v} \equiv 0$. Therefore, we only need to exclude the case that $\bar{u} \equiv\bar{v}\equiv 0$.
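For completeness, we record why $\mu_2\leq 0$ forces $\bar{v} \equiv 0$ here: multiplying the equation for $\bar{v}$ by $\bar{v}$ and integrating by parts over ${\bf R}^N$ yields \[ \int_{{\bf R}^N} [ \varepsilon^2 |\nabla \bar{v}|^2 + V_2 \bar{v}^2 ] = \mu_2 \int_{{\bf R}^N} \bar{v}^4 \leq 0\,, \] and since the potential $V_2$ is positive, the left-hand side vanishes only if $\bar{v} \equiv 0$.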
Suppose $ V_1(x) \equiv \lambda_1$ and $V_2 (x) \equiv \lambda_2$. Then by the Maximum Principle and Moving Plane Method, both $u_k$ and $v_k$ are radially symmetric, strictly decreasing and satisfy \begin{equation}{\label{bf2k}} \left\{\begin{array}{l} \varepsilon^2 \Delta u_k- \lambda_1 u_k+\mu_1 u_k^3 + \beta u_k v_k^2 =0\ \ \ \mbox{in}\ \ B_k,\\ \varepsilon^2 \Delta v_k- \lambda_2 v_k + \mu_2 v_k^3+ \beta u_k^2 v_k =0\ \ \ \mbox{in}\ \ B_k \,,\\ u_k=u_k(r),v_k=v_k(r) >0\ \ \mbox{in}\ \ B_k,\\ u_k=v_k=0\ \ \mbox{on}\ \ \partial B_k. \end{array}\right. \end{equation} Here we have used the fact that $\lambda_j>0$, $\mu_j\leq 0\,, j=1, 2$ and $\beta>0$. Moreover, since the origin $0$ is the maximum point of $u_k$ and $v_k$, we have $\Delta u_k(0)\,, \Delta v_k(0)\leq 0$ and $u_k(0)\,, v_k(0)>0$. Hence by (\ref{bf2k}), we have $$ \beta (v_k (0))^2 \geq -\mu_1 (u_k(0))^2 + \lambda_1, \ \ \beta (u_k (0))^2 \geq -\mu_2 (v_k(0))^2 + \lambda_2\,. $$ Consequently, as $k \to +\infty$, \begin{eqnarray}\label{m3.2} \beta (v_0 (0))^2 &\geq& -\mu_1 (u_0(0))^2 + \lambda_1 \geq \lambda_1\,, \\ \nonumber \beta (u_0 (0))^2 &\geq& -\mu_2 (v_0(0))^2 + \lambda_2\geq \lambda_2\,.\end{eqnarray} Here we have used the fact that $\mu_j\leq 0$ and $(u_k, v_k) \to (u_0, v_0)$ in $C_{loc}^2 ({\bf R}^N)$. Therefore, (\ref{m3.2}) implies that $ u_0 \not \equiv 0, v_0 \not \equiv 0$ and $(u_0, v_0) \in N(1, {\bf R}^N, \lambda_1, \lambda_2)$ is a minimizer of $ c_{1, {\bf R}^N, \lambda_1, \lambda_2}$. On the other hand, any minimizer of $c_{1, {\bf R}^N, \lambda_1, \lambda_2}$, say $(U_0,V_0)$, must satisfy \begin{equation}{\label{eq:3-20}} \left\{ \begin{array}{l} \Delta U_0 - \lambda_1 U_0 + \mu_1 U_0^3 + \beta U_0 V_0^2=\ 0 \ \ \mbox{in} \ \ {\bf R}^N,\\ \Delta V_0 - \lambda_2 V_0 + \mu_2 V_0^3 + \beta U_0^2 V_0 =\ 0 \ \ \mbox{in}\ \ {\bf R}^N,\\ U_0, V_0 > 0, U_0, V_0 \in H^1 ({\bf R}^N). \end{array}\right. \end{equation} Since $\beta>0$, the problem~(\ref{eq:3-20}) is a cooperative system.
By the moving plane method (cf.~\cite{T}), $(U_0, V_0)$ must be radially symmetric and strictly decreasing. This completes the proof of Theorem~\ref{t1.1}. To finish the proof of Theorem \ref{t1.2}, we divide the argument into two cases: \noindent {\bf Case 1:} either $ b_1^\infty = \infty$ or $ b_2^\infty =\infty$. \begin{proof} In this case, we note that \begin{align} c_{\varepsilon, B_k, V_1, V_2}&=\frac{1}{4} \int_{B_k} \bigg[ \mu_1 u_{k}^4+2\beta u_{k}^2v_{k}^2+\mu_2v_{k}^4\bigg]\notag\\ &\leq C_3\varepsilon^N\,,\notag\end{align} and \begin{align} c_{\varepsilon, B_k, V_1, V_2}&=\frac{1}{4} \int_{B_k} \bigg[ \varepsilon^2 |\nabla u_{k}|^2+ V_1 u_k^2 + \varepsilon^2 |\nabla v_k|^2 + V_2 v_k^2\bigg] \notag\\ &\geq C_4\varepsilon^{N/2}\Bigg(\sqrt{\int_{B_k}u_{k}^4}+\sqrt{\int_{B_k}v_{k}^4}\Bigg)\,.\notag \end{align} Consequently, \begin{align}\label{id6.2} C_5\varepsilon^N\leq c_{\varepsilon, B_k, V_1, V_2}\leq C_6\varepsilon^N, \end{align} where $C_5,C_6$ are independent of $\varepsilon\leq1,\ k\geq1$. This gives \begin{align}\notag \int_{B_k} [ \varepsilon^2 |\nabla u_{k}|^2+ V_1 u_k^2 + \varepsilon^2 |\nabla v_k|^2 + V_2 v_k^2] \leq C_7\varepsilon^N. \end{align} By Sobolev's embedding (since $N \leq 3$), \begin{equation}\label{id6.3} \int_{B_k}u_{k}^6\leq C_8\varepsilon^N,\quad \int_{B_k\cap\{|x|\geq R\}}u_{k}^2\leq C_9\varepsilon^N\cdot\frac{1}{\min\limits_{|x|\geq R}V_1(x)}\,. \end{equation} Hence \begin{align}\label{id6.4} \int_{B_k\cap\{|x|\geq R\}}u_{k}^4&\leq\bigg(\int_{B_k\cap\{|x|\geq R\}} u_{k}^2\bigg)^{1/2}\bigg(\int_{B_k\cap\{|x|\geq R\}}u_{k}^6\bigg)^{1/2 }\notag\\ &\leq C_{10} \varepsilon^N \cdot\left(\frac{1}{\min\limits_{|x|\geq R}V_1(x)}\right)^{1/2}. \end{align} By (\ref{c8}) and (\ref{id6.4}), we have \begin{align}\label{id6.5} \int_{B_k\cap\{|x|\leq R\}}u_{k}^4\geq\Bigg(C_1-\frac{C_{10}}{\sqrt{\min\limits_{|x|\geq R}V_1(x)}}\Bigg)\varepsilon^N.
\end{align} Thus if $u_{k}\rightharpoonup\overline{u}$, then $\overline{u}\geq0$ and, by the compactness of the embedding $H^1(B_R)\hookrightarrow L^4(B_R)$, \begin{align}\label{id6.6} \int_{B_R }\overline{u}^4\geq\Bigg(C_1-\frac{C_{10}}{\sqrt{\min\limits_{|x|\geq R}V_1(x)}}\Bigg)\varepsilon^N. \end{align} Since $b_1^\infty=+\infty$, we may choose $R$ large enough such that $C_1-\frac{C_{10}}{\sqrt{\min\limits_{|x|\geq R}V_1(x)}}\geq\frac{1}{2}C_1$. Consequently, $\int_{B_R }\overline{u}^4\geq\frac{1}{2}C_1\varepsilon^N$ and hence $\overline{u}\not\equiv 0$.\\ \end{proof} \noindent {\bf Case 2:} $ b_j^\infty<+\infty,\ j=1,2$. \begin{proof} Suppose $\overline{u}\equiv\overline{v}\equiv0$. Then \begin{equation}\label{id3.2} u_{k}, v_k\rightarrow0\ \mathrm{in}\ C_{loc}^2({\bf R}^N). \end{equation} Let $M$ and $R$ be such that \begin{equation}\label{id3.3} |V_j(x)-b_j^\infty|<\frac{1}{M}\quad\hbox{ for }\: |x|\geq R\,. \end{equation} Let $\chi_R(x)$ be a smooth cut-off function such that $\chi_R(x)=1$ for $|x|\leq R$, $\chi_R(x)=0$ for $|x|\geq 2R$. Now we set \begin{equation}\label{id3.4} \widetilde{u}_{k}=u_{k}(1-\chi_R)\,,\quad\widetilde{v}_{k}=v_{k}(1-\chi_R)\,. \end{equation} Then we have \begin{equation}\notag \int_{{\bf R}^N}|\nabla \widetilde{u}_{k}|^2=\int_{{\bf R}^N}|\nabla u_{k}|^2 -2\int_{{\bf R}^N}\nabla u_{k}\cdot\nabla(u_{k}\chi_R) +\int_{{\bf R}^N}|\nabla (u_{k}\chi_R)|^2, \end{equation} and, by (\ref{id3.2}), \begin{equation}\notag \lim_{k\rightarrow+\infty}\Bigg(\Bigg|\int_{{\bf R}^N}\nabla u_{k}\cdot\nabla(u_{k}\chi_R)\Bigg| +\int_{{\bf R}^N}|\nabla (u_{k}\chi_R)|^2\Bigg)=0\,. \end{equation} Here and below, $o(1)$ denotes terms that tend to zero as $k\rightarrow\infty$. Thus we can write \begin{equation}\label{id3.5} \int_{{\bf R}^N}|\nabla \widetilde{u}_{k}|^2=\int_{{\bf R}^N}|\nabla u_{k}|^2+o(1).
\end{equation} Similarly, \begin{equation}\notag \int_{{\bf R}^N}|\nabla \widetilde{v}_{k}|^2=\int_{{\bf R}^N}|\nabla v_{k}|^2+o(1), \int_{{\bf R}^N}V_1\widetilde{u}_{k}^p=\int_{{\bf R}^N}V_1 u_{k}^p+o(1), \int_{{\bf R}^N}V_2\widetilde{v}_{k}^p=\int_{{\bf R}^N}V_2 v_{k}^p+o(1) \end{equation} for all $ 2\leq p \leq 6$. Hence $E_{\varepsilon,B_k, V_1, V_2}[u_{k},v_{k}]=c_{\varepsilon, B_k, V_1, V_2}=E_{\varepsilon, B_k, V_1, V_2}[\widetilde{u}_{k},\widetilde{v}_{k}]+o(1)$. Moreover, \begin{align}\label{id3.6} &\int_{{\bf R}^N} [\varepsilon^2 |\nabla \widetilde{u}_{k}|^2+b_1^\infty \widetilde{u}_{k}^2 +\varepsilon^2 |\nabla \widetilde{v}_{k}|^2+ b_2^\infty \widetilde{v}_{k}^2 ] \\ - & \int_{{\bf R}^N} [ \mu_1\widetilde{u}_{k}^4+2\beta \widetilde{u}_{k}^2\widetilde{v}_{k}^2 +\mu_2\widetilde{v}_{k}^4] \notag\\ =&\int_{{\bf R}^N}(b_1^\infty-V_1(x))\widetilde{u}_{k}^2+\int_{{\bf R}^N}(b_2^\infty-V_2(x))\widetilde{v}_{k}^2 +o(1)\notag\\ =&O\Big(\frac{1}{M}\int_{{\bf R}^N}(\widetilde{u}_{k}^2 + \widetilde{v}_k^2) \Big)+o(1)\notag\\ =&O\Big(\frac{1}{M}\Big)+o(1)\,.\notag \end{align} Similarly, we have \begin{equation}\label{id3.6-1} \int_{{\bf R}^N} [ 2 \beta \widetilde{u}_k^2 \widetilde{v}_k^2 +\mu_1 \widetilde{u}_k^4+ \mu_2 \widetilde{v}_k^4] = \int_{{\bf R}^N} [ 2 \beta u_k^2 v_k^2 +\mu_1 u_k^4+ \mu_2 v_k^4] + o(1)\,\varepsilon^N.
\end{equation} Hence by (\ref{id3.6}), (\ref{id3.6-1}) and (\ref{eq:2-25}) of Claim~1 in Theorem~\ref{t2.1}, we see that the unique critical point $\widetilde{t}$ of the function $E_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty}[\sqrt{t}\widetilde{u}_{k},\sqrt{t}\widetilde{v}_{k}]$ satisfies \begin{equation}\label{id3.7} |\widetilde{t}-1|=O\Big(\frac{1}{M}\Big)+o(1)\,, \end{equation} which yields \begin{align} E_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty} \Big[\sqrt{\widetilde{t}}\widetilde{u}_{k},\sqrt{\widetilde{t}}\widetilde{v}_{k}\Big] =&E_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty}[\widetilde{u}_{k},\widetilde{v}_{k}]+O\Big(\frac{1}{M}\Big)+o(1)\notag\\ =&E_{\varepsilon, {\bf R}^N, V_1, V_2}[\widetilde{u}_{k},\widetilde{v}_{k}]+O\Big(\frac{1}{M}\Big)+o(1)\notag\\ =&E_{\varepsilon, {\bf R}^N, V_1, V_2}[u_{k},v_{k}]+O\Big(\frac{1}{M}\Big)+o(1)\notag\\ =&c_{\varepsilon, B_k, V_1, V_2}+O\Big(\frac{1}{M}\Big)+o(1).\notag \end{align} On the other hand, \begin{equation}\label{id3.8} \Big(\sqrt{\widetilde{t}}\widetilde{u}_{k},\sqrt{\widetilde{t}}\widetilde{v}_{k}\Big)\in N (\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty) \end{equation} and hence \begin{equation}\label{id3.9} E_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty} \Big[\sqrt{\widetilde{t}}\widetilde{u}_{k},\sqrt{\widetilde{t}}\widetilde{v}_{k}\Big] \geq c_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty}\,. \end{equation} Consequently, $c_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty}\leq c_{\varepsilon, B_k, V_1, V_2}+O\big(\frac{1}{M}\big)+o(1)$. Letting $M\rightarrow+\infty$ and $k\rightarrow+\infty$, we obtain $c_{\varepsilon, {\bf R}^N, b_1^\infty, b_2^\infty}\leq c_{\varepsilon, {\bf R}^N, V_1, V_2}$, which contradicts (\ref{c7}). This completes the proof of Theorem~\ref{t1.2}.
\end{proof} \section{Proof of Theorem~\ref{t1.3}} \setcounter{equation}{0} \ \ \ \ In this section, we study the asymptotic behavior of $(u_{\varepsilon},v_{\varepsilon})$ as $\varepsilon\rightarrow 0$. Firstly, the energy upper bound is stated as follows: \begin{lemma}\label{lem5.1} For $\beta>0$ and $0<\varepsilon\ll1$, \begin{equation}\label{id5.1} c_{\varepsilon, {\bf R}^N, V_1, V_2} \leq\varepsilon^N [\inf\limits_{x\in{\bf R}^N} c_{1, {\bf R}^N, V_1(x),V_2(x) } +o(1)]. \end{equation} \end{lemma} \begin{proof} Fix a point $x_0 \in {\bf R}^N$. Let $(U_0, V_0)$ be a minimizer of $ c_{1, {\bf R}^N, V_1 (x_0), V_2 (x_0)}$. We set $ u(x)= U_0 (\frac{x-x_0}{\varepsilon}), v(x)= V_0 (\frac{x-x_0}{\varepsilon})$ and then use (\ref{eq:2-25s}) to compute the upper bound of $ c_{\varepsilon, {\bf R}^N, V_1, V_2}$. Since $ c_{\varepsilon, {\bf R}^N, \lambda_1, \lambda_2} = \varepsilon^N c_{1, {\bf R}^N, \lambda_1, \lambda_2}$, the rest of the proof is straightforward and thus omitted. \end{proof} Let $u_{\varepsilon}(P^\varepsilon)=\sup\limits_{x\in{\bf R}^N}u_{\varepsilon}(x)$ and $v_{\varepsilon}(Q^\varepsilon)=\sup\limits_{x\in{\bf R}^N}v_{\varepsilon}(x)$. We claim that $\sup\limits_{\varepsilon>0}(|P^\varepsilon|+|Q^\varepsilon|)<+\infty$. To this end, we need to show that both $u_{\varepsilon}$ and $v_{\varepsilon}$ are uniformly bounded. In fact, as in the proof of (\ref{id6.3}), we have \begin{equation}\label{id6.7} \int_{{\bf R}^N}(u_{\varepsilon}^q + v_\varepsilon^q)\leq c\varepsilon^N\quad\hbox{ for }\: 2\leq q\leq6. \end{equation} The equation for $u_{\varepsilon}$ gives \begin{align} \varepsilon^2\triangle u_{\varepsilon}=&V_1u_{\varepsilon}-\mu_1u_{\varepsilon}^3 -\beta u_{\varepsilon}v_{\varepsilon}^2\notag\\ \geq&-\beta v_{\varepsilon}^2u_{\varepsilon}\notag\\ =&-C(x)u_{\varepsilon}\quad\hbox{ in }\:\mathbb{R}^N\,,\notag \end{align} where $C(x):=\beta v_{\varepsilon}^2(x)$; here we have used $\mu_1\leq 0$ and $V_1>0$. Let $\widetilde{U}_{\varepsilon}(y)=u_{\varepsilon}(\varepsilon\,y)$, and $C_\varepsilon(y)= C(\varepsilon\,y)$.
Then \begin{equation}\label{id6.8} \triangle\widetilde{U}_{\varepsilon}+C_\varepsilon(y)\widetilde{U}_{\varepsilon} \geq 0\quad\hbox{ in }\:\mathbb{R}^N\,,\quad\hbox{ and }\: C_\varepsilon\in L^3({\bf R}^N)\,. \end{equation} By the subsolution estimate (Theorem~8.17 of~\cite{GT}), \begin{equation}\label{id6.9} |\widetilde{U}_{\varepsilon}(y)|\leq C\Bigg(\int_{B(y,1)}|\widetilde{U}_{\varepsilon}|^2\Bigg)^{1/2}, \end{equation} where $C>0$ is independent of $\varepsilon$. Hence by (\ref{id6.7}) and (\ref{id6.9}), we see that $||\widetilde{U}_{\varepsilon}||_{L^\infty}\leq C$ and hence $0<u_{\varepsilon}\leq C$. Similarly, we obtain $0<v_{\varepsilon}\leq C$. \noindent {\bf Claim~3:}~~{\it If $ |P^\varepsilon| \to +\infty$, then $ b_1^\infty <+\infty$.} Suppose, on the contrary, that $ b_1^\infty=+\infty$. Since $P^\varepsilon$ is a local maximum point of $u_{\varepsilon}$, we have $\triangle u_{\varepsilon}(P^\varepsilon)\leq 0$. Hence by the equation for $u_{\varepsilon}$, we obtain \begin{equation} V_1(P^\varepsilon)u_{\varepsilon}(P^\varepsilon)-\mu_1 u_{\varepsilon}^3(P^\varepsilon)-\beta u_{\varepsilon }(P^\varepsilon)v_{\varepsilon}^2(P^\varepsilon)=\varepsilon^2\triangle u_{\varepsilon}(P^\varepsilon)\leq0,\notag \end{equation} which, together with $\mu_1\leq 0$, implies that \begin{equation}\label{id6.10} V_1(P^\varepsilon)\leq\beta v_\varepsilon^2 (P^\varepsilon)\leq C, \end{equation} and hence \begin{equation}\label{id6.11} |P^\varepsilon|\leq C_0\,.\end{equation} This contradicts $|P^\varepsilon|\to+\infty$ and proves Claim~3. Moreover, we claim that $b_2^\infty <+\infty$. In fact, suppose $b_2^\infty=+\infty$. Set $ U_\varepsilon(y):= u_\varepsilon (P^\varepsilon+\varepsilon y), V_\varepsilon (y):= v_\varepsilon (P^\varepsilon +\varepsilon y) $. Then $U_\varepsilon \to U_0$ in $C^2_{loc} ({\bf R}^N)$ and $ V_\varepsilon \to V_0$ in $C^2_{loc} ({\bf R}^N)$, where $(U_0, V_0)$ satisfies \begin{equation} \label{uv00} \Delta U_0 - b_1^\infty U_0 + \mu_1 U_0^3 +\beta U_0 V_0^2=0 \ \mbox{in } \ {\bf R}^N.
\end{equation} Hence by (\ref{id6.10}), we obtain $V_0(0)>0$, and then $ V_0 \not \equiv 0$. This implies that \begin{eqnarray*} c_{\varepsilon, {\bf R}^N, V_1, V_2} & = & \frac{1}{4}\int_{{\bf R}^N} [ \varepsilon^2 |\nabla u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\nabla v_\varepsilon|^2 + V_2 v_\varepsilon^2] \\ & \geq & \frac{1}{4}\int_{|x|>R} [ \varepsilon^2 |\nabla u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\nabla v_\varepsilon|^2 + V_2 v_\varepsilon^2] \\ & \geq & \frac{1}{4} \int_{|x|>R} V_2 v_\varepsilon^2 \\ & \geq & C \varepsilon^N \left[ \inf_{|x|>R} V_2 (x)\right] \end{eqnarray*} which contradicts (\ref{id5.1}). Here we have used the hypothesis that $b_2^\infty=+\infty$. Thus we may assume that $b_1^\infty <+\infty$ and $b_2^\infty <+\infty$. As before, $(U_\varepsilon, V_\varepsilon)$ converges to $(U_0, V_0)$ satisfying \begin{equation} \label{uv0} \Delta U_0 - b_1^\infty U_0 + \mu_1 U_0^3 +\beta U_0 V_0^2=0, \ \Delta V_0 - b_2^\infty V_0 + \mu_2 V_0^3 +\beta V_0 U_0^2=0 \ \mbox{in } \ {\bf R}^N\,. \end{equation} Then again $V_0 \not \equiv 0$, since otherwise $ (U_0,V_0)\equiv (0,0)$, which is impossible. Moreover, \begin{eqnarray*} c_{\varepsilon, {\bf R}^N, V_1, V_2} & = & \frac{1}{4}\int_{{\bf R}^N} [ \varepsilon^2 |\nabla u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\nabla v_\varepsilon|^2 + V_2 v_\varepsilon^2] \\ & \geq & \frac{1}{4}\int_{|x|>R} [ \varepsilon^2 |\nabla u_\varepsilon|^2 +V_1 u_\varepsilon^2 +\varepsilon^2 |\nabla v_\varepsilon|^2 + V_2 v_\varepsilon^2] \\ & \geq & \varepsilon^N \frac{1}{4}\int_{{\bf R}^N} [ |\nabla U_0|^2 +b_1^\infty U_0^2 +|\nabla V_0|^2 + b_2^\infty V_0^2] + o(\varepsilon^N) \\ & \geq & \varepsilon^N [ c_{1, {\bf R}^N, b_1^\infty, b_2^\infty} + o(1)] \end{eqnarray*} which contradicts (\ref{id5.1}). This completes the proof of $\displaystyle\sup_{\varepsilon>0}(|P^\varepsilon |+|Q^\varepsilon|) <+\infty$.
Passing to a subsequence, we may assume $(P^\varepsilon, Q^\varepsilon)\rightarrow(P^0, Q^0)$. As before, $(U_\varepsilon, V_\varepsilon)= (u_{\varepsilon} (P^\varepsilon +\varepsilon y), v_\varepsilon (P^\varepsilon +\varepsilon y)) \to (U_0, V_0)$, where $(U_0, V_0)$ satisfies \begin{equation}\notag \begin{cases} \triangle U-V_1(P^0)U+\mu_1 U^3+\beta U V^2=0\quad\hbox{ in }\:{\bf R}^N\,,\\ \triangle V-V_2(P^0)V+\mu_2 V^3+\beta U^2 V=0\quad\hbox{ in }\:{\bf R}^N\,. \end{cases} \end{equation} Then by the strong Maximum Principle, $U_0, V_0>0$. Furthermore, we have \begin{equation}\notag \lim_{\varepsilon\rightarrow0}\varepsilon^{-N}c_{\varepsilon, {\bf R}^N, V_1, V_2}\geq c_{1, {\bf R}^N, V_1(P^0),V_2( P^0)}. \end{equation} Hence by Lemma~\ref{lem5.1}, \begin{equation}\notag c_{1, {\bf R}^N, V_1(P^0),V_2(P^0)} \leq\inf_{x\in{\bf R}^N} c_{1, {\bf R}^N, V_1(x),V_2(x)}\,, \end{equation} i.e. $c_{1, {\bf R}^N, V_1(P^0),V_2(P^0)}=\displaystyle\inf_{x\in{\bf R}^N} c_{1, {\bf R}^N, V_1(x),V_2(x)}\,.$ It remains to show that $\frac{|P^\varepsilon-Q^\varepsilon|}{\varepsilon}\rightarrow 0$. In fact, if $\frac{|P^\varepsilon-Q^\varepsilon|}{\varepsilon}\to +\infty$, then similar arguments give \begin{equation}\notag \lim_{\varepsilon\rightarrow0}\varepsilon^{-N}c_{\varepsilon, {\bf R}^N, V_1, V_2}\geq c_{1, {\bf R}^N, V_1(P^0),V_2( P^0)}+ c_{1, {\bf R}^N, V_1(Q^0),V_2( Q^0)} \geq 2 \inf_{x \in {\bf R}^N} c_{1, {\bf R}^N, V_1 (x), V_2 (x)} \end{equation} which is impossible. On the other hand, if $\frac{|P^\varepsilon-Q^\varepsilon|}{\varepsilon}\to c \not = 0$, then $U_0$ and $V_0$ have different maximum points, which contradicts the fact that both $U_0$ and $V_0$ are radially symmetric and strictly decreasing. Thus $\frac{|P^\varepsilon-Q^\varepsilon|}{\varepsilon}\rightarrow 0$. The uniqueness of $P^\varepsilon,Q^\varepsilon$ follows from Claim~8 of~\cite{lw1}. This completes the proof of Theorem~\ref{t1.3}.
\section{ Proof of Theorem \ref{t1.4}} \setcounter{equation}{0} In this section, we follow the same ideas as in~\cite{lw1} to prove Theorem~\ref{t1.4}. As in the proof of Lemma~4.2 in~\cite{lw1}, the upper bound of $c_{\varepsilon, \Omega, \lambda_1, \lambda_2}$ is given by \begin{lemma} For $\beta > \sqrt{\mu_1 \mu_2}$, \begin{equation} {\label{eq:4-17}} c_{\varepsilon, \Omega, \lambda_1, \lambda_2} \leq \varepsilon^N \Biggl\{ c_{1, {\bf R}^N, \lambda_1, \lambda_2}+ c_1\,e^{-2\sqrt{\lambda_1}(1-\sigma) R_\varepsilon} + c_2\,e^{-2\sqrt{\lambda_2}(1-\sigma)R_\varepsilon } \Biggr\}\,, \end{equation} where $R_\varepsilon = \frac{1}{\varepsilon} \displaystyle\max_{P \in \Omega} d(P, \partial \Omega)$ and the $c_j$'s are positive constants. \label{upper} \end{lemma} \noindent Furthermore, the asymptotic behavior of $(u_\varepsilon, v_\varepsilon)$ can be summarized as follows: \begin{lemma} For $\varepsilon$ sufficiently small, $u_\varepsilon$ has only one local maximum point $P_\varepsilon$ and $v_\varepsilon$ has only one local maximum point $Q_\varepsilon$ such that \begin{equation}{\label{eq:5-5}} \frac{d(P_\varepsilon,\partial\Omega)}{\varepsilon} \to +\infty,\quad \frac{d(Q_\varepsilon,\partial\Omega)}{\varepsilon} \to +\infty,\quad \frac{|P_\varepsilon-Q_\varepsilon|}{\varepsilon}\to 0. \end{equation} Let $U_\varepsilon(y) :=\ u_\varepsilon (P_\varepsilon +\varepsilon y)$, $V_\varepsilon(y) :=\ v_\varepsilon (Q_\varepsilon+\varepsilon y)$. Then $(U_\varepsilon, V_\varepsilon)\to (U_0,V_0)$, where $(U_0,V_0)$ is a least-energy solution of (\ref{eq:1-4g}). Moreover, \begin{equation} \label{uvdecay} \varepsilon \left|\bigtriangledown u_\varepsilon\right|+|u_\varepsilon|\leq C e^{-\sqrt{\lambda_1}(1-\sigma)\frac{|x-P_\varepsilon|}{\varepsilon}}, \quad \varepsilon \left|\bigtriangledown v_\varepsilon\right|+|v_\varepsilon|\leq C e^{-\sqrt{\lambda_2}(1-\sigma)\frac{|x-Q_\varepsilon|}{\varepsilon}}\,. \end{equation} \end{lemma} We now complete the proof of Theorem~\ref{t1.4}.
We may assume, passing to a subsequence, that $P_\varepsilon$ (or $Q_\varepsilon$) converges to some $x_0\in \bar \Omega$. Thus $$d_\varepsilon =\ d(P_\varepsilon,\partial\Omega) \to d_0 :=\ d(x_0,\partial\Omega),\ \ \mbox{as}\ \ \varepsilon\to 0.$$ Note that $d_0$ may be zero. Given a small constant $\sigma>0$, we may choose $d'_0>0$ and $\sigma'>0$ slightly smaller than $\sigma$ such that $$\mbox{vol}(B(x_0,d'_0))=\ \mbox{vol} (\Omega\cap B(x_0,d_0+\sigma)) \quad\hbox{ and }\quad d'_0<d_0+\sigma'\,.$$ We also choose a $C^\infty$ cut-off function $\eta_\varepsilon$ such that $$\left\{\begin{array}{lll} &\eta_\varepsilon(s) =\ 1\ \ &\mbox{for}\ \ 0\leq s\leq d_\varepsilon+\sigma'\,,\\ &\eta_\varepsilon(s) =\ 0 \ \ &\mbox{for}\ \ s>d_\varepsilon+\sigma\,, \\ &0\leq\eta_\varepsilon\leq 1\,, &|\eta'_\varepsilon|\leq C\,.\end{array}\right.$$ Let $\tilde u_\varepsilon(x)=\ u_\varepsilon \eta_\varepsilon(|P_\varepsilon-x|)$ and $\tilde v_\varepsilon(x) =\ v_\varepsilon\eta_\varepsilon(|Q_\varepsilon-x|)$. Then we have \begin{equation} \lim_{\varepsilon \to 0} \varepsilon^{-N} \int_\Omega [ 2 \beta \tilde{u}_\varepsilon^2 \tilde{v}_\varepsilon^2 + \mu_1 \tilde{u}_\varepsilon^4 +\mu_2 \tilde{v}_\varepsilon^4 ] = \int_{{\bf R}^N} [ 2 \beta U_0^2 V_0^2 +\mu_1 U_0^4 + \mu_2 V_0^4]>0\,. \end{equation} Hence \[ \int_\Omega [ 2 \beta \tilde{u}_\varepsilon^2 \tilde{v}_\varepsilon^2 + \mu_1 \tilde{u}_\varepsilon^4 +\mu_2 \tilde{v}_\varepsilon^4 ]>0\,,\] for $\varepsilon$ sufficiently small.
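The positivity of the limit above follows from the fact that $(U_0,V_0)$ solves the limit problem (\ref{eq:1-4g}) with constant potentials: multiplying its two equations by $U_0$ and $V_0$ respectively and integrating over ${\bf R}^N$ gives the Nehari-type identity \[ \int_{{\bf R}^N} [ 2 \beta U_0^2 V_0^2 +\mu_1 U_0^4 + \mu_2 V_0^4] = \int_{{\bf R}^N} [ |\nabla U_0|^2 + \lambda_1 U_0^2 + |\nabla V_0|^2 + \lambda_2 V_0^2]>0\,. \]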
By the decay estimate~(\ref{uvdecay}) and Lemma~\ref{cnew}, we obtain that \begin{eqnarray}\label{6.4-1} c_{\varepsilon, \Omega, \lambda_1, \lambda_2} &\geq& E_{\varepsilon, \Omega, \lambda_1, \lambda_2} [tu_\varepsilon,t v_\varepsilon] \\ &\geq& E_{\varepsilon, \tilde{\Omega}, \lambda_1, \lambda_2} [t\tilde u_\varepsilon, t\tilde v_\varepsilon] -\varepsilon^N \exp \left[-\frac{2\sqrt{\lambda_1}}{ \varepsilon} (d_\varepsilon+\sigma')\right] -\varepsilon^N \exp \left[-\frac{2\sqrt{\lambda_2}}{ \varepsilon} (d_\varepsilon+\sigma')\right]\nonumber \end{eqnarray} for all $t\in [0,2]$, where $\tilde{\Omega}= \Omega \cap B(x_\varepsilon, d_\varepsilon +\sigma)$ and $x_\varepsilon$ can be $P_\varepsilon$ or $Q_\varepsilon$. Let $R_\varepsilon=\ \frac{d'_\varepsilon}{\varepsilon}$, where $d'_\varepsilon$ is chosen such that $$\mbox{vol} (B(0,d'_\varepsilon)) =\ \mbox{vol} (\Omega\cap B(x_\varepsilon,d_\varepsilon+\sigma)).$$ Using Schwarz symmetrization, we have $$\int_{B(0, d'_\varepsilon)} (\tilde u_\varepsilon^*)^2 (\tilde v_\varepsilon^*)^2 \geq \int_{\tilde{\Omega}} \tilde u_\varepsilon^2 \tilde v_\varepsilon^2$$ and then \begin{equation} \label{uvin} \int_{B(0, d'_\varepsilon)} [ 2 \beta (\tilde u_\varepsilon^*)^2 (\tilde v_\varepsilon^*)^2 + \mu_1 (\tilde{u}_\varepsilon^*)^4 + \mu_2 (\tilde{v}_\varepsilon^*)^4] \geq \int_{\tilde{\Omega}} [ 2 \beta \tilde u_\varepsilon^2 \tilde{v}_\varepsilon^2 +\mu_1 \tilde{u}_\varepsilon^4 +\mu_2 \tilde{v}_\varepsilon^4] >0\,. \end{equation} Thus \begin{equation}\label{6.6} E_{\varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2} [t\tilde u_\varepsilon^*, t \tilde v_\varepsilon^*] \leq E_{\varepsilon, \tilde{\Omega}, \lambda_1, \lambda_2} [t \tilde u_\varepsilon, t \tilde v_\varepsilon]\,,\quad\forall\, t\in [0,2]\,.\end{equation} Here we have used the fact that $\beta >0$.
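The inequality (\ref{6.6}) rests on three standard rearrangement facts: Schwarz symmetrization preserves all $L^p$ norms, does not increase the Dirichlet integral (the P\'olya--Szeg\H{o} inequality), and does not decrease the product term (the Hardy--Littlewood inequality): \[ \int_{B(0, d'_\varepsilon)} (\tilde u_\varepsilon^*)^p = \int_{\tilde{\Omega}} \tilde u_\varepsilon^p\,, \qquad \int_{B(0, d'_\varepsilon)} |\nabla \tilde u_\varepsilon^*|^2 \leq \int_{\tilde{\Omega}} |\nabla \tilde u_\varepsilon|^2\,, \qquad \int_{B(0, d'_\varepsilon)} (\tilde u_\varepsilon^*)^2 (\tilde v_\varepsilon^*)^2 \geq \int_{\tilde{\Omega}} \tilde u_\varepsilon^2 \tilde v_\varepsilon^2\,. \] Since $\beta>0$, the cross term enters the energy functional with a negative sign, so enlarging it can only decrease the energy, while the remaining terms do not increase; this gives (\ref{6.6}).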
By~(\ref{uvin}) and Claim~1 of Theorem~\ref{t2.1}, there exists $t^*\in (0,2]$ such that $$E_{\varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2} [t^* \tilde u_\varepsilon^*, t^* \tilde v_\varepsilon^*] \geq E_{ \varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2}[t \tilde u_\varepsilon^*, t\tilde v_\varepsilon^*]\,, \quad\forall t \geq 0\,.$$ Then by (\ref{6.4-1}) and (\ref{6.6}), \begin{eqnarray*} & \ & E_{\varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2} [t^* \tilde u_\varepsilon^*, t^* \tilde v_\varepsilon^*] \\ & \leq & E_{\varepsilon, \tilde{\Omega}, \lambda_1, \lambda_2} [t^* \tilde u_\varepsilon, t^* \tilde v_\varepsilon]\\ & \leq & c_{\varepsilon, \Omega, \lambda_1, \lambda_2} + \varepsilon^N \exp \left[-\frac{2\sqrt{\lambda_1}}{\varepsilon} (d_\varepsilon+\sigma')\right] + \varepsilon^N \exp \left[-\frac{2\sqrt{\lambda_2}}{\varepsilon} (d_\varepsilon+\sigma')\right], \\ & \ & E_{\varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2}[t^* \tilde u_\varepsilon^*, t^* \tilde v_\varepsilon^*] \\ & =\ & \sup_{t>0} E_{\varepsilon, B(0, d'_\varepsilon), \lambda_1, \lambda_2} [t\tilde u_\varepsilon^*, t\tilde v_\varepsilon^*]\\ & \geq & \varepsilon^N \inf_{u, v \geq 0, \atop{u \not \equiv 0, v \not \equiv 0, \atop{(u, v)\in N(1,B_{R_\varepsilon}, \lambda_1, \lambda_2) } } } E_{1, B_{R_\varepsilon}, \lambda_1, \lambda_2} [u,v]\\ & \geq & \varepsilon^N \Biggl\{ c_{1, {\bf R}^N, \lambda_1, \lambda_2} + c_3\,\exp \Biggl[ -\frac{2 (1+\sigma)\sqrt{\lambda_1}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr]\Biggr\} \\ & & + \varepsilon^N\, c_4\,\exp \Biggl[ -\frac{2(1+\sigma)\sqrt{\lambda_2}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr]\,, \end{eqnarray*} where the $c_j$'s are positive constants. Here the last inequality follows from Lemma~\ref{upper} and Theorem~4.1 of \cite{lw1}.
Thus \begin{equation} \label{lowerbound2} c_{\varepsilon, \Omega, \lambda_1, \lambda_2} \geq \varepsilon^N \left\{\begin{array}{lll} & c_{1, {\bf R}^N, \lambda_1, \lambda_2} + c_3\,\exp \Biggl[ -\frac{2 (1+\sigma)\sqrt{\lambda_1}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr] \\ & + c_4\,\exp \Biggl[ -\frac{2(1+\sigma)\sqrt{\lambda_2}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr]\end{array} \right\}\,. \end{equation} Combining the lower and upper bounds of $c_{\varepsilon, \Omega, \lambda_1, \lambda_2}$, we obtain \[ c_3\,\exp \Biggl[ -\frac{2 (1+\sigma)\sqrt{\lambda_1}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr] + c_4\,\exp \Biggl[ -\frac{2(1+\sigma)\sqrt{\lambda_2}}{ \varepsilon} (d_\varepsilon+o(1))\Biggr] \] \[ \leq c_1\,\exp \Biggl[ -\frac{2 (1-\sigma) \sqrt{\lambda_1}}{ \varepsilon} (d_0+o(1))\Biggr] + c_2\,\exp \Biggl[ -\frac{2 (1-\sigma)\sqrt{\lambda_2}}{ \varepsilon} (d_0+o(1))\Biggr]\,. \] This then shows that $d(P_\varepsilon,\partial\Omega), d(Q_\varepsilon,\partial\Omega) \to \displaystyle\max_{P \in\Omega} d(P,\partial\Omega)$ since $ |P_\varepsilon -Q_\varepsilon| \to 0$. \qed
\section{Introduction} Cavity optomechanical systems offer a new arena for studying nonlinear optics, the quantum behavior of massive objects, and possible connections between quantum optics and condensed matter systems \cite{2007_OpticsExpress_Kippenberg_Vahala, Marquardt2009Optomechanics, Heinrich2011Collective, Pikovski2012Probing, Isart2011Large, Chang2009Cavity}. Many of the scientific goals for this field share two prerequisites: cooling a mechanical mode close to its ground state, and detecting its zero-point motion with an adequate signal-to-noise ratio. The first experiment to satisfy these prerequisites used a conventional dilution refrigerator to cool a piezoelectric mechanical element coupled to a superconducting qubit \cite{OConnell2010Quantum}. The base temperature of the refrigerator ensured that one of the higher-order vibrational modes (a dilatational mode with resonance frequency $\sim$ 6 GHz) was in its quantum mechanical ground state. At the same time, the mechanical element was strongly coupled to a superconducting qubit via its piezoelectric charge, ensuring that the presence of a single phonon in the dilatational mode could be detected with high fidelity. Despite the success of this approach, many optomechanics experiments would benefit from the use of low-order mechanical modes, mechanical modes with higher quality factors $Q$ (the mechanical element used in Ref. \cite{OConnell2010Quantum} had $Q \sim 260$), and direct coupling between the mechanical element and the electromagnetic field (i.e., rather than via a qubit). In addition, some experiments will require the mechanical system to couple to optical frequencies (i.e., visible and near-infrared light) \cite{Stannigel2010Optomechanical} in addition to microwaves \cite{Regal2011From}. 
A number of groups have developed optomechanical systems in which a high-quality, low-order vibrational mode of an object is coupled to a microwave or optical cavity of very low loss \cite{2007_OpticsExpress_Kippenberg_Vahala, Marquardt2009Optomechanics}. These high-quality-factor mechanical devices typically resonate at frequencies far too low to be cooled to the ground state by conventional refrigeration techniques. Nevertheless, their vibrational modes can be cooled well below the ambient temperature using coherent states of the electromagnetic field (produced, e.g., by an ideal, noiseless laser) \cite{Rae2007Theory, Marquardt2007Quantum}. The technique of using coherent laser light to reduce the temperature of another system (i.e., ``laser cooling'') has been used with great success in the atomic physics community, both to prepare a single trapped ion in its motional ground state \cite{Diedrich1989Laser} and to provide one of the cooling stages necessary to achieve Bose-Einstein condensation in a dilute atomic gas \cite{Wieman1999Atom}. Laser cooling also has a long history in optomechanics, and a number of descriptions of laser-cooled optomechanical systems have been presented in the literature \cite{Braginsky1970Invesitgation, Braginsky1967Ponderomotive, 2007_OpticsExpress_Kippenberg_Vahala, Marquardt2009Optomechanics}. To date, two groups have described experiments in which laser cooling (or its microwave analog) has been used to reduce the vibrations of a solid object close to its quantum mechanical ground state (i.e., to a mean phonon number less than unity) \cite{Teufel2011Sideband,Chan2011Laser}. In these experiments the electromagnetic drive provided both the cooling and single-sideband readout of the mechanical motion. To achieve a mean phonon number very close to zero, a number of technical obstacles must be overcome.
In general, laser cooling is optimized when the mechanical mode is weakly coupled to its thermal bath and well coupled to an electromagnetic cavity. This can be achieved by using a mechanical oscillator of high $Q$, and by applying a strong drive to an optical cavity of high finesse $F$. However, even when these criteria are met, there is a minimum temperature that can be achieved by laser cooling. For a laser without any classical noise, this limit is set by the quantum fluctuations of the light in the cavity. Also, as described in Refs. \cite{Rae2007Theory} and \cite{Marquardt2007Quantum}, a laser without classical noise can achieve ground-state cooling only if the optomechanical system is in the resolved sideband regime (i.e., the mechanical frequency is larger than the cavity loss rate). However, if the laser that is driving the cavity exhibits classical fluctuations, its cooling performance will be degraded, because classical fluctuations carry a non-zero entropy \cite{Rabl2009Phasenoise, Diosi2008Laser}. Qualitatively speaking, the fluctuating phase and amplitude of the light result in fluctuating radiation pressure inside the cavity, which in turn leads to random motion of the mechanical element that is indistinguishable from thermal motion. This point has been discussed in the optomechanics literature, and may play an important role in some experiments \cite{Kippenberg2011Phase}. Here we present a description of an experiment that meets many of the criteria for ground-state laser cooling and detection (in that a high-quality mechanical element is coupled to a high-finesse cavity in a cryogenic environment), but whose cooling performance is limited by classical laser noise. This experiment employs a membrane-in-the-middle geometry \cite{2008_Nature_Thompson_Harris}, in which a flexible dielectric membrane is placed inside a free-space optical cavity.
The typical dimensions of free-space optical cavities lead to the requirement that the membrane have a lateral dimension $\sim$ 1 mm to avoid clipping losses at the beam waist. This leads to a fundamental drum-head mode with a resonance frequency $\sim 10^5$ Hz, requiring laser cooling to $\sim$ 1 \textmu K in order to reach the ground state. Despite this low temperature, this type of optomechanical system is appealing for a number of reasons. The Si$_3$N$_4$~membranes used here exhibit exceptionally high quality factors $Q$ (even when they are patterned into more complex shapes \cite{KimblePersonal}), low optical absorption \cite{Sankey2010Strong}, and compatibility with monolithic, fiber-based optical cavities \cite{FlowersJacobs2012FiberCavityBased}. Furthermore, the membrane-in-the-middle geometry provides access to different types of optomechanical coupling that may serve as useful tools for addressing quantum vibrations \cite{2008_Nature_Thompson_Harris, Sankey2010Strong, 2008_NJP_Jayich_Harris}. At a cryogenic base temperature of 400 mK, we observe a mechanical quality factor $Q > 4\times 10^6$ for the 261-kHz fundamental membrane mode, and a cavity resonance halfwidth of 60 kHz, meaning the system operates in the resolved sideband limit. We monitor the membrane's thermal motion using a heterodyne optical circuit capable of simultaneously measuring both of the mechanical sidebands, and find that the observed optical spring and damping quantitatively agree with theory. To quantify the role of classical laser noise in this system, as well as in optomechanical systems more generally, we also present a detailed theoretical model of optomechanical systems that are subject to classical laser noise. This model describes the roles of amplitude noise, phase noise, and amplitude-phase correlations in the multiple beams that are typically used to cool and measure an optomechanical system.
Expressions are derived for the heterodyne spectrum expected for optomechanical systems in the presence of correlated noise sources, and we discuss the limits that classical laser noise imposes on cooling and reliably measuring the mean phonon number. \section{Cryogenic Apparatus} \begin{figure} \includegraphics[width=\textwidth]{figure1.pdf} \caption{Cryogenic optomechanical system. \ \textbf{a} Two Nd-YAG lasers probe and cool the cryogenic optomechanical system. The lasers are frequency-locked $\sim$ 9 GHz apart by feeding back on the beat signal from a fast photodiode (FPD). The majority of the signal laser's output serves as a heterodyne local oscillator. The rest is shifted 80 MHz with an acousto-optical modulator (AOM) and then phase modulated using an electro-optical modulator (EOM) with 22\% of the power in $\pm$15 MHz sidebands. These beams land on a sampler, and a small amount is sent to the cold cavity. The remainder lands on a ``reference'' photodiode (RPD) to monitor the heterodyne phase. Light leaving the cavity is collected by a ``signal'' photodiode (SPD) to monitor the membrane's motion. The signal laser is locked to the cavity with the Pound-Drever-Hall (PDH) method using the 15 MHz sidebands. The frequency and amplitude of the cooling laser are fine-tuned with an additional AOM (not shown). \textbf{b} Mechanical ringdown measurement, showing the membrane's amplitude after a drive piezo is turned off. \textbf{c} Cavity ringdown measurement, showing the power leaving the cavity after the drive laser is turned off. The solid lines in \textbf{b} and \textbf{c} show exponential fits to the data. \textbf{d} Power spectral density of the heterodyne sidebands from the membrane's Brownian motion at 400 mK. The frequency is plotted relative to $\omega_{\text{if}}/2\pi$ = 80 MHz, and the lower sideband (red) has been folded on top of the upper sideband (blue) for comparison.
\ } \label{fig1} \end{figure} Figure \ref{fig1}a shows a schematic of our cryogenic optomechanical system. A 1.5 mm $\times$ 1.5 mm $\times$ 50 nm stoichiometric Si$_3$N$_4$~membrane resides at the center of a (nominally) $3.39$ cm long optical cavity. The membrane is mounted on a three-axis cryogenic actuator allowing us to tilt the membrane about two axes and displace it along the cavity axis. The cavity, membrane, and a small set of guiding optics are cooled to approximately 400 mK in a $^3$He cryostat. Free-space laser light is coupled to the cavity via one of the cryostat's clear-shot tubes. The most reliable way to measure the membrane's mechanical quality at 400 mK is to perform a mechanical ringdown by driving the membrane at its resonant frequency ($\omega_\text{m}= 2\pi \times 261.15$ kHz) to large amplitude with a nearby piezo, shutting off the drive, and monitoring the decay of the membrane's vibrations. We monitor the membrane's motion interferometrically using a laser of wavelength 935 nm, which is far enough from the design wavelength of our cavity mirror coatings (1064 nm) that the cavity finesse is $\sim 1$; this ensures the measurement exerts no significant back action upon the membrane. Figure \ref{fig1}b shows a typical mechanical ringdown measurement. To ensure the membrane motion is in the linear regime, we let it ring down until its frequency stabilizes before fitting the data to an exponential curve (the inferred time constant is then insensitive to the choice of time window). The observed ringdown time $\tau_\text{m} = 5.3$ s corresponds to a mechanical quality factor $Q$ = 4.3 million at 400 mK, though this value varies with thermal cycling (i.e. between 400 mK and 4 K), and typically ranges from $\sim 4-5$ million. As shown in Fig. 
\ref{fig1}a, two independent Nd-YAG lasers (wavelength $\lambda$ = 1064 nm) provide a total of five beams for driving the cavity and performing the heterodyne detection of the membrane's motion (described below). To achieve a large optomechanical back action with these lasers, we require a high-finesse optical cavity. The top and bottom mirrors in Fig. \ref{fig1}a are designed to have a power reflectivity exceeding 99.98\% and 99.998\%, respectively, at $\lambda = 1064$ nm, which would correspond to a cavity finesse of 30,000; in practice, these mirrors generally perform above this specification. Figure \ref{fig1}c shows the results of a typical cavity ringdown measurement, performed by toggling the power of a laser driving the cavity and collecting the power leaking out of the cavity when the drive is shut off. The measured time constant $\tau_c = $ 1.34 \textmu s corresponds to a finesse of $F=37,000$. This value generally depends on the day the data was taken and on the orientation of the membrane. It is lower than the value we measured after initially cooling to 400 mK ($\sim 80,000$). We believe this reduction was caused either by gradual condensation of materials on the surfaces over months of operation, or by a change in the membrane's alignment, which can steer the cavity mode away from a high-performance region of the end mirrors (a spatial dependence of cavity-mirror performance was also observed in Ref. \cite{Sankey2010Strong}). The finesse measured in Fig. \ref{fig1}c corresponds to a cavity loss rate of $\kappa/2\pi$~= 120 kHz, meaning the cryogenic optomechanical system operates in the resolved sideband regime, a condition necessary for ground-state cooling \cite{Rae2007Theory,Marquardt2007Quantum}. The first purpose of this apparatus is to perform a heterodyne measurement of the membrane's motion. As shown in Fig. \ref{fig1}a, light from the ``signal laser'' is split into several frequencies before it interacts with the cavity. The inset of Fig.
\ref{fig1}a shows a summary of the relative magnitudes and frequencies of the laser light landing on the cavity, with dashed lines roughly illustrating the susceptibility of the different cavity resonances. Most of the light serves as a local oscillator tuned far from the cavity resonance; this power $P_{\text{lo}}$ simply bounces off the first cavity mirror and returns to a ``signal'' photodiode (SPD). A small fraction of this light is shifted by $\omega_{\text{if}}/2\pi$~= 80 MHz using an acousto-optical modulator (AOM) and is used to both lock the laser near the cavity resonance and record the membrane's motion. Locking is achieved via the Pound-Drever-Hall technique \cite{Black2001Introduction} with 15 MHz sidebands generated by an electro-optical modulator (EOM). A sampler directs $\sim 5$\% of these beams' power into the cryostat and cavity. We use the remaining 95\% (sent to a ``reference'' photodiode RPD) to monitor the laser's phase and power. The sampler then passes $\sim 95$\% of the light escaping the cryostat through to the signal photodiode. This signal is demodulated at the beat note $\omega_{\text{if}}/2\pi$~= 80 MHz in order to simultaneously detect the two sidebands generated by the membrane's thermal motion. Figure \ref{fig1}d shows a typical power spectral density of these sidebands. A peak appears at the membrane's fundamental mechanical frequency $\omega_\text{m}/2\pi \approx 261.1$ kHz as expected. The sidebands are identical, as expected for an interferometric measurement in which the laser noise contributes a negligible amount of force noise compared to the thermal bath and the mean phonon number is $\gg1$. The second purpose of this apparatus is to manipulate the membrane with optical forces, and so we include a second (cooling / pump) laser that addresses a different longitudinal mode of the cavity. 
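As a cross-check, the figures of merit quoted in this section follow directly from the two ringdown time constants; a minimal sketch, using only the numbers given in the text:

```python
import math

# Mechanical Q from the amplitude ringdown: amplitude ~ exp(-t/tau_m)
# with tau_m = 2Q/omega_m, so Q = pi * f_m * tau_m.
f_m, tau_m = 261.15e3, 5.3                    # Hz, s
Q = math.pi * f_m * tau_m                     # ~4.3e6

# Cavity loss rate and finesse from the intensity ringdown time tau_c:
# kappa = 1/tau_c and F = FSR/linewidth, with FSR = c/(2L).
c, L, tau_c = 299792458.0, 3.39e-2, 1.34e-6   # m/s, m, s
kappa = 1.0 / tau_c                           # rad/s
linewidth = kappa / (2 * math.pi)             # ~120 kHz (full width)
finesse = c / (2 * L) / linewidth             # ~37,000
```

Since $f_\text{m}$ = 261 kHz exceeds the 120 kHz cavity linewidth, these numbers also confirm the resolved-sideband condition quoted above.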
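The simultaneous two-sideband demodulation described above can be illustrated with a toy time-domain sketch; the sample rate and sideband amplitudes below are invented for illustration, not taken from the experiment:

```python
import numpy as np

# Model photocurrent: a beat note at omega_if/2pi = 80 MHz carrying small
# motional sidebands at +/- omega_m/2pi = 261.15 kHz (amplitudes invented).
fs = 400e6                                  # sample rate (illustrative)
t = np.arange(80000) / fs                   # 0.2 ms record
f_if, f_m = 80e6, 261.15e3
i_t = (np.cos(2*np.pi*f_if*t)
       + 0.01*np.cos(2*np.pi*(f_if + f_m)*t)
       + 0.01*np.cos(2*np.pi*(f_if - f_m)*t))

# Complex demodulation at the beat note recovers both sidebands at once,
# at +f_m (upper) and -f_m (lower) relative to the carrier.
iq = i_t * np.exp(-2j*np.pi*f_if*t)
spec = np.abs(np.fft.fft(iq))**2
freqs = np.fft.fftfreq(len(t), 1/fs)
band = (freqs > 0.1e6) & (freqs < 1e6)
f_upper = freqs[band][np.argmax(spec[band])]    # lands near +f_m
```

This is the sense in which the red sideband of Fig.~\ref{fig1}d can be folded on top of the blue one: both appear in the same demodulated record, on opposite sides of the carrier.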
If the cooling and signal beams address the same cavity mode, the beating between the two beams leads to a large heterodyne signal that clouds our measurement, and to a strong mechanical drive at the beat frequency (which is usually close to the mechanical frequency). This can make the system unstable and the data difficult to interpret. To overcome this challenge, we lock the cooling and signal lasers such that they address different longitudinal cavity modes roughly 9 GHz apart. The longitudinal modes are chosen to be two free spectral ranges apart so that the dependence of the cavity resonance frequency on membrane displacement is approximately the same for the two modes. This way, drift or vibrations in the membrane mount will (to lowest order) not change the relative frequencies of the modes. With the lasers locked in this way, any beating between the cooling and signal lasers occurs at frequencies that are irrelevant to the membrane's mechanics. As shown in Fig. \ref{fig1}a, the two lasers are locked by picking off a small portion of both beams and generating an error signal based on the frequency of their beat note. We have locked the free-running lasers $\sim 9$ GHz apart with an RMS deviation of $\sim 10$ Hz. When the signal laser is simultaneously locked to the membrane cavity, however, this performance degrades to an RMS deviation of $\sim 1$ kHz; this is because the membrane cavity is quite sensitive to environmental noise such as acoustic vibrations in the room, which injects additional noise into the signal laser's frequency (this first-generation cryogenic apparatus did not include significant vibration isolation). When the two lasers are locked to each other and the signal laser is locked to the membrane cavity, the cooling laser can then be fine-tuned relative to its cavity mode using an additional AOM (not shown). \begin{figure} \includegraphics[width=\textwidth]{figure2.pdf} \caption{Response of the membrane to the cooling laser.
\textbf{a} Typical heterodyne spectra (red and blue sidebands folded on top of each other) for increasing values of cooling laser power $P_\text{p}$ at $\Delta_\text{p}/2\pi = -250$ kHz. Solid lines are fits to Fano lineshapes (simultaneously fitting the width and frequency of both sidebands). \textbf{b} Membrane frequency and damping determined from Fano fits (similar to \textbf{a}) for different values of $\Delta_\text{p}/2\pi$. Solid lines represent a simultaneous fit of these two data sets to optomechanical theory.} \label{fig2} \end{figure} The cooling beam adds a significant optomechanical damping and spring to the membrane, so the linewidth and center frequency of the sidebands in Fig. \ref{fig1}d depend on its detuning $\Delta_\text{p}$ and power $P_\text{p}$. Figure \ref{fig2}a shows typical heterodyne spectra for the cooling beam red-detuned by $\Delta_\text{p}/2\pi = -250$ kHz. As the cooling power $P_\text{p}$ is increased (from $P_\text{p}=0$ in Fig. \ref{fig1}a), the membrane's vibrations are laser cooled; the linewidth increases and the integrated area under the curve decreases, qualitatively as expected. At high $P_\text{p}$ the red and blue sidebands exhibit a large asymmetry. We find the spectra are always well fit by a Fano lineshape. Figure \ref{fig2}b shows the membrane's mechanical frequency and damping as a function of $\Delta_\text{p}$. We simultaneously fit the frequency and damping to the theory described in Ref. \cite{Marquardt2007Quantum} (and outlined in section \ref{section-model} below), allowing four parameters to vary: the free spectral range FSR, the ratio of the cavity's loss through the entrance mirror to the total cavity loss $\kappa_{\text{ext}}/\kappa$, the bare mechanical frequency $\omega_\text{m}$, and the signal beam detuning $\Delta_\text{s}$. The results of this fit are: FSR $= 8.7673410 \text{ GHz} \pm 5 \text{ kHz}$ (Note the statistical fit error was 460 Hz.
The quoted error reflects the precision of a frequency measurement used to generate the 9 GHz error signal.), $\kappa_{\text{ext}}/\kappa = 0.243 \pm 0.003$, $\omega_\text{m}/2\pi = 261150.3 \pm 0.9 $ Hz, and $\Delta_\text{s}/2\pi = -880 \pm 250$ Hz. The precise value of the FSR adds an overall offset to $\Delta_\text{p}$ (i.e. a horizontal shift in Fig. \ref{fig2}b). The estimate of the FSR from this fit is significantly more precise than our independent estimate of 8.84 GHz based on the cavity length. The ratio $\kappa_{\text{ext}}/\kappa$ simultaneously scales the optical spring and damping strength. It can be independently estimated as $\kappa_{\text{ext}}/\kappa = 0.2$ from the cavity ringdown and the fraction of the incident light lost in the cavity with the laser tuned on resonance (64\% in this case). This estimate is lower than the fit value by 20\%, which we attribute to imperfect cavity mode matching and to the fact that the membrane position varies by $\sim 10$ nm during measurements, which can affect the cavity finesse \cite{Sankey2010Strong}. We allow the bare mechanical frequency to float because we find that it can drift by a few Hz on the hour time scale. This adds a constant offset to the frequency plot in Fig. \ref{fig2}b. Finally, for this particular experiment we locked the signal beam as close to resonance as possible, but as this tends to drift on the scale of hours, we left $\Delta_\text{s}$ as a fitting parameter. $\Delta_\text{s}$ adds a very small constant offset to the damping and spring. All other parameters, such as the cavity finesse and input power, were measured independently. The simultaneous fit is thus heavily constrained and agrees with the data very well. We also find that the fit is similarly convincing if we simply fix $\Delta_\text{s} = 0$ and $\omega_\text{m}/2\pi = 261.15$ kHz (a typical value of $\omega_\text{m}$). While the optical spring and damping in Fig.
\ref{fig2}b are well-modeled by standard theory, the interpretation of the sideband amplitudes and lineshapes in Fig. \ref{fig2}a is not obvious. As we now discuss, the Fano lineshape arises from interference between the membrane's response to classical laser noise and the classical laser noise itself, an effect similar to what is seen in single-sideband measurements in other optomechanical systems \cite{Rocheleau2010Preparation,Teufel2011Sideband,Gavartin2012Hybrid}. \section{General Model of Optomechanics with Classical Laser Noise} \label{section-model} In this section, we present and solve the equations of motion for the optical cavity and the mechanical oscillator. Since the local oscillator beam is far off any cavity resonance frequency, we can neglect it here. We will let $\hat{a}_\mathrm{s}$ be the bosonic annihilation operator of the cavity mode addressed by the lock/signal beam, whereas $\hat{a}_\mathrm{p}$ is the annihilation operator for the cavity mode addressed by the cooling (pump) beam. The position operator of the mechanical oscillator is $\hat{x} = x_0 + x_\mathrm{zpf} \left(\hat{c} + \hat{c}^\dagger\right)$, where $\hat{c}$ is the phonon annihilation operator, $x_0 = \langle \hat{x} \rangle$ and $x_\mathrm{zpf}$ is the size of the zero point fluctuations. The Hamiltonian is \begin{equation} \label{eq:Hamiltonian} H = \sum_{j = \mathrm{s},\mathrm{p}} \hbar \left(\omega_j + g_j \hat{x} \right) \hat{a}^\dagger_j \hat{a}_j + \hbar \omega_\mathrm{m} \hat{c}^\dagger \hat{c} + H_\mathrm{drive} + H_\mathrm{diss} \ . \end{equation} The interaction term describes the modulation of the cavity resonance frequencies by the motion of the mechanical oscillator, $H_\mathrm{drive}$ describes the laser drive, and $H_\mathrm{diss}$ describes the coupling to both the electromagnetic and mechanical environment.
This coupling to external degrees of freedom is conveniently described by input-output theory \cite{Collett1984Squeezing,Clerk2010Introduction}, which gives rise to the equations of motion \begin{eqnarray} \label{eq:EOM} \dot{\hat{a}}_j & = & -\left(\frac{\kappa_j}{2} + i \omega_j \right) \hat{a}_j - i g_j \hat{x} \hat{a}_j + \sqrt{\kappa_{j,\mathrm{ext}}} \, \hat{a}_{j,\mathrm{in}} + \sqrt{\kappa_{j,\mathrm{int}}} \, \hat{\xi}_{j} \quad , \quad j = \mathrm{s},\mathrm{p} \\ \dot{\hat{c}} & = & - \left(\frac{\gamma}{2} + i \omega_\mathrm{m} \right) \hat{c} - i \sum_j g_j \hat{a}^\dagger_j \hat{a}_j + \sqrt{\gamma} \, \hat{\eta} \ . \end{eqnarray} Here, $\kappa_{j,\mathrm{ext}}$ is the decay rate of mode $j$ through the mirror which couples the cavity to the external laser drive, whereas $\kappa_{j,\mathrm{int}}$ describes other types of optical decay. The total linewidth of cavity mode $j$ is $\kappa_j = \kappa_{j,\mathrm{ext}} + \kappa_{j,\mathrm{int}}$. The input modes $\hat{\xi}_{j}$ describe optical vacuum noise and fulfill $\langle \hat{\xi}_{j}(t) \hat{\xi}^\dagger_{j'}(t') \rangle = \delta(t-t') \delta_{j,j'}$ and $\langle \hat{\xi}^\dagger_{j}(t) \hat{\xi}_{j'}(t') \rangle = 0$. The coupling to the laser drive is described by the input mode \begin{equation} \label{eq:Input} \hat{a}_{j,\mathrm{in}}(t) = e^{-i \Omega_{j} t} \left[K_j + \frac{1}{2} \left(\delta x_j(t) + i \, \delta y_j(t) \right) \right] + \hat{\xi}_{j,\mathrm{in}} \end{equation} where $K_j = \sqrt{P_j/\hbar \Omega_j}$, with $\Omega_\mathrm{s}$ ($\Omega_\mathrm{p}$) being the drive frequency and $P_\mathrm{s}$ ($P_\mathrm{p}$) the power of the lock (cooling) beam. We have introduced the classical variables $\delta x_j$ and $\delta y_j$ which describe technical laser amplitude and phase noise, respectively. 
Since we will only be concerned with the noise close to the mechanical frequency $\omega_\mathrm{m}$, we can assume a white noise model where \begin{eqnarray} \label{eq:ClassNoise} \langle \delta x_j(t) \delta x_{j'}(t') \rangle & = & C_{j,xx} \delta(t-t') \delta_{j,j'} \\ \langle \delta y_j(t) \delta y_{j'}(t') \rangle & = & C_{j,yy} \delta(t-t') \delta_{j,j'} \nonumber \\ \langle \delta x_j(t) \delta y_{j'}(t') \rangle & = & C_{j,xy} \delta(t-t') \delta_{j,j'} \nonumber \end{eqnarray} The amplitude and phase noise are characterized by the real numbers $C_{j,xx}, C_{j,yy} \geq 0$ and $C_{j,xy}$, which are proportional to laser power. The Cauchy-Bunyakovsky-Schwarz inequality dictates that $C_{j,xy}^2 \leq C_{j,xx} C_{j,yy}$. Note that $C_{j,xx} = 1$ or $C_{j,yy} = 1$ corresponds to the condition in which the laser's classical noise is equal to its quantum noise. The operator $\hat{\xi}_{j,\mathrm{in}}$ describes vacuum noise and obeys the same relations as $\hat{\xi}_{j}$. The intrinsic linewidth of the mechanical oscillator is $\gamma$, and $\hat{\eta}$ describes thermal noise obeying $\langle \hat{\eta}(t) \hat{\eta}^\dagger(t') \rangle \approx \langle \hat{\eta}^\dagger(t) \hat{\eta}(t') \rangle = n_\mathrm{th} \delta(t-t')$, where $n_\mathrm{th} \approx k_\mathrm{B}T/\hbar \omega_\mathrm{m}$ is the phonon number in the absence of laser driving. For sufficiently strong driving and weak optomechanical coupling, we can linearize the equations of motion by considering small fluctuations around an average cavity amplitude. We write \begin{equation} \label{eq:MeanFluct} \hat{a}_j(t) = e^{-i \Omega_{j} t} \left( \bar{a}_j + \hat{d}_j(t) \right) \end{equation} where \begin{equation} \label{eq:Mean} \bar{a}_j = \frac{\sqrt{\kappa_{j,\mathrm{ext}}} \, K_j}{\kappa_j/2 - i \Delta_j} \end{equation} and $\Delta_j = \Omega_j - \omega_j - g_j x_0$ is the laser detuning from the cavity resonance in the presence of a static membrane.
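For orientation, the bath occupation $n_\mathrm{th} \approx k_\mathrm{B}T/\hbar\omega_\mathrm{m}$ introduced above can be evaluated at the experimental operating point; a back-of-the-envelope sketch using only numbers quoted earlier:

```python
import math

kB   = 1.380649e-23          # Boltzmann constant (J/K)
hbar = 1.054571817e-34       # reduced Planck constant (J s)
T    = 0.4                   # base temperature (K)
f_m  = 261.15e3              # fundamental mechanical frequency (Hz)

# Thermal phonon number of the fundamental mode without laser cooling:
n_th = kB * T / (hbar * 2 * math.pi * f_m)   # ~3e4
```

Even at 400 mK the mode therefore begins some $3\times 10^4$ phonons from its ground state, consistent with the need for the laser cooling discussed above.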
Defining the dimensionless position operator $\hat{z} = \hat{c} + \hat{c}^\dagger$, the Fourier transform as $f^{(\dagger)}[\omega] = \int_{-\infty}^{\infty} d t \, e^{i \omega t} f^{(\dagger)}(t)$, and the susceptibilities \begin{equation} \label{eq:Susceptibilities} \chi_{j,\mathrm{c}}[\omega] = \frac{1}{\kappa_j/2 - i (\omega + \Delta_j)} \quad , \quad \chi_\mathrm{m}[\omega] = \frac{1}{\gamma/2 - i (\omega - \omega_\mathrm{m})} \ , \end{equation} the solution to the linearized equations can be expressed as \begin{eqnarray} \label{eq:SolutionEOM} \hat{d}_j[\omega] & = & \chi_{j,\mathrm{c}}[\omega] \Big(\zeta_j[\omega] - i \alpha_j \hat{z}[\omega] \Big) \\ \hat{z}[\omega] & = & \frac{1}{N[\omega]} \Bigg[\sqrt{\gamma} \left( \chi_\mathrm{m}^{-1 \, \ast}[-\omega] \hat{\eta}[\omega] + \chi_\mathrm{m}^{-1}[\omega] \hat{\eta}^\dagger[\omega] \right) \notag\\ &&- 2 \omega_\mathrm{m} \sum_j \left( \alpha_j^\ast \chi_{j,\mathrm{c}}[\omega] \zeta_j[\omega] + \alpha_j \chi_{j,\mathrm{c}}^\ast[-\omega] \zeta_j^\dagger[\omega] \right) \Bigg] \ . \end{eqnarray} We have introduced the effective coupling rates $\alpha_j = g_j x_\mathrm{zpf} \bar{a}_j$, the operators \begin{equation} \label{eq:zetadef} \zeta_j[\omega] = \sqrt{\kappa_{j,\mathrm{ext}}} \left[ \frac{1}{2} \left(\delta x_j[\omega] + i \delta y_j[\omega] \right) + \hat{\xi}_{j,\mathrm{in}}[\omega] \right] + \sqrt{\kappa_{j,\mathrm{int}}} \, \hat{\xi}_{j}[\omega] \ , \end{equation} and the function \begin{equation} \label{eq:Ndef} N[\omega] = \chi^{-1}_\mathrm{m}[\omega] \chi^{-1 \, \ast}_\mathrm{m}[-\omega] - 2 i \omega_\mathrm{m} \sum_j |\alpha_j|^2 \left( \chi_{j,\mathrm{c}}[\omega] - \chi^\ast_{j,\mathrm{c}}[-\omega] \right) \ . \end{equation} Eqs.~\eqref{eq:SolutionEOM} give the optical output field $\hat{a}_{j,\mathrm{out}}(t) = \sqrt{\kappa_{j,\mathrm{ext}}} \, \hat{a}_j(t) - \hat{a}_{j,\mathrm{in}}(t)$ from mode $j$.
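The output relation just given also fixes the on-resonance reflection used earlier to estimate the coupling ratio: for a noiseless resonant drive, $\bar{a}_\mathrm{out}/\bar{a}_\mathrm{in} = 2\eta - 1$ with $\eta = \kappa_\mathrm{ext}/\kappa$, so the fraction of power lost in the cavity is $1 - (2\eta - 1)^2$. Inverting the measured 64\% loss on the undercoupled branch reproduces the quoted $\kappa_\mathrm{ext}/\kappa = 0.2$; a one-line check:

```python
import math

loss = 0.64                            # measured on-resonance fractional power loss
# Solve 1 - (2*eta - 1)**2 = loss on the undercoupled branch (eta < 1/2):
eta = (1 - math.sqrt(1 - loss)) / 2    # = 0.2
```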
For later use, we calculate the average phonon number $n_\mathrm{m} = \langle \hat{c}^\dagger \hat{c} \rangle$. In the weak coupling limit $|\alpha_\mathrm{s}|, |\alpha_\mathrm{p}| \ll \kappa_\mathrm{s}, \kappa_\mathrm{p}$, one finds \begin{equation} \label{eq:19} n_\mathrm{m} = \frac{\gamma n_\mathrm{th} + \sum_j \gamma_{j} n_{j}}{\tilde{\gamma}} \ . \end{equation} Here, $\tilde{\gamma} = \gamma + \gamma_\mathrm{s} + \gamma_\mathrm{p}$ is the effective mechanical linewidth, and the optical contributions to it are given by \begin{equation} \label{eq:gammaOpt} \gamma_{j} = -4 |\chi_{j,\mathrm{c}}[\omega_\mathrm{m}]|^2 |\chi_{j,\mathrm{c}}[-\omega_\mathrm{m}]|^2 \Delta_j |\alpha_j|^2 \kappa_j \, \omega_\mathrm{m} \ . \end{equation} Furthermore, we define \begin{eqnarray} \label{eq:23} \gamma_{j} n_{j} &=& \frac{|\alpha_j|^2}{4} \Big\{ \kappa_{j,\mathrm{ext}} \Big[|B_{j,+}[\omega_\mathrm{m}]|^2 C_{j,xx} + |B_{j,-}[\omega_\mathrm{m}]|^2 C_{j,yy} \notag \\ & & + 2 \, \mathrm{Im} (B_{j,+}[\omega_\mathrm{m}] B^\ast_{j,-}[\omega_\mathrm{m}]) C_{j,xy} \Big] + \kappa_j |\chi_{j,\mathrm{c}}[-\omega_\mathrm{m}]|^2 \Big\} \ \end{eqnarray} with $B_{j,\pm}[\omega] = e^{-i \phi_j} \chi_{j,\mathrm{c}}[\omega] \pm e^{i \phi_j} \chi^\ast_{j,\mathrm{c}}[-\omega]$ and $e^{i \phi_j} = \alpha_j/|\alpha_j|$. Finally, we also note that the optical spring effect leads to an effective mechanical resonance frequency $\tilde{\omega}_\text{m} = \omega_\text{m} + \delta_\mathrm{s} + \delta_\mathrm{p}$, where \begin{equation} \label{eq:deltaOmegam} \delta_j = 2 |\chi_{j,\mathrm{c}}[\omega_\mathrm{m}]|^2 |\chi_{j,\mathrm{c}}[-\omega_\mathrm{m}]|^2 \Delta_j |\alpha_j|^2 [(\kappa_j/2)^2 - \omega_\mathrm{m}^2 + \Delta_j^2] \end{equation} is the shift due to mode $j$. \section{Toy example} To illustrate the role of technical noise in the optical sidebands, we consider a simplified example. 
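Before specializing to the toy example, the sign conventions in the damping and spring expressions above can be checked numerically. In this sketch, $\kappa$ and $\omega_\mathrm{m}$ are taken from the measurements, while the coupling $|\alpha_\mathrm{p}|^2$ is an invented illustrative value:

```python
import math

kappa   = 2*math.pi*120e3      # cavity loss rate (rad/s), as measured
omega_m = 2*math.pi*261.15e3   # mechanical frequency (rad/s)
alpha2  = (2*math.pi*1e3)**2   # |alpha_p|^2 (illustrative, not fitted)

def chi_c(omega, Delta):
    # cavity susceptibility from the definition above
    return 1.0 / (kappa/2 - 1j*(omega + Delta))

def gamma_p(Delta):
    # optical damping gamma_j, evaluated for the pump mode
    return (-4 * abs(chi_c(omega_m, Delta))**2 * abs(chi_c(-omega_m, Delta))**2
            * Delta * alpha2 * kappa * omega_m)

def delta_p(Delta):
    # optical spring delta_j, evaluated for the pump mode
    return (2 * abs(chi_c(omega_m, Delta))**2 * abs(chi_c(-omega_m, Delta))**2
            * Delta * alpha2 * ((kappa/2)**2 - omega_m**2 + Delta**2))

# A red-detuned pump (Delta < 0) damps, i.e. cools, the membrane, while a
# blue-detuned pump anti-damps it, as in the fits of Fig. 2b.
```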
We treat the optomechanical system classically, and focus on a single optical mode (omitting the index) with amplitude $a(t) = e^{-i \Omega t} (\bar{a} + d(t))$, where $d(t)$ are the classical fluctuations around a mean amplitude $\bar{a}$. In addition to neglecting vacuum noise, we also neglect laser phase noise and thermal noise of the mechanical bath. Finally, we consider the case where the cavity is driven on resonance, i.e.~$\Delta = 0$. The equations of motion are then \begin{eqnarray} \label{eq:ToyEOM} \dot{d} & = & - \frac{\kappa}{2} d - i \alpha z + \frac{\sqrt{\kappa_\mathrm{ext}}}{2} \delta x(t) \\ \dot{c} & = & - \left(\frac{\gamma}{2} + i \omega_\mathrm{m} \right) c - i \alpha \left(d + d^\ast \right) \end{eqnarray} with $\alpha$ real. Instead of considering white amplitude noise, we imagine that the amplitude of the drive is modulated at a frequency $\omega_\mathrm{n}$, such that $\delta x(t) = 2 \sqrt{C_{xx}} \cos \omega_\mathrm{n} t$. The optical force on the oscillator is then proportional to \begin{equation} \label{eq:ToyForce} d(t) + d^\ast(t) = 2 \sqrt{\kappa_\mathrm{ext} C_{xx} } \, |\chi_\mathrm{c}[\omega_\mathrm{n}]| \cos (\omega_\mathrm{n} t - \vartheta_\mathrm{n}) \end{equation} where the phase $\vartheta_\mathrm{n}$ is defined by $\chi_\mathrm{c}[\omega_\mathrm{n}] = |\chi_\mathrm{c}[\omega_\mathrm{n}]| e^{i \vartheta_\mathrm{n}}$. The dimensionless oscillator position becomes \begin{equation} \label{eq:ToyzOpt} z(t) = 2 \sqrt{\kappa_\mathrm{ext} C_{xx} } \, \alpha \, |\chi_\mathrm{c}[\omega_\mathrm{n}]| \Big[ \cos (\omega_\mathrm{n} t - \vartheta_\mathrm{n}) \mathrm{Im} \, \chi_\mathrm{m}[\omega_\mathrm{n}] - \sin (\omega_\mathrm{n} t - \vartheta_\mathrm{n}) \mathrm{Re} \, \chi_\mathrm{m}[\omega_\mathrm{n}] \Big] \end{equation} when assuming $\omega_\mathrm{n}$ is positive and close to $\omega_\mathrm{m}$, and $\omega_\mathrm{m}/\gamma \gg 1$.
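The filtered-force expression above can be checked by integrating the cavity equation of motion directly, dropping the small back-action term $-i\alpha z$; the parameter values below are arbitrary illustrative choices in units where $\kappa = 1$:

```python
import math

kappa, kappa_ext, Cxx, omega_n = 1.0, 0.8, 1.0, 0.7   # arbitrary units

# Euler integration of  d' = -(kappa/2) d + (sqrt(kappa_ext)/2) dx(t):
dt, t_end, t_settle = 1e-3, 150.0, 100.0
d, amp = 0.0, 0.0
for k in range(int(t_end / dt)):
    t = k * dt
    dx = 2 * math.sqrt(Cxx) * math.cos(omega_n * t)
    d += dt * (-0.5*kappa*d + 0.5*math.sqrt(kappa_ext)*dx)
    if t > t_settle:                 # steady state: track the max of d + d*
        amp = max(amp, abs(2 * d))

# Steady-state amplitude predicted by the filtered-force expression,
# 2*sqrt(kappa_ext*Cxx)*|chi_c[omega_n]| with chi_c = 1/(kappa/2 - i*omega):
analytic = 2*math.sqrt(kappa_ext*Cxx) / math.sqrt((kappa/2)**2 + omega_n**2)
```

For these parameters the integrated amplitude agrees with the analytic value to better than a percent.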
The real part of the mechanical susceptibility is a Lorentzian as a function of $\omega_\mathrm{n}$, whereas the imaginary part is antisymmetric around the mechanical frequency: \begin{equation} \label{eq:ToyRealImag} \mathrm{Re} \, \chi_\mathrm{m}[\omega_\mathrm{n}] = \frac{\gamma/2}{(\gamma/2)^2 + (\omega_\mathrm{n} - \omega_\mathrm{m})^2} \quad , \quad \mathrm{Im} \, \chi_\mathrm{m}[\omega_\mathrm{n}] = \frac{\omega_\mathrm{n} - \omega_\mathrm{m}}{(\gamma/2)^2 + (\omega_\mathrm{n} - \omega_\mathrm{m})^2} \ . \end{equation} As one would expect, the mechanical oscillation goes through a phase shift of $\pi$ as the modulation frequency $\omega_\mathrm{n}$ is swept through the mechanical resonance, and the oscillation is out of phase with the force at resonance $\omega_\mathrm{n} = \omega_\mathrm{m}$. We write the optical output amplitude $d_\mathrm{out}(t) = \sqrt{\kappa_\mathrm{ext}} d(t) - \delta x(t)/2$ as a sum of two terms, \begin{equation} \label{eq:ToydOut} d_\mathrm{out}(t) = d_{\mathrm{out},\delta x}(t) + d_{\mathrm{out},z}(t) \ , \end{equation} where $d_{\mathrm{out},\delta x}(t)$ is the amplitude for the reflected and cavity filtered signal $\delta x(t)$, whereas $d_{\mathrm{out},z}(t)$ is the part that comes from the motion of the mechanical oscillator. We define the output spectrum as \begin{equation} \label{eq:ToySpectrum} S[\omega] = \int_{-\infty}^\infty d \tau \, e^{i \omega \tau} \langle d_\mathrm{out}^\ast(t + \tau) d_\mathrm{out}(t) \rangle_\mathrm{time} \ , \end{equation} where $\langle \ \rangle_\mathrm{time}$ denotes averaging over the time $t$. The spectrum consists of three terms, $S[\omega] = S_{\delta x,\delta x}[\omega] + S_{z,z}[\omega] + S_{\delta x,z}[\omega]$. 
The first term is the spectrum of $d_{\mathrm{out},\delta x}(t)$, which becomes \begin{equation} \label{eq:ToySxx} S_{\delta x,\delta x}[\omega] = \frac{C_{xx}}{4} \big|\kappa_\mathrm{ext} \chi_\mathrm{c}[\omega_\mathrm{n}] - 1 \big|^2 \times 2 \pi \Big[\delta(\omega - \omega_\mathrm{n}) + \delta(\omega + \omega_\mathrm{n}) \Big] \ . \end{equation} The squared modulus describes the promptly reflected signal, the cavity-filtered signal, and their interference. The second term in $S[\omega]$ is the spectrum of $d_{\mathrm{out},z}(t)$, which is proportional to the position spectrum of the mechanical oscillator. We find \begin{equation} \label{eq:ToySzz} S_{z,z}[\omega] = \kappa_\mathrm{ext}^2 \alpha^4 C_{xx} |\chi_\mathrm{c}[\omega_\mathrm{n}]|^4 |\chi_\mathrm{m}[\omega_\mathrm{n}]|^2 \times 2 \pi \Big[\delta(\omega - \omega_\mathrm{n}) + \delta(\omega + \omega_\mathrm{n}) \Big] \ . \end{equation} This is proportional to the absolute square of the mechanical susceptibility, which has a Lorentzian dependence on $\omega_\mathrm{n}$, as one would expect from a damped and driven harmonic oscillator. Note also that $S_{z,z}[\omega]$ is symmetric in $\omega$ as is required of a spectrum of a real, classical variable \cite{Clerk2010Introduction}.
The last term in $S[\omega]$ results from optomechanical correlations between the modulation $\delta x$ and the oscillator position $z$: \begin{eqnarray} \label{eq:ToySxz} S_{\delta x,z}[\omega] & \equiv & \int_{-\infty}^\infty d \tau \, e^{i \omega \tau} \langle d^\ast_{\mathrm{out},z}(t + \tau) d_{\mathrm{out},\delta x}(t) + d^\ast_{\mathrm{out},\delta x}(t + \tau) d_{\mathrm{out},z}(t) \rangle_\mathrm{time} \notag\\ & = & \kappa_\mathrm{ext} \alpha^2 C_{xx} |\chi_\mathrm{c}[\omega_\mathrm{n}]|^2 \Big[ \left( \kappa_\mathrm{ext} |\chi_\mathrm{c}[\omega_\mathrm{n}]| \cos \vartheta_\mathrm{n} - \cos 2 \vartheta_\mathrm{n} \right) \mathrm{Re} \, \chi_\mathrm{m}[\omega_\mathrm{n}] \notag \\ & &- \left( \kappa_\mathrm{ext} |\chi_\mathrm{c}[\omega_\mathrm{n}]| \sin \vartheta_\mathrm{n} - \sin 2 \vartheta_\mathrm{n} \right) \mathrm{Im} \, \chi_\mathrm{m}[\omega_\mathrm{n}] \Big] \notag \\ & & \times 2 \pi \Big[\delta(\omega - \omega_\mathrm{n}) - \delta(\omega + \omega_\mathrm{n}) \Big] \ . \end{eqnarray} We see that this term depends on both the real and imaginary parts of the mechanical susceptibility. Note also that the term $S_{\delta x,z}[\omega]$ is antisymmetric in $\omega$. So far we considered amplitude modulation at a single frequency $\omega_\mathrm{n}$. In the case of white noise, there is amplitude modulation at all frequencies simultaneously. The spectrum in that case can be found by simply integrating the above spectrum over all frequencies $\omega_\mathrm{n}$. In the limit where the mechanical decay rate is small compared to the cavity decay rate, $\gamma \ll \kappa$, this gives a spectrum consisting of a noise floor, a Lorentzian $|\chi_\mathrm{m}[\omega]|^2$, and the antisymmetric function given by the imaginary part of the mechanical susceptibility. There are two important lessons to be learned from this calculation. 
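Before turning to these lessons, the structure of Eq.~\eqref{eq:ToySxz} can be checked numerically. The sketch below (in arbitrary units, and again assuming the zero-detuning cavity susceptibility $\chi_\mathrm{c}[\omega] = (\kappa/2 - i\omega)^{-1}$, which is not spelled out in this section) evaluates the coefficients multiplying $\mathrm{Re}\,\chi_\mathrm{m}$ and $\mathrm{Im}\,\chi_\mathrm{m}$:

```python
import cmath
import math

def chi_c(w, kappa):
    # zero-detuning cavity susceptibility (assumed form)
    return 1.0 / (kappa / 2 - 1j * w)

def sxz_coefficients(w_n, kappa, kappa_ext):
    # coefficients of Re(chi_m) and Im(chi_m) in the correlation term S_{dx,z}
    chi = chi_c(w_n, kappa)
    mod, theta = abs(chi), cmath.phase(chi)
    c_re = kappa_ext * mod * math.cos(theta) - math.cos(2 * theta)
    c_im = kappa_ext * mod * math.sin(theta) - math.sin(2 * theta)
    return c_re, c_im

kappa, w_n = 1.0, 0.7
_, c_im_crit = sxz_coefficients(w_n, kappa, kappa_ext=kappa)        # cancels
_, c_im_under = sxz_coefficients(w_n, kappa, kappa_ext=0.5 * kappa) # does not
```

For $\kappa_\mathrm{ext} = \kappa$ the $\mathrm{Im}\,\chi_\mathrm{m}$ coefficient (`c_im_crit`) vanishes identically for any $\omega_\mathrm{n}$, so only the symmetric Lorentzian part of the sideband survives, while an undercoupled cavity ($\kappa_\mathrm{ext} < \kappa$) retains a finite antisymmetric contribution.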
The first is that the sidebands of the optical output spectrum are not Lorentzian in general, but can also have an antisymmetric part due to optomechanical correlations. The second is that even if the antisymmetric parts are small or vanish (which happens, for example, when $\kappa_\mathrm{ext} = \kappa$ here) and the two sidebands are Lorentzian, one cannot necessarily conclude that an asymmetry between these peaks at zero detuning is due to the mechanical oscillator being in the quantum regime. An asymmetry between the Lorentzian peaks can also occur due to classical optomechanical correlations. In Section \ref{sec:SidebandWeights}, we will see that neglecting this effect can lead to an underestimation of the effective phonon number. \section{The heterodyne spectrum} \label{section-heterodyne} We now calculate the heterodyne spectrum that results from beating between the local oscillator beam and one of the beams entering the cavity. For this calculation, we need not specify whether it is the lock or the cooling beam that is used for readout. We will simply refer to it as the measurement beam below. To simplify the notation, we will drop the subscript ($\mathrm{s}$ or $\mathrm{p}$) on the operators and parameters that refer to the measurement beam. The other beam will not affect the heterodyne spectrum, except indirectly through the renormalized frequency, linewidth, and mean phonon number of the mechanical oscillator. We can thus omit this beam in the discussion below. The local oscillator beam is at the frequency $\Omega - \omega_\mathrm{if}$, where $\omega_\mathrm{if} > 0$ is the intermediate frequency between the measurement beam and the local oscillator.
Including the local oscillator, the external input mode is now \begin{equation} \label{eq:InputLO} \hat{a}_{\mathrm{in}}(t) = e^{-i \Omega t} \left[ K + \frac{1}{2} \Big(\delta x(t) + i \, \delta y(t) \Big) \right] \left(1 + \sqrt{r} \, e^{i (\omega_\mathrm{if} t + \theta)} \right) + \hat{\xi}_{\mathrm{in}}(t) \end{equation} where $r = (P_\mathrm{lo}/P) \times \omega_\mathrm{s}/(\omega_\mathrm{s} + \omega_\mathrm{if}) \approx P_\mathrm{lo}/P \gg 1$ is the ratio between the local oscillator power and the power of the beam used for measurement. The phase $\theta$ is not important here, as the spectrum will not depend on it. Since $\omega_\mathrm{if} \gg \omega_\mathrm{m}, \kappa$, the local oscillator does not affect the mechanical oscillator and we can assume that it is promptly reflected. The output mode can be expressed as $\hat{a}_\mathrm{out}(t) = e^{-i \Omega t} \left(\bar{a}_\mathrm{out}(t) + \hat{d}_\mathrm{out}(t) \right) $ where $\bar{a}_\mathrm{out}(t)$ describes the average amplitudes of the reflected beams, \begin{equation} \label{eq:OutMean} \bar{a}_\mathrm{out}(t) = - K \left( \rho + \sqrt{r} \, e^{i (\omega_\mathrm{if} t + \theta)} \right) \ , \end{equation} with $\rho = 1 - \kappa_{\mathrm{ext}}/(\kappa/2 - i \Delta)$. The first term describes the measurement beam, which can be attenuated by the interaction with the cavity if there is internal dissipation, i.e.~if $\kappa_{\mathrm{int}} \neq 0$. The second term describes the promptly reflected local oscillator. The fluctuations around these average amplitudes are given by \begin{equation} \label{eq:OutFluct} \hat{d}_\mathrm{out}(t) = \sqrt{\kappa_\mathrm{ext}} \hat{d}(t) - \frac{1}{2}\Big(\delta x(t) + i \, \delta y(t) \Big) \left(1 + \sqrt{r} \, e^{i (\omega_\mathrm{if} t + \theta)} \right) - \hat{\xi}_{\mathrm{in}}(t) \end{equation} where $\hat{d}(t)$ is given by Eq.~\eqref{eq:SolutionEOM}. The term proportional to $\sqrt{r}$ is the promptly reflected technical noise in the local oscillator beam.
To calculate the spectrum $S[\omega]$ of the photocurrent $i(t)$, we need to evaluate \begin{equation} \label{eq:SpectrumDef} S[\omega] = \lim_{T \rightarrow \infty} \frac{1}{T} \int_{-T/2}^{T/2} d t \, \int_{-\infty}^{\infty} d \tau \, e^{i \omega \tau} \, \overline{i(t) i(t + \tau)} \ , \end{equation} where the average involves an average over the photoelectron counting distribution \cite{Carmichael1993Open}, which itself is an ensemble average. The current-current correlation function can be expressed as \cite{Carmichael1987Spectrum} \begin{eqnarray} \label{eq:CurrentCorr} \overline{i(t) i(t + \tau)} = G^2 \Big( \sigma^2 \langle : \hat{I}(t) \hat{I}(t + \tau) : \rangle + \sigma \langle \hat{I}(t) \rangle \delta(\tau) \Big) \ , \end{eqnarray} where $\hat{I}(t) = \hat{a}^\dagger_\mathrm{out}(t) \hat{a}_\mathrm{out}(t)$, the colons indicate normal and time ordering, and $\sigma$ is the dimensionless detection efficiency. $G$ is the photodetector gain in units of charge, i.e.~the proportionality constant between the current and the number of photon detections per time. Although $G$ is in general frequency dependent, we will assume that it is approximately constant over an interval of the effective mechanical linewidth $\tilde{\gamma}$. The last term in Eq.~\eqref{eq:CurrentCorr} is due to self-correlation of photoelectric pulses (here we have assumed the detector has infinite bandwidth for simplicity). The flux operator $\hat{I}(t)$ has many terms, but we are only interested in the beating terms that oscillate at approximately the intermediate frequency $\omega_\mathrm{if}$. The noise in $\hat{I}(t)$ at the sidebands $\omega_\mathrm{if} \pm \omega_\text{m}$ has two contributions: beating between the average local oscillator beam and the fluctuations in the measurement beam, and beating between the average measurement beam and the noise in the local oscillator beam. Both of these contributions are proportional to $K \sqrt{r}$.
We let $S_\mathrm{rr}[\omega]$ denote the spectrum $S[\omega]$ at the red sideband, i.e.~around the frequency $\omega_\mathrm{r} = \omega_\mathrm{if} - \tilde{\omega}_\text{m}$. After a straightforward but tedious derivation, we find \begin{equation} \label{eq:SpectrumRed} S_\mathrm{rr}[\omega] = G_\mathrm{r}^2 \, \sigma \, r K^2 \left[ F_\mathrm{rr} + \frac{\tilde{\gamma} L_\mathrm{rr} + (\omega - \omega_\mathrm{r}) A_\mathrm{rr}}{(\tilde{\gamma}/2)^2 + (\omega - \omega_\mathrm{r})^2} \right] \ , \end{equation} where we have made the assumption of weak coupling $|\alpha| \ll \kappa$ and $G_\mathrm{r}$ is the gain at frequency $\omega_\mathrm{r}$. The spectrum consists of three terms. The first term is a constant noise floor, whose size is determined by the coefficient \begin{eqnarray} \label{eq:Frr} F_\mathrm{rr} & = & 1 + \frac{\sigma}{4} \Big[\left(|\rho|^2 + |\kappa_\mathrm{ext} \chi_\mathrm{c}[-\omega_\mathrm{m}] - 1|^2 \right) \left(C_{xx} + C_{yy} \right) \notag\\ & & - 2 \, \mathrm{Re} \big[\rho^\ast \left(\kappa_\mathrm{ext} \chi_\mathrm{c}[-\omega_\mathrm{m}] - 1\right)\left(C_{xx} + 2 i C_{xy} - C_{yy} \right) \big] \Big] \ . \end{eqnarray} The first term in \eqref{eq:Frr} is due to shot noise, and the other terms result from technical noise. As a sanity check, we note that for $\kappa_\mathrm{ext} = 0$ or for $|\Delta| \rightarrow \infty$, i.e.~when the measurement beam does not enter the cavity, this coefficient reduces to $F_\mathrm{rr} = 1 + \sigma C_{xx}$. This is independent of phase noise, as it should be since a photodetector cannot detect phase noise directly. The second term in Eq.~\eqref{eq:SpectrumRed} is a Lorentzian centered on the frequency $\omega_\mathrm{r}$ with a width equal to the mechanical linewidth $\tilde{\gamma}$. 
The coefficient of this term is \begin{eqnarray} \label{eq:Lrr} L_\mathrm{rr} & = & \sigma \kappa_\mathrm{ext} |\alpha|^2 \Big[ |\chi_\mathrm{c}[-\omega_\mathrm{m}]|^2 (n_\mathrm{m} + 1) + \mathrm{Re} \, \tilde{B}[\omega_\mathrm{m}]\Big] \end{eqnarray} with \begin{eqnarray} \label{eq:Btilde} \tilde{B}[\omega] & = & \frac{\kappa_\mathrm{ext}}{4} |\chi_\mathrm{c}[-\omega]|^2 e^{-i \phi} \Big[(C_{xx} + i C_{xy}) B_+[\omega] + (i C_{xy} - C_{yy}) B_-[\omega]\Big] \notag\\ & & - \frac{1}{4} \chi^\ast_\mathrm{c}[-\omega] e^{-i \phi} \Big[ (C_{xx} B_+[\omega] + i C_{xy} B_-[\omega])(1 + \rho) \notag\\ & & + (i C_{xy} B_+[\omega] - C_{yy} B_-[\omega])(1 - \rho) \Big] \end{eqnarray} and $B_\pm[\omega] = e^{-i \phi} \chi_\mathrm{c}[\omega] \pm e^{i \phi} \chi_\mathrm{c}^\ast[-\omega]$. The first term in (\ref{eq:Lrr}) is the contribution from the mechanical oscillator spectrum, whereas the second originates from optomechanical correlations between the oscillator position and the technical laser noise. The third term in the red sideband spectrum Eq.~\eqref{eq:SpectrumRed} is proportional to the imaginary part of the effective mechanical susceptibility and thus changes sign at $\omega_\mathrm{r}$. This antisymmetric term is absent if there is no technical laser noise. Its coefficient is \begin{eqnarray} \label{eq:Arr} A_\mathrm{rr} & = & 2 \sigma \kappa_\mathrm{ext} |\alpha|^2 \, \mathrm{Im} \, \tilde{B}[\omega_\mathrm{m}] \ .
\end{eqnarray} We now move on to the blue sideband at $\omega_\mathrm{b} = \omega_\mathrm{if} + \tilde{\omega}_\text{m}$ and denote the spectrum around this frequency by $S_\mathrm{bb}[\omega]$, finding \begin{equation} \label{eq:SpectrumBlue} S_\mathrm{bb}[\omega] = G_\mathrm{b}^2 \, \sigma \, r K^2 \left[ F_\mathrm{bb} + \frac{\tilde{\gamma} L_\mathrm{bb} + (\omega - \omega_\mathrm{b}) A_\mathrm{bb}}{(\tilde{\gamma}/2)^2 + (\omega - \omega_\mathrm{b})^2} \right] \ , \end{equation} where $G_\mathrm{b}$ is the photodetector gain at the frequency $\omega_\mathrm{b}$. The spectrum at the blue sideband has the same three terms as the red sideband, but with different coefficients. The noise floor is determined by \begin{eqnarray} \label{eq:Fbb} F_\mathrm{bb} & = & 1 + \frac{\sigma}{4} \Big[\left(|\rho|^2 + |\kappa_\mathrm{ext} \chi_\mathrm{c}[\omega_\mathrm{m}] - 1|^2 \right) \left(C_{xx} + C_{yy} \right) \notag\\ && - 2 \, \mathrm{Re} \big[\rho^\ast \left(\kappa_\mathrm{ext} \chi_\mathrm{c}[\omega_\mathrm{m}] - 1\right)\left(C_{xx} + 2 i C_{xy} - C_{yy} \right) \big] \Big] \ , \end{eqnarray} the coefficient of the Lorentzian term is \begin{eqnarray} \label{eq:Lbb} L_\mathrm{bb} & = & \sigma \kappa_\mathrm{ext} |\alpha|^2 \Big[ |\chi_\mathrm{c}[\omega_\mathrm{m}]|^2 n_\mathrm{m} - \mathrm{Re} \, \tilde{B}[-\omega_\mathrm{m}]\Big] \end{eqnarray} and the coefficient of the antisymmetric term is \begin{eqnarray} \label{eq:Abb} A_\mathrm{bb} & = & - 2 \sigma \kappa_\mathrm{ext} |\alpha|^2 \, \mathrm{Im} \, \tilde{B}[-\omega_\mathrm{m}] \ . \end{eqnarray} \section{Sideband weights} \label{sec:SidebandWeights} Let us define the sideband weights $W_\mathrm{rr}$ and $W_\mathrm{bb}$ as the frequency integral of the spectra $S_\mathrm{rr}[\omega] - S_{0,\mathrm{rr}}$ and $S_\mathrm{bb}[\omega] - S_{0,\mathrm{bb}}$, where $S_{0,\mathrm{rr}}$ and $S_{0,\mathrm{bb}}$ are the noise floors at the red and blue sidebands, respectively. 
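Carrying out the frequency integrals explicitly is straightforward: the Lorentzian part integrates to $\int_{-\infty}^{\infty} \tilde{\gamma}\, d\omega / [(\tilde{\gamma}/2)^2 + (\omega - \omega_\mathrm{r})^2] = 2\pi$, while the part odd in $\omega - \omega_\mathrm{r}$ integrates to zero as a principal value, so that

```latex
W_\mathrm{rr} = 2 \pi \, G_\mathrm{r}^2 \, \sigma \, r K^2 \, L_\mathrm{rr} \ ,
\qquad
W_\mathrm{bb} = 2 \pi \, G_\mathrm{b}^2 \, \sigma \, r K^2 \, L_\mathrm{bb} \ .
```

Apart from the gain factors, the ratio of the weights is therefore set by $L_\mathrm{bb}/L_\mathrm{rr}$.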
We also assume that the difference in gains at the red and blue sidebands is accounted for. The antisymmetric parts proportional to $A_\mathrm{rr}$ and $A_\mathrm{bb}$ will not contribute to the integral, and we find that the ratio of the sideband weights is \begin{equation} \label{eq:RatioSidebandWeights} \frac{W_\mathrm{bb}}{W_\mathrm{rr}} = \frac{ |\chi_\mathrm{c}[\omega_\mathrm{m}]|^2 n_\mathrm{m} - \mathrm{Re} \, \tilde{B}[-\omega_\mathrm{m}] }{ |\chi_\mathrm{c}[-\omega_\mathrm{m}]|^2 (n_\mathrm{m} + 1) + \mathrm{Re} \, \tilde{B}[\omega_\mathrm{m}] } \ . \end{equation} In the absence of technical laser noise, and at zero detuning $\Delta = 0$, this reduces to the Boltzmann weight, $W_\mathrm{bb}/W_\mathrm{rr} = n_\mathrm{m}/(n_\mathrm{m} + 1)$, as is well known \cite{Rae2007Theory,Marquardt2007Quantum}. In general, however, the ratio of the sideband weights does not provide a direct measure of the effective phonon number $n_\mathrm{m}$. To determine $n_\mathrm{m}$ by this method, one needs to know the detuning $\Delta$, the decay rates $\kappa, \kappa_\mathrm{ext}$, and the noise coefficients $C_{xx}$ etc.~to sufficient accuracy. To illustrate that one needs to be careful in this regard, let us for a moment assume that $\kappa_\mathrm{int} = \Delta = 0$ and that phase noise dominates, i.e.~$C_{xx} \ll C_{xy} , C_{yy}$. This gives \begin{equation} \label{eq:RatioExample} \frac{W_\mathrm{bb}}{W_\mathrm{rr}} = \frac{n_\mathrm{m} + C_{xy} |\chi_\mathrm{c}[\omega_\mathrm{m}]|^2 \kappa \omega_\mathrm{m}/2 }{n_\mathrm{m} + 1 + C_{xy} |\chi_\mathrm{c}[\omega_\mathrm{m}]|^2 \kappa \omega_\mathrm{m}/2} = \frac{n_\mathrm{est}}{n_\mathrm{est} + 1} \ , \end{equation} such that one would naively estimate the average phonon number to be $n_\mathrm{est} = n_\mathrm{m} + C_{xy} |\chi_\mathrm{c}[\omega_\mathrm{m}]|^2 \kappa \omega_\mathrm{m}/2$ if technical noise is neglected.
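To put numbers on this, the sketch below evaluates the naive estimate $n_\mathrm{est}$ for illustrative (not measured) parameter values, again assuming the zero-detuning susceptibility $\chi_\mathrm{c}[\omega] = (\kappa/2 - i\omega)^{-1}$:

```python
def n_estimate(n_m, c_xy, kappa, w_m):
    # naive phonon estimate n_est = n_m + C_xy |chi_c[w_m]|^2 kappa w_m / 2,
    # with chi_c[w] = 1/(kappa/2 - i w) assumed (zero detuning)
    chi_sq = 1.0 / ((kappa / 2) ** 2 + w_m ** 2)   # |chi_c[w_m]|^2
    return n_m + c_xy * chi_sq * kappa * w_m / 2

# resolved-sideband example in arbitrary units: w_m = 5 kappa
n_over = n_estimate(n_m=1.0, c_xy=+10.0, kappa=1.0, w_m=5.0)
n_under = n_estimate(n_m=1.0, c_xy=-10.0, kappa=1.0, w_m=5.0)
```

A positive cross-correlation inflates the estimate (`n_over` $\approx 1.99$), while a negative one of the same magnitude deflates it (`n_under` $\approx 0.01$): the same amount of technical noise can make the oscillator look either hotter or colder than it actually is.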
We see that, since the cross-correlation coefficient $C_{xy}$ can be negative, this can potentially lead to underestimating the phonon number. Note also that the absence of the phase noise coefficient $C_{yy}$ in this simple example crucially depends on the assumption of exactly zero detuning. \begin{figure} \includegraphics[width=\textwidth]{figure3.pdf} \caption{Laser noise and cooling limits. \textbf{a} Classical phase noise $C_{yy}$ of our cooling laser for $P_\text{p}$ = 1 \textmu W measured using the cold cavity as a reference, both with (blue) and without (red) the filter cavity. Near $\omega_\text{m}$, the unfiltered noise background corresponds to $C_{yy} = 200$~at 1 \textmu W. At nearby frequencies (e.g. 270 kHz) the filter cavity performs as expected, but membrane vibrations (at 261 kHz) and other technical noise added by our system cloud the measurement of $C_{yy}$ at other frequencies. The large peak at 263 kHz corresponds to an intentional, known phase modulation (applied with an EOM) that we use as a reference to calibrate this data. The unfiltered data was taken with $P_\text{lo} = 423$~\textmu W and $P_\text{p} = $ 1.5 \textmu W. The filtered data was taken with $P_\text{lo} = $~239 \textmu W, $P_\text{p} =$ 16.3 \textmu W. \textbf{b} Predicted phonon occupancy versus cooling laser power for zero (red), one (blue), and two (purple) passes through the filter cavity described in the text.} \label{fig3} \end{figure} \section{Discussion} The above analysis makes it clear that in order to reliably perform a calibrated heterodyne thermometry measurement, we must first develop an accurate characterization of the laser's classical noise. We have made some initial estimates using the experimental apparatus described above. It is straightforward to determine the amplitude noise $C_{xx}$ by directly measuring laser power fluctuations with a photodiode (and subtracting the shot noise and the photodiode's dark noise) \cite{Yang2011Progress}.
For our cooling laser, this yields a value $C_{xx}$ = 0.02 for laser power $P_\text{p} = 1$ \textmu W. We can estimate the phase noise $C_{yy}$ by using the optical circuit described above, and allowing the membrane cavity to serve as a reference. We do this by comparing the noise spectra of the laser light leaving the cryostat under two conditions: with the laser tuned far from the cavity resonance (so the signal photodiode is only sensitive to amplitude noise) and with the laser near resonance (so phase noise is converted to amplitude noise) \cite{Yang2011Progress}. Figure \ref{fig3}a shows a plot of the cooling laser's phase noise near $\omega_\text{m}$. The ``unfiltered'' (red) spectrum corresponds to the free-running cooling laser used in the experiment. A peak from the membrane's thermal motion, along with a known phase modulation peak at 263 kHz (used to calibrate this data), sits on top of a broad background arising from the cooling laser's intrinsic phase noise of $C_{yy} \approx 200$ at 1 \textmu W near $\omega_\text{m}$. The estimate shown in Fig. \ref{fig3}a assumes $C_{xy} = 0$ for simplicity, though letting $C_{xy}$ vary over the allowed range $\pm \sqrt{C_{xx}C_{yy}}$ only changes this estimate by a few percent. Given this estimate of the cooling laser's classical noise, we can estimate the fundamental limits of laser cooling with this system using Eq. \ref{eq:19} above. The curve labeled ``unfiltered'' in Fig. \ref{fig3}b shows the expected average phonon occupancy as a function of power for the free-running cooling laser. Also included in this calculation is a 1.5 \textmu W signal laser with $\Delta_\text{s} = 0$, $C_{xx} = 0.13$, $C_{xy} = 0$, and $C_{yy} = 780$. These values of $C_{xx}$ and $C_{yy}$ correspond to similar measurements of the signal laser, and we again assume $C_{xy} \approx 0$ (the result in Fig. \ref{fig3}b is insensitive to the value of $C_{xy}$).
The minimum phonon occupancy that could be achieved with the current cryogenic apparatus is $\sim 30$, corresponding to a temperature $\sim$ 375 \textmu K. In an effort to reduce the classical noise, we have inserted a filter cavity in the cooling laser's room-temperature beam path. This cavity has a resonance width $\kappa_\text{filter}/2\pi$ = 22 kHz, meaning the cooling laser's classical noise power should scale down by a factor $1 + 4\omega_\text{m}^2/\kappa_\text{filter}^2 \sim 500$. We lock the filter cavity to the free-running cooling laser and measure its noise again as shown in Fig. \ref{fig3}a. We observe the expected reduction at some frequencies near $\omega_\text{m}$ (e.g. 270 kHz), and attribute the remaining noise structure to our use of the acoustically-sensitive membrane cavity as the measurement reference. Nonetheless, the observation of filtered laser noise while locked to the cryogenic cavity is encouraging, and we expect the filter cavity to perform as predicted over the full spectrum in a vibration-isolated system. Once the filter cavity is locked to the cooling laser, it is straightforward to rotate the polarization of the output light and pass it through the filter cavity again with no additional feedback \cite{Hald2005Efficient}. This enables four poles of passive filtering, and would further reduce the cooling laser noise. Such a double-filtered cooling laser would allow the membrane to be laser cooled very close to its quantum mechanical ground state, as shown in Fig. \ref{fig3}b. \section{Acknowledgments} The authors acknowledge support from AFOSR (No. FA9550-90-1-0484), NSF 0855455, NSF 0653377, and NSF DMR-1004406. KB acknowledges financial support from The Research Council of Norway and from the Danish Council for Independent Research under the Sapere Aude program. The authors would also like to acknowledge helpful conversations and technical support from N. Flowers-Jacobs. \section{References}
\section{Introduction} The \ftime{generalized hypergeometric series} $ \texthyper{p}{q}{\alpha_1,\ldots,\alpha_{p}} {\beta_1,\ldots,\beta_q}{x} $ is defined by \begin{equation}\label{E:hser_pre} \hyper{p}{q}{\alpha_1,\alpha_2,\ldots,\alpha_{p}} {\beta_1,\beta_2,\ldots,\beta_q}{x} \coloneqq \sum_{n=0}^\infty \frac{(\alpha_1)_n(\alpha_2)_n\cdots(\alpha_p)_n} {(\beta_1)_n(\beta_2)_n\cdots(\beta_q)_n}\cdot\frac{x^n}{n!} \end{equation} for given non-negative integers $p$ and $q$, complex parameters $\alpha_1,\alpha_2,\ldots,\alpha_p$,\linebreak $\beta_1,\beta_2,\ldots,\beta_q$ and $x$ (see, e.g.,~\cite[\S2.1]{AAR}), where $(z)_0 \coloneqq 1$, $(z)_n \coloneqq z(z+1)(z+2)\cdots(z+n-1)$ $(n\geq 1)$ is the \ftime{Pochhammer symbol}. Using series~\eqref{E:hser_pre}, one can represent most of the elementary and special functions, as well as constants that arise in mathematics and physics (see, e.g.,~\cite{Olver2010}). Hence, this class of series has many applications in approximation theory and numerical analysis. Evaluating a~generalized hypergeometric series of the form~\eqref{E:hser_pre} can be a~very challenging numerical problem. First, its~convergence or divergence depends mainly on the values of the numbers $p$ and $q$. Second, the variety of the parameters~$\alpha_j, \beta_j, x$ leads to different types of the convergence or divergence of the series. It is worth mentioning that \ftime{sequence transformations} may be very useful numerical tools for the summation of the series given by~\eqref{E:hser_pre}. In the most recent edition of the book~\textit{Numerical Recipes} by Press et al., some classic sequence transformations are described involving summation of convergent or divergent series; see~\cite[\S 5.3]{NumRec}. Probably the best-known class of sequence transformations are Pad\'e approximants, which deal with partial sums of the power series and transform them to a~double-indexed sequence of rational approximants.
For further information on Pad\'e approximants, we refer to the books by Baker~\cite{Baker75} or by Baker and Graves-Morris~\cite{BakerGravesMorris}. As an alternative to the theory of Pad\'e approximants, one can use the sequence transformations, for which the fundamentals were given by Wimp \cite{Wimp}, and the extrapolation methods described by~Brezinski and Redivo Zaglia in~\cite{BrezinskiZaglia91} or by Sidi~\cite{Sidi03}. Most of the classic algorithms were also well summarized by Weniger in the~report~\cite{Weniger} and by Homeier in~\cite{Homeier}. Undoubtedly, these sequence transformations have many common properties with Pad\'e approximants, but in the case of generalized hypergeometric series they can be much more powerful tools. It is worth mentioning that evaluation of special functions with the help of convergence acceleration techniques is also considered in the book by~Olver et al.~\cite[\S3.9]{Olver2010}. In general, the success of convergence acceleration of the series~$\sum_{n=0}^\infty a_n$ very often depends on the behavior of the sequence of its \ftime{partial sums} $s_n \coloneqq \sum_{j=0}^{n-1} a_j$ and~\ftime{remainders} $r_n \coloneqq \sum_{j=0}^\infty a_{n+j}$. Supposing $s_n\to s$, one has $s=s_n+r_n$. We say that $\{s_n\}$ converges \ftime{linearly}, if $\lim_{n\to\infty} (s_{n+1}-s)/(s_n-s)\eqqcolon\varsigma$ and $0<\abs\varsigma<1$. In the case of $\varsigma=1$, we have \ftime{logarithmic} convergence, which is usually the most difficult to accelerate. If~$p-q=1$, then the series~\eqref{E:hser_pre} may be extremely slowly convergent and practically unusable. Therefore, the methods of convergence acceleration of~such series are particularly important. For instance, in order to use any of the so-called \ftime{Levin-type} sequence~transformations, one should first provide good estimates $\omega_n$ of the remainders $r_n$; see, e.g., \cite{Homeier}. This can be achieved with the help of recent results given by Willis in~\cite{Willis2012}.
Namely, he analyzed the following asymptotic relation for the remainders of the~series~\eqref{E:hser_pre} with $p-q=1$: \begin{equation} s-s_n = r_n \sim \mu x^n n^\lambda \sum_{k=0}^\infty \frac{c_k}{n^k}. \end{equation} He derived a~recurrence relation for the coefficients $c_k$, which gives the ability to approximate the remainders $r_n$ up to the desired order. Willis combined this asymptotic property with a~classic technique of extrapolation and obtained a~numerical algorithm computing a~two-dimensional array containing approximations of the limit of the series. Namely, the truncated estimate \[ \transf\omega nm \coloneqq x^n n^\lambda \sum_{k=0}^{m-1} \frac{c_k}{n^k}, \qquad m\in\field{N}, \] is such that \[ s = s_n + \mu \, \transf\omega nm + \bigO{x^n n^{\lambda-m}} \] and thus the new approximation defined by the weighted average \[ \transf snm \coloneqq \frac{\transf \omega{n+1}m s_n - \transf\omega nm s_{n+1}}{\transf\omega{n+1}m-\transf\omega nm} \] is a~better approximation of the limit $s$ than the partial sum $s_n$, in the sense that the sequence $\transf snm$ converges to the limit $s$ faster than the sequence of partial sums $s_n$. It is worth remarking that Willis' method is very efficient at the branch point $x=1$. However, it does not seem to provide good numerical properties for $x\approx 1$. In this paper, we continue the analysis of the $\mathscr{Q}$ transformation, introduced by us in~\cite{WoznyNowak}, applied to the summation of~generalized hypergeometric series~\eqref{E:hser_pre} with~$p-q=1$.
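To illustrate the effect of the weighted average on a concrete $p-q=1$ case, consider $\texthyper{2}{1}{1,1}{2}{x} = -\ln(1-x)/x$, for which $\lambda = -1$. The sketch below uses only the crudest one-term estimate $x^n n^{-1}$ (i.e., $\transf\omega n1$ with $c_0=1$, not Willis' full recursion for the $c_k$):

```python
from math import log

x = 0.9
exact = -log(1 - x) / x                 # 2F1(1,1;2;x) = -ln(1-x)/x
a = lambda n: x**n / (n + 1)            # series terms

N = 40                                  # number of terms summed
s = [0.0]
for n in range(N):
    s.append(s[-1] + a(n))              # s[n] = partial sum of n terms

omega = lambda n: x**n / n              # leading remainder estimate, lambda = -1

def accelerated(n):
    # weighted average eliminating the leading term of the remainder r_n
    w0, w1 = omega(n), omega(n + 1)
    return (w1 * s[n] - w0 * s[n + 1]) / (w1 - w0)

err_plain = abs(s[N] - exact)
err_accel = abs(accelerated(N - 1) - exact)
```

Even this one-term estimate reduces the error by more than an order of magnitude here; including further $c_k/n^k$ corrections improves the accuracy order by order in $1/n$.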
Following the notation given in~\cite{WoznyNowak}, we consider the series \begin{equation}\label{E:hser} \sum_{n=0}^\infty a_n \qquad\text{with}\qquad a_n \coloneqq \frac{(\alpha_1)_n(\alpha_2)_n\cdots(\alpha_p)_n} {(\beta_1)_n(\beta_2)_n\cdots(\beta_p)_n}\,x^n \quad (x\in\fC), \end{equation} where $\boldsymbol\alpha\coloneqq (\alpha_1,\alpha_2,\ldots,\alpha_p)$, $\boldsymbol\beta\coloneqq (\beta_1,\beta_2,\ldots,\beta_p)$ are vectors of~complex parameters. Notice that the~series~\eqref{E:hser} corresponds to the following generalized hypergeometric series \begin{equation*} \hyper{p+1}{p} {\alpha_1,\alpha_2,\ldots,\alpha_p,1} {\beta_1,\beta_2,\ldots,\beta_p} {x} \!, \end{equation*} which is the case of~the series~\eqref{E:hser_pre} with $p-q=1$. Let us remark that the $\mathscr{Q}$ transformation can be applied to the series~\eqref{E:hser_pre} also in the case with none of the upper parameters being equal to $1$, since one can always use the following obvious relation \[ \hyper{r}{s}{\alpha_1,\alpha_2,\ldots,\alpha_r}{\beta_1,\beta_2,\ldots,\beta_s}{x} = \hyper{r+1}{s+1}{\alpha_1,\alpha_2,\ldots,\alpha_r,1}{\beta_1,\beta_2,\ldots,\beta_s,1}{x}. \] The main purpose of this paper is to give more theoretical properties of the $\mathscr{Q}$ transformation. For a~detailed comparison with other methods of convergence acceleration, we refer to~\cite{WoznyNowak} and~\cite{Wozny2010}, where a~number of numerical examples involving many classic and recent sequence transformations, such as Aitken's iterated $\Delta^2$ process \cite{Aitken26}, Wynn's $\varepsilon$-algorithm \cite{Wynn56}, $t$ and $u$ variants of Levin \cite{Levin73} and Weniger \cite{Weniger92} transformations, Homeier's transformations \cite{Hom94}, the method proposed by Lewanowicz and Paszkowski \cite{LewanowiczPaszkowski}, the method proposed by {\v{C}{\'i}{\v z}ek} et~al.
\cite{CZS} and the method of Paszkowski \cite{Paszkowski08}, were given; see also Brezinski's review of convergence acceleration techniques \cite{Brezinski00}. The convergence of~the series~\eqref{E:hser} depends on the parameters $\boldsymbol\alpha$, $\boldsymbol\beta$ and the complex number $x$. If $\abs x<1$, then the series converges absolutely. On the unit circle, the convergence is more subtle and depends on the real part of the parameter $\sigma \coloneqq 1+\sum_{i=1}^p \alpha_i - \sum_{i=1}^p \beta_i$. Namely, the series~\eqref{E:hser} converges at $x=1$, if~$\Re\sigma < 0$, as well as for $\abs x=1$ ($x\neq 1$), if $\Re\sigma < 1$. Many other mathematical properties of the generalized hypergeometric series ${}_{p+1}F_p$ can be found in such classic sources as \cite{Abramowitz}, \cite{AAR}, \cite{Magnus} or~\cite{Slater}. It is also worth mentioning the recent research of Miller and Paris \cite{MillerParis2011,MillerParis2012,MillerParis2013}, Rathie and Paris \cite{RathieParis2013} and Kim et al.~\cite{KimRathieParis2014}, where certain transformations were proposed to simplify the so-called \textit{order} of generalized hypergeometric series. Several refinements of such a~class of series can also be found in the very recent work of Wang \cite{Wang2016}. In~\cite{WoznyNowak}, the authors introduced the technique of~summation of some slowly convergent series, which proved to be very efficient also in the case of the generalized hypergeometric series. The proposed $\mathscr{Q}^{(m)}$ transformation is defined by a~certain linear difference operator $\opL{m}_n$ in the following way: \begin{equation}\label{E:Tnw:L} \Tnw nm \coloneqq \frac{\opL{m}_n(s_n)}{\opL{m}_n(1)}, \qquad m\in\field{N}. \end{equation} Here, and in the sequel, every difference operator acts upon $n$ and not upon $m$. The meaning of the linear operator $\opL{m}_n$ is that it annihilates a~finite part of the remainder $r_n$.
More precisely, it is required that \begin{equation} \label{E:Lm:sat} \opL{m}_n\left(a_n+a_{n+1}+\ldots+a_{n+m-1}\right)= 0. \end{equation} In the general case, for~an arbitrary sequence $a_n$, the computation of~the quantities $\Tnw nm$ is rather complicated and the numerical version of the algorithm is recommended; see~\cite{Wozny2010}. However, in the case of the series~\eqref{E:hser}, one can compute the quantities $\Tnw nm$ in a~very efficient way, i.e., using the~algorithm involving certain recurrence formulas for numerators and denominators in~\eqref{E:Tnw:L}; cf.~\cite[Alg.~1, Thm.~2]{WoznyNowak}. For~convenience of~reference, we briefly summarize the main results from~\cite{WoznyNowak} in the case of the series~\eqref{E:hser}. For a~given vector $\boldsymbol\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_p)$, we use the~following shorthand notation \begin{equation} \label{E:mult} \mult{\gamma}_n\coloneqq\prod_{j=1}^p (\gamma_j)_n,\qquad n\in\field{N}\cup\{0\}, \end{equation} and write $a_n = \mult{\alpha}_n/\mult{\beta}_n\,x^n$. The operators $\opL{m}_n$ satisfying \eqref{E:Lm:sat} can be written in the form \begin{equation} \label{E:Lm:p1Fp} \opL{m}_n = \opDelta^{mp}\left(\frac{\mult{\beta}_{n+m-1}}{\mult{\alpha}_{n}\,x^{n}} \opID\right),\qquad m\in\field{N}; \end{equation} see~\cite[Eq.~(3.4)]{WoznyNowak}. Here, the \ftime{forward difference operator} $\opDelta$ and the~\ftime{identity operator} $\opID$ are defined according to $\opDelta z_n \coloneqq z_{n+1}-z_n$ and $\opID z_n \coloneqq z_n$, respectively; higher powers of~the~operator $\opDelta$ are defined recursively, i.e., $\opDelta^0 z_n \coloneqq z_n$ and $\opDelta^m z_n \coloneqq \opDelta( \opDelta^{m-1} z_n)$, $m\in\field{N}$.
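The annihilation property~\eqref{E:Lm:sat} is easy to verify numerically. The sketch below (for $p = 1$ and arbitrarily chosen $\alpha_1$, $\beta_1$, $x$; the specific values carry no significance) applies $\opDelta^{mp}$ to the bracketed quantity in~\eqref{E:Lm:p1Fp} acting on $a_n + \cdots + a_{n+m-1}$, which is a polynomial of degree $m-1$ in $n$ and therefore vanishes up to rounding:

```python
from math import comb

def poch(z, n):
    # Pochhammer symbol (z)_n
    r = 1.0
    for k in range(n):
        r *= z + k
    return r

# p = 1 instance of the series: a_n = (alpha)_n/(beta)_n x^n
alpha, beta, x, m = 0.7, 1.9, 0.3, 3
a = lambda n: poch(alpha, n) / poch(beta, n) * x**n

def bracketed(n):
    # (beta)_{n+m-1} / ((alpha)_n x^n) * (a_n + ... + a_{n+m-1})
    weight = poch(beta, n + m - 1) / (poch(alpha, n) * x**n)
    return weight * sum(a(n + k) for k in range(m))

def forward_difference(f, order, n):
    # Delta^order f evaluated at n
    return sum((-1) ** (order - j) * comb(order, j) * f(n + j)
               for j in range(order + 1))

residual = forward_difference(bracketed, m, 2)   # L^(m)_n at n = 2
scale = max(abs(bracketed(2 + j)) for j in range(m + 1))
```

Here `residual/scale` is at the level of machine precision, confirming that $\opL m_n$ annihilates the first $m$ terms of the remainder.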
From a~computational point of view, it is worth noting that the operators $\opL{m}_n$ can also be written in the factored form $\opL{m}_n = \opP{m}_n\opP{m-1}_n\cdots\opP{1}_n$, where the operators $\opP m_n$ are defined by \begin{subequations} \label{E:Pm:q1Fq} \begin{align} \opP{1}_n &\coloneqq \opDelta^p\left(\frac{\mult{\beta}_n}{\mult{\alpha}_n}\,x^{-n} \opID\right), \\ \label{E:Pm:q1Fq:b} \opP{m}_n &\coloneqq \sum_{j=0}^{p} \binom{mp}{j} \left[\opDelta^j\prod_{i=1}^{p} \left({\beta_i+n+m(p+1)-j-2}\right) \right]\opDelta^{p-j},\wideversion{\qquad}{\quad} m\geq 2. \end{align} \end{subequations} Thus, the quantities $\Tnw nm$ can be computed using the following recursive scheme (see~\cite[Alg.~1]{WoznyNowak}): \begin{subequations}\label{E:alg} \begin{alignat}{3} \opN 0n &\coloneqq s_n,&\qquad \opD 0n &\coloneqq 1,\\ \opN mn &\coloneqq \opP m_n(\opN{m-1}{n}),&\qquad \opD mn &\coloneqq \opP m_n(\opD{m-1}{n}),&\qquad m&\geq 1,\\ \Tnw{n}{m} &= \frac{\opN{m}{n}}{\opD mn}. \end{alignat} \end{subequations} From eqs.~\eqref{E:Pm:q1Fq} and~\eqref{E:alg}, one may conclude that \begin{equation}\label{E:Tnw:transformation} \Tnw nm = \Tnw nm( s_n, s_{n+1}, \ldots, s_{n+\ell(m)} ), \end{equation} where $\ell(m) = mp$, which means that $\mathscr{Q}^{(m)}$ transforms the sequence $\{s_n\}_{n=0}^\infty$ to the sequence $\{\Tnw nm\}_{n=0}^\infty$ whose $n$-th element depends on $s_n, s_{n+1}, \ldots, s_{n+mp}$. Let us remark that Levin-type sequence transformations produce the doubly indexed quantities \[ \transf{\mathcal L}nm \coloneqq \transf{\mathcal L}nm( \{\omega_n\}, s_n, s_{n+1}, \ldots, s_{n+\ell(m)} ), \] depending both on the partial sums and on the sequence of remainder estimates~$\{\omega_n\}$ (see, e.g., \cite{Homeier}), with such relationships as $\ell(m) = m$, $\ell(m) = m+1$, $\ell(m) = 2m$ or $\ell(m) = 3m$; see, e.g., \cite[\S2.7]{Weniger}.
For example, the classic variants of the Levin transformation have $\ell(m) = m$ and involve the following choices of remainder estimates: $\omega_n = a_n$, $\omega_n = a_{n+1}$, $\omega_n = (n+1) a_n$ or $\omega_n = \frac{a_n a_{n+1}}{a_n - a_{n+1}}$; see~the paper of Levin \cite{Levin73} and the work of Smith and Ford \cite{SmithFord79}. The advantage of the $\mathscr{Q}^{(m)}$ transformation is that the information about the remainder estimates $\omega_n$ is a~priori encoded in the analytic form of the~operators~$\opP m_n$, given by the explicit formulas~\eqref{E:Pm:q1Fq}. It should be remarked that the~operators~\eqref{E:Pm:q1Fq} make the~transformation $\mathscr{Q}^{(m)}$ a~very powerful numerical tool for the summation of the generalized hypergeometric series \eqref{E:hser_pre} with $p-q=1$. Some theoretical properties of~the $\mathscr{Q}^{(m)}$ transformation were also given in~\cite{WoznyNowak}. In the case of the series~\eqref{E:hser}, we can summarize them as follows. If $p=1$, then the $\mathscr{Q}^{(m)}$ transformation is~equivalent to Wynn's $\varepsilon$ algorithm \cite{Wynn56} in the sense that $\Tnw{n}{m} = \varepsilon_{2m}^{(n)}$, which follows from the general property given in~\cite[\S2.3]{WoznyNowak}. It is worth mentioning that the explicit formula for $\varepsilon_{2m}^{(n)}$ in the case of $\texthyper{2}{1}{\alpha,1}{\beta}{x}$ has already been given by Sidi in~\cite[Ex.~2]{Sidi81} (see also \cite[\S17.3]{Sidi03}). This relation between the $\mathscr{Q}^{(m)}$ and $\varepsilon$ transformations does not hold for $p>1$. Supposing that the series~\eqref{E:hser} is convergent, we also know that~the $\mathscr{Q}^{(m)}$ transformation is \ftime{regular} for all $m\in\field{N}$, if~$x\neq 1$, i.e., \[ \lim_{n\to\infty}\Tnw{n}{m}\left(s_n\right) = \hyper{p+1}{p}{\alpha_1,\alpha_2,\ldots,\alpha_p,1} {\beta_1,\beta_2,\ldots,\beta_p}{x}; \] see~\cite[Thm.~5]{WoznyNowak}.
What is more, it possesses the following asymptotic behavior: \[ \hyper{p+1}{p}{\alpha_1,\alpha_2,\ldots,\alpha_p,1} {\beta_1,\beta_2,\ldots,\beta_p}{x}- \Tnw{n}{m}\left(s_n\right) =\bigO{x^{{n+m(p+1)}}}, \qquad x\rightarrow 0; \] see~\cite[Thm.~6]{WoznyNowak}. Moreover, numerous numerical tests show that the sequence $\Tnw nm$ not only converges to the limit of the series, but also converges faster and faster as $m$ increases. The~paper is organized as follows. In Section \ref{S:new}, we give some new properties of the $\mathscr{Q}^{(m)}$ transformation in the case of the series~\eqref{E:hser}, including the main result, which is the convergence acceleration theorem. Later, in~Section~\ref{S:experiments}, we give some numerical examples and, in Section~\ref{S:problems}, discuss further problems concerning theoretical properties of the $\mathscr{Q}^{(m)}$ transformation. \section{Main result}\label{S:new} Let us consider~the~transformation $\Tnw{}m$, defined~in~\eqref{E:Tnw:L}, in~the case of the series~\eqref{E:hser}. Following~\cite{WoznyNowak}, we define the functions $\transf{\lambda}{j}{m}(n) \equiv \transf{\lambda}{j}{m}(n,p,x,\boldsymbol\alpha,\boldsymbol\beta)$ by \begin{equation*} \transf{\lambda}{j}{m}(n) \coloneqq \wideversion{% \frac{\mult{\alpha}_{n+mp}\,x^{n+mp}}{\mult{\beta}_{n+m-1}} \left[ (-1)^{mp-j}\binom{mp}{j} \frac{\mult{\beta}_{n+j+m-1}}{\mult{\alpha}_{n+j}\,x^{n+j}} \right], \quad j=0,1,\ldots,mp, }{% (-1)^{mp-j}\binom{mp}{j} \frac{\mult{\alpha}_{n+mp}\,x^{n+mp}}{\mult{\beta}_{n+m-1}} \frac{\mult{\beta}_{n+j+m-1}}{\mult{\alpha}_{n+j}\,x^{n+j}},\; j=0,1,\ldots,mp, } \end{equation*} and thus, using~\eqref{E:Lm:p1Fp}, we can write the transformation~$\Tnw{}m$ as follows: \begin{equation} \label{E:Q:lambda} \Tnw{n}{m} = \frac{\optr Lm(s_n)}{\optr Lm(1)} = \frac{\displaystyle \sum_{j=0}^{mp}{\transf{\lambda}{j}{m}(n)}\, s_{n+j}}{\displaystyle \sum_{j=0}^{mp}{\transf{\lambda}{j}{m}(n)}}.
\end{equation} Since \begin{equation}\label{E:lambda} \transf{\lambda}{j}{m}(n) = \binom{mp}{j}(-x)^{mp-j}\,\prod_{i=1}^p \left[ (\alpha_i+n+j)_{mp-j}(\beta_i+n+m-1)_j \right] \end{equation} (see~\cite[Eq.~(3.11)]{WoznyNowak}), the quantity $\Tnw nm$ is a~linear combination of the quantities $s_n$, $s_{n+1}$, \ldots, $s_{n+mp}$ with~coefficients~$\transf\lambda jm(n)$ being polynomials of degree $mp^2$ in~$n$. In the~lemma~below, we~express the~element $\Tnw nm$ in terms of~$s_n$ and~$a_n, a_{n+1}, \ldots, a_{n+mp-1}$. In the sequel, we use the following polynomials in~$n$: \begin{equation}\label{E:M} {M^{(m)}(n)} \equiv M^{(m)}_0(n), \qquad {M^{(m)}_k(n)} \coloneqq \sum_{j=k}^{mp}\transf{\lambda}{j}{m}(n), \qquad k=0,1,\ldots,mp. \end{equation} \begin{lemma}\label{L:Q:a} The quantity $\Tnw nm$ can be written as the following linear combination involving the partial sum $s_n$ and the terms $a_n$, $a_{n+1}$, \ldots, $a_{n+mp-1}$: \begin{equation} \Tnw{n}{m} = s_n+\sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} \, a_{n+k}, \qquad m,n\in\field{N}. \end{equation} \end{lemma} \begin{proof} First, let us observe that \( \Tnw{n}{m} = \sum_{j=0}^{mp}\transf{\lambda}{j}{m}(n) s_{n+j}/M^{(m)}(n). \) Second, we have \begin{multline*} \Tnw{n}{m} =\frac{\displaystyle \sum_{j=0}^{mp}\transf{\lambda}{j}{m}(n)\left(s_n+a_n+a_{n+1}+\ldots+a_{n+j-1}\right)}{M^{(m)}(n)}\\ =\frac{\displaystyle \sum_{j=0}^{mp}\transf{\lambda}{j}{m}(n) s_n+\sum_{k=0}^{mp-1} \left(\sum_{j=k+1}^{mp}\transf{\lambda}{j}{m}(n)\right)a_{n+k}}{M^{(m)}(n)} =s_n+\sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{{M^{(m)}(n)}} a_{n+k}. \end{multline*} \end{proof} Now, we are going to give some theoretical results about the convergence acceleration performed by the $\mathscr{Q}$ transformation. The first theorem gives a~necessary and sufficient condition for convergence acceleration.
Next, we show that this condition holds under a~certain assumption which is discussed later --- in Subsection~\ref{comments}. The statement concerning the convergence acceleration of the linearly convergent series~\eqref{E:hser} is given thereafter. We also analyze the convergence at the points $x=\pm 1$. \begin{theorem}\label{T} Consider the series~\eqref{E:hser} with partial sums $s_n$ converging to $s$. The transformation $\mathscr{Q}^{(m)}$ accelerates the convergence of $\{s_n\}$, i.e., \begin{equation} \lim_{n\to\infty}\frac{\Tnw nm - s}{s_n-s} = 0, \qquad m\in\field{N}, \end{equation} if and only if \begin{equation}\label{E:T:cond} \lim_{n\to\infty} \sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} \cdot \frac{a_{n+k}}{r_n} = 1, \end{equation} where $M^{(m)}_k(n)$ and $M^{(m)}(n)$ are defined by~\eqref{E:M}. \end{theorem} \begin{proof} The theorem follows immediately from Lemma~\ref{L:Q:a}, since \begin{equation*} \lim_{n\to\infty}\frac{\Tnw nm - s}{s_n-s} = 1 - \lim_{n\to\infty}\left(\frac{1}{r_n} \sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} a_{n+k}\right). \end{equation*} \end{proof} In the next theorem, we show that the condition~\eqref{E:T:cond} is fulfilled for a~certain class of convergent hypergeometric series~\eqref{E:hser}. Let us remark that the further properties hold under the following assumption. \begin{assumption}\label{asmpt} We assume that the remainders of the~series~\eqref{E:hser} satisfy \begin{equation*} \lim_{n\to\infty} \frac{r_{n+1}}{r_n} = x. \end{equation*} \end{assumption} \begin{theorem}\label{T:M:a} If~Assumption~\ref{asmpt} holds and $x\neq 1$, then the condition \eqref{E:T:cond} is satisfied for all $m\in\field{N}$, and thus the transformation $\mathscr{Q}^{(m)}$ accelerates the convergence of $\{s_n\}$.
\end{theorem} \begin{proof} Let $\transf cjm$ be the~coefficient of the term $n^{mp^2}$ in~$\transf\lambda jm(n)$, i.e., \[ \transf cjm \coloneqq \binom{mp}{j}(-x)^{mp-j}, \quad j=0,1,\ldots,mp; \] cf.~\eqref{E:lambda}. Using the binomial theorem, one can check that the following two relationships hold: \begin{equation} \label{E:sum_cj} \sum_{j=0}^{mp} \transf cjm = (1-x)^{mp}, \qquad \sum_{j=0}^{mp} \transf cjm \, x^j = 0. \end{equation} Therefore, if~$x\neq 1$, the polynomial $\transf M{}m(n)$, given by~\eqref{E:M}, is~exactly of~degree $mp^2$. This yields \[ \lim_{n\to\infty} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} = \sum_{j=k+1}^{mp} c^{(m)}_j \bigg/ \sum_{j=0}^{mp} c^{(m)}_j = (1-x)^{-mp} \sum_{j=k+1}^{mp} c^{(m)}_j. \] Using the fact that $a_{n+k} = r_{n+k}-r_{n+k+1}$ and Assumption~\ref{asmpt}, we obtain \begin{multline*} \lim_{n\to\infty} \sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} \cdot \frac{a_{n+k}}{r_n} = (1-x)^{-mp} \sum_{k=0}^{mp-1} \sum_{j=k+1}^{mp} c^{(m)}_j (x^{k}-x^{k+1}) \\ = (1-x)^{-mp} \left( \sum_{j=0}^{mp} c^{(m)}_j - \sum_{j=0}^{mp} c^{(m)}_j x^j \right). \end{multline*} Now, the result follows from~\eqref{E:sum_cj}. \end{proof} \subsection{Some comments}\label{comments} Let us note that Assumption~\ref{asmpt} holds for every series~\eqref{E:hser} with $\abs{x}<1$, which implies linear convergence. It follows directly from~\cite[Thm.~1, p.~6]{Wimp}, which yields \[ x = \lim_{n\to\infty} \frac{a_{n+1}}{a_n} = \lim_{n\to\infty} \frac{r_{n+1}}{r_n}, \] if $\abs x<1$. This gives the theoretical explanation of the convergence acceleration performed by~the $\mathscr{Q}^{(m)}$ transformation in the case of linearly convergent series~\eqref{E:hser}. Assumption~\ref{asmpt} is also satisfied if $x=1$ and all the terms $a_n$ are real and have the same sign; see~\cite[Thm.~2, p.~26]{Clark68}. In this case, the series~\eqref{E:hser} converges logarithmically.
It is quite remarkable that the transformation $\mathscr{Q}^{(m)}$ seems to be very efficient also in such a~case. However, we cannot use Theorem~\ref{T:M:a} for $x=1$ in order to justify that~the $\mathscr{Q}^{(m)}$ transformation leads to an~acceleration of the convergence. Let us remark that Assumption~\ref{asmpt} holds also for a~certain class of the series~\eqref{E:hser} with $x=-1$. Namely, from \cite[Thm.~3, p.~26]{Clark68}, we obtain that it is satisfied if $\mult{\alpha}_n/\mult{\beta}_n$ decreases monotonically to zero and \begin{equation}\label{E:alt:cond} \lim_{n\to\infty} \frac{1+t_{n+1}}{1+t_n}=1, \end{equation} where $t_n \coloneqq a_{n+1}/a_n$. In the next section, we give some comments and numerical examples showing the consequences of~the given theorems. \section{Experiments}\label{S:experiments} Since the main purpose of this paper is to give the theoretical properties of the $\mathscr{Q}^{(m)}$ transformation, we refer to~\cite{WoznyNowak} and~\cite{Wozny2010} for numerical examples displaying, among other things, a~detailed comparison with many classic methods of convergence acceleration. However, we would like to give some new examples in order to illustrate the conclusions of the theory given in Section~\ref{S:new}. We use the $\mathscr{Q}^{(m)}$ transformation in order to approximate the sum of the generalized hypergeometric series $\texthyper{3}{2}{\alpha_1,\alpha_2,1}{\beta_1,\beta_2}{x}$.
Let us remark that one can compute the array of quantities~$\Tnw nm$ using the recurrence formulas~\eqref{E:alg} and replacing the~operators~$\opP m_n$ with the equivalent ones: \begin{multline} \label{eq:Pm:3F2} \opP{m}_n \coloneqq x^2(n+2m+\alpha_1-2)_2(n+2m+\alpha_2-2)_2\,\opID\\ -2x(n+2m+\alpha_1-1)(n+2m+\alpha_2-1) [(n+2m+\beta_1-2)(n+2m+\beta_2-2)-m^2+m] \opE\\ +(n+m+\beta_1-1)(n+m+\beta_2-1)(n+3m+\beta_1-2)(n+3m+\beta_2-2)\,\opE^2 \end{multline} (cf.~\cite[Eq.~(3.8)]{WoznyNowak}), where the~\ftime{forward shift operators} $\opE$ and $\opE^2$ are defined according to $\opE z_n\coloneqq z_{n+1}$ and $\opE^2 z_n \coloneqq z_{n+2}$. In all the examples, we start with the column $\Tnw n0$ containing some finite number of partial sums of the series. Next, we use the recurrence formulas~\eqref{E:alg} and obtain the triangular array of the quantities $\Tnw nm$. In the first example, we consider the alternating series \eqref{E:hser} with $x=-1$. In this case, one can use Theorem~\ref{T:M:a} in order to explain the convergence acceleration performed by the $\mathscr{Q}$ transformation. Indeed, we obtain highly accurate approximations in the triangular array $\Tnw nm$. In the second example, we consider the linearly convergent series~\eqref{E:hser}. In order to obtain a~very slow convergence, we take $x\approx 1$. Again, as a~consequence of Theorem~\ref{T:M:a}, the transformation $\mathscr{Q}$ provides quite a~good approximation of the limit of the series. In the last example, we consider the series~\eqref{E:hser} with $x=1$, which means logarithmic convergence. One can observe that the $\mathscr{Q}$ transformation is very efficient, although this good performance cannot be justified by Theorem~\ref{T:M:a}. All the numerical experiments were made in the \textsf{Maple{\small\texttrademark}~14} system, using floating point arithmetic with $32$ decimal digits of precision.
Consequently, as~in~\cite{WoznyNowak}, we measure the \ftime{accuracy} of the~approximation $z$ of the sum $s\neq 0$ by the number of exact significant decimal digits, in the following sense: \[ \mathtt{acc}(z)\coloneqq -\log_{10}\abs{\frac{z}{s}-1}. \] \begin{example}\label{EX:alt} Let us consider the following expansion of the integral involving the square root function: \begin{equation}\label{E:ex:alt} \frac{1}{z}\int_{0}^{z} \sqrt{1+y^\alpha} \mathrm{d}y = \hyper{2}{1}{\frac1\alpha, -\frac12}{1+\frac1\alpha}{-z^\alpha} \qquad (\alpha>0, \; 0<z\leq 1). \end{equation} Since none of the upper parameters of the hypergeometric series equals $1$, we consider the convergence acceleration of the series \wideversion{$\texthyper{3}{2}{1/\alpha,-1/2,1}{1+1/\alpha,1}{-z^\alpha}$.} {\[\texthyper{3}{2}{1/\alpha,-1/2,1}{1+1/\alpha,1}{-z^\alpha}.\]} We put $\alpha = 1/3$ and $z=1$. Thus, we obtain an alternating series of the form~\eqref{E:hser} with \begin{equation} \label{E:ex:alt:a} a_{n} = \frac{3 (-\tfrac12)_n }{(n+3) n!} \, x^n, \qquad x=-1, \end{equation} converging to $(44\sqrt2-16)/35\approx 1.3207256213$. The convergence is quite slow, since straightforward computation gives (underlining indicates the correct digits): \wideversion{\[ s_{ 10} = \underline{1.32}19178336, \quad s_{ 100} = \underline{1.32072}97959, \quad s_{ 1000} = \underline{1.3207256}346, \quad s_{10000} = \underline{1.3207256213}. \]}{% \begin{alignat*}{2} s_{ 10} &= \underline{1.32}19178336,& \qquad s_{ 100} &= \underline{1.32072}97959, \\ s_{ 1000} &= \underline{1.3207256}346,& \qquad s_{10000} &= \underline{1.3207256213}. \end{alignat*} } This yields the following numbers of exact significant decimal digits of the limit of the series: \[ \mathtt{acc}(s_{10}) = 3.0,\quad \mathtt{acc}(s_{100}) = 5.5,\quad \mathtt{acc}(s_{1000}) = 8.0,\quad \mathtt{acc}(s_{10000}) = 10.5. \] One can check that $t_n \coloneqq a_{n+1}/a_{n} = -[(2n-1)(n+3)]/[(2n+2)(n+4)]$, and thus equation~\eqref{E:alt:cond} holds.
Since the fraction in~\eqref{E:ex:alt:a} decreases monotonically to zero, we conclude that Assumption~\ref{asmpt} is satisfied; cf.~\cite[Thm.~3, p.~26]{Clark68}. From Theorem~\ref{T:M:a}, we obtain that the $\mathscr{Q}^{(m)}$ transformation accelerates the convergence of the considered series. Indeed, the accuracy of the quantities $\Tnw nm$ increases with $m$; see Table~\ref{TAB:ex:alt}. It is worth noting that the quantity $\Tnw17$ gives about $21$ digits of accuracy, while the partial sums $s_1$, $s_2$, \ldots, $s_{15}$, on which it depends (see~Lemma~\ref{L:Q:a}), give fewer than $4$ digits. \begin{table}[htb] \caption{\label{TAB:ex:alt} Values of $\mathtt{acc}(\Tnw nm)$ for the hypergeometric series~\eqref{E:ex:alt} with $\alpha=1/3$ and $z=1$.} \center \begin{tabular}{c|rrrrrrrr} $n\diagdown m$ & \multicolumn{1}{c}{$ 0$} & \multicolumn{1}{c}{$ 1$} & \multicolumn{1}{c}{$ 2$} & \multicolumn{1}{c}{$ 3$} & \multicolumn{1}{c}{$ 4$} & \multicolumn{1}{c}{$ 5$} & \multicolumn{1}{c}{$ 6$} & \multicolumn{1}{c}{$ 7$} \\\hline $ 1$ & $ 0.6$ & $ 3.1$ & $ 6.1$ & $ 9.5$ & $12.6$ & $15.9$ & $19.3$ & $21.7$\\ $ 2$ & $ 1.4$ & $ 3.8$ & $ 6.9$ & $10.3$ & $13.4$ & $16.9$ & $19.9$\\ $ 3$ & $ 1.8$ & $ 4.4$ & $ 7.6$ & $11.0$ & $14.2$ & $17.9$ & $20.6$\\ $ 4$ & $ 2.1$ & $ 4.9$ & $ 8.2$ & $11.6$ & $14.9$ & $18.9$\\ $ 5$ & $ 2.3$ & $ 5.3$ & $ 8.7$ & $12.1$ & $15.6$ & $20.6$\\ $ 6$ & $ 2.5$ & $ 5.6$ & $ 9.2$ & $12.7$ & $16.2$\\ $ 7$ & $ 2.7$ & $ 5.9$ & $ 9.6$ & $13.1$ & $16.7$\\ $ 8$ & $ 2.8$ & $ 6.2$ & $10.0$ & $13.6$\\ $ 9$ & $ 2.9$ & $ 6.4$ & $10.4$ & $14.0$\\ $10$ & $ 3.0$ & $ 6.6$ & $10.7$\\ $11$ & $ 3.1$ & $ 6.8$ & $11.0$\\ $12$ & $ 3.2$ & $ 7.0$\\ $13$ & $ 3.3$ & $ 7.2$\\ $14$ & $ 3.4$\\ $15$ & $ 3.5$ \end{tabular} \end{table} \end{example} \begin{example}\label{EX:linconv} Let us consider the linearly convergent series \begin{equation*} \hyper{2}{1} {\frac16,\frac13} {\frac12} {\frac{25}{27}} = \hyper{3}{2} {\frac16,\frac13,1} {\frac12,1} {\frac{25}{27}} =
\frac34\sqrt3 \approx 1.2990381057; \end{equation*} see \cite{ZuckerJoyce2001}. The straightforward computation yields \[ s_{10} = \underline{1.2}573432291, \quad s_{100} = \underline{1.29903}15516, \quad s_{200} = \underline{1.29903810}41. \] This means the following accuracies: \[ \mathtt{acc}(s_{10}) = 1.5, \quad \mathtt{acc}(s_{100}) = 5.3, \quad \mathtt{acc}(s_{200}) = 8.9. \] However, using only the partial sums $s_{1}, s_{2}, \ldots, s_{25}$, one can compute the triangular array of the~elements $\Tnw nm$ for~$1\leq n+2m\leq 25$, $0\leq m\leq 12$. For instance, we have \[ \Tnw{1}{10} = \underline{1.29903810}82, \quad \Tnw{1}{11} = \underline{1.29903810}60, \quad \Tnw{1}{12} = \underline{1.2990381057}, \] which yields the following accuracies: \[ \mathtt{acc}(\Tnw{1}{10}) = 7.0, \quad \mathtt{acc}(\Tnw{1}{11}) = 7.9, \quad \mathtt{acc}(\Tnw{1}{12}) = 8.7. \] The following is worth remarking: $1^\circ$ for a~fixed subscript $n$, the quantities $\Tnw nm$ (for consecutive values of $m$) are of better and better accuracy (namely, each successive element has roughly one more exact decimal digit); $2^\circ$ since Assumption~\ref{asmpt} is satisfied, this is in agreement with~Theorem~\ref{T:M:a}, which provides that \[ \lim_{n\to\infty} \frac{\Tnw nm - s}{s_n - s} = 0, \] which means that each column of the array $\Tnw nm$ converges to $s$ faster than the sequence of partial sums $s_n$; $3^\circ$ the accuracy of the quantities $\Tnw nm$ shows faster and faster convergence in consecutive columns of the table. \end{example} \begin{example}\label{EX:logconv1} Let us consider the complex vectors of~parameters: $\boldsymbol\alpha=(1.7+2.5i, 1.5+2.0i)$, $\boldsymbol\beta=(1.3-3.0i, 3.2-4.0i)$. The hypergeometric series \[ \texthyper{3}{2}{\alpha_1,\alpha_2,1}{\beta_1,\beta_2}{1} \approx 0.7808031959823745-0.2060305207425406 i \] is extremely slowly convergent.
Indeed, the partial sums $s_{n}$ for $n=10^3, 10^5, 10^6$ give accuracies of about $2.3$, $2.9$ and $3.2$ exact significant decimal digits, respectively. In contrast, using only the partial sums $s_1, s_2, \ldots, s_{15}$, the transformation $\mathscr{Q}$ gives the quantities $\Tnw nm$, which are very good approximations of the limit; see Table~\ref{TAB:logconv1}. The reason for this is that the sufficient condition in~Theorem~\ref{T} is apparently satisfied. Indeed, straightforward computation shows that the numerical values of \[ \sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} \cdot \frac{a_{n+k}}{r_n} \] approach $1.0$ as $n\to\infty$. Moreover, these values are closer to $1.0$ for larger values of $m$. We believe this explains the convergence acceleration performed by the $\mathscr{Q}$ transformation; cf. Table~\ref{TAB:logconv1}. \begin{table}[htb] \caption{\label{TAB:logconv1} Values of~$\mathtt{acc}(\Tnw nm)$ for the hypergeometric series in Example~\ref{EX:logconv1}.} \center \begin{tabular}{c|rrrrrrrr} $n\diagdown m$ & \multicolumn{1}{c}{$ 0$} & \multicolumn{1}{c}{$ 1$} & \multicolumn{1}{c}{$ 2$} & \multicolumn{1}{c}{$ 3$} & \multicolumn{1}{c}{$ 4$} & \multicolumn{1}{c}{$ 5$} & \multicolumn{1}{c}{$ 6$} & \multicolumn{1}{c}{$ 7$} \\\hline $ 1$ & $ 0.4$ & $ 2.6$ & $ 4.6$ & $ 6.6$ & $ 8.5$ & $10.3$ & $12.2$ & $14.1$\\ $ 2$ & $ 0.7$ & $ 3.0$ & $ 5.1$ & $ 7.0$ & $ 9.0$ & $10.9$ & $12.7$\\ $ 3$ & $ 0.9$ & $ 3.4$ & $ 5.5$ & $ 7.5$ & $ 9.4$ & $11.3$ & $13.2$\\ $ 4$ & $ 1.1$ & $ 3.6$ & $ 5.8$ & $ 7.8$ & $ 9.8$ & $11.7$\\ $ 5$ & $ 1.2$ & $ 3.9$ & $ 6.1$ & $ 8.2$ & $10.2$ & $12.1$\\ $ 6$ & $ 1.3$ & $ 4.0$ & $ 6.3$ & $ 8.5$ & $10.5$\\ $ 7$ & $ 1.3$ & $ 4.2$ & $ 6.5$ & $ 8.7$ & $10.8$\\ $ 8$ & $ 1.4$ & $ 4.3$ & $ 6.7$ & $ 9.0$\\ $ 9$ & $ 1.4$ & $ 4.4$ & $ 6.9$ & $ 9.2$\\ $10$ & $ 1.5$ & $ 4.5$ & $ 7.1$\\ $11$ & $ 1.5$ & $ 4.6$ & $ 7.2$\\ $12$ & $ 1.5$ & $ 4.7$\\ $13$ & $ 1.5$ & $ 4.8$\\ $14$ & $ 1.6$\\ $15$ & $ 1.6$ \end{tabular} \end{table} \end{example}
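The quantities $\Tnw nm$ in the examples above can also be evaluated directly from representation~\eqref{E:Q:lambda} with the explicit weights~\eqref{E:lambda}, without the recursive scheme~\eqref{E:alg}. The following Python sketch is purely illustrative: it is not the \textsf{Maple} code used for the tables above, it works in double precision rather than $32$-digit arithmetic, and the function names are ours.

```python
from math import comb

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def q_transform(alpha, beta, x, n, m):
    """Q^(m) applied at index n to the series (E:hser) for
    {p+1}F_p(alpha_1,...,alpha_p,1; beta_1,...,beta_p; x),
    using the explicit weights lambda_j^{(m)}(n) of (E:lambda)."""
    p = len(alpha)

    def term(k):  # a_k = [alpha]_k / [beta]_k * x^k
        v = x ** k
        for ai, bi in zip(alpha, beta):
            v *= poch(ai, k) / poch(bi, k)
        return v

    # partial sums s_{n+j}, j = 0..mp, where s_k = a_0 + ... + a_{k-1}
    s = [sum(term(i) for i in range(k)) for k in range(n, n + m * p + 1)]
    num = den = 0.0
    for j in range(m * p + 1):
        lam = comb(m * p, j) * (-x) ** (m * p - j)
        for ai, bi in zip(alpha, beta):
            lam *= poch(ai + n + j, m * p - j) * poch(bi + n + m - 1, j)
        num += lam * s[j]
        den += lam
    return num / den
```

As a quick correctness check, note that for $\boldsymbol\alpha=\boldsymbol\beta$ the series~\eqref{E:hser} reduces to the geometric series, whose remainder $r_n = x^n/(1-x)$ is annihilated by the operators~$\opL m_n$; the sketch then returns $1/(1-x)$ up to rounding, for any $p$ and $m$.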
\section{Further problems}\label{S:problems} Let us remark that the statement in~Theorem~\ref{T:M:a} does not cover the case $x=1$ in the series~\eqref{E:hser}. This case leads to logarithmic convergence, which is usually the most difficult to sum. Although we cannot use Theorem~\ref{T:M:a} in order to prove the convergence acceleration for the $\mathscr{Q}$ transformation, one can always try to check if~condition~\eqref{E:T:cond} is satisfied. This is exactly what was shown in Example~\ref{EX:logconv1}. We strongly believe that condition~\eqref{E:T:cond} is fulfilled for all classes of logarithmically convergent series satisfying Assumption \ref{asmpt}. However, we do not know how to prove it in a~general way. Let us remark that in this~case, the polynomials $M^{(m)}(n)$, given by~\eqref{E:M}, are no longer of degree~$mp^2$; cf.~\eqref{E:sum_cj}. Therefore, the proof of Theorem~\ref{T:M:a} needs to be different in such a~case. However, one can try to check if condition~\eqref{E:T:cond} is satisfied, provided that the~terms and~remainders of the~series~\eqref{E:hser} have, at~least formally, the following asymptotic representations: \begin{equation*} \frac{a_{n+1}}{a_n} \sim 1 + \frac{{b_1}}{n} + \frac{{b_2}}{n^2} + \ldots, \qquad \frac{r_{n+1}}{r_n} \sim 1 + \frac{{d_1}}{n} + \frac{{d_2}}{n^2} + \ldots. \end{equation*} Hence, one can check that \begin{alignat*}{2} {b_1} &= \sum_{j=1}^p\alpha_j-\sum_{j=1}^p\beta_j,& \qquad {b_2} &= \frac12\left( b_1^2 - \sum_{j=1}^p\alpha_j^2+\sum_{j=1}^p\beta_j^2 \right),\\ {d_1} &= b_1+1,& \qquad {d_2} &= (b_1^2+b_1 b_2+b_1+b_2)/b_1. \end{alignat*} Thus, using the fact that \[ \frac{a_{n+k}}{r_n} = \frac{r_{n+1}}{r_n} \frac{r_{n+2}}{r_{n+1}} \cdots \frac{r_{n+k}}{r_{n+k-1}} \left(1-\frac{r_{n+k+1}}{r_{n+k}}\right), \] one can obtain, after some algebra, that \[ \frac{1}{r_n} \sum_{k=0}^{mp-1} \frac{M^{(m)}_{k+1}(n)}{M^{(m)}(n)} a_{n+k} \sim 1 + \frac{\pi_2}{n^2} + \frac{\pi_3}{n^3} + \ldots.
\] This, compared with~condition~\eqref{E:T:cond} in~Theorem~\ref{T}, explains the acceleration~of~the convergence obtained in Example~\ref{EX:logconv1}. Still, the~statement of~Theorem~\ref{T} remains an~open problem in the case of a~general form of the~series~\eqref{E:hser} with~$x=1$. Maybe even more interesting is to explain why the quantities $\Tnw nm$ give better and~better approximations of the limit of the series as $m$ increases; cf.~Tables~\ref{TAB:ex:alt} and~\ref{TAB:logconv1}. We believe that the main reason is~the relationship~\eqref{E:Lm:sat} satisfied by the operators~$\opL m_n$ defining~the $\mathscr{Q}^{(m)}$ transformation. However, the proof of \[ \lim_{n\to\infty} \frac{\Tnw n{m+1}-s}{\Tnw nm-s} = 0, \qquad m\in\field{N}, \] seems to be very difficult, not only in the case of logarithmic convergence, but also for a~linearly convergent series~\eqref{E:hser}. We leave it for future research. \section*{Acknowledgements} The authors would like to thank the anonymous referees for valuable remarks and suggestions, as well as for pointing out many interesting possibilities of future research in the area of series acceleration. We would also like to express our gratitude to Prof.~E.~J.~Weniger for his interesting comments on our previous research and for stimulating us to further work during the conference \textit{Approximation and extrapolation of convergent and divergent sequences and series} (Luminy, 2009). \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} A {\em generalised $d$-gon} is a point--line incidence geometry $\Gamma$ whose bipartite incidence graph has diameter $d$ and girth $2d$. If each point of $\Gamma$ is incident with at least three lines, and each line is incident with at least three points, then $\Gamma$ is said to be {\em thick}. By the well-known Feit--Higman Theorem~\cite{FeitHigman}, thick finite generalised $d$-gons exist only for $d \in \{2,3,4,6,8\}$. In the present paper, we are concerned with the cases $d=6$ (generalised {\em hexagons}), and $d=8$ (generalised {\em octagons}). A {\em collineation} (or {\em automorphism}) of $\Gamma$ is a permutation of the point set of $\Gamma$, together with a permutation of the line set, such that the incidence relation is preserved (equivalently, an automorphism of the incidence graph of $\Gamma$ that preserves the parts). The only known thick finite generalised hexagons and octagons arise as natural geometries for certain exceptional groups of Lie type: $\text{G}_2(q)$ and $^3\text{D}_4(q)$ are collineation groups of generalised hexagons, and $^2\textnormal{F}_4(q)$ acts on a generalised octagon. In each case, the action of the collineation group is primitive on both the points and the lines of $\Gamma$, and transitive on the {\em flags} of $\Gamma$, namely the incident point--line pairs. Each action is also point-distance-transitive --- that is, transitive on each set of ordered pairs of points at a given distance from each other in the incidence graph --- and line-distance-transitive. Buekenhout and Van Maldeghem~\cite{BvMdistance} showed that point-distance-transitivity implies point-primitivity for a thick finite generalised hexagon or octagon, and proved that there exist no point-distance-transitive examples other than the known {\em classical} examples. The existence of other point-primitive or flag-transitive (thick finite) generalised hexagons or octagons remains an open question. 
Schneider and Van Maldeghem \cite{SvM} showed that a group $G$ acting point-primitively, line-primitively, and flag-transitively on a thick finite generalised hexagon or octagon must be an almost simple group of Lie type. That is, $S \le G \le \operatorname{Aut}(S)$, with $S$ a finite simple group of Lie type. Bamberg et~al.~\cite{PointPrimitiveGH&O} then showed that point-primitivity alone is sufficient to imply the same conclusion. We continue this work here, treating the families of Lie type groups which are of fixed rank and fixed characteristic. \begin{Thm} \label{thm:new} Let $G$ be a point-primitive collineation group of a thick finite generalised hexagon or generalised octagon $\Gamma$, with $S \le G \le \operatorname{Aut}(S)$ for some nonabelian finite simple group $S$. Then $S$ is not a Suzuki group or a Ree group of type $^2\textnormal{G}_2$. Moreover, if $S$ is a Ree group of type $^2\textnormal{F}_4$, then, up to point--line duality, $\Gamma$ is isomorphic to the classical Ree--Tits generalised octagon. \end{Thm} Theorem~\ref{thm:new} is proved in three sections: the Suzuki groups are considered in Section~\ref{sec:Sz}; the small and large Ree groups are dealt with in Sections~\ref{sec:2G2} and~\ref{sec:2F4}, respectively. \section{Preliminaries} \label{sec:pre} Let us first collect some basic facts and definitions. If a finite generalised hexagon or octagon $\Gamma$ is thick, then there exist constants $s,t \ge 2$ such that each point (line) of $\Gamma$ is incident with exactly $t+1$ lines ($s+1$ points), and $(s,t)$ is called the {\em order} of $\Gamma$. 
If $\mathcal P$ denotes the point set of $\Gamma$, then \cite[p.~20]{HendrikBook} \begin{equation} \label{pts} |\mathcal P| = \begin{cases} (s+1)(s^2t^2+st+1) & \text{if } \Gamma \text{ is a generalised hexagon,}\\ (s+1)(st+1)(s^2t^2+1) & \text{if } \Gamma \text{ is a generalised octagon.} \end{cases} \end{equation} Moreover, the integers $st$ and $2st$ are squares in the respective cases where $\Gamma$ is a generalised hexagon or generalised octagon. \begin{La} \label{2part} Let $\mathcal{P}$ be the point set of a thick finite generalised hexagon or generalised octagon $\Gamma$. \begin{itemize} \item[\textnormal{(i)}] If $2^a$ divides $|\mathcal{P}|$, where $a\ge 1$, then $|\mathcal{P}| > 2^{3a}$. \item[\textnormal{(ii)}] If $\Gamma$ is a generalised hexagon and $3^a$ divides $|\mathcal{P}|$, where $a\ge 1$, then $|\mathcal{P}| > 3^{3a-4}$. \item[\textnormal{(iii)}] If $\Gamma$ is a generalised octagon and $2^a3^b$ divides $|\mathcal{P}|$, where $a\ge 0$ and $b\ge 1$, then $|\mathcal{P}| > 2^a3^{2b}$. \end{itemize} \end{La} \begin{Prf} Let $(s,t)$ be the order of $\Gamma$. (i) First suppose that $\Gamma$ is a generalised hexagon. Since $s^2t^2+st+1$ is odd, $2^a$ must divide $s+1$. In particular, $s+1\ge 2^a$, and hence $s\ge 2^{a-1}$. Therefore, $|\mathcal{P}| > (s+1)s^2t^2 \ge 2^a(2^{a-1})^{2}2^2 = 2^{3a}$. Now let $\Gamma$ be a generalised octagon. Since $2st$ is a square, $st$ must be even, so $(st+1)(s^2t^2+1)$ is odd, and hence $2^a$ must divide $s+1$. Therefore, $|\mathcal{P}| > (s+1)s^3t^3 \ge 2^a(2^{a-1})^32^3 = 2^{4a} > 2^{3a}$. (ii) Since $s^2t^2+st+1$ is not divisible by $9$, $s+1$ must be divisible by $3^{a-1}$. In particular, $s+1\ge 3^{a-1}$, and hence $s\ge 3^{a-2}$. Therefore, $|\mathcal{P}| > (s+1)s^2t^2 \ge 3^{a-1}(3^{a-2})^22^2 > 3^{3a-4}$. (iii) Since $2st$ is a square, $st$ is even, so $s^2t^2+1$ is divisible by neither $2$ nor $3$. Hence, $2^a3^b$ divides $(s+1)(st+1)$. In particular, $(s+1)(st+1) \ge 2^a3^b$. 
Let us say that $s+1$ is divisible by $3^c$, and that $st+1$ is divisible by $3^d$, where $c+d=b$. If $c\ge 1$, then $s > 3^{c-1/2}$; and if $d\ge 1$, then $st > 3^{d-1/2}$. Also, $t-1 = (s+1)t - (st+1)$ is divisible by $3^{\min\{c,d\}}$, so $t > 3^{\min\{c,d\}}$. If $c \ge d$, then $c\ge 1$ and $t > 3^d$, and hence $|\mathcal{P}| > (s+1)(st+1)(st)^2 \ge 2^a3^b (3^{c-1/2}3^d)^2 = 2^a3^{b+2(c+d)-1} = 2^a3^{3b-1} \ge 2^a3^{2b}$. If $d > c$, then $d \ge (b+1)/2$, so $|\mathcal{P}| > (s+1)(st+1)(st)^2 \ge 2^a3^b (3^{d-1/2})^2 \ge 2^a3^b (3^{b/2})^2 = 2^a3^{2b}$. \end{Prf} Recall that a permutation group $G \le \operatorname{Sym}(\Omega)$ acts {\em primitively} on the set $\Omega$ if it acts transitively and preserves no nontrivial partition of $\Omega$, and that this is equivalent to the stabiliser $G_\omega$ of a point $\omega \in \Omega$ being a maximal subgroup of $G$. A maximal subgroup $M$ of an almost simple group $G$ with minimal normal subgroup $S$ is said to be a {\em novelty} maximal subgroup if $S \cap M$ is not maximal in $S$. Our notation is mostly standard: we write $D_{n}$ for a dihedral group of order $n$; $C_n$ denotes a cyclic group of order $n$; $[n]$ denotes an unspecified group of order $n$; and, for $q$ a prime power, $E_q$ denotes an elementary abelian group of order $q$. For information about the Suzuki and Ree simple groups of Lie type, we refer the reader to \cite{WilsonBook}, and the other references mentioned below.
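Since the bounds in Lemma~\ref{2part} depend only on the arithmetic of $(s,t)$, and not on the existence of a generalised polygon with those parameters, they admit a direct computational sanity check. The following Python sketch is illustrative only (the function names are ours); it verifies parts (i)--(iii) via \eqref{pts} over a range of parameters satisfying the relevant square conditions:

```python
from math import isqrt

def val(n, p):
    """p-adic valuation of n, i.e., the largest a with p^a dividing n."""
    a = 0
    while n % p == 0:
        n //= p
        a += 1
    return a

def is_square(n):
    r = isqrt(n)
    return r * r == n

for s in range(2, 80):
    for t in range(2, 80):
        if is_square(s * t):  # generalised hexagon: st is a square
            P = (s + 1) * (s * s * t * t + s * t + 1)
            a, b = val(P, 2), val(P, 3)
            if a >= 1:
                assert P > 2 ** (3 * a)           # Lemma (i)
            if b >= 1:
                assert P > 3 ** (3 * b - 4)       # Lemma (ii)
        if is_square(2 * s * t):  # generalised octagon: 2st is a square
            P = (s + 1) * (s * t + 1) * (s * s * t * t + 1)
            a, b = val(P, 2), val(P, 3)
            if a >= 1:
                assert P > 2 ** (3 * a)           # Lemma (i)
            if b >= 1:
                assert P > 2 ** a * 3 ** (2 * b)  # Lemma (iii)
print("no counterexamples")
```

Taking $a$ and $b$ to be the full $2$- and $3$-adic valuations of $|\mathcal{P}|$ tests the strongest instance of each divisibility hypothesis.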
Let $\mathcal{P}$ be the point set of $\Gamma$, and let $x \in \mathcal{P}$. Observe first that the stabiliser $G_x$ cannot contain $S$: if it did, then $G_x$ would have the form $S:K$ for some maximal subgroup $K$ of the cyclic group $\langle \sigma^j \rangle$, and hence $|G:G_x|=|\mathcal{P}|$ would be a prime, which is seen to be impossible upon inspection of \eqref{pts}. Now, as explained in \cite[Section~7.3]{ColvaBook}, $G$ has no novelty maximal subgroups. Therefore, $S_x = G_x \cap S$ is a maximal subgroup of $S$, so $S$ itself acts primitively on $\mathcal{P}$, and hence to prove the theorem we may assume that $G=S$. The maximal subgroups of $S$ are \cite[Table 8.16]{ColvaBook}, up to conjugacy, \begin{itemize} \item[(i)] $E_q . E_q . C_{q-1}$, \item[(ii)] $D_{2(q-1)}$, \item[(iii)] $C_{q\pm\sqrt{2q}+1} : C_4$, \item[(iv)] $\textnormal{Sz}(q_0)$, where $q=q_0^r$ with $r$ prime and $q_0 > 2$. \end{itemize} \subsection{Case~(i)} \label{sz parabolic} Suppose that $S_x \cong E_q . E_q . C_{q-1}$. Suzuki~\cite{Suzuki} showed that $S$ is 2-transitive in this action. Since $S$ preserves the incidence relation on $\Gamma$, and therefore distance in the incidence graph of $\Gamma$, any two points must lie at the same distance, namely $2$, and hence the diameter of the incidence graph is at most four. This is a contradiction, because the incidence graph of a thick generalised hexagon or generalised octagon has diameter six or eight, respectively. \subsection{Cases~(ii)--(iv)} For the remaining cases, we apply Lemma~\ref{2part}(i). If $S_x \cong D_{2(q-1)}$, then \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{2} q^2(q^2+1) = 2^{2m-1}(2^{2m}+1) < 2^{4m}, \] contradicting Lemma~\ref{2part}(i) with $a=2m-1$, which says that $|\mathcal{P}| > 2^{6m-3}$. If $S_x \cong C_{q\pm\sqrt{2q}+1}: C_4$, then \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{4}q^2(q\mp\sqrt{2q}+1)(q-1) = 2^{2m-2}(2^m\mp 2^{(m+1)/2} + 1)(2^m - 1) < 2^{4m-1}, \] contradicting Lemma~\ref{2part}(i) with $a=2m-2$, which says that $|\mathcal{P}| > 2^{6m-6}$. Finally, suppose that $S_x \cong \textnormal{Sz}(q_0)$, where $q=q_0^r$ with $r$ prime and $q_0 > 2$.
Writing $q_0=2^\ell$, we have \[ |\mathcal{P}| = |S:S_x| = 2^{2\ell(r-1)} \frac{(2^{2\ell r} + 1)(2^{\ell r} - 1)}{(2^{2\ell} + 1)(2^{\ell} - 1)} < 2^{5\ell(r-1)+2}, \] contradicting Lemma~\ref{2part}(i) with $a=2\ell(r-1)$, which says that $|\mathcal{P}| > 2^{6\ell(r-1)}$. \section{Proof of Theorem~\ref{thm:new}: $S$ a Ree group of type $^2\textnormal{G}_2$} \label{sec:2G2} We now adopt the hypothesis of Theorem~\ref{thm:new} and assume that $S \cong {}^2\textnormal{G}_2(q)$, where $q=3^m$ with $m$ odd and at least $3$. Then \[ |S| = q^3(q^3+1)(q-1) = q^3(q+\sqrt{3q}+1)(q-\sqrt{3q}+1)(q^2-1). \] Let $\mathcal{P}$ be the point set of $\Gamma$, and let $x\in \mathcal{P}$. The outer automorphism group of $S$ is cyclic (of order $m$), so, as in Section~\ref{sec:Sz}, we first deduce that $G_x$ is a maximal subgroup of $G$ not containing $S$. The maximal subgroups of $G$ were determined by Kleidman~\cite[Theorem~C]{Kleidman}. In particular, $G$ has no novelty maximal subgroups, so it suffices to prove the theorem in the case where $G=S$. The maximal subgroups of $S$ are, up to conjugacy, \begin{itemize} \item[(i)] $E_q.E_q .E_q.C_{q-1}$, \item[(ii)] $C_2 \times \text{PSL}_2(q)$, \item[(iii)] $(E_4 \times D_{(q+1)/2}) : C_3$, \item[(iv)] $C_{q\pm\sqrt{3q}+1} : C_6$, \item[(v)] ${}^2\textnormal{G}_2(q_0)$, where $q=q_0^r$ with $r$ prime. \end{itemize} \subsection{Case~(i)} Suppose that $S_x \cong E_q.E_q .E_q.C_{q-1}$. Then $S$ acts 2-transitively on $\mathcal{P}$ \cite[p.~251]{DixonMortimer}. The same argument as in Section~\ref{sz parabolic} now provides a contradiction. \subsection{$\Gamma$ a generalised hexagon: cases~(ii)--(v)} \label{sec4.2} For cases~(ii)--(v) with $\Gamma$ a generalised hexagon, we use Lemma~\ref{2part}(ii). First suppose that $S_x \cong C_2\times \text{PSL}_2(q)$. 
The order of $S_x$ is $q(q^2-1)$, so \[ |\mathcal{P}| = |S:S_x| = q^2(q^2-q+1) = 3^{2m}(3^{2m}-3^m+1) < 3^{4m}, \] contradicting Lemma~\ref{2part}(ii) with $a=2m$, which says that $|\mathcal{P}| > 3^{6m-4}$. If $S_x \cong (E_4\times D_{(q+1)/2}) : C_3$, then \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{6}q^3(q-1)(q^2-q+1) = \tfrac{1}{2} 3^{3m-1}(3^m-1)(3^{2m}-3^m+1) < 3^{6m-1}, \] contradicting Lemma~\ref{2part}(ii) with $a=3m-1$, which says that $|\mathcal{P}| > 3^{9m-7}$. If $S_x \cong C_{q\pm \sqrt{3q} + 1} : C_6$, then \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{6}q^3(q^2-1)(q\mp \sqrt{3q} + 1) = \tfrac{1}{6}\, 3^{3m}(3^{2m}-1)(3^m\mp 3^{(m+1)/2} + 1) < 3^{6m-1}, \] contradicting Lemma~\ref{2part}(ii) with $a=3m-1$, which says that $|\mathcal{P}| > 3^{9m-7}$. Finally, suppose that $S_x \cong {}^2\textnormal{G}_2(q_0)$, where $q=q_0^r$ with $r$ prime. Writing $q_0=3^\ell$, we have \[ |\mathcal{P}| = |S:S_x| = 3^{3\ell(r-1)} \frac{(3^{3\ell r} + 1)(3^{\ell r} - 1)}{(3^{3\ell} + 1)(3^{\ell} - 1)} < 3^{7\ell(r-1)+2}. \] If $\ell(r-1) \ge 3$, then this contradicts Lemma~\ref{2part}(ii) with $a=3\ell(r-1)$, which gives $|\mathcal{P}| > 3^{9\ell(r-1)-4}$. Otherwise, $(\ell,r)=(1,3)$, and there is no valid solution $(s,t)$ to equation~\eqref{pts}. \subsection{$\Gamma$ a generalised octagon: cases~(ii)--(iv)} Now suppose that $\Gamma$ is a generalised octagon. We rule out cases~(ii)--(iv) for $S_x$ using Lemma~\ref{2part}(iii), together with a slight refinement of its proof, computing $|S:S_x|$ in each case as in Section~\ref{sec4.2}. First suppose that $S_x \cong C_2\times \text{PSL}_2(q)$. Then \[ |\mathcal{P}| = |S:S_x| = 3^{2m}(3^{2m}-3^m+1) < 3^{4m}, \] contradicting Lemma~\ref{2part}(iii) with $a=0$ and $b=2m$, which says that $|\mathcal{P}| > 3^{4m}$. Next, suppose that $S_x \cong (E_4\times D_{(q+1)/2}) : C_3$. Then, as in Section~\ref{sec4.2}, \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{2}\, 3^{3m-1}(3^m-1)(3^{2m}-3^m+1) < \tfrac{1}{2}\, 3^{6m-1}. \] Since $3^m-1 \equiv 2 \pmod 4$ (because $m$ is odd) and $3^{2m}-3^m+1$ is odd, $|\mathcal{P}|$ is odd, and the highest power of $3$ dividing $|\mathcal{P}|$ is $3^{b}$, where $b := 3m-1$. Since $s^2t^2+1$ is one more than a square, it is not divisible by $3$, so \eqref{pts} shows that the $3$-part of $(s+1)(st+1)$ is exactly $3^{b}$. Arguing as in the proof of Lemma~\ref{2part}(iii), let $3^c$ and $3^d$ be the highest powers of $3$ dividing $s+1$ and $st+1$, respectively, so that $c+d=b$. If $c \ge d$, then, exactly as in that proof, $|\mathcal{P}| > 3^{3b-1} = 3^{9m-4} > 3^{6m-1}$. If $d > c$, then $d \ge b/2+1$ because $b$ is even, so $st \ge 3^d - 1 > 3^{d-1/2}$, and hence \[ |\mathcal{P}| > (s+1)(st+1)(st)^2 \ge 3^{b}\, 3^{2d-1} \ge 3^{2b+1} = 3^{6m-1}. \] In either case $|\mathcal{P}| > 3^{6m-1}$, contradicting the upper bound $|\mathcal{P}| < \tfrac{1}{2}\, 3^{6m-1}$ above.
Finally, suppose that $S_x \cong C_{q\pm \sqrt{3q} + 1} : C_6$. Observe that $3^{2m}-1$ is divisible by $2^3$ but not by $2^4$, because $m$ is odd, and that $3^m\mp 3^{(m+1)/2} + 1$ is odd. Therefore, \begin{align*} |\mathcal{P}| = |S:S_x| &= 2^2\, 3^{3m-1}\, \frac{(3^{2m}-1)(3^m\mp 3^{(m+1)/2} + 1)}{2^3} \\ &\le 2^2\, 3^{3m-1}\, \frac{(3^{2m}-1)(3^m + 3^{(m+1)/2} + 1)}{2^3} < 2^2\, 3^{6m-2}, \end{align*} while Lemma~\ref{2part}(iii) with $a=2$ and $b=3m-1$ gives $|\mathcal{P}| > 2^2\,3^{6m-2}$, a contradiction. \subsection{$\Gamma$ a generalised octagon: case~(v)} Finally, we consider case~(v) with $\Gamma$ a generalised octagon. The approach is similar to that used for cases~(ii)--(iv), but requires a little more care. Suppose that $S_x \cong {}^2\textnormal{G}_2(q_0)$, where $q=q_0^r$ with $r$ prime. Writing $q_0=3^\ell$, we have \begin{equation} \label{smallReeBound} |\mathcal{P}| = 3^{3\ell(r-1)} \frac{(3^{3\ell r} + 1)(3^{\ell r} - 1)}{(3^{3\ell} + 1)(3^{\ell} - 1)} < 3^{7\ell(r-1)+\epsilon}, \quad \text{where } \epsilon := \frac{\log\left(\frac{3^4}{(3^3+1)(3-1)}\right)}{\log(3)} \approx 0.336. \end{equation} To verify the inequality in \eqref{smallReeBound}, one checks that $(3^{3\ell} + 1)(3^\ell -1) \ge 3^{4\ell-\epsilon}$, because $\ell \ge 1$, and that $(3^{3\ell r} + 1)(3^{\ell r} - 1) < 3^{4\ell r}$. Let us re-write this inequality as \[ |\mathcal{P}| < 3^{ 7b/3+\epsilon}, \quad \text{where } b := 3\ell(r-1). \] Note also that $b\ge 6$, because $r\ge 3$. For a contradiction, we now show that $|\mathcal{P}| > 3^{7b/3+\epsilon}$. By \eqref{smallReeBound}, $3^b$ is the highest power of $3$ dividing $|\mathcal{P}|$. Since $2st$ is a square, $st$ is even, so $s^2t^2+1$ is odd; and, being one more than a square, $s^2t^2+1$ is also not divisible by $3$. Hence, by \eqref{pts}, $3^b$ divides $(s+1)(st+1)$.
As in the proof of Lemma~\ref{2part}(iii), let us say that $s+1$ is divisible by $3^c$, and that $st+1$ is divisible by $3^d$, where $b=c+d$. Recall also (from that proof) that $t > 3^{\min\{c,d\}}$. To show that $|\mathcal{P}| > 3^{7b/3+\epsilon}$, we consider four cases. First suppose that $c \ge d$. Then $t > 3^d$, and $c \ge 1$ so $s \ge 3^c-1 > 3^{c-1/2}$. Hence, $|\mathcal{P}| > (s+1)(st+1)(st)^2 > 3^b (3^{c-1/2}3^d)^2 = 3^{b+2(c+d)-1} = 3^{3b-1} > 3^{7b/3+1}$, with the final inequality holding because $b \ge 6 > 3$. Next, suppose that $d/2+1/2 \le c < d$. Then $6 \le b = c+d \le 3c-1$. In particular, $c \ge (b+1)/3$; and $c\ge 3$ so $s \ge 3^c-1 \ge 3^{c-\delta}$, where $\delta := 3 - \log(3^3-1)/\log(3)$. Moreover, $t>3^c$, and hence $|\mathcal{P}| > 3^b(st)^2 > 3^b(3^{c-\delta}3^c)^2 = 3^{b+4c-2\delta} \ge 3^{7b/3 + (4/3-2\delta)}$. It follows that $|\mathcal{P}| > 3^{7b/3 + \epsilon}$, because $1.26 \approx 4/3-2\delta > \epsilon \approx 0.336$. Now suppose that $c \le d/2-1/2$. Then $6 \le b = c+d \le 3d/2-1/2$. In particular, $d\ge (2b+1)/3$; and $d\ge 5$ so $st \ge 3^d-1 \ge 3^{d-\delta'}$, where $\delta' := 5 - \log(3^5-1)/\log(3)$. Therefore, $|\mathcal{P}| > 3^b(st)^2 > 3^{b+2d-2\delta'} = 3^{7b/3+2/3-2\delta'}$, and it follows that $|\mathcal{P}| > 3^{7b/3 + \epsilon}$, because $0.659 \approx 2/3-2\delta' > \epsilon \approx 0.336$. Finally, suppose that $d/2-1/2 < c < d/2+1/2$. Since $c$ and $d$ are integers, this is equivalent to saying that $c=d/2$. Now, suppose first, towards a contradiction, that $(s+1)(st+1)$ is actually equal to $3^b$. Then $s+1=3^c$, $st+1=3^{2c}$, and \eqref{smallReeBound} implies that \begin{equation} \label{c=d/2} (s^2t^2+1)(3^{3\ell} + 1)(3^{\ell} - 1) = (3^{3\ell r} + 1)(3^{\ell r} - 1). \end{equation} However, this is impossible, because the left- and right-hand sides of \eqref{c=d/2} are not congruent modulo $3$. 
Indeed, $st = 3^{2c}-1 \equiv 2 \pmod 3$, so $s^2t^2+1 \equiv 4+1 \equiv 2 \pmod 3$; $3^{3\ell} + 1 \equiv 1 \pmod 3$; and $3^\ell - 1 \equiv 2 \pmod 3$; and hence the left-hand side of \eqref{c=d/2} is congruent to $1$ modulo $3$. On the other hand, the right-hand side of \eqref{c=d/2} is congruent to $2$ modulo $3$. Therefore, $(s+1)(st+1)$ is strictly larger than $3^b$. Indeed, it is larger by a factor of at least $5$, because by \eqref{smallReeBound} we see that $|\mathcal{P}|/3^b$ is divisible by neither $2$ nor $3$ (to verify that $|\mathcal{P}|/3^b$ is odd, apply \cite[Lemma~2.5]{GuestPraeger}). Therefore, $|\mathcal{P}| > 5\cdot 3^b(st)^2 > 3^{b+1}(st)^2$. Since $6 \le b = 3d/2$, we have $d\ge 4$, and so $st \ge 3^d-1 \ge 3^{d-\delta''}$, where $\delta'' := 4 - \log(3^4-1)/\log(3)$. Hence, $|\mathcal{P}| > 3^{b+1+2d-2\delta''} = 3^{7b/3+1-2\delta''}$, and it follows that $|\mathcal{P}| > 3^{7b/3 + \epsilon}$, because $0.977 \approx 1-2\delta'' > \epsilon \approx 0.336$. \section{Proof of Theorem~\ref{thm:new}: $S$ a Ree group of type $^2\textnormal{F}_4$} \label{sec:2F4} In this final section, we adopt the hypothesis of Theorem~\ref{thm:new} while assuming that $S \cong {}^2\textnormal{F}_4(q)$, where $q=2^m$ with $m$ odd and at least $3$. Then \[ |S| = q^{12}(q^6+1)(q^4-1)(q^3+1)(q-1). \] Let $\mathcal{P}$ be the point set of $\Gamma$, and let $x\in \mathcal{P}$. The outer automorphism group of $S$ is cyclic, so we again observe that $G_x$ is a maximal subgroup of $G$ not containing $S$. A result of Malle~\cite{Malle} tells us that $G$ has no novelty maximal subgroups, so it again suffices to prove the theorem in the case where $G=S$. 
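The estimates below repeatedly use the facts that the $2$-part of $|S|$ is exactly $q^{12}=2^{12m}$ (the remaining displayed factors of $|S|$ are odd) and that, crudely, $|S| < 2^{30m}$. A quick numerical confirmation (a Python sketch, illustrative only):

```python
# |2F4(q)| = q^12 (q^6+1)(q^4-1)(q^3+1)(q-1) for q = 2^m with m odd:
# all factors other than q^12 are odd, so the 2-part of |S| is 2^(12m),
# and the crude bound |S| < 2^(30m) holds comfortably.
for m in (3, 5):
    q = 2**m
    order = q**12 * (q**6 + 1) * (q**4 - 1) * (q**3 + 1) * (q - 1)
    two_part = order & -order  # largest power of 2 dividing the order
    assert two_part == 2**(12 * m)
    assert order < 2**(30 * m)
```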
The maximal subgroups of $S$ (listed also in \cite[Section~4.9.3]{WilsonBook}) are, up to conjugacy, \begin{multicols}{2} \begin{itemize} \item[(i)] $P_1 := [q^{10}] : (\textnormal{Sz}(q) \times C_{q-1})$, \item[(ii)] $P_2 := [q^{11}] : \mathrm{GL}_2(q)$, \item[(iii)] $\textrm{SU}_3(q) : C_2$, \item[(iv)] $\textrm{PGU}_3(q) : C_2$, \item[(v)] $\textnormal{Sz}(q) \wr C_2$, \item[(vi)] $\textrm{Sp}_4(q) : C_2$, \item[(vii)] ${}^2\textnormal{F}_4(q_0)$, where $q = q_0^r$ with $r$ prime, \item[(viii)] $(C_{q + 1} \times C_{q+1}) : \textrm{GL}_2(3)$, \item[(ix)] $(C_{q \pm \sqrt{2q} + 1})^2 : [96]$, \item[(x)] $C_{q^2 +q+1\pm \sqrt{2q}(q+1)} : C_{12}$. \end{itemize} \end{multicols} The groups $P_1$ and $P_2$ are maximal parabolic subgroups of $S$. The group $P_1$ is a point stabiliser in the action of $S$ on the classical generalised octagon, whilst $P_2$ is a point stabiliser in the action of $S$ on the dual \cite[Section~4.9.4]{WilsonBook}. We must show that $S_x$ cannot be isomorphic to any of the groups in cases~(iii)--(x), and, further, that if $S_x$ is isomorphic to either $P_1$ or $P_2$, then $\Gamma$ is the classical generalised octagon or its dual. \subsection{Cases~(i)--(ii) with $\Gamma$ a generalised octagon} \label{2F41stCase} Suppose that $\Gamma$ is a generalised octagon and that $S_x$ is isomorphic to either $P_1$ or $P_2$. In either action, the group $S$ has rank five. That is, the point stabiliser $S_x$ has five orbits on the set $\mathcal P$ \cite[p.~167]{WilsonBook}. For $i\in\{0,2,4,6,8\}$, denote by $\Gamma_i(x)$ the set of points at distance $i$ from $x$ in the incidence graph of $\Gamma$. Since each of these sets is nonempty and $S_x$-invariant, the pigeonhole principle shows that each is an orbit of $S_x$. Since $S$ acts transitively on $\mathcal P$, we find that $S$ acts distance-transitively on $\mathcal{P}$.
Now the main result of \cite{BvMdistance} shows that $\Gamma$ is isomorphic to the classical generalised octagon associated with $S$, or its dual. \subsection{Case~(i) with $\Gamma$ a generalised hexagon} \label{large ree hexagon} Suppose that $\Gamma$ is a generalised hexagon, with $S_x \cong [q^{10}] : (\textnormal{Sz}(q) \times C_{q-1})$. Since $|\textnormal{Sz}(q)| = q^2(q^2+1)(q-1)$, \[ |\mathcal{P}| = |S:S_x| = (q^4-q^2+1)(q^3+1)(q^2+1)(q+1). \] Equivalently (subtracting $1$ from both sides), \begin{equation} \label{2F4P1pts-1} s^3t^2 + s^2t(t+1) + s(t+1) = q^{10} + q^9 + q^7 + q^6 + q^4 + q^3 + q. \end{equation} Now, $S$ acts primitively and distance-transitively on the points of a generalised octagon of order $(q,q^2)$, with point stabiliser $[q^{10}] : (\textnormal{Sz}(q) \times C_{q-1})$ and nontrivial subdegrees \cite[Section~4.9.4]{WilsonBook} \begin{equation} \label{subdegrees2F4points} n_1 := q(q^2+1),\quad n_2 := q^4(q^2+1),\quad n_3 := q^7(q^2+1),\quad n_4 := q^{10}. \end{equation} Recall the notation $\Gamma_i(x)$ from Section~\ref{2F41stCase}. Then we have \cite[p.~19]{HendrikBook} \begin{equation} \label{distancesHexagon} |\Gamma_2(x)| = s(t+1),\quad |\Gamma_4(x)| = s^2t(t+1),\quad |\Gamma_6(x)| = s^3t^2, \end{equation} and $S_x$ preserves the sets $\Gamma_i(x)$. Hence, each $\Gamma_i(x)$ is a union of $S_x$-orbits, and so for $i\in\{2,4,6\}$, we have $|\Gamma_i(x)| = \sum_{k=1}^4 \delta_{i,k} n_k$, for some $\delta_{i,k} \in \{0,1\}$ (with $\delta_{i,k}\delta_{j,k}=0$ for $i \neq j$). We show that this leads to a contradiction. {\bf Claim~$1$:} $|\Gamma_2(x)| = n_1$. The proof of the claim is by contradiction. If not, then $|\Gamma_2(x)| \ge n_2 = q^4(q^2+1)$.
Since $s,t \ge 2$, and so in particular $t \ge \tfrac{2}{3}(t+1)$, it follows that \begin{align*} |\Gamma_4(x)| &\ge \tfrac{2}{3}s^2(t+1)^2 = \tfrac{2}{3} |\Gamma_2(x)|^2 \ge \tfrac{2}{3} q^8(q^2+1)^2, \\ |\Gamma_6(x)| &\ge 2s^2t^2 \ge \tfrac{4}{3} |\Gamma_4(x)| \ge \tfrac{8}{9} q^8(q^2+1)^2. \end{align*} Since the left-hand side of \eqref{2F4P1pts-1} is $|\Gamma_2(x)| + |\Gamma_4(x)| + |\Gamma_6(x)|$, this implies that \[ \tfrac{14}{9} q^8(q^2+1)^2 + q^4(q^2+1) \le q^{10} + q^9 + q^7 + q^6 + q^4 + q^3 + q, \] which is certainly false for $q \ge 8$. {\bf Claim~$2$:} $|\Gamma_4(x)| = n_2$. The proof is again by contradiction. If not, then $|\Gamma_4(x)| \ge n_3 = q^7(q^2+1)$, because $|\Gamma_2(x)| = n_1 = q(q^2+1)$ by Claim~1. This implies the following inequality, which is certainly false for $q \ge 8$: \[ q^6 = \frac{q^7(q^2+1)}{q(q^2+1)} \le \frac{|\Gamma_4(x)|}{|\Gamma_2(x)|} = \frac{s^2t(t+1)}{s(t+1)} = st < s(t+1) = q(q^2+1). \] By Claims~1 and~2, we must have $|\Gamma_6(x)| = n_3+n_4 = q^7(q^3+q^2+1) > q^8(q^2+1)$, and hence \[ s > \frac{s^3t^2}{s^2t(t+1)} = \frac{|\Gamma_6(x)|}{|\Gamma_4(x)|} > \frac{q^8(q^2+1)}{q^4(q^2+1)} = q^4. \] This is impossible, because $s(t+1) = q(q^2+1)$ by Claim~1 (and hence certainly $s < q(q^2+1) < q^4$). \subsection{Case~(ii) with $\Gamma$ a generalised hexagon} Suppose that $\Gamma$ is a generalised hexagon, with $S_x \cong [q^{11}] : \textrm{GL}_2(q)$. Since $|\textrm{GL}_2(q)| = q(q^2-1)(q-1)$, \[ |\mathcal{P}| = |S:S_x| = (q^4-q^2+1)(q^2+1)^2(q^3+1). \] Equivalently (subtracting $1$ from both sides), \begin{equation} \label{2F4P1pts-1ii} s^3t^2 + s^2t(t+1) + s(t+1) = q^{11} + q^9 + q^8 + q^6 + q^5 + q^3 + q^2. \end{equation} Now, $S$ acts primitively and distance-transitively with stabiliser $[q^{11}] : \textrm{GL}_2(q)$ on the points of a generalised octagon of order $(q^2,q)$, namely the point--line dual of the generalised octagon from case~(i).
The nontrivial subdegrees are \cite[p.~167]{WilsonBook} \begin{equation} \label{subdegrees2F4pointsii} n_1 := q^2(q+1),\quad n_2 := q^5(q+1),\quad n_3 := q^8(q+1),\quad n_4 := q^{11}. \end{equation} For $x\in \mathcal{P}$, we again have \eqref{distancesHexagon}, and $S_x$ must preserve the sets $\Gamma_i(x)$, $i\in\{2,4,6\}$, so each $|\Gamma_i(x)|$ is equal to a sum of the subdegrees $n_1,\dots,n_4$, as in Section~\ref{large ree hexagon}. We show that this leads to a contradiction. {\bf Claim~$1$:} $|\Gamma_2(x)| = n_1$. The proof of the claim is by contradiction. If not, then $|\Gamma_2(x)| \ge n_2 = q^5(q+1)$. Since $s,t \ge 2$, and so in particular $t \ge \tfrac{2}{3}(t+1)$, it follows that \begin{align*} |\Gamma_4(x)| &\ge \tfrac{2}{3}s^2(t+1)^2 = \tfrac{2}{3} |\Gamma_2(x)|^2 \ge \tfrac{2}{3} q^{10}(q+1)^2, \\ |\Gamma_6(x)| &\ge 2s^2t^2 \ge \tfrac{4}{3} |\Gamma_4(x)| \ge \tfrac{8}{9} q^{10}(q+1)^2. \end{align*} Since the left-hand side of \eqref{2F4P1pts-1ii} is $|\Gamma_2(x)| + |\Gamma_4(x)| + |\Gamma_6(x)|$, this implies the following inequality, which is false for $q \ge 8$: \[ \tfrac{14}{9} q^{10}(q+1)^2 + q^5(q+1) \le q^{11} + q^9 + q^8 + q^6 + q^5 + q^3 + q^2. \] {\bf Claim~$2$:} $|\Gamma_4(x)| = n_2$. The proof is again by contradiction. If not, then $|\Gamma_4(x)| \ge n_3 = q^8(q+1)$, because $|\Gamma_2(x)| = n_1 = q^2(q+1)$ by Claim~1. This implies the following inequality, which is false for $q \ge 8$: \[ q^6 = \frac{q^8(q+1)}{q^2(q+1)} \le \frac{|\Gamma_4(x)|}{|\Gamma_2(x)|} = \frac{s^2t(t+1)}{s(t+1)} = st < s(t+1) = q^2(q+1). \] By Claims~1 and~2, we must have $|\Gamma_6(x)| = n_3+n_4 = q^8(q^3+q+1) > q^9(q^2+1)$, and hence \[ s > \frac{s^3t^2}{s^2t(t+1)} = \frac{|\Gamma_6(x)|}{|\Gamma_4(x)|} > \frac{q^9(q^2+1)}{q^5(q+1)} = \frac{q^4(q^2+1)}{q+1}. \] This, however, contradicts $s(t+1) = q^2(q+1)$ (namely Claim~1), because certainly $s < q^2(q+1) < q^4(q^2+1)/(q+1)$.
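For a fixed field size, the subdegree argument above can also be verified exhaustively by machine: for $q=8$, no assignment of the subdegrees $n_1,\dots,n_4$ of \eqref{subdegrees2F4pointsii} to the distance sets $\Gamma_2(x),\Gamma_4(x),\Gamma_6(x)$ is consistent with \eqref{distancesHexagon} for integers $s,t\ge2$. A brute-force Python sketch (illustrative only; the claims above cover all $q \ge 8$):

```python
from itertools import product

# Subdegrees n_1, ..., n_4 of the dual octagon action for q = 8.
q = 8
n = [q**2 * (q + 1), q**5 * (q + 1), q**8 * (q + 1), q**11]

solutions = []
for labels in product((2, 4, 6), repeat=4):   # which Gamma_i receives n_k
    if set(labels) != {2, 4, 6}:
        continue  # each distance set must be a nonempty union of orbits
    A = sum(nk for nk, lab in zip(n, labels) if lab == 2)  # s(t+1)
    B = sum(nk for nk, lab in zip(n, labels) if lab == 4)  # s^2 t(t+1)
    C = sum(nk for nk, lab in zip(n, labels) if lab == 6)  # s^3 t^2
    if B % A:
        continue
    st = B // A            # B/A = st
    s = A - st             # A = s + st
    if s < 2 or st % s:
        continue
    t = st // s
    if t >= 2 and s**3 * t**2 == C:
        solutions.append((s, t))
print(solutions)  # no consistent assignment exists
```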
\subsection{Cases (iii)--(x)} We now deal with cases~(iii)--(x), for which we use Lemma~\ref{2part}(i) to contradict the equality $|\mathcal{P}| = |S:S_x|$. First suppose that $S_x$ is isomorphic to either $\textrm{SU}_3(q) : C_2$ or $\textrm{PGU}_3(q) : C_2$. In either case, we have $|S_x| = 2q^3(q^3+1)(q^2-1)$, and hence \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{2} q^9(q^6+1)(q^2+1)(q-1) = 2^{9m-1}(2^{6m}+1)(2^{2m}+1)(2^m-1) < 2^{18m+1}. \] However, Lemma~\ref{2part}(i) with $a=9m-1$ gives $|\mathcal P | > 2^{27m-3}$, which is a contradiction. If $S_x \cong \textnormal{Sz}(q) \wr C_2$, then $|S_x| = 2q^4(q^2+1)^2(q-1)^2$, so \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{2}q^8(q^4-q^2+1)(q^3+1)(q+1) = 2^{8m-1}(2^{4m}-2^{2m}+1)(2^{3m}+1)(2^m+1) < 2^{16m+1}, \] contradicting Lemma~\ref{2part}(i) with $a=8m-1$, which gives $|\mathcal{P}| > 2^{24m-3}$. If $S_x \cong \mathrm{Sp}_4(q) : C_2$, then $|S_x| = 2q^4 (q^4-1)(q^2-1)$, so \[ |\mathcal{P}| = |S:S_x| = \tfrac{1}{2} q^8(q^6+1)(q^2-q+1) = 2^{8m-1}(2^{6m}+1)(2^{2m}-2^m+1) < 2^{16m}, \] contradicting Lemma~\ref{2part}(i) with $a=8m-1$, which gives $|\mathcal{P}| > 2^{24m-3}$. Now suppose that $S_x \cong {}^2\textnormal{F}_4(q_0)$, where $q = q_0^r$ with $r$ prime. Writing $q_0 = 2^\ell$, we have \[ |\mathcal{P}| = |S:S_x| = 2^{12\ell(r-1)} \frac{(2^{6r\ell}+1)(2^{4r\ell}-1)(2^{3r\ell}+1)(2^{r\ell}-1)}{(2^{6\ell}+1)(2^{4\ell}-1)(2^{3\ell}+1)(2^\ell-1)} < 2^{26\ell(r-1)+ 4}. \] However, Lemma~\ref{2part}(i) with $a=12\ell(r-1)$ gives $|\mathcal P| > 2^{36\ell(r-1)}$, a contradiction (because $\ell \ge 1$). Finally, suppose that $S_x$ is as in one of the cases~(viii)--(x). Then the highest power of $2$ dividing $|S_x|$ is at most $2^5$ (with equality in case~(ix)), so $|\mathcal{P}|=|S:S_x|$ is divisible by $2^{12m-5}$, and Lemma~\ref{2part}(i) therefore gives $|\mathcal{P}| > 2^{36m-15}$.
On the other hand, we certainly have $|\mathcal{P}| < |S| < 2^{30m}$, which is a contradiction (because $36m-15 \le 30m$ if and only if $m \leq 5/2$, but we have $m \ge 3$).
\section{Introduction} \label{sec:intro} Soft continuum manipulator arms are lightweight, cheap to make, and their inherent compliance carries the promise to enable safe interaction with humans, delicate objects, and the environment~\cite{immega1995ksi,hannan2001elephant,mcmahan2005design,grissom2006design,cianchetti2013stiff,mahl2014variable,hughes2016soft,phillips2018dexterous}. These properties would make them ideal platforms for tasks involving physical human-robot interaction such as feeding~\cite{miller1998assistive} or handling products in a warehouse~\cite{correll2016analysis}. Yet, so far, the real-world application of soft manipulators has been limited. This is due to the difficulty involved in controlling such systems, as they exhibit infinite degrees-of-freedom, nonlinear material properties, and large deflections under loading \cite{george2018control}. These characteristics greatly complicate manipulation tasks such as pick and place, which require consistent control performance regardless of payload. Control-oriented models that describe soft manipulator behavior under varying loading conditions could enable the automation of such tasks. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/overview-2.pdf} \caption{ A soft continuum manipulator is tasked with following a reference trajectory $r[k]$ while carrying an unknown payload. At each time step a model predictive controller computes the optimal input $u[k]$ to follow the reference trajectory based on a linear Koopman operator model, and the position of the end effector $y[k]$ is measured by a motion capture system.
An observer computes a payload estimate $\hat{w}$ based on the previously measured inputs and outputs, which is then incorporated into the state of the model $z[k]$ via a lifting function.} \label{fig:overview} \end{figure} Recently, data-driven modeling techniques have emerged as a powerful tool to address the challenge of modeling soft continuum manipulators. A primary benefit of these techniques is that a description of an input/output relationship can be obtained from system observations without explicitly defining a system state. This is especially useful for obtaining reduced-order models of continuum robots that have essentially infinite-dimensional kinematics, without making simplifying physical assumptions such as piecewise constant curvature \cite{webster2010design}, pseudo-rigid-body mechanics \cite{howell1996evaluation}, quasi-static behavior \cite{bruder2018iros, thuruthel2018model, gravagne2003large, trivedi2008geometrically}, or simplified geometry \cite{sedal2019comparison, bruder2017model, sedal2017constitutive, bishop2012parallel}. A potential downside of data-driven modeling is that it requires system behavior to be observed under a wide range of operating conditions, including those that may be dangerous to a robot or its surroundings. Fortunately, compared to conventional rigid-bodied manipulators, soft manipulators pose much less of a physical threat to themselves and their surroundings. It is hence possible to automatically and safely collect data under a wide range of operating conditions, making soft robots well suited for data-driven modeling approaches. Within the class of data-driven methods, deep learning or neural networks have been the primary choice for describing the input/output behavior of soft manipulators. 
For instance, \citet{satheeshbabu2019open} used deep reinforcement learning to achieve open-loop position control of a soft manipulator comprised of fiber-reinforced actuators; \citet{hyatt2019model} utilized a linearization of a neural network model and model predictive control to control the position of a bellows-actuated manipulator; and \citet{thuruthel2018model} used a combination of a recurrent neural network and supervised reinforcement learning to achieve closed-loop control of a pneumatically-driven soft manipulator. This controller was shown to compensate for disturbances such as end effector loading. Moving forward, models that are able to predict system behavior as a function of load, rather than treating load as a disturbance, are required to improve system performance with regard to accuracy and speed. An alternative to deep learning and artificial neural networks is a data-driven modeling approach based on Koopman operator theory. This approach yields a linear model that is identified via linear regression, a convex optimization problem, and that can be used directly for control. These properties are in stark contrast to deep learning or neural network models, whose creation requires solving a nonlinear optimization problem for which global convergence may not be guaranteed \cite{boyd2004convex}, and in which the control input usually appears non-linearly. The Koopman operator, on which this approach is based, describes the evolution of scalar-valued functions along trajectories of a nonlinear dynamical system, acting as a linear embedding of the nonlinear dynamics in an infinite-dimensional function space \cite{budivsic2012applied}\cite[Ch.~7]{brunton2019data}. An approximation of this infinite-dimensional operator is identified via linear regression on input/output data \cite{bruder2019nonlinear, mauroy2016linear}, providing a linear model representation that is compatible with established linear control techniques \cite{Abraham-RSS-17, korda2018linear}.
A Koopman-based approach has been successfully used to control several robotic systems such as a Sphero SPRK \cite{Abraham-RSS-17}, a quadcopter \cite{abraham2019active}, and a robotic fish \cite{mamakoukas2019local}. Koopman-based system identification and control has also been successfully demonstrated on a soft continuum manipulator to control the 2D projection of its end-effector \cite{Vasudevan-RSS-19}. \textbf{This paper presents a Koopman-based framework that explicitly incorporates loading conditions into the model to enable real-time control design.} By incorporating loads into the model as states, our approach is able to estimate loading online via an observer within the control loop (see Fig.~\ref{fig:overview}). This observer infers the most likely value of the loading condition given a series of input/output measurements. The knowledge that is gained in this process enables consistent control performance under a wide range of loading conditions. In fact, the idea of estimating loading conditions has been explored for rigid-bodied robots in the past \cite{colome2013external,Funkhouser-RSS-19}. By using our proposed Koopman-based approach, we are able to transfer these rigid-body results into the world of soft robots, which are subject to continuous deformation under load. \textbf{Using this approach, we demonstrate real-time, fully autonomous control of a pneumatically actuated soft continuum manipulator}. In several validation experiments our controller proves itself to be more accurate and more precise across various payloads than several other model-based controllers which do not incorporate loading. To the best of the authors' knowledge, this paper presents the first implementation of a closed-loop controller that explicitly accounts for loading on a soft continuum manipulator and the first demonstration of autonomous pick and place of objects of unknown mass on a soft continuum manipulator.
The rest of this paper is organized as follows: Section \ref{sec:sysid} formally introduces the Koopman operator and describes how to construct models of nonlinear dynamical systems from data. Section \ref{sec:loadest} introduces a method for incorporating loading conditions into the model. Section \ref{sec:mpc} describes our Koopman-based model predictive controller and a method for estimating loading conditions online. Section \ref{sec:experiments} describes the set of experiments used to evaluate the performance of our Koopman-based model predictive controller on a pneumatically actuated soft continuum manipulator, including trajectory following while carrying an unknown payload, and autonomously sorting objects by mass. Section \ref{sec:discussion} discusses the results of these experiments and concludes the paper. \section{System Identification} \label{sec:sysid} This section describes a system identification method to construct linear state space models of nonlinear controlled discrete dynamical systems from input/output data. In particular, rather than describing the evolution of a dynamical system's state directly, which may be a nonlinear mapping, the (linear) Koopman operator describes the evolution of scalar-valued functions of the state, which is a linear mapping in the infinite-dimensional space of all scalar-valued functions. \subsection{The Koopman Operator} \label{sec:koop_rep} Consider an input/output system governed by the following differential equation for the output: \begin{align} \dot{y}(t) &= F ( y(t) , u(t) ) \label{eq:nlsys} \end{align} where $y(t) \in Y \subset \mathbb{R}^n$ is the output and $u(t) \in U \subset \mathbb{R}^m$ is the input of the system at time $t \geq 0$, ${F}$ is a continuously differentiable function, and $Y$ and $U$ are compact subsets.
Denote by $\phi_\tau (y_0, u_0)$ the \emph{flow map}, which is the solution to \eqref{eq:nlsys} at time $\tau$ when beginning with the initial condition $y_0$ at time $0$ and a constant input $u_0$ applied for all time between $0$ and $\tau$. The system can be lifted to an infinite-dimensional function space $\mathcal{F}$ composed of all square-integrable real-valued functions with compact domain $Y \times U \subset \mathbb{R}^{n+m}$. Elements of $\mathcal{F}$ are called \emph{observables}. In $\mathcal{F}$, the flow of the system is characterized by the set of Koopman operators $\mathcal{K}_\tau : \mathcal{F} \to \mathcal{F}$, for each $\tau \geq 0$, which describe the evolution of the observables ${f \in \mathcal{F}}$ along the trajectories of the system according to the following definition: \begin{align} \mathcal{K}_\tau f = f \circ \phi_\tau, \label{eq:koopman} \end{align} where $\circ$ indicates function composition. A consequence of this definition is that for a specific time step $\tau$, the Koopman operator $\mathcal{K}_\tau$ defines an infinite-dimensional linear discrete dynamical system that advances the value of an observable by $\tau$, \begin{align} f( y(t + \tau) , \tilde{u} ) &= \mathcal{K}_\tau f( y(t) , \tilde{u} ) \label{eq:Uf} \end{align} where $\tilde{u}$ is a constant input over the interval $[t , t + \tau]$. Since this is true for \emph{any} observable function $f$, the Koopman operator can be used to advance the output itself by applying it to the set of functions $\{ f_i : f_i (y(t) , \tilde{u}) = y_i (t) \}_{i=1}^n$, advancing their values according to \eqref{eq:Uf}, and stacking the results as a vector: \begin{align} y( t + \tau ) &= \begin{bmatrix} \mathcal{K}_{\tau} f_1 \left( y (t) , \tilde{u} \right) & \cdots & \mathcal{K}_{\tau} f_n \left( y (t) , \tilde{u} \right) \end{bmatrix}^\top .
\label{eq:ystep} \end{align} In this way, the Koopman operator provides an infinite-dimensional linear representation of a nonlinear dynamical system \cite{budivsic2012applied}. \subsection{Koopman-based System Identification} \label{sec:koopid} Since the Koopman operator is an infinite-dimensional object, we have to settle for its projection onto a finite-dimensional subspace, which can be represented as a matrix. Using the Extended Dynamic Mode Decomposition (EDMD) algorithm \cite{williams2015data, mauroy2016linear, mauroy2017koopman}, we identify a finite-dimensional matrix approximation of the Koopman operator via linear regression applied to observed data. The remainder of this subsection describes the mathematical underpinnings of this process. Define ${\bar{\mathcal{F}} \subset \mathcal{F}}$ to be the subspace of $\mathcal{F}$ spanned by ${N>n+m}$ linearly independent basis functions ${ \{ \psi_i : \mathbb{R}^{n+m} \to \mathbb{R} \}_{i=1}^N}$, and define the \emph{lifting function} ${\psi : \mathbb{R}^{n+m} \to \mathbb{R}^N}$ as: \begin{align} \psi(y,u) &:= \begin{bmatrix} \psi_1(y,u) & \cdots & \psi_N (y,u) \end{bmatrix}^\top. \label{eq:lift} \end{align} Any observable $\bar{f} \in \bar{\mathcal{F}}$ can be expressed as a linear combination of the basis functions \begin{align} \bar{f} &= \theta_1 \psi_1 + \cdots + \theta_N \psi_N \label{eq:fexpanded} \end{align} where each $\theta_i \in \mathbb{R}$ is a constant. Thus $\bar{f}$ evaluated at $y,u$ can be concisely expressed as \begin{align} \bar{f}(y,u) &= \theta^\top \psi (y,u) \label{eq:fvec} \end{align} where ${\theta := [ \theta_1 \, \cdots \, \theta_N ]^\top}$ acts as the \emph{vector representation} of $\bar{f}$. Given this vector representation for observables, a linear operator on $\bar{\mathcal{F}}$ can be represented as an ${N \times N}$ matrix.
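The basis $\{\psi_i\}$ is left unspecified by the theory; a common concrete choice (an assumption made here purely for illustration) is the set of all monomials up to some fixed degree in the components of $(y,u)$. A minimal Python sketch of the corresponding lifting function \eqref{eq:lift}:

```python
import numpy as np
from itertools import combinations_with_replacement

def lift(y, u, degree=2):
    # Lifting function psi(y,u): all monomials of degree <= `degree` in the
    # components of (y,u). The monomial basis is an illustrative choice;
    # the theory only requires linearly independent basis functions.
    x = np.concatenate([np.atleast_1d(y), np.atleast_1d(u)])
    feats = [1.0]  # the constant (degree-0) monomial
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), deg):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

# For n = 2 outputs and m = 1 input there are N = 10 monomials of degree <= 2.
psi = lift(np.array([0.5, -1.0]), np.array([2.0]))
print(psi.shape)  # (10,)
```

In this example $N = \binom{n+m+2}{2} = 10$; the basis size grows quickly with the dimension and degree, which motivates modest degrees in practice.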
We denote by $\bar{\mathcal{K}}_\tau \in \mathbb{R}^{N \times N}$ the approximation of the Koopman operator on $\bar{\mathcal{F}}$, which operates on observables via matrix multiplication: \begin{align} \bar{\mathcal{K}}_\tau \theta = \theta' \end{align} where $\theta , \theta'$ are each vector representations of observables in $\bar{\mathcal{F}}$. Our goal is to find a $\bar{\mathcal{K}}_\tau$ that describes the action of the infinite-dimensional Koopman operator $\mathcal{K}_\tau$ as accurately as possible in the $L^2$-norm sense on the finite-dimensional subspace $\bar{\mathcal{F}}$ of observables. For $\bar{\mathcal{K}}_{\tau}$ to perfectly mimic the action of $\mathcal{K}_\tau$ on any observable ${\bar{f} \in \bar{\mathcal{F}} \subset \mathcal{F}}$, according to \eqref{eq:koopman} the following should be true for all $y \in Y$ and $u \in U$, \begin{align} \bar{\mathcal{K}}_\tau \bar{f}(y,u) &= \bar{f} \circ \phi_\tau(y,u) \label{eq:first} \\ ( \bar{\mathcal{K}}_\tau {\theta} )^\top {\psi}(y,u) &= {\theta}^\top {\psi} \circ \phi_\tau(y,u) \label{eq:second}\\ \bar{\mathcal{K}}_\tau^\top \psi(y,u) &= {\psi} \circ \phi_\tau(y,u), \label{eq:UbarEq} \end{align} where \eqref{eq:second} follows by substituting \eqref{eq:fvec}, and \eqref{eq:UbarEq} follows since the result holds for all $\bar{f}$. Since \eqref{eq:UbarEq} is linear in $\bar{\mathcal{K}}_\tau$, for a given ${y \in Y}$ and $u \in U$, solving it for $\bar{\mathcal{K}}_\tau$ yields the best approximation of $\mathcal{K}_\tau$ on $\bar{\mathcal{F}}$ in the $L^2$-norm sense \cite{penrose1956best}: \begin{align} \bar{\mathcal{K}}_\tau = \left( {\psi}(y,u)^\top \right)^\dagger ( {\psi} \circ \phi_\tau(y,u) )^\top \label{eq:Uapprox} \end{align} where superscript $\dagger$ denotes the Moore-Penrose pseudoinverse.
To approximate the Koopman operator from a set of experimental data, we take $K$ discrete measurements in the form of so-called ``snapshots'' ${ (a[k] , b[k] , u[k]) }$ for each ${ k \in \{1,\ldots,K\} }$ where \begin{align} a[k] &:= y[k] \\ b[k] &:= \phi_{T_s} (y[k] , u[k]), \label{eq:ab} \end{align} $y[k]$ denotes the output corresponding to the $k^\text{th}$ measurement, $u[k]$ is the constant input applied between $a[k]$ and $b[k]$, and $T_s$ is the sampling period, which is assumed to be identical for all snapshots. Note that consecutive snapshots do not have to be generated by consecutive measurements. We then lift all of the snapshots according to \eqref{eq:lift} and compile them into the following ${K \times N}$ matrices: \begin{align} & \Psi_a := \begin{bmatrix} {\psi}(a[1] , u[1])^\top \\ \vdots \\ {\psi}(a[K] , u[K])^\top \end{bmatrix}, && \Psi_b := \begin{bmatrix} {\psi}(b[1] , u[1])^\top \\ \vdots \\ {\psi}(b[K] , u[K])^\top \end{bmatrix}. \label{eq:Psi} \end{align} $\bar{\mathcal{K}}_{T_s}$ is chosen so that it yields the least-squares best fit to all of the observed data, which, following from \eqref{eq:Uapprox}, is given by \begin{align} \bar{\mathcal{K}}_{T_s} &:= \Psi_a^\dagger \Psi_b. \label{eq:Ubar} \end{align} Sometimes a more accurate model can be obtained by incorporating delays into the set of snapshots. To incorporate these delays, we modify the snapshots to have the following form \begin{align} & a[k] := \begin{bmatrix} y[k] \\ \vdots \\ y[k-d] \\ u[k-1] \\ \vdots \\ u[k-d] \end{bmatrix}, && b[k] := \begin{bmatrix} \phi_{T_s} (y[k],u[k]) \\ \vdots \\ y[k-d+1] \\ u[k-1] \\ \vdots \\ u[k-d] \end{bmatrix} \label{eq:snapd} \end{align} where $d$ is the number of delays. We then modify the domain of the lifting function such that ${ \psi : \mathbb{R}^{\left( n+(n+m)d \right) \times m} \to \mathbb{R}^{N} }$ to accommodate the larger dimension of the snapshots. 
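As a concrete illustration, the EDMD fit of \eqref{eq:Psi}--\eqref{eq:Ubar} reduces to a few lines of linear algebra. The following sketch (in Python with NumPy, with a user-supplied basis map \texttt{g} standing in for the chosen basis functions) is illustrative rather than the implementation used in this work:

```python
import numpy as np

def lift(y, u, g):
    """Lifting function psi(y, u) = [g(y); u], cf. eq. (eq:lift)."""
    return np.concatenate([g(y), u])

def edmd_fit(snapshots, g):
    """EDMD approximation of the Koopman operator: build the lifted data
    matrices Psi_a, Psi_b row by row and return K = pinv(Psi_a) @ Psi_b,
    cf. eq. (eq:Ubar)."""
    Psi_a = np.vstack([lift(a, u, g) for (a, b, u) in snapshots])
    Psi_b = np.vstack([lift(b, u, g) for (a, b, u) in snapshots])
    return np.linalg.pinv(Psi_a) @ Psi_b
```

For snapshot data generated by a system that is already linear in the lifted coordinates, this least-squares fit recovers the transition matrix exactly; for general nonlinear data it returns the best $L^2$ fit on the span of the basis.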
Once these snapshots have been assembled, the model identification procedure is identical to the case without delays. \subsection{Linear Model Realization Based on the Koopman Operator} \label{sec:linid} For dynamical systems with inputs, we are interested in using the Koopman operator to construct discrete linear models of the following form \begin{equation} \begin{aligned} z[j+1] &= A z[j] + B u[j] \\ y[j] &= C z[j] \label{eq:linSys} \end{aligned} \end{equation} for each $j \in \mathbb{N}$, where $y[0]$ is the initial output, $z[0]$ is the initial state, and $u[j] \in \mathbb{R}^m$ is the input at the $j^{\text{th}}$ step. Specifically, we desire a representation in which the input appears \emph{linearly}, because models of this form are amenable to real-time, convex optimization techniques for feedback control design, as we describe in Section \ref{sec:mpc}. With a suitable choice of basis functions $\{ \psi_i \}_{i=1}^N$, $\bar{\mathcal{K}}_{T_s}$ can be constructed such that it is decomposable into a linear system representation like \eqref{eq:linSys}. One way to achieve this is to define the first $N-m$ basis functions as functions of the output only, and the last $m$ basis functions as indicator functions on each component of the input, \begin{align} \psi_i(y,u) &= g_i(y), \quad \forall i \in \{1,\ldots, N-m\} \label{eq:gi} \\ \psi_i(y,u) &= u_{i-(N-m)}, \quad \forall i \in \{N-m+1, \ldots, N\} \label{eq:ui} \end{align} where $g_i : \mathbb{R}^n \to \mathbb{R}$ and $u_i$ denotes the $i^{\text{th}}$ element of $u$. This choice ensures that the input only appears in the last $m$ components of $\psi(y,u)$, and an $(N-m)$-dimensional state can be defined as $z = g( y ) \in \mathbb{R}^{N-m}$, where ${ g : \mathbb{R}^{n} \to \mathbb{R}^{N-m} }$ is defined as \begin{align} g( y ) &:= \begin{bmatrix} g_1 (y) & \cdots & g_{N-m} (y) \end{bmatrix}^\top.
\end{align} Following from \eqref{eq:Ubar}, the transpose of $\bar{\mathcal{K}}_{T_s}$ is the best transition matrix between the elements of the lifted snapshots in the $L^2$-norm sense. This implies that given the lifting functions defined in \eqref{eq:gi} and \eqref{eq:ui}, $\bar{\mathcal{K}}_{T_s}$ is the minimizer of \begin{align} \underset{\check{\mathcal{K}}}{\min} \sum_{k=1}^K \left\lVert {\check{\mathcal{K}}}^\top \begin{bmatrix} g( a[k] ) \\ u[k] \end{bmatrix} - \begin{bmatrix} g( b[k] ) \\ u[k] \end{bmatrix} \right\rVert_2^2. \label{eq:min-K} \end{align} Also note that given ${ z = g( y ) }$ as the state of our linear system, the best realizations of $A$ and $B$ in the $L^2$-norm sense are the minimizers of \begin{align} \underset{\check{A}, \check{B}}{\min} \sum_{k=1}^K \left\lVert \check{A} g(a[k]) + \check{B} u[k] - g(b[k]) \right\rVert_2^2 . \label{eq:min-AB} \end{align} Therefore, by comparing \eqref{eq:min-K} and \eqref{eq:min-AB}, one can confirm that $A$ and $B$ are embedded in $\bar{\mathcal{K}}_{T_s}$ and can be isolated by partitioning it as follows: \begin{align} \bar{\mathcal{K}}_{T_s}^\top &= \begin{bmatrix} A_{(N-m) \times (N-m)} & B_{(N-m) \times m} \\ O_{m \times (N-m)} & I_{m \times m} \end{bmatrix} \label{eq:AB} \end{align} where $I$ denotes an identity matrix, $O$ denotes a zero matrix, and the subscripts denote the dimensions of each matrix. We can also define the first $n$ basis functions as indicator functions on each component of the output, i.e. \begin{align} g_i(y) &= y_i \quad \forall i \in \{1, \ldots, n \} \label{eq:psi-1-n}. \end{align} Then, $C$ is defined as the matrix which projects the first $n$ elements of the state onto the output-space, \begin{align} C &= \begin{bmatrix} I_{n \times n} & O_{n \times (N-m-n)} \end{bmatrix}.
\label{eq:C} \end{align} \subsection{Incorporating Loading Conditions Into the Model} \label{sec:loadest} For robots that interact with objects or their environment, understanding the effect of external loading on their dynamics is critical for control. These loading conditions alter the dynamics of a system, but are generally not directly observable. This poses a challenge for model-based control, which relies on an accurate dynamical model to choose suitable control inputs for a given task. We desire a way to incorporate loading conditions into our dynamic system model and to estimate them online. We can achieve this by including them within the states of our Koopman-based lifted system model, and then constructing an online observer to estimate their values. This strategy utilizes the underlying model to infer the most likely value of the loading conditions given past input/output measurements. Let $w \in \mathbb{R}^{p}$ be a parametrization of loading conditions. For example, $w$ might specify the mass at the end effector of a manipulator arm. We incorporate $w$ into the state $z$ using a new lifting function ${ \gamma : \mathbb{R}^{n \times p} \to \mathbb{R}^{(N-m)(p+1)} }$, which accepts $w$ as a second input and is defined as: \begin{align} z &= \gamma(y,w) = \begin{bmatrix} g(y) \\ g(y) w_1 \\ \vdots \\ g(y) w_p \end{bmatrix} = \underbrace{ \begin{bmatrix} g(y) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & g(y) \end{bmatrix} }_{\Gamma(y)} \begin{bmatrix} 1 \\ w \end{bmatrix} \label{eq:gamma} \end{align} where ${ \Gamma(y) \in \mathbb{R}^{\left((N-m)(p+1)\right) \times (p+1)} }$ is the matrix formed by diagonally concatenating $g(y)$, ${p+1}$ times, and $w_i$ denotes the $i^\text{th}$ element of $w$. Note that because this lifting function requires the loading condition $w$ as an input, it must also be included in the snapshots ${ (a[k] , b[k] , u[k] , w[k] )}$ to construct a Koopman model that accounts for loading. 
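To make these constructions concrete, the partition of $\bar{\mathcal{K}}_{T_s}^\top$ in \eqref{eq:AB} and the load-augmented lifting $\Gamma(y)$ of \eqref{eq:gamma} can be sketched as follows. The helper names are hypothetical; the snippet assumes the basis ordering of \eqref{eq:gi}--\eqref{eq:psi-1-n}, with $C$ sized to the $(N-m)$-dimensional state:

```python
import numpy as np

def split_model(K, N, n, m):
    """Read A, B, C off the identified operator: the top-left (N-m)x(N-m)
    block of K^T is A, the top-right (N-m)xm block is B, and C selects the
    first n components of the lifted state."""
    Kt = K.T
    A = Kt[:N - m, :N - m]
    B = Kt[:N - m, N - m:]
    C = np.hstack([np.eye(n), np.zeros((n, N - m - n))])
    return A, B, C

def gamma(gy, w):
    """Load-augmented lifting z = Gamma(y) [1; w] = [g(y); g(y) w_1; ...; g(y) w_p],
    given the already-lifted vector gy = g(y)."""
    return np.kron(np.concatenate([[1.0], w]), gy)
```

The Kronecker product reproduces the block structure of $\Gamma(y) \, [1 \;\; w^\top]^\top$: one copy of $g(y)$ followed by one copy scaled by each component of $w$.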
Although $w$ is not measured directly, its value can be inferred based on the system model and past input-output measurements. We construct an observer that estimates the value of $w$ at the $j^\text{th}$ timestep by solving a linear least-squares problem using data from the $N_w$ previous timesteps. Notice that the output at the $j^\text{th}$ timestep $y[j]$ can be expressed in terms of the input $u[j-1]$, the output $y[j-1]$, and the load $w[j-1]$ at the previous timestep by combining the system model equations of \eqref{eq:linSys} and then substituting \eqref{eq:gamma} for $z[j-1]$, \begin{align} y[j] &= C A z[j-1] + C B u[j-1] \\ &= C A \Gamma( y[j-1] ) \begin{bmatrix} 1 \\ w[j-1] \end{bmatrix} + C B u[j-1]. \label{eq:y-from-zu} \end{align} Solving for the best estimate of $w[j-1]$ in the $L^2$-norm sense, denoted $\hat{w}[j-1]$, yields the following expression, \begin{align} \begin{bmatrix} 1 \\ \hat{w}[j-1] \end{bmatrix} &= \left( C A \Gamma( y[j-1] ) \right)^\dagger \left( y[j] - C B u[j-1] \right) \label{eq:what-jm1} \end{align} where $^\dagger$ denotes the Moore-Penrose pseudoinverse. Under the assumption that the loading is equal to some constant $\tilde{w}$ over the previous $N_w$ time steps, i.e. $w[i] = \tilde{w}$ for $i=j-N_w,\ldots,j$, we can similarly find the best estimate over all $N_w$ timesteps. Since this estimate is based on more data, it should be more accurate and more robust to noisy output measurements.
We define the following two matrices, \begin{align} & \Lambda_A = \begin{bmatrix} C A \Gamma( y[j-1] ) \\ \vdots \\ C A \Gamma( y[j-N_w] ) \end{bmatrix} \\ & \Lambda_B = \begin{bmatrix} y[j] - C B u[j-1] \\ \vdots \\ y[j-N_w +1] - C B u[j-N_w] \end{bmatrix}. \label{eq:Lambda-matrices} \end{align} Then, following from \eqref{eq:what-jm1}, the best estimate for $\tilde{w}$ over the past $N_w$ timesteps in the $L^2$-norm sense, denoted $\hat{w}$, is given by \begin{align} \begin{bmatrix} 1 \\ \hat{w} \end{bmatrix} &= \Lambda_A^\dagger \Lambda_B, \label{eq:what} \end{align} where $^\dagger$ again denotes the Moore-Penrose pseudoinverse. \section{Control} \label{sec:mpc} A system model enables the design of model-based controllers that leverage model predictions to choose suitable control inputs for a given task. In particular, model-based controllers can anticipate future events, allowing them to optimally choose control inputs over a finite time horizon. A popular model-based control design technique is model predictive control (MPC), wherein one optimizes the control input over a finite time horizon, applies that input for a single timestep, and then optimizes again, repeatedly \cite{rawlings2009model}. For linear systems, MPC consists of iteratively solving a convex quadratic program (QP). Importantly, this is also the case for Koopman-based linear MPC, wherein one solves for the optimal sequence of control inputs over a receding prediction horizon \cite[Eq.~23]{korda2018linear}. The predictions of this Koopman-based controller depend on the estimate of the loading conditions $\hat{w}$. This estimate must be periodically updated using the method described in Section~\ref{sec:loadest}, but for systems with relatively stable loading conditions, it is computationally inefficient to compute a new estimate at every time step. Therefore, we define a load estimation update period $N_e$ as the number of time steps to wait between load estimations.
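A minimal numerical sketch of the windowed estimate \eqref{eq:what} is given below. Here \texttt{Y} holds the last $N_w+1$ outputs (oldest first), \texttt{U} the $N_w$ inputs applied between them, and \texttt{Gamma} is a user-supplied implementation of \eqref{eq:gamma}; all names are hypothetical:

```python
import numpy as np

def estimate_load(Y, U, A, B, C, Gamma):
    """Windowed least-squares load estimate, cf. eq. (eq:what).
    Stacks the rows C A Gamma(y[i]) and y[i+1] - C B u[i] over the window
    and solves for [1; w] with the Moore-Penrose pseudoinverse."""
    rows_A = [C @ A @ Gamma(Y[i]) for i in range(len(Y) - 1)]
    rows_b = [Y[i + 1] - C @ B @ U[i] for i in range(len(Y) - 1)]
    sol = np.linalg.pinv(np.vstack(rows_A)) @ np.concatenate(rows_b)
    return sol[1:]  # sol[0] should be close to 1 if the model fits the data
```

When the data are exactly consistent with the model and the stacked matrix has full column rank, this recovers the constant load exactly; noise in the outputs is averaged over the window.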
Increasing $N_e$ will likely increase the accuracy of each load estimate, but will also reduce responsiveness to changes in the loading conditions. To balance accuracy with responsiveness, we update $\hat{w}$ every $N_e$ time steps by setting it equal to the average of the new load estimate and the previous $N_r$ load estimates, where $N_r$ is another user-defined constant. Algorithm \ref{alg:mpc} summarizes the closed-loop operation of this Koopman-based MPC controller with these periodic load estimation updates. \begin{algorithm}[t] \SetAlgoLined \KwIn{ Prediction horizon: $N_h$ \\ \hspace{32pt} Model matrices: $A , B , C$} \For{ $k = 0 , 1 , 2 , ... $}{ \eIf{$k \mod N_e = 0$}{ Estimate $\hat{w}'[j]$ via \eqref{eq:what} \\ $\hat{w}[k]$ = mean($ \hat{w}'[j] , \hat{w}[k-1] , \ldots , \hat{w}[k-N_{r}] $) \\ $j = j+1$ }{ $\hat{w}[k] = \hat{w}[k-1]$ } \textbf{Step 1:} Solve QP to find optimal input $(u[i]^*)_{i=0}^{N_h}$ \\ \textbf{Step 2:} Set $u[k] = u[0]^*$ \\ \textbf{Step 3:} Apply $u[k]$ to the system } \caption{Koopman MPC with Load Estimation } \label{alg:mpc} \end{algorithm} \subsection{Practical Considerations: Selection of Basis Functions and Dimensional Reduction} \label{sec:svd} The effectiveness of a finite-dimensional approximation $\bar{\mathcal{K}}_{T_s}$ in representing the actions of the infinite-dimensional Koopman operator $\mathcal{K}_{T_s}$ depends greatly upon the subspace $\bar{\mathcal{F}}$ onto which it is projected, i.e., the span of the selected basis functions. \section{Experiments} \label{sec:experiments} This section describes the soft continuum manipulator and the set of experiments used to demonstrate the efficacy of the modeling and control methods described in Sections \ref{sec:sysid} and \ref{sec:mpc}.
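Before turning to the hardware, the closed-loop structure of Algorithm~\ref{alg:mpc} can be summarized in code. In this sketch, \texttt{solve\_qp}, \texttt{estimate\_load}, and \texttt{apply\_input} are hypothetical stand-ins for the QP solver, the estimator \eqref{eq:what}, and the plant interface:

```python
import numpy as np

def koopman_mpc_loop(steps, N_e, N_r, solve_qp, estimate_load, apply_input):
    """Closed-loop Koopman MPC with periodic load estimation, mirroring
    Algorithm 1. solve_qp, estimate_load, apply_input are user-supplied."""
    history = []   # recent load estimates
    w_hat = None
    for k in range(steps):
        if k % N_e == 0:
            history.append(estimate_load(k))   # new estimate, e.g. via eq. (eq:what)
            history = history[-(N_r + 1):]     # keep the newest estimate plus N_r previous
            w_hat = np.mean(history, axis=0)   # smooth the update by averaging
        u_star = solve_qp(w_hat)               # optimal input sequence over the horizon
        apply_input(u_star[0])                 # apply only the first input, then repeat
    return w_hat
```

The averaging over the last $N_r + 1$ estimates implements the accuracy-versus-responsiveness trade-off discussed above.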
Footage from these experiments is included in a supplementary video file\footnote{\href{https://youtu.be/g2yRUoPK40c}{https://youtu.be/g2yRUoPK40c}}. \subsection{System Identification of Soft Robot Arm} \label{sec:sysid-arm} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/robot_full.pdf} \caption{The soft robot arm consists of three bending sections, each actuated by three pneumatic artificial muscles (PAMs). The actuators are surrounded by a sleeve of flexible PVC foam, and pressurized air is supplied to the actuators via air hoses that wind around the exterior. The end effector consists of a granular jamming vacuum gripper \cite{amend2012positive}, which is connected to a vacuum pump by a hose that runs along the interior of the arm.} \label{fig:robot-schematic} \end{figure} To validate the modeling and control approach described in the previous section, we applied it to a soft robot arm capable of picking-up objects and moving its end effector in three-dimensional space. The robot, shown in Fig.~\ref{fig:robot-schematic}, is 70~cm long and has a diameter of 6~cm. It is made up of three pneumatically actuated bending sections and an end effector comprised of a granular jamming vacuum gripper \cite{amend2012positive}. Each section is actuated by three pneumatic artificial muscles (PAMs) \cite{tondu2012modelling} which are adhered to a central spine consisting of an air hose encased in flexible PVC foam tubing. Another much larger sleeve of flexible PVC foam surrounds the actuators, which serves to dampen high frequency oscillations and make the body of the arm softer overall. The air pressure inside the actuators is regulated by $9$ Enfield TR-010-g10-s pneumatic pressure regulators that accept ${0-10}$V command signals corresponding to pressures of approximately ${0 - 275}$~kPa, and are connected to the actuators by air hoses that wrap around the outside of the foam sleeves. 
The exterior of the arm is covered in retro-reflective markers which are tracked using a commercial OptiTrack motion capture system. We quantified the stochastic behavior of our soft robot system by observing the variations in output from period to period under sinusoidal inputs with a period of 10 seconds and a sampling time of $T_s = 0.083$ seconds with a zero-order hold between samples. Over 60 periods, the trajectory of the end effector deviated from the mean trajectory by an average of 9.45~mm, with a standard deviation of 7.3~mm. This inherent stochasticity limits the tracking performance of the system, independent of the employed controller. For the purposes of constructing a dynamic model for the arm, the input was chosen to be the command voltages into the 9 pressure regulators and at each instant in time was restricted to $[0,10]^{9}$. The output was chosen to live in $\mathbb{R}^9$ and corresponds to the positions of the ends of each of the 3 bending sections in Cartesian coordinates, with the last 3 coordinates corresponding to the end effector position. The parametrization of the loading condition lives in $\mathbb{R}_{+}$ and is chosen to be the mass of the object held by the gripper. Data for constructing models was collected over $49$ trials lasting approximately $10$ minutes each. A randomized ``ramp and hold'' type input and a load from the set $\{ 0,50,100,150,200,250,300 \}$ grams was applied during each trial to generate a representative sampling of the system's behavior over its entire operating range. Three models were fit from the data: a linear state-space model using the subspace method \cite{van2012subspace}, a linear Koopman model that \emph{does not} take loading into account using the approach described in Section~\ref{sec:koopid}, and a linear Koopman model that \emph{does} incorporate loading using the approach from Section~\ref{sec:loadest}.
Each of these models was fit using the same set of $325,733$ randomly generated data points just once, independent of any specific task. The linear state-space model provides a baseline for comparison and was identified using the MATLAB System Identification Toolbox \cite{MATLAB:2017}. This model is a 9-dimensional linear state-space model expressed in observer canonical form. The first Koopman model (without loading) was identified on a set of ${ K = 325,732 }$ snapshot pairs $\{ a[k] , b[k] , u[k] \}_{k=1}^K$ that incorporate a single delay ${d = 1}$: \begin{align} a[k] &= \begin{bmatrix} y[k]^\top & y[k-1]^\top & u[k-1]^\top \end{bmatrix}^\top \label{eq:real-a} \\ b[k] &= \begin{bmatrix} \left( \phi_{T_s} (y[k] , u[k]) + \sigma[k] \right)^\top & y[k]^\top & u[k]^\top \end{bmatrix}^\top. \label{eq:real-b} \end{align} Note that the dimension of each snapshot is ${ 2n+m = 2(9)+9 = 27 }$ due to the inclusion of the delay, and we denote by $y^d[k] \in \mathbb{R}^{27}$ one of these outputs with delays included at time $k$. The lifting function ${ \psi : \mathbb{R}^{27 \times 9} \to \mathbb{R}^{111} }$ was defined as \begin{align} \psi(y^d[k],u[k]) &= \begin{bmatrix} g(y^d[k]) \\ u[k] \end{bmatrix} \label{eq:real-lift-1} \end{align} where the range of $g$ has dimension $102$, ${ g_i(y^d[k]) = y^d_i[k] }$ for $i = 1,...,27$, and the remaining 75 basis functions $\{ g_i : \mathbb{R}^{27} \to \mathbb{R} \}_{i=28}^{102}$ are polynomials of maximum degree 2 that were selected by evaluating the snapshot pairs on the set of all monomials of degree less than or equal to 2, then performing principal component analysis (PCA) \cite[Ch.~1.5]{brunton2019data} to identify a reduced set of polynomials that can still explain at least 99\% of this lifted data. The second Koopman model (with loading) was identified on the same set of snapshot pairs as the first model, but with the loading included $\{ a[k] , b[k] , u[k] , w[k] \}_{k=1}^K$.
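The PCA-based basis selection used for both Koopman models can be sketched as follows; this is a simplified illustration with hypothetical data (NumPy SVD on monomials of degree at most 2, keeping principal directions that explain 99\% of the variance), not the exact implementation used in this work:

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials_deg2(Y):
    """Evaluate all monomials of degree <= 2 of the columns of Y (rows = samples)."""
    cols = [np.ones(len(Y))]                       # degree 0
    d = Y.shape[1]
    for i in range(d):
        cols.append(Y[:, i])                       # degree 1
    for i, j in combinations_with_replacement(range(d), 2):
        cols.append(Y[:, i] * Y[:, j])             # degree 2
    return np.column_stack(cols)

def pca_basis(Phi, energy=0.99):
    """Principal directions explaining at least `energy` of the variance of
    the lifted data; each returned column defines one reduced basis function
    as a linear combination of the monomials."""
    Z = Phi - Phi.mean(axis=0)
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    r = int(np.searchsorted(np.cumsum(var), energy)) + 1
    return Vt[:r].T
```

Projecting the lifted snapshots onto the retained directions yields the reduced polynomial basis while preserving the stated fraction of the variance.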
The lifting function ${ \psi : \mathbb{R}^{27 \times 9} \to \mathbb{R}^{231} }$ was defined as \begin{align} \psi(y^d[k],u[k],w[k]) &= \begin{bmatrix} \gamma(y^d[k],w[k]) \\ u[k] \end{bmatrix} = \begin{bmatrix} g(y^d[k]) \\ g(y^d[k]) w[k] \\ u[k] \end{bmatrix} \label{eq:real-lift-2} \end{align} where the range of $g$ has dimension $111$, $g_i(y^d[k]) = y^d_i[k]$ for $i = 1,...,27$, and the remaining 84 basis functions $\{ g_i : \mathbb{R}^{27} \to \mathbb{R} \}_{i=28}^{111}$ are polynomials of maximum degree 2 that were selected using the same PCA method described in the previous paragraph. \subsection{Description of Controllers} \label{sec:mpc-arm} Three model predictive controllers were constructed using the data-driven models described in the previous section. Each controller uses one of the identified models to compute online predictions and is denoted by an abbreviation specifying which model, \begin{itemize} \item L-MPC: Uses the linear state-space model \item K-MPC: Uses the Koopman model without loading \item KL-MPC: Uses the Koopman model with loading \end{itemize} All three controllers solve a quadratic program at each time step using the Gurobi Optimization software \cite{gurobi}. They run in closed-loop at $12$~Hz, feature an MPC horizon of $1$ second ($N_h = 12$), and use a cost function that penalizes deviations of the position of the end effector from a reference trajectory over the prediction horizon. \subsection{Experiment 1: Trajectory Following with Known Payload} \label{sec:exp1} We first evaluated the relative performance of the three controllers when the payload at the end effector is known. With this information given, the manipulator is tasked with moving the end effector along a three-dimensional reference trajectory lasting 20~seconds. Six trials were completed for payloads of 25, 75, 125, 175, 225, and 275 grams.
The actual paths traced out by the end effector and the tracking error over time for 3 of the trials are displayed in Fig.~\ref{fig:control-known-load}, and the RMSE tracking error for all 6 trials is compiled in Table~\ref{tab:comparison}. It should be noted that only the KL-MPC controller is capable of actually utilizing knowledge of the payload, since the other 2 controllers are based on models that do not incorporate loading conditions. \begin{figure*}[t] \centering \parbox[][][c]{0.32\linewidth}{\centering L-MPC} \parbox[][][c]{0.32\linewidth}{\centering K-MPC} \parbox[][][c]{0.32\linewidth}{\centering KL-MPC} \includegraphics[width=0.32\linewidth]{figures/LMPC_fixed.png} \includegraphics[width=0.32\linewidth]{figures/KMPC_fixed.png} \includegraphics[width=0.32\linewidth]{figures/KLMPC_fixed.png} \caption{Experiment 1 Results: The end effector trajectories for the L-MPC (left), K-MPC (center), and KL-MPC (right) controllers when the true value of the payload is known. Trajectories corresponding to a payload of 25g are shown in blue, trajectories with a payload of 125g are shown in red, trajectories with a payload of 225g are shown in yellow, and the reference trajectory is shown in grey.} \label{fig:control-known-load} \end{figure*} \begin{table} \rowcolors{2}{white}{gray!25} \setlength\tabcolsep{5pt} \centering \caption{Experiment 1: RMSE (mm) over entire trial} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \rowcolor{white} & \multicolumn{6}{c |}{\textbf{Payloads (grams)}} & & \textbf{Std.} \\ \hhline{~------~~} \rowcolor{white} \multirow{-2}{*}{\textbf{Controller}} & 25 & 75 & 125 & 175 & 225 & 275 & \multirow{-2}{*}{\textbf{Avg.}} & \textbf{Dev.} \\ \hline L-MPC & 73.0 & 72.9 & 72.6 & 71.9 & 72.3 & 74.3 & 72.8 & 0.8 \\ K-MPC & 55.4 & 33.9 & 29.5 & 20.0 & 24.8 & 27.8 & 31.9 & 12.4 \\ KL-MPC & \textbf{26.1} & \textbf{23.7} & \textbf{20.6} & \textbf{19.5} & \textbf{18.2} & \textbf{20.4} & \textbf{21.4} & \textbf{2.9} \\ \hline \end{tabular} 
\label{tab:comparison} \end{table} \subsection{Experiment 2: Online Estimation of Unknown Payload} \label{sec:exp2} We evaluated the performance of the online load estimation method described in Sections \ref{sec:loadest} and \ref{sec:mpc} under randomized ``ramp and hold'' type inputs and a sampling time of $T_s = 0.083$ seconds. New estimates were calculated every ${N_e = 12}$ timesteps by solving \eqref{eq:what} using measurements from the previous ${ N_w = 30 }$ timesteps, and $\hat{w}$ was computed by averaging over the most recent ${ N_r = 360 }$ estimates. Three trials were conducted with payloads of 25, 125, and 225 grams, none of which were in the set of payloads used for system identification, and the results are displayed in Fig.~\ref{fig:load-estimation}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/load_estimation-2.png} \caption{Experiment 2 Results: Online payload estimation under random inputs using the method described in Section~\ref{sec:loadest}. Three trials are shown for payloads of 25g, 125g, and 225g, with the actual payload used for each trial marked by a dotted line, and the payload estimate marked by a solid line. Results for the 25g payload are shown in blue, results for the 125g payload are shown in red, and results for the 225g payload are shown in yellow.} \label{fig:load-estimation} \end{figure} \subsection{Experiment 3: Trajectory Following with Unknown Payload} \label{sec:exp3} To evaluate the efficacy of the combined control and load estimation method summarized by Algorithm~\ref{alg:mpc}, we measured the manipulator's performance in tracking a periodic reference trajectory when the payload is not known. Once again three trials were conducted with payloads of 25, 125, and 225 grams. The periodic reference trajectory was a circle with a diameter of 200~mm. Note that this trajectory was not part of the training data.
The KL-MPC controller was run at 12~Hz and $\hat{w}$ was updated according to the same parameters as in Experiment~2 (${N_e = 12 , N_w = 30 , N_r = 360 }$). Results of this experiment are shown in Fig.~\ref{fig:control-unknown-load}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/control_unknown_load.png} \caption{Experiment 3 Results: Periodic trajectory following with an unknown payload. The payload estimate over time (top) and tracking error over time (bottom) are shown for three trials with payloads of 25g, 125g, and 225g. Results for the 25g payload trial are shown in blue, results for the 125g payload trial are shown in red, and results for the 225g payload trial are shown in yellow.} \label{fig:control-unknown-load} \end{figure} \subsection{Experiment 4: Automated Object Sorting (Pick and Place)} \label{sec:exp4} The load estimation algorithm and KL-MPC controller were utilized to perform automated object sorting by mass. Five objects were selected, each with mass between 0 and 250~grams, and five cups were placed in front of the manipulator, each corresponding to a 50~gram interval between 0 and 250~grams (i.e. 0-50, 50-100, etc.). The range from 250-300 grams was not used for this experiment, because such loads too severely reduce the workspace of the robot. The objects used and their masses are shown in Fig.~\ref{fig:objects}. Given one of these objects, the task was to place the object into the cup corresponding to its mass. For each trial, a human assists the manipulator with grabbing the object, then the manipulator performs KL-MPC with load estimation (Algorithm \ref{alg:mpc}) while following a circular reference trajectory for 15 seconds. After 15 seconds, load estimation stops, and a ``drop-off'' reference trajectory is selected that will move the end effector towards the cup corresponding to the most recent payload estimate. 
The manipulator then uses KL-MPC to follow the ``drop-off'' trajectory and deposits the object into the cup. This cycle repeats until all 5 objects are sorted into the proper cup. Using this strategy, the manipulator properly sorted 5 out of 5 objects in 2 separate trials, using a different set of objects each time (see Fig.~\ref{fig:objects}). Footage of these trials can be seen in the supplementary video file. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/objects-2-better.png} \caption{Objects used for Experiment 4: In each trial, the soft manipulator sorted a set of five objects according to their mass, based on an estimate computed by the online observer described in Section~\ref{sec:loadest}. The objects used in each trial are shown in separate rows, and the mass of each object is written below it.} \label{fig:objects} \end{figure} \section{Discussion and Conclusion} \label{sec:discussion} This paper uses a Koopman-operator-based approach to model and estimate a variable payload of a soft continuum manipulator arm and employs this knowledge to improve control performance. Our work confirms that incorporating knowledge about the payload into the model improves tracking accuracy and makes the controller more robust to changes in the loading conditions. In Experiment 1, the KL-MPC controller, which incorporated the payload value, reduced the RMSE tracking error averaged over all payloads by approximately 33\% compared to the K-MPC controller that did not utilize information about the payload, and reduced the standard deviation of the tracking error by about 77\% (see Table~\ref{tab:comparison}). To automate the process of identifying payload values, we implemented an observer that was able to automatically estimate unknown payloads within 25~grams in a time of about 15~seconds (see Fig.~\ref{fig:load-estimation}).
It is notable that this approach was capable of estimating loads other than those presented in the training data set used during model identification. We did not observe over-fitting to the behavior seen under the limited set of training loads, which suggests that, despite the fact that the approach is data-driven, the identified Koopman model is able to capture the actual physical effect of various loading conditions. By combining the estimation, modeling, and control into a single MPC algorithm (Algorithm~\ref{alg:mpc}), we demonstrated the effectiveness of our approach in improving control accuracy under unknown loading conditions. We first tracked periodic trajectories with an unknown payload. Since the controller needs some time to establish an accurate estimate of the load, the tracking error gradually decreases over time as the load estimate becomes more reliable. After approximately 15~seconds, the tracking error decreased to less than 30~mm, which was about equal to the error with a known load value. As a final demonstration, we implemented successful pick-and-(mass-based)-place object manipulation using the same algorithm. Unknown objects were successfully sorted by mass, taking advantage of the fact that the payload estimate was accurate enough to choose the correct container for each object and that the tracking error of the ``drop-off'' trajectory was small enough not to miss the cup. This required a payload estimate error of less than 50~grams and a tracking error of less than 45~mm (the radius of the cups). While the manipulator exhibited sufficient accuracy to complete this task, several modifications could be made to the robot and controller to improve performance even further. First, the workspace of the manipulator could be greatly enlarged by replacing some of the current actuators with more powerful ones.
This could be done without significantly increasing size or weight just by increasing the diameter of the PAMs \cite{tondu2012modelling}. Second, a model and controller could be identified with a shorter sampling time, which would enable the model to account for higher-frequency behavior and track more dynamic trajectories. This could be achieved by upgrading our computational hardware and optimizing our code. Even with these changes, the system's inherent stochasticity would limit tracking accuracy, but these improvements would likely enable much more accurate control. While, so far, our approach has only been validated on one specific instance of a pneumatically-actuated soft manipulator, it should readily extend to other types and classes of soft robotic systems. Beyond specifying the inputs and outputs, no system knowledge was necessary in the implementation, and the tuning of algorithmic parameters such as the type and number of basis functions was minimal. We thus believe that our work lays a foundation towards enabling the widespread use of automated soft manipulators in real-world applications. \section{Introduction} \label{sec:intro} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/overview.pdf} \caption{Control of a pneumatic soft robot arm that is robust to changes in the load at the end effector is achieved using model predictive control and an observer that provides an online estimate of the load.} \label{fig:overview} \end{figure} Soft continuum manipulator arms are an active area of research in the robotics community. They are lightweight, cheap to make, and their inherent compliance carries the promise of enabling safe interaction with humans, delicate objects, and the environment~\cite{immega1995ksi,hannan2001elephant,mcmahan2005design,grissom2006design,cianchetti2013stiff,mahl2014variable,hughes2016soft,phillips2018dexterous}.
These properties would make them ideal platforms for tasks involving physical human-robot interaction such as feeding~\cite{miller1998assistive} or sorting products in a warehouse~\cite{correll2016analysis}, yet the real-world application of soft manipulators has, so far, been limited. This may be due to the difficulty of obtaining control-oriented models for systems that exhibit infinite degrees-of-freedom, nonlinear material properties, and unconventional actuation schemes \cite{george2018control}. The lack of suitable models limits automatic control, and it is not surprising that the most successful soft robots to date are controlled via teleoperation to perform tasks such as object manipulation \cite{webster2010design}. In manipulation, accurate models are particularly important for feedforward control, where they are used to compensate for the effects of gravity and inertia. Accurate feedforward models also allow for a reduction of the feedback gains and thus for an overall more compliant behavior. When such accurate models are not available, feedback control can be used to some degree to offset the effects of model uncertainty and disturbances. However, relying too heavily on feedback has been shown to reduce the compliance of soft robotic systems \cite{della2017controlling}. That is, high-gain feedback might negate the desirable compliance of a soft robot by replacing its natural dynamics with those of a slower, stiffer system.
Recently, data-driven modeling techniques have emerged as a powerful tool for obtaining the accurate models required for feedforward control of soft continuum manipulators. A primary benefit of these techniques is their ability to describe an input/output relationship directly from system observations, without requiring an explicit model of the robot's continuum mechanics. The input/output data necessary to create such models can be easily collected for soft robots. Compared to conventional rigid-bodied manipulators, soft manipulators pose much less of a physical threat to themselves and their surroundings. It is hence possible to automatically and safely collect data under a wide range of control inputs and operating conditions. Within the class of data-driven methods, deep learning of artificial neural networks has been a popular approach to capture the nonlinear input/output behavior of soft robots. For instance, \citet{thuruthel2018model} describe a recurrent neural network model for a pneumatically driven soft manipulator that was trained from data. To perform subsequent control, a closed-loop optimal control policy was generated using supervised reinforcement learning. The resulting control policy enabled the manipulator to reach desired points in a two-dimensional workspace. \citet{hyatt2019model} constructed a neural network model for a bellows-actuated manipulator from data, and three-dimensional position control was achieved using a model predictive controller. Notably, the controller was based on a linearization of the full neural network model, since computing inputs in real-time using the full model was computationally infeasible. These examples illustrate two fundamental downsides of using neural network models for control.
First, building a neural network model requires solving a nonlinear optimization problem for which global convergence may not be guaranteed, and second, utilizing the model at run time is non-trivial since the control input usually appears nonlinearly within it. Related data-driven approaches include deep reinforcement learning for open-loop position control of a soft arm \cite{satheeshbabu2019open} and iterative learning control \cite{marchese2014autonomous}. In contrast to deep learning methods, Koopman operator theory offers a data-driven modeling approach that yields explicit, control-oriented models by identifying a linear embedding of system dynamics in an infinite-dimensional function space \cite{budivsic2012applied}\cite[Ch.~7]{brunton2019data}. The approach leverages the linear structure of the Koopman operator to construct linear models of nonlinear controlled dynamical systems from input/output data \cite{bruder2019nonlinear, mauroy2016linear}, and to control them using established linear control methods. In theory, this approach involves \emph{lifting} into an infinite-dimensional space of scalar functions (referred to as observables), where the flow of such functions along trajectories of the nonlinear dynamical system is described by the \emph{linear} Koopman operator. In practice, however, it is infeasible to compute an infinite-dimensional operator, so a process called Extended Dynamic Mode Decomposition (EDMD) \cite{williams2015data} is typically employed to compute a finite-dimensional projection of the Koopman operator onto a finite-dimensional subspace of observables.
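As a minimal sketch of the EDMD procedure just described, a lifted linear model $z^+ \approx A z + B u$ can be fit to snapshot data by ordinary least squares. The choice of observables, the function names, and the use of NumPy below are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

def lift(x):
    # Illustrative observables: a constant, the state itself, and
    # quadratic monomials (the basis choice is a design decision).
    x = np.atleast_1d(x)
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))

def edmd_with_inputs(X, U, Xnext):
    """Least-squares fit of a lifted linear model z+ ~ A z + B u (EDMD).

    X, Xnext: (N, n) snapshot pairs of the state; U: (N, m) applied inputs.
    Returns A, B acting on the lifted state z = lift(x).
    """
    Z = np.array([lift(x) for x in X])
    Znext = np.array([lift(x) for x in Xnext])
    G = np.hstack([Z, U])                            # regressors [z, u]
    K, *_ = np.linalg.lstsq(G, Znext, rcond=None)    # Znext ~ G @ K
    nz = Z.shape[1]
    return K[:nz].T, K[nz:].T                        # A, B
```

The least-squares solution is exactly the projection of the dynamics onto the span of the chosen observables, i.e., the finite-dimensional approximation of the Koopman operator that EDMD produces.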
Hence, this approach makes it possible to control the output of a nonlinear dynamical system using a controller designed for its linear Koopman representation \cite{Abraham-RSS-17, korda2018linear}. A Koopman-based approach has been successfully used to control several robots, including rigid-bodied systems such as a Sphero SPRK \cite{Abraham-RSS-17}, a quadcopter \cite{abraham2019active}, and a robotic fish \cite{mamakoukas2019local}, as well as the 2D configuration of the end effector of a soft robot arm \cite{Vasudevan-RSS-19}. Despite their ability to construct models that are amenable to online control, these Koopman-based methods have not yet been applied to control soft robot manipulators during tasks that involve variable loading conditions, such as pick and place. For such tasks, the effect of loading on the model must be estimated and updated during real-time operation. This work presents a method for incorporating loading conditions into a Koopman-based modeling and control framework and demonstrates real-time, fully autonomous control of a pneumatically actuated soft continuum manipulator with variable payload. This approach yields more consistent control performance over a variety of loading conditions than several comparison model-based controllers.
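To illustrate how loading could enter such a framework, the sketch below selects among a family of lifted models, one identified per known training load, using recent one-step prediction errors. This is a hypothetical observer for illustration only; the interface, the discrete set of candidate loads, and the smoothing are our assumptions, not necessarily the method developed in Section~\ref{sec:loadest}:

```python
import numpy as np

def estimate_load(models, z_hist, u_hist, w_prev, alpha=0.8):
    """Online payload estimate from one-step prediction errors.

    models: dict mapping a candidate load value w -> (A, B), the lifted
            linear model identified under that load during training.
    z_hist: (T, nz) recent lifted states; u_hist: (T-1, m) applied inputs.
    The candidate whose model best predicts the recent history is blended
    with the previous estimate to avoid chattering.
    """
    def one_step_error(A, B):
        pred = z_hist[:-1] @ A.T + u_hist @ B.T
        return float(np.mean(np.sum((z_hist[1:] - pred) ** 2, axis=1)))

    w_best = min(models, key=lambda w: one_step_error(*models[w]))
    return alpha * w_prev + (1.0 - alpha) * w_best
```

A smoothed estimate of this kind can be fed back into the model used by the predictive controller at every time step, so that the controller always plans with the load-appropriate dynamics.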
To the best of the authors' knowledge, this is the first implementation of a closed-loop controller for a soft continuum manipulator that explicitly accounts for loading, and the first successful pick and place demonstration by a soft continuum manipulator. The rest of this paper is organized as follows: Section \ref{sec:sysid} formally introduces the Koopman operator and describes how to construct models of nonlinear dynamical systems from data. Section \ref{sec:loadest} introduces a method for incorporating loading conditions into the model. Section \ref{sec:mpc} describes our Koopman-based model predictive controller and a method for estimating loading conditions online. Section \ref{sec:experiments} describes the set of experiments used to evaluate the performance of our Koopman-based model predictive controller on a pneumatically actuated soft continuum manipulator, including trajectory following while carrying an unknown payload, and autonomously sorting objects by mass. Section \ref{sec:discussion} discusses the results of these experiments and concludes the paper.
\bibliographystyle{plainnat}
\section{Introduction} In a recent paper \cite{evans-99-2} we considered the collection of decays $B^+\to D_{s,d}^{(*)+}e^+e^-$. The decay rate for these is proportional to $|V_{ub}|^2$. We found that over a large kinematic domain one can reliably estimate the rate (in terms of $|V_{ub}|^2$). The process is first order weak and first order electromagnetic, and, therefore, the amplitude involves long distance physics. The central observation of \cite{evans-99-2} is that over a large kinematic domain the interaction is local on the scale of strong dynamics. The amplitude can, therefore, be approximated by the matrix elements of local operators, which can be estimated in a variety of ways and should eventually be determined in numerical simulations of QCD on the lattice. The branching fraction for $B^+\to D_{s}^{*+}e^+e^-$, restricted to invariant mass of the $e^+e^-$ pair in excess of $1.0$~GeV, was estimated to be $1.9\times10^{-9}$. This is too small to be measured in $e^+e^-$ B-factories, but could be observable at high luminosity high energy hadronic colliders. In this paper we consider the decays $\bar B_{s,d}\to J/\psi e^+e^-$, $\bar B_{s,d}\to \eta_c e^+e^-$, $\bar B_{s,d}\to D^{*0} e^+e^-$ and $\bar B_{s,d}\to D^{0} e^+e^-$. These proceed via W-exchange topologies, as shown in Fig.~\ref{fig:fig0}. In addition, $\bar B_{d,s}\to J/\psi e^+e^-$ and $\bar B_{d,s}\to \eta_c e^+e^-$ have small contributions from penguins, which we neglect. The goal of the paper is to show how the methods introduced in Ref.~\cite{evans-99-2} can be applied to the processes considered here. The kinematics of $\bar B_{d,s}\to D^{(*)0} e^+e^-$ is similar to that of $B^+\to D_{d,s}^{(*)+}e^+e^-$, so one expects the methods to apply readily. In fact, the only dynamical difference is that in $\bar B_{d,s}\to D^{(*)0} e^+e^-$ the heavy $b$ quark decays to a heavy $c$-quark, whereas in $B^+\to D_{s,d}^{(*)+}e^+e^-$ it is a heavy $b$-anti-quark that decays into a heavy $c$-quark.
The case $\bar B_{d,s}\to J/\psi(\eta_c) e^+e^-$ is clearly different: both quark and anti-quark in the final state are heavy and they are moving together in a bound charmonium state. As we will see, the expansion that arises naturally corresponds to NRQCD, the non-relativistic limit of heavy quarks bound by QCD into quarkonia. The processes under consideration here have advantages compared to $B^+\to D_{s,d}^{(*)+}e^+e^-$. These processes are not suppressed by the small CKM element $|V_{ub}|^2$. One might hope that the decay rate is, therefore, substantially higher. However, the enhancement of the rate due to bigger CKM elements is partially cancelled by small Wilson coefficients. Therefore, all these processes have small branching fractions. While none are observable at B-factories, some are observable at future hadronic collider experiments like LHC-B and BTeV. These processes are first order weak and first order electromagnetic, and, therefore, the amplitude involves long distance physics. We will show that over a large kinematic domain the interaction is approximated by a set of matrix elements of local operators. All these matrix elements should eventually be determined by lattice calculations. For the processes considered in this paper, the number of independent matrix elements is reduced by the use of rotational, heavy quark spin and chiral symmetries. \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig0.eps}} \vskip0.5cm \caption{W-exchange quark topology diagram underlying the transition $\bar B_{d,s}\to D^{(*)0} e^+e^-$. Emission of an $e^+e^-$ pair from any line is understood.} \label{fig:fig0} \end{figure} This paper is organized as follows. In Sec.~\ref{sec:method} we review the methods of Ref.~\cite{evans-99-2} that lead to an expansion in local operators. The review is done in terms of the graphs relevant to $\bar B\to D^0 e^+e^-$, which is one of the processes of interest here.
In Sec.~\ref{sec:spinsym} we present a novel analysis that shows that the matrix elements of the operators in the expansion are all related by a combination of heavy-spin, rotational and chiral symmetries. We then proceed to find the short distance QCD corrections to our operator expansion in Sec.~\ref{sec:QCD1}. In Sec.~\ref{sec:results1} we give expressions for the differential decay rates in terms of matrix elements of local operators. These should be considered our main results. To get some numerical estimates of the decay rates we crudely approximate the local matrix elements. The material in Secs.~\ref{sec:method}--\ref{sec:results1} deals with the decays $\bar B_q\to D^{*0} e^+e^-$ and $\bar B_q\to D^{0} e^+e^-$, and we repeat the steps applied to the processes $B_q\to\eta_c e^+e^-$ and $B_q\to J/\psi e^+e^-$ in Sec.~\ref{sec:onium}. Our results are summarized in Sec.~\ref{sec:conclusions}. \section{Operator Expansion} \label{sec:method} In this section we review the method introduced in \cite{evans-99-2}. However, we will present the method as applied to the process $\bar B_{d}\to D^{(*)0} e^+e^-$. Therefore we will at once review the method and perform the necessary calculation for one of the cases of interest. \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig1.eps}} \vskip0.5cm \caption{Feynman diagram representing a contribution to the Green function. The filled square represents the four quark operator ${\cal O}$ and the cross represents the electromagnetic current $j^\mu_{{\rm em}}$, cf. Eq.~(\ref{T-prod}), which here couples to the $c$-quark. 
} \label{fig:fig1} \end{figure} The effective Hamiltonian for the weak transition in $\bar B_{d}\to D^{(*)0} e^+e^-$ is \beq \label{eq:Heff} {\cal H}'_{\rm eff}= \frac{4G_F}{\sqrt2}\,V^{\phantom{*}}_{ud}V^*_{cb}\left( c(\mu/M_W){\cal O}+c_8(\mu/M_W){\cal O}_8\right), \eeq where \beq \label{eq:Odefd} {\cal O}=\bar d\gamma^\nu P_- b \;\;\bar c\gamma_\nu P_- u \eeq and \beq {\cal O}_8=\bar d\gamma^\nu P_-T^a b\;\; \bar c\gamma_\nu P_- T^a u, \eeq with $P_\pm\equiv(1\pm\gamma_5)/2$; $T^a$ are the generators of color gauge symmetry. This is a useful basis of operators for our purposes since the hadronic matrix element of the ``octet'' operator ${\cal O}_8$ is suppressed. The dependence on the renormalization point $\mu$ of the short distance coefficients $c$ and $c_8$ cancels the $\mu$-dependence of the operators, so matrix elements of the effective Hamiltonian are $\mu$-independent. The amplitude for $\bar B_{d}\to D^{(*)0} e^+e^-$, to leading order in the weak and electromagnetic interactions and to all orders in the strong interactions, involves the following non-local matrix element: \beq \label{T-prod} \langle D^{(*)0}| \int d^4x\;e^{iq\cdot x} \; T(j^\mu_{\rm em}(x){\cal O}(0)) |\bar B_d\rangle. \eeq Here $q$ denotes the momentum of the $e^+e^-$ pair, $j^\mu_{\rm em}$ is the electromagnetic current operator and the operator ${\cal O}$, defined in Eq.~(\ref{eq:Odefd}), is the long distance approximation to the $W$-exchange graph. The full amplitude will of course also involve a similar non-local matrix element but with the ``singlet'' operator ${\cal O}$ replaced by the octet operator ${\cal O}_8$. For now we concentrate on the singlet operator. None of the arguments given in this section depend on the particular choice of the operator.
\begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig2.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig1} but with the electromagnetic current coupling to the $b$-quark.} \label{fig:fig2} \end{figure} We will now argue that for heavy $b$ and $c$ quarks the non-local matrix element in Eq.~(\ref{T-prod}) is well approximated by the matrix element of a sum of local operators. The approximation is valid provided $\LQCD\ll m_{c,b}$, \ie, the corrections are order $\LQCD/ m_{c,b}$. There are also corrections of order $\LQCD m_{b,c}/q^2$. So our results are limited to the region where $q^2$ scales like $m_{c,b}^2$. The region where $q^2$ does not scale like $m_{c,b}^2$ is parametrically small, so the arguments we present are theoretically sound. However, there is the practical issue of determining a minimum $q^2$ for realistic calculations where our approximations can still be trusted. We return to this practical matter below, when we attempt to estimate the rate for this decay. The underlying decay is represented in the quark diagrams of Figs.~\ref{fig:fig1}--\ref{fig:fig4}. In the heavy quark limit, $\LQCD\ll m_{c,b}$, the heavy meson momentum is predominantly the heavy quark's. This suggests the following kinematics in the quark diagrams: for the momenta of the heavy quarks take $m_bv+k_b$ and $m_cv'+k_c$, for the momenta of the light quarks take $k_u$ and $k_d$ and then the photon's momentum is determined by conservation, $q=m_bv-m_cv'+\sum k_i$. We can now exhibit our OPE by considering the quark Green functions in Figs.~\ref{fig:fig1}--\ref{fig:fig4}. The convergence of the expansion for physical matrix elements rests on the intuitive fact that the residual momenta $k_i$ will be of order $\LQCD$ (parametrically all we need is that these are independent of the large masses). This intuition is made explicit in Heavy Quark Effective Theory (HQET): there are no heavy masses in the HQET Lagrangian so the only relevant dynamical scale is $\LQCD$. 
Thus our expansion of a non-local product will be in terms of local operators of the HQET. \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig3.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig1} but with the electromagnetic current coupling to the $u$-quark.} \label{fig:fig3} \end{figure} Calculating the Feynman diagram of Fig.~\ref{fig:fig1} with our choice of kinematics we have \beq \label{eq:feynmfull1} -iQ_c\gamma^\mu\frac{i}{\sla{q}+m_c \sla{v}'+\sla{k}_c-m_c}\gamma^\nu P_- \otimes \gamma_\nu P_-. \eeq Here $Q_c=2/3$ is the charge of the $c$-quark and the tensor product corresponds to the two fermion bilinears. External legs are amputated. Using $q=m_bv-m_cv'+\sum k_i$ and expanding in $k_i/m_{c,b}$ we obtain, to leading order \beq \label{eq:feynmeff1} Q_c\gamma^\mu\frac{m_b \sla{v}+m_c}{m_b^2-m_c^2}\gamma^\nu P_- \otimes \gamma_\nu P_-. \eeq This Green's function is that of a local operator in the HQET. Denoting by $h_v^{(Q)}$ the annihilation operator for the heavy quark with four-velocity $v$, we define \beq \label{eq:hqetopdefd} \tilde{\cal O}\equiv \bar d \Gamma^{\phantom{()}}_bh^{( b)}_{v}\; \bar h^{(c)}_{v'}\Gamma^{\phantom{()}}_c u. \eeq Here $\Gamma_{b,c}$ are arbitrary Dirac matrices. With $\Gamma_c\otimes \Gamma_b$ set equal to the tensor product in (\ref{eq:feynmeff1}), \beq \label{eq:feynmeff1b} \Gamma_c\otimes \Gamma_b = Q_c\gamma^\mu\frac{m_b \sla{v}+m_c}{m_b^2-m_c^2}\gamma^\nu P_- \otimes \gamma_\nu P_-, \eeq the operator expansion is \beq \label{eq:OPE1} \int d^4x\;e^{iq\cdot x} \; T[\bar c\gamma^\mu c(x)\;{\cal O}(0)] = \tilde{\cal O}+\cdots. \eeq The ellipses indicate terms of higher order in our expansion, and correspond to higher derivative operators suppressed by powers of $m_{c,b}$. There are also perturbative corrections to this expression. These show up as modifications to the operator defined by setting $\Gamma_c\otimes \Gamma_b$ equal to (\ref{eq:feynmeff1}). 
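The leading-order step from Eq.~(\ref{eq:feynmfull1}) to Eq.~(\ref{eq:feynmeff1}) can be checked numerically with explicit Dirac matrices: at vanishing residual momenta, $q+m_cv'=m_bv$ exactly, so the intermediate charm propagator collapses to the local form. A minimal sketch (the masses and the rapidity of $v'$ are illustrative choices, not fits):

```python
import numpy as np

# Dirac matrices in the Dirac representation
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gi = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

def slash(p):
    """p-slash = p^mu gamma_mu, metric (+,-,-,-)."""
    return p[0] * g0 - sum(p[k + 1] * gi[k] for k in range(3))

def msq(p):
    """Minkowski square p.p."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

mb, mc, eta = 4.8, 1.4, 0.7                # illustrative masses (GeV) and rapidity
v = np.array([1.0, 0.0, 0.0, 0.0])          # B four-velocity (its rest frame)
vp = np.array([np.cosh(eta), 0.0, 0.0, np.sinh(eta)])  # D^(*) four-velocity
q = mb * v - mc * vp                        # photon momentum at k_i = 0

# charm propagator of Eq. (feynmfull1), numerator over denominator, at k_i = 0 ...
lhs = (slash(q) + mc * slash(vp) + mc * np.eye(4)) / (msq(q + mc * vp) - mc**2)
# ... agrees with the local structure entering Eq. (feynmeff1)
rhs = (mb * slash(v) + mc * np.eye(4)) / (mb**2 - mc**2)
assert np.allclose(lhs, rhs)
```

The same check with nonzero residual momenta shows the deviation growing linearly in $k_i/m_{c,b}$, which is the advertised size of the corrections.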
\begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig4.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig1} but with the electromagnetic current coupling to the $d$-quark.} \label{fig:fig4} \end{figure} The diagram of Fig.~\ref{fig:fig2} can be analyzed in complete analogy. It leads to the operator $\tilde{\cal O}$ with the choice \beq \label{eq:feynmeff2} \Gamma_c\otimes \Gamma_b =- Q_b \gamma^\nu P_- \otimes \gamma_\nu P_-\frac{m_b +m_c\sla{v}'}{m_b^2-m_c^2} \gamma^\mu, \eeq where $Q_b=-1/3$ is the $b$ quark charge. The analysis of Figs.~\ref{fig:fig3} and~\ref{fig:fig4} is similar, but there is an important distinction. With the electromagnetic current coupling to the light quarks, we get intermediate light quark propagators. The denominators of these propagators are parametrically large only if $q^2$ is parametrically large, \ie, if $q^2\sim m_{c,b}^2$. With this caveat, the OPE for Fig.~\ref{fig:fig3} gives \beq \label{eq:feynmeff3} \Gamma_c\otimes \Gamma_b = -Q_u \gamma^\nu P_-\frac{\sla{q}}{q^2} \gamma^\mu \otimes \gamma_\nu P_- \eeq and for Fig.~\ref{fig:fig4} the OPE gives \beq \label{eq:feynmeff4} \Gamma_c\otimes \Gamma_b = Q_d \gamma^\nu P_-\otimes \gamma^\mu \frac{\sla{q}}{q^2} \gamma_\nu P_-. \eeq \section{Spin Symmetry} \label{sec:spinsym} We have shown how to replace the time ordered product in Eq.~(\ref{T-prod}) by a local operator. The replacement is valid provided the invariant mass of the lepton pair is large, \ie, scales as $q^2\sim m_{c,b}^2$. The operator $\tilde {\cal O}$ that replaces the time ordered product is defined by Eq.~(\ref{eq:hqetopdefd}), with the tensor $\Gamma_c\otimes \Gamma_b $ defined as the sum of the contributions in Eqs.~(\ref{eq:feynmeff1b}), (\ref{eq:feynmeff2}), (\ref{eq:feynmeff3}) and~(\ref{eq:feynmeff4}). We now show how to relate the matrix element of this operator to the operator with $\Gamma_c\otimes \Gamma_b =\gamma^\nu P_-\otimes \gamma_\nu P_-$. 
This operator is not only simpler, but one can estimate its matrix elements by a variety of means, as we explain below. Consider the matrix element of $\tilde{\cal O}$ as defined in Eq.~(\ref{eq:hqetopdefd}) for an arbitrary tensor product $\Gamma_c\otimes \Gamma_b$ between heavy meson states. We will use heavy quark spin symmetry to determine the matrix elements of this operator. Recall that the HQET lagrangian \beq {\cal L}_{\rm HQET}=\barhbv iv\cdot D\hbv + \bar h^{(c)}_{v'} iv'\cdot Dh^{(c)}_{v'} \eeq is symmetric under the group $SU(2)_b\times SU(2)_c$ of transformations acting on spin indices of the heavy quark fields: \[ \hbv \to S^{\phantom{\dagger}}_b\hbv \quad,\quad h^{(c)}_{v'} \to S^{\phantom{\dagger}}_ch^{(c)}_{v'}. \] At $v'=v$ the symmetry is enlarged to $U(4)$, which contains an $SU(2)$ subgroup corresponding to a flavor symmetry. For now we will need only the spin symmetries. In order to make use of these symmetries, it is convenient to represent a spin multiplet consisting of a pseudoscalar $P$ and a vector meson $V_\mu$ by a $4\times4$ matrix \beq H_v=\left(\frac{1+\sla{v}}{2}\right)[V_\mu\gamma^\mu-P\gamma_5]. \eeq Then $S_b\in SU(2)_b$ and $S_c\in SU(2)_c$ act simply on the left, \beq H_v^{(b)}\to S_b H_v^{(b)} \qquad H_{v'}^{(c)}\to S_c H_{v'}^{(c)}, \eeq while an arbitrary rotation $R$ represented by the Dirac matrix ${\cal D}(R)$ acts simultaneously on both multiplets according to \beq H^{(Q)}\to {\cal D}(R)^\dagger H^{(Q)} {\cal D}(R). \eeq Consider now the matrix element $\vev{H^{(c)}_{v'}|\tilde{\cal O}|H^{(b)}_v}$. It must be linear in the tensors $\Gamma_c\otimes \Gamma_b$, $H^{(b)}_v$ and $H^{(c)}_{v'}$. Acting with $SU(2)_b$ we see that $\Gamma_b\to \Gamma_b S_b^\dagger$ and $H_v^{(b)}\to S_b H_v^{(b)}$, so they enter the matrix element as the product $\Gamma_bH_v^{(b)}$. 
A similar argument with $SU(2)_c$ then gives \beq \label{eq:fromspin} \vev{H^{(c)}_{v'}|\tilde{\cal O}|H^{(b)}_v} \propto \bar H^{(c)}_{v'} \Gamma_c\otimes \Gamma_b H^{(b)}_v. \eeq Finally, invariance under rotations implies that the remaining four indices must be contracted. There are two possible contractions, \beq \label{eq:contractions} {\rm Tr}(\bar H^{(c)}_{v'} \Gamma_c) {\rm Tr} (\Gamma_b H^{(b)}_v) \quad\hbox{and}\quad {\rm Tr}(\bar H^{(c)}_{v'} \Gamma_c \Gamma_b H^{(b)}_v). \eeq We now show that the second one is excluded by chiral symmetry. The lagrangian for a massless quark in QCD, \beq {\cal L}=\bar\psi i\Sla{D}\psi, \eeq is invariant under the chiral symmetry \beq \psi \to e^{i\alpha\gamma_5}\psi, \eeq where $\alpha$, the parameter of the transformation, is a real number. Under this symmetry the transformation rule for our tensors is \beq \label{eq:GbHbtransf} \Gamma_b H^{(b)}_v \to e^{-i\alpha\gamma_5} \Gamma_b H^{(b)}_v e^{i\alpha\gamma_5} \eeq and \beq \bar H^{(c)}_{v'} \Gamma_c \to e^{i\alpha\gamma_5}\bar H^{(c)}_{v'} \Gamma_c e^{-i\alpha\gamma_5}. \eeq It is seen that the first contraction of indices in (\ref{eq:contractions}) is invariant, but the second one is not. We have shown that heavy quark spin symmetry, rotations and light quark chiral symmetry combine to give \beq \label{eq:werels} \vev{H^{(c)}_{v'}|\tilde{\cal O}|H^{(b)}_v} =\frac14\beta(w) {\rm Tr}(\bar H^{(c)}_{v'} \Gamma_c) {\rm Tr} (\Gamma_b H^{(b)}_v). \eeq We have indicated that the invariant matrix element $\beta$ is a function of $w=v\cdot v'$. In general, it is a function of $v$ and $v'$. However, since it must be Lorentz invariant and since $v^2=v^{\prime2}=1$, it is a function of $w=v\cdot v'$ only. The octet operator in the HQET, \beq \label{eq:hqetopdefd8} \tilde{\cal O}_8\equiv \bar d \Gamma^{\phantom{()}}_bT^ah^{(b)}_{v}\; \bar h^{(c)}_{v'}\Gamma^{\phantom{()}}_cT^a u, \eeq has the same spin and heavy flavor symmetry properties as its singlet counterpart. 
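The role of chiral symmetry in Eq.~(\ref{eq:contractions}) can also be verified numerically: under the transformations of Eq.~(\ref{eq:GbHbtransf}) the two tensors are conjugated by $e^{\pm i\alpha\gamma_5}$, so the product of traces is invariant while the single trace is not. A sketch with generic (randomly chosen) stand-ins for $\bar H^{(c)}_{v'}\Gamma_c$ and $\Gamma_b H^{(b)}_v$:

```python
import numpy as np

rng = np.random.default_rng(0)

# gamma5 in the Dirac representation, and S = exp(i alpha gamma5)
g5 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.eye(2), np.zeros((2, 2))]]).astype(complex)
alpha = 0.4
S = np.cos(alpha) * np.eye(4) + 1j * np.sin(alpha) * g5   # uses gamma5^2 = 1
Sinv = np.cos(alpha) * np.eye(4) - 1j * np.sin(alpha) * g5

# generic stand-ins for the tensors Hbar Gamma_c and Gamma_b H
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# chiral transformation: A -> S A S^-1 and B -> S^-1 B S
Ap, Bp = S @ A @ Sinv, Sinv @ B @ S

# the product of traces is chirally invariant ...
assert np.isclose(np.trace(Ap) * np.trace(Bp), np.trace(A) * np.trace(B))
# ... while the single trace is not
assert not np.isclose(np.trace(Ap @ Bp), np.trace(A @ B))
```

The first trace structure is invariant because each factor transforms by a similarity transformation; in the second the compensating factors do not cancel between the two tensors.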
Therefore in complete analogy we can introduce a reduced matrix element $\beta_8$: \beq \label{eq:werels8} \vev{H^{(c)}_{v'}|\tilde{\cal O}_8|H^{(b)}_v} =\frac14\beta_8(w) {\rm Tr}(\bar H^{(c)}_{v'} \Gamma_c) {\rm Tr} (\Gamma_b H^{(b)}_v). \eeq The authors of Ref.~\cite{GJMSW} proposed a relation analogous to Eq.~(\ref{eq:werels}) for a $\Delta B=2$ transition. It was noted there that spin symmetry allows more than one invariant, but that all invariants lead to the same symmetry relations. One may wonder whether our use of chiral symmetry can help relate the different invariants there. We now show that this is not the case. For the $\Delta B=2$ case the analogue of Eq.~(\ref{eq:fromspin}) is \beq \label{eq:fromspin2} \vev{H^{(\bar b)}_{v}|\tilde{\cal O}_{\Delta B=2}|H^{(b)}_v} \propto \Gamma_{\bar b}\bar H^{(\bar b)}_{v}\otimes \Gamma_b H^{(b)}_v, \eeq where $\tilde{\cal O}_{\Delta B=2} = \bar d \Gamma^{\phantom{()}}_{\bar b}h^{(\bar b)}_{v}\; \bar d \Gamma^{\phantom{()}}_bh^{( b)}_{v}$ (note that we define $h^{(\bar b)}_{v}$ to create a $b$-antiquark). Again, invariance under rotations implies that the remaining four indices must be contracted and, again, there are two possible contractions, \beq \label{eq:contractions2} {\rm Tr}(\Gamma_{\bar b}\bar H^{(\bar b)}_{v}) {\rm Tr} (\Gamma_b H^{(b)}_v) \quad\hbox{and}\quad {\rm Tr}(\Gamma_{\bar b}\bar H^{(\bar b)}_{v}\Gamma_b H^{(b)}_v). \eeq Chiral symmetry for the antiquark's meson tensor is just as for the quark's in Eq.~(\ref{eq:GbHbtransf}), \beq \Gamma_{\bar b}\bar H^{(\bar b)}_{v}\to e^{-i\alpha\gamma_5} \Gamma_{\bar b}\bar H^{(\bar b)}_{v} e^{i\alpha\gamma_5}. \eeq Therefore both contractions in (\ref{eq:contractions2}) are allowed by chiral symmetry. However, it is easy to see that for a class of operators of interest the two contractions are equivalent. 
If \[ \Gamma_{\bar b}\otimes \Gamma_b = \gamma^\mu P_- \hat\Gamma \otimes \gamma_\mu P_- \] or \[ \Gamma_{\bar b}\otimes \Gamma_b = \gamma^\mu P_- \otimes \gamma_\mu P_-\hat\Gamma , \] for an arbitrary Dirac matrix $\hat\Gamma$, the two contractions are related by Fierz rearrangement. This class of operators includes the $B-\bar B$ mixing case studied in Ref.~\cite{GJMSW}. \section{QCD Corrections} \label{sec:QCD1} Consider the operator expansion in Eq.~(\ref{eq:OPE1}). We have seen that at leading order the operator on the right hand side is given by Eqs.~(\ref{eq:hqetopdefd})--(\ref{eq:feynmeff1b}). We now consider the leading-log corrections to this relation. In the large mass limit these are formally the largest corrections to the operator expansion. A renormalization scale $\mu$ must be stipulated for the evaluation of matrix elements of the composite operators on both sides of Eq.~(\ref{eq:OPE1}). It is often convenient to evaluate the matrix elements at a low renormalization point $\mu=\mu_{\rm low}$. This choice makes the matrix elements in the HQET completely independent of the large masses of the heavy quarks. If $\mu_{\rm low}\ll m_{c,b}$ there are large corrections to Eq.~(\ref{eq:OPE1}) in the form of powers of $\alpha_s\ln(m_{c,b}/\mu_{\rm low})$. These powers of large logarithms can be summed using renormalization group techniques. The corrections to these ``leading-logs'' are of order $1/\ln(m_{c,b}/\mu_{\rm low})$ or $\alpha_s$. It is therefore important to keep $\mu_{\rm low}$ small, but large enough that perturbation theory remains valid. When we estimate decay rates below, we use $\mu_{\rm low}=1.0$~GeV. To study the dependence on the renormalization point $\mu$ we take a logarithmic derivative on both sides of Eq.~(\ref{eq:OPE1}). Consider first the left side. Acting with $\mu(d/d\mu)$ on the charm number current $\bar c \gamma^\mu c$ gives zero, because the current is conserved. 
The action of $\mu(d/d\mu)$ on the composite four-quark operator is a linear combination of itself and the octet operator. It is therefore convenient to consider instead the linear combination that appears in the effective Hamiltonian (\ref{eq:Heff}): \begin{eqnarray} \label{eq:OPEfull} \int d^4x\;e^{iq\cdot x} \; T[\bar c\gamma^\mu c(x)& &(c{\cal O}(0)+c_8{\cal O}_8(0))] =\nonumber\\ & &\tilde c\tilde{\cal O}+\tilde c_8\tilde{\cal O}_8+\cdots. \end{eqnarray} The coefficients $c$ and $c_8$ are such that the left hand side is $\mu$-independent. This is necessary for the physical amplitude to be independent of the arbitrary choice of renormalization point $\mu$. Therefore our task is to determine the proper $\mu$-dependence for $\tilde c$ and $\tilde c_8 $ so that the right hand side is also independent of $\mu$. Thus, if the operators satisfy \beq \label{eq:rgeops} \mu\frac{d}{d\mu} \left( \begin{array}{c} \tilde{\cal O} \\ \tilde{\cal O}_8 \\ \end{array}\right) =\gamma \left( \begin{array}{c} \tilde{\cal O} \\ \tilde{\cal O}_8 \\ \end{array}\right), \eeq where $\gamma$ is a $2\times2$ matrix of anomalous dimensions, then the coefficients must satisfy \beq \label{eq:rgecoefs} \mu\frac{d}{d\mu} \left( \begin{array}{c} \tilde{c} \\ \tilde{c}_8 \\ \end{array}\right) =-\gamma^T \left( \begin{array}{c} \tilde{c} \\ \tilde{c}_8 \\ \end{array}\right). \eeq Here ``$T$'' denotes the transpose. \begin{figure} \centerline{ \epsfysize 4.0in \epsfbox{fig11.eps}} \vskip0.5cm \caption{One loop Feynman diagrams for the calculation of the anomalous dimension matrix. The solid diamond represents the local operators ${\cal O}$ or ${\cal O}_8$.} \label{fig:loops} \end{figure} The calculation of the anomalous dimension matrix is straightforward. In dimensional regularization with $D=4-\epsilon$ dimensions, one needs\cite{DG} the residues of the $\epsilon$-poles of graphs with one insertion of the operators $\tilde{\cal O}$ and $\tilde{\cal O}_8 $. 
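The statement that Eq.~(\ref{eq:rgecoefs}) compensates Eq.~(\ref{eq:rgeops}) can be illustrated in a toy numerical example: if the operators evolve with $\gamma$ and the coefficients with $-\gamma^T$, the combination $\tilde c\,\tilde{\cal O}+\tilde c_8\,\tilde{\cal O}_8$ is exactly scale independent. All numbers below are illustrative:

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
gamma = 0.1 * rng.standard_normal((2, 2))   # toy anomalous-dimension matrix
t = np.log(4.0)                             # ln(mu0 / mu_low), illustrative

O0 = rng.standard_normal(2)                 # matrix elements of (O~, O~8) at mu0
c0 = rng.standard_normal(2)                 # coefficients (c~, c~8) at mu0

O_mu = expm(gamma * t) @ O0                 # operators evolve with  +gamma
c_mu = expm(-gamma.T * t) @ c0              # coefficients evolve with -gamma^T

# the physical combination c~ O~ + c~8 O~8 is independent of mu
assert np.isclose(c_mu @ O_mu, c0 @ O0)
```

The cancellation is exact because $e^{-\gamma^T t}$ is precisely the transpose inverse of $e^{\gamma t}$.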
The leading-log corrections arise from the leading, $O(\alpha_s)$ terms in $\gamma$. These arise from the one-loop graphs in Fig.~\ref{fig:loops}. In principle the different tensor structures $\Gamma_c\otimes \Gamma_b$ defining $\tilde{\cal O}$ and $\tilde{\cal O}_8 $ can have different anomalous dimensions and even mix among themselves. However, spin symmetry ensures that the anomalous dimension matrix is independent of the tensor structure $\Gamma_c\otimes \Gamma_b$. We find \begin{equation} \gamma=\frac{\alpha_s}{4\pi} \left(\begin{array}{cc} 8 & -4wr(w)-2\\ -\frac89 wr(w)-\frac49 & \frac{17}3-\frac{14}3wr(w)\\ \end{array}\right), \end{equation} where \beq r(w)\equiv \frac1{\sqrt{w^2-1}}\ln(w+\sqrt{w^2-1}). \eeq The solution to the renormalization group equation (\ref{eq:rgecoefs}) is straightforward. In terms of the ratio of running coupling constants \beq \label{eq:zdef} z\equiv\left(\frac{\alpha_s(\mu)}{\alpha_s({\muz})}\right) \eeq and the functions \begin{eqnarray} \psi &=& \frac1{12}\;\frac{41-14wr(w)}{b_0},\\ \xi &=& -\frac34\; \frac{1+2wr(w)}{b_0}, \end{eqnarray} where the coefficient of the one loop term of the $\beta$-function for QCD is $b_0=11-\frac23n_f$, and $n_f$ is the number of light flavors ($n_f=3$ in our case), we obtain \beq \label{eq:rgesol1} \left( \begin{array}{c} \tilde{c}(\mu) \\ \tilde{c}_8(\mu) \\ \end{array}\right) =U \left( \begin{array}{c} \tilde{c}({\muz}) \\ \tilde{c}_8({\muz}) \\ \end{array}\right) \eeq where \beq \label{eq:rgesol2} U=z^\psi \left(\begin{array}{cc} \frac19z^\xi+\frac89z^{-\xi}& \frac4{27}(z^\xi-z^{-\xi})\\ \frac23(z^\xi-z^{-\xi})& \frac89z^\xi+\frac19z^{-\xi} \end{array}\right). \eeq The question that remains is how to determine the coefficients $\tilde c$ and $\tilde c_8$ at some scale ${\muz}$. But we have already determined these coefficients in Sec.~\ref{sec:method}. 
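The evolution matrix of Eq.~(\ref{eq:rgesol2}) is straightforward to evaluate; a useful consistency check is that $U$ reduces to the identity in the absence of running, $z=1$, and that $r(w)\to1$ as $w\to1$. The $\alpha_s$ values below are assumed for illustration only:

```python
import numpy as np

def r(w):
    """r(w) = ln(w + sqrt(w^2-1)) / sqrt(w^2-1); r(w) -> 1 as w -> 1."""
    return np.log(w + np.sqrt(w**2 - 1)) / np.sqrt(w**2 - 1)

def evolution_matrix(z, w, nf=3):
    """Leading-log evolution matrix U of Eq. (rgesol2)."""
    b0 = 11 - 2 * nf / 3
    psi = (41 - 14 * w * r(w)) / (12 * b0)
    xi = -3 * (1 + 2 * w * r(w)) / (4 * b0)
    return z**psi * np.array(
        [[z**xi / 9 + 8 * z**-xi / 9, 4 * (z**xi - z**-xi) / 27],
         [2 * (z**xi - z**-xi) / 3, 8 * z**xi / 9 + z**-xi / 9]])

# consistency checks: no running gives the identity, and r(w) -> 1 at w = 1
assert np.allclose(evolution_matrix(1.0, 1.3), np.eye(2))
assert abs(r(1.0 + 1e-6) - 1.0) < 1e-2

# illustrative evaluation with z = alpha_s(1 GeV)/alpha_s(4 GeV), both assumed
U = evolution_matrix(0.5 / 0.22, 1.3)
assert np.all(np.isfinite(U))
```

Note that $U$ depends on $w$ through the anomalous dimensions, so in rate estimates it must be evaluated at the $w$ corresponding to each $q^2$.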
Recall that the operator $\tilde {\cal O}$ that replaces the time ordered product is defined by Eq.~(\ref{eq:hqetopdefd}), with the tensor $\Gamma_c\otimes \Gamma_b $ defined as the sum of the contributions in Eqs.~(\ref{eq:feynmeff1b}), (\ref{eq:feynmeff2}), (\ref{eq:feynmeff3}) and~(\ref{eq:feynmeff4}) with unit coefficient. The question can be rephrased as: for what scale ${\muz}$ is the calculation in Sec.~\ref{sec:method} valid? What we would like to do is to determine for what choice of ${\muz}$ the loop corrections to relations like Eq.~(\ref{eq:OPE1}) will be free from large logs. The only relevant scales in the problem are the large masses $m_{c,b}$, the invariant mass of the $e^+e^-$ pair, $q^2$, which itself scales like $m_{c,b}^2$, the small masses and residual momenta and the renormalization point ${\muz}$. The corrections to the relations of Sec.~\ref{sec:method} are guaranteed to be free from logs of the small masses or residual momenta. But there will be logs of ratios of large masses to the renormalization point, $\ln(m_{c,b}/{\muz})$. To avoid these one may choose ${\muz}\sim m_{c,b}$. For our computations below we will use ${\muz}\approx4.0$~GeV. If the scales $m_c$ and $m_b$ are both large but very disparate one could refine the above analysis by introducing a new renormalization group equation to re-sum the logs of $m_c/m_b$. The results of this section would still re-sum the logs of $\mu/m_c$. We thus have that $\tilde c(\mu)$ and $\tilde c_8(\mu)$ are given by Eqs.~(\ref{eq:rgesol1}) and~(\ref{eq:rgesol2}), with \begin{eqnarray} \tilde c(\muz) & = & c({\muz}) = \frac23(x^{-1}-\frac12x^2)\\ \tilde c_8({\muz}) & = & c_8({\muz}) = x^{-1}+x^2 \end{eqnarray} where \beq x\equiv\left(\frac{\alpha_s({\muz})}{\alpha_s(M_W)}\right)^{6/23}. \eeq For illustration we have given the leading log expression for the coefficients $c(\muz)$ and $c_8({\muz})$, but in rate computations below we use the next to leading log results from \cite{buras}. 
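The near-cancellation in the leading-log singlet coefficient is easy to exhibit numerically: for moderate running $x$ is close to one, so $x^{-1}$ and $\frac12x^2$ nearly cancel in $c({\muz})$, while no such cancellation occurs in $c_8({\muz})$. The $\alpha_s$ values below are assumptions, not precise determinations:

```python
# Leading-log coefficients at mu0 from the expressions above;
# the alpha_s inputs are assumed, illustrative values.
alpha_s_mu0, alpha_s_MW = 0.22, 0.12      # assumed alpha_s(4 GeV), alpha_s(M_W)

x = (alpha_s_mu0 / alpha_s_MW) ** (6 / 23)
c = (2 / 3) * (1 / x - x**2 / 2)          # singlet: the two terms nearly cancel
c8 = 1 / x + x**2                         # octet: no cancellation

assert 0 < c < 0.2                        # accidentally small
assert c8 > 5 * c                         # the octet coefficient is much larger
```

This makes explicit why a next-to-leading-log correction can shift $c({\muz})$ by a large relative amount without signaling a breakdown of perturbation theory.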
We do not have at present a full next to leading log result: still missing is a computation of the one loop corrections to the coefficients $\tilde c$ and $\tilde c_8$ at $\mu=\muz$ and of the anomalous dimension matrix $\gamma$ of Eq.~(\ref{eq:rgeops}) at two loops. It is interesting to note that the coefficient $c({\muz})$ is significantly enhanced at next to leading log order. For the case ${\muz}=4.0$~GeV one has in next to leading order\cite{buras} $c=0.16$, rather than the leading log result $c=0.07$. We emphasize that this enhancement can be systematically accounted for. The large enhancement is not a signal of perturbation theory breaking down but rather due to the accidental cancellation in the leading order. \section{Rates: $\bar B^0\to D^{(*)0} e^+e^-$} \label{sec:results1} We are ready to compute decay rates. Defining \beq h^{(*)\mu}= \langle D^{(*)0}| \int d^4x\;e^{iq\cdot x} \; T(j^\mu_{\rm em}(x){\cal H}'_{\rm eff}(0)) |\bar B^0\rangle, \eeq the decay rate for $\bar B^0\to D^{(*)0}e^+e^-$ is given in terms of $q^2$ and $t\equiv(p_D+p_{e^+})^2=(p_B-p_{e^-})^2$ by \begin{equation} \label{eq:doublediffrate} \frac{d\Gamma}{dq^2dt}=\frac1{2^8\pi^3M_B^3} \left|\frac{e^2}{q^2}\ell_\mu h^{(*)\mu} \right|^2 \end{equation} where $\ell^\mu=\bar u(p_{e^-})\gamma^\mu v(p_{e^+})$ is the leptons' electromagnetic current. A sum over final state lepton helicities, and polarizations in the $D^*$ case, is implicit. To compute $h^{(*)\mu}$ we need to pull together the results of the previous sections. First the time ordered product is expanded in terms of local operators as in Eqs.~(\ref{eq:feynmeff1b})--(\ref{eq:feynmeff4}). This involves replacing the coefficient functions $c({\muz})$ and $c_8({\muz})$ by $\tilde c({\muz})$ and $\tilde c_8({\muz})$ as seen in Eq.~(\ref{eq:OPEfull}). 
Then the matrix elements of the leading local operators $\tilde{\cal O}$ and $\tilde{\cal O}_8$ between particular states can all be expressed in terms of the reduced matrix elements $\beta$ and $\beta_8$ defined in (\ref{eq:werels}) and (\ref{eq:werels8}). Finally, to make all dependence on the heavy quark masses explicit, we run down the coefficients $\tilde c$ and $\tilde c_8$ from the scale $\mu={\muz}$ of order $m_{b,c}$ (which we take to be $\sqrt{m_cm_b}$) to a scale $\mu=\mu_{\rm low}$ of order a few times $\LQCD$. Our computation gives \begin{eqnarray} h^\mu=\frac\kappa3& &\Big[ \frac{-(2wm_b+m_c)v^\mu-(m_b-4wm_c)v^{\prime\mu}}{(m_bv-m_cv')^2} \nonumber\\ & &\hspace{3cm}+\frac{3(m_bv^{\prime\mu}+m_cv^\mu)}{m_b^2-m_c^2}\Big] \end{eqnarray} and \begin{eqnarray} h^{*\mu}&=&\frac\kappa3\Big[ \frac{m_b(\epsilon^\mu+2v\cdot\epsilon v^{\mu}) -m_c(3v\cdot\epsilon v^{\prime\mu}+w\epsilon^\mu)}{(m_bv-m_cv')^2}\nonumber\\ & &\hspace{2cm}+ \frac{3im_c\epsilon^{\mu\alpha\beta\gamma}\epsilon_\alpha v'_\beta v_\gamma} {(m_bv-m_cv')^2} \\ &-& \frac{3m_b\epsilon^\mu+3m_c(v\cdot\epsilon v^{\prime\mu}-w\epsilon^\mu) -im_c\epsilon^{\mu\alpha\beta\gamma}\epsilon_\alpha v'_\beta v_\gamma} {m_b^2-m_c^2} \Big].\nonumber \end{eqnarray} Here $\kappa=G_F/\sqrt2\,V^{\phantom{*}}_{cb}V_{ud}^*[\tilde c\beta+\tilde c_8\beta_8]$. These expressions are our central results, demonstrating that the decay rates for $\bar B^0\to D^{(*)0}e^+e^-$ can be expressed in terms of the matrix elements $\beta$ and $\beta_8$. Below we make an educated guess for these matrix elements, but for reliable results they should be determined from first principles, say, by Monte Carlo simulations of lattice QCD. In the computation of the rate the amplitude depends on heavy quark masses $m_c$ and $m_b$, while the phase space involves physical meson masses $M_B$ and $M_{D}$ or $M_{D^*}$. 
Although it is straightforward to retain the dependence on all four masses in our expressions for the decay rates, we have chosen to express the results in terms of physical meson masses, with the substitutions $m_b=M_B$ and $m_c=M_{D}$ or $m_c=M_{D^*}$. We are not justified in distinguishing between quark and meson masses since the distinction enters at higher order in the $1/m_{c,b}$ expansion. It is now a trivial exercise to compute the differential decay rate. Integrating the rate in Eq.~(\ref{eq:doublediffrate}) over the variable $t$ we obtain \beq \label{eq:rateBDee} \frac{d\Gamma}{dq^2}= \frac{\alpha^2 G_F^2}{288\pi M_B^3}|V_{cb} V_{ud}|^2 (\tilde c\beta+\tilde c_8\beta_8)^2{\cal F}(\hq). \eeq Here ${\cal F}(\hq)$ is a dimensionless function of $\hq \equiv \sqrt{q^2/m_b^2}$ and $\hm\equiv M_{D^{(*)}}/M_B$. For $\bar B^0\to D^{*0}e^+e^-$ it is given by \begin{eqnarray} {\cal F} &=& \frac43 \frac{\sqrt {{{1-2 {\hat q}^{2}-2 {\hat m}^{2}+{\hat q}^{4}-2 {\hat m} ^{2}{\hat q}^{2}+{\hat m}^{4}}} }}{{\hat q}^6\hat m (1-\hat m^2)^{2}} \nonumber\\ & &(5 {\hm}^{2}+19 {\hm}^{4}\hq^{2}+30 {\hm}^{6}-20 {\hm}^{4}-14 \hq^{2} {\hm}^{2}\nonumber\\ & &-20 {\hm}^{8}+12 \hq^{6}{\hm}^{2}+\hq^{2}+\hq^{6}-2 \hq^{4}+2 {\hm}^{6}\hq^{4}\nonumber\\ & &-6 {\hm}^{8}\hq^{2}+5 {\hm}^{10}-6 \hq^{6}{\hm}^{4} +5 {\hm}^{2}\hq^{8}),\nonumber\\ \end{eqnarray} while for $\bar B^0\to D^0e^+e^-$ \beq {\cal F}= \frac{4 (2 {\hat m}^{2}+1 )^{2} (1-2 {\hat q}^{2}-2 {\hat m}^{2}+{\hat q}^{4}-2 {\hat m}^{2}{\hat q}^{2}+{\hat m}^{4})^{\frac32}}{ 3{\hat q}^4\hat m (1-\hat m^2)^{2} }. \eeq In these we have neglected the electron mass. In order to obtain a numerical estimate of the branching fraction we need to calculate the hadronic matrix elements $\beta$ and $\beta_8$. While these could be studied in Monte Carlo simulations of QCD on the lattice, at the moment we have no reliable information on their magnitude. 
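The phase-space functions can be checked for internal consistency: the combination $1-2\hq^2-2\hm^2+\hq^4-2\hm^2\hq^2+\hm^4$ is the triangle function $\lambda(1,\hm^2,\hq^2)$, which vanishes at the endpoint $\hq=1-\hm$, so ${\cal F}$ must vanish there and be positive inside the physical region. A sketch for the $D^0$ case (meson masses approximate):

```python
import numpy as np

def F_D(hq, hm):
    """Dimensionless spectrum function for the D final state, as given above."""
    # clip tiny negative rounding of the triangle function at the endpoint
    lam = np.maximum(1 + hq**4 + hm**4 - 2*hq**2 - 2*hm**2 - 2*hq**2*hm**2, 0.0)
    return 4 * (2*hm**2 + 1)**2 * lam**1.5 / (3 * hq**4 * hm * (1 - hm**2)**2)

MB, MD = 5.2795, 1.8648        # approximate meson masses, GeV
hm = MD / MB
hq_max = 1 - hm                # endpoint: q^2_max = (M_B - M_D)^2

hq = np.linspace(0.2, 0.999 * hq_max, 200)
assert np.all(F_D(hq, hm) > 0)           # positive inside the physical region
assert np.isclose(F_D(hq_max, hm), 0.0)  # vanishes at the kinematic endpoint
```

The $1/\hq^4$ prefactor also makes visible the strong enhancement of the spectrum at low $q^2$, inherited from the photon pole.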
These matrix elements are similar to the matrix element of the $\Delta B=2$ operator for $B-\bar B$ mixing. Lattice QCD\cite{latticeBB} indicates that the vacuum saturation approximation works very well for $B-\bar B$ mixing. Therefore we take vacuum saturation as an educated guess\footnote{The matrix elements in $B^+\to D^{(*)+} e^+e^-$ can be related by symmetry to the matrix element for $B-\bar B$ mixing, if the matrix element of the octet is negligible; see Ref.~\cite{evans-99-1}.} for $\beta$ and~$\beta_8$. Taking $\Gamma_c\otimes\Gamma_b=\gamma^\mu\gamma_5\otimes\gamma^\nu\gamma_5$ the right hand side of Eq.~(\ref{eq:werels}) is $v^\nu v^{\prime\mu}\beta$. On the left hand side vacuum saturation gives $(z^{-a_I}f_Bp_B^\nu/\sqrt{M_B})(z^{-a_I}f_Dp_D^\mu/\sqrt{M_D})$. Here $z$ is defined in Eq.~(\ref{eq:zdef}) and $a_I=2/b_0$\cite{fggw} is the well-known anomalous scaling power for the heavy-light current in HQET.\footnote{The two factors of $z^{-a_I}$ really correspond to distinct running, between $m_b$ and $\mu_{\rm low}$ for the first factor, and between $m_c$ and $\mu_{\rm low}$ for the second. The distinction is of higher order than we have retained, if we assume that the heavy scales $m_b$ and $m_c$ are not too disparate, that is, that $\alpha_s$ does not run much between these scales.} Thus we obtain \begin{eqnarray} \beta(w) & = & z^{-2a_I} f_Bf_D\sqrt{M_BM_D}\\ \beta_8(w) & = &0 \end{eqnarray} The second equation is true not just in vacuum saturation but also in the approximation that we can insert a complete set of states between the currents defining $\tilde {\cal O}_8$. This is not an exact statement because the composite operator $\tilde {\cal O}_8$ does not equal the product of two currents. But the distinction arises from their different short distance behavior. So we expect the deviation of $\beta_8$ from zero to be of order of the QCD coupling at short distances, $\alpha_s({\muz})$, times the unsuppressed $\beta$. 
Using these matrix elements we integrate the differential rate in Eq.~(\ref{eq:rateBDee}) over the range $1.0~\hbox{GeV}^2\le q^2\le q^2_{\rm max}$ to obtain a partial decay rate. We have chosen $q^2_{\rm min}=1.0$~GeV$^2$ as a lower limit since our OPE requires that $q^2$ scale like $m_{c,b}^2$. The corrections to the leading terms in Eqs.~(\ref{eq:feynmeff3}) and~(\ref{eq:feynmeff4}) are of the form of an expansion in $m_{c,b}k/q^2$, where $k$ is any of the residual momenta and in our matrix elements is of order $\LQCD$. Parametrically, if $q^2\sim m_{c,b}^2$, then $m_{c,b}k/q^2 \sim\LQCD/m_{c,b}\ll1$. In addition, the region $q^2\alt\LQCD m_{c,b}$, where the expansion breaks down, is parametrically small. However, physical heavy masses are not very large, and the scale $m_b\LQCD$ is just slightly smaller than $m_c^2$. In order to have some non-trivial phase space we have taken $q^2\agt m_b\LQCD\sim1.0$~GeV$^2$. The price we pay is that for the lower values of $q^2$ our expansion converges slowly, $m_{c,b}k/q^2\alt1$. We find \begin{eqnarray} \label{eq:ratedstar} {\rm Br}(\bar B^0\to D^{*0}e^+e^-)|_{q^2>1~{\rm GeV}^2} &=&1.4\times10^{-8}\\ {\rm Br}(\bar B^0\to D^{0}e^+e^-)|_{q^2>1~{\rm GeV}^2} &=&2.6\times10^{-9} \end{eqnarray} where we have used $|V_{cb}V_{ud}|=0.04$, $f_D=f_B\sqrt{M_B/M_D}$ and $f_B=170$~MeV. It is important to observe that the portion of phase space $q^2\ge 1.0~{\rm GeV}^2$ is expected to give a small fraction of the total rate since the pole at $q^2=0$ dramatically amplifies the rate for small $q^2$. The rates for $\bar B_s^0\to D^{*0}e^+e^-$ and $\bar B_s^0\to D^{0}e^+e^-$ can be obtained to good approximation by replacing $|V_{cb}V_{ud}|$ by $|V_{cb}V_{us}|$, reducing the rates by a factor $(0.22)^2\approx0.05$. The next generation of B-physics experiments at high energy and luminosity hadron colliders, like LHC-B and BTeV, will produce well in excess of $10^{11}$ $B$-mesons per year. 
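The integration is easy to reproduce in outline. The sketch below integrates Eq.~(\ref{eq:rateBDee}) for the $D$ final state over $q^2>1~{\rm GeV}^2$, leaving the nonperturbative combination $C=\tilde c\beta+\tilde c_8\beta_8$ as a free input; the $B$ lifetime, masses and CKM factor are assumed, illustrative values, and only orders of magnitude should be read off:

```python
import numpy as np

# Assumed inputs (illustrative, not fits)
alpha_em, GF = 1 / 137.036, 1.16637e-5      # G_F in GeV^-2
MB, MD = 5.2795, 1.8648                     # meson masses in GeV (approximate)
Vfac = 0.04                                 # |V_cb V_ud|
tauB = 1.5e-12 / 6.582e-25                  # assumed B lifetime, in GeV^-1
hm = MD / MB

def F(hq):
    """Spectrum function for the D final state, from the text."""
    lam = np.maximum(1 + hq**4 + hm**4 - 2*hq**2 - 2*hm**2 - 2*hq**2*hm**2, 0.0)
    return 4 * (2*hm**2 + 1)**2 * lam**1.5 / (3 * hq**4 * hm * (1 - hm**2)**2)

q2 = np.linspace(1.0, (MB - MD)**2, 4001)   # q^2 > 1 GeV^2 up to the endpoint
vals = F(np.sqrt(q2) / MB)
dq2 = q2[1] - q2[0]
integral = dq2 * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule, GeV^2

pref = alpha_em**2 * GF**2 * Vfac**2 / (288 * np.pi * MB**3)

def branching(C):
    """Partial branching fraction for a given C = c~ beta + c~8 beta8 (GeV^3)."""
    return tauB * pref * C**2 * integral

# the photon pole at q^2 = 0 makes the spectrum peak at low q^2 ...
assert vals[0] > vals[-1]
# ... and for C of order 0.01-0.1 GeV^3 the partial branching fraction is tiny
assert 1e-12 < branching(0.01) < branching(0.1) < 1e-6
```

With the running coefficients and the vacuum-saturation $\beta$ of the text, $C$ lands in roughly this range, which is how estimates at the $10^{-9}$--$10^{-8}$ level arise.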
Our calculation includes only large invariant mass lepton pairs so detection and triggering on the lepton pair should be straightforward. Dedicated studies must be done to determine the feasibility of detection and measurement of spectra of these decays. \section{Decays to Quarkonium} \label{sec:onium} \subsection{Operator Expansion and NRQCD} The decays $B_s\to\eta_c e^+e^-$ and $B_s\to J/\psi e^+e^-$ (and obvious extensions to excited charmonium) can be studied in a similar way. The notable difference in the operator expansion here is that the residual momenta $k$ of the heavy quarks in the quarkonium bound state do scale with the large heavy mass, $k\sim\alpha_s m_c$, as opposed to the residual momenta of the quarks in the heavy $B$ or $D$ mesons, $k\sim\LQCD$. The residual momentum for the case of quarkonia is small for a different reason: ${\bf k}=m_c{\bf u} $ and $k^0=\frac12m_c u^2$ are small because the velocity ${\bf u}$ of the bound quarks is small\cite{NRQCD} for heavy quarks, $u\sim \alpha_s(m_c)$. The parameter of the expansion is therefore $m_{c,b}k/m_{c,b}^2\sim \alpha_s(m_c)$. \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig5.eps}} \vskip0.5cm \caption{Feynman diagram representing a contribution to the Green function. The filled square represents the four quark operator ${\cal O}$ and the cross represents the electromagnetic current $j^\mu_{{\rm em}}$, cf. Eq.~(\ref{eq:OPEfullcharmonium}), which here couples to the $c$-quark. } \label{fig:fig5} \end{figure} Our best hope of making the nature of the expansion explicit is to use NRQCD\cite{NRQCD}, the effective theory of non-relativistic quarks in QCD. As opposed to HQET, where all the heavy mass dependence has disappeared, the lagrangian of NRQCD still depends on the heavy mass: \beq \label{eq:NRQCDlag} {\cal L}_{\rm NRQCD}= \Psi^\dagger(iD_t -\frac{{\bf D}^2}{2m_c})\Psi. \eeq Here $\Psi$ denotes a two-component spinor field for the $c$-quark. 
A separate spinor field must be included to describe the antiquark. We have written the lagrangian in the rest-frame of charmonium, but it is straightforward to boost into a moving frame. One relies on the dynamics to generate the small parameter of the expansion.\footnote{Attempts to make the expansion in $u$\cite{mlam} or, alternatively, in $1/c$\cite{bgir} explicit yield theories where the gluon self-couplings must be perturbative. The scale of QCD must then be negligible compared with the Bohr radius of quarkonium, $\LQCD\ll m_c\alpha_s(m_c)$. In our case non-perturbative gluons play a crucial role in binding the heavy-light meson $B$.} For example, the two terms in ${\cal L}_{\rm NRQCD}$ are of comparable magnitude if, as expected, $D_t\sim k^0\sim m_c\alpha_s^2$ and $ |{\bf D}|\sim |{\bf k}|\sim m_c\alpha_s$. The operator expansion is in terms of operators with an HQET quark, a light quark and an NRQCD quark-antiquark pair. So instead of Eq.~(\ref{eq:hqetopdefd}) we have \beq \label{eq:hqetnrqcdopdefd} \tilde{\cal O}\equiv \bar s \Gamma^{\phantom{()}}_bh^{( b)}_{v}\; \Psi_c^\dagger\Gamma^{\phantom{()}}_c \Psi_{\bar c}, \eeq where $\Psi_c^\dagger$ and $\Psi_{\bar c}$ create a charm quark and a charm antiquark, respectively. We elect to use four-component spinors throughout; the reduction to two components results from algebraic constraints that must be imposed, just as in HQET: \[ \Psi=\left(\frac{1+\sla{v}'}{2}\right)\Psi \] \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig7.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig5} but with the electromagnetic current coupling to the $c$-antiquark.} \label{fig:fig7} \end{figure} \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig6.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig5} but with the electromagnetic current coupling to the $b$-quark.} \label{fig:fig6} \end{figure} The calculation proceeds much as before. 
The effective Hamiltonian for the weak transition is \beq \label{eq:Heffcharmonium} {\cal H}'_{\rm eff}= \frac{4G_F}{\sqrt2}\,V^{\phantom{*}}_{cs}V^*_{cb}\left( c(\mu/M_W){\cal O}+c_8(\mu/M_W){\cal O}_8\right), \eeq where \beq \label{eq:Odefdcharmonium} {\cal O}=\bar s\gamma^\nu P_- b \;\;\bar c\gamma_\nu P_- c \eeq and \beq {\cal O}_8=\bar s\gamma^\nu P_-T^a b\;\; \bar c\gamma_\nu P_- T^a c. \eeq The operator expansion of the hadronic matrix element takes the form \begin{eqnarray} \label{eq:OPEfullcharmonium} \int d^4x\;e^{iq\cdot x} \; T[j_{\rm em}^\mu(x) & &(c{\cal O}(0)+c_8{\cal O}_8(0))] =\nonumber\\ & &\tilde c\tilde{\cal O}+\tilde c_8\tilde{\cal O}_8+\cdots, \end{eqnarray} where $\tilde{\cal O}$ is defined in (\ref{eq:hqetnrqcdopdefd}) and the octet operator $\tilde{\cal O}_8$ is defined analogously, \beq \label{eq:hqetnrqcdopdefd8} \tilde{\cal O}_8\equiv \bar d \Gamma^{\phantom{()}}_bT^ah^{( b)}_{v}\; \Psi_c^\dagger\Gamma^{\phantom{()}}_cT^a \Psi_{\bar c}. \eeq The first task is to determine the tensor $\Gamma_b\otimes\Gamma_c$. To this end we consider Green functions of the time-ordered product in Eq.~(\ref{eq:OPEfullcharmonium}) with four external quarks. The in-going momenta of the $b$- and $s$-quarks are $m_b v+k_b$ and $k_s$, respectively. The outgoing momenta of the charm pair are $m_c v' +k_c$ and $m_c v' +k_{\bar c}$. As explained above, we expect $k_b\sim k_s\sim \LQCD$ while $k_c\sim k_{\bar c}\sim m_c\alpha_s(m_c)$. The leading term in the momentum of the electromagnetic current is $q=m_bv-2m_cv'$. For the purpose of determining the expansion coefficients at tree level we may set $c=1$ and $c_8=0$ and, choosing a renormalization point $\muz$ of the order of the large masses $m_{c,b}$, we can set $\tilde c=1$ and $\tilde c_8=0$. There are four graphs contributing to the tensor $\Gamma_c\otimes\Gamma_b$.
Fig.~\ref{fig:fig5} gives \beq \label{eq:feynmeffch1} \Gamma_c\otimes\Gamma_b= Q_c\gamma^\mu\frac{m_b \sla{v}-m_c(\sla{v}'-1)}{m_b^2-2m_bm_cw}\gamma^\nu P_- \otimes \gamma_\nu P_-, \eeq and Fig.~\ref{fig:fig7} gives \beq \label{eq:feynmeffch2} \Gamma_c\otimes\Gamma_b= Q_c\gamma^\nu P_-\frac{-m_b \sla{v}+m_c(\sla{v}'+1)}{m_b^2-2m_bm_cw}\gamma^\mu \otimes \gamma_\nu P_-. \eeq Note that the denominator, which dictates the convergence of the expansion, scales with $m_{c,b}^2$. It vanishes at $w_0=m_b/2m_c$. However, this point never lies in the physical region: $w_{\rm max}=(m_b^2+4m_c^2)/4m_bm_c=w_0-(m_b/4m_c-m_c/m_b)$, and since $m_b>2m_c$ for the decay to be allowed, $w_{\rm max}<w_0$. \begin{figure} \centerline{ \epsfysize 2.0in \epsfbox{fig8.eps}} \vskip0.5cm \caption{Same as Fig.~\ref{fig:fig5} but with the electromagnetic current coupling to the $s$-quark.} \label{fig:fig8} \end{figure} The diagrams in Figs.~\ref{fig:fig6} and~\ref{fig:fig8} are just as in Figs.~\ref{fig:fig2} and~\ref{fig:fig4}, with the replacement $q=m_bv-m_cv'\to q=m_bv-2m_cv'$. For the first we have \beq \label{eq:feynmeffch3} \Gamma_c\otimes \Gamma_b =- Q_b \gamma^\nu P_- \otimes \gamma_\nu P_-\frac{m_b +2m_c\sla{v}'}{m_b^2-4m_c^2} \gamma^\mu, \eeq and for the second \beq \label{eq:feynmeffch4} \Gamma_c\otimes \Gamma_b = Q_d \gamma^\nu P_-\otimes \gamma^\mu \frac{\sla{q}}{q^2} \gamma_\nu P_-. \eeq Again we see that the expansion remains valid as long as $q^2$ scales with the heavy masses (squared), and this limitation arises solely from the coupling of the photon to the light quark. \subsection{Spin Symmetry} The NRQCD lagrangian contains separate fields for the charm quark and antiquark. The quark lagrangian, Eq.~(\ref{eq:NRQCDlag}), is symmetric under spin-$SU(2)$ transformations. The antiquark lagrangian is similarly invariant under a separate spin-$SU(2)$. This case has a larger spin symmetry than the case of decays to $D$-mesons.
One can therefore write a trace formula analogous to Eq.~(\ref{eq:werels}) without using chiral symmetry of the light quarks. We can represent the charmonium spin multiplet $(\eta_c,J/\psi)$ by the $4\times4$ matrix \beq H^{(\psi)}_{v'}= \left(\frac{1+\sla{v}'}{2}\right) [\psi_\mu\gamma^\mu-\eta_c\gamma_5] \left(\frac{1-\sla{v}'}{2}\right). \eeq The action of spin-$SU(2)\times SU(2)$ on this is then \beq \label{eq:Hpsispin} H^{(\psi)}_{v'}\to S_c H^{(\psi)}_{v'} S^\dagger_{\bar c} \eeq Consider the matrix element $\vev{H^{(\psi)}_{v'}|\tilde{\cal O}|H^{(b)}_v}$. It must be linear in the tensors $\Gamma_c\otimes \Gamma_b$, $H^{(b)}_v$ and $\bar H^{(\psi)}_{v'}$. As before, acting with $SU(2)_b$ we see that $\Gamma_b\to \Gamma_b S_b^\dagger$ and $H_v^{(b)}\to S_b H_v^{(b)}$, so they enter the matrix element as $\Gamma_bH_v^{(b)}$. Now, acting with the spin symmetries of NRQCD, we have Eq.~(\ref{eq:Hpsispin}) and $\Gamma_c\to S_c \Gamma_c S^\dagger_{\bar c}$, so that they must enter the matrix element as ${\rm Tr} (\bar H^{(\psi)}_{v'}\Gamma_c)$. Finally, rotations demand that we sum over the two remaining indices, \beq \label{eq:werelscharmonium} \vev{H^{(\psi)}_{v'}|\tilde{\cal O}|H^{(b)}_v} =\frac14\beta {\rm Tr}(\bar H^{(\psi)}_{v'} \Gamma_c) {\rm Tr} (\Gamma_b H^{(b)}_v). \eeq Similarly, for the octet operator we find \beq \label{eq:werels8charmonium} \vev{H^{(\psi)}_{v'}|\tilde{\cal O}_8|H^{(b)}_v} =\frac14\beta_8(w) {\rm Tr}(\bar H^{(\psi)}_{v'} \Gamma_c) {\rm Tr} (\Gamma_b H^{(b)}_v). \eeq We have used the same symbols here for operators and reduced matrix elements as in Secs.~\ref{sec:method} and~\ref{sec:spinsym}, but they should be understood as distinct. \subsection{QCD Corrections} \label{sec:QCD2} Consider the operator expansion~(\ref{eq:OPEfullcharmonium}). 
Just as in Sec.~\ref{sec:QCD1} we argue that matching between left and right sides is most conveniently performed when the renormalization point $\muz$ is chosen to be of the order of the scale of the heavy quarks. For simplicity we assume that $m_c$ and $m_b$ are not too different, but both very large, so that we do not have to worry about large logs of the ratio $m_c/m_b$. Then one may take, say, $\muz\sim\sqrt{m_cm_b}$. The point is that the coefficients on the left-hand side of~(\ref{eq:OPEfullcharmonium}) explicitly depend on $M_W/\muz$ and the operators implicitly depend on $m_{c,b}/\muz$. If we choose to do the matching at a scale $\muz$ that differs significantly from $m_{c,b}$, then there are implicit large corrections. Note that the right-hand side of~(\ref{eq:OPEfullcharmonium}) can only introduce logs of low scales over $\muz$, but the same infrared logs are found on the left side of the equation. Once the coefficients $\tilde c$ and $\tilde c_8$ in~(\ref{eq:OPEfullcharmonium}) have been determined at $\muz$ we must ask at what scale $\mu$ we should evaluate the matrix elements and how to get there. The situation is more complicated than in the case of $B^0\to D^0e^+e^-$ of Sec.~\ref{sec:QCD1} because now the matrix element in the combined HQET/NRQCD effective theory has several scales. In NRQCD the relevant distance scale is the inverse Bohr radius $m_c\alpha_s(m_c)$ and the relevant temporal scale is the Rydberg $m_c\alpha_s^2(m_c)$. In HQET the dynamical scale is $\LQCD$. Of course $\LQCD$ also plays a dynamical role in NRQCD, but it is usually taken to be irrelevant since one assumes $\LQCD\ll m_c\alpha_s^2(m_c)\ll m_c\alpha_s(m_c)$. So we are faced with a multiple-scale problem. Setting $\mu$ equal to any one of these scales leaves us with large logs of the ratios of $\mu$ to the other two. It is not known how to use the renormalization group equation to re-sum these logs. Suppose that we set $\mu\sim m_c\alpha_s(m_c)$ or $\mu\sim m_c\alpha_s^2(m_c)$.
If we then use the renormalization group to sum powers of $\alpha_s(m_c)\ln(m_c/\mu)$ we will be summing powers of $\alpha_s(m_c)\ln\alpha_s(m_c)$. Notice that these logs vanish as $m_c\to\infty$, since $\alpha_s(m_c)\sim 1/\ln(m_c/\LQCD)$. Contrast this with the case $\mu\sim\LQCD$ (or, generally, setting $\mu$ equal to any fixed scale as $m_c\to\infty$). Then $\alpha_s(m_c)\ln(m_c/\mu)\sim1$ as $m_c\to\infty$. As a matter of principle, in the large mass limit it is these latter logs that must be summed (they are parametrically of leading order in the large mass expansion). Therefore we re-sum the leading logs with a fixed low scale $\mu=\mu_{\rm low}$ and choose, as before, $\mu_{\rm low}=1.0$~GeV in our numerical computations. In order to use dimensional regularization and keep track of different orders in the non-relativistic expansion we adopt the $1/c$ counting advocated in Ref.~\cite{bgir}. However, we use a covariant gauge for our calculations. This is convenient because the Feynman diagrams involve light and HQET quarks in addition to the NRQCD quarks. At leading order in the $1/c$ expansion the quark lagrangian in (\ref{eq:NRQCDlag}) is replaced by \beq \label{eq:NRQCDlagmod} {\cal L}_{\rm NRQCD}\to \Psi^\dagger(iD_t -\frac{{\bf \nabla}^2}{2m_c})\Psi \eeq The only interactions are due to temporal gluon exchange. Since we work in covariant gauge, this is not a pure Coulomb potential gluon. It is easy to see that no diagram involving an NRQCD quark gives a divergent contribution. The self-energy diagrams for the NRQCD quarks have an infinite piece, which, however, is independent of the momentum and therefore gives no contribution to wavefunction renormalization. Therefore the four quark operators scale as the heavy-light currents.
That is, \begin{equation} \gamma=\frac{\alpha_s}{4\pi} \left(\begin{array}{cc} 4 & 0\\ 0 & 1 \\ \end{array}\right), \end{equation} is the anomalous dimension matrix in the renormalization group equation for the operators, \beq \label{eq:rgeopscharmonium} \mu\frac{d}{d\mu} \left( \begin{array}{c} \tilde{\cal O} \\ \tilde{\cal O}_8 \\ \end{array}\right) =\gamma \left( \begin{array}{c} \tilde{\cal O} \\ \tilde{\cal O}_8 \\ \end{array}\right). \eeq Then the coefficients must satisfy \beq \label{eq:rgecoefscharmonium} \mu\frac{d}{d\mu} \left( \begin{array}{c} \tilde{c} \\ \tilde{c}_8 \\ \end{array}\right) =-\gamma^T \left( \begin{array}{c} \tilde{c} \\ \tilde{c}_8 \\ \end{array}\right), \eeq where, as above, ``$T$'' denotes the matrix transpose. The solution is trivial, \begin{eqnarray} \label{eqs:coefscharmonium} \tilde c(\mu) & = & z^{a_I} \tilde c(\muz) \\ \label{eqs:coefscharmonium2} \tilde c_8(\mu) & = & z^{\frac14a_I} \tilde c_8(\muz), \end{eqnarray} where $z$ is defined in Eq.~(\ref{eq:zdef}) and $a_I=2/b_0$ is the well-known anomalous scaling power for the heavy-light current in HQET\cite{fggw}. Contributions from higher orders in the $1/c$ expansion produce mixing with higher-dimension operators and are therefore excluded to the order we are working. This is easy to see. To compensate for the powers of $1/c$ one must have additional velocities in the operators. But these come from powers of $\partial/m_c$. The leading correction to the lagrangian is of order $1/c^{3/2}$. Since two insertions are needed, this gives a graph of order $1/c^3$. Since one power of $c$ is needed to form the QCD fine-structure constant, $\alpha_s=g_s^2/4\pi c$, the divergent part of the graph involves $p^2/m_c^2c^2$. It is straightforward to verify this by direct calculation.
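As a rough numerical illustration of Eqs.~(\ref{eqs:coefscharmonium}) and~(\ref{eqs:coefscharmonium2}), the sketch below evaluates the leading-log factors $z^{a_I}$ and $z^{a_I/4}$ assuming the standard convention $z=\alpha_s(\mu)/\alpha_s(\muz)$ (Eq.~(\ref{eq:zdef}) itself is not reproduced in this section) together with one-loop running; the values of $\LQCD$, $n_f$ and the matching scale are illustrative, not taken from the text:

```python
from math import log, pi

# One-loop running of the coefficients: tilde_c(mu) = z^{a_I} * tilde_c(mu0),
# tilde_c8(mu) = z^{a_I/4} * tilde_c8(mu0), with a_I = 2/b0 and b0 = 11 - 2*nf/3.
# Assumed convention: z = alpha_s(mu)/alpha_s(mu0); all scales illustrative.
def alpha_s(mu, Lambda=0.25, nf=4):
    """One-loop strong coupling; Lambda (GeV) is an illustrative value."""
    b0 = 11 - 2 * nf / 3
    return 4 * pi / (b0 * log(mu**2 / Lambda**2))

nf = 4
b0 = 11 - 2 * nf / 3
a_I = 2 / b0

mu0 = 2.6      # matching scale ~ sqrt(mc*mb) in GeV (assumed)
mu_low = 1.0   # fixed low scale used in the text, GeV

z = alpha_s(mu_low) / alpha_s(mu0)
c_ratio = z ** a_I         # enhancement of the singlet coefficient
c8_ratio = z ** (a_I / 4)  # milder enhancement of the octet coefficient
assert z > 1  # coupling grows toward the low scale
print(f"z = {z:.2f}, c(mu)/c(mu0) = {c_ratio:.2f}, c8(mu)/c8(mu0) = {c8_ratio:.2f}")
```

The octet coefficient runs with one quarter of the singlet anomalous power, so its leading-log enhancement is correspondingly weaker.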
\subsection{Rates} Defining \beq h^{(\Psi)\mu}= \langle \Psi| \int d^4x\;e^{iq\cdot x} \; T(j^\mu_{\rm em}(x){\cal H}'_{\rm eff}(0)) |B_s\rangle, \eeq where $\Psi=\eta_c,J/\psi$, the decay rate for $B_s\to \Psi e^+e^-$ is given in terms of $q^2$ and $t\equiv(p_\Psi+p_{e^+})^2=(p_B-p_{e^-})^2$ by \begin{equation} \label{eq:doublediffratecharmonium} \frac{d\Gamma}{dq^2dt}=\frac1{2^8\pi^3M_B^3} \left|\frac{e^2}{q^2}\ell_\mu h^{(\Psi)\mu} \right|^2 \end{equation} where $\ell^\mu=\bar u(p_{e^-})\gamma^\mu v(p_{e^+})$ is the leptons' electromagnetic current. A sum over final state lepton helicities, and polarizations in the $\Psi=J/\psi$ case, is implicit. We obtain \begin{eqnarray} h^{(\eta_c)\mu}=\frac\kappa3& &\Big[ \frac{m_bv^{\prime\mu}-2(wm_b-m_c)v^\mu}{(m_bv-2m_cv')^2} \nonumber\\ & &\hspace{3cm}+\frac{m_bv^{\prime\mu}+2m_cv^\mu}{m_b^2-4m_c^2}\Big] \end{eqnarray} and \begin{eqnarray} h^{(J/\psi)\mu}&=&\frac\kappa3\Big[ \frac{2m_bv\cdot\epsilon v^{\mu}-(m_b-2m_cw)\epsilon^\mu -2m_cv\cdot\epsilon v^{\prime\mu}}{(m_bv-2m_cv')^2}\nonumber\\ &+& \frac{2im_c\epsilon^{\mu\alpha\beta\gamma}\epsilon_\alpha v_\beta v'_\gamma}{(m_bv-2m_cv')^2} +\frac{8im_c\epsilon^{\mu\alpha\beta\gamma}\epsilon_\alpha v_\beta v'_\gamma}% {m_b^2-2m_bm_cw} \\ & &\hspace{-1cm}- \frac{m_b\epsilon^\mu+2m_c(v\cdot\epsilon v^{\prime\mu}-w\epsilon^\mu) +2im_c\epsilon^{\mu\alpha\beta\gamma}\epsilon_\alpha v_\beta v'_\gamma}% {m_b^2-4m_c^2} \Big].\nonumber \end{eqnarray} Here $\kappa=G_F/\sqrt2\,V^{\phantom{*}}_{cb}V_{cs}^*[\tilde c\beta+\tilde c_8\beta_8]$. These expressions are our central results for decays to charmonium, demonstrating that the decay rates for $B_s\to \eta_c e^+e^-$ and $B_s\to J/\psi e^+e^-$ can be expressed in terms of the local operator matrix elements $\beta$ and $\beta_8$. We now compute the differential decay rate. 
We integrate the rate in Eq.~(\ref{eq:doublediffratecharmonium}) over the variable $t$ and obtain, for both $B_s\to \eta_c e^+e^-$ and $B_s\to J/\psi e^+e^-$, \beq \label{eq:rateBpsiee} \frac{d\Gamma}{dq^2}= \frac{\alpha^2 G_F^2}{288\pi M_B^3}|V_{cb} V_{cs}|^2 (\tilde c\beta+\tilde c_8\beta_8)^2{\cal F}(\hq). \eeq Here ${\cal F}(\hq)$ is a dimensionless function of $\hq \equiv \sqrt{q^2/m_b^2}$ and $\hm\equiv M_{J/\psi}/M_B$. For $B_s\to J/\psi e^+e^-$ it is given by \begin{eqnarray} {\cal F} &=& \frac{4\sqrt{1-2 {\hq}^{2}-2 {\hm}^{2}+{\hq}^{4} -2{\hm}^{2}{\hq}^{2}+{\hm}^{4}}}{3\hq^6{\hm}^{2} (1-\hm^2)^{2} (1+{\hq}^{2}-{\hm}^{2} )^{2} }\nonumber\\ & &(15 {\hm}^{10}-6 {\hm}^{12}+{\hm}^{2}+ 15 {\hm}^{6}+{\hq}^{2}-6 {\hm}^{4}-20 {\hm}^{8}\nonumber\\ & &+{\hq}^{10}-2 {\hq}^{6}+6 {\hm}^{2}{\hq}^{2}+23 {\hm}^{ 2}{\hq}^{4}+55 {\hm}^{2}{\hq}^{8}+{\hm}^{2}{\hq}^{12}\nonumber\\ & &-111 {\hm}^{8}{\hq}^{2}+234 {\hm}^{6}{\hq}^{4} +104 {\hm}^{6}{\hq}^{2}+92 {\hm}^{6}{\hq}^{6}\nonumber\\ & &-188 {\hm}^{8} {\hq}^{4}+58 {\hm}^{10}{\hq}^{2}-46 {\hm}^{4}{\hq}^{2}+30 {\hm}^{4}{\hq}^{6}\nonumber\\ & &-124 {\hm}^{4}{\hq}^{4}+{\hm} ^{14}-72 {\hm}^{8}{\hq}^{6}+55 {\hm}^{10}{\hq}^{4}-12 { \hm}^{12}{\hq}^{2}\nonumber\\ & &+23 {\hm}^{6}{\hq}^{8}-78 {\hm }^{4}{\hq}^{8}+4 {\hm}^{4}{\hq}^{10}-6 {\hm}^{2}{\hq}^{10} -48 {\hm}^{2}{\hq}^{6} ) \nonumber\\ \end{eqnarray} while for $B_s\to \eta_c e^+e^-$ \beq {\cal F} = \frac{4(1-2 \hq^2-2 {\hm}^{2}+{\hq}^{4} -2 {\hm}^{2}{\hq}^{2}+{\hm}^{4})^{\frac32}} {3\hq^4 \hm^2 (1-\hm^2)^2}. \eeq For a numerical estimate we need to calculate the matrix elements $\beta$ and $\beta_8$. Again we use vacuum saturation. However, now this approximation is supported by NRQCD. It is argued in Ref.~\cite{bbl} that soft gluon exchange with the quarkonium is suppressed by powers of the relative velocity $u=\alpha_s(m_c)$, and that the matrix element of the octet operator is similarly suppressed. 
Therefore we take \begin{eqnarray} \label{eqs:melemscharmonium} \beta(w) & = & z^{-a_I}f_Bf_{\eta_c}\sqrt{M_BM_{\eta_c}}\\ \label{eqs:melemscharmonium2} \beta_8(w) & = &0. \end{eqnarray} Note that because vacuum saturation here is valid at least as a leading approximation in a velocity expansion, the combination of coefficients in (\ref{eqs:coefscharmonium})--(\ref{eqs:coefscharmonium2}) and matrix elements in (\ref{eqs:melemscharmonium})--(\ref{eqs:melemscharmonium2}) is automatically independent of the renormalization point $\mu$. Spin symmetry gives $f_{\eta_c}=f_{J/\psi}$. We use the measured value from the leptonic width in the tree-level rate equation, \beq \Gamma(J/\psi\to e^+e^-)=4\pi\alpha^2\frac{f_{J/\psi}^2}{M_{J/\psi}}, \eeq and obtain $f_{J/\psi}=0.16$~GeV. Integrating over $q^2\ge1.0~{\rm GeV}^2$ we have partial branching fractions \begin{eqnarray} \label{eq:btojpsirate} {\rm Br}(B_s\to J/\psi e^+e^-)|_{q^2>1~{\rm GeV}^2} &=&2.2\times10^{-10}\\ {\rm Br}(B_s\to \eta_c e^+e^-)|_{q^2>1~{\rm GeV}^2} &=&3.4\times10^{-11} \end{eqnarray} where we have used $|V_{cb}V_{cs}|=0.04$, and $f_B=170$~MeV. Again, we remind the reader that the portion of phase space $q^2\ge 1.0~{\rm GeV}^2$ contains only a small fraction of the total rate since the pole at $q^2=0$ dramatically amplifies the rate for small $q^2$. The rates for $B^0\to J/\psi e^+e^-$ and $B^0\to\eta_c e^+e^-$ can be obtained to good approximation by replacing $|V_{cb}V_{cs}|$ by $|V_{cb}V_{cd}|$, reducing the rates by $(0.22)^2\approx0.05$. The rate (\ref{eq:btojpsirate}) may seem too small to be detectable even in the next generation of hadronic colliders. However, it must be kept in mind that the signature involves four leptons with large invariant masses (one being the $J/\psi$).
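As a numerical cross-check of the value $f_{J/\psi}=0.16$~GeV quoted above, one can invert the tree-level leptonic-width formula; the measured width $\Gamma(J/\psi\to e^+e^-)\approx5.5$~keV used below is an assumed input, not a number taken from the text:

```python
from math import sqrt, pi

# Invert Gamma(J/psi -> e+e-) = 4*pi*alpha^2 * f^2 / M (the tree-level formula
# quoted in the text) to recover the decay constant f_{J/psi}.
alpha = 1 / 137.036   # fine-structure constant
M_jpsi = 3.097        # J/psi mass in GeV
Gamma_ee = 5.5e-6     # leptonic width in GeV (~5.5 keV; assumed input)

f_jpsi = sqrt(Gamma_ee * M_jpsi / (4 * pi * alpha**2))
print(f"f_Jpsi = {f_jpsi:.2f} GeV")  # close to the 0.16 GeV quoted in the text
```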
\section{Conclusions} \label{sec:conclusions} We have shown how to apply the OPE advertised in Ref.~\cite{evans-99-2} to the processes $\bar B_{d,s}\to J/\psi e^+e^-$, $\bar B_{d,s}\to \eta_c e^+e^-$, $\bar B_{d,s}\to D^{*0} e^+e^-$ and $\bar B_{d,s}\to D^{0} e^+e^-$. Through the use of the OPE, the long-distance (first-order weak and first-order electromagnetic) interaction is replaced by a sum of local operators. The application of the OPE is restricted to a limited kinematic region. In the processes $\bar B_{d,s}\to J/\psi e^+e^-$ and $\bar B_{d,s}\to \eta_c e^+e^-$ our method leads naturally to an NRQCD expansion for the $J/\psi$ and $\eta_c$. This illustrates that the methods of Ref.~\cite{evans-99-2} are applicable to a wider class of processes. Furthermore, we found that the number of independent matrix elements of the local operators is severely restricted by the combined use of heavy-spin, rotational and chiral symmetry. The independent matrix elements could be determined, say, in lattice simulations. Our paper shows that the processes considered can be studied in a systematic fashion, independent of any model assumptions, in the kinematic regime where $q^2$ scales like $m_{c,b}^2$. Using a crude estimate of the matrix elements, we found the rates of all the processes considered to be small. We expect that some of them, in particular $\bar B_{s}\to D^{*0} e^+e^-$, should be accessible at planned experiments at hadron colliders, such as BTeV or LHC-B. \bigskip {\it Acknowledgments} We thank Mark Wise for useful discussions and conversations. This work is supported by the Department of Energy under contract No.\ DOE-FG03-97ER40546.
\subsection{Proof strategy} To establish the posterior concentration rate for $\Psi$ and $\Omega$, we followed \citet{Ning2020} and \citet{Bai2020_groupSSL} and first showed that the posterior concentrates in log-affinity (see Section~\ref{sec:log_affinity_to_sieve} in the Supplementary Materials for details). Posterior concentration of the individual parameters followed as a consequence. To show that the posterior concentrates in log-affinity, we appealed to general results about posterior concentration for independent but non-identically distributed observations. Specifically, we verified the three conditions of Theorem 8.23 of \citet{ghosal2017fundamentals}. First, we confirmed that the cgSSL prior introduced in Section~\ref{sec:cgSSL_prior} places enough prior probability mass in small neighborhoods around every possible choice of $(\Psi,\Omega).$ This was done by verifying that for each $(\Psi,\Omega),$ the prior probability contained in a small Kullback-Leibler ball around $(\Psi,\Omega)$ can be lower bounded by a function of the ball's radius (the so-called ``KL-condition'' in Lemma~\ref{lemma:KL_cg} of the Supplementary Materials). Then we studied a sequence of likelihood ratio tests defined on sieves of the parameter space that can correctly distinguish between parameter values that are sufficiently far away from each other in log-affinity. In particular, we bounded the error rate of such tests and then bounded the covering number of the sieves (Lemma~\ref{lemma:packing_number_lp} of the Supplementary Materials). \citet{Ning2020} studied the sparse marginal regression model in Equation~\eqref{eq:marginal_model} instead of the sparse chain graph. Although these are somewhat different models, our overall proof strategy is quite similar to theirs. However, we pause here to highlight some important technical differences. 
First, \citet{Ning2020} placed a prior on $\Omega$'s eigendecomposition while we placed an arguably simpler and more natural element-wise prior on $\Omega.$ The second and more substantive difference is in how we bound the covering number of sieves of the underlying parameter space. Because \citet{Ning2020} specified exactly sparse priors on the elements of $B = \Psi\Omega^{-1},$ it was enough for them to carefully bound the covering number of exactly low-dimensional sets of the form $\mathcal{A} \times \{0\}^{r}$ where $\mathcal{A}$ is some subset of a multi-dimensional Euclidean space and $r$ is a positive integer. In contrast, because we specified absolutely continuous priors on the elements of $\Psi,$ we had to cover ``effectively low-dimensional'' sets of the form $\mathcal{A} \times [-\delta, \delta]^{r}$ for small $\delta > 0.$ Our key lemma (Lemma \ref{lemma:packing_number_lp} in the Supplementary Materials) provides sufficient conditions on $\delta$ for bounding the $\epsilon$-packing number of such effectively low-dimensional sets using the $\epsilon'$-packing number of $\mathcal{A}$ for a carefully chosen $\epsilon' > 0.$ \subsection{Contraction of cgSSL} In order to establish our posterior concentration results, we first assume that the data $(\bm{x}_{1}, \bm{y}_{1}), \ldots, (\bm{x}_{n}, \bm{y}_{n})$ were generated according to a Gaussian chain graph model with true parameters $\Psi_{0}$ and $\Omega_{0}.$ We need to make additional assumptions about the spectra of $\Psi_{0}$ and $\Omega_{0}$ and on the dimensions $n, p$ and $q.$ \begin{description} \item[A1]{$\Psi_{0}$ and $\Omega_{0}$ have bounded operator norms: that is, $\Psi_{0} \in \mathcal{T}_{0}=\{\Psi:|||\Psi|||_2<a_1\}$ and $\Omega_{0} \in \mathcal{H}_{0}=\{\Omega:|||\Omega|||_2\in [1/b_2,1/b_1]\}$ where $||| \cdot |||_2$ is the operator norm and $a_{1}, b_{1}, b_{2} > 0$ are fixed positive constants.} \item[A2]{Dimensionality: We assume that $\log(n)\lesssim \log(q);$ $\log(n)\lesssim \log(p);$
and $$ \max\{p,q,s_0^\Omega,s_0^\Psi\}\log(\max\{p,q\})/n\to 0, $$ where $s_{0}^{\Omega}$ and $s_{0}^{\Psi}$ are the numbers of non-zero free parameters in $\Omega$ and $\Psi$, respectively; and $a_n\lesssim b_n$ means that for sufficiently large $n$, there exists a constant $C$ independent of $n$ such that $a_n\le Cb_n$.} \item[A3]{Tuning the $\Psi$ prior: We assume that $(1-\theta)/\theta\sim (pq)^{2+a'}$ for some $a'>0$; $\lambda_0 \sim \max\{n,pq\}^{2+b'}$ for some $b'>1/2$; and $\lambda_1\asymp 1/n$.} \item[A4]{Tuning the $\Omega$ prior: We assume that $(1-\eta)/\eta\sim \max\{Q,pq\}^{2+a}$ for some $a>0$, where $Q=q(q-1)/2$; $\xi_0\sim \max\{Q,pq,n\}^{4+b}$ for some $b>0$; $\xi_1\asymp 1/n$; and $\xi\asymp 1/\max\{Q,n\}$.} \end{description} Before stating our main result, we pause to highlight two key differences between the above assumptions and the model introduced in Section~\ref{sec:cgSSL_prior}. Although the prior in Section~\ref{sec:cgSSL_prior} restricts $\Omega$ to the positive-definite cone, Assumption A1 is slightly stronger as it bounds the smallest eigenvalue of $\Omega$ away from zero. The stronger assumption ensures that the entries of $X\Psi\Omega^{-1}$ do not diverge in our theoretical analysis. We additionally restricted our theoretical analysis to the setting where the proportions of non-negligible parameters, $\theta$ and $\eta,$ are fixed and known (Assumptions A3 and A4). We note that \citet{RockovaGeorge2018_ssl} and \citet{Gan2019_unequal} make similar assumptions in their theoretical analyses.
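As a numerical illustration of Assumption A2 (with purely illustrative dimensions, not values from our analysis), the quantity $\max\{p,q,s_0^\Omega,s_0^\Psi\}\log(\max\{p,q\})/n$, whose square root reappears as the rate $\epsilon_n$ in Theorem~\ref{thm:posterior_contraction}, indeed shrinks as $n$ grows:

```python
from math import log, sqrt

# Rate from Assumption A2 / the theorem: eps_n = sqrt(s_star * log(max(p, q)) / n),
# with s_star = max(p, q, s0_Omega, s0_Psi). Dimensions below are illustrative.
def eps_n(n, p, q, s0_omega, s0_psi):
    s_star = max(p, q, s0_omega, s0_psi)
    return sqrt(s_star * log(max(p, q)) / n)

p, q, s0_omega, s0_psi = 50, 30, 40, 60
rates = [eps_n(n, p, q, s0_omega, s0_psi) for n in (10**3, 10**4, 10**5)]
assert rates[0] > rates[1] > rates[2]  # eps_n -> 0 as n grows, as A2 requires
print([round(r, 3) for r in rates])
```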
\begin{myTheorem}[Posterior contraction of cgSSL] \label{thm:posterior_contraction} Under Assumptions A1--A4, there is a constant $M_{1} > 0,$ which does not depend on $n,$ such that \begin{align} \sup_{\Psi\in\mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0 \Pi\left(\Psi:||X(\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)||_F^2\ge M_1n\epsilon_n^2|Y_1,\dots,Y_n\right)\to &0 \label{eqn:contract_regressionfun_cg}\\ \sup_{\Psi\in\mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0 \Pi\left(\Omega:||\Omega-\Omega_0||_F^2\ge M_1\epsilon_n^2|Y_1,\dots,Y_n\right)\to &0 \label{eqn:contract_omega_cg} \end{align} where $\epsilon_n=\sqrt{\max\{p,q,s_0^\Omega,s_0^\Psi\}\log(\max\{p,q\})/n}.$ Note that $\epsilon_{n} \to 0$ as $n \rightarrow \infty.$ \end{myTheorem} A key step in proving Theorem~\ref{thm:posterior_contraction} is Lemma~\ref{lemma:dimension_recovery}, which shows that the cgSSL posterior does not place too much probability on $\Psi$'s and $\Omega$'s with too many large entries. In order to state this lemma, we denote the effective dimensions of $\Psi$ and $\Omega$ by $\lvert \nu(\Psi) \rvert $ and $\lvert \nu(\Omega) \rvert.$ The effective dimension of $\Psi$ (resp. $\Omega$) counts the number of entries (resp. off-diagonal entries in the lower-triangle) whose absolute value exceeds the intersection point of the spike and slab prior densities. 
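For Laplace spike and slab densities of the kind used in the SSL prior, the intersection point that defines the effective dimension has a closed form; the sketch below (hyperparameter values are illustrative, not the sequences required by Assumptions A3 and A4) solves $(1-\theta)\tfrac{\lambda_0}{2}e^{-\lambda_0|x|}=\theta\tfrac{\lambda_1}{2}e^{-\lambda_1|x|}$ for the threshold:

```python
from math import exp, log

# Threshold at which the (Laplace) spike and slab densities intersect:
# entries exceeding it in absolute value count toward the effective dimension.
# Hyperparameter values below are illustrative.
def intersection(theta, lam0, lam1):
    """Solve (1-theta)*lam0/2*exp(-lam0*x) = theta*lam1/2*exp(-lam1*x), x > 0."""
    return log((1 - theta) * lam0 / (theta * lam1)) / (lam0 - lam1)

theta, lam0, lam1 = 0.1, 50.0, 1.0
delta = intersection(theta, lam0, lam1)

# Check: the two mixture components have equal density at the threshold.
spike = (1 - theta) * lam0 / 2 * exp(-lam0 * delta)
slab = theta * lam1 / 2 * exp(-lam1 * delta)
assert abs(spike - slab) < 1e-12
print(f"threshold = {delta:.4f}")
```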
\begin{myLemma}[Dimension recovery of cgSSL] \label{lemma:dimension_recovery} For a sufficiently large constant $C_3'>0$, we have: \begin{align} \sup_{\Psi\in\mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0\Pi\left(\Psi:|\nu(\Psi)|>C_3's^\star|Y_1,\dots,Y_n\right)&\to 0\\ \sup_{\Psi\in\mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0\Pi\left(\Omega:|\nu(\Omega)|>C_3's^\star|Y_1,\dots,Y_n\right)&\to 0 \end{align} where $s^\star=\max\{p,q,s_0^\Omega,s_0^\Psi\}.$ \end{myLemma} Lemma~\ref{lemma:dimension_recovery} essentially guarantees that the cgSSL posterior does not grossly overestimate the number of predictor-response and response-response edges in the underlying graphical model. Note that the result in Equation~\eqref{eqn:contract_regressionfun_cg} shows that the vector containing the $n$ evaluations of the regression function (i.e., the vector $X\Psi\Omega^{-1}$) converges to the vector containing the evaluations of the true regression function $\Omega_{0}^{-1}\Psi_{0}^{\top}\bm{x}.$ Importantly, apart from Assumption A2 about the dimensions of $X,$ we did not make any additional assumptions about the design matrix. The contraction rates for $\Psi$ and $\Psi\Omega^{-1},$ however, depend critically on $X.$ To state these results, define the restricted eigenvalue of the design matrix $X$ as $$ \phi^{2}(s) = \inf_{A\in \mathbb{R}^{p\times q}:0\le |\nu(A)|\le s} \left\{ \frac{\lVert XA \rVert^{2}_{F}}{n\lVert A \rVert^{2}_{F}} \right\}.
$$ \begin{myCorollary}[Recovery of regression coefficients in cgSSL] \label{coro:contraction_B_cg} Under Assumptions A1--A4, there is some constant $M' > 0,$ which does not depend on $n,$ such that \begin{align} \sup_{\Psi\in \mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0 \Pi\left(||\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0||_F^2\ge \frac{M'\epsilon_n^2}{\phi^2(s_0^\Psi+C_3's^\star)}\right)&\to 0 \label{eqn:recover_marginal_cg}\\ \sup_{\Psi\in \mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0 \Pi\left(||\Psi-\Psi_0||_F^2\ge \frac{M'\epsilon_n^2}{\min\{\phi^2(s_0^\Psi+C_3's^\star),1\}}\right)&\to 0.\label{eqn:recover_cond_cg} \end{align} \end{myCorollary} Corollary~\ref{coro:contraction_B_cg} shows that the posterior distribution of $\Psi\Omega^{-1}$ can contract at a faster or slower rate than the posterior distributions of $X\Psi\Omega^{-1}$ and $\Omega,$ depending on the design matrix. In particular, when $X$ is poorly conditioned, we might expect the rate to be slower. In contrast, the term $\min\{\phi^2(s_0^\Psi+C_3's^\star),1\}$ appearing in the denominator of the rate in Equation~\eqref{eqn:recover_cond_cg} implies that the posterior distribution of $\Psi$ cannot concentrate at a faster rate than the posterior distributions of $\Psi\Omega^{-1}$ and $\Omega,$ regardless of the design matrix. To develop some intuition about this phenomenon, notice that we can decompose the difference $\Psi - \Psi_{0}$ as $$ \Psi - \Psi_{0} = (\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)\Omega+(\Psi_0\Omega_0^{-1}(\Omega-\Omega_0)\Omega^{-1})\Omega. $$ Roughly speaking, the decomposition suggests that in order to estimate $\Psi$ well, we must be able to estimate both $\Omega$ and $\Psi\Omega^{-1}$ well. 
In other words, estimating $\Psi$ is at least as hard, statistically, as estimating $\Omega$ and $\Psi\Omega^{-1}.$ Taken together, the two results in Corollary~\ref{coro:contraction_B_cg} suggest that while a carefully constructed design matrix can improve estimation of the matrix of \textit{marginal} effects, $B = \Psi\Omega^{-1},$ it cannot generally improve estimation of the matrix of \textit{direct} effects $\Psi.$ \subsection{The Gaussian chain graph model} \label{sec:chain_graph_model} Graphical models are a convenient way to represent the dependence structure between several variables. Specifically, we can represent each variable as a node in a graph and we can draw edges to indicate \textit{conditional} dependence between variables. Absence of an edge between two nodes indicates the corresponding variables are \textit{conditionally independent} given all of the other variables.
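To make this reading of the graph concrete, the following numerical sketch (all matrix entries are illustrative) verifies that a zero off-diagonal entry of a Gaussian precision matrix implies zero conditional covariance between the corresponding pair given the remaining variable, even though their marginal covariance is non-zero:

```python
import numpy as np

# In a Gaussian graphical model, omega_{k,k'} = 0 encodes conditional
# independence of Y_k and Y_k' given all other variables. Values assumed.
Omega = np.array([[2.0, 0.0, 0.8],
                  [0.0, 2.0, 0.6],
                  [0.8, 0.6, 2.0]])  # omega_12 = 0: Y1 indep. of Y2 given Y3
Sigma = np.linalg.inv(Omega)         # marginal covariance

# Conditional covariance of (Y1, Y2) given Y3 via the Schur complement.
S11, S12, S22 = Sigma[:2, :2], Sigma[:2, 2:], Sigma[2:, 2:]
cond_cov = S11 - S12 @ np.linalg.inv(S22) @ S12.T

assert abs(cond_cov[0, 1]) < 1e-12  # conditionally uncorrelated, hence independent
assert Sigma[0, 1] > 0              # yet marginally correlated through Y3
print(cond_cov)
```

The same mechanism underlies the distinction between the supports of $\Psi$ and $B$ discussed later in this section: zeros in the conditional parametrization need not survive marginalization.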
In the context of our gut microbiome data, we can represent each predictor $X_{j}$ with a node and each outcome $Y_{k}$ with a node. We are primarily interested in detecting edges between predictors and outcomes and edges between outcomes. Figure~\ref{fig:unlabelled_graph} is a cartoon illustration of such a graphical model with $p = 3$ and $q = 4.$ Note that we have not drawn any edges between the predictors as such edges are not typically of primary interest. \begin{figure}[H] \centering \hfill \begin{subfigure}[b]{0.34\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_unlabelled} \caption{} \label{fig:unlabelled_graph} \end{subfigure} \hspace{0.25\textwidth} \begin{subfigure}[b]{0.34\textwidth} \includegraphics[width = \textwidth]{Figs/graph_labelled} \caption{} \label{fig:labelled_graph} \end{subfigure} \hfill \caption{Cartoon illustrations of a general graphical model (a) and a Gaussian chain graph model (b) with $p = 3$ covariates and $q = 4$ outcomes. Edges in both graphs encode conditional dependence relationships. The edge labels in (b) correspond to the non-zero parameters in~Equation \eqref{eq:cg_model}.} \label{fig:graph_cartoon} \end{figure} Without additional modeling assumptions, estimating a discrete graph like that in Figure~\ref{fig:unlabelled_graph} from $n$ pairs of data $(\bm{x}_{1}, \bm{y}_{1}), \ldots, (\bm{x}_{n}, \bm{y}_{n})$ is a challenging task. The Gaussian chain graph model in Equation~\eqref{eq:cg_model} translates the discrete graph estimation problem into a much more tractable continuous parameter estimation problem. 
Specifically, the model introduces two matrices, $\Psi$ and $\Omega$, and asserts that $\bm{y} \vert \Psi, \Omega, \bm{x} \sim \mathcal{N}(\Omega^{-1}\Psi^{\top}\bm{x}, \Omega^{-1}).$ Under the Gaussian chain graph model, $X_{j}$ and $Y_{k}$ are conditionally independent if and only if $\psi_{j,k} = 0.$ Furthermore, $Y_{k}$ and $Y_{k'}$ are conditionally independent if and only if $\omega_{k,k'} = 0.$ In other words, by first estimating $\Psi$ and $\Omega$ and then examining their supports, we can recover the underlying graphical model. Figure~\ref{fig:labelled_graph} reproduces the cartoon from Figure~\ref{fig:unlabelled_graph} with edges labelled by the corresponding non-zero parameters in Equation~\eqref{eq:cg_model}. In the Gaussian chain graph model, the direct effect of $X_{j}$ on $Y_{k}$ is defined as $$ \mathbb{E}[Y_{k} \vert X_{j} = x_{j} + 1, Y_{-k}, X_{-j}, \Psi, \Omega] - \mathbb{E}[Y_{k} \vert X_{j} = x_{j}, Y_{-k}, X_{-j}, \Psi, \Omega] = -\psi_{j,k}/\omega_{k,k}. $$ That is, fixing the values of all of the other covariates and all of the other outcomes, an increase of one unit in $X_{j}$ is associated with a $-\psi_{j,k}/\omega_{k,k}$ unit increase in the expectation of $Y_{k}$. Notice that the direct effect of $X_{j}$ on $Y_{k}$ is defined conditionally on the values of all other outcomes $Y_{k'}.$ Because of this, the direct effect of $X_{j}$ on $Y_{k}$ is typically not equal to its \textit{marginal} effect, which is defined as $$ \mathbb{E}[Y_{k} \vert X_{j} = x_{j} + 1, X_{-j}, \Psi, \Omega] - \mathbb{E}[Y_{k} \vert X_{j} = x_{j}, X_{-j},\Psi, \Omega] = \beta_{j,k}, $$ where $\beta_{j,k}$ is the $(j,k)$ entry of the matrix $B = \Psi\Omega^{-1}.$ Notice that we can re-parametrize the Gaussian chain graph model in Equation~\eqref{eq:cg_model} in terms of $B$: \begin{equation} \label{eq:marginal_model} \bm{y} \vert B, \Omega, \bm{x} \sim \mathcal{N}(B^{\top}\bm{x}, \Omega^{-1}).
\end{equation} We will refer to this re-parametrized model as the marginal regression model. There is a considerable literature on fitting sparse marginal regression models, and we refer the reader to \cite{Deshpande2019} and references therein for a review. Generally speaking, under~\eqref{eq:cg_model}, the supports of $\Psi$ and $B$ will be different. Specifically, it is possible for $X_{j}$ to have a marginal effect but no direct effect on $Y_{k}.$ For instance, in Figure~\ref{fig:graph_cartoon}, although $X_{3}$ does not directly affect $Y_{2},$ it may still be marginally correlated with $Y_{2}$ thanks to the conditional correlation between $Y_{2}$ and $Y_{3}.$ That is, changing the value of $X_{3}$ can change the value of $Y_{3},$ which in turn changes the value of $Y_{2}.$ Consequently, if we fit a sparse marginal regression model, we cannot generally expect to recover sparse estimates of the matrix of direct effects. \subsection{Related works} \label{sec:related_work} \textbf{Learning sparse chain graphs}. \citet{McCarter2014} proposed fitting sparse Gaussian chain graphical models by minimizing a penalized negative log-likelihood. They specifically proposed homogeneous $L_1$ penalties on the entries of $\Psi$ and $\Omega$ and used cross-validation to set the penalty parameters for $\Psi$ and $\Omega.$ \citet{shen2021bayesian} developed a Bayesian version of that chain graphical LASSO and put a Gamma prior on the penalty parameters. In this way, they automatically learned the degree to which the entries $\psi_{j,k}$ and $\omega_{k,k'}$ are shrunk to zero.
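The gap between direct and marginal effects described above is easy to see numerically. The following sketch (Python with numpy, using made-up parameter values) builds a $\Psi$ in which $X_{3}$ has no direct effect on $Y_{2}$ and an $\Omega$ with a $Y_{2}$--$Y_{3}$ edge, and checks that the marginal effect $\beta_{3,2}$ is nonetheless non-zero:

```python
import numpy as np

# Illustrative parameters with p = q = 3 (values are made up).
Psi = np.eye(3)                      # each X_j directly affects only Y_j
Omega = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.5],   # Y_2 and Y_3 are conditionally dependent
                  [0.0, 0.5, 2.0]])
B = Psi @ np.linalg.inv(Omega)       # matrix of marginal effects, B = Psi Omega^{-1}

# psi_{3,2} = 0: X_3 has no direct effect on Y_2 ...
assert Psi[2, 1] == 0.0
# ... but the marginal effect beta_{3,2} is non-zero, via the Y_2 -- Y_3 edge.
assert abs(B[2, 1]) > 1e-10
```

A sparse fit of the marginal model~\eqref{eq:marginal_model} would therefore estimate a denser support than the direct-effect matrix $\Psi$.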
Although these papers differ in how they determine the appropriate amount of penalization, both \citet{McCarter2014} and \citet{shen2021bayesian} deploy a single fixed penalty on all of the entries in $\Psi$ and a single fixed penalty on all entries in $\Omega.$ With such fixed penalties, larger parameter estimates are shrunk towards zero as aggressively as the smaller parameter estimates, which can introduce substantial estimation bias. \textbf{Spike-and-slab variable selection with the EM algorithm}. Spike-and-slab priors are the workhorses of sparse Bayesian modeling. As introduced by \citet{mitchell1988bayesian}, the spike-and-slab prior is a mixture of a point mass at 0 (the ``spike'') and a uniform distribution over a wide interval (the ``slab''). \citet{george1993variable} introduced a continuous relaxation of the original spike-and-slab prior, respectively replacing the point mass spike and uniform slab distributions with zero-mean Gaussians with extremely small and large variances. In this way, one may imagine generating all of the ``essentially negligible'' parameters in a model from the spike distribution and generating all of the ``relevant'' or ``significant'' parameters from the slab distribution. Despite their intuitive appeal, spike-and-slab priors usually produce extremely multimodal posterior distributions. In high dimensions, exploring these distributions with Markov chain Monte Carlo (MCMC) is computationally prohibitive. In response, \citet{RockovaGeorge2014_emvs} introduced EMVS, a fast EM algorithm targeting the \textit{maximum a posteriori} (MAP) estimate of the regression parameters. They later extended EMVS, which used conditionally conjugate Gaussian spike and slab distributions, to use Laplacian spike and slab distributions in \citet{RockovaGeorge2018_ssl}. The resulting spike-and-slab LASSO (SSL) procedure demonstrated excellent empirical performance.
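The mechanics of the continuous relaxation can be illustrated with a short Python sketch: under a two-Gaussian mixture, the conditional probability that a coefficient was generated by the slab cleanly separates negligible from relevant values. The hyperparameter values below are purely illustrative, not taken from the cited papers:

```python
import numpy as np

def normal_pdf(b, sd):
    return np.exp(-0.5 * (b / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def inclusion_prob(beta, eta=0.5, sd_spike=0.01, sd_slab=1.0):
    """Conditional probability that beta came from the slab component under a
    two-Gaussian spike-and-slab mixture (illustrative hyperparameter values)."""
    slab = eta * normal_pdf(beta, sd_slab)
    spike = (1.0 - eta) * normal_pdf(beta, sd_spike)
    return slab / (slab + spike)

# A tiny coefficient looks like spike noise; a large one looks relevant.
assert inclusion_prob(0.005) < 0.1
assert inclusion_prob(0.5) > 0.9
```

In EMVS-style algorithms, conditional probabilities of this form are computed in the E-step and determine how strongly each coefficient is shrunk in the subsequent M-step.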
At a high level, the SSL algorithm solves a sequence of $L_{1}$ penalized regression problems with \textit{self-adaptive} penalties. The adaptive penalty mixing is key to the empirical success of the SSL \citep{GeorgeRockova2020_penaltymix, Bai2020_ssl_review}, as it facilitates shrinking larger parameter estimates to zero less aggressively than smaller parameter estimates. Since \citet{RockovaGeorge2014_emvs}, the general EM technique for maximizing spike-and-slab posteriors has been successfully applied to many problems. For instance, \citet{Bai2020_groupSSL} introduced a grouped version of the SSL that adaptively shrinks groups of parameter values towards zero. \citet{Tang2017_ssl_glm, Tang2018_glm_groupSSL} similarly deployed the SSL and its grouped variant to generalized linear models. Outside of the single-outcome regression context, continuous spike-and-slab priors have been used to estimate sparse Gaussian graphical models \citep{Li2019_ssglasso, Gan2019_unequal, Gan2019_multiple}, sparse factor models \citep{RockovaGeorge2016_factor}, and biclustering models \citep{Moran2021_biclustering}. \citet{Deshpande2019} introduced a multivariate SSL for estimating $B$ and $\Omega$ in the marginal regression model in Equation~\eqref{eq:marginal_model}. In each extension, the adaptive penalization performed by the EM algorithm resulted in support recovery and parameter estimation superior to that of fixed penalty methods. \textbf{The asymptotics of spike-and-slab variable selection}. Beyond its excellent empirical performance, \citet{RockovaGeorge2018_ssl}'s SSL enjoys strong theoretical support.
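The adaptive penalty mixing can be made concrete. With a Laplace spike of rate $\lambda_0$ and a Laplace slab of rate $\lambda_1 \ll \lambda_0$, the effective SSL penalty rate is a probability-weighted average of the two, so large estimates are penalized at roughly $\lambda_1$ while small ones are penalized at roughly $\lambda_0$. A Python sketch with illustrative hyperparameter values (not those used in the cited papers):

```python
import numpy as np

def adaptive_penalty(beta, eta=0.5, lam0=20.0, lam1=0.1):
    """Effective SSL penalty rate lam1 * pstar + lam0 * (1 - pstar), where
    pstar is the conditional slab probability (illustrative hyperparameters)."""
    slab = eta * (lam1 / 2.0) * np.exp(-lam1 * abs(beta))
    spike = (1.0 - eta) * (lam0 / 2.0) * np.exp(-lam0 * abs(beta))
    pstar = slab / (slab + spike)
    return lam1 * pstar + lam0 * (1.0 - pstar)

# Small estimates are penalized near lam0; large estimates near lam1.
assert adaptive_penalty(0.01) > 10.0
assert adaptive_penalty(2.0) < 1.0
```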
Using general techniques proposed by \citet{Zhang2012} and \citet{ghosal2017fundamentals}, they proved that, under mild conditions, the posterior induced by the SSL prior in high-dimensional, single-outcome linear regression contracts at a near minimax-optimal rate as $n \rightarrow \infty.$ Their contraction result implies that the MAP estimate returned by their EM algorithm is consistent and is, up to a log factor, rate-optimal. By directly applying \citet{ghosal2017fundamentals}'s general theory, \citet{Bai2020_groupSSL} extended these results to the group SSL posterior with an unknown variance. In the context of Gaussian graphical models, \citet{Gan2019_unequal} showed that the MAP estimator corresponding to placing spike-and-slab LASSO priors on the off-diagonal elements of a precision matrix is consistent. They did not, however, establish the contraction rate of the posterior. \citet{Ning2020} showed that the joint posterior distribution of $(B,\Omega)$ in the multivariate regression model in Equation~\eqref{eq:marginal_model} concentrates when using a group spike-and-slab prior with a Laplace slab and point mass spike on $B$ and a carefully selected prior on the eigendecomposition of $\Omega^{-1}.$ To the best of our knowledge, however, the asymptotic properties of the posterior formed by placing SSL priors on both the precision matrix $\Omega$ and the regression coefficients $\Psi$ in Equation~\eqref{eq:cg_model} have not yet been established. \subsection{Newton Direction} We now consider the coordinate descent update for the variable $\omega_{k,k'}$ for $k \leq k'.$ Let $D$ denote the current approximation of the Newton direction and let $D'$ be the updated direction.
To preserve symmetry, we set $D'= D+\mu(e_{k}e_{k'}^{\top}+e_{k'}e_{k}^{\top}).$ Our goal, then, is to find the optimal $\mu$: \begin{equation} \argmin_\mu \{\bar{g}(D+\mu(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top} ))+2\xi_{k,k'}|\Omega_{k,k'}+D_{k,k'}+\mu|\} \end{equation} We begin by substituting $\Delta = D'$ into $\bar{g}(\Delta).$ Note that terms not depending on $\mu$ do not affect the line search. Compared to QUIC, we have two additional terms, $\tr(WMW\Delta)$ and $\tr(WMW\Delta W \Delta).$ The first term turns out to be linear in $\mu$ and the second is quadratic in $\mu.$ To see this, first observe \begin{align} \begin{split} -\tr(WMW\Delta )&=-\tr(WMW(D+\mu(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top} )))\\ &=C-\mu\tr(WMWe_{k}e_{k'}^{\top}+WMWe_{k'}e_{k}^{\top})\\ &=C-\mu\tr(e_{k'}^{\top}WMWe_{k}+e_{k}^{\top}WMWe_{k'})\\ &=C-2\mu e_{k}^{\top}WMWe_{k'}\\ &=C-2\mu w_k^{\top}Mw_{k'} \end{split} \end{align} where $w_k$ is the $k^{\text{th}}$ column of $W = \Omega^{-1}$. Furthermore, we have \begin{align} \begin{split} \tr(WMW\Delta W\Delta)&=\tr[WMW(D+\mu(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top} )) W(D+\mu(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top} ))]\\ &=\tr[DWMWDW+2\mu DWMW(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})W] \\ &~~~+ \tr[\mu^2(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})WMW(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})W]\\ ~ & ~ \\ &=C+\tr[2\mu DWMW(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})W] \\ &~~~+\tr[\mu^2(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})WMW(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})W]\\ ~ & ~ \\ &=C+2\mu w_{k'}^{\top} DWM w_k+2\mu w_k^{\top} DWM w_{k'} \\ &~~~+\mu^2\tr[(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})WMW(e_{k}e_{k'}^{\top} +e_{k'}e_{k}^{\top})W]\\ ~ & ~ \\ &=C+2\mu (w_{k'}^{\top} DWM w_k+ w_k^{\top} DWM w_{k'}) \\ &~~~+\mu^2(W_{k,k}w_{k'}^{\top}Mw_{k'}+W_{k',k'}w_k^{\top}Mw_k+2W_{k,k'}w_k^{\top}Mw_{k'}) \end{split} \end{align} By combining the above simplifications, we can minimize the objective with coordinate descent.
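Both trace simplifications above can be spot-checked numerically. The following Python sketch draws random symmetric positive definite matrices to stand in for $W = \Omega^{-1}$ and $M$, and verifies the linear and quadratic coefficients in $\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 5
A = rng.standard_normal((q, q))
W = A @ A.T + q * np.eye(q)        # stands in for W = Omega^{-1}
C = rng.standard_normal((q, q))
M = C @ C.T                        # stands in for M (positive semi-definite)
k, kp, mu = 1, 3, 0.7
E = np.zeros((q, q)); E[k, kp] = E[kp, k] = 1.0
w_k, w_kp = W[:, k], W[:, kp]      # w_k is the k-th column of W

# Linear coefficient: tr(WMW (mu E)) = 2 mu w_k^T M w_k'
lin_direct = np.trace(W @ M @ W @ (mu * E))
lin_closed = 2.0 * mu * w_k @ M @ w_kp
assert np.isclose(lin_direct, lin_closed)

# Quadratic coefficient: tr(WMW E W E) matches the closed form above
quad_direct = np.trace(W @ M @ W @ E @ W @ E)
quad_closed = (W[k, k] * w_kp @ M @ w_kp
               + W[kp, kp] * w_k @ M @ w_k
               + 2.0 * W[k, kp] * w_k @ M @ w_kp)
assert np.isclose(quad_direct, quad_closed)
```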
The update for $\omega_{k,k'}$ minimizes: \begin{align} \begin{split} &\frac{1}{2}[W_{k,k'}^2+W_{k,k}W_{k',k'}+W_{k,k}w_{k'}^{\top}Mw_{k'}+W_{k',k'}w_k^{\top}Mw_k+2W_{k,k'}w_k^{\top}Mw_{k'}]\mu^2\\ +&[S_{k,k'}-W_{k,k'}+w_k^{\top}Dw_{k'}-w_k^{\top}Mw_{k'}+w_{k'}^{\top} DWM w_k+ w_k^{\top} DWM w_{k'}]\mu\\ +&\xi_{k,k'}|\Omega_{k,k'}+D_{k,k'}+\mu| \end{split} \end{align} The optimal solution (for off-diagonal $\omega_{k,k'}$) is given by \begin{equation} \mu=-c+[\lvert c-b/a \rvert-\xi_{k,k'}/a]_+\sign(c-b/a) \label{eqn:newton_mu} \end{equation} where \begin{align*} a&=W_{k,k'}^2+W_{k,k}W_{k',k'}+W_{k,k}w_{k'}^{\top}Mw_{k'}+W_{k',k'}w_k^{\top}Mw_k+2W_{k,k'}w_k^{\top}Mw_{k'}\\ b&=S_{k,k'}-W_{k,k'}+w_k^{\top}Dw_{k'}-w_k^{\top}Mw_{k'}+w_{k'}^{\top} DWM w_k+ w_k^{\top} DWM w_{k'}\\ c&=\omega_{k,k'}+D_{k,k'} \end{align*} For diagonal entries, we take $D'=D+\mu e_{k}e_{k}^{\top}$; the two additional terms are then: \begin{equation} \begin{aligned} -\tr(WMW\Delta )&= C-\mu w_k^{\top}Mw_k\\ \tr(WMW\Delta W\Delta)&=C+2\mu w_k^{\top}DWMw_k+\mu^2 W_{k,k}w_k^{\top}Mw_k \end{aligned} \end{equation} Then we can take \begin{align*} a&=W_{k,k}^2+2W_{k,k}w_k^{\top}Mw_k\\ b&=S_{k,k}-W_{k,k}+w_k^{\top}Dw_k-w_k^{\top}Mw_k+2 w_k^{\top}DWMw_k\\ c&=\omega_{k,k}+D_{k,k} \end{align*} and use Equation~\eqref{eqn:newton_mu} to obtain the optimal $\mu$ and thus the updated Newton direction $D'$. Note that computing the optimal $\mu$ requires repeated calculation of quantities like $w_k^{\top}Mw_{k'}$ and $w_k^{\top}UMw_{k'}.$ To enable rapid computation, we track and update the values of $U = DW$ and $Q = MW$ during our optimization. \subsection{Step Size} Like \citet{Hsieh2011}, we use Armijo's rule to set a step size $\alpha$ that simultaneously ensures that our estimate of $\Omega$ remains positive definite and that the overall objective function sufficiently decreases.
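The closed form in Equation~\eqref{eqn:newton_mu} is a soft-thresholding step: the scalar problem $\min_\mu \tfrac{1}{2}a\mu^2 + b\mu + \xi_{k,k'}|c+\mu|$ admits the stated minimizer, which can be checked against a dense grid search. A small Python sketch with arbitrary coefficient values:

```python
import numpy as np

def optimal_mu(a, b, c, xi):
    """Closed-form minimizer of 0.5*a*mu^2 + b*mu + xi*|c + mu| for a, xi > 0."""
    z = c - b / a
    return -c + np.sign(z) * max(abs(z) - xi / a, 0.0)

# Compare against a dense grid search (arbitrary illustrative coefficients).
a, b, c, xi = 2.0, 0.3, -0.4, 0.5
f = lambda mu: 0.5 * a * mu**2 + b * mu + xi * np.abs(c + mu)
grid = np.linspace(-3.0, 3.0, 120001)
mu_star = optimal_mu(a, b, c, xi)
assert f(mu_star) <= f(grid).min() + 1e-8
```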
We denote the Newton direction after a complete update over all active coordinates as $D^*$ (see Appendix \ref{supp:active_set_cgquic} for the definition of the active sets). We require our step size to satisfy the line search condition~\eqref{eqn:decrease_cond}. \begin{equation} f(\Omega+\alpha D^*)\le f(\Omega)+\alpha \sigma\delta, \quad \delta = \tr[\nabla g(\Omega)^{\top} D^*] + ||\Omega+D^*||_{1,\Xi}-||\Omega||_{1,\Xi} \label{eqn:decrease_cond} \end{equation} Three important properties can be established following \citet{Hsieh2011}: \begin{itemize} \item[P1.] The condition is satisfied for all small enough $\alpha$. This property follows exactly from Proposition 1 of \citet{Hsieh2011}. \item[P2.] We have $\delta<0$ for all $\Omega \succ 0$, which ensures that the objective function decreases. This property follows from Lemma 2 and Proposition 2 of \citet{Hsieh2011}, which require the Hessian of the smooth part $g(\Omega)$ to be positive definite. In our case, the Hessian of $g(\Omega)$ is the Fisher information of the chain graph model, which ensures its positive definiteness. \item[P3.] When $\Omega$ is close to the global optimum, the step size $\alpha=1$ will satisfy the line search condition. To establish this, we follow the proof of Proposition 3 in \citet{Hsieh2011}. \end{itemize} \subsection{Thresholding to Decide the Active Sets} \label{supp:active_set_cgquic} Similar to the QUIC procedure, our algorithm does not need to update every $\omega_{k,k'}$ in each iteration. We instead follow \citet{Hsieh2011} and only update those parameters exceeding a certain threshold. More specifically, we can partition the parameters $\omega_{k,k'}$ into a fixed set $S_{\text{fixed}},$ containing those parameters falling below the threshold, and a free set $S_{\text{free}},$ containing those parameters exceeding the threshold.
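A backtracking search satisfying condition~\eqref{eqn:decrease_cond} can be sketched in a few lines of Python, with a Cholesky factorization doubling as the positive-definiteness check. This is a generic illustration (the objective, $\sigma$, and shrinkage factor below are illustrative choices), not the exact cgQUIC implementation:

```python
import numpy as np

def armijo_step(f, Omega, D, delta, sigma=0.25, shrink=0.5, max_iter=60):
    """Backtracking: shrink alpha until Omega + alpha*D is positive definite
    and f decreases by at least sigma*alpha*delta (delta < 0)."""
    alpha, f0 = 1.0, f(Omega)
    for _ in range(max_iter):
        cand = Omega + alpha * D
        try:
            np.linalg.cholesky(cand)          # positive-definiteness check
            if f(cand) <= f0 + alpha * sigma * delta:
                return alpha
        except np.linalg.LinAlgError:
            pass
        alpha *= shrink
    raise RuntimeError("line search failed")

# Toy smooth objective f(Omega) = -log det(Omega) + tr(S Omega) with S = 2I.
S = 2.0 * np.eye(2)
f = lambda Om: -np.log(np.linalg.det(Om)) + np.trace(S @ Om)
# At Omega = I, the negative gradient is D = Omega^{-1} - S = -I, delta = -2.
alpha = armijo_step(f, np.eye(2), -1.0 * np.eye(2), delta=-2.0)
```

In this toy run, $\alpha=1$ is rejected because $\Omega + D$ is singular, and the search settles on a smaller, PD-preserving step.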
That is \begin{equation} \begin{aligned} \omega_{k,k'}&\in S_{\text{fixed}} \text{ if } |\nabla_{k,k'}g(\Omega)|\le \xi_{k,k'}\text{ and } \omega_{k,k'}=0\\ \omega_{k,k'}&\in S_{\text{free}} \text{ otherwise} \end{aligned} \end{equation} We can determine the free set $S_{\text{free}}$ using the minimum-norm sub-gradient $\text{grad}_{k,k'}^Sf(\Omega),$ which is defined in Definition 2 of \citet{Hsieh2011}. In our case $\nabla g(\Omega)=S-\Omega^{-1}-\Omega^{-1}M\Omega^{-1}$, so the minimum-norm sub-gradient is \begin{equation} \text{grad}_{k,k'}^Sf(\Omega)= \begin{cases} (S-\Omega^{-1}-\Omega^{-1}M\Omega^{-1})_{k,k'}+\xi_{k,k'} & \text{if } \omega_{k,k'}>0\\ (S-\Omega^{-1}-\Omega^{-1}M\Omega^{-1})_{k,k'}-\xi_{k,k'} & \text{if } \omega_{k,k'}<0\\ \sign((S-\Omega^{-1}-\Omega^{-1}M\Omega^{-1})_{k,k'})[\lvert(S-\Omega^{-1}-\Omega^{-1}M\Omega^{-1})_{k,k'}\rvert-\xi_{k,k'}]_+ & \text{if } \omega_{k,k'}=0\\ \end{cases} \end{equation} Note that the sub-gradient evaluated on the fixed set is always equal to 0. Thus, following Lemma 4 in \citet{Hsieh2011}, the elements of the fixed set do not change during our coordinate descent procedure. It suffices, then, to only compute the Newton direction on the free set and update those parameters. \subsection{Unique Minimizer} \label{sec:cgquic_proof} In this subsection, we show that the CGLASSO problem admits a unique minimizer. Our proof largely follows the proofs of Lemma 3 and Theorem 1 of \citet{Hsieh2011} but makes suitable modifications to account for the extra $\tr(M\Omega^{-1})$ term in the CGLASSO objective. \begin{myTheorem}[Unique minimizer] There is a unique global minimizer for the CGLASSO problem~\eqref{eqn:mod_glasso}. \label{thm:cgquic_unique} \end{myTheorem} We first show that the entire sequence of iterates $\{\Omega_{t}\}$ lies in a particular compact level set. To this end, let \begin{equation} \label{eqn:level_set} U=\{\Omega|f(\Omega)\le f(\Omega_0),\Omega\in S_{++}^q\}.
\end{equation} To see that all iterates lie in $U$, we need to check that the line search condition in Equation~\eqref{eqn:decrease_cond} has $\delta<0$. By directly applying \citet{Hsieh2011}'s Lemma 2 to $g(\Omega)$, we have that \begin{align*} \delta\le -\vect(D^*)^\top \nabla^2g(\Omega)\vect(D^*) \end{align*} where $D^*$ is the Newton direction. Since $g(\Omega)$ (Equation~\eqref{eq:cglasso_smooth_part}) is convex, $\nabla^2g(\Omega)$ is positive definite, so the function value $f(\Omega_t)$ is always decreasing. We now need to check that the level set is actually contained in a compact set, by suitably adapting Lemma 3 of \citet{Hsieh2011}. \begin{myLemma} \label{lem:level_set} The level set $U$ defined in~\eqref{eqn:level_set} is contained in the set $\{mI\le \Omega \le NI\}$ for some constants $m,N > 0$, if we assume that the off-diagonal elements of $\Xi$ and the diagonal elements of $S$ are positive. \end{myLemma} \begin{proof} We begin by showing that the largest eigenvalue of $\Omega$ is bounded by some constant that does not depend on $\Omega.$ Recall that $S$ and $M$ are positive semi-definite. Since $\Omega$ is positive definite, we have $\tr(S\Omega)+\tr(M\Omega^{-1})\ge 0$ and $||\Omega||_{1,\Xi}+\tr(M\Omega^{-1})\ge 0$. Therefore we have \begin{equation} \label{eqn:level_inequal} \begin{aligned} f(\Omega_0)&>f(\Omega)\ge -\log(\lvert\Omega\rvert) +||\Omega||_{1,\Xi}\\ f(\Omega_0)&>f(\Omega)\ge -\log(\lvert\Omega\rvert) +\tr(S\Omega) \end{aligned} \end{equation} Since $||\Omega||_2$ is the largest eigenvalue of $\Omega$, we have $\log(\lvert\Omega\rvert)\le q\log(||\Omega||_2)$.
Using the assumption that the off-diagonal entries of $\Xi$ are larger than some positive number $\xi,$ we know that \begin{equation} \label{eqn:inequal_Lambda} \xi\sum_{k\ne k'}|\Omega_{k,k'}|\le ||\Omega||_{1,\Xi}\le f(\Omega_0)+q\log(||\Omega||_2) \end{equation} Similarly, we have \begin{equation} \label{eqn:inequal_S} \tr(S\Omega)\le f(\Omega_0)+q\log(||\Omega||_2) \end{equation} Let $\alpha=\min_k(S_{k,k})$ and $\beta = \max_{k\ne k'} |S_{k,k'}|$. We can split $\tr(S\Omega)$ into two parts, which can be lower bounded: \begin{equation} \label{eqn:trSX} \tr(S\Omega)=\sum_k S_{k,k}\Omega_{k,k}+\sum_{k\ne k'}S_{k,k'}\Omega_{k,k'}\ge \alpha \tr(\Omega)-\beta\sum_{k\ne k'} |\Omega_{k,k'}| \end{equation} Since $||\Omega||_2\le \tr(\Omega)$, by using Equation~\eqref{eqn:trSX}, we have \begin{equation} \label{eq:Omega_2_bound} \alpha ||\Omega||_2\le \alpha \tr(\Omega) \le \tr(\Omega S)+\beta\sum_{k\ne k'} |\Omega_{k,k'}| \end{equation} By combining Equations~\eqref{eqn:inequal_Lambda},~\eqref{eqn:inequal_S}, and~\eqref{eq:Omega_2_bound}, we conclude that \begin{equation} \alpha||\Omega||_2\le (1+\beta/\xi)(f(\Omega_0)+q\log(||\Omega||_2)) \end{equation} The left hand side, as a function of $||\Omega||_2$, grows much faster than the right hand side. Thus $||\Omega||_2$ can be bounded by a quantity $N$ depending only on the values of $f(\Omega_0)$, $\alpha$, $\beta$, and $\xi$. We now consider the smallest eigenvalue, denoted by $a$. We use the upper bound $N$ on the other eigenvalues to bound the determinant. Using the fact that $f(\Omega)$ is always decreasing over the iterations, we have \begin{equation} f(\Omega_0)>f(\Omega)>-\log(\lvert\Omega\rvert)\ge -\log(a)-(q-1)\log(N) \end{equation} Thus $m=e^{-f(\Omega_0)}N^{-(q-1)}$ is a lower bound for the smallest eigenvalue $a$. \end{proof} We are now ready to prove Theorem \ref{thm:cgquic_unique}, by showing the objective function is strongly convex on a compact set.
\begin{proof} Because of Lemma~\ref{lem:level_set}, the level set $U$ contains all iterates produced by cgQUIC. The set $U$ is further contained in the compact set $\{mI\le \Omega\le N I\}$. By the Weierstrass extreme value theorem, the continuous function $f(\Omega)$ attains its minimum on this set. Further, the smooth part of the modified objective function is strongly convex on this set. This is because $\tr(M\Omega^{-1})$ and $\tr(S\Omega)$ are convex and $-\log(\lvert\Omega\rvert)$ is strongly convex on $\{mI\le \Omega\le N I\}$. Since $\tr(M\Omega^{-1})$ is convex, the Hessian of the smooth part has the same lower bound as in Theorem 1 of \citet{Hsieh2011}. By following the argument in the proof of Theorem 1 of \citet{Hsieh2011}, we can show that the objective function $f(\Omega)$ is strongly convex on the compact set $\{mI\le \Omega\le N I\}$, and thus has a unique minimizer. \end{proof} We can further show that the cgQUIC procedure converges to the unique minimizer, using the general results on quadratic approximation methods studied in \citet{Hsieh2011}. \begin{myTheorem}[Convergence] \label{thm:cgquic_convergence} cgQUIC converges to the global optimum. \end{myTheorem} \begin{proof} cgQUIC is an example of the quadratic approximation methods investigated in Section 4.1 of \citet{Hsieh2011}, with a strongly convex smooth part $g(\Omega)$ in~\eqref{eq:cglasso_smooth_part}. Convergence to the global optimum follows from their Theorem 2. \end{proof} \subsection{The Kullback-Leibler condition} We need to verify that our prior places enough probability in small neighborhoods around each of the possible values of the true parameters. These neighborhoods are defined in a Kullback-Leibler (KL) sense.
\begin{myLemma}[KL conditions] Let $ \epsilon_n=\sqrt{\max\{p,q,s_0^\Omega,s_0^\Psi\}\log(\max\{p,q\})/n}.$ Then for all true parameters $(\Psi_{0},\Omega_{0})$ we have \label{lemma:KL_cg} \begin{align*} -\log\Pi\left[(\Psi,\Omega):K(f_0,f)\le n\epsilon_n^2, V(f_0,f)\le n\epsilon_n^2\right]\le C_1n\epsilon_n^2 \end{align*} Further, let $E_{n}$ be the event $$ E_n=\left\{Y:\iint f/f_0\,d\Pi(\Psi)\,d\Pi(\Omega)\ge e^{-C_1n\epsilon_n^2}\right\}. $$ Then for all $(\Psi_{0}, \Omega_{0}),$ we have $\mathbb{P}_{0}(E_{n}^{c}) \to 0$ as $n \to \infty.$ \end{myLemma} The last assertion that $\mathbb{P}_{0}(E_{n}^{c})\to 0$ follows from Lemma 8.1 of \citet{ghosal2017fundamentals}, so we now focus on establishing the first assertion of the lemma. To verify this condition, we need to bound the prior mass of certain events $A.$ However, the truncation of the prior on $\Omega$ makes computing these masses intractable. To overcome this, we first bound the prior probability of events of the form $A\cap \{\Omega\succ \tau I\}$ by observing that the prior on $\Omega$ can be viewed as a particular conditional distribution. Specifically, let $\tilde{\Pi}$ be the untruncated spike-and-slab LASSO prior with density $$ \tilde{f}(\Omega) = \prod_{k>k'}\left[\frac{(1-\eta) \xi_0}{2}\exp\left(-\xi_0|\omega_{k,k'}|\right)+\frac{\eta\xi_1}{2}\exp\left(-\xi_1|\omega_{k,k'}|\right)\right] \times \prod_{k}\xi\exp\left(-\xi\omega_{k,k}\right). $$ The following lemma shows that we can bound $\Pi$ probabilities using $\tilde{\Pi}$ probabilities.
\begin{myLemma}[Bounds of the graphical prior] Let $\tilde{\Pi}$ be the untruncated version of the prior on $\Omega.$ Then for all events $A,$ for large enough $n$ there is a number $R$ that does not depend on $n$ such that \begin{equation} \label{eqn:graphical_prior_bound} \tilde{\Pi}(\Omega\succ \tau I|A)\tilde{\Pi}(A)\le \Pi_\Omega(A\cap \{\Omega\succ \tau I\})\le \exp(2 \xi Q-\log(R))\tilde{\Pi}(A) \end{equation} where $Q=q(q-1)/2$ is the total number of free off-diagonal entries in $\Omega.$ \end{myLemma} \begin{proof} Consider an event of form $A\cap \{\Omega\succ \tau I\} \subset \mathbb{R}^{q\times q}$. The prior mass $\Pi_\Omega(A\cap \{\Omega\succ \tau I\})$ can be viewed as a conditional probability: \begin{equation} \label{eqn:graphical_prior_cond_exp} \Pi_\Omega(A\cap \{\Omega\succ \tau I\})=\tilde{\Pi}(A|\Omega\succ \tau I)=\frac{\tilde{\Pi}(\Omega\succ \tau I|A)\tilde{\Pi}(A)}{\tilde{\Pi}(\Omega\succ \tau I)} \end{equation} The lower bound follows because the denominator is bounded from above by 1. For the upper bound, we first observe that \begin{equation} \Pi_\Omega(A\cap \{\Omega\succ \tau I\})=\tilde{\Pi}(A|\Omega\succ \tau I)=\frac{\tilde{\Pi}(\Omega\succ \tau I|A)\tilde{\Pi}(A)}{\tilde{\Pi}(\Omega\succ \tau I)}\le (\tilde{\Pi}(\Omega\succ \tau I))^{-1} \tilde{\Pi}(A) \end{equation} To upper bound the probability in Equation~\eqref{eqn:graphical_prior_cond_exp}, we find a lower bound of the denominator $\tilde{\Pi}(\Omega\succ \tau I)$. To this end, let $$ \mathcal{G}=\left\{\Omega: \omega_{k,k}> q-1, |\omega_{k,k'}|\le 1-\frac{\tau}{q-1} \text{ for } k'\ne k\right\} $$ and consider an $\Omega \in \mathcal{G}.$ Since all of $\Omega$'s eigenvalues are real, they must each be contained in at least one Gershgorin disc. 
Consider the $k^{\text{th}}$ Gershgorin disc, whose intersection with the real line is an interval centered at $\omega_{k,k}$ with half-width $\sum_{k' \neq k}{\lvert \omega_{k,k'}\rvert}.$ Any eigenvalue of $\Omega$ that lies in this disc must be greater than $$ \omega_{k,k} - \sum_{k' \neq k}{\lvert \omega_{k,k'}\rvert} > q-1 - (q - 1 - \tau) = \tau $$ Thus, we have $\mathcal{G} \subset \left\{\Omega \succ \tau I \right\}.$ Since the entries of $\Omega$ are independent under $\tilde{\Pi}$, we compute \begin{equation} \label{eqn:graphical_prior_lower_bound_PDcone} \begin{aligned} \tilde{\Pi}(\mathcal{G}) &\ge\prod_k \int_{q-1}^\infty \xi \exp(-\xi \omega_{k,k})d\omega_{k,k}(1-\eta)^{Q}\prod_{k>k'}\int_{|\omega_{k,k'}|\le 1-\frac{\tau}{q-1}}\frac{\xi_0}{2}\exp(-\xi_0|\omega_{k,k'}|)d\omega_{k,k'}\\ &\ge \exp(-2\xi Q)(1-\eta)^Q \left[1-\frac{\mathbb{E}|\omega_{k,k'}|}{1-\frac{\tau}{q-1}}\right]^Q\\ &=\exp(-2\xi Q)(1-\eta)^Q\left[1-\frac{1}{\xi_0(1-\frac{\tau}{q-1})}\right]^Q\\ &\ge \exp(-2\xi Q)\left[1-\frac{1}{1+K_1Q^{2+a}}\right]^Q \left[1-\frac{1}{K_3Q^{2+b}(1-\tau)}\right]^Q\\ &\ge \exp(-2\xi Q+\log(R)),\\ \end{aligned} \end{equation} where $R>0$ does not depend on $n$. Note that the first inequality holds by ignoring the contribution to the probability from the slab distribution. The second inequality is Markov's inequality and the third inequality follows from our assumptions about how $\xi_{0}$ and $\eta$ are tuned.
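The Gershgorin argument in this proof is easy to confirm numerically: any matrix drawn from $\mathcal{G}$ has all eigenvalues above $\tau$. A quick Python check with arbitrary $q$ and $\tau$:

```python
import numpy as np

rng = np.random.default_rng(1)
q, tau = 6, 0.5
# Draw a symmetric matrix from G: diagonal > q-1, |off-diagonal| <= 1 - tau/(q-1).
Omega = rng.uniform(-1.0, 1.0, (q, q)) * (1.0 - tau / (q - 1))
Omega = (Omega + Omega.T) / 2.0
np.fill_diagonal(Omega, q - 1 + rng.uniform(0.1, 1.0, q))
# Gershgorin: every eigenvalue exceeds (q-1) - (q-1-tau) = tau.
assert np.linalg.eigvalsh(Omega).min() > tau
```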
\end{proof} Let $S_{0}^{\Psi}$ and $S_{0}^{\Omega}$ respectively denote the supports of $\Psi$ and $\Omega.$ Similarly, let $s^{\Psi}_{0}$ be the number of true non-zero entries in $\Psi_{0}$ and let $s^{\Omega}_{0}$ be the true number of non-zero off-diagonal entries in $\Omega_{0}.$ The KL divergence between a Gaussian chain graph model with parameters $(\Psi_{0},\Omega_{0})$ and one with parameters $(\Psi,\Omega)$ is \begin{equation} \begin{aligned} &\frac{1}{n}K(f_0,f)=\mathbb{E}_0 \left[\log\left(\frac{f_0}{f}\right)\right]\\ =&\frac{1}{2}\left(\log\left(\frac{|\Omega_0|}{|\Omega|}\right)-q+\tr(\Omega_0^{-1}\Omega)+\frac{1}{n}\sum_{i=1}^n||\Omega^{1/2}(\Psi\Omega^{-1}-\Psi_0\Omega_0^{-1})^{\top}X_i^{\top}||_2^2\right) \end{aligned} \end{equation} The KL variance is: \begin{equation} \begin{aligned} &\frac{1}{n}V(f_0,f)=\text{Var}_{0} \left[\log\left(\frac{f_0}{f}\right)\right]\\ =&\frac{1}{2}\left(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q \right)+\frac{1}{n}\sum_{i=1}^n||\Omega_0^{-1/2}\Omega(\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)^{\top}X_i^{\top}||_2^2 \end{aligned} \end{equation} We need to lower bound the prior probability of the event $$ \{(\Psi,\Omega):K(f_0,f)\le n\epsilon_n^2, V(f_0,f)\le n\epsilon_n^2\} $$ for large enough $n$. We first obtain upper bounds on the average KL divergence and variance, so that the prior mass of the event that these upper bounds are small can serve as a lower bound. To simplify the notation, we write $\Delta_\Psi = \Psi-\Psi_0$ and $\Delta_\Omega = \Omega-\Omega_0$. We observe that $\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0=(\Delta_\Psi-\Psi_0\Omega_0^{-1}\Delta_\Omega)\Omega^{-1}$.
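As a sanity check on the KL expression, the $q = 1$ special case (where the precision and the conditional means are scalars $\omega$ and $m$) reduces to the usual univariate Gaussian KL divergence, which can be compared against direct numerical integration. A Python sketch with illustrative parameter values:

```python
import numpy as np

# Compare the closed-form KL between N(m0, 1/omega0) and N(m, 1/omega)
# with a trapezoid-rule integral of f0 * log(f0 / f1).
omega0, omega, m0, m = 2.0, 1.5, 0.3, -0.2
t = np.linspace(-15.0, 15.0, 200001)
f0 = np.sqrt(omega0 / (2 * np.pi)) * np.exp(-0.5 * omega0 * (t - m0) ** 2)
f1 = np.sqrt(omega / (2 * np.pi)) * np.exp(-0.5 * omega * (t - m) ** 2)
y = f0 * np.log(f0 / f1)
kl_numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))   # trapezoid rule
kl_formula = 0.5 * (np.log(omega0 / omega) - 1.0
                    + omega / omega0 + omega * (m - m0) ** 2)
assert abs(kl_numeric - kl_formula) < 1e-6
```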
Using the fact that $||A-B||_2^2\le(||A||_2+||B||_2)^2\le 2||A||_2^2+2||B||_2^2$ for any two matrices $A$ and $B,$ we obtain a simple upper bound: \begin{equation} \begin{aligned} &\frac{1}{n}K(f_0,f)\\ =&\frac{1}{2}\left(\log\left(\frac{|\Omega_0|}{|\Omega|}\right)-q+\tr(\Omega_0^{-1}\Omega)+\frac{1}{n}\sum_{i=1}^n||\Omega^{-1/2}\Delta_\Psi^{\top}X_i^{\top}-\Omega^{-1/2}\Delta_ \Omega\Omega_0^{-1} \Psi_0^{\top} X_i^{\top} ||_2^2\right)\\ \le&\frac{1}{2}\left(\log\left(\frac{|\Omega_0|}{|\Omega|}\right)-q+\tr(\Omega_0^{-1}\Omega)\right)+\frac{1}{n}\sum_{i=1}^n||\Omega^{-1/2}\Delta_ \Omega\Omega_0^{-1} \Psi_0^{\top} X_i^{\top} ||_2^2\\ &+\frac{1}{n}\sum_{i=1}^n||\Omega^{-1/2}\Delta_\Psi^{\top}X_i^{\top}||_2^2\\ =&\frac{1}{2}\left(\log\left(\frac{|\Omega_0|}{|\Omega|}\right)-q+\tr(\Omega_0^{-1}\Omega)\right)+\frac{1}{n}||X\Psi_0\Omega_0^{-1}\Delta_\Omega\Omega^{-1/2}||_F^2\\ &+\frac{1}{n}||X\Delta_\Psi\Omega^{-1/2}||_F^2\\ \end{aligned} \end{equation} The last line holds because $\Omega^{-1/2}\Delta_\Psi^{\top}X_i^{\top}$ is the $i^{\text{th}}$ row of $X\Delta_\Psi\Omega^{-1/2}$. 
Using the same inequality, we derive a similar upper bound for the average KL variance: \begin{equation} \begin{aligned} &\frac{1}{n}V(f_0,f)\\ =&\frac{1}{2}\left(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q \right)+\frac{1}{n}\sum_{i=1}^n ||\Omega_0^{-1/2}\Delta_\Psi^{\top} X_i^{\top}- \Omega_0^{-1/2} \Delta_\Omega\Omega_0^{-1} \Psi_0^{\top} X_i^{\top}||_2^2\\ \le&\frac{1}{2}\left(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q \right)+\frac{2}{n}\sum_{i=1}^n || \Omega_0^{-1/2}\Delta_\Omega\Omega_0^{-1} \Psi_0^{\top} X_i^{\top}||_2^2\\ &+\frac{2}{n}\sum_{i=1}^n ||\Omega_0^{-1/2}\Delta_\Psi^{\top} X_i^{\top}||_2^2\\ =&\frac{1}{2}\left(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q \right)+\frac{2}{n}|| X\Psi_0 \Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2+\frac{2}{n}||X\Delta_\Psi\Omega_0^{-1/2}||_F^2 \end{aligned} \end{equation} Similar to \citet{Ning2020} and \citet{Bai2020_groupSSL}, we find an event $\mathcal{A}_1$ involving only $\Delta_\Omega$ and an event $\mathcal{A}_2$ involving both $\Delta_\Omega$ and $\Delta_\Psi$ such that $(\mathcal{A}_1\cap \{\Omega\succ 0\})\cap\mathcal{A}_2$ is a subset of the event of interest $\{ K/n\le \epsilon_n^2, V/n\le\epsilon_n^2 \}$.
To this end, define \begin{equation} \label{eqn:A1_due_to_Omega} \begin{aligned} \mathcal{A}_1=&\left\{ \Omega: \frac{1}{2}\left(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q \right)+\frac{2}{n}|| X\Psi_0 \Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2\le \epsilon_n^2/2\right\} \\ &\bigcap\left\{\frac{1}{2}\left(\log\left(\frac{|\Omega_0|}{|\Omega|}\right)-q+\tr(\Omega_0^{-1}\Omega)\right)+\frac{1}{n}||X\Psi_0\Omega_0^{-1}\Delta_\Omega\Omega^{-1/2}||_F^2\le \epsilon_n^2/2 \right\} \end{aligned} \end{equation} and \begin{equation} \label{eqn:A2_condition_on_A1} \begin{aligned} \mathcal{A}_2=\{ (\Omega, \Psi): &\frac{1}{n}||X\Delta_\Psi\Omega^{-1/2}||_F^2\le \frac{\epsilon_n^2}{2},\ \frac{2}{n}||X\Delta_\Psi\Omega_0^{-1/2}||_F^2 \le \frac{\epsilon_n^2}{2} \} \end{aligned} \end{equation} We separately bound the prior probabilities $\Pi(\mathcal{A}_1)$ and $\Pi(\mathcal{A}_2|\mathcal{A}_1).$ \subsubsection{Bounding the prior mass $\Pi(\mathcal{A}_1)$} The goal here is to find a proper lower bound on the prior mass of $\mathcal{A}_1$. To do this, first consider the set \begin{align*} \mathcal{A}_1^\star=\left\{2\sum_{k>k'} |\omega_{0,k,k'}-\omega_{k,k'}|+\sum_k |\omega_{0,k,k}-\omega_{k,k}|\le \frac{\epsilon_n}{c_1\sqrt{p}}\right\} \end{align*} where $c_{1} > 0$ is a constant to be specified. Since the Frobenius norm is bounded by the vectorized L1 norm, we immediately conclude that $$ \mathcal{A}_{1}^{\star} \subset \left\{ \lVert \Omega_{0} - \Omega \rVert_{F} \leq \frac{\epsilon_{n}}{c_{1}\sqrt{p}}\right\}. $$ We now show that $\left\{ \lVert \Omega_{0} - \Omega \rVert_{F} \leq \frac{\epsilon_{n}}{c_{1}\sqrt{p}} \right\}\subset \mathcal{A}_1$.
Since the Frobenius norm bounds the L2 operator norm, if $||\Omega_0-\Omega||_F\le \frac{\epsilon_n}{c_1 \sqrt{p}}$ then the absolute values of the eigenvalues of $\Omega - \Omega_{0}$ are bounded by $\frac{\epsilon_n}{c_1 \sqrt{p}}.$ Further, because we have assumed $\Omega_{0}$ has bounded spectrum, the spectrum of $\Omega=\Omega_0+\Omega-\Omega_0$ is bounded between $\lambda_{\min}-\frac{\epsilon_n}{c_1 \sqrt{p}}$ and $\lambda_{\max}+\frac{\epsilon_n}{c_1 \sqrt{p}}.$ When $n$ is large enough, these quantities are further bounded by $\lambda_{\min}/2$ and $2\lambda_{\max}.$ Thus, for $n$ large enough, if $||\Omega_0-\Omega||_F\le \frac{\epsilon_n}{c_1 \sqrt{p}},$ then we know $\Omega$ has bounded spectrum. Consequently, $\Omega^{-1/2}$ has bounded L2 operator norm. Using the fact that $||AB||_F\le \min(|||A|||_2||B||_F,|||B|||_2||A||_F)$, we have for some constant $c_{2}$ not depending on $n,$ \begin{align*} \frac{2}{n}|| X\Psi_0 \Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2&\le \frac{2}{n}|||X\Psi_0|||^2_2||\Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2\\ &\le \frac{2}{n}||X||_F^2|||\Psi_0|||_2^2||\Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2\\ &\le pc_2^2||\Delta_\Omega||_F^2, \end{align*} where we have used the fact that $||X||_F=\sqrt{np}.$ Thus $||\Delta_\Omega||_F\le \frac{\epsilon_n}{2c_2\sqrt{p}}$ implies $$\frac{2}{n}|| X\Psi_0 \Omega_0^{-1} \Delta_\Omega \Omega_0^{-1/2}||_F^2\le \epsilon_n^2/4.$$ Similarly, for some constant $c_3$, we have that \begin{align*} \frac{1}{n}||X\Psi_0\Omega_0^{-1}\Delta_\Omega\Omega^{-1/2}||_F^2&\le \frac{1}{n}||X||_F^2|||\Psi_0|||_2^2 ||\Omega_0^{-1}\Delta_\Omega\Omega^{-1/2}||_F^2\\ &\le pc_3^2||\Delta_\Omega||_F^2 \end{align*} Thus $||\Delta_\Omega||_F\le \frac{\epsilon_n}{2c_3\sqrt{p}}$ implies $\frac{1}{n}||X\Psi_0\Omega_0^{-1}\Delta_\Omega\Omega^{-1/2}||_F^2\le \epsilon_n^2/4.$ Using an argument from \citet{Ning2020}, $||\Delta_\Omega||_F\le \frac{\epsilon_n}{2b_2\sqrt{p}} \le \epsilon_n/(2b_2)$ implies the
following two inequalities \begin{align*} \frac{1}{2}(\tr((\Omega_0^{-1}\Omega)^2)-2\tr (\Omega_0^{-1}\Omega) + q ) &\le \epsilon_n^2/4 \\ \frac{1}{2}(\log(\frac{|\Omega_0|}{|\Omega|})-q+\tr(\Omega_0^{-1}\Omega)) &\le \epsilon_n^2/4. \end{align*} Thus, by taking $c_1=2\max\{c_2,c_3,b_2\}$, we can conclude that $\{||\Omega_0-\Omega||_F\le \frac{\epsilon_n}{c_1\sqrt{p}}\}\subset \mathcal{A}_1$ and therefore $\mathcal{A}_1^\star \subset \mathcal{A}_1$. Since $\mathcal{A}_1^\star\subset \{\Omega: ||\Omega_0-\Omega||_F\le \epsilon_n/(c_1\sqrt{p})\},$ we know that $\tilde{\Pi}(\Omega\succ \tau I|\mathcal{A}_1^\star)=1.$ We can therefore lower bound $\Pi(\mathcal{A}_1)$ by $\Pi(\mathcal{A}_1^\star\cap \{\Omega\succ \tau I\})$. Instead of calculating the latter probability directly, we can lower bound it by observing \begin{align*} &2\sum_{k>k'} |\omega_{0,k,k'}-\omega_{k,k'}|+\sum_k |\omega_{0,k,k}-\omega_{k,k}|\\ =&2\sum_{(k,k')\in S_0^\Omega} |\omega_{0,k,k'}-\omega_{k,k'}| + 2\sum_{(k,k')\in (S_0^\Omega)^c} |\omega_{k,k'}|+\sum_{k} |\omega_{0,k,k}-\omega_{k,k}|. \end{align*} Consider the following events \begin{align*} \mathcal{B}_1&=\left\{ \sum_{(k,k')\in S_0^\Omega} |\omega_{0,k,k'}-\omega_{k,k'}|\le \frac{\epsilon_n}{6c_1\sqrt{p}} \right\}\\ \mathcal{B}_2&=\left\{\sum_{(k,k')\in (S_0^\Omega)^c} |\omega_{k,k'}|\le \frac{\epsilon_n}{6c_1\sqrt{p}}\right\}\\ \mathcal{B}_3&=\left\{ \sum_{k} |\omega_{0,k,k}-\omega_{k,k}| \le \frac{\epsilon_n}{3c_1\sqrt{p}}\right\} \end{align*} Let $\mathcal{B}=\bigcap_{i=1}^3\mathcal{B}_i\subset \mathcal{A}_1^\star\subset \mathcal{A}_1$. Since the prior probability of $\mathcal{B}$ lower bounds $\Pi(\mathcal{A}_1)$, we now focus on estimating $\tilde{\Pi}(\mathcal{B})$. Recall that the untruncated prior $\tilde{\Pi}$ is separable.
Consequently, \begin{align*} \Pi(\mathcal{A}_1\cap \{\Omega\succ \tau I\})\ge \tilde{\Pi}(\mathcal{A}_1^\star)\ge \tilde{\Pi}(\mathcal{B})=\prod_{i=1}^3\tilde{\Pi}(\mathcal{B}_i). \end{align*} We first bound the probability of $\mathcal{B}_{1}.$ Note that it suffices to use only the slab part of the prior to bound this probability. A similar technique was used by \citet{Bai2020_groupSSL} (specifically in their Equation D.18) and by \citet{RockovaGeorge2018_ssl}. Specifically, we have \begin{align*} \tilde{\Pi}(\mathcal{B}_1)&=\int_{\mathcal{B}_1}\prod_{(k,k')\in S_0^\Omega} \pi(\omega_{k,k'}|\eta)d\mu\\ &\ge \prod_{(k,k')\in S_0^\Omega} \int_{|\omega_{0,k,k'}-\omega_{k,k'}|\le \frac{\epsilon_n}{6s_0^\Omega c_1\sqrt{p}}} \pi(\omega_{k,k'}|\eta) d\omega_{k,k'}\\ &\ge \eta^{s_0^\Omega} \prod_{(k,k')\in S_0^\Omega} \int_{|\omega_{0,k,k'}-\omega_{k,k'}|\le \frac{\epsilon_n}{6s_0^\Omega c_1\sqrt{p}}} \frac{\xi_1}{2} \exp(-\xi_1|\omega_{k,k'}|) d\omega_{k,k'}\\ &\ge \eta^{s_0^\Omega} \exp(-\xi_1\sum_{(k,k')\in S_0^\Omega}|\omega_{0,k,k'}|)\prod_{(k,k')\in S_0^\Omega} \int_{|\omega_{0,k,k'}-\omega_{k,k'}|\le \frac{\epsilon_n}{6s_0^\Omega c_1\sqrt{p}}} \frac{\xi_1}{2} \exp(-\xi_1|\omega_{0,k,k'}-\omega_{k,k'}|) d\omega_{k,k'}\\ &=\eta^{s_0^\Omega}\exp(-\xi_1||\Omega_{0,S_0^\Omega}||_1) \prod_{(k,k')\in S_0^\Omega} \int_{|\Delta| \le \frac{\epsilon_n}{6s^\Omega_0c_1\sqrt{p}}}\frac{\xi_1}{2} \exp(-\xi_1|\Delta|)d\Delta\\ &\ge \eta^{s_0^\Omega}\exp(-\xi_1||\Omega_{0,S_0^\Omega}||_1)\left[e^{-\frac{\xi_1\epsilon_n}{6c_1s^\Omega_0\sqrt{p}}}\left(\frac{\xi_1\epsilon_n}{6s^\Omega_0c_1\sqrt{p}}\right)\right]^{s^\Omega_0} \end{align*} The first inequality holds because $|\omega_{0,k,k'}-\omega_{k,k'}|\le \epsilon_n/(6s_0^\Omega c_1\sqrt{p})$ for every $(k,k')\in S_0^\Omega$ implies that the sum is less than $\epsilon_n/(6 c_1\sqrt{p}).$ The last inequality is a special case of Equation D.18 of \citet{Bai2020_groupSSL}. For $\mathcal{B}_2,$ we derive the lower bound using the spike component of the prior.
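For completeness, the last inequality in the chain for $\mathcal{B}_1$ rests on the following elementary estimate for the Laplace slab, which we record here (applied with $t=\epsilon_n/(6s_0^\Omega c_1\sqrt{p})$): \begin{align*} \int_{|\Delta|\le t}\frac{\xi_1}{2} \exp(-\xi_1|\Delta|)d\Delta=1-e^{-\xi_1 t}\ge \xi_1 t\, e^{-\xi_1 t}, \end{align*} where the inequality $1-e^{-x}\ge xe^{-x}$ for $x\ge 0$ follows from $e^{x}\ge 1+x$.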
To this end, let $Q=q(q-1)/2$ denote the number of off-diagonal entries of the matrix $\Omega$. We have \begin{align*} \tilde{\Pi}(\mathcal{B}_2)&=\int_{\mathcal{B}_2}\prod_{(k,k')\in (S_0^\Omega)^c} \pi(\omega_{k,k'}|\eta)d\mu\\ &\ge \prod_{(k,k')\in (S_0^\Omega)^c} \int_{|\omega_{k,k'}|\le \frac{\epsilon_n}{6(Q-s_0^\Omega)c_1\sqrt{p}}} \pi(\omega_{k,k'}|\eta)d\mu\\ &\ge (1-\eta)^{Q-s_0^\Omega} \prod_{(k,k')\in (S_0^\Omega)^c} \int_{|\omega_{k,k'}|\le \frac{\epsilon_n}{6(Q-s_0^\Omega)c_1\sqrt{p}}} \frac{\xi_0}{2} \exp(-\xi_0|\omega_{k,k'}|)d\omega_{k,k'} \\ &\ge(1-\eta)^{Q-s_0^\Omega}\prod_{(k,k')\in (S_0^\Omega)^c}\left[1-\frac{6(Q-s_0^\Omega)c_1\sqrt{p}}{\epsilon_n}\mathbb{E}_{\pi}|\omega_{k,k'}|\right]\\ &=(1-\eta)^{Q-s_0^\Omega}\left[1-\frac{6(Q-s_0^\Omega)c_1\sqrt{p}}{\epsilon_n\xi_0}\right]^{Q-s_0^\Omega}\\ &\gtrsim (1-\eta)^{Q-s_0^\Omega} \left[1-\frac{1}{Q-s_0^\Omega}\right]^{Q-s_0^\Omega}\\ &\asymp (1-\eta)^{Q-s_0^\Omega} \end{align*} The Markov-inequality step uses the fact that $\mathbb{E}_{\pi}|\omega_{k,k'}|=1/\xi_0$ for a Laplace variable with rate $\xi_0.$ To derive the last two lines, we used an argument similar to the one \citet{Bai2020_groupSSL} used to derive their Equation D.22. That is, we used the assumption that $\xi_0\sim \max\{Q,n,pq\}^{4+b}$ for some $b>0$ to conclude that $\sqrt{n}/\max\{Q,n,pq\}^{1/2+b}\le 1.$ This inequality allows us to control the $Q$ in the numerator. Since $s_{0}^{\Omega}$ grows more slowly than $Q,$ we can lower bound the above expression by a multiple of $(1-\eta)^{Q-s_0^\Omega}.$ Thus, for large enough $n$, we have \begin{align*} \frac{6(Q-s_0^\Omega)c_1\sqrt{p}}{\epsilon_n\xi_0}&\le \frac{6(Q-s_0^\Omega)c_1\sqrt{p}\sqrt{n}}{\sqrt{p\log(q)}Q^{2+b}}\\ &=\frac{6c_1}{\sqrt{\log(q)}}\frac{Q-s_0^\Omega}{Q^2}\frac{\sqrt{n}}{Q^b}\\ &\le \frac{Q-s_0^\Omega}{Q^2}\\ &\le \frac{1}{Q-s_0^\Omega}. \end{align*} The event $\mathcal{B}_3$ only involves diagonal entries.
The untruncated prior mass can be directly bounded using the exponential distribution: \begin{align*} \tilde{\Pi}(\mathcal{B}_3)&=\int_{\mathcal{B}_3}\prod_{k=1}^q \pi(\omega_{k,k})d\mu\\ &\ge \prod_{k=1}^q \int_{|\omega_{0,k,k}-\omega_{k,k}|\le \frac{\epsilon_n}{3q c_1\sqrt{p}}} \pi(\omega_{k,k}) d\omega_{k,k}\\ &=\prod_{k=1}^q \int_{\omega_{0,k,k}- \frac{\epsilon_n}{3 q c_1\sqrt{p}}}^{\omega_{0,k,k}+ \frac{\epsilon_n}{3 q c_1\sqrt{p}}} \xi \exp(-\xi\omega_{k,k}) d\omega_{k,k}\\ &\ge \prod_{k=1}^q \int_{\omega_{0,k,k}}^{\omega_{0,k,k}+ \frac{\epsilon_n}{3 q c_1\sqrt{p}}} \xi \exp(-\xi\omega_{k,k}) d\omega_{k,k}\\ &=\exp\left(-\xi \sum_{k=1}^q\omega_{0,k,k}\right)\left(\int_{0}^{ \frac{\epsilon_n}{3 q c_1\sqrt{p}}} \xi \exp(-\xi\Delta) d\Delta\right)^{q}\\ &\ge \exp\left(-\xi \sum_{k=1}^q\omega_{0,k,k}\right)\left[e^{-\frac{\xi\epsilon_n}{3c_1q\sqrt{p}}}\left(\frac{\xi\epsilon_n}{3qc_1\sqrt{p}}\right)\right]^{q} \end{align*} Now we are ready to show that the negative log prior mass of $\mathcal{B}$ is bounded above by some $C_1n\epsilon_n^2$.
To this end, consider the negative log probability \begin{align*} -&\log(\Pi(\mathcal{A}_1\cap \{\Omega\succ \tau I\}))\\ \le& \sum_{i=1}^3 -\log(\tilde{\Pi}(\mathcal{B}_i)) \\ \lesssim& -s_0^\Omega \log(\eta) + \xi_1||\Omega_{0,S_0^\Omega}||_1 + \frac{\xi_1 \epsilon_n}{6c_1\sqrt{p}}-s_0^\Omega \log\left(\frac{\xi_1\epsilon_n}{6s^\Omega_0c_1\sqrt{p}}\right)- (Q-s_0^\Omega) \log(1-\eta)\\ &+\xi\sum_k \omega_{0,k,k}+\frac{\xi\epsilon_n}{3c_1\sqrt{p}}-q\log\left(\frac{\xi\epsilon_n}{3qc_1\sqrt{p}} \right)\\ =&-\log\left( \eta^{s_0^\Omega} (1-\eta)^{Q-s_0^\Omega} \right) + \xi_1||\Omega_{0,S_0^\Omega}||_1 + \frac{\xi_1 \epsilon_n}{6c_1\sqrt{p}}+\xi\sum_k \omega_{0,k,k}+\frac{\xi\epsilon_n}{3c_1\sqrt{p}}\\ &-s_0^\Omega \log\left(\frac{\xi_1\epsilon_n}{6s^\Omega_0c_1\sqrt{p}}\right)-q\log\left(\frac{\xi\epsilon_n}{3qc_1\sqrt{p}} \right) \end{align*} The $\frac{\xi_1 \epsilon_n}{6c_1\sqrt{p}}$ and $\frac{\xi\epsilon_n}{3c_1\sqrt{p}}$ terms are $O(\epsilon_n/\sqrt{p})=o(1)$ and hence trivially $\lesssim n\epsilon_n^2.$ The term $\xi\sum_k \omega_{0,k,k}$ is of order $q$ since the diagonal entries are controlled by the largest eigenvalue of $\Omega_0,$ which was assumed to be bounded. Moreover, \begin{align*} \xi_1||\Omega_{0,S_0^\Omega}||_1\le \xi_1 s_0^\Omega \sup_{k,k'}|\omega_{0,k,k'}| \end{align*} is of order $s_0^\Omega$, as the entries $\omega_{0,k,k'}$ are bounded. Without tuning of $\eta$, the first term $-\log\left( \eta^{s_0^\Omega} (1-\eta)^{Q-s_0^\Omega} \right)$ would be of order $Q$. But since we assumed $\frac{1-\eta}{\eta}\sim \max\{Q,pq\}^{2+a}$ for some $a>0$, we have $K_1 \max\{Q,pq\}^{2+a} \le \frac{1-\eta}{\eta}\le K_2 \max\{Q,pq\}^{2+a}$.
That is, we have $1/(1+K_2 \max\{Q,pq\}^{2+a})\le \eta \le 1/(1+K_1\max\{Q,pq\}^{2+a}).$ We can derive a simple lower bound as \begin{align*} \eta^{s_0^\Omega} (1-\eta)^{Q-s_0^\Omega}&\ge (1+K_2 \max\{Q,pq\}^{2+a})^{-s_0^\Omega}(1-\eta)^{Q-s_0^\Omega}\\ &\ge (1+K_2 \max\{Q,pq\}^{2+a})^{-s_0^\Omega}\left(1-\frac{1}{1+K_1\max\{Q,pq\}^{2+a}}\right)^{Q-s_0^\Omega}\\ &\gtrsim (1+K_2 \max\{Q,pq\}^{2+a})^{-s_0^\Omega} \end{align*} The last line holds because $\max\{Q,pq\}^{2+a}$ grows faster than $Q-s^\Omega_0,$ so that $(1-\frac{1}{1+K_1\max\{Q,pq\}^{2+a}})^{Q-s_0^\Omega}$ can be bounded below by some constant. Hence \begin{align*} -\log\left( \eta^{s_0^\Omega} (1-\eta)^{Q-s_0^\Omega} \right) &\lesssim s_0^\Omega \log(1+K_2 \max\{Q,pq\}^{2+a})\lesssim s_0^\Omega \log(\max\{Q,pq\})\\&\asymp s_0^\Omega\log(\max\{q,p\})\le \max\{p,q,s_0^\Omega\} \log(\max\{q,p\}) \end{align*} The last two terms can be treated in the same way, using the assumptions $\xi_1\asymp 1/n$ and $\xi\asymp 1/\max\{Q,n\}.$ \begin{align*} -s_0^\Omega \log\left(\frac{\xi_1\epsilon_n}{6s^\Omega_0c_1\sqrt{p}}\right)&=s_0^\Omega \log\left(\frac{6s^\Omega_0c_1\sqrt{p}}{\xi_1\epsilon_n}\right)\\ &\lesssim s_0^\Omega \log\left(\frac{n^{3/2}s_0^\Omega \sqrt{p}}{\sqrt{\max\{s_0^\Omega,p,q\}\log(q)}}\right)\\ &\le s_0^\Omega \log\left(n^{3/2}s_0^\Omega \right)\\ &\lesssim s_0^\Omega\log(q^2)\\ &\lesssim n\epsilon_n^2 \end{align*} The third line holds because $\sqrt{p}\le \sqrt{\max\{s_0^\Omega, p,q\}}$ and $\log(q)\ge 1,$ which together imply that $\sqrt{p}/\sqrt{\max\{s_0^\Omega, p,q\}\log(q)}\le 1$. The fourth line follows from our assumption that $\log(n)\lesssim \log(q)$ and the fact that $s_0^\Omega <q^2$. The last line uses the definition of $\epsilon_n$.
Finally, we have \begin{align*} -q \log\left(\frac{\xi\epsilon_n}{3qc_1\sqrt{p}}\right)&=q \log\left(\frac{3qc_1\sqrt{p}}{\xi\epsilon_n}\right)\\ &\lesssim q \log\left(\frac{n^{1/2}\max\{Q,n\}q \sqrt{p}}{\sqrt{\max\{s_0^\Omega,p,q\}\log(q)}}\right)\\ &\le q \log\left(n^{1/2}\max\{Q,n\}q \right)\\ &\lesssim q\log(q)\\ &\lesssim n\epsilon_n^2 \end{align*} \subsubsection{Bounding the conditional probability $\Pi(\mathcal{A}_2|\mathcal{A}_1)$} To bound $\Pi(\mathcal{A}_{2} \vert \mathcal{A}_{1}),$ we use a strategy very similar to the one above. The difference is that we now focus on the matrix $\Psi.$ We show that the mass of an L1 norm ball serves as a lower bound, as in the case of $\Omega$. To see this, using an argument from \citet{Ning2020}, we show that powers of $\Omega$ and $\Omega_{0}$ are bounded in operator norm. Thus the terms $\frac{1}{n}||X\Delta_\Psi\Omega_0^{-1/2}||_F^2$ and $\frac{2}{n}||X\Delta_\Psi\Omega^{-1/2}||_F^2 $ that appear in the KL condition are bounded by a constant multiple of $n^{-1}||X\Delta_\Psi||_F^2.$ Using the fact that the columns of $X$ have norm $\sqrt{n},$ we can bound this norm: \begin{align*} ||X\Delta_\Psi||_F\le \sqrt{n}\sum_{j=1}^p||\Delta_{\Psi,j,.}||_F\le \sqrt{n}\sum_{j=1}^p\sum_{k=1}^q|\psi_{j,k}-\psi_{0,j,k}| \end{align*} Thus, to bound $\Pi(\mathcal{A}_2|\mathcal{A}_1)$ from below, it suffices to bound $\Pi(\sum_{j,k} |\psi_{j,k}-\psi_{0,j,k}|\le c_4\epsilon_n)$ for some fixed constant $c_4>0$.
We separate the sum based on whether the true value is 0, similar to our treatment of $\Omega$: \begin{align*} \sum_{j,k} |\psi_{j,k}-\psi_{0,j,k}|&=\sum_{(j,k)\in S_0^\Psi}|\psi_{j,k}-\psi_{0,j,k}|+\sum_{(j,k)\in (S_0^\Psi)^c}|\psi_{j,k}| \end{align*} Using the same argument as for $\Omega$, we can consider the following events, whose intersection is a subset of $\mathcal{A}_2$: \begin{align*} \mathcal{B}_4&=\left\{ \sum_{(j,k)\in S_0^\Psi}|\psi_{j,k}-\psi_{0,j,k}|\le \frac{c_4\epsilon_n}{2} \right\}\\ \mathcal{B}_5&=\left\{ \sum_{(j,k)\in (S_0^\Psi)^c}|\psi_{j,k}|\le \frac{c_4\epsilon_n}{2} \right\} \end{align*} We have $\mathcal{B}_4\cap \mathcal{B}_5\subset\mathcal{A}_2.$ Since the elements of $\Psi$ are \textit{a priori} independent of each other and of $\Omega,$ we compute \begin{align*} \Pi(\mathcal{A}_2|\mathcal{A}_1)\ge \Pi(\mathcal{B}_4\cap \mathcal{B}_5|\mathcal{A}_1)=\Pi(\mathcal{B}_4|\mathcal{A}_1)\Pi(\mathcal{B}_5|\mathcal{A}_1)=\Pi(\mathcal{B}_4)\Pi(\mathcal{B}_5) \end{align*} We bound each of these terms using the same argument as in the previous subsection: \begin{align*} \Pi(\mathcal{B}_4)&=\int_{\mathcal{B}_4}\prod_{(j,k)\in S_0^\Psi}\pi(\psi_{j,k}|\theta)d\mu\\ &\ge \prod_{(j,k)\in S_0^\Psi}\int_{|\psi_{j,k}-\psi_{0,j,k}|\le \frac{c_4\epsilon_n}{2s_0^\Psi}}\pi(\psi_{j,k}|\theta)d\psi_{j,k}\\ &\ge \theta^{s_0^\Psi}\prod_{(j,k)\in S_0^\Psi}\int_{|\psi_{j,k}-\psi_{0,j,k}|\le \frac{c_4\epsilon_n}{2s_0^\Psi}}\frac{\lambda_1}{2}\exp(-\lambda_1|\psi_{j,k}|)d\psi_{j,k}\\ &\ge \theta^{s_0^\Psi}\exp(-\lambda_1\sum_{(j,k)\in S_0^\Psi}|\psi_{0,j,k}|)\prod_{(j,k)\in S_0^\Psi}\int_{|\psi_{j,k}-\psi_{0,j,k}|\le \frac{c_4\epsilon_n}{2s_0^\Psi}} \frac{\lambda_1}{2}\exp(-\lambda_1|\psi_{j,k}-\psi_{0,j,k}|)d\psi_{j,k}\\ &=\theta^{s_0^\Psi}\exp(-\lambda_1\sum_{(j,k)\in S_0^\Psi}|\psi_{0,j,k}|)\prod_{(j,k)\in S_0^\Psi}\int_{|\Delta|\le \frac{c_4\epsilon_n}{2s_0^\Psi}} \frac{\lambda_1}{2}\exp(-\lambda_1|\Delta|)d\Delta\\ &\ge
\theta^{s_0^\Psi}\exp(-\lambda_1||\Psi_{0,S_0^\Psi}||_1)\left[e^{-\frac{c_4\lambda_1\epsilon_n}{2s_0^\Psi}}\left(\frac{c_4\lambda_1\epsilon_n}{2s_0^\Psi}\right)\right]^{s_0^\Psi} \end{align*} Similarly, we have \begin{align*} \Pi(\mathcal{B}_5)&\ge (1-\theta)^{pq-s_0^\Psi}\left[1-\frac{2(pq-s_0^\Psi)}{c_4\epsilon_n\lambda_0}\right]^{pq-s_0^\Psi}\\ &\gtrsim (1-\theta)^{pq-s_0^\Psi} \end{align*} From here we have \begin{align*} -\log(\Pi(\mathcal{A}_2|\mathcal{A}_1))&\le -\log(\Pi(\mathcal{B}_4))-\log(\Pi(\mathcal{B}_5))\\ &\le-\log(\theta^{s_0^\Psi}(1-\theta)^{pq-s_0^\Psi})+\lambda_1||\Psi_{0,S_0^\Psi}||_1+\frac{\lambda_1c_4\epsilon_n}{2}-s_0^\Psi\log\left(\frac{c_4\lambda_1\epsilon_n}{2s_0^\Psi}\right) \end{align*} Since $\Psi_0$ has bounded L2 operator norm, we know that the entries of $\Psi_0$ are all bounded. Thus $\lambda_1||\Psi_{0,S_0^\Psi}||_1=O(s_0^\Psi)\lesssim n\epsilon_n^2$. The third term is $O(\epsilon_n)\lesssim n\epsilon_n^2$. For the first term, recall that we assumed $\frac{1-\theta}{\theta}\sim (pq)^{2+b}$ for some $b>0.$ That is, there are constants $M_{3}$ and $M_{4}$ such that $M_3(pq)^{2+b}\le \frac{1-\theta}{\theta} \le M_4(pq)^{2+b}$. Since $1/(1+M_4(pq)^{2+b})\le \theta \le 1/(1+M_3(pq)^{2+b}),$ we compute \begin{align*} \theta^{s_0^\Psi}(1-\theta)^{pq-s_0^\Psi}&\ge(1+M_4(pq)^{2+b})^{-s_0^\Psi}(1-\theta)^{pq-s_0^\Psi}\\ &\ge(1+M_4(pq)^{2+b})^{-s_0^\Psi}\left(1-1/(1+M_3(pq)^{2+b})\right)^{pq-s_0^\Psi}\\ &\gtrsim (1+M_4(pq)^{2+b})^{-s_0^\Psi} \end{align*} Note that the last line is due to the fact that $(pq)^{2+b}$ grows faster than $pq-s_0^\Psi.$ Consequently, the term $\left(1-1/(1+M_3(pq)^{2+b})\right)^{pq-s_0^\Psi}$ can be bounded from below by a constant not depending on $n.$ Thus, \begin{align*} -\log\left(\theta^{s_0^\Psi}(1-\theta)^{pq-s_0^\Psi}\right)\lesssim s_0^\Psi\log(1+M_4(pq)^{2+b})\lesssim s_0^\Psi\log(pq)\lesssim s_0^\Psi\max\{\log(q),\log(p)\} \end{align*} For the last term, we use the same argument as we did with $\Omega$: \begin{align*} -s_0^\Psi\log\left(\frac{c_4\lambda_1\epsilon_n}{2s_0^\Psi}\right)&=s_0^\Psi\log\left(\frac{2s_0^\Psi}{c_4\lambda_1\epsilon_n}\right)\\ &\lesssim s_0^\Psi\log\left(\frac{n^{3/2}s_0^\Psi}{\sqrt{\log(q)}}\right)\\ &\le s_0^\Psi\log\left(n^{3/2}s_0^\Psi\right)\\ &\lesssim s_0^\Psi\log(pq)\\ &\lesssim n\epsilon_n^2 \end{align*} where we used $\log(n)\lesssim \log(q)$ and $s_0^\Psi\le pq$. \subsection{Test condition} To reduce the part of the parameter space that must be handled by the test condition, we first prove a dimension recovery result by bounding the relevant prior probability, with the effective dimension defined as the number of entries whose absolute value exceeds the threshold at which the spike and slab components intersect. We then find a suitable vectorized L1 norm sieve in the resulting ``lower-dimensional'' parameter space. We construct tests by taking the supremum of a collection of single-alternative Neyman--Pearson likelihood ratio tests over subsets of the sieve that are norm balls, and then show that the number of such subsets needed to cover the sieve can be bounded appropriately. \subsubsection{Dimension recovery} Unlike \citet{Ning2020}, our prior assigns no mass to exactly sparse solutions. Nevertheless, similar to \citet{RockovaGeorge2018_ssl}, we can define a notion of ``effective sparsity'' and a generalized dimension. Intuitively, the generalized dimension counts how many coefficients are drawn from the slab rather than the spike part of the prior. Formally, the generalized inclusion functions $\nu_{\psi}$ and $\nu_{\omega}$ for $\Psi$ and $\Omega$ are defined as \begin{align*} \nu_{\psi}(\psi_{j,k})&=\mathbbm{1}(|\psi_{j,k}|>\delta_{\psi})\\ \nu_{\omega}(\omega_{k,k'})&=\mathbbm{1}(|\omega_{k,k'}|>\delta_{\omega}) \end{align*} where $\delta_{\psi}$ and $\delta_{\omega}$ are the thresholds at which the weighted spike and slab components have equal density; e.g., $\delta_{\psi}$ solves $(1-\theta)\frac{\lambda_0}{2}e^{-\lambda_0\delta_{\psi}}=\theta\frac{\lambda_1}{2}e^{-\lambda_1\delta_{\psi}}$:
\begin{align*} \delta_{\psi}&=\frac{1}{\lambda_0-\lambda_1}\log\left[\frac{1-\theta}{\theta}\frac{\lambda_0}{\lambda_1}\right]\\ \delta_{\omega}&=\frac{1}{\xi_0-\xi_1}\log\left[\frac{1-\eta}{\eta}\frac{\xi_0}{\xi_1}\right] \end{align*} Then the generalized dimension is defined as the number of included entries: \begin{equation} \label{eqn:eff_dim} \begin{aligned} |\nu(\Psi)|&=\sum_{j,k}\nu_{\psi}(\psi_{j,k})\\ |\nu(\Omega)|&=\sum_{k>k'}\nu_{\omega}(\omega_{k,k'}) \end{aligned} \end{equation} Note that we only count the off-diagonal entries of $\Omega$. We are now ready to prove Lemma~\ref{lemma:dimension_recovery} from the main text. The main idea is to bound the posterior probability directly. Let $\mathcal{B}^\Psi_n=\{\Psi:|\nu(\Psi)|<r^\Psi_n\}$ and $\mathcal{B}^\Omega_n=\{\Omega\succ \tau I:|\nu(\Omega)|<r^\Omega_n\},$ where $r_n^\Psi=r_n^\Omega=C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}$ for some $C_3'>C_1,$ with $C_1$ the constant in the KL condition. We aim to show that $\mathbb{E}_0\Pi(\Omega\in(\mathcal{B}^\Omega_n)^c|Y_1,\dots,Y_n )\to 0$ and $\mathbb{E}_0\Pi(\Psi\in(\mathcal{B}^\Psi_n)^c|Y_1,\dots,Y_n )\to 0$. The marginal posteriors can be expressed using the log-likelihood $\ell_n$: \begin{align} \label{eqn:posterior_B} \begin{split} \Pi(\Psi\in \mathcal{B}_n^\Psi|Y_1,\dots,Y_n)&=\frac{\iint_{\mathcal{B}_n^\Psi}\exp(\ell_n(\Psi,\Omega)-\ell_n(\Psi_0,\Omega_0))d\Pi(\Psi)d\Pi(\Omega)}{\iint\exp(\ell_n(\Psi,\Omega)-\ell_n(\Psi_0,\Omega_0))d\Pi(\Psi)d\Pi(\Omega)}\\ \Pi(\Omega\in \mathcal{B}_n^\Omega|Y_1,\dots,Y_n)&=\frac{\iint_{\mathcal{B}_n^\Omega}\exp(\ell_n(\Psi,\Omega)-\ell_n(\Psi_0,\Omega_0))d\Pi(\Psi)d\Pi(\Omega)}{\iint\exp(\ell_n(\Psi,\Omega)-\ell_n(\Psi_0,\Omega_0))d\Pi(\Psi)d\Pi(\Omega)} \end{split} \end{align} By the result of the KL condition (Lemma \ref{lemma:KL_cg}), we know the denominators are bounded from below by $e^{-C_1n\epsilon_n^2}$ with large probability.
Thus, we focus now on upper bounding the numerators, beginning with $\Psi.$ Consider the numerator: \begin{align*} \mathbb{E}_0\left(\iint_{(\mathcal{B}_n^{\Psi})^c}f/f_0d\Pi(\Psi)d\Pi(\Omega)\right)&=\int\iint_{(\mathcal{B}_n^{\Psi})^c}f/f_0d\Pi(\Psi)d\Pi(\Omega)f_0 dy\\ &=\iint_{(\mathcal{B}_n^{\Psi})^c}\int f dyd\Pi(\Psi)d\Pi(\Omega)\\ &\le \int_{(\mathcal{B}_n^{\Psi})^c}d\Pi(\Psi)=\Pi(|\nu(\Psi)|\ge r_n^\Psi) \end{align*} We can bound the above display using the fact that when $|\psi_{j,k}|>\delta_\psi$ we have $\pi(\psi_{j,k})<2\theta\frac{\lambda_1}{2}\exp(-\lambda_1|\psi_{j,k}|)$; this follows from the definition of the threshold $\delta_\psi,$ beyond which the weighted spike density is dominated by the weighted slab density: \begin{align*} \Pi(|\nu(\Psi)|\ge r_n^\Psi)&\le \sum_{|S|>r_n^\Psi}(2\theta)^{|S|}\prod_{(j,k)\in S}\int_{|\psi_{j,k}|>\delta_\psi} \frac{\lambda_1}{2}\exp(-\lambda_1|\psi_{j,k}|) d\psi_{j,k} \prod_{(j,k)\notin S} \int_{|\psi_{j,k}|<\delta_\psi}\pi(\psi_{j,k})d\psi_{j,k}\\ &\le \sum_{|S|>r_n^\Psi}(2\theta)^{|S|} \end{align*} Using the assumption on $\theta$ and the fact that $\binom{pq}{k}\le (epq/k)^k$ (cf. Equation D.32 of \citet{Bai2020_groupSSL}), we can further upper bound the probability: \begin{align*} \Pi(|\nu(\Psi)|\ge r_n^\Psi)&\le \sum_{|S|>r_n^\Psi}(2\theta)^{|S|} \le \sum_{|S|>r_n^\Psi}\left(\frac{2}{1+M_3 (pq)^{2+b}}\right)^{|S|}\\ &\le \sum_{k=\left\lfloor r_n^\Psi\right\rfloor +1}^{pq} \binom{pq}{k}\left(\frac{2}{M_3(pq)^2}\right)^k \le \sum_{k=\left\lfloor r_n^\Psi\right\rfloor +1}^{pq} \left(\frac{2e}{M_3kpq}\right)^k\\ &<\sum_{k=\left\lfloor r_n^\Psi\right\rfloor +1}^{pq} \left(\frac{2e}{M_3(\left\lfloor r_n^\Psi\right\rfloor +1)pq}\right)^k\\ &\lesssim (pq)^{-(\left\lfloor r_n^\Psi\right\rfloor +1)}\\ &\le \exp(-(\left\lfloor r_n^\Psi\right\rfloor)\log(pq)) \end{align*} Taking $r_n^\Psi=C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}$ for some $C_3'>C_1$, we have: \begin{align*} \Pi(|\nu(\Psi)|\ge r_n^\Psi)&\le \exp(-C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}\log(pq)) \end{align*} Therefore, \begin{align*}
\mathbb{E}_0\Pi((\mathcal{B}_n^\Psi)^c|Y_1,\dots,Y_n)\le \mathbb{E}_0\Pi((\mathcal{B}_n^\Psi)^c|Y_1,\dots,Y_n)I_{E_n}+P_0(E_n^c), \end{align*} where $E_n$ is the event in the KL condition. On $E_n$, the KL condition ensures that the denominator in Equation~\eqref{eqn:posterior_B} is lower bounded by $\exp(-C_1n\epsilon_n^2),$ while the numerator is upper bounded by $\exp(-C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}\log(pq)).$ Since $\mathbb{P}_0(E_n^c)$ is $o(1)$ per the KL condition, we have the upper bound \begin{align*} \mathbb{E}_0\Pi((\mathcal{B}_n^\Psi)^c|Y_1,\dots,Y_n)&\le \exp(C_1n\epsilon_n^2-C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}\log(pq))+o(1)\to 0 \end{align*} This completes the proof of the dimension recovery result for $\Psi.$ The workflow for $\Omega$ is very similar, except that we need to use the upper bound on the graphical prior in Equation~\eqref{eqn:graphical_prior_bound} to properly bound the prior mass. We upper bound the numerator: \begin{align*} \mathbb{E}_0\left(\iint_{(\mathcal{B}_n^{\Omega})^c}f/f_0d\Pi(\Psi)d\Pi(\Omega)\right)&\le \int_{(\mathcal{B}_n^{\Omega})^c}d\Pi(\Omega)=\Pi(|\nu(\Omega)|\ge r_n^\Omega)\le \exp(2\xi Q-\log(R))\tilde{\Pi}(|\nu(\Omega)|\ge r_n^\Omega) \end{align*} We bound the above display using the fact that when $|\omega_{k,k'}|>\delta_\omega$ we have $\pi(\omega_{k,k'})<2\eta\frac{\xi_1}{2}\exp(-\xi_1|\omega_{k,k'}|)$; as before, this follows from the definition of the threshold $\delta_\omega.$
We have \begin{align*} \tilde{\Pi}(|\nu(\Omega)|\ge r_n^\Omega)&\le \sum_{|S|>r_n^\Omega}(2\eta)^{|S|}\prod_{(k,k')\in S}\int_{|\omega_{k,k'}|>\delta_\omega} \frac{\xi_1}{2}\exp(-\xi_1|\omega_{k,k'}|) d\omega_{k,k'} \prod_{(k,k')\notin S} \int_{|\omega_{k,k'}|<\delta_\omega}\pi(\omega_{k,k'})d\omega_{k,k'}\\ &\le \sum_{|S|>r_n^\Omega}(2\eta)^{|S|} \end{align*} Using the assumption on $\eta$ and the fact that $\binom{Q}{k}\le (eQ/k)^k$, we can further upper bound the probability: \begin{align*} \tilde{\Pi}(|\nu(\Omega)|\ge r_n^\Omega)&\le \sum_{|S|>r_n^\Omega}(2\eta)^{|S|} \le \sum_{|S|>r_n^\Omega}\left(\frac{2}{1+K_1 \max\{pq,Q\}^{2+a}}\right)^{|S|}\\ &\le \sum_{k=\left\lfloor r_n^\Omega\right\rfloor +1}^{Q} \binom{Q}{k}\left(\frac{2}{K_1\max\{pq,Q\}^2}\right)^k \le \sum_{k=\left\lfloor r_n^\Omega\right\rfloor +1}^{\max\{pq,Q\}} \binom{\max\{pq,Q\}}{k}\left(\frac{2}{K_1\max\{pq,Q\}^2}\right)^k\\ &\le \sum_{k=\left\lfloor r_n^\Omega\right\rfloor +1}^{\max\{pq,Q\}} \left(\frac{2e}{K_1k\max\{pq,Q\}}\right)^k <\sum_{k=\left\lfloor r_n^\Omega\right\rfloor +1}^{\max\{pq,Q\}} \left(\frac{2e}{K_1(\left\lfloor r_n^\Omega\right\rfloor +1)\max\{pq,Q\}}\right)^k\\ &\lesssim \max\{pq,Q\}^{-(\left\lfloor r_n^\Omega\right\rfloor +1)}\\ &\le \exp(-(\left\lfloor r_n^\Omega\right\rfloor)\log(\max\{pq,Q\})) \end{align*} Taking $r_n^\Omega=C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}$ with $C_3'>C_1$, we have \begin{align*} \tilde{\Pi}(|\nu(\Omega)|\ge r_n^\Omega)&\le \exp(-C_3'\max\{p,q,s_0^\Psi,s_0^\Omega\}\log(\max\{pq,Q\}))\le \exp(-C_3'n\epsilon_n^2) \end{align*} Thus, using the assumption $\xi \asymp 1/\max\{Q,n\}$, for some $R'$ not depending on $n$ we have \begin{align*} \Pi(|\nu(\Omega)|\ge r_n^\Omega)\le \exp(-C_3'n\epsilon_n^2+2\xi Q-\log(R))\le \exp(-C_3'n\epsilon_n^2+\log(R')) \end{align*} We therefore conclude that \begin{align*} \mathbb{E}_0\Pi((\mathcal{B}_n^\Omega)^c|Y_1,\dots,Y_n)\le \mathbb{E}_0\Pi((\mathcal{B}_n^\Omega)^c|Y_1,\dots,Y_n)I_{E_n}+P_0(E_n^c), \end{align*} where $E_n$ is
the event in the KL condition. On $E_n$, the KL condition ensures that the denominator in Equation~\eqref{eqn:posterior_B} is lower bounded by $\exp(-C_1n\epsilon_n^2),$ while the numerator is upper bounded by $\exp(-C_3'n\epsilon_n^2+\log(R')).$ Since $\mathbb{P}_0(E_n^c)$ is $o(1)$ per the KL condition, we conclude \begin{align*} \mathbb{E}_0\Pi((\mathcal{B}_n^\Omega)^c|Y_1,\dots,Y_n)&\le \exp(C_1n\epsilon_n^2-C_3'n\epsilon_n^2+\log(R'))+o(1)\to 0 \end{align*} We pause now to reflect on how dimension recovery helps us establish contraction. Our end goal is to show that the posterior distribution contracts to the true value; the first step is to show that, for any given $\epsilon>0,$ the event on which the average log-affinity difference exceeds $\epsilon$ has $o(1)$ posterior mass. Any such event can be partitioned according to whether it intersects $\mathcal{B}^\Psi_n$, $\mathcal{B}^\Omega_n$, or their complements. Because the complements $(\mathcal{B}^\Psi_n)^c$ and $(\mathcal{B}^\Omega_n)^c$ have $o(1)$ posterior mass, the parts of the partition that intersect either complement also have $o(1)$ posterior mass. Thus, we only need to show that events on which the log-affinity difference exceeds a given $\epsilon>0$ \textit{and} the low-dimensional structure is recovered have $o(1)$ posterior mass. The recovery condition reduces the complexity of the events (in the parameter space) that we need to deal with by reducing their effective dimension. We will make use of this low-dimensional structure when checking the test condition.
Formally, for every $\epsilon>0$, we have \begin{align*} &\mathbb{E}_0\Pi(\Psi,\Omega\succ \tau I:\frac{1}{n}\sum \rho(f_i, f_{0,i})>\epsilon|Y_1,\dots, Y_n)\\ \le& \mathbb{E}_0\Pi(\Psi\in \mathcal{B}_n^\Psi,\Omega\succ \tau I:\frac{1}{n}\sum \rho(f_i, f_{0,i})>\epsilon|Y_1,\dots, Y_n)+\mathbb{E}_0\Pi((\mathcal{B}^\Psi_n)^c|Y_1,\dots,Y_n)\\ \le& \mathbb{E}_0\Pi(\Psi\in \mathcal{B}_n^\Psi,\Omega\in \mathcal{B}_n^\Omega:\frac{1}{n}\sum \rho(f_i, f_{0,i})>\epsilon|Y_1,\dots, Y_n)\\ &+\mathbb{E}_0\Pi((\mathcal{B}^\Psi_n)^c|Y_1,\dots,Y_n)+\mathbb{E}_0\Pi((\mathcal{B}^\Omega_n)^c|Y_1,\dots,Y_n) \end{align*} The last two terms are $o(1),$ as proved above. \subsubsection{Sieve} \label{appendix:sieve_exists} As shown in the previous section, we can concentrate on the events with proper dimension recovery, i.e. $\{\Psi\in\mathcal{B}^\Psi_n,\Omega\in \mathcal{B}^\Omega_n\}$. To apply the general theory of posterior contraction of \citet{ghosal2017fundamentals} and establish contraction on the event of proper dimension recovery (i.e. to show $\mathbb{E}_0\Pi(\Psi\in \mathcal{B}_n^\Psi,\Omega\in \mathcal{B}_n^\Omega:\frac{1}{n}\sum \rho(f_i, f_{0,i})>\epsilon|Y_1,\dots, Y_n)\to 0$), we need to find a sieve that covers enough of the support of the prior. We will show that an L1 norm sieve is sufficient. Formally, we will show that there exists a sieve $\mathcal{F}_n$ such that, for some constant $C_2>C_1+2$: \begin{equation} \Pi(\mathcal{F}_n^c)\le \exp(-C_2n\epsilon_n^2) \end{equation} Consider the sieve \begin{equation} \mathcal{F}_n=\left\{\Psi\in \mathcal{B}^\Psi_n,\Omega\in \mathcal{B}^\Omega_n:||\Psi||_1\le 2C_3p, ||\Omega||_1 \le 8C_3q \right\} \end{equation} for some large $C_3>C_1+2+\log(3),$ where $C_1$ is the constant in the KL condition. We have \begin{align*} \Pi(\mathcal{F}_n^c)&\le \Pi(||\Psi||_1> 2C_3p)+\Pi((||\Omega||_1 > 8C_3q)\cap \{\Omega\succ \tau I\}) \end{align*} We upper bound each term similarly to \citet{Bai2020_groupSSL}.
By using the bound in Equation~\eqref{eqn:graphical_prior_bound}, we know that \begin{align*} \Pi((||\Omega||_1 > 8C_3q)\cap \{\Omega\succ \tau I\})\le \exp(2\xi Q-\log(R))\tilde{\Pi}(||\Omega||_1 > 8C_3q). \end{align*} Since $||\Omega||_1=2\sum_{k>k'}|\omega_{k,k'}|+\sum_{k}|\omega_{k,k}|$, at least one of these two sums must exceed $8C_{3}q/2.$ Thus, we can form an upper bound on the L1 norm probability: \begin{align*} \tilde{\Pi}(||\Omega||_1 > 8C_3q)\le \tilde{\Pi}\left(\sum_{k>k'}|\omega_{k,k'}|>\frac{8C_3q}{4}\right)+\tilde{\Pi}\left(\sum_{k}|\omega_{k,k}|>\frac{8C_3q}{2}\right). \end{align*} To get an upper bound under $\tilde{\Pi},$ we can act as if all the $\omega_{k,k'}$'s were drawn from the slab distribution. In that setting, $\sum_{k>k'}|\omega_{k,k'}|$ is Gamma distributed with shape parameter $Q$ and rate parameter $\xi_1$. Using an appropriate tail probability for the Gamma distribution (see \citet{boucheron2013concentration}, p.~29) and the fact that $1+x-\sqrt{1+2x}\ge (x-1)/2,$ we compute \begin{align*} \exp(2\xi Q-\log(R))\tilde{\Pi}(\sum_{k>k'}|\omega_{k,k'}| > 8C_3q/4)&\le \exp\left[-Q\left(1-\sqrt{1+2\frac{8C_3q}{4Q\xi_1}}+\frac{8C_3q}{4Q\xi_1}\right)+2Q-\log(R)\right]\\ &\le \exp\left[-\frac{8C_3q}{8\xi_1}+\left(\frac{5}{2}Q-\log(R)\right)\right] \end{align*} Since we have assumed $\xi_{1} \asymp 1/n,$ and since $n\epsilon_n^2\ge q\log(q)$ for sufficiently large $n,$ we have $qn\epsilon_n^2\ge Q\log(q)$ and $Q=o(qn\epsilon_n^2)$, so that \begin{align*} \frac{8C_3q}{8\xi_1}-\left(\frac{5}{2}Q-\log(R)\right)&\asymp C_3(nq)-\left(\frac{5}{2}Q-\log(R)\right)\\ &\ge C_3(qn\epsilon_n^2)-\left(\frac{5}{2}Q-\log(R)\right)\\ &=C_3(qn\epsilon_n^2)-o(qn\epsilon_n^2)\\ &\ge C_3n\epsilon_n^2 \end{align*} The term linear in $Q$ can be ignored for $n$ large because the left hand side is dominated by $C_3qn\epsilon_n^2\ge C_3Q\log(q)$. Note that we also used the assumption that $\epsilon_n\to 0$ (so that $nq\ge qn\epsilon_n^2$).
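The elementary inequality $1+x-\sqrt{1+2x}\ge (x-1)/2$ invoked above holds for all $x\ge 0$; since $(x+3)/2\ge 0$, squaring gives a one-line verification: \begin{align*} 1+x-\sqrt{1+2x}\ge \frac{x-1}{2}\iff \frac{x+3}{2}\ge \sqrt{1+2x}\iff (x+3)^2\ge 4(1+2x)\iff (x-1)^2+4\ge 0, \end{align*} and the last statement always holds.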
We further have \begin{align*} \exp(2 \xi Q-\log(R))\tilde{\Pi}\left(\sum_{k>k'}|\omega_{k,k'}| > 8C_3q/4\right)\le \exp(-C_3n\epsilon_n^2) \end{align*} For the diagonal, the sum follows a Gamma distribution with shape $q$ and rate $\xi$. We obtain a similar bound: \begin{align*} \exp(2 \xi Q-\log(R))\tilde{\Pi}(\sum_{k}|\omega_{k,k}| > 8C_3q/2)&\le \exp(2 Q-\log(R))\exp\left[-q\left(1-\sqrt{1+2\frac{8C_3q}{2q\xi}}+\frac{8C_3q}{2q\xi}\right)\right]\\ &\le \exp\left[-\frac{8C_3q}{4\xi}+Q\left(2+\frac{q}{2Q}\right)-\log(R)\right] \end{align*} Using the same argument as before and the fact that $\xi \asymp 1/\max\{Q,n\}$, we have \begin{align*} \frac{8C_3q}{4\xi}-Q\left(2+\frac{q}{2Q}\right)+\log(R)&\asymp 2C_3(\max\{Q,n\}q)-Q\left(2+\frac{q}{2Q}\right)+\log(R)\\ &\ge C_3qn\epsilon_n^2-o(qn\epsilon_n^2)\\ &\ge C_3n\epsilon_n^2 \end{align*} The term linear in $Q$ can again be ignored for $n$ large, since the left hand side is dominated by $C_3qn\epsilon_n^2\ge C_3Q\log(q)$ and $q/Q\to 0$. By combining the above results, we have: \begin{equation} \label{eqn:sieve_Omega_bound} \begin{aligned} \Pi((||\Omega||_1 > 8C_3q)\cap \{\Omega\succ \tau I\})\le& \exp(2\xi Q-\log(R))\tilde{\Pi}(||\Omega||_1 > 8C_3q)\\ \le& \exp(2\xi Q-\log(R))\tilde{\Pi}\left(\sum_{k>k'}|\omega_{k,k'}|>\frac{8C_3q}{4}\right)\\ &+\exp(2\xi Q-\log(R))\tilde{\Pi}\left(\sum_{k}|\omega_{k,k}|>\frac{8C_3q}{2}\right)\\ \le& 2\exp(-C_3n\epsilon_n^2) \end{aligned} \end{equation} The probability of $||\Psi||_1>2C_3p$ can be bounded by the tail probability of a Gamma distribution with shape parameter $pq$ and rate parameter $\lambda_1$ (again acting as if every entry were drawn from the slab): \begin{align*} \Pi(||\Psi||_1> 2C_3p)&\le \exp\left[-pq\left(1-\sqrt{1+2\frac{2C_3p}{pq\lambda_1}}+\frac{2C_3p}{pq\lambda_1}\right)\right]\\ &\le \exp\left[-pq\left(\frac{2C_3p}{2pq\lambda_1}-\frac{1}{2}\right)\right]\\ &\le \exp\left(-\frac{2C_3p}{2\lambda_1}+\frac{pq}{2}\right) \end{align*} Using the same argument, we have $pn\ge pn\epsilon_n^2\ge pq\log(q)$ and thus $pq=o(pn\epsilon_n^2)$ for large $n$.
Consequently, \begin{align*} \exp\left(-\frac{2C_3p}{2\lambda_1}+\frac{pq}{2}\right)\le \exp\left(-C_3pn\epsilon_n^2+o(pn\epsilon_n^2)\right)\le \exp(-C_3n\epsilon_n^2) \end{align*} and \begin{equation} \label{eqn:sieve_B_bound} \begin{aligned} \Pi(||\Psi||_1> 2C_3p)&\le\exp(-C_3n\epsilon_n^2) \end{aligned} \end{equation} By combining the results from Equations~\eqref{eqn:sieve_Omega_bound} and~\eqref{eqn:sieve_B_bound}, we conclude \begin{align*} \Pi(\mathcal{F}_n^c)\le 3\exp(-C_3n\epsilon_n^2)=\exp(-C_3n\epsilon_n^2+\log(3)). \end{align*} With our choice of $C_3$, the above probability is asymptotically bounded from above by $\exp(-C_2n\epsilon_n^2)$ for some $C_2\ge C_1+2$. \subsubsection{Tests around a representative point} To apply the general theory, we need to construct tests $\varphi_n$ such that, for some $M_2>C_1+1$: \begin{equation} \label{eqn:test_condition} \begin{aligned} &\mathbb{E}_{f_0}\varphi_n\lesssim e^{-M_2n\epsilon_n^2/2}\\ \sup_{f\in \mathcal{F}_n:\rho(f_0,f)>M_2n\epsilon_n^2}&\mathbb{E}_{f}(1-\varphi_n)\lesssim e^{-M_2n\epsilon_n^2} \end{aligned} \end{equation} where $f=\prod_{i=1}^n \mathcal{N}(X_i\Psi\Omega^{-1},\Omega^{-1})$ and $f_0=\prod_{i=1}^n \mathcal{N}(X_i\Psi_0\Omega_0^{-1},\Omega_0^{-1}).$ Instead of directly constructing $\varphi_n$ on the whole sieve, we use a method similar to that of \citet{Ning2020}. That is, we construct tests against a representative point and show that these tests work well in a neighborhood of the representative point. We then take the supremum of these tests and show that the number of pieces needed to cover the entire sieve can be appropriately bounded. For a representative point $f_1$, consider the Neyman--Pearson test for the single-point alternative $H_0: f=f_0$ versus $H_1: f=f_1$, namely $\phi_n=I\{ f_1/f_0\ge 1 \}$.
If the average half-order R\'enyi divergence satisfies $-n^{-1}\log(\int\sqrt{f_0f_1}d\mu)\ge \epsilon^2$, we have \begin{align*} \mathbb{E}_{f_0}(\phi_n)&\le\int_{f_1>f_0}\sqrt{f_1/f_0} f_0 d\mu\le \int \sqrt{f_1f_0}d\mu\le e^{-n\epsilon^2}\\ \mathbb{E}_{f_1}(1-\phi_n)&\le\int_{f_0>f_1}\sqrt{f_0/f_1} f_1 d\mu\le \int \sqrt{f_0f_1}d\mu\le e^{-n\epsilon^2} \end{align*} By the Cauchy--Schwarz inequality, for any alternative $f$ we can control the Type II error rate: \begin{align*} \mathbb{E}_f(1-\phi_n)\le \{\mathbb{E}_{f_1}(1-\phi_n)\}^{1/2}\{\mathbb{E}_{f_1}(f/f_1)^2\}^{1/2} \end{align*} As long as the second factor grows at most like $e^{cn\epsilon^2}$ for some suitably small $c$, the full expression can be controlled. Thus we consider a neighborhood around the representative point that is small enough for the second factor to be bounded. Consider every density with parameters satisfying \begin{equation} \label{eqn:test_sets} \begin{aligned} &|||\Omega|||_2\le ||\Omega||_1\le 8C_3q,\\ &||\Psi_1-\Psi||_2\le ||\Psi_1-\Psi||_1\le \frac{1}{\sqrt{2C_3np}},\\ &|||\Omega_1-\Omega|||_2\le ||\Omega_1-\Omega||_1\le\frac{1}{8C_3n\max\{p,q\}^{3/2}} \le \frac{1}{8C_3nq^{3/2}} \end{aligned} \end{equation} We show that $\mathbb{E}_{f_1}(f/f_1)^2$ is bounded on the above set when the parameters come from the sieve $\mathcal{F}_n$. Following \citet{Ning2020}, denote $\Sigma_1=\Omega_1^{-1}$ and $\Sigma=\Omega^{-1}$, as well as $\Sigma_1^\star=\Omega^{1/2}\Sigma_1\Omega^{1/2}$, $\Delta_\Psi=\Psi-\Psi_1$, and $\Delta_\Omega=\Omega-\Omega_1$.
Using the observation $\Psi\Omega^{-1}-\Psi_1\Omega_1^{-1}=(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)\Omega^{-1},$ we have \begin{equation} \label{eqn:test_f_over_f1} \begin{aligned} \mathbb{E}_{f_1}(f/f_1)^2=&|\Sigma_1^\star|^{n/2}|2I-\Sigma_1^{\star-1}|^{-n/2}\\ &\times \exp\left(\sum_{i=1}^nX_i(\Psi\Omega^{-1}-\Psi_1\Omega_1^{-1})\Omega^{1/2}(2\Sigma_1^\star-I)^{-1}\Omega^{1/2}(\Psi\Omega^{-1}-\Psi_1\Omega_1^{-1})^{\top}X_i^{\top}\right)\\ =&|\Sigma_1^\star|^{n/2}|2I-\Sigma_1^{\star-1}|^{-n/2}\\ &\times \exp\left(\sum_{i=1}^nX_i(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)\Omega^{-1/2}(2\Sigma_1^\star-I)^{-1}\Omega^{-1/2}(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)^{\top}X_i^{\top}\right) \end{aligned} \end{equation} For the first factor we use an argument similar to that in \citet{Ning2020} (after Equation 5.9). Since $\Omega\in\mathcal{F}_n$, we have $|||\Omega^{-1}|||_2\le 1/\tau $. The fact $|||\Omega_1-\Omega|||_2\le \delta_n'= 1/(8C_3nq^{3/2})$ implies \begin{align*} |||\Sigma_1^\star-I|||_2\le |||\Omega^{-1}|||_2|||\Omega_1-\Omega|||_2\le \delta_n'/\tau \end{align*} and thus we can bound the spectrum of $\Sigma_1^\star$, i.e. $1-\delta_n'/\tau\le \eig_1(\Sigma_1^\star)\le \eig_q(\Sigma_1^\star)\le 1+\delta_n'/\tau $. Thus \begin{align*} \left(\frac{|\Sigma_1^\star|}{|2I-\Sigma_1^{\star-1}|}\right)^{n/2}&=\exp\left(\frac{n}{2}\sum_{i=1}^q\log(\eig_i(\Sigma_1^\star))-\frac{n}{2}\sum_{i=1}^q \log\left(2-\frac{1}{\eig_i(\Sigma_1^\star)}\right)\right)\\ &\le \exp\left(\frac{nq}{2}\log(1+\delta_n'/\tau)-\frac{nq}{2}\log\left(1-\frac{\delta_n'/\tau}{1-\delta_n'/\tau}\right)\right)\\ &\le \exp\left(\frac{nq}{2}\delta_n'/\tau+\frac{nq}{2}\left(\frac{\delta_n'/\tau}{1-2\delta_n'/\tau}\right)\right)\\ &\le \exp(nq\delta_n'/\tau)\\ &\le e \end{align*} The third inequality is due to the fact $1-x^{-1}\le \log(x)\le x-1$. We next bound the logarithm of the second factor of Equation~\eqref{eqn:test_f_over_f1}.
\begin{align*} |||\Omega^{-1}|||_2|||(2\Sigma_1^\star-I)^{-1}|||_2\sum_{i=1}^n||X_i(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)||_2^2\le 2/\tau \sum_{i=1}^n||X_i(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)||_2^2 \end{align*} We can further bound the sum on the sieve. \begin{align*} \sum_{i=1}^n||X_i(\Delta_\Psi-\Psi_1\Omega^{-1}\Delta_\Omega)||_2^2&\le 2\sum_{i=1}^n ||X_i\Delta_\Psi||_2^2+2\sum_{i=1}^n ||X_i\Psi_1\Omega^{-1}\Delta_\Omega||_2^2\\ &\le 2np|||\Delta_\Psi|||_2^2+2np|||\Psi_1|||_2^2|||\Omega^{-1}|||_2^2||\Delta_\Omega||_F^2\\ &\le 2np\frac{1}{2C_3np}+2np\left(2C_3p+\frac{1}{\sqrt{2C_3np}}\right)^2\frac{1}{\tau^2}\frac{1}{(8C_3n\max\{p,q\}^{3/2})^2}\\ &\le 2np\frac{1}{2C_3np}+2np16C_3^2p^2\frac{1}{\tau^2}\frac{1}{(8C_3n\max\{p,q\}^{3/2})^2}\\ &\lesssim 1 \end{align*} We bound the norm of $\Psi_1$ using the triangle inequality, $|||\Psi_1|||_2\le |||\Psi|||_2+|||\Psi_1-\Psi|||_2\le 2C_3p+1/\sqrt{2C_3np}$. The first term is $O(1)$ and the second term is $O(1/q)$; combining these results, we conclude that the second factor of Equation~\eqref{eqn:test_f_over_f1} is bounded. Thus, following the argument of \citet{Ning2020}, the desired test $\varphi_n$ in Equation~\eqref{eqn:test_condition} can be obtained as the maximum of all tests $\phi_n$ described above. \subsubsection{Pieces needed to cover the sieve} The tests above yield contraction in the log-affinity $\rho(f,f_0)$. To finish the proof, we check that the number of sets of the form described in Equation~\eqref{eqn:test_sets} needed to cover the sieve $\mathcal{F}_n$, denoted by $N_*$, can be bounded by $\exp(Cn\epsilon_n^2)$ for some suitable constant $C$. The number $N_*$ is called a covering number of $\mathcal{F}_n$. A closely related quantity is the packing number, which is defined as the maximum number of disjoint balls that can be centered in a set and which upper bounds the covering number. Both the covering number and the packing number can be used as measures of the complexity of a given set \citep{ghosal2017fundamentals}.
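As a didactic aside, the fact that a maximal $\epsilon$-packing is automatically an $\epsilon$-covering (so that covering numbers are bounded by packing numbers) can be illustrated with a short greedy construction on a discretized unit interval; the grid size and radius below are arbitrary choices for illustration.

```python
# Greedy maximal packing of [0, 1]: keep adding centers that are more than
# eps away from all existing centers.  Maximality then forces every grid
# point to lie within eps of some center, i.e. the centers also cover.
eps = 0.1
grid = [i / 1000.0 for i in range(1001)]  # fine discretization of [0, 1]

centers = []
for x in grid:
    if all(abs(x - c) > eps for c in centers):
        centers.append(x)

# The centers form an eps-packing ...
assert all(abs(a - b) > eps for i, a in enumerate(centers) for b in centers[:i])
# ... and, by maximality, an eps-covering of the grid.
assert all(any(abs(x - c) <= eps for c in centers) for x in grid)
```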
The packing number of a set usually depends exponentially on the set's dimension. Because \citet{Ning2020} studied posteriors which place positive probability on exactly sparse parameters, they were able to directly bound the packing number of suitable low-dimensional sets. In our case, which uses an absolutely continuous prior, we need to instead control the packing number of ``effectively low dimensional'' spaces. Lemma~\ref{lemma:packing_number_lp} provides a sufficient condition under which the complexity (measured by the packing number) of a set of ``effectively sparse'' vectors can be bounded by the complexity of a set of exactly sparse vectors. \begin{myLemma}[packing a shallow cylinder in Lp] \label{lemma:packing_number_lp} For a set of the form $E=A \times [-\delta,\delta]^{Q-s}\subset \mathbb{R}^Q$ where $A\subset \mathbb{R}^s$ (with $s>0$ and $Q\ge s+1$ integers), for $1\le p<\infty$ and a given $T>1$, if $\delta<\frac{\epsilon}{2[T(Q-s)]^{1/p}}$, we have the packing number: \begin{align*} D(\epsilon, A, ||\cdot||_p) \le D(\epsilon,E, ||\cdot ||_p)\le D((1-T^{-1})^{1/p}\epsilon, A, ||\cdot||_p) \end{align*} \end{myLemma} \begin{proof} The lower bound is trivial by observing $A\times\{0\}^{Q-s}\subset E$ and the packing number of $A\times\{0\}^{Q-s}$ is exactly the packing number of $A$. For the upper bound, we show that for each packing of $E,$ we can slice that packing with the 0-plane to form a packing of $A$ with the same number of balls but smaller radius (see Figure~\ref{fig:packing} for an illustration). We first show any Lp $\epsilon/2-$ball $B_\theta(\epsilon/2)$ centered in the set $E$ intersects the plane $\mathbb{R}^{s}\times \{0\}^{Q-s}$. Assume the center is $\theta=(x_1,\dots, x_Q)$. It suffices to show that the center's distance to the plane is less than the radius of the ball. Since the center is in $E$, we have $|x_i|\le \delta$ for the last $Q-s$ coordinates.
Denote the projection of the center on the plane as $\theta_A=(x_1,\dots, x_s,0)\in A\times \{0\}^{Q-s}$. Then the Lp distance from the center to the plane is \begin{align*} ||\theta_A-\theta||_p^p=\sum_{i=s+1}^Q |x_i|^p\le (Q-s)\delta^p<T^{-1}(\epsilon/2)^{p} \end{align*} Next we show the slice $B_\theta(\epsilon/2)\cap (\mathbb{R}^{s}\times \{0\}^{Q-s})$ is also a ball centered at $\theta_A$ in the lower dimensional plane. It suffices to show that the boundary is a sphere. Suppose we take a point $a$ from the boundary of $B_\theta(\epsilon/2)\cap (\mathbb{R}^{s}\times \{0\}^{Q-s})$. The vector from the center to this point can be decomposed into the sum of two orthogonal components, namely the vector from $\theta_A$ to $a$ and the vector from $\theta$ to $\theta_A$; in this case we have \begin{align*} ||a-\theta_A||_p^p+||\theta_A-\theta||_p^p=||a-\theta||_p^p=\epsilon^p/2^p \end{align*} because $a-\theta_A$ has all 0 entries in the last $Q-s$ coordinates and $\theta_A-\theta$ has all 0 entries in the first $s$ coordinates. Thus any such point has a fixed distance to $\theta_A$, the projection of the center $\theta$ on the plane of $A$. Notice that \begin{align*} ||a-\theta_A||_p^p=\epsilon^p/2^p-||\theta_A-\theta||_p^p, \end{align*} which is fixed. Thus the collection of such points $a$ forms a sphere on $A$'s plane. From here, we can also lower bound the radius of the slice by $(1-T^{-1})^{1/p}\epsilon/2$: since $||\theta_A-\theta||_p^p<T^{-1}(\epsilon/2)^p$, the radius satisfies $||a-\theta_A||_p> (1-T^{-1})^{1/p}\epsilon/2$. Thus the smaller ball must lie within the slice, i.e. \begin{equation} \label{eqn:lower_dim_balls} \begin{aligned} B_{\theta_A}((1-T^{-1})^{1/p}\epsilon/2)\times\{0\}^{Q-s}\subset \left( B_\theta(\epsilon/2)\cap (\mathbb{R}^s\times\{0\}^{Q-s})\right) \subset B_\theta(\epsilon/2) \end{aligned} \end{equation} That is, any $\epsilon/2-$ball centered in $E$ has a corresponding lower dimensional ball of radius $(1-T^{-1})^{1/p}\epsilon/2$ centered in $A$.
With the above observations in hand, we can now prove the inequality by contradiction. Suppose we have a packing $\{\theta_1,\dots,\theta_D\}$ of $E$, where $D$ is larger than the packing number of $A$ appearing in the upper bound of the lemma. By Equation~\eqref{eqn:lower_dim_balls}, the lower dimensional balls $B_{\theta_{iA}}((1-T^{-1})^{1/p}\epsilon/2)$ must also be disjoint. Since the centers of these balls satisfy $\theta_{iA}\in A$, the balls form a packing of $A$ with radius $\epsilon'=(1-T^{-1})^{1/p}\epsilon$. That is, we can find a packing with more balls than the packing number, yielding the desired contradiction. Thus we must have \begin{align*} D\le D((1-T^{-1})^{1/p}\epsilon, A, ||\cdot||_p) \end{align*} \end{proof} \begin{figure}[htp] \centering \includegraphics[width = 0.5\linewidth]{Figs/packing2.pdf} \caption{A schematic of the argument used in the proof of the packing number lemma. We show two disjoint unit L1 balls (red) centered at $(0.8,0,0.5)$ and $(-.3,1,-.2)$, both within $A \times [-0.5,0.5]$ (with $A=[-1,1]\times [-1,1]$ shown as the middle plane). Their slices in the $z=0$ plane (blue) also form L1 balls in $\mathbb{R}^2$ whose radii are lower bounded and whose centers lie within $A$, thus inducing a packing of the lower dimensional set.} \label{fig:packing} \end{figure} Now we can bound the logarithm of the covering number $\log(N_*)$ similarly to \citet{Ning2020}. \begin{align*} \log(N_*)\le & \log\left[N\left(\frac{1}{\sqrt{2C_3np}},\{\Psi\in \mathcal{B}^\Psi_n:||\Psi||_1\le2C_3p\},||\cdot||_1\right)\right]\\ &+\log\left[N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{\Omega \in \mathcal{B}^\Omega_n,||\Omega||_1\le 8C_3q\},||\cdot||_1\right)\right] \end{align*} The two terms above can be treated in a similar way. Denote $\max\{p,q,s_0^B,s_0^\Omega\}=s^\star$.
There are multiple ways to allocate the effective zeros, which introduces the binomial coefficients below: \begin{align*} &N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{\Omega \in \mathcal{B}^\Omega_n,||\Omega||_1\le 8C_3q\},||\cdot||_1\right)\\ \le & \binom{Q}{C_3's^\star} N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{V\in \mathbb{R}^{Q+q}:|v_i|<\delta_\omega\text { for $1\le i\le Q+q-C_3's^\star$},||V||_1\le 8C_3q\},||\cdot||_1\right)\\ &N\left(\frac{1}{\sqrt{2C_3np}},\{\Psi\in \mathcal{B}^\Psi_n:||\Psi||_1\le2C_3p\},||\cdot||_1\right)\\ \le & \binom{pq}{C_3' s^\star} N\left(\frac{1}{\sqrt{2C_3np}},\{V\in \mathbb{R}^{pq}:|v_i|<\delta_\psi\text { for $1\le i\le pq-C_3's^\star$},||V||_1\le 2C_3p\},||\cdot||_1\right)\\ \end{align*} Note that $\Omega$ has $Q + q < 2Q$ free parameters. First, we have \begin{align*} \log \binom{Q}{C_3's^\star}&\lesssim s^\star \log(Q)\lesssim n\epsilon_n^2 \\ \log \binom{pq}{C_3's^\star}&\lesssim s^\star \log(pq)\lesssim n\epsilon_n^2 \\ \end{align*} We further bound the covering numbers using the result in Lemma~\ref{lemma:packing_number_lp}.
Observe that $\{||V||_1\le 8C_3q\}\cap \{|v_i|<\delta_\omega \text{ for } 1\le i\le Q+q-C_3's^\star\}\subset \{||V'||_1\le 8C_3q\}\times[-\delta_\omega,\delta_\omega]^{Q+q-C_3's^\star}$, where $V'\in \mathbb{R}^{C_3's^\star}$, so we have \begin{align*} &N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{V:|v_i|<\delta_\omega\text { for $1\le i\le Q+q-C_3's^\star$},||V||_1\le 8C_3q\},||\cdot||_1\right)\\ \le & N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{V'\in \mathbb{R}^{C_3's^\star}:||V'||_1\le 8C_3q\}\times [-\delta_\omega,\delta_\omega]^{Q+q-C_3's^\star},||\cdot||_1\right)\\ \end{align*} We check the condition of Lemma~\ref{lemma:packing_number_lp} (with $p=1$ and $T=2$). By our assumption on $\xi_0$, we have: \begin{align*} (Q+q-C_3's^\star)\delta_\omega&\le 2Q\delta_\omega = 2Q\frac{1}{\xi_0-\xi_1}\log\left[\frac{1-\eta}{\eta}\frac{\xi_0}{\xi_1}\right]\lesssim \frac{Q\log(\max\{p,q,n\})}{\max\{Q,pq,n\}^{4+b/2+b/2}}\\ &\le \frac{1}{\max\{Q,pq,n\}^{3+b/2}} \end{align*} The denominator dominates $C_3n\max\{p,q\}^{3/2}$, so for large enough $n$ we have $(Q+q-C_3's^\star)\delta_\omega\le \frac{1}{32C_3n\max\{p,q\}^{3/2}}$. Thus, by Lemma~\ref{lemma:packing_number_lp}, we can control the covering number by the packing number: \begin{align*} & \log N\left(\frac{1}{8C_3n\max\{p,q\}^{3/2}},\{V:|v_i|<\delta_\omega\text { for $1\le i\le Q+q-C_3's^\star$},||V||_1\le 8C_3q\},||\cdot||_1\right)\\ \le &\log D\left(\frac{1}{16C_3n\max\{p,q\}^{3/2}},\{V'\in \mathbb{R}^{C_3's^\star},||V'||_1\le 8C_3q\},||\cdot||_1\right)\\ \lesssim & s^\star \log(128C_3^2qn\max\{p,q\}^{3/2})\\ \lesssim& n\epsilon_n^2 \end{align*} Similarly for $\Psi$, \begin{align*} &N\left(\frac{1}{\sqrt{2C_3np}},\{V:|v_i|<\delta_\psi\text { for $1\le i\le pq-C_3's^\star$},||V||_1\le 2C_3p\},||\cdot||_1\right)\\ \le& N\left(\frac{1}{\sqrt{2C_3np}},\{V'\in \mathbb{R}^{C_3's^\star}:||V'||_1\le 2C_3p\}\times [-\delta_\psi,\delta_\psi]^{pq-C_3's^\star},||\cdot||_1\right)\\ \end{align*} We again check the condition of Lemma~\ref{lemma:packing_number_lp} (again
with $p=1$ and $T=2$): \begin{align*} (pq-C_3's^\star )\delta_\psi&\le pq\delta_\psi=\frac{pq}{\lambda_0-\lambda_1}\log\left[\frac{1-\theta}{\theta}\frac{\lambda_0}{\lambda_1}\right]\lesssim \frac{pq\log(\max\{p,q,n\})}{\max\{pq,n\}^{5/2+b/2+b/2}}\\ &\le \frac{1}{\max\{pq,n\}^{3/2+b/2}} \end{align*} The denominator dominates $\sqrt{2C_3np}$, so for large enough $n$ we have $(pq-C_3's^\star )\delta_\psi\le 1/(4\sqrt{2C_3np})$. Thus, similarly to the case of $\Omega$, we have: \begin{align*} & \log N\left(\frac{1}{\sqrt{2C_3np}},\{V:|v_i|<\delta_\psi\text { for $1\le i\le pq-C_3's^\star$},||V||_1\le 2C_3p\},||\cdot||_1\right)\\ \le &\log D\left(\frac{1}{2\sqrt{2C_3np}},\{V'\in \mathbb{R}^{C_3's^\star},||V'||_1\le 2C_3p\},||\cdot||_1\right)\\ \lesssim & s^\star \log(4C_3p\sqrt{2C_3np})\\ \lesssim& n\epsilon_n^2 \end{align*} This finally yields the contraction under log-affinity. \subsection{From log-affinity to $\Omega$ and $X\Psi\Omega^{-1}$} \label{sec:log_affinity_to_sieve} In this section we prove the main result, Theorem~\ref{thm:posterior_contraction}, using the contraction under log-affinity.
Denoting $\Delta_\Psi=\Psi-\Psi_0$ and $\Delta_\Omega=\Omega-\Omega_0$, the average log-affinity $\frac{1}{n}\sum \rho(f_i,f_{0i})$ is \begin{align*} \frac{1}{n}\sum \rho(f_i,f_{0i})=&-\log\left(\frac{|\Omega^{-1}|^{1/4}|\Omega_0^{-1}|^{1/4}}{|(\Omega^{-1}+\Omega_0^{-1})/2|^{1/2}}\right)\\ &+\frac{1}{8n}\sum X_i(\Psi\Omega^{-1}-\Psi_0\Omega_0^{-1})\left(\frac{\Omega^{-1}+\Omega_0^{-1}}{2}\right)^{-1}(\Psi\Omega^{-1}-\Psi_0\Omega_0^{-1})^{\top}X_i^{\top} \end{align*} Thus $\sum \rho(f_i,f_{0i})\lesssim n\epsilon_n^2$ implies \begin{equation} \label{eqn:log_affinity_implies_some_bound} \begin{aligned} -\log\left(\frac{|\Omega^{-1}|^{1/4}|\Omega_0^{-1}|^{1/4}}{|(\Omega^{-1}+\Omega_0^{-1})/2|^{1/2}}\right)&\lesssim \epsilon_n^2\\ \frac{1}{8n}\sum X_i(\Psi\Omega^{-1}-\Psi_0\Omega_0^{-1})\left(\frac{\Omega^{-1}+\Omega_0^{-1}}{2}\right)^{-1}(\Psi\Omega^{-1}-\Psi_0\Omega_0^{-1})^{\top}X_i^{\top}&\lesssim \epsilon_n^2\\ \end{aligned} \end{equation} This is almost the same as Equations 5.11--5.12 of \citet{Ning2020}. We can directly apply the result from \citet{Ning2020}'s Equation 5.11, as it is the same as the first bound in Equation~\eqref{eqn:log_affinity_implies_some_bound}. Because $\Psi_{0}$ and $\Omega^{-1}$ have bounded operator norms and because $\Delta_{\Omega}$ can be controlled, the cross-term is also controlled by $\epsilon_{n}.$ The first part of Equation~\eqref{eqn:log_affinity_implies_some_bound} implies $||\Omega^{-1}-\Omega_0^{-1}||_F^2\lesssim \epsilon_n^2$.
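As a numerical sanity check, the Gaussian affinity formula above can be verified in one dimension, where $\int\sqrt{f_0f_1}\,dx$ for $\mathcal{N}(\mu_0,\sigma_0^2)$ and $\mathcal{N}(\mu_1,\sigma_1^2)$ equals $\sqrt{2\sigma_0\sigma_1/(\sigma_0^2+\sigma_1^2)}\exp\{-(\mu_0-\mu_1)^2/(4(\sigma_0^2+\sigma_1^2))\}$; the parameter values below are arbitrary illustrations.

```python
import math

mu0, s0, mu1, s1 = 0.0, 1.0, 1.5, 1.3   # arbitrary illustrative parameters

def pdf(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Riemann sum of sqrt(f0 * f1) over a wide grid.
h = 0.001
numeric = sum(math.sqrt(pdf(-20 + i * h, mu0, s0) * pdf(-20 + i * h, mu1, s1)) * h
              for i in range(40001))
# One-dimensional analogue of the determinant-based affinity formula.
closed = math.sqrt(2 * s0 * s1 / (s0 ** 2 + s1 ** 2)) * \
    math.exp(-(mu0 - mu1) ** 2 / (4 * (s0 ** 2 + s1 ** 2)))

assert abs(numeric - closed) < 1e-6
```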
Meanwhile, $||\Omega^{-1}-\Omega_0^{-1}||_F^2\lesssim \epsilon_n^2$ implies that, for large enough $n$, the L2 operator norm of $\Omega$ is bounded: we assume bounds on the operator norm of $\Omega_0^{-1}$, and the difference $\Omega^{-1}-\Omega_0^{-1}$ cannot have eigenvalues so large that the sum $\Omega^{-1}$ has a zero eigenvalue. Using the inequality $||AB||_F\le |||A|||_2||B||_F$, observing $\Omega_0-\Omega=\Omega(\Omega^{-1}-\Omega_0^{-1})\Omega_0$, and using the assumption that $\Omega_0$ has bounded L2 operator norm, we conclude that Equation~\eqref{eqn:log_affinity_implies_some_bound} implies $||\Omega-\Omega_0||_F\lesssim \epsilon_n$. Since $|||\Omega^{-1}|||_2$ is bounded for large enough $n$, we can directly apply an argument from \citet{Ning2020} (specifically, the argument around their Equation 5.12) to conclude that the second part of Equation~\eqref{eqn:log_affinity_implies_some_bound} implies: \begin{align*} \epsilon_n^2&\gtrsim \frac{1}{8n}\sum ||X_i(\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)||_2^2|||\frac{\Omega^{-1}+\Omega_0^{-1}}{2}|||^{-1}_2\\ &\gtrsim\frac{1}{n}\sum ||X_i(\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)||_2^2/\sqrt{\epsilon_n^2+1} \end{align*} Combining all of these bounds yields the desired result. \subsection{Contraction of $\Psi$} Contraction of $\Psi$ requires more assumptions on the design matrix $X$. Similar to \citet{RockovaGeorge2018_ssl} and \citet{Ning2020}, we introduce the restricted eigenvalue \begin{align*} \phi^2(\tilde{s})=\inf \left\{\frac{||XA||^2_F}{n||A||_F^2}:0\le |\nu(A)|\le \tilde{s}\right\} \end{align*} With this definition, \begin{align*} ||X(\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0)||_F^2&\lesssim n\epsilon_n^2\\ ||\Omega-\Omega_0||_F^2&\lesssim \epsilon_n^2 \end{align*} implies the result in Equation~\eqref{eqn:recover_marginal_cg} of the main text.
Namely, \begin{align*} ||\Psi\Omega^{-1}-\Psi_0\Omega^{-1}_0||_F^2=||(\Delta_\Psi-\Psi_0\Omega^{-1}\Delta_\Omega)\Omega^{-1}||_F^2\lesssim \epsilon_n^2/\phi^2(s_0^\Psi+C_3's^\star) \end{align*} Since both $\Omega$ and $\Omega^{-1}$ have bounded operator norm when $||\Omega-\Omega_0||_F^2\lesssim \epsilon_n^2,$ for large enough $n$, we must have: \begin{align*} ||\Delta_\Psi||_F-||\Psi_0\Omega^{-1}\Delta_\Omega||_F\le ||\Delta_\Psi-\Psi_0\Omega^{-1}\Delta_\Omega||_F\lesssim \epsilon_n/\sqrt{\phi^2(s_0^\Psi+C_3's^\star)} \end{align*} Since $\Psi_0$ and $\Omega^{-1}$ have bounded operator norm, $||\Psi_0\Omega^{-1}\Delta_\Omega||_F\lesssim \epsilon_n$, and we must have: \begin{align*} ||\Delta_\Psi||_F\lesssim \epsilon_n/\sqrt{\min\{\phi^2(s_0^\Psi+C_3's^\star),1\}} \end{align*} Thus we can conclude \begin{align*} \sup_{\Psi\in \mathcal{T}_0,\Omega\in\mathcal{H}_0}\mathbb{E}_0 \Pi\left(||\Psi-\Psi_0||_F^2\ge \frac{M'\epsilon_n^2}{\min\{\phi^2(s_0^\Psi+C_3's^\star),1\}}\right)\to 0 \end{align*} \subsection{Updating $\Psi$ and $\theta$}
Fixing $(\Omega, \eta) = (\Omega^{(t-1)}, \eta^{(t-1)}),$ observe that \begin{align} \begin{split} \label{eq:updating_B_eqval} F^{(t)}(\Psi, \theta, \Omega^{(t-1)}, \eta^{(t-1)}) &= -\frac{1}{2}\tr\left((Y-X\Psi\Omega^{-1})\Omega(Y-X\Psi\Omega^{-1})^{\top}\right)+\log\pi(\Psi, \theta) \\ &= -\frac{1}{2}\tr\left((Y-X\Psi\Omega^{-1})\Omega\Omega^{-1}\Omega(Y-X\Psi\Omega^{-1})^{\top}\right)+\log\pi(\Psi, \theta) \\ &= -\frac{1}{2}\tr\left((Y\Omega-X\Psi)\Omega^{-1}(Y\Omega-X\Psi)^{\top}\right)+\log\pi(\Psi, \theta) \end{split} \end{align} where \begin{align} \begin{split} \label{eq:log_prior_Psi_theta} \log \pi(\Psi,\theta) &= \sum_{j = 1}^{p}{\sum_{k = 1}^{q}{\log\left(\theta\lambda_{1}e^{-\lambda_{1}\lvert \psi_{j,k} \rvert} + (1-\theta)\lambda_{0}e^{-\lambda_{0}\lvert \psi_{j,k} \rvert}\right)}} \\ &+ (a_{\theta} - 1)\log(\theta) + (b_{\theta} - 1)\log(1-\theta) \end{split} \end{align} We solve the optimization problem in Equation~\eqref{eq:updating_B_eqval} using a coordinate ascent strategy that iteratively updates $\Psi$ (resp. $\theta$) while holding $\theta$ (resp. $\Psi$) fixed. We run the coordinate ascent until every active $\psi_{j,k}$ has a relative change below the user-defined tolerance. \textbf{Updating $\theta$ given $\Psi$}. Notice that the objective in Equation~\eqref{eq:updating_B_eqval} depends on $\theta$ only through the $\log \pi(\Psi,\theta)$ term. Accordingly, to update $\theta$ conditionally on $\Psi,$ it is enough to maximize the expression in Equation~\eqref{eq:log_prior_Psi_theta} as a function of $\theta$ while keeping all $\psi_{j,k}$ terms fixed. We use Newton's method for this optimization and terminate once the Newton step size falls below the user-defined tolerance. \textbf{Updating $\Psi$ given $\theta$}.
With $\theta$ fixed, optimizing Equation~\eqref{eq:updating_B_eqval} is equivalent to solving \begin{align} \begin{split} \label{eq:Psi_penalty} \Psi^{(t)}&=\argmax_{\Psi} \left\{-\frac{1}{2}\tr\left((Y\Omega-X\Psi)\Omega^{-1}(Y\Omega-X\Psi)^\top\right)+\log\pi(\Psi\vert\theta)\right\}\\ &=\argmax_{\Psi} \left\{-\frac{1}{2}\tr\left((Y\Omega-X\Psi)\Omega^{-1}(Y\Omega-X\Psi)^\top\right)+\sum_{j,k}\log\left(\frac{\pi(\psi_{j,k}\vert\theta)}{\pi(0\vert\theta)}\right)\right\}\\ &=\argmax_{\Psi} \left\{-\frac{1}{2}\tr\left((Y\Omega-X\Psi)\Omega^{-1}(Y\Omega-X\Psi)^\top\right)+\sum_{j,k}\pen(\psi_{j,k}\vert\theta)\right\} \end{split} \end{align} where \begin{align*} \pen(\psi_{j,k}\vert\theta)&=\log\left(\frac{\pi(\psi_{j,k}\vert\theta)}{\pi(0\vert\theta)}\right)=-\lambda_1|\psi_{j,k}|+\log\left(\frac{p^\star(0,\theta)}{p^\star(\psi_{j,k},\theta)}\right). \end{align*} Following essentially the same arguments as those in \citet{Deshpande2019} and using the fact that the columns of $X$ have squared norm $n,$ the Karush-Kuhn-Tucker (KKT) condition for the optimization problem in the final line of Equation~\eqref{eq:Psi_penalty} tells us that the optimizer $\Psi^{*}$ satisfies \begin{equation} \label{eq:KKT_Psi_supp} \psi^{*}_{j,k}=n^{-1}\left[ \lvert z_{j,k} \rvert-\lambda^\star(\psi^{*}_{j,k},\theta)\right]_{+}\sign(z_{j,k}), \end{equation} where \begin{align*} z_{j,k}&= n\psi^{*}_{j,k}+ X_{j}^{\top}\mathbf{r}_{k} + \sum_{k' \neq k}\frac{(\Omega^{-1})_{k,k'}}{(\Omega^{-1})_{k,k}}X_j^{\top} \mathbf{r}_{k'}\\ \mathbf{r}_{k'}&=(Y\Omega-X\Psi^{*})_{k'} \\ \lambda^\star(\psi^{*}_{j,k},\theta)&=\lambda_1 p^\star (\psi^{*}_{j,k},\theta)+\lambda_0(1-p^\star (\psi^{*}_{j,k},\theta)).
\end{align*} The KKT condition immediately suggests a cyclical coordinate ascent strategy for solving the problem in Equation~\eqref{eq:Psi_penalty} that involves soft thresholding the running estimates of $\psi_{j,k}.$ Like \citet{RockovaGeorge2018_ssl} and \citet{Deshpande2019}, we can, however, obtain a more refined characterization of the global mode $\tilde{\Psi} = (\tilde{\psi}_{j,k})$: $$ \tilde\psi_{j,k} = n^{-1}\left[\vert z_{j,k}\vert-\lambda^\star(\tilde{\psi}_{j,k},\theta)\right]_{+}\sign(z_{j,k}) \times \mathbbm{1}\left(\lvert z_{j,k} \rvert > \Delta_{j,k}\right), $$ where \begin{align*} \Delta_{j,k}=\inf_{t>0}\left\{\frac{nt}{2}-\frac{\pen(\tilde{\psi}_{j,k},\theta)}{(\Omega^{-1})_{k,k}t}\right\}. \end{align*} Though the exact thresholds $\Delta_{j,k}$ are difficult to compute, they can be bounded using analogs of Theorem 2.1 of \citet{RockovaGeorge2018_ssl} and Proposition 2 of \citet{Deshpande2019}. Specifically, suppose we have $(\lambda_0-\lambda_1)>2\sqrt{n(\Omega^{-1})_{k,k}}$ and $(\lambda^\star(0,\theta)-\lambda_1)^2>-2n(\Omega^{-1})_{k,k}\log p^\star(0,\theta)$.
Then we have $\Delta_{j,k}^L\le \Delta_{j,k}\le \Delta^U_{j,k}$, where: \begin{align*} \Delta_{j,k}^L&=\sqrt{-2n((\Omega^{-1})_{k,k})^{-1}\log p^\star(0,\theta)-((\Omega^{-1})_{k,k})^{-2}d}+\lambda_1/(\Omega^{-1})_{k,k}\\ \Delta_{j,k}^U&=\sqrt{-2n((\Omega^{-1})_{k,k})^{-1}\log p^\star(0,\theta)}+\lambda_1/(\Omega^{-1})_{k,k}\\ \end{align*} with $d=-(\lambda^\star (\delta_{c+},\theta)-\lambda_1)^2-2n(\Omega^{-1})_{k,k}\log p^\star(\delta_{c+},\theta)$, where $\delta_{c+}$ is the largest root of $\pen''(x|\theta)=(\Omega^{-1})_{k,k}.$ Our refined characterization of $\tilde{\Psi}$ suggests a cyclical coordinate ascent strategy that combines hard thresholding at $\Delta_{j,k}$ and soft thresholding at $\lambda^{\star}_{j,k}.$ \begin{myRemark} Equation~\eqref{eq:updating_B_eqval} and our approach to solving the optimization problem are extremely similar to Equation 3 and the coordinate ascent strategy used by \citet{Deshpande2019}, who fit sparse marginal multivariate linear models with spike-and-slab LASSO priors.
This is because if $Y \sim \mathcal{N}(X\Psi\Omega^{-1},\Omega^{-1})$ in our chain graph model, then $Y\Omega \sim \mathcal{N}(X\Psi, \Omega).$ Thus, if we fix the value of $\Omega,$ we can use any computational strategy for estimating marginal effects in the multivariate linear regression model to estimate $\Psi$ by working with the transformed data $Y\Omega.$ \end{myRemark} \subsection{Updating $\Omega$ and $\eta$} Fixing $\Psi = \Psi^{(t)}$ and $\theta = \theta^{(t)}$, we compute $\Omega^{(t)}$ and $\eta^{(t)}$ by optimizing the function \begin{align} \begin{split} \label{eq:Omega_eta_objective} F^{(t)}(\Psi^{(t)},\theta^{(t)},\Omega,\eta) &= \frac{n}{2}\left[\log\lvert\Omega\rvert - \tr(S\Omega) - \tr(M\Omega^{-1})\right] -\sum_{k < k'}{\xi^{\star}_{k,k'}\lvert \omega_{k,k'}\rvert} - \xi\sum_{k=1}^{q}{\omega_{k,k}} \\ ~&+\left(a_{\eta} - 1 + \sum_{k < k'}{q_{k,k'}^{\star}}\right) \times \log(\eta) \\ ~&+ \left(b_{\eta} - 1 + q(q-1)/2 - \sum_{k < k'}{q_{k,k'}^{\star}}\right) \times \log(1-\eta) \end{split} \end{align} where $S=\frac{1}{n}Y^{\top}Y$ and $M=\frac{1}{n}(X\Psi)^{\top}X\Psi$. We immediately observe that the expression in Equation~\eqref{eq:Omega_eta_objective} is separable in $\Omega$ and $\eta,$ meaning that we can compute $\Omega^{(t)}$ and $\eta^{(t)}$ separately. Specifically, we have \begin{equation} \label{eq:eta_update} \eta^{(t)}=\frac{a_{\eta}-1+\sum_{k<k'}q_{k,k'}^{\star}}{a_{\eta}+b_{\eta}-2+q(q-1)/2} \end{equation} and \begin{align} \label{eq:Omega_update} \Omega^{(t)} &= \argmax_{\Omega>0}\left\{\frac{n}{2}\left[\log(\lvert\Omega\rvert)-\tr(S\Omega)-\tr(M\Omega^{-1})\right] -\sum_{k<k'}\xi^\star_{k,k'}|\omega_{k,k'}|-\xi\sum_{k=1}^q \omega_{k,k} \right\}.
\end{align} The objective function in Equation~\eqref{eq:Omega_update} is similar to a graphical LASSO \citep[GLASSO;][]{Friedman2008} problem insofar as both problems involve a term like $\log\lvert\Omega\rvert -\tr(S\Omega)$ and separable $L1$ penalties on the off-diagonal elements of $\Omega.$ However, Equation~\eqref{eq:Omega_update} includes an additional term $\tr(M\Omega^{-1})$, which does not appear in the GLASSO. This term arises through the entanglement of $\Psi$ and $\Omega$ in the Gaussian chain graph model and we accordingly call the problem in Equation~\eqref{eq:Omega_update} the CGLASSO problem. We solve this problem by (i) forming a quadratic approximation of the objective, (ii) computing a suitable Newton direction, and (iii) following that Newton direction for a suitable step size. We detail this solution strategy in Section~\ref{appendix:cgQUIC}. \subsection{Motivation} There are between 10 and 100 trillion microorganisms living within each person's lower intestines. These bacteria, fungi, viruses and other microbes constitute the human gut \textit{microbiome} \citep{guinane2013role}. Recent research suggests that the composition of the human gut microbiome can have a substantial effect on our health and well-being \citep{shreiner2015gut}: microbes living in the gut play an integral role in our digestive and metabolic processes \citep{larsbrink2014discrete,belcheva2014gut}; they can mediate our immune response to various diseases \citep{kamada2014regulation, kim2017interplay}; and they can even influence disease pathogenesis and progression \citep{scher2013expansion, wang2011gut}. Additional emerging evidence suggests that the gut microbiome mediates the effects of lifestyle factors such as diet and medication use on human health \citep{singh2017influence,battson2018gut,hills2019gut}. That is, such lifestyle factors may first affect the composition of the gut microbiome, which in turn influences health outcomes.
In fact, lifestyle factors and medication use can impact the composition of the microbiome in direct and indirect ways. For instance, many antibiotics target and kill certain microbial species, thereby directly affecting the abundances of the targeted species. However, by killing the targeted species, the antibiotics may reduce the overall competition for nutrients, thereby allowing non-targeted species to proliferate. In other words, by directly reducing the abundance of certain targeted microbes, antibiotics may indirectly increase the abundance of other non-targeted species. Our goal in this paper is to estimate such direct and indirect effects. \subsection{Sparse chain graph models} At a high level, the statistical challenge is to estimate the functional relationship between a vector of predictors $\bm{x} \in \mathbb{R}^{p}$ and a vector of responses $\bm{y} \in \mathbb{R}^{q}.$ In our application, we re-analyze a dataset from \cite{claesson2012gut} containing $n = 178$ predictor-response pairs $(\bm{x}, \bm{y})$ where $\bm{x}$ contains measures of $p = 11$ factors related to diet, medication use, and residence type, and $\bm{y}$ contains the logit-transformed relative abundances of $q = 14$ different microbial taxa. Our goal is to uncover the direct and indirect effects of these factors on the abundance of each microbial taxon as well as any interactions between microbial taxa. The Gaussian chain graph model \citep{lauritzen1989graphical,frydenberg1990chain,lauritzen2002chain}, which simultaneously parameterizes the direct effects of predictors on responses and the residual dependence structure between responses, is natural for these data. The model asserts that \begin{equation} \label{eq:cg_model} \bm{y} \vert \Psi, \Omega, \bm{x} \sim \mathcal{N}(\Omega^{-1}\Psi^{\top}\bm{x}, \Omega^{-1}), \end{equation} where $\Psi$ is a $p \times q$ matrix and $\Omega$ is a symmetric, positive definite $q \times q$ matrix.
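For concreteness, data from the generative model in Equation~\eqref{eq:cg_model} can be simulated directly; the sketch below uses made-up dimensions and parameter values, and simply checks that the residuals have covariance $\Omega^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 2000, 3, 2                      # made-up dimensions

Psi = np.array([[1.0, 0.0],
                [0.0, -2.0],
                [0.5, 0.0]])              # p x q matrix of direct effects
Omega = np.array([[2.0, 0.6],
                  [0.6, 1.5]])            # q x q residual precision matrix
Sigma = np.linalg.inv(Omega)              # residual covariance Omega^{-1}

X = rng.normal(size=(n, p))
mean = X @ Psi @ Sigma                    # i-th row is (Omega^{-1} Psi^T x_i)^T
Y = mean + rng.multivariate_normal(np.zeros(q), Sigma, size=n)

# By construction, the residuals Y - mean have covariance Omega^{-1}.
resid_cov = np.cov((Y - mean).T)
```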
As we detail in Section~\ref{sec:chain_graph_model}, the $(j,k)$ entry of $\Psi,$ $\psi_{j,k},$ quantifies the direct effect of the $j^{\text{th}}$ predictor $X_{j}$ on the $k^{\text{th}}$ response $Y_{k}.$ The $(k,k')$ entry of $\Omega,$ $\omega_{k,k'},$ encodes the residual conditional covariance between outcomes $Y_{k}$ and $Y_{k'}$ that remains after accounting for the direct effects of the predictors and all of the other response variables. To fit the model in Equation~\eqref{eq:cg_model}, we must estimate $pq + q(q+1)/2$ unknown parameters. When the total number of unknown parameters is comparable to or larger than the sample size $n,$ it is common to assume that the matrices $\Psi$ and $\Omega$ are sparse. If $\omega_{k,k'} = 0,$ we can conclude that after adjusting for the covariates and all other outcomes, outcomes $Y_{k}$ and $Y_{k'}$ are conditionally independent. If $\psi_{j,k} = 0,$ we can conclude that $X_{j}$ does not have a \textit{direct} effect on the $k^{\text{th}}$ outcome variable $Y_{k}.$ Furthermore, when $\psi_{j,k} = 0,$ any marginal correlation between $X_{j}$ and $Y_{k}$ is due solely to $X_{j}$'s \textit{direct} effects on other outcomes $Y_{k'}$ that are themselves conditionally correlated with $Y_{k}.$ \subsection{Our contributions} We introduce the chain graph spike-and-slab LASSO (cgSSL) procedure for fitting the model in Equation~\eqref{eq:cg_model} in a sparse fashion. At a high level, we place separate spike-and-slab LASSO priors \citep{RockovaGeorge2018_ssl} on the entries of $\Psi$ and on the off-diagonal entries of $\Omega$ in Equation~\eqref{eq:cg_model}. We derive an efficient Expectation Conditional Maximization algorithm to compute the \textit{maximum a posteriori} (MAP) estimates of $\Psi$ and $\Omega.$ Our algorithm is equivalent to solving a series of maximum likelihood problems with \textit{self-adaptive} penalties.
On synthetic data, we demonstrate that our algorithm displays excellent support recovery and estimation performance. We further establish the posterior contraction rate for each of $\Psi, \Omega, \Psi\Omega^{-1},$ and $X\Psi\Omega^{-1}.$ Our contraction results imply that our proposed cgSSL procedure consistently estimates these quantities and also provide an upper bound on the minimax optimal rate of estimating these quantities in the Frobenius norm. To the best of our knowledge, ours are the first posterior contraction results for fitting sparse Gaussian chain graph models with element-wise priors on $\Psi$ and $\Omega.$ Here is an outline of the rest of our paper. We review the Gaussian chain graph model and the spike-and-slab LASSO in Section~\ref{sec:background}. We next introduce the cgSSL procedure in Section~\ref{sec:proposed_method} and carefully derive our ECM algorithm for finding the MAP in Section~\ref{sec:map_estimation}. We present our asymptotic results in Section~\ref{sec:theory} before demonstrating the excellent finite sample performance of the cgSSL on several synthetic datasets in Section~\ref{sec:synthetic_experiments}. We apply the cgSSL to our motivating gut microbiome data in Section~\ref{sec:real_data_experiments}. We conclude in Section~\ref{sec:discussion} by outlining several avenues for future development.
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Background} \label{sec:background} \input{background} \section{Introducing the cgSSL} \label{sec:proposed_method} \input{proposed_method} \section{Asymptotic theory of cgSSL} \label{sec:theory} \input{asymptotics} \section{Synthetic experiments} \label{sec:synthetic_experiments} \input{synthetic_experiments} \section{Real data experiments} \label{sec:real_data_experiments} \input{real_data_experiments} \section{Discussion} \label{sec:discussion} \input{discussion} \section*{Acknowledgements} \input{acknowledgements} \subsection{The cgSSL prior} \label{sec:cgSSL_prior} To quantify the prior belief that many entries in $\Psi$ are essentially negligible, we model each $\psi_{j,k}$ as having been drawn either from a spike distribution, which is sharply concentrated around zero, or a slab distribution, which is much more diffuse. More specifically, we take the spike distribution to be $\text{Laplace}(\lambda_{0})$ and the slab distribution to be $\text{Laplace}(\lambda_{1}),$ where $0 < \lambda_{1} \ll \lambda_{0}$ are fixed positive constants. This way, the spike distribution is much more heavily concentrated around zero than is the slab. We further let $\theta \in [0,1]$ be the prior probability that each $\psi_{j,k}$ is drawn from the slab and model the $\psi_{j,k}$'s as conditionally independent given $\theta.$ Thus, the prior density for $\Psi$, conditional on $\theta,$ is given by \begin{equation} \label{eq:psi_prior_density} \pi(\Psi \vert \theta) = \prod_{j = 1}^{p}{\prod_{k = 1}^{q}{\left(\frac{\theta\lambda_{1}}{2}e^{-\lambda_{1}\lvert \psi_{j,k}\rvert} + \frac{(1-\theta)\lambda_{0}}{2}e^{-\lambda_{0}\lvert \psi_{j,k} \rvert}\right)}}. \end{equation} Since $\Omega$ is symmetric, it is enough to specify a prior on the entries $\omega_{k,k'}$ where $k \leq k'.$ To this end, we begin by placing an entirely analogous spike-and-slab prior on the off-diagonal entries. 
That is, we model each $\omega_{k,k'}$ as being drawn from a $\text{Laplace}(\xi_{1}),$ with probability $\eta \in [0,1],$ or a $\text{Laplace}(\xi_{0}),$ with probability $1 - \eta,$ where $0 < \xi_{1} \ll \xi_{0}.$ We similarly model each $\omega_{k,k'}$ as conditionally independent given $\eta$ and place independent $\text{Exp}(\xi)$ priors on the diagonal entries of $\Omega.$ We then truncate the resulting distribution of $\Omega \vert \eta$ to the cone of symmetric positive definite matrices, yielding the prior density \begin{equation} \label{eq:omega_prior_density} \pi(\Omega \vert \eta) \propto \left(\prod_{1 \leq k < k' \leq q}\left[\frac{\eta\xi_1}{2}e^{-\xi_1|\omega_{k,k'}|}+\frac{(1-\eta)\xi_0}{2}e^{-\xi_0|\omega_{k,k'}|}\right]\right) \times \left(\prod_{k=1}^q \xi e^{-\xi\omega_{k,k}}\right) \times \mathbbm{1}(\Omega\succ 0). \end{equation} Observe that $1-\theta$ and $1-\eta$ respectively quantify the proportion of entries in $\Psi$ and $\Omega$ that are essentially negligible. To model our uncertainty about these proportions, we place Beta priors on each of $\theta$ and $\eta.$ Specifically, we independently model $\theta \sim \text{Beta}(a_{\theta}, b_{\theta})$ and $\eta \sim \text{Beta}(a_{\eta}, b_{\eta}),$ where $a_{\theta}, b_{\theta}, a_{\eta}, b_{\eta} > 0$ are fixed positive constants. \subsection{Targeting the MAP} \label{sec:map_estimation} Unfortunately, the posterior distribution of $(\Psi, \theta, \Omega, \eta) \vert \bm{Y}$ is analytically intractable. Further, it is generally high-dimensional and rather multimodal, rendering stochastic search techniques like Markov Chain Monte Carlo computationally impractical.
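Before turning to optimization, we note that the mixture prior density in Equation~\eqref{eq:psi_prior_density} is cheap to evaluate entry-wise. The following is a hedged sketch for the $\Psi$ prior, assuming NumPy; the hyperparameter values are illustrative only.

```python
import numpy as np

def log_ss_laplace(psi, theta, lam1=1.0, lam0=20.0):
    """Log density of the mixture theta * Laplace(lam1) +
    (1 - theta) * Laplace(lam0), evaluated entry-wise at psi."""
    slab = np.log(theta) + np.log(lam1 / 2.0) - lam1 * np.abs(psi)
    spike = np.log1p(-theta) + np.log(lam0 / 2.0) - lam0 * np.abs(psi)
    return np.logaddexp(slab, spike)  # stable log(e^slab + e^spike)

# Log prior for an illustrative 2 x 2 matrix Psi with theta = 0.2.
Psi = np.array([[0.0, 0.05], [1.5, -2.0]])
log_prior = log_ss_laplace(Psi, theta=0.2).sum()
```

The density is sharply peaked at zero, where the spike component dominates, and has heavy Laplace tails, where the slab dominates.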
We instead follow \citet{RockovaGeorge2018_ssl}'s example and focus on finding the \textit{maximum a posteriori} (MAP) estimate of $(\Psi, \theta, \Omega, \eta).$ Throughout, we assume that the columns of $X$ have been centered and scaled so that each column has squared norm $n.$ To this end, we attempt to maximize the log posterior density \begin{align} \begin{split} \label{eq:log_density} \log \pi(\Psi, \theta, \Omega,\eta \vert \bm{Y}) &= \frac{n}{2}\log\lvert\Omega\rvert -\frac{1}{2}\tr\left((Y-X\Psi\Omega^{-1})\Omega(Y-X\Psi\Omega^{-1})^\top\right) \\ &+ \sum_{j = 1}^{p}{\sum_{k = 1}^{q}{\log\left(\theta\lambda_{1}e^{-\lambda_{1}\lvert \psi_{j,k}\rvert} + (1-\theta)\lambda_{0}e^{-\lambda_{0}\lvert \psi_{j,k} \rvert}\right)}} \\ &+ \sum_{k = 1}^{q-1}{\sum_{k' > k}^{q}{\log\left(\eta\xi_{1}e^{-\xi_1\lvert\omega_{k,k'}\rvert}+(1-\eta)\xi_{0}e^{-\xi_0\lvert\omega_{k,k'}\rvert}\right)}} \\ &- \sum_{k = 1}^{q}{\xi \omega_{k,k}} + \log{\mathbbm{1}(\Omega \succ 0)} \\ &+ (a_{\theta}-1)\log(\theta) + (b_{\theta} - 1)\log(1-\theta) \\ &+ (a_{\eta} - 1)\log(\eta) + (b_{\eta} - 1)\log(1-\eta). \\ \end{split} \end{align} Optimizing the log posterior density directly is complicated by the non-concavity of $\log \pi(\Omega \vert \eta).$ Instead, following \citet{Deshpande2019}, we iteratively optimize a surrogate objective using an EM-like algorithm.
To motivate this approach, observe that we can obtain the prior density $\pi(\Omega \vert \eta)$ in Equation~\eqref{eq:omega_prior_density} by marginalizing an \textit{augmented} prior $$ \pi(\Omega \vert \eta) = \int{\pi(\Omega \vert \boldsymbol{\delta})\pi(\boldsymbol{\delta} \vert \eta)d\boldsymbol{\delta}} $$ where $\boldsymbol{\delta} = \{\delta_{k,k'}: 1 \leq k < k' \leq q\}$ is a collection of $q(q-1)/2$ i.i.d.~$\text{Bernoulli}(\eta)$ variables and $$ \pi(\Omega \vert \boldsymbol{\delta}) \propto \left(\prod_{1\le k<k'\le q}{\left(\xi_{1}e^{-\xi_{1}\lvert \omega_{k,k'}\rvert}\right)^{\delta_{k,k'}}\left(\xi_{0}e^{-\xi_{0}\lvert\omega_{k,k'}\rvert}\right)^{1-\delta_{k,k'}}}\right)\times \left(\prod_{k=1}^q \xi e^{-\xi\omega_{k,k}}\right) \times \mathbbm{1}(\Omega \succ 0). $$ In our augmented prior, $\delta_{k,k'}$ indicates whether $\omega_{k,k'}$ is drawn from the slab $(\delta_{k,k'} = 1$) or the spike ($\delta_{k,k'} = 0$). The above marginalization immediately suggests an EM algorithm: rather than optimize $\log \pi(\Psi, \theta, \Omega, \eta \vert \bm{Y})$ directly, we can iteratively optimize a surrogate objective formed by marginalizing the augmented log posterior density. That is, starting from some initial guess $(\Psi^{(0)}, \theta^{(0)}, \Omega^{(0)}, \eta^{(0)}),$ for $t \geq 1,$ the $t^{\text{th}}$ iteration of our algorithm consists of two steps.
In the first step, we compute the surrogate objective $$ F^{(t)}(\Psi, \theta, \Omega, \eta) = \mathbb{E}_{\boldsymbol{\delta} \vert \cdot}[\log \pi(\Psi, \theta, \Omega, \eta, \boldsymbol{\delta} \vert \bm{Y}) \vert \Psi = \Psi^{(t-1)}, \Omega = \Omega^{(t-1)}, \theta = \theta^{(t-1)}, \eta = \eta^{(t-1)}], $$ where the expectation is taken with respect to the conditional posterior distribution of the indicators $\boldsymbol{\delta}$ given the current value of $(\Psi, \theta, \Omega, \eta).$ Then in the second step, we maximize the surrogate objective and set $(\Psi^{(t)}, \theta^{(t)}, \Omega^{(t)}, \eta^{(t)}) = \argmax F^{(t)}(\Psi,\theta, \Omega, \eta).$ It turns out that, given $\Omega$ and $\eta,$ the indicators $\delta_{k,k'}$ are conditionally independent Bernoulli random variables whose means are easy to evaluate, making it simple to compute a closed form expression for the surrogate objective $F^{(t)}.$ Unfortunately, maximizing $F^{(t)}$ is still difficult. Consequently, similar to \citet{Deshpande2019}, we carry out two conditional maximizations, first optimizing with respect to $(\Psi,\theta)$ while holding $(\Omega,\eta)$ fixed, and then optimizing with respect to $(\Omega,\eta)$ while holding $(\Psi,\theta)$ fixed. That is, in the second step of each iteration of our algorithm, we set \begin{align} (\Psi^{(t)}, \theta^{(t)}) &= \argmax_{\Psi,\theta}~ F^{(t)}(\Psi, \theta, \Omega^{(t-1)}, \eta^{(t-1)}) \label{eq:psi_theta_update} \\ (\Omega^{(t)}, \eta^{(t)}) &= \argmax_{\Omega,\eta}~ F^{(t)}(\Psi^{(t)}, \theta^{(t)},\Omega, \eta). \label{eq:omega_eta_update} \end{align} In summary, we propose finding the MAP estimate of $(\Psi, \theta, \Omega, \eta)$ using an Expectation Conditional Maximization \citep[ECM;][]{MengRubin1993_ecm} algorithm.
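Because the indicators $\delta_{k,k'}$ are conditionally independent Bernoulli variables, the E-step amounts to evaluating their posterior slab probabilities. A small sketch of that computation, assuming NumPy, with illustrative hyperparameter values:

```python
import numpy as np

def q_star(omega, eta, xi1=0.5, xi0=10.0):
    """Conditional probability that omega was drawn from the
    Laplace(xi1) slab rather than the Laplace(xi0) spike."""
    slab = eta * xi1 * np.exp(-xi1 * np.abs(omega))
    spike = (1.0 - eta) * xi0 * np.exp(-xi0 * np.abs(omega))
    return slab / (slab + spike)

# Large entries are attributed to the slab, small ones to the spike.
print(q_star(2.0, eta=0.5))   # close to 1
print(q_star(0.01, eta=0.5))  # close to 0
```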
When we fix the values of $\Omega$ and $\eta,$ the surrogate objective $F^{(t)}$ is separable in $\Psi$ and $\theta.$ That is, the objective function $F^{(t)}(\Psi, \theta, \Omega^{(t-1)}, \eta^{(t-1)})$ in Equation~\eqref{eq:psi_theta_update} can be written as the sum of a function of $\Psi$ alone and a function of $\theta$ alone. This means that we can separately compute $\Psi^{(t)}$ and $\theta^{(t)}$ while fixing $(\Omega, \eta) = (\Omega^{(t-1)}, \eta^{(t-1)})$. The objective function in Equation~\eqref{eq:omega_eta_update} is similarly separable and we can separately compute $\Omega^{(t)}$ and $\eta^{(t)}$ while fixing $(\Psi, \theta) = (\Psi^{(t)}, \theta^{(t)}).$ As we describe in Section~\ref{appendix:cgSSL_algorithm} of the Supplementary Materials, computing $\theta^{(t)}$ and $\eta^{(t)}$ is relatively straightforward; we compute $\theta^{(t)}$ with a simple Newton algorithm and there is a closed form expression for $\eta^{(t)}.$ The main computational challenge is computing $\Psi^{(t)}$ and $\Omega^{(t)}.$ In the next subsection, we detail how updating $\Psi$ and $\Omega$ reduces to solving penalized likelihood problems with \textit{self-adaptive} penalties. \subsection{Adaptive penalty mixing} \label{sec:penalty_mixing} Before describing how we compute $\Psi^{(t)}$ and $\Omega^{(t)},$ we introduce two important functions: \begin{align*} p^{\star}(x, \theta)&=\frac{\theta\lambda_1e^{-\lambda_1 \lvert x \rvert}}{\theta\lambda_1e^{-\lambda_1 \lvert x \rvert}+(1-\theta)\lambda_0e^{-\lambda_0 \lvert x \rvert}}\\ q^\star(x,\eta)&=\frac{\eta\xi_1e^{-\xi_1 \lvert x \rvert}}{\eta\xi_1e^{-\xi_1 \lvert x \rvert}+(1-\eta)\xi_0e^{-\xi_0 \lvert x \rvert}} \end{align*} For each $1 \leq j \leq p$ and $1 \leq k \leq q,$ $p^{\star}(\psi_{j,k}, \theta)$ is the conditional posterior probability that $\psi_{j,k}$ was drawn from the $\text{Laplace}(\lambda_{1})$ slab distribution. 
Similarly, for $1\leq k < k' \leq q,$ $q^{\star}(\omega_{k,k'}, \eta)$ is just the conditional posterior probability that $\omega_{k,k'}$ was drawn from the $\text{Laplace}(\xi_{1})$ slab. That is, $q^{\star}(\omega_{k,k'}, \eta) = \mathbb{E}[\delta_{k,k'} \vert \bm{Y}, \Psi, \Omega, \theta, \eta].$ \textbf{Updating $\Psi$.} Fixing the value $\Omega = \Omega^{(t-1)},$ computing $\Psi^{(t)}$ is equivalent to solving the following penalized optimization problem \begin{equation} \label{eq:psi_update} \Psi^{(t)} = \argmax_{\Psi} \left\{-\frac{1}{2}\tr\left((Y\Omega-X\Psi)\Omega^{-1}(Y\Omega-X\Psi)^{\top}\right)+\sum_{j,k}\text{pen}(\psi_{j,k} ; \theta)\right\} \end{equation} where \begin{align*} \text{pen}(\psi_{j,k}; \theta)&=\log\left(\frac{\pi(\psi_{j,k} \vert \theta)}{\pi(0 \vert \theta)}\right)=-\lambda_1 \lvert \psi_{j,k} \rvert +\log\left(\frac{p^{\star}(\psi_{j,k},\theta)}{p^{\star}(0,\theta)}\right). \end{align*} Note that the first term in the objective of Equation~\eqref{eq:psi_update} can be obtained by distributing a factor of $\Omega$ through the quadratic form that appears in the log-likelihood (see Equations~\eqref{eq:updating_B_eqval} and~\eqref{eq:Psi_penalty} of the Supplementary Materials for details). Following arguments similar to those in \citet{Deshpande2019}, the Karush-Kuhn-Tucker (KKT) condition for~\eqref{eq:psi_update} tells us that \begin{equation} \label{eq:KKT_Psi} \psi^{(t)}_{j,k}=n^{-1}\left[ \lvert z_{j,k} \rvert-\lambda^\star(\psi^{(t)}_{j,k},\theta)\right]_{+}\sign(z_{j,k}), \end{equation} where \begin{align*} z_{j,k}&= n\psi^{(t)}_{j,k}+ X_{j}^{\top}\mathbf{r}_{k} + \sum_{k' \neq k}\frac{(\Omega^{-1})_{k,k'}}{(\Omega^{-1})_{k,k}}X_j^{\top} \mathbf{r}_{k'}\\ \mathbf{r}_{k'}&=(Y\Omega-X\Psi^{(t)})_{k'} \\ \lambda^\star(\psi^{(t)}_{j,k},\theta)&=\lambda_1 p^\star (\psi^{(t)}_{j,k},\theta)+\lambda_0(1-p^\star (\psi^{(t)}_{j,k},\theta)).
\end{align*} The KKT conditions suggest a natural coordinate-ascent strategy for computing $\Psi^{(t)}$: starting from some initial guess $\Psi_{0},$ we cyclically update the entries $\psi_{j,k}$ by soft-thresholding $\psi_{j,k}$ at $\lambda^{\star}(\psi_{j,k},\theta).$ During our cyclical coordinate ascent, whenever the current value of $\psi_{j,k}$ is very large, the corresponding value of $p^{\star}(\psi_{j,k},\theta)$ will be close to one, and the threshold $\lambda^{\star}$ will be close to the slab penalty $\lambda_{1}.$ On the other hand, when $\psi_{j,k}$ is very small, the corresponding $p^{\star}$ will be close to zero and the threshold $\lambda^{\star}$ will be close to the spike penalty $\lambda_{0}.$ Since $\lambda_{1} \ll \lambda_{0},$ we are able to apply a stronger penalty to the smaller entries of $\Psi$ and a weaker penalty to the larger entries. As our cyclical coordinate ascent proceeds, we iteratively refine the thresholds $\lambda^{\star},$ thereby adaptively shrinking our estimates of the $\psi_{j,k}.$ Before proceeding, we note that the quantity $z_{j,k}$ depends not only on the inner product between $X_{j},$ the $j^{\text{th}}$ column of the design matrix, and the partial residual $\mathbf{r}_{k}$ but also on the inner products between $X_{j}$ and all other partial residuals $\mathbf{r}_{k'}$ for $k' \neq k.$ Practically, this means that in our cyclical coordinate ascent algorithm, our estimate of the direct effect of predictor $X_{j}$ on outcome $Y_{k}$ can depend on how well we have fit all other outcomes $Y_{k'}.$ Moreover, the entries of $\Omega^{-1}$ determine the degree to which $\psi_{j,k}$ depends on the outcomes $Y_{k'}$ for $k' \neq k.$ Specifically, if $(\Omega^{-1})_{k,k'} = 0,$ then we are unable to leverage information contained in $Y_{k'}$ to inform our estimate of $\psi_{j,k}.$ \textbf{Updating $\Omega.$} Fixing $\Psi = \Psi^{(t)}$ and letting $S=n^{-1}Y^{\top}Y$ and $M=n^{-1}(X\Psi)^\top X\Psi,$ we can compute $\Omega^{(t)}$ by
solving \begin{equation} \label{eq:cglasso} \Omega^{(t)} = \argmax_{\Omega\succ 0}\left\{\frac{n}{2}\left(\log \lvert \Omega \rvert - \tr(S\Omega)-\tr(M\Omega^{-1})\right) - \sum_{k = 1}^{q}{\left[\xi \omega_{k,k} + \sum_{k' > k}{\xi^{\star}_{k,k'}\lvert \omega_{k,k'}\rvert}\right]}\right\} \end{equation} where $\xi^{\star}_{k,k'} = \xi_{1}q^{\star}(\omega^{(t-1)}_{k,k'}, \eta^{(t-1)}) + \xi_{0}(1 - q^{\star}(\omega^{(t-1)}_{k,k'}, \eta^{(t-1)})).$ The objective in Equation~\eqref{eq:cglasso} is quite similar to the conventional graphical LASSO \citep[GLASSO;][]{Friedman2008} objective. However, there are two crucial differences. First, because the conditional mean of $Y$ depends on $\Omega$ in the Gaussian chain graph model~\eqref{eq:cg_model}, we have an additional term $\tr(M\Omega^{-1})$ that is absent from the GLASSO objective. Second, and more substantively, the objective in Equation~\eqref{eq:cglasso} contains \textit{individualized} penalties $\xi^{\star}_{k,k'}$ on the off-diagonal entries of $\Omega.$ Here, the penalty $\xi^{\star}_{k,k'}$ will be large (resp.\ small) whenever the previous estimate $\omega_{k,k'}^{(t-1)}$ is small (resp.\ large). In other words, as we run our ECM algorithm, we can refine the amount of penalization applied to each off-diagonal entry of $\Omega.$ Although the objective in Equation~\eqref{eq:cglasso} is somewhat different from the GLASSO objective, we can solve it by suitably modifying an existing GLASSO algorithm. Specifically, we solve the optimization problem in Equation~\eqref{eq:cglasso} with a modified version of \citet{Hsieh2011}'s QUIC algorithm. Our solver repeatedly (i) forms a quadratic approximation of the objective, (ii) computes a suitable Newton direction, and (iii) follows that Newton direction for a step size chosen with an Armijo rule.
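Both conditional maximizations hinge on the same adaptive-penalty mechanism. The following sketch, assuming NumPy and purely illustrative values $\lambda_1 = 1$ and $\lambda_0 = 20,$ shows how the threshold $\lambda^{\star}$ interpolates between the slab and spike penalties:

```python
import numpy as np

def p_star(psi, theta, lam1=1.0, lam0=20.0):
    """Conditional probability that psi came from the Laplace(lam1) slab."""
    slab = theta * lam1 * np.exp(-lam1 * np.abs(psi))
    spike = (1.0 - theta) * lam0 * np.exp(-lam0 * np.abs(psi))
    return slab / (slab + spike)

def lambda_star(psi, theta, lam1=1.0, lam0=20.0):
    """Adaptive threshold: near lam0 for small |psi|, near lam1 for large."""
    p = p_star(psi, theta, lam1, lam0)
    return lam1 * p + lam0 * (1.0 - p)

# Small entries face (nearly) the spike penalty, large entries the slab.
print(lambda_star(0.0, theta=0.5))  # close to lam0 = 20
print(lambda_star(2.0, theta=0.5))  # close to lam1 = 1
```

The analogous computation with $(\xi_1, \xi_0)$ in place of $(\lambda_1, \lambda_0)$ yields the individualized penalties $\xi^{\star}_{k,k'}$ used in the $\Omega$ update.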
In Section \ref{sec:cgquic_proof} of the Supplementary Materials, we show that the optimization problem in Equation~\eqref{eq:cglasso} has a unique solution and that our modification to QUIC converges to that unique solution. \subsection{Selecting the spike and slab penalties} The proposed ECM algorithm depends on two sets of hyperparameters. The first set, containing $a_{\theta}, b_{\theta}, a_{\eta},$ and $b_{\eta},$ encodes our initial beliefs about the overall proportion of non-negligible entries in $\Psi$ and $\Omega.$ Following \citet{Deshpande2019}, we set $a_{\theta}=1, b_{\theta}=pq, a_{\eta}=1,$ and $b_{\eta}=q.$ The second set of hyperparameters consists of the spike and slab penalties $\lambda_{0}, \lambda_{1}, \xi_{0},$ and $\xi_{1}.$ Rather than run cgSSL with a single set of these penalties, we use \citet{Deshpande2019}'s path-following \textit{dynamic posterior exploration} (DPE) strategy to obtain the MAP estimates corresponding to several different choices of spike penalties. Specifically, we fix the slab penalties $\lambda_{1}$ and $\xi_{1}$ and specify grids of increasing spike penalties $\mathcal{I}_\lambda=\{\lambda_0^{(1)}<\dots <\lambda_0^{(L)}\}$ and $\mathcal{I}_\xi=\{\xi_0^{(1)}<\dots<\xi_0^{(L)}\}.$ We then run cgSSL with warm starts for each combination of spike penalties, yielding a set of posterior modes $\{(\Psi^{(s,t)}, \theta^{(s,t)}, \Omega^{(s,t)}, \eta^{(s,t)})\}$ indexed by the choices $(\lambda_{0}^{(s)}, \xi_{0}^{(t)}).$ To warm start the estimation of the mode corresponding to $(\lambda_{0}^{(s)}, \xi_{0}^{(t)}),$ we first compute the modes found with $(\lambda_{0}^{(s-1)},\xi_{0}^{(t-1)})$, $(\lambda_{0}^{(s)},\xi_{0}^{(t-1)}),$ and $(\lambda_{0}^{(s-1)},\xi_{0}^{(t)}).$ We evaluate the posterior density using $(\lambda_{0}, \xi_{0}) = (\lambda_{0}^{(s)}, \xi_{0}^{(t)})$ at each of these three previously computed modes and initialize at the mode with the largest density.
Following this DPE strategy provides a snapshot of the many different cgSSL posteriors. However, it can be computationally intensive, as we must run our ECM algorithm to convergence for every pair of spike penalties. \citet{Deshpande2019} introduced a faster variant, called dynamic \textit{conditional} posterior exploration (DCPE), which we have also implemented for the cgSSL. In DCPE, we first run our ECM algorithm with warm starts over the ladder $\mathcal{I}_{\lambda}$ while keeping $\Omega = I$ fixed. Then, fixing $(\Psi,\theta)$ at the final value from the first step, we run our ECM algorithm with warm starts over the ladder $\mathcal{I}_{\xi}.$ Finally, we run our ECM algorithm starting from the final estimates of the parameters obtained in the first two steps with $(\lambda_{0}, \xi_{0}) = (\lambda^{(L)}_{0}, \xi^{(L)}_{0}).$ Generally speaking, DPE and DCPE trace different paths through the parameter space and typically return different final estimates. When the spike and slab penalties are similar in size (i.e., $\lambda_{1} \approx \lambda_{0}$ and $\xi_{1} \approx \xi_{0}$), we noticed that our ECM algorithm would sometimes return very dense estimates of $\Psi$ and diagonal estimates of $\Omega$ with very large diagonal entries. Essentially, when the spike and slab distributions are not too different, our ECM algorithm has a tendency to overfit the response with a dense $\Psi$, leaving very little residual variation to be quantified with $\Omega.$ On further investigation, we found that we could detect such pathological behavior by examining the condition number of the matrix $Y\Omega - X\Psi$. To avoid propagating dense $\Psi$'s and diagonal $\Omega$'s through DPE and DCPE, we terminate our ECM algorithm early whenever the condition number of $Y\Omega-X\Psi$ exceeds $10n.$ We then set the corresponding $\Psi^{(s)} = 0$ and $\Omega^{(t)} = I$ and continue the dynamic exploration from that point.
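The early-termination check amounts to a single condition-number computation at each step. A minimal sketch, assuming NumPy; the data below are synthetic stand-ins:

```python
import numpy as np

def overfit_guard(Y, X, Psi, Omega, n):
    """True if Y @ Omega - X @ Psi is ill-conditioned (condition
    number above 10n), signalling the dense-Psi pathology."""
    R = Y @ Omega - X @ Psi
    return np.linalg.cond(R) > 10 * n

rng = np.random.default_rng(1)
n, p, q = 50, 3, 4
Y, X = rng.normal(size=(n, q)), rng.normal(size=(n, p))
Psi, Omega = np.zeros((p, q)), np.eye(q)
print(overfit_guard(Y, X, Psi, Omega, n))  # well-conditioned data: False
```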
While this heuristic is admittedly ad hoc, we have found that it works well in practice and note that \citet{Moran2019_variance} utilized a similar strategy in the single-outcome high-dimensional linear regression setting with unknown variance. The DPE and DCPE cgSSL procedures are implemented in the \textbf{mSSL} \textsf{R} \citep{R_citation} package, which is available at \url{https://github.com/YunyiShen/mSSL}. Note that this package also contains a new implementation of \citet{Deshpande2019}'s mSSL procedure. \subsection{Simulation design} We simulated data with three different choices of dimensions $(n,p,q) = (100, 10, 10),$ $(100, 20, 30),$ and $(400, 100, 30).$ For each choice of $(n,p,q),$ we considered five different choices of $\Omega$: (i) an AR(1) model for $\Omega^{-1}$ so that $\Omega$ is tri-diagonal; (ii) an AR(2) model for $\Omega^{-1}$ so that $\omega_{k,k'} = 0$ whenever $\lvert k - k'\rvert > 2$; (iii) a block model in which $\Omega$ is block-diagonal with two dense $q/2 \times q/2$ diagonal blocks; (iv) a star graph where the off-diagonal entry $\omega_{k,k'} = 0$ unless $k$ or $k'$ is equal to 1; and (v) a dense model with all off-diagonal elements $\omega_{k,k'} = 2.$ In the AR(1) model, we set $(\Omega^{-1})_{k,k'} = 0.7^{\lvert k - k' \rvert}$ so that $\omega_{k,k'} = 0$ whenever $\lvert k - k' \rvert > 1.$ In the AR(2) model, we set $\omega_{k,k} = 1, \omega_{k-1,k} = \omega_{k,k-1} = 0.5,$ and $\omega_{k-2,k} = \omega_{k,k-2} = 0.25.$ For the block model, we partitioned $\Sigma = \Omega^{-1}$ into four $q/2 \times q/2$ blocks and set all entries in the off-diagonal blocks of $\Sigma$ to zero. We then set $\sigma_{k,k} = 1$ and $\sigma_{k,k'} = 0.5$ for $1 \leq k \neq k' \leq q/2$ and for $q/2 + 1 \leq k \neq k' \leq q.$ For the star graph, we set $\omega_{k,k} = 1$, $\omega_{1,k} = \omega_{k,1} = 0.1$ for each $k = 2, \ldots, q,$ and set the remaining off-diagonal elements of $\Omega$ equal to zero.
These five specifications of $\Omega$ (top row of Figure~\ref{fig:simulation_design}) correspond to rather different underlying graphical structures among the response variables (bottom row of Figure~\ref{fig:simulation_design}). The AR(1) model, for instance, represents an extremely sparse but regular structure, while the AR(2) model is somewhat less sparse. While the star model and AR(1) model contain the same number of edges, the underlying graphs have markedly different degree distributions. Compared to the AR(1), AR(2), and star models, the block model is considerably denser. We included the dense model, which corresponds to a fully dense $\Omega,$ to assess how well all of the methods perform in a misspecified regime. In total, we considered 15 combinations of dimensions $(n,p,q)$ and $\Omega.$ For each combination, we generated $\Psi$ by randomly selecting 20\% of its entries to be non-zero. We drew the non-zero entries from a $\mathcal{U}(-2,2)$ distribution. For each combination of $(n,p,q), \Omega,$ and $\Psi,$ we generated 100 synthetic datasets from the Gaussian chain graph model~\eqref{eq:cg_model}. The entries of the design matrix $X$ were independently drawn from a standard $\mathcal{N}(0,1)$ distribution.
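The sparse precision structures above can be generated in a few lines; below is a sketch of the AR(1), AR(2), and star specifications for $q = 10,$ assuming NumPy. The AR(1) construction uses the classical fact that inverting the $0.7^{\lvert k - k'\rvert}$ covariance matrix yields a tridiagonal precision matrix.

```python
import numpy as np

q = 10
idx = np.arange(q)

# AR(1): Omega^{-1} has entries 0.7^{|k - k'|}; Omega is tridiagonal.
Sigma_ar1 = 0.7 ** np.abs(idx[:, None] - idx[None, :])
Omega_ar1 = np.linalg.inv(Sigma_ar1)

# AR(2): banded precision matrix with bandwidth 2.
Omega_ar2 = (np.eye(q)
             + 0.50 * (np.eye(q, k=1) + np.eye(q, k=-1))
             + 0.25 * (np.eye(q, k=2) + np.eye(q, k=-2)))

# Star: outcome 1 is connected to every other outcome.
Omega_star = np.eye(q)
Omega_star[0, 1:] = Omega_star[1:, 0] = 0.1
```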
\begin{figure}[H] \centering \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/omega_AR1} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/omega_AR2} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/omega_block} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/omega_star} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/omega_full} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_AR1} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_AR2} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_block} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_star} \end{subfigure} \begin{subfigure}[b]{0.18\textwidth} \centering \includegraphics[width = \textwidth]{Figs/graph_full} \end{subfigure} \caption{Visualization of the supports of $\Omega$ for $q = 10$ under each of the five specifications (top) and the corresponding graphs (bottom). In the top row, gray cells indicate non-zero entries in $\Omega$ and white cells indicate zeros.} \label{fig:simulation_design} \end{figure} \subsection{Results} To assess estimation performance, we computed the Frobenius norm of the difference between the estimated matrices and the true data-generating matrices.
To assess the support recovery performance, we counted the number of elements in each of $\Psi$ and $\Omega$ that were (i) correctly estimated as non-zero (true positives; TP); (ii) correctly estimated as zero (true negatives; TN); (iii) incorrectly estimated as non-zero (false positives; FP); and (iv) incorrectly estimated as zero (false negatives; FN). We report the sensitivity (TP/(TP + FN)) and precision (TP/(TP + FP)). Generally speaking, we prefer methods with high sensitivity and high precision. High sensitivity indicates that the method has correctly estimated most of the true non-zero parameters as non-zero. High precision, on the other hand, indicates that most of the estimated non-zero parameters are truly non-zero. For brevity, we only report the average sensitivity, precision, and Frobenius errors for the $(n,p,q) = (100,10,10)$ setting in Table~\ref{tab:psi_omega_relationship}. We observed qualitatively similar results for the other two settings of dimension and report average performance in those settings in Tables~\ref{tab:psi_omega_relationship_2}--\ref{tab:psi_omega_relationship_3} of the Supplementary Materials. \begin{table}[H] \centering \caption{Average (sd) sensitivity, precision, and Frobenius error for $\Psi$ and $\Omega$ when $(n,p,q) = (100, 10, 10)$ for each specification of $\Omega$ across 100 simulated datasets. 
For each choice of $\Omega,$ the best performance is bold-faced.} \label{tab:psi_omega_relationship} \footnotesize \begin{tabular}{lcccccc} \hline ~ & \multicolumn{3}{c}{$\Psi$ recovery} & \multicolumn{3}{c}{$\Omega$ recovery} \\ Method & SEN & PREC & FROB & SEN & PREC & FROB \\ \hline \multicolumn{7}{c}{$AR(1)$ model} \\ \hline \texttt{cgLASSO} & \textbf{0.88 (0.08)} & 0.44 (0.15) & 0.13 (0.16) & 0.78 (0.37) & 0.55 (0.31) & 31.93 (22.08)\\ \texttt{CAR} & 0.86 (0.06) & 0.31 (0.03) & 0.04 (0.01) & \textbf{1 (0)} & 0.3 (0.03) & 4.16 (1.18)\\ \texttt{CAR-A} & 0.87 (0.06) & 0.59 (0.07) & \textbf{0.02 (0.01)} & \textbf{1 (0)} & 0.83 (0.1) & 2.75 (1.59)\\ \texttt{cgSSL-dcpe} & 0.64 (0.05) & 0.8 (0.16) & 0.08 (0.05) & 0.94 (0.11) & 0.96 (0.07) & 6.32 (6.64)\\ \texttt{cgSSL-dpe} & 0.65 (0.05) & \textbf{0.99 (0.03)} & 0.04 (0.01) & \textbf{1 (0)} & \textbf{0.97 (0.05)} & \textbf{2.49 (1.12)}\\ \hline \multicolumn{7}{c}{$AR(2)$ model} \\ \hline \texttt{cgLASSO} & \textbf{1 (0.02)} & 0.22 (0.06) & 0.17 (0.09) & 0.84 (0.29) & 0.55 (0.17) & 2.7 (1.66)\\ \texttt{CAR }& 0.9 (0.06) & 0.34 (0.04) & 0.03 (0.01) & 0.98 (0.03) & 0.57 (0.06) & 0.58 (0.21)\\ \texttt{CAR-A} & 0.89 (0.05) & 0.67 (0.08) & \textbf{0.02 (0.01)} & \textbf{1 (0.02)} & \textbf{0.91 (0.06)} & 0.46 (0.32)\\ \texttt{cgSSL-dcpe} & 0.96 (0.06) & 0.43 (0.12) & 0.45 (0.28) & 0.24 (0.3) & 0.63 (0.14) & 5 (0.98)\\ \texttt{cgSSL-dpe} & 0.73 (0.05) & \textbf{1 (0.01)} & \textbf{0.02 (0.01)} & \textbf{1 (0)} & 0.86 (0.06) & \textbf{0.38 (0.21)} \\ \hline \multicolumn{7}{c}{Block model} \\ \hline \texttt{cgLASSO} & \textbf{0.95 (0.05)} & 0.39 (0.18) & 0.13 (0.11) & 0.73 (0.38) & 0.78 (0.21) & 5.15 (2.27)\\ \texttt{CAR} & 0.89 (0.06) & 0.31 (0.03) & \textbf{0.03 (0.01)} & 0.95 (0.02) & 0.61 (0.06) & \textbf{1.89 (0.75)}\\ \texttt{CAR-A} & 0.87 (0.06) & 0.57 (0.07) & \textbf{0.03 (0.01)} & 0.86 (0.07) & 0.93 (0.05) & 2.97 (1.22)\\ \texttt{cgSSL-dcpe} & 0.76 (0.06) & 0.29 (0.02) & 0.28 (0.02) & 0.01 (0.03) & 0.71 (0.39) & 8.85 
(0.2)\\ \texttt{cgSSL-dpe} & 0.69 (0.07) & \textbf{0.99 (0.02)} & \textbf{0.03 (0.01)} & 0.71 (0.06) & \textbf{0.95 (0.05)} & 3.28 (1.17)\\ \hline \multicolumn{7}{c}{Star model} \\ \hline \texttt{cgLASSO} & \textbf{0.96 (0.04)} & 0.48 (0.14) & 0.04 (0.02) & 0.36 (0.41) & 0.2 (0.18) & 0.86 (0.35)\\ \texttt{CAR} & 0.91 (0.05) & 0.34 (0.03) & 0.02 (0) & \textbf{0.55 (0.18)} & 0.25 (0.08) & 0.57 (0.29)\\ \texttt{CAR-A} & 0.91 (0.04) & 0.57 (0.06) & 0.02 (0.01) & 0.22 (0.14) & 0.46 (0.24) & 0.57 (0.26)\\ \texttt{cgSSL-dcpe} & 0.83 (0.04) & 0.96 (0.05) & \textbf{0.01 (0)} & 0.05 (0.09) & \textbf{0.9 (0.24)} & \textbf{0.22 (0.12)}\\ \texttt{cgSSL-dpe} & 0.79 (0.06) & \textbf{0.99 (0.03)} & \textbf{0.01 (0.01)} & 0.09 (0.13) & 0.71 (0.29) & 0.29 (0.19)\\ \hline \multicolumn{7}{c}{Dense model} \\ \hline \texttt{cgLASSO} & \textbf{0.92 (0.04)} & 0.57 (0.07) & 0.03 (0.01) & \textbf{0.88 (0.32)} & 1 (0) & \textbf{16.93 (32.74)}\\ \texttt{CAR} & 0.85 (0.06) & 0.28 (0.03) & 0.04 (0.01) & 0.03 (0.02) & 1 (0) & 92.51 (1.74)\\ \texttt{CAR-A} & 0.84 (0.06) & 0.4 (0.04) & 0.04 (0.01) & 0 (0.01) & 1 (0) & 96.04 (1.21)\\ \texttt{cgSSL-dcpe} & 0.82 (0.03) & 0.84 (0.06) & \textbf{0.02 (0)} & 0.01 (0.02) & 1 (0) & 99.93 (0.39)\\ \texttt{cgSSL-dpe} & 0.72 (0.07) & \textbf{0.93 (0.06)} & 0.03 (0.01) & 0.05 (0.04) & 1 (0) & 99.99 (0.98)\\ \hline \end{tabular} \end{table} In terms of identifying non-zero direct effects (i.e. estimating the support of $\Psi$), \texttt{cgLASSO} consistently achieves the highest sensitivity. On further inspection, we found that the penalties selected by 10-fold cross-validation tended to be quite small, meaning that \texttt{cgLASSO} returned many non-zero $\hat{\psi}_{j,k}$'s. As the precision results indicate, many of \texttt{cgLASSO}'s ``discoveries'' were in fact false positives. The other fixed penalty method, \texttt{CAR}, similarly displayed somewhat high sensitivity and low precision. 
Interestingly, for several choices of $\Omega,$ the precisions of \texttt{cgLASSO} and \texttt{CAR} for recovering the support of $\Psi$ were less than 0.5. Such low precisions indicate that most of the returned non-zero estimates were in fact false positives. In contrast, the methods that deployed adaptive penalties (\texttt{CAR-A} and both implementations of cgSSL) displayed higher precision in estimating the support of $\Psi.$ In fact, at least for estimating the support of $\Psi,$ \texttt{cgSSL-dpe} made almost no false positives. We observed essentially the same phenomenon for $\Omega$: although the cgSSL generally returned fewer non-zero estimates of $\omega_{k,k'}$, the vast majority of these estimates were true positives. In a sense, the fixed penalty methods (\texttt{cgLASSO} and \texttt{CAR}) cast a very wide net when searching for non-zero signal in $\Psi$ and $\Omega,$ leading to a large number of false positive identifications in the supports of these matrices. Adaptive penalty methods, on the other hand, are much more discerning. In terms of estimation performance, we found that the fixed penalty methods (\texttt{cgLASSO} and \texttt{CAR}) tended to have much larger Frobenius error, reflecting the well-documented bias introduced by $L_{1}$ regularization. The one exception was in the misspecified setting where $\Omega$ was dense. Interestingly, for the four sparse $\Omega$'s, we did not observe any method achieving high Frobenius error for $\Omega$ but low Frobenius error for $\Psi.$ This finding helps substantiate our intuition about Corollary~\ref{coro:contraction_B_cg}: namely, in order to estimate $\Psi$ well, one must estimate $\Omega$ well. Finally, like \citet{Deshpande2019}, we found that the dynamic conditional posterior exploration implementation of cgSSL performed slightly worse than the dynamic posterior exploration implementation.
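The support-recovery and estimation summaries reported above can be computed directly from the estimated and true matrices. A minimal sketch, assuming NumPy, with a small hypothetical example:

```python
import numpy as np

def support_metrics(est, truth, tol=1e-8):
    """Sensitivity TP/(TP+FN), precision TP/(TP+FP), and Frobenius
    error of `est` relative to `truth`."""
    e, t = np.abs(est) > tol, np.abs(truth) > tol
    tp = np.sum(e & t)
    fn = np.sum(~e & t)
    fp = np.sum(e & ~t)
    sen = tp / (tp + fn) if tp + fn else np.nan
    prec = tp / (tp + fp) if tp + fp else np.nan
    frob = np.linalg.norm(est - truth)
    return sen, prec, frob

truth = np.array([[1.0, 0.0], [0.0, -2.0]])
est = np.array([[0.9, 0.1], [0.0, -1.8]])
print(support_metrics(est, truth))  # the one false positive lowers precision
```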
\section*{Introduction} The method of implicitization via moving hypersurfaces of rational parameterized varieties developed by Sederberg and his collaborators in the 90's (cf. \cite{SGH97, CGZ00} and the references therein) can be properly formulated and studied via the Rees algebra of the input data, as shown in \cite{Cox}. Since then, the defining equations of Rees algebras of parametric curves and surfaces have become an active area of research, see for instance \cite{CHW,B09,KPU11,CD13,CD2,KPU13,LP14,CD15,Madsen}. In this paper, we study the defining equations of the Rees algebra of ideals arising from curve parametrizations in the plane and rational normal scrolls, and connections between them. The paper \cite{BGI} by Bernardi, Gimigliano, and Id\'a studies this connection from a \emph{geometric} point of view, while the papers \cite{KPU11} by Kustin, Polini, and Ulrich and \cite{Madsen} by Madsen are more \emph{algebraic}. Our goal is to link these two approaches. In more detail, consider a map ${\mathbb{P}}^1 \to {\mathbb{P}}^2$ defined by relatively prime forms $f_{0,d},f_{1,d},f_{2,d}\in{\mathbb{K}}[T_0,T_1]$ of degree $d$. The syzygy module of $f_{0,d},f_{1,d},f_{2,d}$ has a basis $p,q$ of degrees $\mu \le d-\mu$. If we write $p = (p_{0,\mu},p_{1,\mu},p_{2,\mu})$, then $p$ has its own syzygy module with generators of degrees $0\leq\mu_1\leq\mu_2$ with $\mu = \mu_1+\mu_2$. As shown in \cite{BGI}, this leads to a factorization of ${\mathbb{P}}^1 \to {\mathbb{P}}^2$ into maps \[ {\mathbb{P}}^1 \longrightarrow {\mathcal{S}}_{\mu_1,\mu_2} \lhook\joinrel\longrightarrow {\mathbb{P}}^{\mu+1} \dashrightarrow {\mathbb{P}}^2, \] where ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ is a rational normal scroll and the final map is a linear projection. The geometry of this factorization is described in \cite[Thm.\ 3.1]{BGI}.
Our approach is to assemble these maps into the commutative diagram \begin{equation} \label{BGIdiagram} \begin{array}{c} \SelectTips{cm}{} \xymatrix@C=30pt{ {\mathbb{P}}^1 \ar[drr]^\gamma \ar[dr]^(.65){\gamma_0} \ar[ddr]_{\mathbf{f}} & & \\ & {\mathcal{S}}_{\mu_1,\mu_2} \ar@{^{(}->}[r] \ar[d] & {\mathbb{P}}^{\mu+1} \ar@{-->}[dl] \\ & {\mathbb{P}}^2 & } \end{array} \end{equation} where $\gamma, \, \gamma_0,$ and ${\mathbf{f}}$ are defined in \eqref{gamma}, \eqref{gamma0}, and \eqref{f} respectively. We see below that these three maps lead to ideals $I,J,K \subseteq {\mathbb{K}}[T_0,T_1]$ whose Rees algebras ${\mathcal{R}}(I),{\mathcal{R}}(J),{\mathcal{R}}(K)$ have defining ideals ${\mathcal{I}}, {\mathcal{J}}, {\mathcal{K}}$. In Lemma~\ref{maplemma}, we consider a Rees dual version of \eqref{BGIdiagram}, which is the commutative diagram: \begin{equation} \label{Reesdiagram} \begin{array}{c} \SelectTips{cm}{} \xymatrix{ & {\mathbb{K}}[T_0,T_1,{\mathbf{X}},{\mathbf{Y}}] \ar[d]^{\Phi'} \ar[ddr]^\Phi & \\ {\mathbb{K}}[T_0,T_1,{\mathbf{Z}}] \ar[ur]^\Gamma \ar[r]^\Omega \ar[rrd]_\psi & {\mathbb{K}}[T_0,T_1,X,Y] \ar[dr]^\phi & \\ && {\mathbb{K}}[T_0,T_1,s]} \end{array} \end{equation} where $\Phi$ comes from $\gamma,\, \phi$ from $\gamma_0,$ and $\psi$ from ${\mathbf{f}}.$ The maps $\phi, \psi, \Phi, \Phi',\Gamma,$ and $\Omega$ in \eqref{Reesdiagram} are defined in \eqref{phi}, \eqref{psidef}, \eqref{phii}, \eqref{phiip}, \eqref{Gama}, and \eqref{Omega} respectively, where the notation ${\mathbf{X}}, {\mathbf{Y}}, {\mathbf{Z}}$ is also explained. 
The connection between these diagrams is the following: \begin{align*} &\text{the curve }\, \gamma({\mathbb{P}}^1)\subset {\mathbb{P}}^{\mu+1}\, \text{ gives } {\mathcal{R}}(I) = \mathrm{im}(\Phi) \text{ and } {\mathcal{I}} = \ker(\Phi) \\[-2pt] &\text{the curve }\, \gamma_0({\mathbb{P}}^1)\subset{\mathcal{S}}_{\mu_1,\mu_2}\, \text{gives } {\mathcal{R}}(J) = \mathrm{im}(\phi) \text{ and } {\mathcal{J}} = \ker(\phi) \\[-2pt] &\text{the curve }\,{\mathbf{f}}({\mathbb{P}}^1)\subset \ {\mathbb{P}}^{2}\ \ \text{ gives }{\mathcal{R}}(K) = \mathrm{im}(\psi) \text{ and } {\mathcal{K}} = \ker(\psi). \end{align*} The easiest Rees algebra is ${\mathcal{R}}(J)$ coming from $\gamma_0.$ In Section~\ref{1}, we show that the defining ideal ${\mathcal{J}}$ of ${\mathcal{R}}(J)$ is especially simple with a nice toric interpretation (Proposition~\ref{alphabetaRees}). In Section~\ref{2}, we shift to $\gamma,$ which leads to the Rees algebra ${\mathcal{R}}(I)$ discussed in \cite{KPU11}. We explicitly describe the minimal generators of ${\mathcal{I}}$ in Theorem~\ref{mtm}. Section~\ref{3} explains how our results relate to the papers \cite{KPU11, LP14, Madsen}. In Section~\ref{4}, we bring ${\mathbf{f}}$ into the picture and explain the diagrams \eqref{BGIdiagram} and \eqref{Reesdiagram} in detail. Here, the ideal is \[ K = \langle f_{0,d},f_{1,d},f_{2,d}\rangle \subseteq {\mathbb{K}}[T_0,T_1], \] and as noted above, describing the ideal ${\mathcal{K}}$ of defining equations of the Rees algebra ${\mathcal{R}}(K)$ is a major unsolved problem. When we present the Rees algebra of $K$ as \[ {\mathcal{R}}(K) = {\mathbb{K}}[T_0,T_1,Z_0,Z_1,Z_2]/{\mathcal{K}}, \] the syzygy $p$ gives $p = p_{0,\mu} Z_0 + p_{1,\mu}Z_1 + p_{2,\mu}Z_2 \in {\mathcal{K}}$. If we do the same for the other syzygy $q$, then $p$ and $q$ become part of a minimal generating set of ${\mathcal{K}}$.
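As a small illustration of the last point, here is a toy example of our own (not taken from the text): for the conic parametrized by $f = (T_0^2, T_0T_1, T_1^2)$, so $d = 2$ and $\mu = d-\mu = 1$, the syzygies $p$ and $q$ can be written down by hand, and one can check with \emph{sympy} that the corresponding linear forms in $Z_0,Z_1,Z_2$, as well as the implicit equation of the conic, lie in $\ker(\psi) = {\mathcal{K}}$.

```python
# Toy example (our own choice, not from the text): the conic
# f = (T0^2, T0*T1, T1^2), so d = 2, with syzygy basis p, q of
# degrees mu = 1 and d - mu = 1.  We verify that p and q are syzygies
# of f, and that the corresponding linear forms in Z0, Z1, Z2 -- as
# well as the implicit equation of the conic -- lie in the kernel of
# psi : Z_i -> f_i * s.
import sympy as sp

T0, T1, Z0, Z1, Z2, s = sp.symbols('T0 T1 Z0 Z1 Z2 s')
f = (T0**2, T0*T1, T1**2)
p = (T1, -T0, sp.Integer(0))       # syzygy of degree mu = 1
q = (sp.Integer(0), T1, -T0)       # syzygy of degree d - mu = 1

# Syzygy relations p . f = q . f = 0
p_dot_f = sp.expand(sum(c * g for c, g in zip(p, f)))
q_dot_f = sp.expand(sum(c * g for c, g in zip(q, f)))

# The substitution psi : Z_i -> f_i * s
psi = {Z0: f[0] * s, Z1: f[1] * s, Z2: f[2] * s}
p_form = p[0] * Z0 + p[1] * Z1 + p[2] * Z2   # T1*Z0 - T0*Z1
q_form = q[0] * Z0 + q[1] * Z1 + q[2] * Z2   # T1*Z1 - T0*Z2
conic = Z1**2 - Z0 * Z2                      # implicit equation of the image

images = [sp.expand(F.subs(psi)) for F in (p_form, q_form, conic)]
```

For this toy conic, all three images vanish, so $p$, $q$, and the implicit equation all lie in ${\mathcal{K}}$; we do not claim here that they generate it.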
In Section~\ref{5}, we construct operators $D_A$ and $D_B$ which, when applied successively to $q$, give further minimal generators of ${\mathcal{K}}$ (Theorem~\ref{DD}). In Section~\ref{6} we discuss how our results relate to Madsen's paper \cite{Madsen}, and in Section~\ref{7}, we explain how the minimal generators of ${\mathcal{K}}$ constructed in Theorem~\ref{DD} relate to the minimal generators of ${\mathcal{I}}$ described earlier in Theorem~\ref{mtm}. One notational convention is that a second subscript often denotes degree. We used this above when three polynomials of degree $d$ were denoted $f_{i,d}$ for $i=0,1,2$. \bigskip {\bf Acknowledgements} We are grateful to Kuei-Nan Lin and Yi-Huang Shen for their careful reading of the manuscript and helpful suggestions for improvement. We are also grateful to the anonymous referees for their several suggestions for improving the presentation of the manuscript. T.~Cortadellas and C.~D'Andrea are both supported by Spanish MINECO/FEDER research projects MTM2013-40775-P, and MTM 2015-65361-P respectively. C.~D'Andrea is also supported by the ``Mar\'ia de Maeztu'' Programme for Units of Excellence in R\&D (MDM-2014-0445), and by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 675789. \section{Parametrizations and Toric Surfaces}\label{1} Assume we have $(d,\,\mu_1,\mu_2)\in{\mathbb{Z}}^3$ with $0\leq\mu_1\leq\mu_2$ and set $\mu=\mu_1+\mu_2\leq\frac{d}{2}$. Let ${\mathbb{K}}$ be a field and $T_0,T_1$ be variables. 
For homogeneous elements $\alpha_{d-\mu_1},\,\beta_{d-\mu_2}\in{\mathbb{K}}[T_0,T_1]$ of respective degrees $d-\mu_1,\,d-\mu_2$ and no common factors, consider the rational map \begin{equation} \label{gamma} \begin{array}{cccc} \gamma:&{\mathbb{P}}^1&\longrightarrow&{\mathbb{P}}^{\mu+1}\\ &{\mathbf{t}}:=(t_0:t_1)&\longmapsto& \big(t_0^{\mu_1}\alpha_{d-\mu_1}({\mathbf{t}}):\dotsb:t_1^{\mu_1}\alpha_{d-\mu_1}({\mathbf{t}}):t_0^{\mu_2}\beta_{d-\mu_2}({\mathbf{t}}):\dotsb:t_1^{\mu_2}\beta_{d-\mu_2}({\mathbf{t}})\big). \end{array} \end{equation} This is one of the maps appearing in \eqref{BGIdiagram}. The image of $\gamma$ is a curve lying inside the rational normal surface ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ defined by \[ {\mathcal{S}}_{\mu_1,\mu_2}=\{(s_0 t_0^{\mu_1}:\dotsb:s_0 t_1^{\mu_1}:s_1 t_0^{\mu_2}:\dotsb:s_1 t_1^{\mu_2})\mid (t_0:t_1),\,(s_0:s_1)\in{\mathbb{P}}^1\}. \] To approach these objects from a toric point of view, let $X, Y$ be new variables and consider the lattice polygon $P$ with facet variables $T_0,T_1,X,Y$ shown in Figure~\ref{fig1}. \begin{figure}[h] \[ \begin{picture}(210,60) \thicklines \put(15,15){\line(1,0){120}} \put(15,15){\line(0,1){30}} \put(15,45){\line(1,0){180}} \put(135,15){\line(2,1){60}} \multiput(15,15)(30,0){5}{\circle*{3}} \multiput(15,45)(30,0){7}{\circle*{3}} \put(0,26){$T_1$} \put(172,20){$T_0$} \put(85,2){$Y$} \put(85,50){$X$} \put(123,2){$(\mu_1,0)$} \put(182,51){$(\mu_2,1)$} \put(4,2){$(0,0)$} \put(4,51){$(0,1)$} \end{picture} \] \caption{The Lattice Polygon $P$} \label{polygonfig}\label{fig1} \end{figure} The lattice points in $P$ give the monomials \begin{equation} \label{monomials} \begin{aligned} &T_0^{\mu_2} Y \ \ T_0^{\mu_2-1}T_1 Y \ \ \cdots \ \ T_1^{\mu_2} Y\\[5pt] &T_0^{\mu_1} X \ \ \cdots \ \ T_1^{\mu_1} X \end{aligned} \end{equation} where the exponents are the lattice distances to the edges.
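For instance (with the illustrative sample values $\mu_1=1$, $\mu_2=2$, chosen by us), the monomials attached to the lattice points of $P$ can be listed programmatically, and substituting $X\mapsto s_0$, $Y\mapsto s_1$ recovers the coordinates of the parametrization of ${\mathcal{S}}_{\mu_1,\mu_2}$ displayed above.

```python
# Sketch for the sample values mu1 = 1, mu2 = 2 (our choice): list the
# monomials attached to the lattice points of P, and check that the
# substitution X -> s0, Y -> s1 turns them into the coordinates of the
# parametrization of the scroll S_{mu1,mu2} inside P^{mu+1}.
import sympy as sp

T0, T1, X, Y, s0, s1 = sp.symbols('T0 T1 X Y s0 s1')
mu1, mu2 = 1, 2
mu = mu1 + mu2

y_row = [T0**(mu2 - i) * T1**i * Y for i in range(mu2 + 1)]  # top row of P
x_row = [T0**(mu1 - i) * T1**i * X for i in range(mu1 + 1)]  # bottom row of P
monomials = y_row + x_row
num_coords = len(monomials)        # mu + 2 homogeneous coordinates of P^{mu+1}

# Substituting X -> s0, Y -> s1 recovers the scroll parametrization
scroll_coords = {m.subs({X: s0, Y: s1}) for m in monomials}
expected = set([s0 * T0**(mu1 - i) * T1**i for i in range(mu1 + 1)]
               + [s1 * T0**(mu2 - i) * T1**i for i in range(mu2 + 1)])
coords_match = scroll_coords == expected
```

Here $\mu+2$ counts the lattice points of $P$, matching the number of homogeneous coordinates of ${\mathbb{P}}^{\mu+1}$.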
When we assign toric bidegrees \begin{equation} \label{toricgrading} \deg(T_0) = \deg(T_1) = (1,0),\ \deg(X) = (-\mu_1,1), \ \deg(Y) = (-\mu_2,1), \end{equation} the monomials in \eqref{monomials} all have toric bidegree $(0,1)$. Furthermore, $P$ gives the toric variety $X_P$, which is the Hirzebruch surface $\mathbb{F}_{\mu_2-\mu_1}$, and ${\mathbb{K}}[T_0,T_1,X,Y]$, with the toric bigrading given in \eqref{toricgrading}, is the total coordinate ring (Cox ring) of $X_P$ after picking a suitable basis of the Picard group. The toric geometry used here is explained in \cite[Chapter 5]{CLS}. For the above lattice polygon $P$, $X_P$ maps isomorphically to the rational normal surface ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ via the monomials \eqref{monomials}. Because of this, we identify $X_P$ with its image in ${\mathbb{P}}^{\mu+1}$ and write $X_P = {\mathcal{S}}_{\mu_1,\mu_2}$. The image of $\gamma$ lies in ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ and is defined by the equation \[ \alpha_{d-\mu_1} Y = \beta_{d-\mu_2} X \] in the total coordinate ring ${\mathbb{K}}[T_0,T_1,X,Y]$. Thus we have the factorization \begin{equation}\label{gamma0} \gamma:{\mathbb{P}}^1\stackrel{\gamma_0}{\longrightarrow} {\mathcal{S}}_{\mu_1,\mu_2} \stackrel{\gamma_1}{\longrightarrow}{\mathbb{P}}^{\mu+1}, \end{equation} with $\gamma_0({\mathbb{P}}^1)\subseteq {\mathcal{S}}_{\mu_1,\mu_2}$ defined by $\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X$. Note that $\gamma_0$ also appears in the diagram \eqref{BGIdiagram}. The Rees algebra ${\mathcal{R}}(J)$ of $J = \langle \alpha_{d-\mu_1},\beta_{d-\mu_2}\rangle \subseteq {\mathbb{K}}[T_0,T_1]$ is presented by the map \begin{equation} \label{phi} \begin{array}{rcl} \phi:{\mathbb{K}}[T_0,T_1,X,Y]&\longrightarrow&{\mathbb{K}}[T_0,T_1,s]\\ T_i&\longmapsto& T_i, \ i=0,1\\ X&\longmapsto& \alpha_{d-\mu_1}\,s\\ Y&\longmapsto& \beta_{d-\mu_2}\,s. \end{array} \end{equation} This is the map $\phi$ in \eqref{Reesdiagram}.
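To make this concrete, here is a quick \emph{sympy} check, with sample data of our own choosing ($d=5$, $\mu_1=1$, $\mu_2=2$, $\alpha_{d-\mu_1}=T_0^4$, $\beta_{d-\mu_2}=T_1^3$, which are coprime of the right degrees), that the relation $\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X$ lies in $\ker(\phi)$ and that both of its terms have toric bidegree $(d-\mu,1)$.

```python
# Sample data (ours, for illustration only): d = 5, mu1 = 1, mu2 = 2,
# alpha = T0^4 (degree d - mu1), beta = T1^3 (degree d - mu2).
# Check that alpha*Y - beta*X maps to zero under
# phi : X -> alpha*s, Y -> beta*s, and that both of its terms share
# the toric bidegree (d - mu, 1).
import sympy as sp

T0, T1, X, Y, s = sp.symbols('T0 T1 X Y s')
d, mu1, mu2 = 5, 1, 2
mu = mu1 + mu2
alpha = T0**4
beta = T1**3

relation = alpha * Y - beta * X
# phi sends the relation to alpha*beta*s - beta*alpha*s = 0
image = sp.expand(relation.subs({X: alpha * s, Y: beta * s}))

# Toric bidegrees: deg(T_i) = (1,0), deg(X) = (-mu1, 1), deg(Y) = (-mu2, 1)
deg_alphaY = ((d - mu1) - mu2, 1)
deg_betaX = ((d - mu2) - mu1, 1)
```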
The image of $\phi$ is the Rees algebra ${\mathcal{R}}(J)$, and its kernel ${\mathcal{J}} = \ker(\phi) \subseteq {\mathbb{K}}[T_0,T_1,X,Y]$ gives the defining equations of ${\mathcal{R}}(J)$. The ring ${\mathbb{K}}[T_0,T_1,X,Y]$ is the total coordinate ring of ${\mathcal{S}}_{\mu_1,\mu_2}$, and the ideal ${\mathcal{J}}$ is easy to describe. \begin{proposition} \label{alphabetaRees} The ideal ${\mathcal{J}}$ is principal, generated by $\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X.$ \end{proposition} \begin{proof} Since $\gcd(\alpha_{d-\mu_1},\beta_{d-\mu_2}) = 1$, $J$ is generated by a regular sequence. This has two consequences: the natural map $\mathrm{Sym}(J) \to {\mathcal{R}}(J)$ is an isomorphism by \cite[p.\ 29]{Vasconcelos}, and the syzygy module of $J$ is generated by $(-\beta_{d-\mu_2},\alpha_{d-\mu_1})$, so that \[ \mathrm{Sym}(J) \simeq {\mathbb{K}}[T_0,T_1,X,Y]/\langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle, \] which proves the claim. \end{proof} \section{The Rees Algebra of the Space Curve}\label{2} The map $\gamma_0:{\mathbb{P}}^1 \to {\mathcal{S}}_{\mu_1,\mu_2}$ gives the easy Rees algebra described in Proposition~\ref{alphabetaRees}. Combining this with ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ gives the space curve $\gamma: {\mathbb{P}}^1 \to {\mathbb{P}}^{\mu+1}.$ Here, the Rees algebra is more complicated. We introduce some notation. Let $s,\,X_0,\dotsb, X_{\mu_1},\,Y_0,\dotsb, Y_{\mu_2}$ be new variables. We set ${\mathbf{X}}=X_0,\dotsb, X_{\mu_1},\ {\mathbf{Y}}= Y_0,\dotsb, Y_{\mu_2}$ and ${\mathbf{T}}=T_0,T_1$ for short. For any $\ell \ge 1$, we also set ${\mathbf{T}}^{\ell}=T_0^{\ell}, T_0^{\ell-1}T_1,\dotsb, T_1^{\ell}$. 
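To illustrate the shorthand just introduced (again with sample values of our own: $d=5$, $\mu_1=1$, $\mu_2=2$, $\alpha_{d-\mu_1}=T_0^4$, $\beta_{d-\mu_2}=T_1^3$), the entries $t_0^{\mu_1}\alpha_{d-\mu_1},\dotsc,t_1^{\mu_2}\beta_{d-\mu_2}$ of the map $\gamma$ from \eqref{gamma} are the products $\alpha_{d-\mu_1}{\mathbf{T}}^{\mu_1}$ and $\beta_{d-\mu_2}{\mathbf{T}}^{\mu_2}$: in total $\mu+2$ forms, all homogeneous of degree $d$.

```python
# The shorthand T^l = T0^l, T0^{l-1}*T1, ..., T1^l, applied to the
# entries of gamma for the sample data d = 5, mu1 = 1, mu2 = 2,
# alpha = T0^4, beta = T1^3 (our choices; any coprime alpha, beta of
# degrees d - mu1 and d - mu2 would do).
import sympy as sp

T0, T1 = sp.symbols('T0 T1')

def T_power(l):
    """The list T^l of the l + 1 monomials of degree l in T0, T1."""
    return [T0**(l - i) * T1**i for i in range(l + 1)]

d, mu1, mu2 = 5, 1, 2
alpha, beta = T0**4, T1**3

entries = [alpha * m for m in T_power(mu1)] + [beta * m for m in T_power(mu2)]
num_entries = len(entries)                                      # mu + 2
degrees = {sp.Poly(e, T0, T1).total_degree() for e in entries}  # all equal d
```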
With this notation, the map \eqref{gamma} is written more compactly as $\gamma = (\alpha_{d-\mu_1} {\mathbf{T}}^{\mu_1} : \beta_{d-\mu_2} {\mathbf{T}}^{\mu_2})$, and the entries of $\gamma$ give the ideal $I = \langle \alpha_{d-\mu_1} {\mathbf{T}}^{\mu_1},\beta_{d-\mu_2} {\mathbf{T}}^{\mu_2}\rangle \subseteq {\mathbb{K}}[{\mathbf{T}}]$. The Rees algebra ${\mathcal{R}}(I)$ is presented by the map \begin{equation}\label{phii} \begin{array}{cccl} \Phi:&{\mathbb{K}}[{\mathbf{T}},\,{\mathbf{X}},\,{\mathbf{Y}}]&\longrightarrow&{\mathbb{K}}[{\mathbf{T}},s]\\ &T_i&\longmapsto& T_i, \ \ \ i=0,\,1\\ & X_i&\longmapsto& \alpha_{d-\mu_1}\,T_0^{\mu_1-i}T_1^{i}\,s,\ 0\leq i\leq\mu_1\\ & Y_i&\longmapsto& \beta_{d-\mu_2}\,T_0^{\mu_2-i}T_1^{i}\,s,\ 0\leq i\leq \mu_2. \end{array} \end{equation} This is $\Phi$ in \eqref{Reesdiagram}. As noted above, ${\mathcal{R}}(I) = \mathrm{im}(\Phi)$, and ${\mathcal{I}} = \ker(\Phi) \subseteq{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ gives the defining equations of the Rees algebra. Consider also the map \begin{equation} \label{phiip} \begin{array}{rcl} \Phi':{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]&\longrightarrow&{\mathbb{K}}[{\mathbf{T}},X,Y]\\ T_i&\longmapsto& T_i,\ i=0,\,1\\ X_i&\longmapsto& T_0^{\mu_1-i}T_1^{i}\,X,\ i = 0,\dotsb,\mu_1\\ Y_i&\longmapsto& T_0^{\mu_2-i}T_1^{i}\,Y,\ i = 0,\dotsb, \mu_2, \end{array} \end{equation} and denote its kernel by ${\mathcal{I}}'$. The variables $X_0, X_1, \dotsc, X_{\mu_1}, Y_{0}, Y_{1}, \dotsc, Y_{\mu_2}$ map to the monomials in \eqref{monomials}. Observe that $\Phi'$ appears in \eqref{Reesdiagram} and corresponds to the inclusion ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$ in \eqref{BGIdiagram}.
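Continuing with the same kind of sample data (ours: $\mu_1=1$, $\mu_2=2$, $\alpha=T_0^4$, $\beta=T_1^3$), one can verify directly that obvious binomial relations among the $X_i$, $Y_j$, such as the pencils $T_1X_{i-1}-T_0X_i$, $T_1Y_{j-1}-T_0Y_j$ and the quadric $Y_1^2-Y_0Y_2$, lie in ${\mathcal{I}}=\ker(\Phi)$.

```python
# Sample check (data ours: mu1 = 1, mu2 = 2, alpha = T0^4, beta = T1^3)
# that some obvious binomial relations lie in ker(Phi), where
# Phi : X_i -> alpha*T0^(mu1-i)*T1^i*s,  Y_i -> beta*T0^(mu2-i)*T1^i*s.
import sympy as sp

T0, T1, s = sp.symbols('T0 T1 s')
mu1, mu2 = 1, 2
alpha, beta = T0**4, T1**3

Xs = [sp.Symbol(f'X{i}') for i in range(mu1 + 1)]
Ys = [sp.Symbol(f'Y{i}') for i in range(mu2 + 1)]
Phi = {Xs[i]: alpha * T0**(mu1 - i) * T1**i * s for i in range(mu1 + 1)}
Phi.update({Ys[i]: beta * T0**(mu2 - i) * T1**i * s for i in range(mu2 + 1)})

pencils = [T1 * Xs[i - 1] - T0 * Xs[i] for i in range(1, mu1 + 1)] \
        + [T1 * Ys[j - 1] - T0 * Ys[j] for j in range(1, mu2 + 1)]
quadric = Ys[1]**2 - Ys[0] * Ys[2]

images = [sp.expand(F.subs(Phi)) for F in pencils + [quadric]]
all_in_kernel = all(img == 0 for img in images)
```

Relations of exactly this shape reappear below as generators of ${\mathcal{I}}'$.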
The rings ${\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ and ${\mathbb{K}}[{\mathbf{T}},s]$ have bigradings defined by $\deg(T_i) = (1,0)$, $\deg(X_i) = \deg(Y_i) = (0,1)$ and $\deg(s) = (-d,1)$, and ${\mathbb{K}}[{\mathbf{T}},X,Y]$ has the bigrading defined in \eqref{toricgrading}. The maps $\Phi$, $\Phi'$ and $\phi$ all preserve these bigradings. \begin{theorem} \ \label{phiipproperties} \begin{enumerate} \item The ideal ${\mathcal{I}}$ is the inverse image of ${\mathcal{J}} = \langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle$ via $\Phi'$. \item The ideal ${\mathcal{I}}' $ is generated by all $F \in {\mathcal{I}}$ which are $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous. \end{enumerate} \end{theorem} \begin{proof} For part (1), observe that $\Phi = \phi \circ \Phi'$, so that \[ {\mathcal{I}} = \ker(\Phi) = \Phi'{}^{-1}(\ker(\phi)). \] Since $\ker(\phi) = \langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle$ by Proposition~\ref{alphabetaRees}, part (1) follows immediately. For part (2), take $F\in \ker(\Phi')$ and write $F = \sum_{j,k} F_{j,k}$, where the polynomial $F_{j,k}$ is $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous of bidegree $(j,k)$. Then \[ 0 = F({\mathbf{T}},X {\mathbf{T}}^{\mu_1},Y {\mathbf{T}}^{\mu_2}) = \sum_{j,k} F_{j,k}({\mathbf{T}},X {\mathbf{T}}^{\mu_1},Y {\mathbf{T}}^{\mu_2}) = \sum_{j,k} X^j Y^k F_{j,k}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}), \] which implies that $F_{j,k}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = 0$ for all $j,k$. Using the homogeneity again, we see that $F_{j,k}({\mathbf{T}},X{\mathbf{T}}^{\mu_1},Y{\mathbf{T}}^{\mu_2}) = 0$, so that $F_{j,k} \in \ker(\Phi') \subseteq \ker(\Phi) = {\mathcal{I}}$. Thus $F_{j,k}$ is a $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous element of ${\mathcal{I}}$. By \eqref{phiip}, we conclude that $F_{j,k}$ and hence $F$ lie in the ideal generated by $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous elements of ${\mathcal{I}}$. 
For the opposite inclusion, we show that if $F \in {\mathcal{I}}$ is $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous of bidegree $(j,k)$, then $F\in \ker(\Phi')$. To see why, note that by part~(1), $F\in {\mathcal{I}}$ implies that \[ F({\mathbf{T}},X {\mathbf{T}}^{\mu_1},Y{\mathbf{T}}^{\mu_2}) = (\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X) H \] for some $H \in {\mathbb{K}}[{\mathbf{T}},X,Y]$. But being $({\mathbf{X}},{\mathbf{Y}})$-bihomogeneous of bidegree $(j,k)$ implies that $F({\mathbf{T}},X {\mathbf{T}}^{\mu_1},Y{\mathbf{T}}^{\mu_2}) = X^j Y^k F({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2})$, so that \begin{equation} \label{eqinKTX0Y0} X^j Y^k F({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = (\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X) H. \end{equation} However, $\alpha_{d-\mu_1}$ and $\beta_{d-\mu_2}$ are nonzero and relatively prime, so that $\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X$ is irreducible in ${\mathbb{K}}[{\mathbf{T}},X,Y]$ and hence the only way it can divide the left-hand side of \eqref{eqinKTX0Y0} is for the left-hand side to vanish. But when this happens, we get $F\in \ker(\Phi')$. \end{proof} \begin{proposition}\label{2.2} The ideal ${\mathcal{I}}'$ is minimally generated by the pencils \begin{equation}\label{pencils}T_1X_{i-1}-T_0X_i,\, T_1Y_{j-1}-T_0Y_j, \ 1\leq i\leq\mu_1,\,1\leq j\leq\mu_2, \end{equation} and the quadrics \begin{itemize} \item[] $X_iX_j-X_{i-1}X_{j+1}, 1\leq i\leq j\leq\mu_1-1$, \item[] $Y_iY_j-Y_{i-1}Y_{j+1}, 1\leq i\leq j\leq\mu_2-1$, \item[] $X_iY_j-X_{i-1}Y_{j+1},\, 1\leq i\leq\mu_1,\,0\leq j\leq\mu_2-1$. \end{itemize} Moreover, this family is a minimal Gr\"obner basis of ${\mathcal{I}}'$ for the monomial order grevlex $T_0\succ T_1\succ X_0\succ \dotsb\succ X_{\mu_1}\succ Y_0\succ\dotsb\succ Y_{\mu_2}$. \end{proposition} \begin{remark}\label{numm} The number of elements of the family of minimal generators given in Proposition \ref{2.2} is equal to $\mu_1+\mu_2+\binom{\mu_1}{2}+\binom{\mu_2}{2}+\mu_1\mu_2$. 
\end{remark} \begin{proof}[Proof of Proposition \ref{2.2}] We first show that the binomials above form a Gr\"obner basis of ${\mathcal{I}}'$ for the monomial order stated above. The leading terms of this family are the following: \begin{itemize} \item[] $T_1X_{i},\, T_1Y_{j}, \ 0\leq i\leq\mu_1-1,\,0\leq j\leq\mu_2-1$, \item[] $X_iX_j,\, 1\leq i\leq j\leq\mu_1-1$, \item[] $Y_iY_j,\, 1\leq i\leq j\leq\mu_2-1$, \item[] $X_iY_j,\, 1\leq i\leq\mu_1,\,0\leq j\leq\mu_2-1$. \end{itemize} Due to \eqref{phiip}, we deduce straightforwardly that ${\mathcal{I}}'$ is a \textit{trihomogeneous} ideal in the groups of variables $({\mathbf{T}},{\mathbf{X}},{\mathbf{Y}})$, so it is enough to test membership for elements of this kind. In what follows, we refer to such a polynomial as trihomogeneous, or if we want to specify the degrees, as $(i,j,k)$-homogeneous. As ${\mathcal{I}}'$ is a prime ideal, a minimal set of generators consists of a system of irreducible elements. Given a nonzero irreducible $(i,j,k)$-trihomogeneous element $F_{i,j,k}$, if its leading monomial is not divisible by any of the leading terms of the binomials in the family above, then it must be of one of the following forms: \begin{enumerate} \item $T_0^i X_\ell,\,1\leq \ell\leq \mu_1$ (so $j=1,\,k=0$), \item $T_0^iY_\ell,\,1\leq \ell\leq \mu_2$ (so $j=0,\,k=1$), \item $T_1^i X_{\mu_1}^jY_{\mu_2}^k$, \item $X_0^{j'}X_\ell^{\{0,1\}}X_{\mu_1}^{j''}Y_{\mu_2}^k,\, 1 \leq \ell \leq \mu_1-1$ (so $i=0$), \item $X_{\mu_1}^{j}Y_0^{k'}Y_{\ell}^{\{0,1\}}Y_{\mu_2}^{k''},\, 1 \leq \ell \leq \mu_2-1$ (so $i=0$), \end{enumerate} with $j'+j''\in\{j, j-1\},\,k'+k''\in\{k,k-1\}$. Here, $X_\ell^{\{0,1\}}$ means that there are only two possible exponents for $X_\ell$: $0$ or $1$. We deal with each of these cases: \begin{enumerate} \item Any other monomial appearing in the expansion of $F_{i,1,0}$ must be of the form $T_0^{i'}T_1^{i''}X_{\ell'}$ with $i'+i''=i,\,\ell'>\ell$.
After the specialization given by \eqref{phiip}, followed by setting $X=Y=s$, we get that $T_0^i X_\ell$ gets converted into $T_0^{i+\mu_1-\ell}T_1^{\ell}s$, while $T_0^{i'}T_1^{i''}X_{\ell'}$ maps to $T_0^{i'+\mu_1-\ell'}T_1^{i''+\ell'}s$. We have $i''+\ell'>\ell$, so the image of the leading term cannot be cancelled, which shows that such a polynomial cannot be in the kernel. \item The same argument used in (1) applies here. \item After specializing the polynomial with \eqref{phiip}, we get that $T_1^iX_{\mu_1}^jY_{\mu_2}^k$ maps into $T_1^{i+j\mu_1+k\mu_2}s^{j+k}$, and any other nonzero term of $F_{i,j,k}$ is converted into a multiple of $T_0$. So, $F_{i,j,k}$ cannot be in the kernel of $\Phi'$, and hence this monomial cannot be the leading monomial of any element of ${\mathcal{I}}'$. \item As $F_{0,j,k}$ is trihomogeneous, due to the way we defined the monomial order, any other monomial in the expansion of $F_{0,j,k}$ must be a multiple of $Y_{\mu_2}^k$. As we assumed this polynomial irreducible, this forces $k=0$, and in fact the leading monomial is $X_0^{j'}X_\ell^{j'''}X_{\mu_1}^{j''}$, with $j'''\in\{0,1\}$. Applying \eqref{phiip}, it becomes $T_0^{\mu_1j'+(\mu_1-\ell)j'''}T_1^{\ell j'''+\mu_1j''}s^j$. As before, any other monomial in $F_{0,j,0}$ maps to a strictly larger power of $T_1$, hence the specialized polynomial cannot be identically zero. This shows that no element in ${\mathcal{I}}'$ can have this leading term. \item As $X_{\mu_1}Y_\ell$ is one of the leading terms of the quadrics in the statement of the claim, we have that if $Y_\ell$ actually appears in the monomial, then $j=0$, and this case can be solved as in (4). Suppose then that this is not the case. The leading monomial then turns into $X_{\mu_1}^{j}Y_0^{k'}Y_{\mu_2}^{k''}$. As $X_{\mu_1}Y_0$ is also one of the leading terms of the quadrics above, we now have that either $j=0$ or $k'=0$.
The case $j=0$ gets solved as before, and in the other one, we get that the leading monomial actually is $X_{\mu_1}^{j}Y_{\mu_2}^k$, which is the case we have dealt with in (3). \end{enumerate} So, we get that the family of elements in the claim is a Gr\"obner basis of ${\mathcal{I}}'$. In particular, they generate this ideal. It is easy to see that it is a minimal Gr\"obner basis, as the leading terms all have total degree $2$ and they are pairwise different. To show that it is also a minimal set of generators, note that all of them have total degree $2$, and hence if one of these binomials is a combination of the others, it must be a ${\mathbb{K}}$-linear combination of them. Choose the polynomial in this nontrivial linear combination with the highest leading term among all the polynomials in the combination. This highest leading term cannot be cancelled by any of the other summands, which is a contradiction. So, the family is minimal, and this concludes the proof of the proposition. \end{proof} Now we search for trihomogeneous nontrivial elements of ${\mathcal{I}}$. \begin{lemma}\label{util2p} If $A_{i,j,k}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ is $(i,j,k)$-trihomogeneous with $k\geq1$, such that $$A_{i,j,k}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2})=\alpha_{d-\mu_1}\cdot q_{i+(j+1)\mu_1+k\mu_2-d},$$ with $q_{i+(j+1)\mu_1+k\mu_2-d}\in{\mathbb{K}}[{\mathbf{T}}]$ nonzero, homogeneous of degree $i+(j+1)\mu_1+k\mu_2-d$, then there exists $B_{i,j+1,k-1}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ $(i,j+1,k-1)$-trihomogeneous such that \begin{equation}\label{oxix} A_{i,j,k}-B_{i,j+1,k-1}\in{\mathcal{I}}.
\end{equation} \end{lemma} \begin{proof} The polynomial $0\neq q_{i+(j+1)\mu_1+k\mu_2-d}\beta_{d-\mu_2}\in{\mathbb{K}}[{\mathbf{T}}]$ has degree $$i+(j+1)\mu_1+k\mu_2-d+d-\mu_2=i+(j+1)\mu_1+(k-1)\mu_2,$$ so we can write it as $B_{i,j+1,k-1}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2})$ for some $B_{i,j+1,k-1}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ $(i,j+1,k-1)$-trihomogeneous. This polynomial satisfies the claim: $\Phi$ maps both $A_{i,j,k}$ and $B_{i,j+1,k-1}$ to $\alpha_{d-\mu_1}^{j+1}\beta_{d-\mu_2}^{k}\,q_{i+(j+1)\mu_1+k\mu_2-d}\,s^{j+k}$, so their difference lies in ${\mathcal{I}}=\ker(\Phi)$. \end{proof} \begin{remark} All choices of $B_{i,j+1,k-1}$ in \eqref{oxix} must satisfy that $B_{i,j+1,k-1}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2})$ is equal to a fixed polynomial. Hence, two different choices for this form are equivalent modulo ${\mathcal{I}}'$. \end{remark} \subsection{Minimal Generators} \label{33} Now we exhibit a family of minimal generators of ${\mathcal{I}}$. Let $\mathbf{V} = (V_0, V_1, V_2)$ be new variables, set $\mu=\mu_1+\mu_2$, and consider the monomial ideal in ${\mathbb{K}}[\mathbf{V}]$: \begin{equation}\label{Smu} S_{\mu_1,\mu_2,d}=\langle V_0^iV_1^jV_2^k \mid (i,j,k)\ \in({\mathbb{Z}}_{\geq0})^3\,\mbox{with}\, i+\mu_1j+\mu_2k\geq d-\mu\rangle. \end{equation} By the Hilbert Basis Theorem, $S_{\mu_1,\mu_2,d}$ has a unique minimal set of monomial generators: $$S_{\mu_1,\mu_2,d}=\langle V_0^{i_1}V_1^{j_1}V_2^{k_1},\dotsb, V_0^{i_N}V_1^{j_N}V_2^{k_N} \rangle. $$ \begin{remark}\label{rett} If $d\geq3$, then none of $V_0$, $V_1$, $V_2$ belongs to $S_{\mu_1,\mu_2,d}$, as $0\leq\mu_1\leq\mu_2$ and $d-\mu\geq\frac{d}2>1$. \end{remark} In the following lemma, whose proof is straightforward, we make the minimal generators of $S_{\mu_1,\mu_2,d}$ explicit. \begin{lemma} \label{useful} The minimal generators of $S_{\mu_1,\mu_2,d}$ are of the form $V_0^{d-(a+1)\mu_1-(b+1)\mu_2}V_1^{a}V_2^{b},$ with $a,b \in{\mathbb{Z}}_{\geq0}, a\mu_1+b\mu_2<d-\mu, $ or $V_1^aV_2^b$, with $(a,b)$ in the Hilbert Basis of $\{(x,y)\in({\mathbb{Z}}_{\geq0})^2\mid\mu_1x+\mu_2y\geq d-\mu\}$.
\end{lemma} Write ${\underline{v}}_\ell=(i_\ell, j_\ell, k_\ell)$, and set \begin{equation}\label{sl} s_\ell=i_\ell+(j_\ell+1)\mu_1+(k_\ell+1)\mu_2-d\geq0. \end{equation} The following result is needed to prove Theorem \ref{mtm} below. \begin{lemma}\label{prima} For $\ell=1,\dotsb, N$, we have the following: \begin{enumerate} \item $0\leq s_\ell<\mu_2$. \item $s_\ell = 0$ whenever $i_\ell > 0$. \end{enumerate} \end{lemma} \begin{proof} (1) An exponent ${\underline{v}}_\ell=(i_\ell,j_\ell,k_\ell)$ appears among the minimal generators of $S_{\mu_1,\mu_2,d}$ if and only if the following three triplets either do not belong to $({\mathbb{Z}}_{\geq0})^3$ or the corresponding monomial does not belong to the monomial ideal: $$(i_\ell-1,j_\ell,k_\ell),\,(i_\ell,j_\ell-1,k_\ell),\,(i_\ell,j_\ell,k_\ell-1).$$ If one or two of the exponents are zero, then we need to consider fewer cases, so w.l.o.g.\ we can assume that the three of them are positive. In the first case, we have that $s_\ell=0<\mu_2$, in the second, we get $0\leq s_\ell<\mu_1\leq \mu_2$, and in the third, we have $0\leq s_\ell<\mu_2$. (2) If $i_\ell > 0$ and $s_\ell > 0$, then $V_0^{i_\ell-1}V_1^{j_\ell}V_2^{k_\ell} \in S_{\mu_1,\mu_2,d}$. It follows that $V_0^{i_\ell}V_1^{j_\ell}V_2^{k_\ell} = V_0 \cdot V_0^{i_\ell-1}V_1^{j_\ell}V_2^{k_\ell}$ cannot be a minimal generator. 
\end{proof} For each ${\underline{v}}_\ell$ and $0\leq t\leq s_\ell$, let $A^t_{i_\ell,j_\ell,k_\ell+1}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ be a trihomogeneous polynomial such that \begin{equation}\label{ambi}A^t_{i_\ell,j_\ell,k_\ell+1}({\mathbf{T}},{\mathbf{T}}^{\mu_1}, {\mathbf{T}}^{\mu_2})=\alpha_{d-\mu_1} T_0^tT_1^{s_\ell-t} \end{equation} (it is easy to see that there always exists such a polynomial, and moreover any two choices for $A^t_{i_\ell,j_\ell,k_{\ell}+1}$ coincide modulo ${\mathcal{I}}'$), and set \begin{equation}\label{cozi} \Psi^t_{{i_\ell,j_\ell,k_\ell+1}}=A^t_{{i_\ell,j_\ell,k_\ell+1}}-B^t_{i_\ell,j_\ell+1,k_\ell}, \end{equation} where $B^t_{i_\ell,j_\ell+1,k_\ell}$ is the polynomial associated to $A^t_{i_\ell,j_\ell,k_\ell+1}$ in \eqref{oxix}. \begin{theorem}\label{mtm} The ideal ${\mathcal{I}}$ is minimally generated by a set of minimal generators of ${\mathcal{I}}'$ plus the family \begin{equation}\label{flii} \{\Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\}. \end{equation} \end{theorem} \begin{remark}\label{rkr} Note that the cardinality of \eqref{flii} is equal to $\sum_{\ell=1}^N(s_\ell+1)$. \end{remark} \begin{proof}[Proof of Theorem \ref{mtm}] Let $g = \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X \in {\mathbb{K}}[{\mathbf{T}},X,Y]$. Theorem~\ref{phiipproperties} implies that $\Phi'({\mathcal{I}}) \subseteq {\mathcal{J}} = \langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle = g{\mathbb{K}}[{\mathbf{T}},X,Y]$ for the map $\Phi'$ defined in \eqref{phiip}. Since $\Phi'$ preserves the bigrading and $\Psi^t_{i_\ell,j_\ell,k_\ell+1} \in {\mathcal{I}}$ by construction, we have inclusions \[ \Phi'(\langle \Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\rangle) \subseteq \Phi'({\mathcal{I}}) \subseteq (g{\mathbb{K}}[{\mathbf{T}},X,Y])_{\ge0,*}.
\] where $(\,\cdot\,)_{\ge0,*}$ denotes the span of the bihomogeneous components whose first (${\mathbf{T}}$-)degree is nonnegative. We claim that these inclusions are equalities, i.e., \begin{equation} \label{2.9.claim} \Phi'(\langle \Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\rangle) = \Phi'({\mathcal{I}}) = (g {\mathbb{K}}[{\mathbf{T}},X,Y])_{\ge0,*}. \end{equation} To prove this, first recall that $\Psi^t_{{i_\ell,j_\ell,k_\ell+1}}=A^t_{{i_\ell,j_\ell,k_\ell+1}}-B^t_{i_\ell,j_\ell+1,k_\ell}$, where \begin{align*} A^t_{i_\ell,j_\ell,k_\ell+1}({\mathbf{T}},{\mathbf{T}}^{\mu_1}, {\mathbf{T}}^{\mu_2})&=\alpha_{d-\mu_1} T_0^tT_1^{s_\ell-t}\\ B^t_{i_\ell,j_\ell+1,k_\ell}({\mathbf{T}},{\mathbf{T}}^{\mu_1}, {\mathbf{T}}^{\mu_2}) &= \beta_{d-\mu_2} T_0^tT_1^{s_\ell-t}. \end{align*} Since $A^t_{i_\ell,j_\ell,k_\ell+1}$ and $B^t_{i_\ell,j_\ell+1,k_\ell}$ are trihomogeneous, it follows easily that \begin{align*} \Phi'(\Psi^t_{{i_\ell,j_\ell,k_\ell+1}}) &= \Psi^t_{{i_\ell,j_\ell,k_\ell+1}}({\mathbf{T}},{\mathbf{T}}^{\mu_1}X, {\mathbf{T}}^{\mu_2}Y)\\ &= T_0^tT_1^{s_\ell-t}X^{j_\ell} Y^{k_\ell} \big(\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\big) = T_0^tT_1^{s_\ell-t}X^{j_\ell} Y^{k_\ell} g. \end{align*} It suffices to show $(g {\mathbb{K}}[{\mathbf{T}},X,Y])_{\ge0,*} \subseteq \Phi'(\langle \Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\rangle)$. Suppose that $H g \in \big(g{\mathbb{K}}[{\mathbf{T}},X,Y]\big)_{\ge0,*}$, and let $T_0^u T_1^{s-u} X^j Y^k$ be a monomial appearing in $H$. Since $\deg(g) = (d-\mu,1),\, \deg(X) = (-\mu_1,1),\, \deg(Y) = (-\mu_2,1)$, we have \[ \deg(T_0^u T_1^{s-u} X^j Y^k g) = (s-j\mu_1-k\mu_2+d-\mu,j+k+1), \] so that $i := s-j\mu_1-k\mu_2+d-\mu \ge 0$. Then $i+ j\mu_1+k\mu_2-d+\mu = s\ge 0$, which implies that $V_0^i V_1^j V_2^k \in S_{\mu_1,\mu_2,d}$ from \eqref{Smu}. It follows that for some $1 \le \ell \le N$, we have \[ i \ge i_\ell,\ j \ge j_\ell,\ k \ge k_\ell, \] from which we conclude $s \ge s_\ell$.
It is then straightforward to find an integer $0 \le t \le s_\ell$ and a monomial ${\mathbf{T}}^{\underline{u}} {\mathbf{X}}^{\underline{v}} {\mathbf{Y}}^{\underline{w}}$ of tridegree $(i-i_\ell,j-j_\ell,k-k_\ell)$ such that \[ \Phi'({\mathbf{T}}^{\underline{u}} {\mathbf{X}}^{\underline{v}} {\mathbf{Y}}^{\underline{w}} \,\Psi^t_{{i_\ell,j_\ell,k_\ell+1}}) = T_0^u T_1^{s-u} X^j Y^k g. \] It follows that $H g \in \Phi'(\langle \Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\rangle)$, which completes the proof of \eqref{2.9.claim}. This tells us that the ideals $\langle \Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\rangle$ and ${\mathcal{I}}$ have the same image under $\Phi'$. Since ${\mathcal{I}}' = \ker(\Phi')$, we conclude that ${\mathcal{I}}$ is generated by ${\mathcal{I}}'$ and $\{\Psi^t_{i_\ell,j_\ell,k_\ell+1}\mid 1\leq \ell\leq N,\, 0\leq t\leq s_\ell\}$. It remains to prove that this generating set is minimal. If $d=2,\,\mu_1=0,\,\mu_2=1,$ then we get by direct calculation that $T_1 Y_0 - T_0 Y_1$ generates ${\mathcal{I}}'$, and by writing $\alpha_{d-\mu_1} = \alpha_2 = aT_0^2 + bT_0T_1 + cT_1^2$ and $\beta_{d-\mu_2} = \beta_1,$ we can express the elements in \eqref{flii} as $$\Psi^0_{1,0,1} = A^0_{1,0,1} - B^0_{1,1,0} = (aT_0 Y_0 + bT_0Y_1 + cT_1 Y_1) - \beta_1(T_0,T_1) X_0$$ and $ \Psi^0_{0,0,2} = A^0_{0,0,2} - B^0_{0,1,1} = \alpha_2(Y_0,Y_1) - \beta_1(Y_0,Y_1) X_0.$ It is easy to see that these three elements form a minimal set of generators of ${\mathcal{I}}.$ The remaining cases are $\mu_1=\mu_2=0$ or $d\geq3$. In the first case, a direct computation shows that ${\mathcal{I}}'=0.$ In the latter, thanks to Remark \ref{rett}, we see that the only elements in the family of generators of total degree two are the generators of ${\mathcal{I}}'$, which is a minimal generating set of ${\mathcal{I}}'$ by Proposition \ref{2.2}.
Suppose then that an element of \eqref{flii} can be written as a polynomial combination of the others modulo ${\mathcal{I}}'$, i.e., \[ \Psi^{t_0}_{{i_{\ell_0},j_{\ell_0},k_{\ell_0}+1}}-\sum Q^{t,\ell} \hskip1pt \Psi^t_{{i_\ell,j_\ell,k_\ell+1}}\in{\mathcal{I}}'. \] Applying $\Phi'$, we obtain \begin{equation} \label{mostro} T_0^{t_0} T_1^{s_{\ell_0} - t_0} X^{j_{\ell_0}} Y^{k_{\ell_0}} g = \sum \Phi'(Q^{t,\ell}) \hskip1pt T_0^{t} T_1^{s_{\ell} - t} X^{j_{\ell}} Y^{k_{\ell}} g \end{equation} in ${\mathbb{K}}[{\mathbf{T}},X,Y]$. Consider the map $\pi : {\mathbb{K}}[{\mathbf{T}},X,Y] \to {\mathbb{K}}[V_0^{\pm1},V_1,V_2]$ defined by \[ (T_0,T_1,X,Y) \longmapsto (V_0,V_0,V_0^{-\mu_1}V_1,V_0^{-\mu_2}V_2). \] If we divide \eqref{mostro} by $g$ and apply $\pi$, we obtain an equation \[ V_0^{s_{\ell_0} - \mu_1 j_{\ell_0} - \mu_2 k_{\ell_0}} V_1^{j_{\ell_0}} V_2^{k_{\ell_0}} =\sum \pi(\Phi'(Q^{t,\ell})) \hskip1pt V_0^{s_{\ell}- \mu_1 j_{\ell} - \mu_2 k_{\ell}} V_1^{j_{\ell}} V_2^{k_{\ell}} \] in ${\mathbb{K}}[V_0^{\pm1},V_1,V_2]$. However, one checks that $\pi(\Phi'(X_i)) = \pi(T_0^{\mu_1-i}T_1^i X) = V_1$, and similarly $\pi(\Phi'(Y_i)) = V_2$. It follows that $\pi(\Phi'(Q^{t,\ell}))$ is a polynomial in $\mathbf{V} = (V_0,V_1, V_2)$. Hence, if we multiply each side by $V_0^{d-\mu}$, we obtain the equation \begin{equation} \label{mostro2} V_0^{i_{\ell_0}} V_1^{j_{\ell_0}} V_2^{k_{\ell_0}} =\sum \pi(\Phi'(Q^{t,\ell})) \hskip1pt V_0^{i_{\ell}} V_1^{j_{\ell}} V_2^{k_{\ell}} \end{equation} in ${\mathbb{K}}[\mathbf{V}]$. This is impossible since the monomials \eqref{mostro2} are minimal generators of the ideal $S_{\mu_1,\mu_2,d} \subseteq {\mathbb{K}}[\mathbf{V}]$ defined in \eqref{Smu}. \end{proof} \begin{example} Set $\mu_1=3,\,\mu_2=5$, and $d=17$. 
The exponents of the minimal generators of the monomial ideal $S_{3,5,17}$ can be easily computed to be $$\{(9,0,0),\,(0,3,0),\,(0,0,2),\,(1,1,1),\,(0,2,1),\,(4,0,1),\,(3,2,0),\,(6,1,0)\}.$$ The number $s_\ell$ defined in \eqref{sl} is always equal to $0$ except for $(0,2,1)$, where it is $2$, and for $(0,0,2)$, where it is $1$. Due to Remark \ref{rkr}, the family of minimal generators \eqref{flii} of ${\mathcal{I}}$ modulo ${\mathcal{I}}'$ then has cardinality $11$. In addition, thanks to Remark \ref{numm}, we know that ${\mathcal{I}}'$ has $36$ minimal generators. \par To confirm all these numbers with a computational example, we set $\alpha_{d-\mu_1}=T_0^{14}$, and $\beta_{d-\mu_2}=T_1^{12}$. An explicit computation with \emph{Macaulay2} (\cite{mac2}) gives the following set of generators of ${\mathcal{I}}'$: \begin{itemize} \item[] $T_1Y_4 - T_0Y_5 , T_1Y_3 - T_0Y_4 , T_1Y_2 - T_0Y_3 , T_1Y_1 - T_0Y_2 , T_1Y_0-T_0Y_1$, \item[] $T_1X_2-T_0X_3, T_1X_1-T_0X_2, T_1X_0-T_0X_1$, \item[] $Y_4^2-Y_3Y_5,\,Y_4Y_3-Y_2Y_5,\,Y_4Y_2-Y_1Y_5,\,Y_1Y_4-Y_0Y_5,\,Y_3^2-Y_1Y_5,\,Y_3Y_2-Y_0Y_5,\,Y_1Y_3-Y_0Y_4,\,Y_2^2-Y_0Y_4,\,Y_1Y_2-Y_0Y_3,\,Y_1^2-Y_0Y_2$, \item[] $X_3Y_4-X_2Y_5,\,X_2Y_4-X_1Y_5,\,X_1Y_4-X_0Y_5,\,X_3Y_3-X_1Y_5,\,X_2Y_3-X_0Y_5,\,\\ X_1Y_3-X_0Y_4,\,X_3Y_2-X_0Y_5,\,X_2Y_2-X_0Y_4,\,X_1Y_2-X_0Y_3,\,X_3Y_1-X_0Y_4,\,\\ X_2Y_1-X_0Y_3,\,X_1Y_1-X_0Y_2,\,X_3Y_0-X_0Y_3,\,X_2Y_0-X_0Y_2,\,X_1Y_0-X_0Y_1$, \item[] $X_2^2-X_1X_3,\,X_1X_2-X_0X_3,\,X_1^2-X_0X_2$. \end{itemize} Then the following elements complete a system of minimal generators of ${\mathcal{I}}$: \begin{itemize} \item[] $Y_0^2Y_1-X_3Y_5^2,\,Y_0^3-X_2Y_5^2,\,X_0^2Y_0Y_2-X_3^3Y_5,\,X_0^2Y_0Y_1-X_2X_3^2Y_5,\,\\ X_0^2Y_0^2-X_1X_3^2Y_5,\,X_3^4-X_0^3Y_0$, \item[] $T_0X_0Y_0^2-T_1X_3^2Y_5,\,T_0^4Y_0^2-T_1^4X_0Y_5,\,T_1^3X_3^3-T_0^3X_0^2Y_0,\,T_1^6X_3^2-T_0^6X_0Y_0,\,\\ T_1^9X_3-T_0^9Y_0$.
\end{itemize} \end{example} \section{Comparison with Previous Work}\label{3} Minimal generators for ${\mathcal{I}}$ have been previously studied and made explicit by Kustin, Polini and Ulrich in \cite{KPU11}. One of their main results \cite[Theorem 3.6]{KPU11} states that ${\mathcal{I}}$ (${\mathcal A}$ in their paper) is generated by ${\mathcal{I}}'$ (their $H$) modulo an ideal generated by eligible tuples which can be seen to be in one-to-one correspondence with our \eqref{flii}. Their elements can be constructed explicitly (see \cite[Definition 3.5]{KPU11}), although in a more complicated way. Indeed, on page 25 they write{:}\hskip1pt\emph{``we obtain closed formulas for the defining equations of ${\mathcal R}(I)$ {\rm(}Theorem 3.6\hskip.5pt{\rm)} which turn out to be tremendously complicated despite the seemingly strong assumptions on $I$!''} The construction of the equations is similar to what we have done above: they start with the forms $\alpha_{d-\mu_1},\,\beta_{d-\mu_2}$, and after some calculations in \cite[Definition 3.3]{KPU11}, they produce the polynomials that belong to the list of minimal generators (\cite[Definition 3.5]{KPU11}). The strategy used to prove their result goes as follows. The quotient ring $A = {\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]/{\mathcal{I}}'$ is the coordinate ring of the 3-dimensional rational normal scroll $S_{1,\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+3}$ coming from the lattice polytope shown in Figure~\ref{PolytopeOfA}. These are the ``rational normal scrolls'' in the title of \cite{KPU11}.
\begin{figure}[t] \[ {\psset{unit=1pt}\begin{pspicture}(210,55) \psline[linestyle=dashed](0,0)(15,15)(135,15) \psline(135,15)(195,45)(15,45)(0,0) \psline[linestyle=dashed](15,15)(15,45) \psline(0,0)(30,0)(135,15) \psline(30,0)(195,45) \rput[bl](61,19){$\mu_1$} \rput[bl](81,49){$\mu_2$} \rput[bl](16,3.5){$1$} \end{pspicture}} \] \caption{The Lattice Polytope for $S_{1,\mu_1,\mu_2}$} \label{PolytopeOfA} \end{figure} The inclusion ${\mathcal{I}}' \subseteq {\mathcal{I}}$ induces a surjection \begin{equation} \label{AtoRees} A \twoheadrightarrow {\mathcal{R}}(I). \end{equation} As noted in \cite{KPU11}, the ring $A$ gives a better approximation to the Rees algebra ${\mathcal{R}}(I)$ than the symmetric algebra of $I$. In fact, ${\mathcal{R}}(I) = A/{\mathcal{I}} A$, where ${\mathcal{I}} A$ is a height one prime ideal. In \cite[Theorem 1.11]{KPU11}, a monomial ideal $K \subseteq A$ is defined such that $K^{(d-\mu)}$, its $(d-\mu)$-th symbolic power, is isomorphic to ${\mathcal{I}} A$, keeping the bigrading. Then the minimal (monomial) generators of $K^{(d-\mu)}$ are made explicit (\cite[Theorem 3.2]{KPU11}), and from here lifted to the whole ring ${\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$. To compare this to our approach, note that the back face of the polytope in Figure~\ref{PolytopeOfA} is the polygon $P$ appearing in Figure~\ref{polygonfig}, which gives the rational normal surface $S_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$. For us, $S_{\mu_1,\mu_2}$ is more natural geometrically (see \eqref{BGIdiagram}), while \cite{KPU11} uses $S_{1,\mu_1,\mu_2}$ because it leads naturally to the surjection \eqref{AtoRees}. In a different direction, our situation was also studied by Lin and Polini in \cite{LP14}.
Given a homogeneous complete intersection $J = \langle f_1,\dots,f_r\rangle$ in a polynomial ring and an integer $d\ge \max\{\deg(f_i)\}$, they consider the truncation $J_{\ge d}$ of $J$ in degrees $\ge d$ and prove several results about the Rees algebra ${\mathcal{R}}(J_{\ge d})$, including a complete description when $d$ is large (\cite[Theorem 3.9]{LP14}). To relate their results to our situation, note that the ideal $J=\langle\alpha_{d-\mu_1},\,\beta_{d-\mu_2}\rangle\subseteq{\mathbb{K}}[{\mathbf{T}}]$ is clearly a complete intersection, and it is straightforward to check that our ideal $I = \langle \alpha_{d-\mu_1} {\mathbf{T}}^{\mu_1}, \beta_{d-\mu_2} {\mathbf{T}}^{\mu_2}\rangle$ is precisely $J_{\ge d}$. To analyze this case, Lin and Polini set ${\mathfrak M}=\langle T_0, T_1\rangle \subseteq {\mathbb{K}}[{\mathbf{T}}]$ and $M = {\mathfrak M}^{\mu_1}(\mu_1-d)\oplus{\mathfrak M}^{\mu_2}(\mu_2-d)$. The surjection $M \twoheadrightarrow I$ defined by $(u,v)\mapsto u\alpha_{d-\mu_1}+v\beta_{d-\mu_2}$ gives a surjection ${\mathcal{R}}(M)\twoheadrightarrow {\mathcal{R}}(I)$ of Rees algebras, which turns out to be precisely the map \eqref{AtoRees}. Their main result about ${\mathcal{R}}(I)$ in this case (\cite[Theorem 3.11]{LP14}) is based on the methods of \cite{KPU11}. Subsequently, in \cite[Example 3.20]{Madsen}, Madsen studied the Rees algebra of an arbitrary height two ideal of ${\mathbb{K}}[{\mathbf{T}}]$ generated in degree $d$. Madsen builds up the Hilbert-Burch matrix of the given ideal one column at a time. The cokernels $E_1, E_2, \dots$ of the partial Hilbert-Burch matrices give surjections \[ E_1 \twoheadrightarrow E_2 \twoheadrightarrow \cdots \] that induce surjections of Rees algebras \begin{equation} \label{madsenapproach} S \twoheadrightarrow {\mathcal{R}}(E_1) \twoheadrightarrow {\mathcal{R}}(E_2) \twoheadrightarrow \cdots \end{equation} for a suitable polynomial ring $S$.
If ${\mathcal{R}}(E_i) = S/{\mathcal{K}}_{i}$, then ${\mathcal{K}}_i \subseteq {\mathcal{K}}_{i+1}$ and \[ {\mathcal{R}}(E_{i+1}) = {\mathcal{R}}(E_i)/{\mathcal{K}}_{i+1}{\mathcal{R}}(E_i). \] By the incremental construction of the $E_i$'s, ${\mathcal{K}}_{i+1}{\mathcal{R}}(E_i)$ has a relatively simple description (\cite[Proposition 3.1]{Madsen}). The main result (\cite[Theorem 3.9]{Madsen}) describes some of the minimal generators of ${\mathcal{K}}_{i+1}{\mathcal{R}}(E_i)$. When applied to our ideal $I= \langle \alpha_{d-\mu_1} {\mathbf{T}}^{\mu_1}, \beta_{d-\mu_2} {\mathbf{T}}^{\mu_2}\rangle$, Madsen's results are strong enough to give a complete description of the defining equations ${\mathcal{I}}$ of ${\mathcal{R}}(I)$. In the sequence of Rees algebras \eqref{madsenapproach}, the last two turn out to be exactly \eqref{AtoRees}, which in the notation of \cite[Example 3.20]{Madsen} is written ${\mathcal{R}}(E) \to {\mathcal{R}}(I)$. Madsen uses the modules $M = {\mathfrak M}^{\mu_1}(\mu_1-d)\oplus{\mathfrak M}^{\mu_2}(\mu_2-d)$ and $F = E^{**}$ and notes that $E \simeq M$ and $F \simeq {\mathbb{K}}[{\mathbf{T}}](\mu_1-d)\oplus {\mathbb{K}}[{\mathbf{T}}](\mu_2-d)$, so that $E = F_{\ge d}$. (Madsen works with $I(d)$ rather than $I$, so $M$ is written ${\mathfrak M}^{\mu_1}(\mu_1)\oplus{\mathfrak M}^{\mu_2}(\mu_2)$ in \cite{Madsen}.) Madsen also notes that ${\mathcal{R}}(F)$ is the ring ${\mathbb{K}}[{\mathbf{T}},X,Y]$ defined in Section~\ref{1}. 
The inclusion $E \subseteq F$ induces a map of Rees algebras ${\mathcal{R}}(E) \hookrightarrow {\mathcal{R}}(F) = {\mathbb{K}}[{\mathbf{T}},X,Y]$ that fits into the commutative diagram \begin{equation} \label{Madsen.sec3} \begin{array}{c} \SelectTips{cm}{} \xymatrix@C=10pt{ {\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}] \ar@{->>}[d]^(.55){\Phi'} \ar@{->>}[ddr]^\Phi\\ {\mathcal{R}}(E) \ar@{^{(}->}[d] \ar[dr]\\ {\mathbb{K}}[{\mathbf{T}},X,Y] \ar[dr]^\phi & {\mathcal{R}}(I)\ar@{^{(}->}[d] \\ & {\mathbb{K}}[{\mathbf{T}},s]} \end{array} \end{equation} where $\Phi', \Phi, \phi$ are from the diagram \eqref{Reesdiagram}. Madsen's approach refines \eqref{Reesdiagram} by regarding $\Phi'$ and $\Phi$ as surjections onto their images, which are the Rees algebras ${\mathcal{R}}(E)$ and ${\mathcal{R}}(I)$ shown in \eqref{Madsen.sec3}. Since ${\mathcal{I}}' = \ker(\Phi')$ and ${\mathcal{R}}(E) = {\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]/{\mathcal{I}}'$, Madsen observes that it suffices to know ${\mathcal{I}} {\mathcal{R}}(E)$, which is given by the intersection \begin{equation} \label{sec3.intersection1} {\mathcal{I}} {\mathcal{R}}(E) = ( g{\mathcal{R}}(F)) \cap {\mathcal{R}}(E), \end{equation} where $g = \alpha_{d-\mu_1}Y - \beta_{d-\mu_2}X$ as in the proof of Theorem~\ref{mtm}. Note that the height one prime ideal ${\mathcal{I}} A$ in \cite{KPU11} is precisely ${\mathcal{I}} {\mathcal{R}}(E)$, since the ring $A$ in \eqref{AtoRees} is ${\mathcal{R}}(E)$. In \cite[Example 3.20]{Madsen}, Madsen refines \eqref{sec3.intersection1} to the stronger equality \begin{equation} \label{sec3.intersection2} {\mathcal{I}} {\mathcal{R}}(E) = (g{\mathcal{R}}(F))_{\ge0,*}. \end{equation} Madsen uses earlier results in the paper to explain which multiples occur and how they lift back to ${\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$.
Combining these new generators with ${\mathcal{I}}'$, Madsen obtains a complete description of the defining equations ${\mathcal{I}}$ of ${\mathcal{R}}(I)$. For us, \eqref{sec3.intersection2} is \eqref{2.9.claim} since $\Phi'({\mathcal{I}}) = {\mathcal{I}}{\mathcal{R}}(E)$. Our proof uses the explicit construction of the $\Psi^t_{i_\ell,j_\ell,k_\ell+1} \in {\mathcal{I}}$, while for Madsen, \eqref{sec3.intersection2} is a special case of a more general result (see \cite[(3.11)]{Madsen}). Overall, our treatment of ${\mathcal{I}}$ is consistent with what Madsen does;\ the main difference is that we use elementary methods that avoid the machinery of \cite{Madsen}. \section{The Rees Algebra of the Plane Curve}\label{4} Consider now a rational parametrization ${\mathbb{P}}^1\to{\mathbb{P}}^2$ of a genus zero algebraic curve $C\subseteq{\mathbb{P}}^2$ of degree $d$ defined by homogeneous elements $f_{0,d},f_{1,d},f_{2,d} \in{\mathbb{K}}[{\mathbf{T}}]$ of degree $d$ and $\gcd(f_{0,d},f_{1,d},f_{2,d})=1:$ \begin{equation}\label{f} \begin{array}{cccc} {\bf f}:&{\mathbb{P}}^1&\to&{\mathbb{P}}^2\\ &{\mathbf{t}}&\mapsto & (f_{0,d}({\mathbf{t}}): f_{1,d}({\mathbf{t}}): f_{2,d}({\mathbf{t}})) \end{array} \end{equation} We assume in addition that $d > 1$ and $f_{0,d},f_{1,d},f_{2,d}$ are linearly independent. In this section we consider the Rees algebra ${\mathcal{R}}(K)$ of the ideal $K = \langle f_{0,d},f_{1,d},f_{2,d}\rangle \subseteq {\mathbb{K}}[{\mathbf{T}}]$ and explain how ${\mathbf{f}}$ generates the diagrams \eqref{BGIdiagram} and \eqref{Reesdiagram} from the Introduction. Let the $\mu$-basis of $f_{0,d},f_{1,d},f_{2,d}$ be \[ p = \big(p_{0,\mu},p_{1,\mu},p_{2,\mu}\big)\in{\mathbb{K}}[{\mathbf{T}}]_{\mu}^3,\, q= \big(q_{0,d-\mu},q_{1,d-\mu},q_{2,d-\mu}\big)\in{\mathbb{K}}[{\mathbf{T}}]_{d-\mu}^3, \] where as usual $\mu \le d-\mu$. 
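For readers who want a concrete instance, the following sketch checks the two defining properties of a $\mu$-basis on the standard conic parametrization $f=(T_0^2,T_0T_1,T_1^2)$ (our own toy example, with $d=2$ and $\mu=1$): both $p$ and $q$ are syzygies of $f$, and the signed maximal minors of the matrix with rows $p,q$ recover $f$, as Hilbert-Burch predicts:

```python
import sympy as sp

T0, T1 = sp.symbols('T0 T1')

# assumed toy parametrization of a conic: d = 2, mu = 1
f = sp.Matrix([T0**2, T0*T1, T1**2])
p = sp.Matrix([T1, -T0, 0])      # syzygy of degree mu = 1
q = sp.Matrix([0, T1, -T0])      # syzygy of degree d - mu = 1

assert sp.expand(p.dot(f)) == 0 and sp.expand(q.dot(f)) == 0

# Hilbert-Burch: the signed 2x2 minors of the matrix with rows p, q recover f
M = sp.Matrix([[T1, -T0, 0], [0, T1, -T0]])
minors = [(-1)**j * M[:, [k for k in range(3) if k != j]].det() for j in range(3)]
assert sp.Matrix(minors) == f
```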
Following \cite{BGI}, we let \[ A =\big(A_{0,\mu_1},A_{1,\mu_1},A_{2,\mu_1}\big)\in{\mathbb{K}}[{\mathbf{T}}]_{\mu_1}^3,\,B=\big(B_{0,\mu_2},B_{1,\mu_2},B_{2,\mu_2}\big)\in{\mathbb{K}}[{\mathbf{T}}]_{\mu_2}^3 \] be a $\mu$-basis of $(p_{0,\mu},p_{1,\mu},p_{2,\mu})$ with $0 \le \mu_1 \le \mu_2$ and $\mu = \mu_1 + \mu_2$. If $\mu_1<\mu_2,$ then $A$ is uniquely defined up to a constant, and $B$ is uniquely defined (also up to a constant) modulo $A$. If $\mu_1=\mu_2$, then there are infinitely many possible choices of $A$ up to a constant. \begin{remark} \label{nonunique} If $\mu< d-\mu$, then $p$ is unique up to a constant, which implies that $\mu_1$ is uniquely determined. However, if $\mu=d-\mu$, then different choices of $p$ can lead to different values of $\mu_1$. For example, suppose that $d=6$ and $(f_{0,6},f_{1,6},f_{2,6})$ parametrizes a rational sextic in ${\mathbb{P}}^2$ with $\mu=3$. If the curve has three triple points, then \cite[Lem.\ 4.14]{CKPU} implies that in suitable coordinates, the Hilbert-Burch matrix becomes \[ \begin{pmatrix} Q_1 & Q_1\\ Q_2 & 0\\ 0 & Q_3\end{pmatrix}, \] where $Q_1, Q_2, Q_3$ are linearly independent cubics. We can choose $p$ to be any nonzero vector in the column space. Writing $p$ as a row, we have: \begin{align*} p &= (Q_1,Q_2,0) \text{ has $\mu_1 = 0$ since $Q_1,Q_2,0$ are linearly dependent.}\\ p &= (2Q_1,Q_2,Q_3) \text{ has $\mu_1 = 1$ since $2Q_1,Q_2,Q_3$ are linearly independent.} \end{align*} As we vary over all $p$, the generic value is $\mu_1=1$, but up to a constant, there are three choices of $p$ with $\mu_1 = 0$.
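The shape of this Hilbert-Burch matrix is easy to probe on a concrete (hypothetical) choice of cubics. The sketch below verifies that the signed maximal minors of such a matrix give a sextic triple $f$, and that both columns are indeed degree-$3$ syzygies of $f$; the particular cubics are our own choice and are not taken from \cite{CKPU}:

```python
import sympy as sp

T0, T1 = sp.symbols('T0 T1')

# assumed linearly independent cubics (any generic choice works)
Q1, Q2, Q3 = T0**3, T0*T1**2, T1**3

# Hilbert-Burch matrix with columns (Q1,Q2,0) and (Q1,0,Q3)
H = sp.Matrix([[Q1, Q1], [Q2, 0], [0, Q3]])

# signed maximal minors give the sextic parametrization f
f = sp.Matrix([
    H[1, 0]*H[2, 1] - H[2, 0]*H[1, 1],     #  Q2*Q3
    -(H[0, 0]*H[2, 1] - H[2, 0]*H[0, 1]),  # -Q1*Q3
    H[0, 0]*H[1, 1] - H[1, 0]*H[0, 1],     # -Q1*Q2
])

# both columns of H are then syzygies of f, of degree mu = 3 each
for col in range(2):
    assert sp.expand(H[:, col].dot(f)) == 0
```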
\end{remark} Since $(p_{0,\mu},p_{1,\mu},p_{2,\mu})$ is a syzygy on $(f_{0,d},f_{1,d},f_{2,d})$, the latter is a syzygy on the former, so that $(f_{0,d},f_{1,d},f_{2,d})$ can be decomposed as \begin{equation} \label{decom} \big(f_{0,d},f_{1,d},f_{2,d}\big) = \alpha_{d-\mu_1}\big(A_{0,\mu_1},A_{1,\mu_1},A_{2,\mu_1}\big)+\beta_{d-\mu_2}\big(B_{0,\mu_2},B_{1,\mu_2},B_{2,\mu_2}\big), \end{equation} with $\alpha_{d-\mu_1},\,\beta_{d-\mu_2}\in{\mathbb{K}}[{\mathbf{T}}]$ homogeneous of degrees $d-\mu_1$ and $d-\mu_2$ respectively. Since $\alpha_{d-\mu_1},\ \beta_{d-\mu_2}$ clearly have no common factors, they give a parametrization $\gamma : {\mathbb{P}}^1 \to {\mathbb{P}}^{\mu+1}$ as in \eqref{gamma} with image contained in ${\mathcal{S}}_{\mu_1,\mu_2}$. To relate these parametrized curves geometrically, we write for $i=0,1,2$: \begin{equation} \label{AB} \begin{array}{c} \begin{aligned} A_{i,\mu_1}&=\sum_{j=0}^{\mu_1} a_{i,j}T_0^{\mu_1-j}T_1^j = \underline{a}_i\cdot {\mathbf{T}}^{\mu_1}\\ B_{i,\mu_2}&=\sum_{j=0}^{\mu_2}b_{i,j}T_0^{\mu_2-j}T_1^j = \underline{b}_i\cdot {\mathbf{T}}^{\mu_2}. \end{aligned} \end{array} \end{equation} Then we get the projection ${\mathbb{P}}^{\mu+1} \dashrightarrow {\mathbb{P}}^2$ defined by \begin{equation} \label{projection} ({\mathbf{X}}:{\mathbf{Y}}) \longmapsto {\mathbf{Z}} = (\underline{a}_0\!\cdot {\mathbf{X}} + \underline{b}_0\!\cdot {\mathbf{Y}}:\underline{a}_1\!\cdot {\mathbf{X}} + \underline{b}_1\!\cdot {\mathbf{Y}}:\underline{a}_2\!\cdot {\mathbf{X}} + \underline{b}_2\!\cdot {\mathbf{Y}}), \end{equation} where ${\mathbf{Z}}=(Z_0,Z_1,Z_2)$ are homogeneous coordinates for ${\mathbb{P}}^2$. The projection \eqref{projection} interacts nicely with the scroll ${\mathcal{S}}_{\mu_1,\mu_2} \subseteq {\mathbb{P}}^{\mu+1}$.
First, the computations \begin{align*} ({\mathbf{t}}^{\mu_1}:0) &\longmapsto (\underline{a}_0\!\cdot {\mathbf{t}}^{\mu_1}:\underline{a}_1\!\cdot {\mathbf{t}}^{\mu_1}:\underline{a}_2\!\cdot {\mathbf{t}}^{\mu_1}) = (A_{0,\mu_1}({\mathbf{t}}):A_{1,\mu_1}({\mathbf{t}}): A_{2,\mu_1}({\mathbf{t}})) = A({\mathbf{t}})\\ (0:{\mathbf{t}}^{\mu_2}) &\longmapsto (\underline{b}_0\!\cdot {\mathbf{t}}^{\mu_2}:\hskip1pt\underline{b}_1\!\cdot {\mathbf{t}}^{\mu_2}:\hskip1pt\underline{b}_2\!\cdot {\mathbf{t}}^{\mu_2}) = (B_{0,\mu_2}({\mathbf{t}}):\hskip.5ptB_{1,\mu_2}({\mathbf{t}}): \hskip.5ptB_{2,\mu_2}({\mathbf{t}})) = B({\mathbf{t}}) \end{align*} show that the rational normal curves that form the edges of the scroll project to the curves given by the syzygies $A,B$. Furthermore, by Hilbert-Burch, $p$ is equal up to a nonzero constant to the signed maximal minors of the $2\times3$ matrix whose rows are $A$ and $B$. Note that $p$ has no basepoints since it is a minimal syzygy. Hence $A({\mathbf{t}}),B({\mathbf{t}})$ are always distinct points in ${\mathbb{P}}^2$. It follows that the line of the scroll for parameter ${\mathbf{t}}$ projects to the line through $A({\mathbf{t}}),B({\mathbf{t}})$, which is the moving line defined by $p$. Moreover, by \eqref{decom}, $\gamma({\mathbf{t}}) = (\alpha_{d-\mu_1}({\mathbf{t}})\hskip1pt{\mathbf{t}}^{\mu_1}:\beta_{d-\mu_2}({\mathbf{t}})\hskip1pt{\mathbf{t}}^{\mu_2})$ projects to $(f_{0,d}({\mathbf{t}}):f_{1,d}({\mathbf{t}}):f_{2,d}({\mathbf{t}}))$. All of this shows that the projection ${\mathbb{P}}^{\mu+1} \dashrightarrow {\mathbb{P}}^2$ induces a morphism ${\mathcal{S}}_{\mu_1,\mu_2} \rightarrow {\mathbb{P}}^2$.
It follows that we get the commutative diagram from \eqref{BGIdiagram}: \[ \SelectTips{cm}{} \xymatrix@C=30pt{ {\mathbb{P}}^1 \ar[drr]^\gamma \ar[dr] \ar[ddr]_{\mathbf{f}}& & \\ & {\mathcal{S}}_{\mu_1,\mu_2} \ar@{^{(}->}[r] \ar[d] & {\mathbb{P}}^{\mu+1} \ar@{-->}[dl] \\ & {\mathbb{P}}^2 & } \] From the algebraic point of view, the projection \eqref{projection} gives the map: \begin{equation} \label{Gama} \begin{array}{rcl} \Gamma:{\mathbb{K}}[{\mathbf{T}}, {\mathbf{Z}}]&\longrightarrow&{\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}] \\ T_i&\longmapsto& T_i,\ i=0,1\\ Z_j&\longmapsto& \underline{a}_j \cdot {\mathbf{X}} + \underline{b}_j \cdot {\mathbf{Y}},\ j=0,1,2. \end{array} \end{equation} For the Rees algebra of $K=\langle f_{0,d},f_{1,d},f_{2,d}\rangle \subseteq {\mathbb{K}}[{\mathbf{T}}]$, we have the map: \begin{equation} \label{psidef} \begin{array}{rcl} \psi:{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]&\longrightarrow&{\mathbb{K}}[{\mathbf{T}},s]\\ T_i&\longmapsto & T_i, \ i=0,1\\ Z_j&\longmapsto &f_{j,d}\,s, \ j=0, 1, 2, \end{array} \end{equation} whose image is ${\mathcal{R}}(K)$. The kernel ${\mathcal{K}}=\ker{\psi} \subseteq {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$ is the moving curve ideal of the parametrized curve in ${\mathbb{P}}^2$ and gives the equations defining the Rees algebra. To see how $\Gamma$ and $\psi$ relate to the map $\phi$ from Section~\ref{1} and maps $\Phi$, $\Phi'$ from Section \ref{2}, we need to introduce one more map: \begin{equation} \label{Omega} \begin{array}{rcl} \Omega:{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]&\longrightarrow&{\mathbb{K}}[{\mathbf{T}},X,Y]\\ T_i&\longmapsto & T_i,\, i=0,1\\ Z_j&\longmapsto & A_{j,\mu_1}X+B_{j,\mu_2}Y, \,j=0,1,2. \end{array} \end{equation} If we use the bigrading on ${\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$ defined by $\deg(T_i) = (1,0)$ and $\deg(Z_i) = (0,1)$, then one can check that $\Gamma$, $\psi$ and $\Omega$ all preserve the bigradings. 
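Before stating the lemma that these maps commute, here is a spot check on a small assumed instance: $\mu_1=\mu_2=1$, $d=3$, $A=(T_0,T_1,0)$, $B=(0,T_0,T_1)$, $\alpha_{d-\mu_1}=T_0^2$, $\beta_{d-\mu_2}=T_1^2$ (all data chosen by us for illustration). The sketch verifies $\Omega=\Phi'\circ\Gamma$ and $\psi=\phi\circ\Omega$ on the variables $Z_j$:

```python
import sympy as sp

T0, T1, X, Y, s = sp.symbols('T0 T1 X Y s')
X0, X1, Y0, Y1 = sp.symbols('X0 X1 Y0 Y1')

# assumed instance: mu1 = mu2 = 1, d = 3
A = [T0, T1, 0]
B = [0, T0, T1]
alpha, beta = T0**2, T1**2
f = [sp.expand(alpha*A[j] + beta*B[j]) for j in range(3)]   # decomposition

Gamma = [X0, X1 + Y0, Y1]                        # Gamma(Z_j) = a_j.X + b_j.Y
PhiP = {X0: T0*X, X1: T1*X, Y0: T0*Y, Y1: T1*Y}  # Phi'
Omega = [A[j]*X + B[j]*Y for j in range(3)]      # Omega(Z_j)
phi = {X: alpha*s, Y: beta*s}                    # phi

for j in range(3):
    # Omega = Phi' o Gamma  and  psi = phi o Omega  on each Z_j
    assert sp.expand(Gamma[j].subs(PhiP) - Omega[j]) == 0
    assert sp.expand(Omega[j].subs(phi) - f[j]*s) == 0
```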
\begin{lemma} \label{maplemma} The maps $\Phi$, $\Phi'$, $\Gamma$, $\Omega$, $\phi$, and $\psi$ defined above fit together into the commutative diagram from \eqref{Reesdiagram}{\rm:} \[ \xymatrix{ & {\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}] \ar[d]^{\Phi'} \ar[ddr]^\Phi & \\ {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}] \ar[ur]^\Gamma \ar[r]^\Omega \ar[rrd]_\psi & {\mathbb{K}}[{\mathbf{T}},X,Y] \ar[dr]^\phi & \\ && {\mathbb{K}}[{\mathbf{T}},s]} \] \end{lemma} \begin{proof} We have already observed that $\Phi = \phi\circ \Phi'$. Then notice that \[ Z_j \stackrel{\Gamma}{\longmapsto} \sum_{k=0}^{\mu_1}a_{j,k}X_{k}+ \sum_{k=0}^{\mu_2}b_{j,k}Y_{k} \stackrel{\Phi'}{\longmapsto}\sum_{k=0}^{\mu_1}a_{j,k}(T_0^{\mu_1-k}T_1^k X)+ \sum_{k=0}^{\mu_2}b_{j,k}(T_0^{\mu_2-k}T_1^k Y). \] The expression on the right equals $A_{j,\mu_1}X+B_{j,\mu_2}Y = \Omega(Z_j)$, and $\Omega = \Phi' \circ \Gamma$ follows. Finally, we have \[ Z_j \stackrel{\Omega}{\longmapsto} A_{j,\mu_1}X+B_{j,\mu_2}Y \stackrel{\phi}{\longmapsto} A_{j,\mu_1}\alpha_{d-\mu_1} s +B_{j,\mu_2}\beta_{d-\mu_2} s. \] The expression on the right equals $f_{j,d}\,s = \psi(Z_j)$, and $\psi =\phi \circ \Omega$ follows. \end{proof} \begin{corollary} \label{mapcor} The ideal ${\mathcal{K}}$ is equal to the inverse image via $\Omega$ of $\langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle$. \end{corollary} \begin{proof} Lemma~\ref{maplemma} implies that ${\mathcal{K}} = \ker(\psi) = \ker(\phi \circ \Omega) = \Omega^{-1}(\ker(\phi))$. Then we are done since $\ker(\phi) = \langle \alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\rangle$ by Lemma~\ref{alphabetaRees}. \end{proof} As is standard, the syzygy $(p_{0,\mu},p_{1,\mu},p_{2,\mu})$ gives the polynomial \begin{equation}\label{p} p:=p_{0,\mu}Z_0+p_{1,\mu}Z_1+p_{2,\mu}Z_2 = \det\!\left(\!\!\begin{array}{ccc} Z_0&Z_1&Z_2\\ A_{0,\mu_1}&A_{1,\mu_1}&A_{2,\mu_1}\\ B_{0,\mu_2}&B_{1,\mu_2}&B_{2,\mu_2} \end{array}\!\! \right)\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}].
\end{equation} Note that $p$ is an element of bidegree $(\mu,1)$ which vanishes after specializing $Z_j\mapsto A_{j,\mu_1}X+B_{j,\mu_2}Y, \,j=0,1,2$; that is, $p\in\ker(\Omega)$, and hence $p\in{\mathcal{K}}$ by Corollary~\ref{mapcor}. In fact, $p$ generates the whole kernel. \begin{proposition} \label{kerOmega} The ideal generated by $p$ is equal to $\ker(\Omega).$ \end{proposition} \begin{proof} Let $F_{i,j}\in\ker(\Omega)$; we can assume w.l.o.g.\ that $F_{i,j}$ is primitive with respect to the ${\mathbf{T}}$-variables. From \eqref{p}, we get that if we specialize $(Z_0,Z_1,Z_2)$ in $\overline{{\mathbb{K}}({\mathbf{T}})}^3$, then $p=0$ if and only if there exist $\lambda,\nu\in\overline{{\mathbb{K}}({\mathbf{T}})}$ such that $${\mathbf{Z}}=\lambda \big(A_{0,\mu_1},A_{1,\mu_1},A_{2,\mu_1}\big)+\nu\big(B_{0,\mu_2},B_{1,\mu_2},B_{2,\mu_2}\big).$$ For each such $\lambda,\nu$, we set $x=\lambda, \,y=\nu$, and get that $$F_{i,j}\big({\mathbf{T}},A_{0,\mu_1}x+B_{0,\mu_2}y,A_{1,\mu_1}x+B_{1,\mu_2}y,A_{2,\mu_1}x+B_{2,\mu_2}y \big)=0.$$ By the Nullstellensatz, we have that $p$ divides $F_{i,j}$ in ${\mathbb{K}}({\mathbf{T}})[{\mathbf{Z}}]$. As both $p$ and $F_{i,j}$ are primitive with respect to the ${\mathbf{T}}$-variables, the division actually holds in ${\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$. \end{proof} The following result shows that in some bidegrees, all we need is $p$. \begin{corollary}\label{belowbottom} If $F_{i,j}\in {\mathcal{K}}$ and $i+\mu_2j<d-\mu_1$, then it is a multiple of $p$. \end{corollary} \begin{proof} If $\Omega(F_{i,j})$ were not zero, it would be a nonzero multiple of $\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X$ by Corollary~\ref{mapcor}, and hence would contain a monomial of ${\mathbf{T}}$-degree at least $d-\mu_1$. On the other hand, every monomial of $\Omega(F_{i,j})$ has ${\mathbf{T}}$-degree at most $i+\mu_2j<d-\mu_1$. Hence $F_{i,j}\in\ker(\Omega)$, and the claim follows from Proposition~\ref{kerOmega}.
\end{proof} \begin{remark}\label{rem}\ \begin{enumerate} \item In Figure \ref{triangularregion} at the end of Section~\ref{5}, Corollary~\ref{belowbottom} shows that in bidegrees that lie strictly below the bottom edge of the triangular region in the figure, ${\mathcal{K}}$ is generated by $p$. \item Corollary~\ref{belowbottom} is a slight strengthening of Theorem 2.10(3) of \cite{CD2}. \end{enumerate} \end{remark} \section{The Other Syzygy and Some Explicit Minimal Generators}\label{5} So far, the syzygy $p$ of degree $\mu$ has played a central role. But what about the other syzygy $q = q_{0,d-\mu}Z_0+q_{1,d-\mu}Z_1+q_{2,d-\mu}Z_2$ of degree $d-\mu$? Our next result shows that it maps via $\Omega$ to $-(\alpha_{d-\mu_1} Y-\beta_{d-\mu_2} X)$. \begin{proposition}\label{q} \label{abprop} With notation as above, we have that $$\Omega(q)=-(\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X).$$ \end{proposition} \begin{proof} To prove the claim, by using \eqref{Omega}, we have to show that \begin{equation}\label{xio} \begin{aligned} \alpha_{d-\mu_1} &= -(q_{0,d-\mu} B_{0,\mu_2} + q_{1,d-\mu} B_{1,\mu_2} + q_{2,d-\mu} B_{2,\mu_2})\\ \beta_{d-\mu_2} &= q_{0,d-\mu} A_{0,\mu_1} + q_{1,d-\mu} A_{1,\mu_1} + q_{2,d-\mu} A_{2,\mu_1}. \end{aligned} \end{equation} Since $(f_{0,d}, f_{1,d}, f_{2,d})$ is given by the $2\times2$ minors (with signs) of its Hilbert-Burch matrix, we have \begin{align*} f_{0,d} &= p_{1,\mu} q_{2,d-\mu} - p_{2,\mu} q_{1,d-\mu}, & p_{0,\mu} &= A_{1,\mu_1} B_{2,\mu_2} - A_{2,\mu_1} B_{1,\mu_2},\\ f_{1,d} &= p_{2,\mu} q_{0,d-\mu} - p_{0,\mu} q_{2,d-\mu}, & p_{1,\mu} &= A_{2,\mu_1} B_{0,\mu_2} - A_{0,\mu_1} B_{2,\mu_2},\\ f_{2,d} &= p_{0,\mu} q_{1,d-\mu} - p_{1,\mu} q_{0,d-\mu}, & p_{2,\mu} &= A_{0,\mu_1} B_{1,\mu_2} - A_{1,\mu_1} B_{0,\mu_2}.
\end{align*} We deduce then that \begin{align*} f_{0,d} &= p_{1,\mu} q_{2,d-\mu} - p_{2,\mu} q_{1,d-\mu}\\ &= (A_{2,\mu_1} B_{0,\mu_2} - A_{0,\mu_1} B_{2,\mu_2})q_{2,d-\mu} - (A_{0,\mu_1} B_{1,\mu_2} - A_{1,\mu_1} B_{0,\mu_2})q_{1,d-\mu}\\ &= (-B_{1,\mu_2} q_{1,d-\mu} - B_{2,\mu_2} q_{2,d-\mu})A_{0,\mu_1} + (A_{1,\mu_1} q_{1,d-\mu} + A_{2,\mu_1} q_{2,d-\mu})B_{0,\mu_2}\\ &= -(q_{0,d-\mu} B_{0,\mu_2} + q_{1,d-\mu} B_{1,\mu_2} + q_{2,d-\mu} B_{2,\mu_2})A_{0,\mu_1} + (q_{0,d-\mu} A_{0,\mu_1} + q_{1,d-\mu} A_{1,\mu_1} + q_{2,d-\mu} A_{2,\mu_1})B_{0,\mu_2}\\ &= \alpha_{d-\mu_1}'A_{0,\mu_1} + \beta_{d-\mu_2}'B_{0,\mu_2}, \end{align*} where in the fourth line we added and subtracted $q_{0,d-\mu}A_{0,\mu_1}B_{0,\mu_2}$, and with $$ \begin{array}{ccl} \alpha_{d-\mu_1}'& :=& -(q_{0,d-\mu} B_{0,\mu_2} + q_{1,d-\mu} B_{1,\mu_2} + q_{2,d-\mu} B_{2,\mu_2}),\\ \beta_{d-\mu_2}'&:=&q_{0,d-\mu} A_{0,\mu_1} + q_{1,d-\mu} A_{1,\mu_1} + q_{2,d-\mu} A_{2,\mu_1}. \end{array}$$ Similarly, we get \begin{align*} f_{1,d} = p_{2,\mu} q_{0,d-\mu} - p_{0,\mu} q_{2,d-\mu} &= \alpha_{d-\mu_1}'A_{1,\mu_1} + \beta_{d-\mu_2}'B_{1,\mu_2},\\ f_{2,d} = p_{0,\mu} q_{1,d-\mu} - p_{1,\mu} q_{0,d-\mu} & = \alpha_{d-\mu_1}'A_{2,\mu_1} + \beta_{d-\mu_2}'B_{2,\mu_2}. \end{align*} This shows that $\alpha_{d-\mu_1}'A + \beta_{d-\mu_2}'B = (f_{0,d},f_{1,d},f_{2,d})$, and $\alpha_{d-\mu_1} = \alpha_{d-\mu_1}'$, $\beta_{d-\mu_2} = \beta_{d-\mu_2}'$ follows since $A,B$ are a basis of the syzygy module of $(p_{0,\mu}, p_{1,\mu}, p_{2,\mu})$. \end{proof} To produce more elements which are mapped to a multiple of $\alpha_{d-\mu_1} Y-\beta_{d-\mu_2} X$ via $\Omega$, we use the following regularity result. \begin{proposition}\label{cc} If $i\geq\mu+\mu_2-1$, then we have $\langle p_{0,\mu}, p_{1,\mu}, p_{2,\mu} \rangle_{i,j} = {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]_{i,j}$. \end{proposition} \begin{proof} Let $I = \langle p_{0,\mu}, p_{1,\mu}, p_{2,\mu} \rangle \subseteq R = {\mathbb{K}}[{\mathbf{T}}]$. It suffices to prove that $I_i = {\mathbb{K}}[{\mathbf{T}}]_i$ for $i \geq\mu+\mu_2-1$.
We have the exact sequence \[ 0 \longrightarrow R_{i-\mu-\mu_1}\oplus R_{i-\mu-\mu_2} \longrightarrow R_{i-\mu}^3 \xrightarrow{(p_{0,\mu},p_{1,\mu},p_{2,\mu})} I_i \longrightarrow 0. \] Note that $i-\mu \ge i-\mu-\mu_1 \ge i-\mu-\mu_2$. In general, $\dim R_m = m+1$ for all $m \ge -1$. Thus, if $i-\mu-\mu_2 \ge -1$, then the above exact sequence implies \begin{align*} \dim I_i &= 3(i-\mu+1) - (i-\mu-\mu_1+1) - (i-\mu-\mu_2 +1) \\ &=i - \mu + \mu_1+\mu_2 + 1 = i+1. \end{align*} Since $\dim R_i = i+1$, it follows that $I_i = R_i$ when $i-\mu-\mu_2 \ge -1$, i.e., when $i \ge \mu+\mu_2-1$. \end{proof} With this result in mind, we proceed as follows:\ let $F_{i,j}\in\langle p_{0,\mu},p_{1,\mu}, p_{2,\mu}\rangle$ (this always holds for instance if $i\geq\mu+\mu_2-1$ thanks to Proposition \ref{cc}), and write $$F_{i,j}=\sum_{\ell=0}^2p_{\ell,\mu}\,F^{(\ell)}_{i-\mu,j}, $$ for suitable homogeneous elements $F^{(\ell)}_{i-\mu,j}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}],\,\ell=0,1,2$. Then set \begin{equation}\label{DB} D_B\big(F_{i,j}\big):=\det\left(\begin{array}{ccc} F^{(0)}_{i-\mu,j}&F^{(1)}_{i-\mu,j}&F^{(2)}_{i-\mu,j}\\ Z_0&Z_1&Z_2\\ A_{0,\mu_1}& A_{1,\mu_1}& A_{2,\mu_1} \end{array} \right). \end{equation} Note that $D_B(F_{i,j})$ has bidegree $(i-\mu_2,j+1)$. Similarly, $D_A(F_{i,j})$ of bidegree $(i-\mu_1,j+1)$ is defined by replacing the last row of the matrix in \eqref{DB} with $(B_{0,\mu_2}, B_{1,\mu_2}, B_{2,\mu_2})$. If the image of these operators lies in $\langle p_{0,\mu},\,p_{1,\mu},\,p_{2,\mu}\rangle$, one can iterate them to get $D_A^aD_B^b(F_{i,j})$. The following result is straightforward. \begin{proposition}\label{DaDb} Let $F_{i,j}\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$. If $D_A^aD_B^b(F_{i,j})$ is defined, then it is an element of ${\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$ of bidegree $(i-a\mu_1-b\mu_2, j+a+b)$ such that \begin{equation}\label{formula} \Omega\left(D_A^aD_B^b(F_{i,j})\right) = (-1)^b X^aY^b \Omega\left(F_{i,j}\right).
\end{equation} Furthermore, $F_{i,j}$ belongs to ${\mathcal{K}}$ if and only if $D_A^aD_B^b(F_{i,j})$ belongs to ${\mathcal{K}}$. \end{proposition} \begin{proof} Since $\Omega$ is the identity on ${\mathbb{K}}[{\mathbf{T}}]$, applying $\Omega$ to $D_B(F_{i,j})$ gives \begin{align*} \Omega(D_B(F_{i,j})) &= \det\begin{pmatrix} \Omega(F^{(0)}_{i-\mu,j})&\Omega(F^{(1)}_{i-\mu,j}) &\Omega(F^{(2)}_{i-\mu,j})\\[2pt] X A_{0,\mu_1} + Y B_{0,\mu_2} & X A_{1,\mu_1} + Y B_{1,\mu_2} & X A_{2,\mu_1} + Y B_{2,\mu_2} \\ A_{0,\mu_1} & A_{1,\mu_1} & A_{2,\mu_1}\end{pmatrix} \\ &= \det\begin{pmatrix} \Omega(F^{(0)}_{i-\mu,j})&\Omega(F^{(1)}_{i-\mu,j}) &\Omega(F^{(2)}_{i-\mu,j})\\[2pt] Y B_{0,\mu_2} & Y B_{1,\mu_2} & Y B_{2,\mu_2} \\ A_{0,\mu_1} & A_{1,\mu_1} & A_{2,\mu_1}\end{pmatrix} \\ &= Y \det\begin{pmatrix} \Omega(F^{(0)}_{i-\mu,j})&\Omega(F^{(1)}_{i-\mu,j}) &\Omega(F^{(2)}_{i-\mu,j}) \\[2pt] B_{0,\mu_2} & B_{1,\mu_2} & B_{2,\mu_2} \\ A_{0,\mu_1} & A_{1,\mu_1} & A_{2,\mu_1}\end{pmatrix}\\ &= -Y\left(\sum_{\ell=0}^2p_{\ell,\mu}\,\Omega(F^{(\ell)}_{i-\mu,j})\right)= -Y\Omega(F_{i,j}), \end{align*} where the last line holds by \eqref{p}. The computation for $D_A$ is analogous and produces a factor $X$ with no sign change; \eqref{formula} then follows by induction on $a$ and $b$. To prove the last part of the claim, thanks to Corollary \ref{mapcor} we have that $F_{i,j}\in{\mathcal{K}}$ if and only if $\Omega(F_{i,j})=(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X)G$ for a suitable $G\in{\mathbb{K}}[{\mathbf{T}},X,Y].$ This combined with \eqref{formula} proves the claim. \end{proof} From Propositions \ref{q} and \ref{DaDb}, we deduce straightforwardly the following result. \begin{corollary}\label{pi} If $D_A^aD_B^b(q)$ is defined, then \begin{equation}\label{ki} \Omega\big(D_A^aD_B^b(q)\big)=(-1)^{b+1} X^aY^b\big(\alpha_{d-\mu_1} Y -\beta_{d-\mu_2} X \big).
\end{equation} \end{corollary} Thanks to Proposition \ref{cc}, we have the following: \begin{lemma}\label{ax} With notation as above, $D_A^aD_B^b(F_{i,j})$ is defined whenever $a \ge 0$ and either $b \ge 1$ and $i-a\mu_1-b\mu_2\geq\mu-1$, or $b = 0$ and $i-a\mu_1\geq\mu+\mu_2-\mu_1-1$. \end{lemma} Proposition \ref{indep} below shows that \eqref{ki} actually produces some nice minimal generators of ${\mathcal{K}}$. \begin{proposition}\label{indep} If $\mu_1>0$ and the family $\{F_{i_1,j_1},\dotsb, F_{i_\ell,j_\ell}\}\subset{\mathcal{K}}$ is such that $\Omega\big(F_{i_k,j_k}\big)=X^{a_k}Y^{b_k}(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X)$ for $k=1,\dotsb, \ell$, where $(a_{k'},b_{k'})\neq(a_k,b_k)$ if $k\neq k'$, then this family is contained in a system of minimal generators of ${\mathcal{K}}$. \end{proposition} \begin{proof} Let $\{G_1,\dotsb, G_m\}$ be a family of minimal generators of ${\mathcal{K}}$. For each $k=1,\dotsb, \ell$, as $F_{{i_k,j_k}}\in{\mathcal{K}}$, we must have \begin{equation}\label{ixx} F_{{i_k,j_k}}=\sum_{\ell=1}^m R_\ell G_\ell \end{equation} for suitable bihomogeneous polynomials $R_1,\dotsb, R_m\in{\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$. By applying $\Omega$ to both sides of this expression, we get that (thanks to Corollary \ref{mapcor} and the hypothesis) \begin{equation}\label{abbov} X^{a_k}Y^{b_k}(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X)=(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X)\bigg(\sum_{\ell=1}^m \Omega(R_\ell)G^*_\ell\bigg), \end{equation} with $G^*_\ell\in{\mathbb{K}}[{\mathbf{T}},X,Y]$. Canceling the common factor in both sides of \eqref{abbov}, we obtain \begin{equation}\label{qui} X^{a_k}Y^{b_k}=\sum_{\ell=1}^m \Omega(R_\ell)G^*_\ell. \end{equation} From the definition of $\Omega$ given in \eqref{Omega}, and using the fact that $\mu_1, \mu_2>0,$ we deduce that $\Omega(R_\ell)\in\langle T_0,\,T_1\rangle$ unless $\deg_{\mathbf{T}}(R_\ell)=\deg_{\mathbf{Z}}(R_\ell)=0$.
So, there must be an index $s_0\in\{1,\dotsb, m\}$ such that $R_{s_0}=\lambda_{s_0}\in{\mathbb{K}}^\times$, and $G^*_{s_0}$ has $X^{a_k}Y^{b_k}$ among its monomials. Hence, from \eqref{ixx} we get $$ F_{{i_k,j_k}}-\lambda_{s_0}G_{s_0}=\sum_{s\neq s_0} R_s G_s, $$ which implies straightforwardly that both families $\{G_1,\dotsb, G_m\}$ and \newline $\{G_1,\dotsb, G_{s_0-1}, F_{i_k,j_k}, G_{s_0+1},\dotsb, G_m\}$ are minimal generators of ${\mathcal{K}}$. To conclude, we have to show that we can add {\em all} the $F_{{i_{k'},j_{k'}}}$ with $k'\neq k$ to the list of minimal generators. This can be done recursively following the reasoning given above, just noting that in each step of the process the $R_s$ which is mapped via $\Omega$ to a constant $\lambda_s\in{\mathbb{K}}^\times$ can always be chosen among those remaining $G_s$ in the list, which is straightforward. This concludes the proof of the proposition. \end{proof} \begin{remark}\label{rrem} The hypothesis $\mu_1>0$ is necessary in Proposition \ref{indep}. Indeed, if $\mu_1=0$, we may have a ${\mathbb{K}}$-linear combination of the $Z_i$'s that gets mapped to a nonzero scalar multiple of $X$ in ${\mathbb{K}}[{\mathbf{T}},X,Y]$ (for instance if the coordinates of $B$ are ${\mathbb{K}}$-linearly independent). \end{remark} \begin{theorem}\label{DD} Assume that $\mu_1>0.$ Then a subset of minimal generators of ${\mathcal{K}}$ is given by the polynomials $D_A^aD_B^b(q)$ with $a \ge 0$ and either $b \ge 1, d-\mu-a\mu_1-b\mu_2\geq\mu-1$ or $b = 0, d-\mu-a\mu_1\geq\mu+\mu_2-\mu_1-1$. \end{theorem} \begin{proof} Consider the region in the first quadrant whose lattice points are given by \[ (i,j) = (d-\mu-a \mu_1 -b \mu_2, a+b+1) \] where $i,j,a,b \in {\mathbb{Z}}_{\ge 0}$. This gives the triangular region in the plane shown in Figure~\ref{triangularregion}. 
\begin{figure}[ht] \[ {\setlength{\unitlength}{.8pt}\begin{picture}(350,265) \put(10,20){\line(1,0){330}} \put(10,20){\line(0,1){240}} \put(130,40){\circle*{4}} \put(130,80){\circle*{4}} \multiput(330,40)(-20,20){8}{\circle*{4}} \multiput(230,60)(-20,20){6}{\circle*{4}} \put(130,16){\line(0,1){8}} \put(330,16){\line(0,1){8}} \put(230,16){\line(0,1){8}} \put(310,16){\line(0,1){8}} \put(127,0){\small $\mu$} \put(326,0){\small $d-\mu$} \put(212,0){\small $d{-}\mu{-}\mu_2$} \put(277,0){\small $d{-}\mu{-}\mu_1$} \put(328,48){\small$q$} \put(128,48){\small$p$} \put(6,40){\line(1,0){8}} \put(6,60){\line(1,0){8}} \put(16,37){\small$1$} \put(16,57){\small$2$} \Thicklines \put(130,240){\circle*{4}} \put(150,220){\circle*{4}} \put(170,200){\circle*{4}} \drawline(330,40)(130,80) \drawline(330,40)(172,198) \drawline(132,238)(148,222) \drawline(152,218)(168,202) \put(310,60){\circle*{5}} \put(306,68){\small$D_A(q)$} \put(230,60){\circle*{5}} \put(226,68){\small$D_B(q)$} \put(330,40){\circle*{5}} \dottedline{5}(130,80)(30,100) \dottedline{5}(130,240)(110,260) \end{picture}} \] \caption{The Triangular Region} \label{triangularregion} \end{figure} By \cite[Corollary 3.13]{Madsen}, we know that for $i \ge \mu$, the minimal generators of bidegree $(i,j)$ for $\mathcal{K}$ lie in the triangular region in Figure~\ref{triangularregion}, and correspond to elements which are mapped via $\Omega$ to $X^a Y^b(\alpha_{d-\mu_1} Y- \beta_{d-\mu_2} X)$, where $(i,j)=(d-\mu-a\mu_1-b\mu_2,a+b+1)$. From Corollary \ref{pi}, we deduce that $\pm D_A^aD_B^b(q)$ gets mapped to this polynomial. Lemma \ref{ax} applied to $q$ concludes the proof of the claim. \end{proof} \begin{remark}\label{rremr} The hypothesis $\mu_1>0$ is necessary, as otherwise if $d-\mu>\mu+\mu_2-1=2\mu-1,$ we would be able to produce the infinite family ${D_A}^j(q),\, j=0,1,\dotsb,$ which gets mapped to $-X^j(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X),$ which clearly cannot be part of a (finite) system of minimal generators of ${\mathcal{K}}$. 
Moreover, thanks to \cite[Theorem 4.6]{CD13}, we know that there are no minimal generators of ${\mathbf{T}}$-degree $d-\mu$ except for $q$. \end{remark} \subsection{The cases $\mu_1=0$ and $0<\mu_1=\mu_2$} Figures \ref{polygonfig} and \ref{triangularregion} are made under the assumption that $0<\mu_1<\mu_2.$ But what happens if $\mu_1=0$ or $0<\mu_1=\mu_2$? In the first case, the segment defined by $D_A$ becomes parallel to the vertical axis, and an infinite family $D_A^j(q)$ may be produced for all $j\geq0.$ But Theorem \ref{DD} does not hold, as explained in Remark \ref{rremr}. This is because Proposition \ref{indep} does not hold in this case (cf.\ Remark \ref{rrem}). However, one can prove that the family $\{D_B^j(q)\}$, for all those $j$ for which it is defined, is part of a minimal system of generators by modifying Proposition \ref{indep} as follows: \begin{proposition} If $\mu_1\geq0$ and the family $\{F_{i_1,j_1},\dotsb, F_{i_\ell,j_\ell}\}\subset{\mathcal{K}}$ is such that $\Omega\big(F_{i_k,j_k}\big)=Y^{b_k}(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X)$ for $k=1,\dotsb, \ell$, where $b_{k'}\neq b_k$ if $k\neq k'$, then this family is contained in a system of minimal generators of ${\mathcal{K}}$. \end{proposition} \begin{proof} Follow the proof of Proposition \ref{indep} until \eqref{qui}. Set $X\mapsto 0$ in that identity, to conclude that there must be $s_0\in\{1,\dotsb, m\}$ such that $R_{s_0}=\lambda_{s_0}\in{\mathbb{K}}^\times.$ From here, the proof can be completed as in the proof of Proposition \ref{indep}. \end{proof} The case $\mu_1=\mu_2$ corresponds to when the two segments defined by both $D_A$ and $D_B$ in Figure \ref{triangularregion} coincide, and hence the triangular region becomes a segment. 
In this case, for all the admissible $j\geq0,$ there are $j+1$ elements of bidegree $(d-\mu-j\mu_1,j+1)$ in ${\mathcal{K}}$ which get mapped via $\Omega$ to $X^aY^b(\alpha_{d-\mu_1}Y-\beta_{d-\mu_2}X),\,a+b=j$, and hence thanks to Theorem \ref{DD} they are part of a minimal system of generators of ${\mathcal{K}}$. \section{Comparison with Madsen's Results}\label{6} Our situation is studied by Madsen in \cite{Madsen}. For $K = \langle f_{0,d},f_{1,d},f_{2,d}\rangle \subseteq {\mathbb{K}}[{\mathbf{T}}]$, the Hilbert-Burch matrix has only two columns, the first of which is $p = (p_{0,\mu},p_{1,\mu},p_{2,\mu})$. Thus the sequence of Rees algebras \eqref{madsenapproach} simplifies to \[ {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}] \twoheadrightarrow {\mathcal{R}}(E_1) \twoheadrightarrow {\mathcal{R}}(K), \] where ${\mathcal{R}}(E_1) \simeq {\mathbb{K}}[{\mathbf{T}}](-d)^3/(p_{0,\mu},p_{1,\mu},p_{2,\mu}) {\mathbb{K}}[{\mathbf{T}}](-d)^3$ by \cite[Sec.~4]{Madsen}. Unlike \cite{Madsen}, we do not shift by $d$, which explains why we use ${\mathbb{K}}[{\mathbf{T}}](-d)^3$. In the notation of Section~\ref{4}, $p$ has a $\mu$-basis $A,B$ of degrees $\mu_1+\mu_2 = \mu$. Thinking of $A,B$ as row vectors, we get a map \[ E_1 \simeq {\mathbb{K}}[{\mathbf{T}}](-d)^3/(p_{0,\mu},p_{1,\mu},p_{2,\mu}) {\mathbb{K}}[{\mathbf{T}}](-d)^3 \stackrel{\scriptstyle\big(\begin{smallmatrix}A \\[1pt] B\end{smallmatrix}\big)}{\longrightarrow} R(\mu_1-d)\oplus R(\mu_2-d) = F, \] and Madsen notes that $F = E_1^{**}$. 
Similar to \eqref{Madsen.sec3}, the inclusion $E_1 \hookrightarrow F$ gives a commutative diagram \begin{equation} \label{Madsen.sec6} \begin{array}{c} \SelectTips{cm}{} \xymatrix@C=15pt@R=12pt{ {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}] \ar@{->>}[r]^{\Omega} \ar@{->>}[rrd]_\psi & {\mathcal{R}}(E_1) \ar@{^{(}->}[r] \ar@{->>}[rd]& {\mathbb{K}}[{\mathbf{T}},X,Y] \ar[dr]^\phi\\ && {\mathcal{R}}(K) \ar@{^{(}->}[r] & {\mathbb{K}}[{\mathbf{T}},s]} \end{array} \end{equation} where the maps $\Omega, \psi,\phi$ are from the diagram \eqref{Reesdiagram} and also feature in Corollary~\ref{mapcor}. The difference is that we now regard $\Omega$ and $\psi$ as surjections onto their images, which are the Rees algebras ${\mathcal{R}}(E_1)$ and ${\mathcal{R}}(K)$ respectively. Recall that the goal is to understand ${\mathcal{K}} = \ker (\psi)$. Similar to \eqref{sec3.intersection1}, Corollary~\ref{mapcor} implies that \[ {\mathcal{K}} {\mathcal{R}}(E_1) = (g{\mathcal{R}}(F)) \cap {\mathcal{R}}(E_1) \] for $g = \alpha_{d-\mu_1}Y - \beta_{d-\mu_2}X$. Madsen refines this with \begin{equation} \label{sec6.intersection2} {\mathcal{K}}_{\ge\mu,*} {\mathcal{R}}(E_1) = (g{\mathcal{R}}(F))_{\ge\mu,*}, \end{equation} which follows from \cite[(3.11)]{Madsen}. This enables Madsen to show that the minimal generators of bidegree $(i,j)$ for $\mathcal{K}$ lie in the triangular region in Figure~\ref{triangularregion}. These bidegrees $(i,j)=(d-\mu-a\mu_1-b\mu_2,a+b+1)$ correspond to elements which are mapped to $X^a Y^b g$ via $\Omega$. From Corollary \ref{pi}, we deduce that $\pm D_A^aD_B^b(q)$ gets mapped to this polynomial, so Theorem \ref{DD} can be regarded as an explicit description of these particular generators. We do not succeed in covering all the elements predicted by \cite[Theorem 3.9]{Madsen}: there may be some points at the top of the upper edge in Figure~\ref{triangularregion} corresponding to bidegrees where we cannot predict in advance that $D_A^a D_B^b (q)$ is defined. 
For instance, when $d = 22$, $\mu = 6$, $\mu_1 = 1$, and $\mu_2 = 5$, there are three open dots at the top of the upper edge where our method is not guaranteed to produce any element of those bidegrees (see Figure~\ref{ConfusedFig} in Section~\ref{7}). In Section~\ref{3}, the analog of \eqref{sec6.intersection2} was \eqref{sec3.intersection2}, which we proved by elementary methods in \eqref{2.9.claim}. However, our methods are not strong enough to prove \eqref{sec6.intersection2}, which is why we rely upon Madsen's results in the proof of Theorem~\ref{DD}. In Section 3.3 of \cite{Madsen} there is also an algorithm to compute the generators of ${\mathcal{K}}$. Let us describe this method and explain its relation to the operators $D_A$ and $D_B$ defined in Section~\ref{5}. Using a $\mu$-basis $\{b_1,b_2\}$ of $B$, define four polynomials: \begin{align*} p^A_i &= b_i \cdot A \in {\mathbb{K}}[{\mathbf{T}}], \quad i = 1,2\\ \rho^A_i &= b_i \cdot (Z_0,Z_1,Z_2), \quad i = 1,2. \end{align*} Since $b_i$ is a syzygy on $B$ and $\Omega(Z_i) = A_{i,\mu_1} X + B_{i,\mu_2}Y$, an easy calculation yields \begin{equation} \label{omegarho} \Omega(\rho^A_i) = X \Omega(p^A_i). \end{equation} More generally, if $F_{i,j} \in {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}]$ can be written \begin{equation} \label{Fijp} F_{i,j} = h_1 p^A_1 + h_2 p^A_2, \end{equation} then (using different notation) Madsen defines \[ F_{i,j}^A = h_1 \rho^A_1 + h_2 \rho^A_2. \] Note that $\deg(F_{i,j}^A) = (i-\mu_1,j+1)$ since $A$ has degree $\mu_1$, and \eqref{omegarho} implies that $\Omega(F_{i,j}^A) = X \Omega(F_{i,j})$. Since $\Omega(D_A(F_{i,j})) = X \Omega(F_{i,j})$ by Proposition~\ref{DaDb}, $F_{i,j}^A$ and $D_A(F_{i,j})$ differ by a multiple of $p$ (see Proposition~\ref{kerOmega}). In Proposition 3.15 of \cite{Madsen}, Madsen proves that $\langle p_1^A,p_2^A\rangle_{\mu+\mu_1-1} = {\mathbb{K}}[{\mathbf{T}}]_{\mu+\mu_1-1}$. 
This tells us that when $i \ge \mu+\mu_1-1$, \eqref{Fijp} holds and hence $F_{i,j}^A$ is defined. In contrast, the definition of $D_A$ requires that $i \ge \mu+\mu_2-1$. Madsen also has an analog of our $D_B$ operator that is defined using a $\mu$-basis of $A$. Here, $F_{i,j}^B$ has degree $(i-\mu_2,j+1)$ and is defined for $i \ge \mu+\mu_2-1$, the same as for $D_B$. Furthermore $\Omega(F_{i,j}^B) = Y \Omega(F_{i,j})$, so that $F_{i,j}^B$ and $-D_B(F_{i,j})$ differ by a multiple of $p$ (remember the minus sign in Proposition~\ref{DaDb}). By starting with $q \in {\mathcal{K}}_{d-\mu,1}$ and applying the $\{\}^A$ and $\{\}^B$ operators, Madsen constructs all minimal generators of ${\mathcal{K}}$ with $i \ge \mu$ \cite[Corollary 3.17]{Madsen}. There is also an interpretation in terms of Sylvester forms \cite[Proposition 3.18]{Madsen}. Because of the restriction that $i \ge \mu+\mu_2-1$, our operators $D_A$ and $D_B$ do not give all minimal generators of ${\mathcal{K}}$ with $i \ge \mu$. As noted above, Figure~\ref{ConfusedFig} shows what can happen. The generators we miss in this figure all require $D_A$, which requires $i \ge \mu+\mu_2-1$. Madsen's $\{\}^A$ only requires $i \ge \mu +\mu_1-1$, which explains the success of the methods in \cite{Madsen}. An intriguing observation is that we start with $\mathbf{f} = (f_{0,d},f_{1,d},f_{2,d})$ with $\mu$-basis $\{p,q\}$ and then use a $\mu$-basis $\{A,B\}$ of $p$ to construct elements of ${\mathcal{K}}$. In \cite{Madsen}, Madsen uses $\mu$-bases of $A$ and $B$ to construct further elements of ${\mathcal{K}}$. Is it possible that repeatedly taking $\mu$-bases could lead to a complete description of the minimal generators of ${\mathcal{K}}$? 
\section{Lifts of Minimal Generators}\label{7} Recall from Section \ref{2} that for ${\mathcal{I}}$, the minimal generators consist of the minimal generators of ${\mathcal{I}}'$ together with the generators \begin{equation} \label{Psigens} \Psi^t_{i_\ell,j_\ell,k_\ell+1}, \ 0 \le t \le s_\ell \end{equation} described in Theorem~\ref{mtm}. We also know that $s_\ell = 0$ when $i_\ell > 0$, so that when $i_\ell > 0$, \eqref{Psigens} becomes \[ \Psi^0_{i_\ell,j_\ell,k_\ell+1} \in {\mathcal{I}}_{d-\mu-\mu_1 j_\ell - \mu_2 k_\ell,j_\ell+k_\ell+1} \] since $s_\ell = 0$ implies that $i_\ell = d-\mu-\mu_1 j_\ell - \mu_2 k_\ell$. This bidegree lies in the triangle obtained by extending the dotted lines in Figure~\ref{triangularregion} to the $y$-axis. The map $\Gamma : {\mathbb{K}}[{\mathbf{T}},{\mathbf{Z}}] \to {\mathbb{K}}[{\mathbf{T}},{\mathbf{X}},{\mathbf{Y}}]$ defined in \eqref{Gama} satisfies $\Gamma({\mathcal{K}}) \subseteq {\mathcal{I}}$. For $f \in {\mathcal{K}}$, we call $\Gamma(f)$ the \emph{lift} of $f$. \subsection{Lifting \boldmath{$q$}} We now show how to lift $q$. \begin{lemma}\label{liftq} For $q = q_{0,d-\mu} Z_0 + q_{1,d-\mu} Z_1 + q_{2,d-\mu} Z_2 \in {\mathcal{K}}_{d-\mu,1}$, we have $$\Gamma(q) = \Psi^0_{d-\mu,0,1} \in {\mathcal{I}}_{d-\mu,1}.$$ \end{lemma} \begin{proof} We again drop subscripts indicating degree. By definition, \begin{align*} \Gamma(q) &=q_0 (\underline{a}_0\cdot{\mathbf{X}}+\underline{b}_0\cdot{\mathbf{Y}}) + q_1(\underline{a}_1\cdot{\mathbf{X}}+\underline{b}_1\cdot{\mathbf{Y}}) + q_2(\underline{a}_2\cdot{\mathbf{X}}+\underline{b}_2\cdot{\mathbf{Y}})\\ &= (q_0 \underline{a}_0 + q_1 \underline{a}_1 +q_2 \underline{a}_2 )\cdot {\mathbf{X}} + (q_0 \underline{b}_0 + q_1 \underline{b}_1 +q_2 \underline{b}_2 )\cdot {\mathbf{Y}}. \end{align*} One of the minimal generators of $S_{\mu_1,\mu_2,d}$ from \eqref{Smu} is $\mathbf{v}_\ell = (d-\mu,0,0)$, where $s_\ell = 0$. 
If we pick \[ A^0_{d-\mu,0,1} = (q_0 \underline{b}_0 + q_1 \underline{b}_1 +q_2 \underline{b}_2 )\cdot {\mathbf{Y}}, \] then \[ A^0_{d-\mu,0,1}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = (q_0 \underline{b}_0 + q_1 \underline{b}_1 +q_2 \underline{b}_2 )\cdot {\mathbf{T}}^{\mu_2} = (q_0 B_0 + q_1B_1+q_2B_2) = -\alpha_{d-\mu_1}, \] where the last equality is by \eqref{xio}. Then one computes that \[ B^0_{d-\mu,1,0} = -(q_0 \underline{a}_0 + q_1 \underline{a}_1 +q_2 \underline{a}_2 )\cdot {\mathbf{X}} \] satisfies $B^0_{d-\mu,1,0}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = -\beta_{d-\mu_2}$ again by \eqref{xio}, and \begin{align*} \Psi^0_{d-\mu,0,1} &= A^0_{d-\mu,0,1} - B^0_{d-\mu,1,0}\\ &= (q_0 \underline{a}_0 + q_1 \underline{a}_1 +q_2 \underline{a}_2 )\cdot {\mathbf{X}} + (q_0 \underline{b}_0 + q_1 \underline{b}_1 +q_2 \underline{b}_2 )\cdot {\mathbf{Y}}, \end{align*} which is the above formula for $\Gamma(q)$. \end{proof} \subsection{Lifting Other Generators} The general strategy for lifting minimal generators from ${\mathcal{K}}$ to ${\mathcal{I}}$ is to work mod ${\mathcal{I}}'$, whose minimal generators are described in Proposition~\ref{2.2}. Since ${\mathcal{I}}' = \ker(\Phi')$, studying $F \in {\mathcal{I}}$ mod ${\mathcal{I}}'$ means working with $\Phi'(F) \in {\mathbb{K}}[{\mathbf{T}},X,Y]$. For a minimal generator $\Psi^t_{i_\ell,j_\ell,k_\ell+1} \in {\mathcal{I}}$, the following result tells us exactly what its image in ${\mathbb{K}}[{\mathbf{T}},X,Y]$ looks like. \begin{proposition} \label{phiipPsi} The generator $\Psi^t_{i_\ell,j_\ell,k_\ell+1}$ gets mapped via $\Phi'$ to the element $\big(\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\big) X^{j_\ell} Y^{k_\ell} T_0^t T_1^{s_\ell-t}$. 
\end{proposition} \begin{proof} Recall from \eqref{cozi} that \[ \Psi^t_{{i_\ell,j_\ell,k_\ell+1}}=A^t_{{i_\ell,j_\ell,k_\ell+1}}-B^t_{i_\ell,j_\ell+1,k_\ell}, \] where $A^t_{{i_\ell,j_\ell,k_\ell+1}},\,B^t_{i_\ell,j_\ell+1,k_\ell}$ are trihomogeneous as indicated by their subscripts. Also, by \eqref{ambi}, \[ A^t_{i_\ell,j_\ell,k_\ell+1}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = \alpha_{d-\mu_1} T_0^t T_1^{s_\ell-t}, \] and by the proof of Lemma \ref{util2p}, \[ B^t_{i_\ell,j_\ell+1,k_\ell}({\mathbf{T}},{\mathbf{T}}^{\mu_1},{\mathbf{T}}^{\mu_2}) = \beta_{d-\mu_2} T_0^t T_1^{s_\ell-t}. \] These formulas together with \eqref{phiip} and the trihomogeneity of $A^t,B^t$ imply that \begin{align*} \Phi'(\Psi^t_{i_\ell,j_\ell,k_\ell+1}) &= A^t_{i_\ell,j_\ell,k_\ell+1}({\mathbf{T}},{\mathbf{T}}^{\mu_1}X,{\mathbf{T}}^{\mu_2}Y) - B^t_{i_\ell,j_\ell+1,k_\ell}({\mathbf{T}},{\mathbf{T}}^{\mu_1}X,{\mathbf{T}}^{\mu_2}Y) \\ &= X^{j_\ell} Y^{k_\ell + 1} \alpha_{d-\mu_1} T_0^t T_1^{s_\ell-t} - X^{j_\ell+1} Y^{k_\ell}\beta_{d-\mu_2} T_0^t T_1^{s_\ell-t} \\ &= \big(\alpha_{d-\mu_1} Y - \beta_{d-\mu_2} X\big) X^{j_\ell} Y^{k_\ell} T_0^t T_1^{s_\ell-t}.\qedhere \end{align*} \end{proof} For $F \in {\mathcal{K}}$, applying the above strategy to its lift $\Gamma(F)$ mod ${\mathcal{I}}'$ means working with $\Phi'(\Gamma(F)) = \Omega(F)$ by Lemma~\ref{maplemma}. For the minimal generators of ${\mathcal{K}}$ identified in Theorem~\ref{DD}, this leads to the following result. \begin{theorem}\label{DDlift} Suppose $a,b$ satisfy $a \ge 0$ and either $b \ge 1, d-\mu-a\mu_1-b\mu_2\geq\mu-1$ or $b = 0, d-\mu-a\mu_1\geq\mu+\mu_2-\mu_1-1$. Then we have a minimal generator \[ D_A^aD_B^b(q) \in {\mathcal{K}}_{d-\mu-a\mu_1-b\mu_2,a+b+1} \] whose lift to ${\mathcal{I}}$ satisfies \[ \Gamma(D_A^aD_B^b(q)) \equiv (-1)^{b+1} \Psi^0_{d-\mu-a\mu_1-b\mu_2,a,b+1} \bmod {\mathcal{I}}'. 
\] \end{theorem} \begin{proof} By \eqref{ki}, we have $\Omega(D_A^aD_B^b(q))=(-1)^{b+1}X^bY^a(\alpha_{d-\mu_1} Y -\beta_{d-\mu_2} X)$. As noted above, this implies \[ \Phi'\big(\Gamma(D_A^aD_B^b(q))\big)=(-1)^{b+1} X^aY^b\big(\alpha_{d-\mu_1} Y -\beta_{d-\mu_2} X \big). \] Since $\Phi'( \Psi^0_{d-\mu-a\mu_1-b\mu_2,a,b+1}) = X^aY^b(\alpha_{d-\mu_1} Y -\beta_{d-\mu_2} X)$ by Proposition~\ref{phiipPsi}, the theorem follows immediately. \end{proof} Here is an example that gives a picture of which minimal generators of ${\mathcal{K}}$ are involved in Theorem~\ref{DDlift}. \begin{example} \label{fromConfused} When $d = 22$, $\mu = 6$, $\mu_1 = 1$, and $\mu_2 = 5$, the part of Figure~\ref{triangularregion} with $i \ge \mu$ is shown in Figure~\ref{ConfusedFig}. The large dots in the figure show $q$, $D_A(q)$, and $D_B(q)$. {\setlength{\unitlength}{.8pt} \begin{figure}[h] \[ \begin{picture}(350,265) \put(10,20){\line(1,0){330}} \put(10,20){\line(0,1){240}} \put(130,40){\circle*{4}} \put(130,80){\circle*{4}} \multiput(330,40)(-20,20){8}{\circle*{4}} \multiput(230,60)(-20,20){6}{\circle*{4}} \put(130,16){\line(0,1){8}} \put(330,16){\line(0,1){8}} \put(115,0){$\mu = 6$} \put(293,0){$d-\mu =16$} \put(328,48){$q$} \put(128,48){$p$} \Thicklines \put(130,240){\circle{4}} \put(150,220){\circle{4}} \put(170,200){\circle{4}} \drawline(330,40)(130,80) \drawline(130,80)(130,237) \drawline(330,40)(172,198) \drawline(132,238)(148,222) \drawline(152,218)(168,202) \put(310,60){\circle*{5}} \put(306,68){$D_A(q)$} \put(230,60){\circle*{5}} \put(226,68){$D_B(q)$} \put(330,40){\circle*{5}} \end{picture} \] \caption{$d = 22$, $\mu = 6$, $\mu_1 = 1$, $\mu_2 = 5$} \label{ConfusedFig} \end{figure}} By Madsen's results \cite{Madsen}, the dots (solid and open) correspond to bidgrees $(i,j)$ of all minimal generators of ${\mathcal{K}}$ with $i \ge \mu$. The inequalities of Theorem~\ref{DDlift} become \begin{itemize} \item[] $b \ge 1$, $a \ge 0$, and $a+5b \le 11$. \item[] $b = 0$ and $0 \le a \le 7$. 
\end{itemize} In fact, $b = 0$ gives the eight solid dots on the upper edge of the triangular region, and $b \ge 1$ gives the remaining solid dots in the region. The three open dots at the top of the upper edge correspond to bidegrees where our methods cannot guarantee that $D_A^a D_B^b (q)$ is defined, in contrast with Madsen's results. \end{example}
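For completeness, the two inequality families displayed in Example~\ref{fromConfused} come from direct substitution of $d = 22$, $\mu = 6$, $\mu_1 = 1$, $\mu_2 = 5$ into the conditions of Theorem~\ref{DDlift} (with $a \ge 0$ throughout):
\begin{align*}
b \ge 1: &\quad d-\mu-a\mu_1-b\mu_2 \ge \mu-1 \iff 16 - a - 5b \ge 5 \iff a+5b \le 11,\\
b = 0: &\quad d-\mu-a\mu_1 \ge \mu+\mu_2-\mu_1-1 \iff 16 - a \ge 9 \iff a \le 7.
\end{align*}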
\section{Introduction} \subsection{} Let $\U =\U^-\U^0\U^+$ be a Drinfeld-Jimbo quantum group (of finite type). There are several remarkable bases for $\U^-$, known as PBW bases and the canonical basis. The PBW basis is orthogonal with respect to a natural bilinear form on $\U^-$, providing an approach to the construction of the canonical basis \cite{Lu90, Lu91} (also cf. \cite{K91} for another construction of the canonical basis). Lusztig \cite{Lu92, Lu93} further constructed a canonical basis for the modified quantum group $\Udot$, which is compatible with canonical bases on the tensor products of lowest and highest weight modules ${}^\omega L(\lad) \otimes L(\mu)$, for various dominant weights $\lad, \mu \in X^+$. The canonical bases admit remarkable positivity properties in ADE and symmetric types; they have major impacts in several (geometric, combinatorial, and categorical) directions of representation theory. \subsection{} The goal of this paper is to formulate a PBW basis for the modified quantum group $\Udot$ of finite type with an orthogonality property and to establish its relations to the canonical basis. The formulation follows from a new construction for the modified quantum group of arbitrary type, which is built on limits of sequences of elements in tensor products of lowest and highest weight modules. \subsection{} The inspiration came from the computation by the author \cite{Wa21} of an orthogonal basis for an $\imath$quantum group of rank 1 in terms of $\imath$divided powers (aka $\imath$canonical basis) \cite{BW18, BeW18}; in this setting, the $\imath$quantum group is a polynomial algebra in one variable. The $\imath$quantum groups arise from quantum symmetric pairs, and as we view quantum groups as $\imath$quantum groups of diagonal type, it is natural to explore the counterpart in the Drinfeld-Jimbo quantum group setting, starting again in rank 1. 
Indeed, by imposing the orthogonality condition, we are able to construct naturally and compute explicitly an (apparently new) PBW basis for $\Udot$ of rank 1 in terms of the canonical basis. Moreover, further relations between these two bases can be observed when they act on tensor product modules of the form ${}^\omega L(p) \otimes L(p+m)$; compare \cite[\S25.3]{Lu93}. A (PBW) basis of $\Udot$ with an orthogonality property emerges as a limit, in a suitable sense, of the tensor products of PBW basis elements acting on the tensor product modules. The rank 1 case is carried out in \S~\ref{sec:rank1}--\ref{sec:rank1c}, and some readers might prefer to go over the rank 1 case first. A bilinear pairing formula between canonical basis elements, which appeared in Lauda \cite[Proposition~2.8]{La10} (who gave a long combinatorial proof), follows most naturally from the orthogonality of the PBW basis for $\Udot$ and the PBW expansion of the canonical basis of $\Udot$. We develop in Section~\ref{sec:PBW} a framework for studying the limits of the so-called standard sequences of elements in tensor products of lowest and highest weight modules, ${}^\omega L(\lad) \otimes L(\lad +\zeta)$ with $\zeta$ fixed, as $\lad$ tends to $\infty$. This leads to a $\Q(q)$-linear isomorphism (valid for quantum groups of arbitrary type) \[ \mathcal F_\zeta\colon \U^+\otimes \U^- \stackrel{\cong}{\longrightarrow} \Udot \one_\zeta, \] which allows us to transfer bases for $\U^\pm$ to bases for $\Udot$. The {\em fused canonical basis} for $\Udot$ is obtained via $\mathcal F_\zeta$ (for various $\zeta$) from the pure tensors of canonical bases in $\U^+\otimes \U^-$. \subsection{} Now assume $\U$ is of finite type. 
The observations made in the rank 1 example suggest defining the PBW basis for $\Udot$ as the transfer under $\mathcal F_\zeta$ (for various $\zeta$) of a tensor product of PBW bases in $\U^+ \otimes \U^-$; here the PBW bases of $\U^\pm$ can be associated to any reduced expression of the longest Weyl group element $w_0$. We show that the PBW basis for $\Udot$ is orthogonal with respect to the standard bilinear form on $\Udot$; it contains as a subset the PBW bases for $\U^+$ and $\U^-$. We further show that the transition matrix from the canonical basis to the PBW basis on $\Udot$ is unital triangular. Let us make clear that the PBW basis and the fused canonical basis for $\Udot$ are bases over $\Q(q)$, but they do not lie in the integral form of $\Udot$. Already in the rank 1 case, the coefficients of the PBW-expansion of the canonical basis are typically (up to factors of $q$-powers) rational functions of the following form \[ \prod_{a=1}^m \frac1{1-q^{-2a}}. \] These rational functions expand as power series in $q^{-1}$ with {\em positive integral} coefficients. For a general ADE type, we show that the canonical basis is PBW-positive with {\em positive} coefficients in $\N[[q^{-1}]] \cap \Q(q)$. The proof relies on two positivity results. To that end, we show that the canonical basis has an expansion with coefficients in $\N[[q^{-1}]] \cap \Q(q)$ in terms of the fused canonical basis by using a positivity result in finite type of Webster \cite[Corollary 8.9]{We15} on the expansion of any canonical basis element in terms of pure tensors of canonical basis elements in ${}^\omega L(\lad) \otimes L(\lad +\zeta)$. 
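The positive integral coefficients in the power-series expansions above have a concrete combinatorial meaning: by a standard generating function identity, the coefficient of $q^{-2n}$ in $\prod_{a=1}^m (1-q^{-2a})^{-1}$ counts the partitions of $n$ into parts of size at most $m$. For instance, for $m=2$,
\[
\prod_{a=1}^{2} \frac1{1-q^{-2a}} = 1 + q^{-2} + 2q^{-4} + 2q^{-6} + 3q^{-8} + 3q^{-10} + \cdots \in \N[[q^{-1}]].
\]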
On the other hand, the canonical basis of $\U^+$ is PBW-positive; this was first established in \cite[Corollary~10.7]{Lu90} when the reduced expressions of $w_0$ are ``adapted''; for arbitrary reduced expressions of $w_0$, it was conjectured by Lusztig and proved by Syu Kato \cite{Ka14} using categorification (see \cite{BKM14} for a second categorification proof and \cite{Oy18} for another proof based on the positivity of canonical bases under comultiplication \cite{Lu91}). Consequently, it follows that the fused canonical basis of $\Udot$ is PBW-positive. It will be very interesting to explore whether the fused canonical basis and the PBW basis of $\Udot$ admit a categorification in a generalized KLR categorical setting, generalizing the geometrical and categorical interpretation for the canonical basis and PBW basis of $\U^+$ \cite{KL10, R12, VV11, Ka14, BKM14}. We shall return in \cite{Wa21} to construct PBW bases for modified $\imath$quantum groups arising from quantum symmetric pairs. \vspace{2mm} {\bf Acknowledgement.} The author is partially supported by the NSF grant DMS-2001351. \section{A limit construction of the modified quantum group} \label{sec:PBW} In this section, we develop a framework for studying limits of sequences of elements in tensor product $\U$-modules ${}^\omega L(\lad) \otimes L(\lad+\zeta)$, as $\lad$ tends to $\infty$. This leads to a linear isomorphism $\U^+ \otimes \U^- \rightarrow \Udot \one_\zeta$, which allows us to construct new bases for $\Udot$, including the fused canonical basis of $\Udot$ arising from the pure tensors of canonical bases in $\U^+ \otimes \U^-$. \subsection{Quantum groups and bilinear forms} We denote by $\U$ the quantum group \cite{Lu93} associated to the Cartan/root datum $(X, Y, I, \cdot)$ with a perfect bilinear pairing $\langle \cdot, \cdot \rangle : Y\times X \rightarrow \Z$; it is a $\Qq$-algebra generated by $E_i$, $F_i$, $K_\mu$, for $i\in I, \mu \in Y$. 
Denote by $\Udot$ the modified quantum group \cite[Chapter~23]{Lu93}. Denote by $\FF{m}_i =F_i^m /[m]_{q_i}!$, for $i\in \I, m\in \N$, the divided powers of $F_i$. Denote by $X^+$ the set of dominant weights in $X$. The comultiplication $\Delta$ satisfies \begin{align} \label{eq:Delta} \Delta(F_i) = F_i \otimes \tilde{K}_i^{-1} + 1\otimes F_i, \qquad \Delta(E_i) = E_i \otimes 1 + \tilde{K}_i \otimes E_i. \end{align} By identifying $\bff \cong \U^-$ ($z \mapsto z^-$), we have $\Qq$-linear maps ${}_i r, r_i : \U^- \rightarrow \U^-$, for $i\in \I$; cf. \cite[1.2.13]{Lu93}. We have \cite[3.1.6]{Lu93}, for $y \in \U^-$, \begin{align} \label{Eiy} E_iy -y E_i = \frac{\tilde{K}_i \, {}_i r(y) - r_i(y) \tilde{K}_i^{-1}}{q_i -q_i^{-1}}. \end{align} Similarly, by identifying $\bff \cong \U^+$ ($z \mapsto z^+$), we have $\Qq$-linear maps ${}_i r, r_i : \U^+ \rightarrow \U^+$, and for $x \in \U^+$, \begin{align} \label{Fiy} F_ix -x F_i = \frac{\tilde{K}_i^{-1} \, {}_i r(x) - r_i(x) \tilde{K}_i}{q_i -q_i^{-1}}. \end{align} Note that $\U^\pm$ are $\N \I$-graded: $\U^+ =\sum_{\nu \in \N\I} \U^+_\nu$, and $\U^- =\sum_{\nu \in \N\I} \U^-_\nu$. We say an element $x \in \U^+$ (respectively, $y \in \U^-$) is homogeneous if $x \in \U^+_\nu$ (respectively, $y \in \U^-_\nu$) for some $\nu$. In this case, we denote $|x|=\nu$ and $|y|=-\nu$. Denote by $\B$ the canonical basis for $\mathbf f$; the isomorphism $\mathbf f \cong \U^\pm$ induces the canonical bases $\B^\pm$ ($b \mapsto b^\pm$) for $\U^\pm$. Let $\U^+_{\Z[q^{-1}]}$ (respectively, $\U^-_{\Z[q^{-1}]}$) denote the $\Z[q^{-1}]$-span of the canonical basis in $\U^+$ (respectively, $\U^-$). Let $L(\lad)$ be the highest weight $\U$-module with highest weight vector $\eta_\lad$ of highest weight $\lad\in X^+$, and let ${}^\omega L(\lad)$ be the lowest weight $\U$-module with lowest weight vector $\xi_{-\lad}$ of lowest weight $-\lad$. 
For $x\in \U^+$, $y \in \U^-$, $\lad \in X^+$ and $\zeta \in X$ such that $\lad +\zeta \in X^+$, it follows by \eqref{eq:Delta} that \begin{align} \label{eq:y} y(\xi_{-\lad} \otimes \eta_{\lad+\zeta}) &= \xi_{-\lad} \otimes y \eta_{\lad+\zeta}, \qquad x(\xi_{-\lad} \otimes \eta_{\lad+\zeta}) = x \xi_{-\lad} \otimes \eta_{\lad+\zeta}. \end{align} There is an anti-involution $\rho$ on $\U$ such that, for $i\in \I, \nu \in Y$, \[ \rho (E_i) =q_i \tilde{K}_i F_i, \quad \rho (F_i) =q_i^{-1} E_i \tilde{K}_i^{-1}, \quad \rho (\tilde{K}_\nu) =\tilde{K}_{-\nu}. \] According to \cite{K91, Lu93}, there is a bilinear form $(\cdot, \cdot)$ on $L(\mu)$, for $\mu \in X^+$, such that $(\eta_\mu, \eta_\mu)=1$ and $(ux, y) =(x, \rho(u)y)$, for all $x,y \in L(\mu), u \in \U$; a bilinear form $(\cdot, \cdot)$ on ${}^\omega L(\mu)$ is defined similarly. A bilinear form $(\cdot, \cdot)$ on ${}^\omega L(\lad) \otimes L(\mu)$ is defined by $(x\otimes y, x'\otimes y') =(x,x') (y, y')$. There is an anti-involution $\rho$ of the $\Qq$-algebra $\U$ such that, for $i\in \I$ and $\mu \in Y$, \[ E_i \mapsto q_i^{-1} F_i \tilde{K}_i, \quad F_i \mapsto q_i^{-1} E_i\tilde{K}_i^{-1}, \quad K_\mu \mapsto K_\mu. \] There exists a unique $\Q(q)$-bilinear form $(\cdot, \cdot)$ on $\Udot$ \cite[26.1.2]{Lu93}, which extends the one on $\U^- (\cong \mathbf{f})$ \cite[1.2.3, 1.2.5]{Lu93} such that \begin{align} (\one_{\lad} x \one_\mu, \one_{\lad'} x' \one_{\mu'}) &=0, \text{ for all } x, x' \in \Udot, \text{ unless } \lad =\lad' \text{ and } \mu =\mu', \label{eq:ad1} \\ (ux, y) &=(x, \rho(u)y), \text{ for } x, y \in \Udot, u\in \U, \label{eq:ad2} \\ (f \one_\lad, f' \one_\lad) &=(f, f'), \text{ for } f, f' \in \U^-, \lad \in X. \label{eq:ad3} \end{align} This bilinear form is symmetric. 
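For instance, a routine computation with the contravariant form on $L(\mu)$ illustrates the limit philosophy of this paper: writing $m = \langle i, \mu \rangle$, the identity \eqref{Eiy} gives $E_i F_i \eta_\mu = [m]_{q_i} \eta_\mu$, and hence
\[
(F_i \eta_\mu, F_i \eta_\mu) = (\eta_\mu, \rho(F_i) F_i \eta_\mu) = q_i^{1-m}\, [m]_{q_i} = \frac{1-q_i^{-2m}}{1-q_i^{-2}},
\]
which converges in $\Q((q^{-1}))$ to $(F_i, F_i) = (1-q_i^{-2})^{-1}$ (cf.\ \eqref{BFhalf} below) as $\mu$ tends to $\infty$.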
Note \begin{align} \label{BFhalf} (\FF{m}_i, \FF{n}_i) =(\EE{m}_i, \EE{n}_i) = \delta_{m,n} (q_i^{-2}; q_i^{-2})_m^{-1}, \end{align} where we have denoted \begin{align} \label{eq:g} (a; q_i^{-2})_m = \prod_{s=0}^{m-1} (1-a q_i^{-2s}), \qquad (q_i^{-2}; q_i^{-2})_m = \prod_{s=1}^{m} (1-q_i^{-2s}). \end{align} \subsection{Standard sequences} In this paper we shall often deal with sequences of elements $\{z_\lad\}_{\lad \in X^+}$, where $z_\lad \in {}^\omega L(\lad) \otimes L(\lad+\zeta)$ is a linear combination of elements of the form \begin{align} u\one_\zeta (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ), & \quad \text{ for } u \in \Udot, \text{ and } \label{eq:spanning} \\ \quad x \xi_{-\lad} \otimes y \eta_{\lad+\zeta}, & \quad \text{ for } x \in \U^+, \, y \in \U^-. \label{eq:spanning2} \end{align} We shall refer to such a sequence as {\em standard}. The coefficients of $z_\lad$ usually take a certain form, which we now specify. \begin{definition} \label{def:seq} (1) A standard sequence $\{z_\lad\}_{\lad \in X^+}$ is said to be {\em bounded} if $z_\lad$ is spanned by elements \eqref{eq:spanning}--\eqref{eq:spanning2} with coefficients being a finite sum of the form \begin{align} \label{eq:s1} \sum_{s\ge 0} \sum_{\vec{i} =(i_1, \ldots, i_s) \in \I^s} f_{\vec{i}}(q) \prod_{a=1}^s q_{i_a}^{-2\langle {i_a}, \lad \rangle}, \text{ where $f_{\vec{i}}(q) \in \Q(q)$ is independent of $\lad$.} \end{align} (2) A standard sequence $\{z_\lad\}_{\lad \in X^+}$ is said to be {\em asymptotically zero} if $z_\lad$ is spanned by elements \eqref{eq:spanning}--\eqref{eq:spanning2} with coefficients being a finite sum of the form \begin{align} \label{eq:s0} \sum_{s\ge 1} \sum_{\vec{i} =(i_1, \ldots, i_s) \in \I^s} f_{\vec{i}}(q) \prod_{a=1}^s q_{i_a}^{-2\langle {i_a}, \lad \rangle}, \text{ where $f_{\vec{i}}(q) \in \Q(q)$ is independent of $\lad$.} \end{align} \end{definition} We say $\lad$ tends to $\infty$ if $\langle i, \lad \rangle$ tends to $+\infty$, for each $i\in \I$; in this case 
we shall denote $\lad \mapsto \infty$. Note that the coefficients in \eqref{eq:s1} (respectively, \eqref{eq:s0}) converge in $\Q((q^{-1}))$ to some scalar (respectively, to $0$) as $\lad$ tends to $\infty$. Given bounded standard sequences $\{z_\lad\}_{\lad \in X^+}, \{z_\lad '\}_{\lad \in X^+}$ and $\{z_\lad ''\}_{\lad \in X^+}$, we shall denote \[ z_\lad = o(1), \qquad \text{ and }\; z_\lad ' = z_\lad '' + o(1), \] if $\{z_\lad\}_{\lad \in X^+}$ is asymptotically zero and $\{z_\lad ' - z_\lad ''\}_{\lad \in X^+}$ is asymptotically zero, respectively. \subsection{Approximations} There are two types of elements in \eqref{eq:spanning}--\eqref{eq:spanning2}, and we shall make precise how elements of one type approximate elements of the other as $\lad \mapsto \infty$. \begin{lem} \label{lem:yEi} Let $x' \in \U^+, y \in \U^-$ be homogeneous, and $i\in \I$. Then we have \begin{align} &E_i (x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) = E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} + \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta} \label{Eixy1}\\ &\qquad\qquad\qquad\qquad\quad - \frac{q_i^{-2\langle i, \lad \rangle} q_i^{\langle i, -\zeta+|x'| \rangle } }{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes r_i(y) \eta_{\lad+\zeta}. \notag \end{align} Equivalently, we have \begin{align} &E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} = E_i (x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) - \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta} \label{Eixy2}\\ &\qquad\qquad\qquad\qquad\quad + \frac{q_i^{-2\langle i, \lad \rangle} q_i^{\langle i, -\zeta+|x'| \rangle } }{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes r_i(y) \eta_{\lad+\zeta}.
\notag \end{align} \end{lem} \begin{proof} Using the comultiplication formula \eqref{eq:Delta} and the identity \eqref{Eiy}, we compute that \begin{align*} &E_i (x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) \\ &= E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} + \tilde{K}_i x' \xi_{-\lad} \otimes E_i y \eta_{\lad+\zeta} \notag \\ &= E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} + \tilde{K}_i x' \xi_{-\lad} \otimes \frac{\tilde{K}_i \, {}_i r(y) - r_i(y) \tilde{K}_i^{-1}}{q_i -q_i^{-1}} \eta_{\lad+\zeta} \notag \\ &= E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} + \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta} - \frac{q_i^{\langle i, -2\lad-\zeta+|x'| \rangle } }{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes r_i(y) \eta_{\lad+\zeta}. \notag \end{align*} This proves \eqref{Eixy1}. The identity \eqref{Eixy2} follows from \eqref{Eixy1}. \end{proof} \begin{lem} \label{lem:xFi} Let $x \in \U^+, y \in \U^-$ be homogeneous and $i\in \I$. Then we have \begin{align} &F_i (x \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) = x \xi_{-\lad} \otimes F_i y \eta_{\lad+\zeta} + \frac{q_i^{-\langle i, \zeta +|x| +|y| \rangle +2}}{q_i -q_i^{-1}} {}_i r(x) \xi_{-\lad} \otimes y \eta_{\lad+\zeta} \label{Fixy1}\\ &\qquad\qquad\qquad\qquad\quad - \frac{q_i^{-2\langle i,\lad\rangle} q_i^{\langle i, -\zeta-|y| \rangle } }{q_i -q_i^{-1}} r_i(x) \xi_{-\lad} \otimes y \eta_{\lad+\zeta}. 
\notag \end{align} \end{lem} \begin{proof} Using the comultiplication formula \eqref{eq:Delta} and the identity \eqref{Eiy}, we compute that \begin{align*} &F_i (x \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) \\ &= x \xi_{-\lad} \otimes F_i y \eta_{\lad+\zeta} + F_i x \xi_{-\lad} \otimes \tilde{K}_i^{-1} y \eta_{\lad+\zeta} \notag \\ &= x \xi_{-\lad} \otimes F_i y \eta_{\lad+\zeta} + \frac{\tilde{K}_i^{-1} \, {}_i r(x) - r_i(x) \tilde{K}_i}{q_i -q_i^{-1}} \xi_{-\lad} \otimes \tilde{K}_i^{-1} y \eta_{\lad+\zeta} \notag \\ &= x \xi_{-\lad} \otimes F_i y \eta_{\lad+\zeta} + \frac{q_i^{-\langle i, \zeta +|x| +|y| \rangle +2}}{q_i -q_i^{-1}} {}_i r(x) \xi_{-\lad} \otimes y \eta_{\lad+\zeta} - \frac{q_i^{\langle i, -2\lad-\zeta-|y| \rangle } }{q_i -q_i^{-1}} r_i(x) \xi_{-\lad} \otimes y \eta_{\lad+\zeta}. \notag \end{align*} The lemma is proved. \end{proof} \begin{lem} \label{lem:stable} Let $g \in \Udot$. \begin{enumerate} \item If $\{z_\lad\}_{\lad \in X^+}$ is a bounded standard sequence, then so is $\{g z_\lad \}_{\lad \in X^+}$. \item If $\{z_\lad\}_{\lad \in X^+}$ is an asymptotically-zero standard sequence, then so is $\{g z_\lad \}_{\lad \in X^+}$. (We shall write $g \cdot o(1) =o(1)$.) \end{enumerate} \end{lem} \begin{proof} We shall prove (1) only, as the proof of (2) is entirely similar. If $g=g_1 +g_2$ and if $g_i z_\lad$ is a bounded standard sequence (for $i=1,2$), then so is $g z_\lad$. So we can assume $g =E_{i_a} \cdots E_{i_1} F_{j_b} \cdots F_{j_1} \one_\zeta$, and we only need to consider the action of $g$ on elements of the form \eqref{eq:spanning2}. A simple induction on $a+b$ reduces the proof to two basic cases for $g =E_i \one_\zeta$ and $g =F_i \one_\zeta$, which in turn follow by applying the identities \eqref{Eixy1} and \eqref{Fixy1}, respectively. \end{proof} Given $\mu =\sum_{i\in \I} n_i i \in \N\I$, we define its height $\hgt \mu =\sum_i n_i$.
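To illustrate Lemma~\ref{lem:yEi} in the simplest case, take $x'=1$ and $y =F_i$, so that ${}_i r(F_i) =r_i(F_i) =1$, $|x'|=0$ and $|y| =i$; then \eqref{Eixy1} specializes to
\[
E_i (\xi_{-\lad} \otimes F_i \eta_{\lad+\zeta}) = E_i \xi_{-\lad} \otimes F_i \eta_{\lad+\zeta} + \frac{q_i^{\langle i, \zeta \rangle} - q_i^{-2\langle i, \lad \rangle -\langle i, \zeta \rangle}}{q_i -q_i^{-1}} \, \xi_{-\lad} \otimes \eta_{\lad+\zeta},
\]
where the term involving $q_i^{-2\langle i, \lad \rangle}$ gives an asymptotically-zero contribution as $\lad \mapsto \infty$.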
Given $N, N'\in \N$, we shall denote by $P^+(N)$ (respectively, $P^-(N)$) the $\Q(q)$-submodule of $\U^+$ (respectively, $\U^-$) spanned by the elements $b \in \B^+$ (respectively, $b \in \B^-$) such that $\hgt |b| \le N$. For $\zeta \in X$, we denote by $P(N, N')$ the $\Q(q)$-submodule of $\Udot$ spanned by the elements $b^+b^-\one_\zeta$, where $b^+ \in \B^+$ and $b^- \in \B^-$ are such that $\hgt |b^+| \le N, \hgt |b^-| \le N'$ and $|b^+| - |b^-| =\zeta$. The following is the most crucial technical construction in this paper. \begin{prop} \label{prop:approx} Let $\zeta \in X$. \begin{enumerate} \item Given $x \in \U^+, y \in \U^-$, there exists a unique element $x \fus_\zeta y \in \Udot \one_\zeta$ such that \begin{align} \label{starxy} (x \fus_\zeta y) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) -x \xi_{-\lad} \otimes y \eta_{\lad+\zeta} = o(1). \end{align} \item Given $u \in \Udot \one_\zeta$, there exists a unique element $u'' =\sum_k x_{k} \otimes y_{k} \in \U^+ \otimes \U^-$ such that \begin{align} \label{Eixy4} u ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) - \sum_{k} x_{k} \xi_{-\lad} \otimes y_{k} \eta_{\lad+\zeta} = o(1). \end{align} \end{enumerate} \end{prop} \begin{proof} (1) For the existence, we shall prove the following more precise statement. {\bf Claim 1.} For $x \in \U^+$ and $y \in \U^-$ homogeneous, there exists $x \fus_\zeta y \in P(\hgt |x|, \hgt |y|)$ of the form \begin{align} \label{starxy1} x \fus_\zeta y \in xy \one_\zeta + P(\hgt |x|-1, \hgt |y| -1) \text{ such that \eqref{starxy} holds. } \end{align} We prove Claim~1 by induction on $\hgt |x|$. The case when $\hgt |x|=0$ clearly follows by \eqref{eq:y}. If $\hgt |x| >0$, we can assume $x$ is of the form $x =E_{i} x'$, for some $x' \in \U^+$. 
We observe by \eqref{Eixy2} that \begin{align} \label{Eixy3} &E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} = E_i (x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta}) - \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta} + o(1). \end{align} Since $\hgt |x'| < \hgt |x|$, the inductive assumption can be applied to $x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta}$ and $x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta}$ on the RHS\eqref{Eixy3}, and we have \begin{align*} x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} &= (x' \fus_{\zeta} y) ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) + o(1), \\ x' \xi_{-\lad} \otimes {}_i r(y) \eta_{\lad+\zeta} &= (x' \fus_{\zeta} {}_i r(y)) ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) + o(1). \end{align*} By Lemma~\ref{lem:stable}, $E_i \cdot o(1)$ is of the form $o(1)$. Hence Equation \eqref{Eixy3} can be written as \begin{align*} E_i x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} &= \Big( E_i (x' \fus_\zeta y) - \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \fus_\zeta {}_i r(y) \Big) ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) + o(1). \end{align*} By the inductive assumption, we have \[ E_i(x' \fus_\zeta y) \in E_i(x' y \one_\zeta + P(\hgt |x'|-1, \hgt |y| -1) ) \subseteq xy \one_\zeta + P(\hgt |x|-1, \hgt |y| -1) \] and $x' \fus_\zeta {}_i r(y) \in P(\hgt |x|-1, \hgt |y| -1)$. Therefore, setting \[ x \fus_\zeta y:= E_i (x' \fus_\zeta y) - \frac{q_i^{\langle i, \zeta +|x'| +|y| \rangle -2}}{q_i -q_i^{-1}} x' \fus_\zeta {}_i r(y) \] yields an element satisfying \eqref{starxy} and \eqref{starxy1}. For the uniqueness, assume that another element $w \in \Udot \one_\zeta$ also satisfies the same property \eqref{starxy} as $x \fus_\zeta y$. Set $z:=w -x \fus_\zeta y,$ and $z_\lad :=z (\xi_{-\lad} \otimes \eta_{\lad+\zeta})$. Then $z_\lad = o(1)$ by using \eqref{starxy} twice. Thus $(z_\lad, z_\lad)$ converges to $0$, as $\lad$ tends to $\infty$.
On the other hand, by \cite[26.2.3]{Lu93}, $(z_\lad, z_\lad)$ converges to $(z, z)$ as $\lad$ tends to $\infty$. Hence we must have $(z, z)=0$, whence $z=0$, i.e., $w =x \fus_\zeta y$. (2) For the existence, we shall prove a more precise statement. {\bf Claim 2.} For $u =xy \one_\zeta \in \Udot \one_\zeta$ with $x \in \U^+$ and $y \in \U^-$ homogeneous, there exists $u'' \in P^+(\hgt |x|) \otimes P^-(\hgt |y|)$ of the form \begin{align} \label{starxy2} u'' \in x \otimes y + P^+(\hgt |x|-1) \otimes P^- (\hgt |y| -1) \text{ such that \eqref{Eixy4} holds. } \end{align} We prove the existence by induction on $\hgt |x|$. If $\hgt |x| =0$, then the statement follows by \eqref{eq:y}. If $\hgt |x| >0$, we can assume $x =E_i x'$, for some $x' \in \U^+$, and so $u =E_i x' y\one_\zeta$. Hence, by the inductive assumption on $x' y \one_\zeta$ (and recalling $E_i \cdot o(1) =o(1)$ by Lemma~\ref{lem:stable}), we have \begin{align} \label{eq:Eiu2} u ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) & = E_i \cdot x'y \one_\zeta ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) \\ &= E_i (x' \xi_{-\lad} \otimes y \eta_{\lad+\zeta} + \sum_{\ell} x'_{\ell} \xi_{-\lad} \otimes y'_{\ell} \eta_{\lad+\zeta}) + o(1), \notag \end{align} for some $\sum_{\ell} x'_{\ell} \otimes y'_{\ell} \in P^+(\hgt |x|-2) \otimes P^-(\hgt |y| -1)$. By applying \eqref{Eixy1}, we see that RHS\eqref{eq:Eiu2} $=u'' ( \xi_{-\lad} \otimes \eta_{\lad+\zeta} ) +o(1)$, for $u''$ of the form \eqref{starxy2}. The uniqueness can be established by the same arguments as in (1). \end{proof} \subsection{To infinity} Recall from Definition~\ref{def:seq} the notions of bounded and asymptotically-zero standard sequences. \begin{lem} \label{lem:lim0} Let $\zeta \in X$, and let $\{z_\lad \}_{\lad \in X^+}$, with $z_\lad \in {}^\omega L(\lad) \otimes L(\lad+\zeta)$, be a bounded standard sequence.
Then, $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta})) \in \Q(q)$ (respectively, $(z_\lad, x \xi_{-\lad} \otimes y\eta_{\lad+\zeta}) \in \Q(q)$) converges in $\Q((q^{-1}))$ as $\lad $ tends to $\infty$, for any $u \in \Udot$, $x \in \U^+$, and $y \in \U^-$. Moreover, the following statements (a)--(d) for $\{z_\lad\}_{\lad \in X^+}$ are equivalent: \begin{enumerate} \item[(a)] $\{z_\lad \}_{\lad \in X^+}$ is asymptotically zero; \item[(b)] $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta}))$ converges in $\Q((q^{-1}))$ to $0$ as $\lad$ tends to $\infty$, for any $u \in \Udot$; \item[(c)] $(z_\lad, x \xi_{-\lad} \otimes y\eta_{\lad+\zeta})$ converges in $\Q((q^{-1}))$ to $0$ as $\lad $ tends to $\infty$, for any $x \in \U^+$ and $y \in \U^-$; \item[(d)] $(z_\lad, z'_{\lad})$ converges in $\Q((q^{-1}))$ to $0$ as $\lad $ tends to $\infty$, for any bounded standard sequence $\{z'_\lad\}_{\lad \in X^+}$. \end{enumerate} \end{lem} \noindent If one of the conditions (a)--(d) above is satisfied, we say $\lim_{\lad \mapsto \infty}\limits z_\lad =0$. \begin{proof} Let us prove that $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta})) \in \Q(q)$ converges in $\Q((q^{-1}))$ as $\lad $ tends to $\infty$. Note $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta})) = (\rho(u) z_\lad, \xi_{-\lad} \otimes \eta_{\lad+\zeta})$, and $\{\rho(u)z_\lad\}$ is a bounded standard sequence spanned by elements of the form \eqref{eq:spanning}--\eqref{eq:spanning2} with coefficients $f_\lad(q)$ as in \eqref{eq:s1}. If $f_\lad(q) u'(\xi_{-\lad} \otimes \eta_{\lad+\zeta})$ is a summand of $\rho(u)z_\lad$, then $(f_\lad(q) u'(\xi_{-\lad} \otimes \eta_{\lad+\zeta}), \xi_{-\lad} \otimes \eta_{\lad+\zeta})$ converges in $\Q((q^{-1}))$ to $\lim\limits_{\lad \mapsto \infty} f_\lad(q) \cdot (u',\one_\zeta)$, by \cite[26.2.3]{Lu93}.
If $f_\lad(q) x'\xi_{-\lad} \otimes y'\eta_{\lad+\zeta}$ is a summand of $\rho(u)z_\lad$, then $(f_\lad(q) x'\xi_{-\lad} \otimes y'\eta_{\lad+\zeta}, \xi_{-\lad} \otimes \eta_{\lad+\zeta})$ converges in $\Q((q^{-1}))$ to $\lim\limits_{\lad \mapsto \infty} f_\lad(q) \cdot (x',1)(y',1)$. Summarizing, $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta})) =(\rho(u) z_\lad, \xi_{-\lad} \otimes \eta_{\lad+\zeta})$ converges in $\Q((q^{-1}))$. Assume (a) holds. Then (b) and (c) follow by the same arguments above together with $\lim\limits_{\lad \mapsto \infty} f_\lad(q)=0$, and subsequently (d) also follows. Therefore, for any bounded standard sequences $\{z_\lad\}_{\lad \in X^+}, \{z'_\lad\}_{\lad \in X^+}$, we have \begin{align} \label{eq:zero} \lim\limits_{\lad \mapsto \infty}(z_\lad, z_\lad') =0, \quad \text{ if either sequence is } o(1). \end{align} Now by Proposition~\ref{prop:approx}(1), $x \xi_{-\lad} \otimes y \eta_{\lad+\zeta} = (x\fus_\zeta y) (\xi_{-\lad} \otimes \eta_{\lad+\zeta}) +o(1)$, and we have already shown that $\lim\limits_{\lad \mapsto \infty}(z_\lad, (x\fus_\zeta y) (\xi_{-\lad} \otimes \eta_{\lad+\zeta}))$ exists. Thus, by applying \eqref{eq:zero}, $\lim\limits_{\lad \mapsto \infty}(z_\lad, x \xi_{-\lad} \otimes y\eta_{\lad+\zeta}) = \lim\limits_{\lad \mapsto \infty}(z_\lad, (x\fus_\zeta y) (\xi_{-\lad} \otimes \eta_{\lad+\zeta}))$ exists in $\Q((q^{-1}))$. The equivalence of (b) and (c) follows by \eqref{eq:zero} and Proposition~\ref{prop:approx}. Clearly Parts (b) and (c) are special cases of (d), and on the other hand, Part (d) follows easily by combining (b) and (c). It remains to show that (b) $\Rightarrow$ (a). By Proposition~\ref{prop:approx} and \eqref{eq:zero}, we can assume that $z_\lad$ is a linear combination of elements of the form \eqref{eq:spanning}, i.e., $z_\lad =\sum_{u} f_{\lad,u} (q) u (\xi_{-\lad} \otimes \eta_{\lad+\zeta})$, for various nonzero $u$ which are orthogonal to each other.
Then, for each such $u$, $(z_\lad, u(\xi_{-\lad} \otimes \eta_{\lad+\zeta}))$ converges in $\Q((q^{-1}))$ to $\lim\limits_{\lad \mapsto \infty} f_{\lad,u} (q) \cdot (u,u)$, by \cite[26.2.3]{Lu93}. By the assumption (b), $\lim\limits_{\lad \mapsto \infty} f_{\lad,u} (q)=0$, so $f_{\lad,u} (q)$ is of the form \eqref{eq:s0} and $z_\lad$ is asymptotically zero. The lemma is proved. \end{proof} We have the following reformulation of Proposition~\ref{prop:approx}. Given $x,y \in \bff$, we have $x^+ \in \U^+, y^- \in \U^-$, and we shall simply write $x \fus_\zeta y$ for $x^+ \fus_\zeta y^-$. \begin{prop} \label{prop:limit} Let $\zeta \in X$. \begin{enumerate} \item Given $x, y \in \bff$, there exists a unique element $x \fus_\zeta y \in \Udot \one_\zeta$ such that \begin{align*} \lim_{\lad \mapsto \infty}\limits \Big((x \fus_\zeta y) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) - x^+ \xi_{-\lad} \otimes y^- \eta_{\lad+\zeta} \Big ) =0. \end{align*} \item Given $u \in \Udot \one_\zeta$, there exists a unique element $u''=\sum_k x_{k} \otimes y_{k} \in \U^+ \otimes \U^-$ such that \begin{align*} \lim_{\lad \mapsto \infty}\limits \Big(u (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) - \sum_{k} x_{k} \xi_{-\lad} \otimes y_{k} \eta_{\lad+\zeta} \Big ) =0. \end{align*} \end{enumerate} \end{prop} \subsection{A linear isomorphism for $\Udot \one_\zeta$} We now present the first main result of this paper, which is built on the constructions in Proposition~\ref{prop:approx} or its reformulation in Proposition~\ref{prop:limit}; we retain the notation therein in Part (1) of the theorem below. \begin{thm} \label{thm:obasis} {\quad} \begin{enumerate} \item For any $\zeta \in X$, there exists a $\Q(q)$-linear isomorphism $$ \mathcal F_{\zeta} \colon \U^+ \otimes \U^- \longrightarrow \Udot \one_\zeta $$ such that \[ \mathcal F_\zeta (x \otimes y ) =x \fus_\zeta y, \qquad \mathcal F_\zeta ^{-1} (u) =u''. 
\] \item Given bases $B_1, B_2$ for $\bff$, the set \begin{align} \label{eq:PBW} B_1 \fus B_2 := \big \{ b_1\fus_\zeta b_2 \mid b_1 \in B_1, b_2 \in B_2, \zeta \in X \big \} \end{align} forms a $\Q(q)$-basis for $\Udot$. Moreover, for $b_1, b_1' \in B_1, b_2, b_2' \in B_2$ and $\zeta, \zeta' \in X$, we have \begin{align} \label{eq:BF2} \big(b_1 \fus_\zeta b_2, \, b'_1 \fus_{\zeta'} b'_2 \big) = \delta_{\zeta, \zeta'} (b_1,b'_1) \, (b_2,b'_2). \end{align} \end{enumerate} \end{thm} \begin{proof} (1) Write $\mathcal F =\mathcal F_\zeta$, and denote $\mathcal G: \Udot \one_\zeta \longrightarrow \U^+ \otimes \U^-$ the linear map such that $\mathcal G(u) =u''$ as given in Proposition~\ref{prop:approx}(2) or Proposition~\ref{prop:limit}(2). It follows by the uniqueness in Proposition~\ref{prop:limit}(2) that $\mathcal G \mathcal F (x\otimes y) =\mathcal G(x \fus_\zeta y) =x \otimes y$. Let $u \in \Udot \one_\zeta$ and $\mathcal G (u) =u'' :=\sum_k x_k \otimes y_k$ as in Proposition~\ref{prop:limit}(2). Then by Proposition~\ref{prop:limit}(1), we have $(\sum_k x_k \fus_\zeta y_k) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) -\sum_{k} x_k \xi_{-\lad} \otimes y_k \eta_{\lad+\zeta} = o(1).$ By the uniqueness in Proposition~\ref{prop:limit}(1), we must have $u= \sum_k x_k \fus_\zeta y_k$, i.e., $u=\mathcal F \mathcal G(u)$. So $\mathcal F$ is a linear isomorphism with inverse $\mathcal G$. (2) The first statement on bases follows by (1). The formula \eqref{eq:BF2} is trivial if $\zeta' \neq \zeta$. Assume $\zeta'=\zeta$. Set \begin{align*} z_\lad &:= (b_1 \fus_{\zeta} b_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) - b_1 \xi_{-\lad} \otimes b_2 \eta_{\lad+\zeta}, \\ z'_\lad &:= (b'_1 \fus_{\zeta} b'_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) - b'_1 \xi_{-\lad} \otimes b'_2 \eta_{\lad+\zeta}. \end{align*} It follows by Proposition~\ref{prop:limit} that $\lim_{\lad \mapsto \infty}\limits z_\lad =0$ and $\lim_{\lad \mapsto \infty}\limits z'_\lad =0$. 
Then we have \begin{align*} &\Big((b_1 \fus_{\zeta} b_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ), (b'_1 \fus_{\zeta} b'_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) \Big) - \big( b_1 \xi_{-\lad} \otimes b_2 \eta_{\lad+\zeta}, b'_1 \xi_{-\lad} \otimes b'_2 \eta_{\lad+\zeta} \big ) \\ &= \big(z_\lad, (b'_1 \fus_{\zeta} b'_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) \big ) + \big(z'_\lad, b_1 \xi_{-\lad} \otimes b_2 \eta_{\lad+\zeta} \big ), \end{align*} whose RHS converges in $\Q((q^{-1}))$ to $0$ as $\lad$ tends to $\infty$ by Lemma~\ref{lem:lim0}. By \cite[26.2.3]{Lu93}, the pairing $\big((b_1 \fus_{\zeta} b_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ), (b'_1 \fus_{\zeta} b'_2) (\xi_{-\lad} \otimes \eta_{\lad+\zeta} ) \big)$ converges in $\Q((q^{-1}))$ to $\big( b_1 \fus_{\zeta} b_2, b'_1 \fus_{\zeta} b'_2 \big)$ as $\lad$ tends to $\infty$. On the other hand, we have \[ \big( b_1 \xi_{-\lad} \otimes b_2 \eta_{\lad+\zeta}, b'_1 \xi_{-\lad} \otimes b'_2 \eta_{\lad+\zeta} \big ) =\big( b_1 \xi_{-\lad}, b'_1 \xi_{-\lad} \big ) \big(b_2 \eta_{\lad+\zeta}, b'_2 \eta_{\lad+\zeta} \big ), \] which converges to $(b_1, b'_1)(b_2, b'_2)$ as $\lad$ tends to $\infty$. Hence \eqref{eq:BF2} for $\zeta'=\zeta$ follows. The theorem is proved. \end{proof} \subsection{Fused canonical basis for $\Udot$} We have several distinguished choices of bases for $\bff$. If we choose the canonical basis $\B$ for $\bff$, the $\Q(q)$-basis \[ \B \fus \B = \big \{ b_1 \fus_\zeta b_2 \mid b_1, b_2\in \B, \zeta \in X \big \} \] is called a {\em fused canonical basis} for $\Udot$. Note that \[ b^+ \one_\zeta \in \B \fus \B, \; b^- \one_\zeta \in \B \fus \B, \quad \text{ for } b \in \B. \] Recall Lusztig's canonical basis $\{b_1 \diamondsuit_\zeta b_2 | b_1, b_2 \in \B, \zeta \in X \}$ on $\Udot$. Define a partial order $\leq$ on $\B \times \B$ as follows \cite[24.3.1]{Lu93}. 
We say $(b_1', b_2') \leq (b_1, b_2)$ if $|b'_1| -|b'_2| =|b_1| -|b_2|$ and if either ($b_1 =b_1'$ and $b_2 =b_2'$) or \[ \hgt|b'_1| <\hgt|b_1| \text{ and } \hgt|b'_2|<\hgt|b_2|. \] \begin{thm} \label{thm:CB-fCB} Let $\zeta \in X$, and $b_1, b_2 \in \B$. We have \begin{align*} b_1 \diamondsuit_\zeta b_2 = b_1 \fus_\zeta b_2 + \sum_{(b'_1, b'_2) <(b_1, b_2)} P_{(b'_1, b'_2),(b_1, b_2)} (q) \, b'_1 \fus_\zeta b'_2 \end{align*} where $P_{(b'_1, b'_2),(b_1, b_2)} (q) \in \Z [[q^{-1}]] \cap \Q(q)$. Moreover, for the ADE type, $P_{(b_1', b_2'),(b_1, b_2)} (q)$ lies in $ \N[[q^{-1}]] \cap \Q(q)$. \end{thm} \begin{proof} As $\{b_1' \fus_\zeta b_2' \mid b_1', b_2' \in \B\}$ is a basis for $\Udot \one_\zeta$ by Theorem~\ref{thm:obasis}, we have \begin{align} \label{eq:P} b_1 \diamondsuit_\zeta b_2 = \sum_{(b'_1, b'_2)} P_{(b'_1, b'_2),(b_1, b_2)} (q) \, b'_1 \fus_\zeta b'_2 \end{align} where $P_{(b'_1, b'_2),(b_1, b_2)} (q) \in \Q(q)$. There is a canonical basis $\{ (b_1 \diamondsuit b_2)_{\lad, \lad +\zeta} | b_1 \in \B(\lad), b_2 \in \B(\lad +\zeta) \}$ on ${}^\omega L(\lad) \otimes L(\lad +\zeta)$, for $\lad \in X^+$ and $\lad+\zeta \in X^+$, by \cite[24.3.3]{Lu93}. By \cite[25.2.1]{Lu93}, we have \begin{align} \label{eq:CBCB} (b_1 \diamondsuit_\zeta b_2) (\xi_{-\lad} \otimes \eta_{\lad +\zeta} ) = (b_1 \diamondsuit b_2)_{\lad, \lad +\zeta}, \text{ for } b_1 \in \B(\lad), b_2 \in \B(\lad +\zeta). \end{align} We shall assume below that $\lad$ is chosen large enough that $b_1 \in \B(\lad), b_2 \in \B(\lad +\zeta)$.
Combining \eqref{eq:CBCB} with the expansion \cite[24.3.3]{Lu93} of $(b_1 \diamondsuit b_2)_{\lad, \lad +\zeta}$ in terms of pure tensors of canonical basis elements of ${}^\omega L(\lad)$ and $L(\lad +\zeta)$, we have \begin{align} (b_1 \diamondsuit_\zeta b_2) (\xi_{-\lad} \otimes \eta_{\lad +\zeta} ) &= b_1^+ \xi_{-\lad} \otimes b_2^- \eta_{\lad +\zeta} \label{eq:Ptilde}\\ &\quad + \sum_{\stackrel{(b_1',b_2') \in \B(\lad) \times \B(\lad+\zeta)}{(b_1', b_2') < (b_1, b_2)} } \widetilde P_{(b_1', b_2'),(b_1, b_2)}^{\lad} (q) \, b_1'^+ \xi_{-\lad} \otimes b_2'^- \eta_{\lad +\zeta}, \notag \end{align} where $\widetilde P_{(b_1', b_2'),(b_1, b_2)}^{\lad} (q) \in \Z[q^{-1}]$, with the superscript indicating its dependence on $\lad$. We set $\widetilde P_{(b_1', b_2'),(b_1, b_2)}^{\lad} (q) =0$ if $(b_1', b_2') \not \leq (b_1, b_2)$. By \eqref{eq:P} and Proposition~\ref{prop:approx}, we have \begin{align} \label{eq:P2} (b_1 \diamondsuit_\zeta b_2) (\xi_{-\lad} \otimes \eta_{\lad +\zeta} ) = \sum_{(b'_1, b'_2)} P_{(b'_1, b'_2),(b_1, b_2)} (q) \, (b'_1 \xi_{-\lad} \otimes b'_2 \eta_{\lad +\zeta} ) +o(1). \end{align} Comparing \eqref{eq:Ptilde} and \eqref{eq:P2}, we conclude that \begin{align} \label{eq:P3} \lim_{\lad \mapsto \infty}\limits \widetilde P_{(b_1', b_2'),(b_1, b_2)}^{\lad} (q) =P_{(b'_1, b'_2),(b_1, b_2)} (q). \end{align} Hence, we must have $P_{(b'_1, b'_2),(b_1, b_2)}(q) \in \Z[[q^{-1}]]$; moreover, $P_{(b'_1, b'_2),(b_1, b_2)}(q)=0$ unless $(b'_1, b'_2) \le (b_1, b_2)$, and $P_{(b_1, b_2),(b_1, b_2)} (q)=1$. Now assume $\U$ is of ADE type. By \cite[Corollary 8.9]{We15}, $\widetilde P_{(b_1', b_2'),(b_1, b_2)}^{\lad} (q)$ in \eqref{eq:Ptilde} lies in $\N[q^{-1}]$, and then by \eqref{eq:P3}, we have $P_{(b_1', b_2'),(b_1, b_2)} (q) \in \N[[q^{-1}]] \cap \Q(q)$. \end{proof} \section{PBW basis for $\Udot$} In this section, we restrict ourselves to $\U$ of finite type; see, however, Remark~\ref{rem:affine}.
We construct and study the PBW basis for $\Udot$ and its relation to the canonical basis. We present explicit formulas in rank 1. \subsection{PBW basis in finite type} Let $\Phi^+$ be the set of positive roots and $W$ be the associated Weyl group for $\U$. Let $\preceq$ be a convex order on $\Phi^+$ associated to an (arbitrarily) fixed reduced expression of the longest element $w_0$ in the Weyl group $W$, and we denote the (decreasingly) ordered roots as $\Phi^+ =\{\beta_1, \ldots, \beta_N\}$, where $N=|\Phi^+|$. For each $\beta \in \Phi^+$, Lusztig \cite{Lu90, Lu93} defined a root vector $E_\beta$ using the braid group action on $\U$, and its corresponding divided power $E_{\beta}^{(m)} =E_\beta^m / [m]_{q_\beta}!$, where $q_\beta =q_i$ if $\beta$ lies in the $W$-orbit of a simple root $\alpha_i$. We shall identify the set $\KP$ of Kostant partitions with $\N^{\Phi^+}$, thanks to the bijection given by ${\bf m} =(m_\beta)_{\beta \in \Phi^+} \in \N^{\Phi^+} \mapsto (\beta^{m_\beta})_{\beta \in \Phi^+}\in \KP$. For each $\bf m \in \KP$, we define \[ E_{{\bf m}} := \prod_{a=1}^N E_{\beta_a}^{(m_{\beta_a})}. \] Then $\bold{W}^+:=\{E_{\bf m} | {\bf m} \in \KP \}$ forms a PBW basis for $\U^+$, which is orthogonal with respect to the bilinear form $(\cdot, \cdot)$ on $\U^+$ (see \cite[38.2.3]{Lu93}). The canonical basis for $\bf f$ can be parametrized by $\KP$ as well \cite{Lu90, Lu93}, and we shall write $\{b_{\bf m} | {\bf m} \in \KP \}$; via the isomorphism ${\bf f} \cong \U^+$, we obtain the canonical basis $\{b_{\bf m}^+ | {\bf m} \in \KP \}$ for $\U^+$ and $\{b_{\bf m}^- | {\bf m} \in \KP \}$ for $\U^-$. Each canonical basis element in $\U^+$ can be characterized by the following two properties: \begin{enumerate} \item[(i)] $\overline{b_{\bf m}^+} =b_{\bf m}^+$; \item[(ii)] $b_{\bf m}^+ \in E_{\bf m} + \sum_{\bf m' \in \KP} q^{-1} \Z[q^{-1}] E_{\bf m'}$.
\end{enumerate} There is a suitable partial order $\preceq$ on $\KP$ (compatible with the convex order on $\Phi^+$), so that $b_{\bf m}^+ \in E_{\bf m} + \sum_{\bf m' \prec \bf m} q^{-1} \Z[q^{-1}] E_{\bf m'}$ \cite{Lu93} (also cf. \cite{Ka14, BKM14}). Note $\bf m' \preceq \bf m$ implies that $\sum_{\beta \in \Phi^+} m'_\beta \beta =\sum_{\beta \in \Phi^+} m_\beta \beta$. Denote \begin{align} \label{eq:bE} b_{\bf m}^+ =\sum_{\bf m' \preceq \bf m} R_{{\bf m'}, {\bf m}}(q) E_{\bf m'} \end{align} where $R_{{\bf m'}, {\bf m}}(q) \in q^{-1} \Z[q^{-1}]$ for $\bf m' \prec \bf m$, and $R_{{\bf m}, {\bf m}}(q)=1$. Similarly, by applying suitable braid group actions to construct root vectors and their divided powers for $\U^-$ (which lie in $\U^-_{\Z[q^{-1}]}$), we obtain a PBW basis $\bold{W}^- := \{F_{\bf m} | {\bf m} \in \KP \}$ for $\U^-$, which is orthogonal with respect to the bilinear form $(\cdot, \cdot)$ on $\U^-$. The canonical basis $\{b_{\bf n}^- | {\bf n} \in \KP \}$ for $\U^-$ satisfies $b_{\bf n}^- \in F_{\bf n} + \sum_{\bf n' \prec \bf n} q^{-1} \Z[q^{-1}] F_{\bf n'}$. It follows that \begin{align} \label{eq:bF} b_{\bf n}^- =\sum_{\bf n' \preceq \bf n} R_{{\bf n'}, {\bf n}}(q) F_{\bf n'}. \end{align} We formulate the following distinguished case of Theorem~\ref{thm:obasis} separately and a little more precisely; it was the original goal of this paper. \begin{thm} [PBW basis for $\Udot$] \label{thm:PBW} Let $\U$ be of finite type. Then the set \[ \bold{W}\fus\bold{W} =\{ E_{\bf m} \fus_\zeta F_{\bf n} \mid {\bf m}, {\bf n} \in \KP, \zeta \in X \} \] forms an orthogonal basis (called a {\em PBW basis}) for $\Udot$. The {\em dual PBW basis} for $\Udot$ is given by \[ \Big \{ \prod_{\beta \in \Phi^+} (q_\beta^{-2}; q_\beta^{-2})_{m_\beta}(q_\beta^{-2}; q_\beta^{-2})_{n_\beta}E_{\bf m} \fus_\zeta F_{\bf n} \mid {\bf m}, {\bf n} \in \KP, \zeta \in X \Big \}.
\] \end{thm} \begin{proof} Clearly $\bold{W}\fus\bold{W}$ is a basis for $\Udot$, by Theorem~\ref{thm:obasis}. Its orthogonality follows by \eqref{eq:BF2} and the orthogonality of the PBW bases for $\U^{\pm}$ \cite[38.2.3]{Lu93}. Note that $(E_{\bf m}, E_{\bf m}) =\prod_{\beta \in \Phi^+} (q_\beta^{-2}; q_\beta^{-2})_{m_\beta}^{-1}$, cf. \cite{Lu93}. The dual PBW basis follows from this and \eqref{eq:BF2}. \end{proof} Thanks to $E_{\bf m} \fus_\zeta 1 = E_{\bf m} \one_{\zeta}$ and $1 \fus_\zeta F_{\bf n} = F_{\bf n} \one_{\zeta}$, the PBW bases for $\U^\pm$ are part of the PBW basis for $\Udot$. One can also formulate hybrid bases $\bold{W}\fus \B$ and $\B \fus \bold{W}$ for $\Udot$. \subsection{PBW basis vs (fused) canonical basis} The fused canonical basis for $\Udot$ in finite type can now be written as $\B \fus \B =\{b_{\bf m} \fus_\zeta b_{\bf n} | {\bf m, \bf n} \in \KP, \zeta\in X \}$. We formulate its relation to the PBW basis $\bold{W}\fus\bold{W}$. \begin{prop} \label{prop:fCB-PBW} Let $\bf m, \bf n \in \KP$ and $\zeta \in X$. We have \begin{align*} b_{\bf m} \fus_\zeta b_{\bf n} &= E_{\bf m} \fus_\zeta F_{\bf n} + \sum_{\stackrel{{\bf (m', n') \neq (m, n)}}{\bf m' \preceq \bf m, \bf n' \preceq \bf n}} R_{{\bf m'}, {\bf m}} (q) R_{{\bf n'}, {\bf n}} (q) \, E_{\bf m'} \fus_\zeta F_{\bf n'}. \end{align*} Moreover, for the ADE type, we have $R_{{\bf m'}, {\bf m}} (q), R_{{\bf n'}, {\bf n}} (q) \in \N[[q^{-1}]] \cap \Q(q)$. \end{prop} \begin{proof} By construction (see Proposition~\ref{prop:approx}), $x\fus_\zeta y$ is bilinear in $x$ and $y$. Thus the expansion formula for $b_{\bf m} \fus_\zeta b_{\bf n}$ follows by \eqref{eq:bE}--\eqref{eq:bF}. Assume $\U$ is of ADE type. Then we have $R_{{\bf m'}, {\bf m}} (q), R_{{\bf n'}, {\bf n}} (q) \in \N[q^{-1}]$.
This was first proved in \cite[Corollary 10.7]{Lu90} when the reduced expressions of the longest element $w_0 \in W$ are ``adapted'', and then proved by Syu Kato \cite{Ka14} in general (also see \cite{BKM14} and \cite{Oy18} for different approaches). \end{proof} Now we formulate the relation between the canonical basis and the PBW basis for $\Udot$. \begin{cor} \label{cor:CB-PBW} Let $\bf m, \bf n \in \KP$ and $\zeta \in X$. We have \begin{align*} b_{\bf m} \diamondsuit_\zeta b_{\bf n} &= \sum_{\stackrel{(b_{\bf m_1}, b_{\bf n_1}) \leq (b_{\bf m}, b_{\bf n})}{\bf m' \preceq \bf m_1, \bf n' \preceq \bf n_1}} P_{(b_{\bf m_1}, b_{\bf n_1}),(b_{\bf m}, b_{\bf n})} (q) \, R_{{\bf m'}, {\bf m_1}} (q) R_{{\bf n'}, {\bf n_1}} (q) \, E_{\bf m'} \fus_\zeta F_{\bf n'}. \end{align*} Moreover, for the ADE type, $P_{(b_{\bf m_1}, b_{\bf n_1}),(b_{\bf m}, b_{\bf n})} (q) \, R_{{\bf m'}, {\bf m_1}} (q) R_{{\bf n'}, {\bf n_1}} (q) \in \N[[q^{-1}]] \cap \Q(q)$. \end{cor} \begin{proof} Follows by combining Theorem~\ref{thm:CB-fCB} and Proposition~\ref{prop:fCB-PBW}. \end{proof} \begin{rem} Assume $\U$ is of ADE type. The (dual) PBW bases for $\U^+$ have been categorified in terms of (proper) standard modules over the KLR or quiver Hecke algebras \cite{Ka14, BKM14}. In light of the positivity properties of the transition matrices in Theorem~\ref{thm:CB-fCB}, Proposition~\ref{prop:fCB-PBW} and Corollary~\ref{cor:CB-PBW}, it is reasonable to ask for a categorification of the fused canonical basis and the PBW basis of $\Udot$. \end{rem} \begin{rem} \label{rem:affine} Let $\U$ be of affine type. By a theorem of \cite{BCP99}, there exists a PBW basis $\bold{W}$ for $\bff$, which is actually a $\Z[q^{-1}]$-basis for $\bff_{\Z[q^{-1}]}$; it gives rise to PBW bases for $\U^+$ and for $\U^-$. It still holds that the set $\bold{W} \fus \bold{W}$ forms a basis for $\Udot$ by Theorem~\ref{thm:obasis} and may be called its PBW basis.
However, this basis is not orthogonal, as the PBW basis for $\U^+$ (or $\U^-$) in \cite{BCP99} is not orthogonal either. \end{rem} \subsection{PBW basis in rank 1} \label{sec:rank1} In the remainder of this section, we set $\U =\U_q(\mathfrak{sl}_2)$, the $\Qq$-algebra generated by $E$, $F$, $K$, $K^{-1}$. We shall write $q_i=q$, and identify the weight lattice $X =\Z$ and $X^+ =\N$. We work out explicit formulas for the canonical basis of $\Udot$ in terms of its PBW basis, and vice versa. For $n\in \N$, let $L(n)$ be the highest weight $\U$-module with highest weight vector $\eta_n$, and let ${}^\omega L(n)$ be the lowest weight $\U$-module with lowest weight vector $\xi_{-n}$. For $m \in \Z$, we shall consider the tensor product $\U$-modules ${}^\omega L(p) \otimes L(m+p)$, for various $p\in \N$ such that $m+p\in \N$. Recall that $\{\EE{a}|a\in \N\}$ is the PBW basis (and also the canonical basis) for $\U^+$, and $\{\FF{b}| b\in \N\}$ is the PBW basis (and the canonical basis) for $\U^-$. By Proposition~\ref{prop:limit} and Theorem~\ref{thm:PBW}, there exists a unique PBW basis element \[ \mf w_{m} (a,b):= \EE{a} \fus_{m} \FF{b} \in \Udot\one_{m} \] such that \begin{align} \label{tensor0} \lim_{p\mapsto \infty}\limits \Big( \mf w_m(a,b) (\xi_{-p} \otimes \eta_{p+m} ) - \EE{a} \xi_{-p} \otimes \FF{b} \eta_{p+m} \Big ) =0. \end{align} In particular, $\mf w_m(a,0) =\EE{a} \one_{m}$ and $\mf w_m(0,b) =\FF{b} \one_{m}$, for $a, b \in \N$. \begin{prop} \label{prop:PBW1} The set $\{\mf w_m(a,b) \mid a,b \in \N, m\in \Z\}$ forms an orthogonal (PBW) basis for $\Udot$ with respect to $(\cdot, \cdot)$. Moreover, we have \begin{align} \Big(\mf w_m(a,b), \mf w_{m'} (a',b') \Big) &= \frac{\delta_{a,a'} \delta_{b,b'} \delta_{m,m'}}{(q^{-2}; q^{-2})_a (q^{-2}; q^{-2})_b}. \label{norm} \end{align} \end{prop} The dual PBW basis for $\Udot$ is $\{(q^{-2}; q^{-2})_a (q^{-2}; q^{-2})_b \mf w_m(a,b) \mid a,b \in \N, m\in \Z\}$. \begin{proof} The first statement is a rephrasing of Theorem~\ref{thm:PBW}.
The explicit bilinear pairing formula \eqref{norm} follows from \eqref{eq:BF2} and the formula \eqref{BFhalf}. \end{proof} \subsection{PBW-expansion of canonical basis in rank 1} Recall the canonical basis for $\Udot$ consists of the following elements (cf. \cite{K93}, \cite[25.3.2]{Lu93}): \[ \EE{a} \FF{b} \one_{m} \;\; (m \le b-a), \qquad \FF{b} \EE{a} \one_{m} \;\; (m \ge b-a), \qquad \text{ for $a, b \in \N$}, \] with the (only) identification \begin{align} \label{EF=FE} \EE{a} \FF{b} \one_{m} =\FF{b} \EE{a} \one_{m}, \quad \text{ for } m =b-a. \end{align} \begin{thm} \label{thm:PBW-CB} Let $a, b \in \N$ and $m\in \Z$. Then we have \begin{align} \EE{a} \FF{b} \one_{m} &= \sum_{s=0}^{\min(a,b)} \frac{q^{-s^2+s(a-b +m)}}{(q^{-2}; q^{-2})_s} \mf w_m(a-s,b-s), \quad \text{for } m \le b-a, \label{CB-W1} \\ \FF{b} \EE{a} \one_{m} &= \sum_{s=0}^{\min(a,b)} \frac{q^{-s^2 - s(a-b +m)}}{(q^{-2}; q^{-2})_s} \mf w_m(a-s,b-s), \quad \text{for } m \ge b-a. \label{CB-W2} \end{align} \end{thm} Note that the leading coefficients are 1 and all the (non-leading) coefficients in \eqref{CB-W1}--\eqref{CB-W2} lie in $q^{-1} \N[[q^{-1}]] \cap \Q(q)$. \begin{proof} Assume $m\le b-a$. We consider the action of $\U$ on the module ${}^\omega L(p) \otimes L(m+p)$, for various $p\ge |m|$. Recall the following formula from \cite[\S25.3]{Lu93}, where we have replaced Lusztig's $\eta_{q}$ by $\eta_{p+m}$ and his $(-n+2b)$ by $m$: \begin{align} \label{L25.3a} &\EE{a} \FF{b} (\xi_{-p} \otimes \eta_{p+m}) \\ & =\sum_{s\ge 0; s \le a, s\le b} q^{s(a-s-p)} \qbinom{s-b+m+p}{s} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m}.
\notag \end{align} To make the dependence of these coefficients on $p$ clearer, we rewrite the formula \eqref{L25.3a} as \begin{align} &\EE{a} \FF{b} (\xi_{-p} \otimes \eta_{p+m}) \label{eq:EFpq} \\ & =\sum_{s\ge 0; s \le a, s\le b} q^{-s^2+s(a-b+m)} \prod_{d=1}^s \frac{1 -q^{2b-2m-2d-2p}}{1 -q^{-2d}} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m} \notag \\ & =\sum_{s\ge 0; s \le a, s\le b} \frac{q^{-s^2+s(a-b+m)} }{ (q^{-2}; q^{-2})_s} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m} \notag \\ &\; -\sum_{s\ge 0; s \le a, s\le b} \frac{ q^{-s^2+s(a-b+m)}}{(q^{-2}; q^{-2})_s} \big(1- (q^{2b-2m-2-2p};q^{-2})_s \big) \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m}. \notag \end{align} Note that the first summand on the RHS of \eqref{eq:EFpq} has coefficients independent of $p$; the second summand is an asymptotically-zero standard sequence as $p$ varies, and so it has limit $0$ as $p$ tends to $\infty$ by Lemma~\ref{lem:lim0}. Therefore, \eqref{CB-W1} follows from the identities \eqref{tensor0} and \eqref{eq:EFpq}. Now assume $m\ge b-a$. Recall the following formula from \cite[\S25.3]{Lu93}: \begin{align} \label{L25.3b} &\FF{b} \EE{a} (\xi_{-p} \otimes \eta_{p+m}) \\ & =\sum_{s\ge 0; s \le a, s\le b} q^{s(b-s-m-p)} \qbinom{s-a+p}{s} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m}. \notag \end{align} This can be rewritten as \begin{align} &\FF{b} \EE{a} (\xi_{-p} \otimes \eta_{p+m}) \label{eq:EFpq2} \\ & =\sum_{s\ge 0; s \le a, s\le b} q^{-s^2 - s(a-b+m)} \prod_{d=1}^s \frac{1 -q^{2a-2d-2p}}{1 -q^{-2d}} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m} \notag \\ & =\sum_{s\ge 0; s \le a, s\le b} \frac{q^{-s^2-s(a-b+m)} }{ (q^{-2}; q^{-2})_s} \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m} \notag \\ &\quad - \sum_{s\ge 0; s \le a, s\le b} \frac{ q^{-s^2-s(a-b+m)} }{(q^{-2}; q^{-2})_s} \big(1- (q^{2a-2-2p};q^{-2})_s \big) \EE{a-s} \xi_{-p} \otimes \FF{b-s} \eta_{p+m}.
\notag \end{align} The second summand above has limit $0$ as $p$ tends to $\infty$, and \eqref{CB-W2} follows. \end{proof} \begin{cor} \label{cor:BF1} The bilinear pairings between canonical basis elements are given by: \begin{align*} &\big(\EE{a} \FF{b} \one_{m}, \EE{a'} \FF{b'} \one_{m} \big) \\ &=\sum_{s=\min(0,a-a')}^{\min(a,b)} \frac{ q^{-s^2 -(a'-a+s)^2 +(a'-a+2s)(a-b+m)} }{ (q^{-2}; q^{-2})_{a - s} (q^{-2}; q^{-2})_{b - s} (q^{-2}; q^{-2})_{s} (q^{-2}; q^{-2})_{a' - a + s}} \end{align*} if $m\le b-a =b'-a'$; and \begin{align*} & \big(\FF{b} \EE{a} \one_{m}, \FF{b'} \EE{a'} \one_{m} \big) \\ &=\sum_{s=\min(0,a-a')}^{\min(a,b)} \frac{ q^{-s^2 -(a'-a+s)^2 - (a'-a+2s)(a-b+m)} }{ (q^{-2}; q^{-2})_{a - s} (q^{-2}; q^{-2})_{b - s} (q^{-2}; q^{-2})_{s} (q^{-2}; q^{-2})_{a' - a + s}} \end{align*} if $m\ge b-a =b'-a'$. The bilinear pairings are $0$ whenever $b-a \neq b'-a'$. \end{cor} \begin{proof} This follows immediately from Theorem~\ref{thm:PBW-CB} and Proposition~\ref{prop:PBW1}. \end{proof} \begin{rem} The formulas in Corollary~\ref{cor:BF1} first appeared in \cite[Proposition 2.8]{La10} (whose convention differs from ours by $q\leftrightarrow q^{-1}$), and their direct combinatorial proof is long, occupying \cite[\S10]{La10}. There is a very different formula for the bilinear pairing obtained by adjunctions, cf. \cite{La10}. \end{rem} \begin{rem} \label{rem:BFrank1} An alternative short proof of Corollary~\ref{cor:BF1} is as follows (we focus on the case when $m \le b-a$ below). Use the formula \eqref{L25.3a} to compute the bilinear pairing \[ \Big(\EE{a} \FF{b} (\xi_{-p} \otimes \eta_{p+m}), \EE{a'} \FF{b'} (\xi_{-p} \otimes \eta_{p+m}) \Big) \] on the module ${}^\omega L(p) \otimes L(m+p)$ (which is easy as the summands in \eqref{L25.3a} are orthogonal). Then its limit in $\Q((q^{-1}))$ as $p$ tends to $\infty$ can be directly read off from the reformulation \eqref{eq:EFpq} of \eqref{L25.3a}.
This limit gives us $\big(\EE{a} \FF{b} \one_{m}, \EE{a'} \FF{b'} \one_{m} \big)$ according to \cite[26.2.3]{Lu93}. \end{rem} \begin{rem} The bilinear pairing on the modified $\imath$quantum group of rank one was computed in \cite{Wa21} by the same approach as in Remark~\ref{rem:BFrank1}. For $\imath$quantum groups, this is for now the only available way of computing it, as there is no characterization of the bilinear form via adjunction like \eqref{eq:ad1}--\eqref{eq:ad3}. \end{rem} \subsection{PBW basis via canonical basis in rank 1} \label{sec:rank1c} Now we give a formula for the PBW basis in terms of the canonical basis. \begin{thm} For $a, b\in \N$ and $m\in \Z$, we have \begin{align} \label{wEF} \mf w_m(a,b) = \begin{cases} \sum\limits_{s=0}^{\min(a,b)} (-1)^s \frac{q^{-s +s(a-b +m)}}{(q^{-2}; q^{-2})_s} \EE{a-s} \FF{b-s} \one_{m}, & \quad \text{if } m \le b-a, \\ \\ \sum\limits_{s=0}^{\min(a,b)} (-1)^s \frac{q^{-s - s(a-b +m)}}{(q^{-2}; q^{-2})_s} \FF{b-s} \EE{a-s} \one_{m}, & \quad \text{if } m \ge b-a. \end{cases} \end{align} \end{thm} \begin{proof} We shall prove the first formula for $\mf w_m(a,b)$ with $m \le b-a$; the proof of the other formula is entirely similar and will be skipped. Let us fix $a,b \in \N$ and $m\in \Z$ such that $m \le b-a$. One could rephrase the formula \eqref{CB-W1} as giving the unital triangular transition matrix from a basis $C :=\{\EE{a-s} \FF{b-s} \one_{m} \mid 0\le s\le \min(a,b) \}$ to another basis $P :=\{\mf w_m(a-s,b-s) \mid 0\le s\le \min(a,b)\}$ (these two sets have the same span). Accordingly, the first formula in \eqref{wEF} gives the unital triangular transition matrix from $P$ to $C$, and we shall prove these two transition matrices are inverses of each other. This boils down to verifying the following identity, for $k \ge 1$: \[ \sum_{s=0}^k (-1)^{k-s} \frac{q^{-s^2+s(a-b+k)} q^{(k-s)(a-b+k-1)} }{(q^{-2}; q^{-2})_s (q^{-2}; q^{-2})_{k-s}} =0.
\] Upon multiplying by $(-1)^k q^{k(b-a-k+1)} (q^{-2}; q^{-2})_{k}$, we reduce the above identity to the following standard $q$-binomial identity, for $k\ge 1$ (cf. \cite[1.3.4]{Lu93}): \[ \sum_{s=0}^k (-1)^{s} q^{s(1-k)} \qbinom{k}{s} =0. \] This proves the first formula in \eqref{wEF} (actually, we have shown that \eqref{CB-W1} and the first formula in \eqref{wEF} are equivalent). \end{proof} The two different formulas of $\mf w_m(a,b)$ for $m=b-a$ in \eqref{wEF} coincide thanks to \eqref{EF=FE}. The simplest new PBW basis elements are given by \begin{align} \label{eq:w11} \mf w_m(1,1) = \begin{cases} EF \one_{m} - \frac{q^{m-1}}{1-q^{-2}} \one_{m}, & \quad \text{if } m \le 0, \\ \\ FE \one_{m} - \frac{q^{-m-1} }{1-q^{-2}} \one_{m}, & \quad \text{if } m \ge 0. \end{cases} \end{align} For $m\le 0$, we have $\mf w_m(1,1) (\xi_{-p} \otimes \eta_{p+m}) = E\xi_{-p} \otimes F \eta_{p+m} -\frac{q^{-1-2p}}{1-q^{-2}} (\xi_{-p} \otimes \eta_{p+m})$.
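As a quick consistency check (using only \eqref{CB-W1}--\eqref{CB-W2} and \eqref{eq:w11}; nothing new is assumed), one can specialize Theorem~\ref{thm:PBW-CB} to $a=b=1$:

```latex
% Specializing \eqref{CB-W1} and \eqref{CB-W2} to a=b=1: the s=1 coefficient
% equals q^{-1+(a-b+m)}/(q^{-2};q^{-2})_1 = q^{m-1}/(1-q^{-2}),
% respectively q^{-1-(a-b+m)}/(q^{-2};q^{-2})_1 = q^{-m-1}/(1-q^{-2}).
\begin{align*}
E F \one_{m} &= \mf w_m(1,1) + \frac{q^{m-1}}{1-q^{-2}}\, \one_{m},
  && m \le 0, \\
F E \one_{m} &= \mf w_m(1,1) + \frac{q^{-m-1}}{1-q^{-2}}\, \one_{m},
  && m \ge 0.
\end{align*}
```

Solving each line for $\mf w_m(1,1)$ recovers the two cases of \eqref{eq:w11}, and at $m=0$ the two expressions agree, in accordance with \eqref{EF=FE}.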
\section{Introduction} \label{Intr} Interaction between pairs of quasiparticles often leads to broken-symmetry ground states in solids. Typical examples are the formation of Cooper pairs in superconductors, or charge (CDW) and spin (SDW) density waves driven by electron-phonon and electron-electron interactions, respectively \cite{Gruner,Monceau12}. A CDW ground state is characterized by a spatial modulation $\sim\cos(Qx+\varphi)$ of the electron density and a periodic lattice distortion with the same wave vector $Q_{CDW}=2k_{F}$, inducing the opening of a gap, $\Delta$, in the electron spectrum. In one-dimensional (1D) weak-coupling mean-field theories, with $\Delta/E_{F}\ll1$, the Peierls instability is driven by the electronic energy gain, which originates mostly from Fermi surface (FS) nesting with $Q=2k_{F}$. In the case of incomplete nesting in quasi-one-dimensional (Q1D) compounds, or in quasi-two-dimensional (Q2D) and three-dimensional (3D) conductors, the CDW ground state is semimetallic because electron and hole pockets remain on the FS. Properties of these carriers can be modified by the CDW ordering, and one of the methods to probe this modification is to study magnetoresistance. In conventional metals, the Lorentz force caused by an applied magnetic field changes the electron trajectory and gives rise to a positive magnetoresistance (MR) which increases quadratically with the strength of the field \cite{Abrik,Ziman,Kittel}. Only in a few cases does the MR grow linearly with the field (linear MR, or LMR). Such behavior was first observed by Kapitza \cite{Kapitza} in polycrystalline metals, where LMR was attributed to the presence of open Fermi surfaces. The quantum mechanism of LMR was proposed by Abrikosov \cite{Abrikosov98,Abrikosov99}.
In his model, LMR is realized in gapless semiconductors or semimetals with a linear energy spectrum and a very small carrier concentration, so that only one Landau band participates in the conductivity. Parish and Littlewood \cite{Littlewood03} considered a macroscopically disordered and strongly inhomogeneous semiconductor and showed that a classical mechanism gives LMR in this case. In Ref. \onlinecite{Herring60} it was shown that LMR may occur in weakly inhomogeneous systems, for fields where the cyclotron orbit period exceeds the scattering time. From many published works one can also conclude that this unusual LMR may be a common feature of CDW systems. Indeed, LMR was observed in Q1D compounds exhibiting a CDW with incomplete FS nesting, such as NbSe$_{3}$ \cite{Richard87,Coleman90}, and in PdTeI \cite{Lei16}. LMR was also reported for Q2D compounds with a CDW: the transition metal dichalcogenides 2H-NbSe$_2$ and 2H-TaSe$_2$ \cite{Naito82}, 1T-TaTe$_{2}$ \cite{Chen17}, 1T-NbTe$_{2}$ \cite{Chen17Arx}, the monophosphate tungsten bronzes (PO$_2$)$_4$(WO$_3$)$_{2m}$ for $m=4,6$ \cite{Rotger94}, the molybdenum purple bronze K$_{0.9}$Mo$_{4}$O$_{11}$, and the molybdenum oxide $\eta$-Mo$_{4}$O$_{11}$ \cite{Schlenker}. In the present work we have studied galvanomagnetic properties of another type of Q2D compounds with a CDW, namely, rare-earth tritellurides. We have measured the magnetoresistance in a temperature range spanning the Peierls transition and show that LMR effectively appears below the transition temperature. Rare-earth tritellurides $R$Te$_{3}$ ($R$ = Y, La, Ce, Nd, Sm, Gd, Tb, Ho, Dy, Er, Tm) exhibit an incommensurate CDW through the whole $R$ series with a wave vector $\mathbf{Q}_{CDW1}=(0,0,\sim2/7c^{*})$ and a Peierls transition temperature above 300 K for the light atoms (La, Ce, Nd). For the heavier $R$ (Dy, Ho, Er, Tm) a second CDW occurs with the wave vector $\mathbf{Q}_{CDW2}=(\sim2/7a^{*},0,0)$.
The superlattice peaks measured by X-ray diffraction are very sharp and indicate a long-range 3D CDW order \cite{Ru08,DiMasi95,Brouet08}. Below the Peierls transition, in all $R$Te$_{3}$ compounds, the Fermi surface is partially gapped, resulting in a metallic behavior at low temperature. The layered $R$Te$_{3}$ compounds exhibit a large anisotropy between the resistivity along the $b$-axis and that in the $(a,c)$ plane, typically $\sim10^{2}$ below $T_{CDW1}$ and much higher at low temperature \cite{Ru06}. Because of the unidirectional character of the upper CDW \cite{Fang07,Lavagnini10R,Yao06}, a conductivity anisotropy in the $(a,c)$ plane arises in the CDW state, as was observed experimentally and explained theoretically in Ref. \onlinecite{sinchPRL14}. The effect of the upper CDW on the in-plane resistivity observed in experiments is very weak, no more than a few percent of the total resistance \cite{Ru08,sinchPRL14,Ru06}. For our study we chose two compounds: TbTe$_{3}$ as a system with a unidirectional CDW, and HoTe$_{3}$, which exhibits a bidirectional CDW. In TbTe$_{3}$ the CDW ordering is observed well above room temperature ($T_{CDW1}=336$ K). In HoTe$_{3}$ the first and second CDW transitions take place at $T_{CDW1}=283$ K and $T_{CDW2}=110$ K, respectively \cite{Ru08}. \section{Experimental} \label{Exp} Single crystals of TbTe$_{3}$ and HoTe$_{3}$ were grown by a self-flux technique under purified argon atmosphere, as described previously \cite{SinchPRB12}. Thin single-crystal samples of square shape with a thickness less than 1 $\mu$m were prepared by micromechanical exfoliation of relatively thick crystals glued on a sapphire substrate. The untwinned character of selected crystals and the spatial arrangement of the crystallographic axes were controlled by X-ray diffraction.
The room-temperature resistivity of the crystals was $26-28$ $\mu\Omega$ cm for TbTe$_3$ and $12-13$ $\mu\Omega$ cm for HoTe$_3$, in accordance with previously reported results \cite{Ru08,sinchPRL14}. The quality of the crystals was confirmed by the high value of the residual resistance ratio (RRR), $R(300\,{\rm K})/R(4\,{\rm K})$: 70-90 for HoTe$_3$ and more than 100 for TbTe$_3$. The magnetic field was applied parallel to the $b$ axis, and the in-plane magnetoresistance was recorded using the van der Pauw method, sweeping the field between $+6.5$ and $-6.5$ T. Measurements were performed at fixed temperatures in the range 350-20 K with the step $\Delta T=10$ K. \begin{figure}[t] \includegraphics[width=8cm]{Fig1} \caption{(color online) Magnetoresistance of HoTe$_{3}$ (a) and TbTe$_{3}$ (b) as a function of magnetic field, $B$, in log-log scale at different temperatures. Blue and red straight line segments indicate linear and quadratic dependencies, respectively.} \label{F1} \end{figure} \section{Experimental results} \label{Data} \begin{figure}[t] \includegraphics[width=8cm]{Fig2} \caption{(color online) MR$(B)$ for TbTe$_{3}$ (a) and HoTe$_{3}$ (b), at temperatures above $T_{CDW}$ (open blue circles) and below $T_{CDW}$ (open red squares).} \label{F2} \end{figure} \begin{figure}[t] \includegraphics[width=8.5cm]{Fig3} \caption{(color online) Magnetoresistance of HoTe$_{3}$ (a) and TbTe$_{3}$ (b) as a function of the squared magnetic field, $B^{2}$, at different temperatures. Solid black lines demonstrate the deviation of the MR$(B)$ dependencies from the quadratic law at some value of magnetic field, $B^{*}$.} \label{F3} \end{figure} \begin{figure}[t] \includegraphics[width=8.5cm]{Fig4} \caption{(color online) Temperature dependence of the characteristic field $B^{*}$ for HoTe$_{3}$ (blue) and TbTe$_{3}$ (red).
Dashed lines are guides to the eye.} \label{F4} \end{figure} The temperature variation of the field dependence of the magnetoresistance, defined as MR$=[R_{xx}(B)-R_{xx}(0)]/R_{xx}(0)$, in the temperature range from 10 K up to temperatures well above $T_{CDW}$ is shown in Fig. \ref{F1} for HoTe$_{3}$ (a) and TbTe$_{3}$ (b) in a log-log plot. Both compounds demonstrate nearly the same behavior: the magnetoresistance changes by more than four orders of magnitude as the temperature $T$ decreases from 300 K to 20 K. Simultaneously, the power-law field dependence of MR changes monotonically from quadratic (red straight line segment) at high $T$ and low $B$ to linear (blue straight line segment) at low $T$ and high $B$. Note that in the studied magnetic field range (up to 6.5 T) we never observed any deviation of MR from the quadratic law at temperatures above the Peierls transition temperature. Examples of MR$(B)$ dependencies measured at $T$ above $T_{CDW}$ (330 K and 290 K for TbTe$_3$ and HoTe$_3$, respectively) and below $T_{CDW}$ (40 K for both compounds) are shown in Fig. \ref{F2}. Note that at the same temperature $T=40$ K, the linear $R_{xx}(B)$ is more pronounced for HoTe$_{3}$, in which two CDWs exist at this temperature. To make this quadratic-to-linear MR crossover clearer, in Figs.~\ref{F3}(a) and (b) we plot MR as a function of the squared magnetic field, $B^{2}$, for HoTe$_{3}$ and TbTe$_{3}$, respectively. Solid black lines are quadratic dependencies which coincide with the experimental curves at low magnetic fields. At a certain magnetic field, $B^{\ast}$, the experimental dependencies deviate from these lines. The temperature dependence of this characteristic field $B^{\ast}$ is shown in Fig.~\ref{F4}. As can be seen, $B^{\ast}$ increases rapidly or even diverges when $T$ approaches the CDW transition temperature.
\section{Theoretical model and discussion} Thus, most charge-density-wave systems with imperfect nesting exhibit a linear magnetoresistance that is probably related to the CDW electronic structure. To propose a possible explanation of this linear MR, we invoke a usually neglected mechanism: the scattering of quasiparticles on fluctuations of the charge-density-wave order parameter, which violate the spatial uniformity and lead to the momentum relaxation of quasiparticles. The scattering on CDW fluctuations is the strongest near the so-called ``hot spots'' on the Fermi surface (FS). A somewhat similar mechanism of linear MR, but above the CDW transition temperature, was proposed in Refs. \cite{Young1968,FalikovLinearMR}. In Ref. \cite{Young1968} the scattering in the hot spots, with large momentum and low energy transfer, involves umklapp processes. In Ref. \cite{FalikovLinearMR} it involves the scattering by soft phonons, appearing due to the proximity to the Peierls instability. In our case these hot spots are the FS areas where the FS reconstruction due to the CDW is the strongest. Usually the hot spots are the ends of the ungapped FS parts. The electron dispersion in such hot spots depends strongly on the CDW structure, and the electrons there may be easily scattered by CDW fluctuations. Thus, in cuprate high-$T_c$ superconductors such hot spots are the ends of the Fermi arcs, but the FS reconstruction is driven by the pseudogap or antiferromagnetic ordering, rather than by a CDW. In organic metals, e.g. $\alpha$--(BEDT-TTF)$_{2}$KHg(SCN)$_{4}$, where the CDW leads to the FS reconstruction and changes its topology,\cite{KartsLTP2014} such hot spots are the points of intersection of the original FS and the FS shifted by the CDW wave vector.
In these hot spots the electron dispersion changes strongly, somewhat similarly to the change of the electron dispersion at the boundaries of the Brillouin zone in the weak-coupling approximation,\cite{Abrik} where the energy gap is formed by a periodic potential, which in our case is the CDW. Since the periodic potential and the resulting energy gap in the electron spectrum are of the order of the CDW order parameter and much less than the Fermi energy, in a high magnetic field the electron trajectories in these hot spots are subject to magnetic breakdown, in addition to the direct scattering by CDW fluctuations. This leads to an additional indirect scattering mechanism of conducting electrons, which may be rather strong \cite{KartsLTP2014}. The electron scattering in the hot spots leads to a linear field dependence of the scattering rate and, hence, to a linear magnetoresistance. To show this linear field dependence of the electron mean free time $\tau$, we assume that in each hot spot an electron is scattered with some probability $w_{hs}<1$; the possible origin of this scattering is discussed later. If this hot-spot scattering is the main scattering mechanism of conducting electrons, the corresponding electron mean free time is $\tau _{hs}=t_{hs}/w_{hs}$, where $t_{hs}$ is the mean time between electron passages through these hot spots. This time $t_{hs}$ is determined by the FS details \cite{Abrik}. In a magnetic field $\boldsymbol{H}$, electrons move in momentum space along the Fermi surface due to the Lorentz force, $d\boldsymbol{p}/dt=(e/c)[\boldsymbol{v}_{\bot }\times \boldsymbol{H}]$, and periodically pass through such hot spots. Hence, the mean free time $\tau _{hs}$ is proportional to the length of the Fermi surface between hot spots divided by the magnetic field strength $H$ and by the electron velocity in real space $v_{\bot }$: \cite{Abrik} \begin{equation} \tau _{hs}=(c/eHw_{hs})\int dl/v_{\bot }.
\label{tauhs} \end{equation} If the electron trajectory in magnetic field is closed, its motion is periodic with the cyclotron (or Larmor) period $T_{L}=2\pi /\omega _{c}$, where the cyclotron frequency $\omega _{c}=eH/m^{\ast }c$, and $m^{\ast }$ is the electron effective mass. Then $\tau _{hs}\simeq T_{L}/(w_{hs}n_{hs})$, where $n_{hs}$ is the number of hot spots passed during one cyclotron period. If the electron trajectory in magnetic field is open, it also periodically passes through hot spots, and the length of the Fermi surface between hot spots is approximately given by the length $2\pi /a^{\ast }$ of the first Brillouin zone divided by the number of hot spots on this open trajectory, where $a^{\ast }$ is the lattice constant. Then, according to Eq. (\ref{tauhs}), $\tau _{hs}\sim (c/eHw_{hs})\,2\pi /(a^{\ast }\left\vert v_{\bot }\right\vert n_{hs})$. We see that both for closed and open electron trajectories the hot-spot mean free time is inversely proportional to the magnetic field: $\tau _{hs}\propto 1/H$. In the $\tau $-approximation for isotropic in-plane dispersion, the conductivity tensor $\sigma $ in magnetic field $\boldsymbol{H}$ is given by the well-known formula\cite{Abrik} \begin{equation} \sigma =\frac{n_{e}e^{2}}{m^{\ast }\left( \omega _{c}^{2}+1/\tau ^{2}\right) }\left( \begin{tabular}{ll} $\tau ^{-1}$ & $\omega _{c}$ \\ $-\omega _{c}$ & $\tau ^{-1}$ \end{tabular} \right) , \label{sDrude} \end{equation} which gives for the resistivity tensor \begin{equation} R=\sigma ^{-1}=\frac{m^{\ast }}{n_{e}e^{2}}\left( \begin{tabular}{ll} $\tau ^{-1}$ & $-\omega _{c}$ \\ $\omega _{c}$ & $\tau ^{-1}$ \end{tabular} \right) . \label{RDrude} \end{equation} When $\tau $ is independent of magnetic field, as in the simplest models of electron scattering by impurities or by phonons, Eq. (\ref{RDrude}) predicts no magnetoresistance: $\Delta R_{xx}(H)\equiv R_{xx}(H)-R_{xx}(0)=0$.
The absence of magnetoresistance in this model is the result of the Hall electric field, which balances the Lorentz force. This balance can be maintained and leads to zero magnetoresistance only if the drift velocity $v$, entering the equations of motion, is the same for all charge carriers. Therefore, in metals with several types of charge carriers, e.g. electrons and holes from different Fermi-surface parts, a quadratic magnetoresistance appears, which saturates at high magnetic field. Thus, for the simplest isotropic model of only two types of carriers, the calculation based on the kinetic equation gives (see Eq. 7.163 of \cite{Ziman}) \begin{equation} \frac{\Delta R_{xx}(H)}{R_{xx}(0)}=\frac{\sigma _{1}\sigma _{2}\left( \omega _{c1}\tau _{1}-\omega _{c2}\tau _{2}\right) ^{2}}{\left( \sigma _{1}+\sigma _{2}\right) ^{2}+\left( \omega _{c1}\tau _{1}\sigma _{1}+\omega _{c2}\tau _{2}\sigma _{2}\right) ^{2}}, \end{equation} where the subscripts $1$ and $2$ denote the charge carriers of the first and second type, respectively. The quadratic dependence here comes from $\omega _{c}$ in the numerator, and the saturation at $\omega _{c}\tau \gtrsim 1$ comes from $\omega _{c}$ in the denominator. Usually, the relaxation time depends on the speed $v_{i}$ of an individual charge carrier, so that one cannot describe the motion of the carriers in terms of a single drift velocity even for metals with a single electron band. Therefore,\cite{Kittel} even in single-band metals in a weak field one observes the quadratic magnetoresistance \begin{equation} \frac{\Delta R_{xx}(H)}{R_{xx}(0)}\equiv \frac{R_{xx}(H)-R_{xx}(0)}{R_{xx}(0)}=\frac{\alpha \left( \omega _{c}\tau \right) ^{2}}{1+\beta \left( \omega _{c}\tau \right) ^{2}}, \label{Rxx} \end{equation} where the coefficients $\alpha \sim \beta \sim 1$; the MR saturates in strong field when $\omega _{c}\tau \gtrsim 1$.
This quadratic magnetoresistance in weak fields can be understood also in terms of the curvature of the electron trajectories, which geometrically reduces the electron mean free path $l_{i}=\tau v_{i}$ by the quantity $\sim l_{i}\left( l_{i}/r_{L}\right) ^{2}$, where $r_{L}\gg l_{i}$ is the Larmor radius.\cite{Abrik} Since $l_{i}\propto \tau $, this effect can be taken into account in Eq. (\ref{RDrude}) by the renormalization of the electron mean free time $\tau \left( H\right) $ in a weak magnetic field $H$ according to \cite{Abrik} \begin{equation} \frac{\tau \left( 0\right) }{\tau \left( H\right) }=1+\frac{\alpha \left( \omega _{c}\tau \right) ^{2}}{1+\beta \left( \omega _{c}\tau \right) ^{2}}, \label{tauH} \end{equation} which leads to Eq. (\ref{Rxx}). \begin{figure}[t] \includegraphics[width=8cm]{Fig5a} \newline \includegraphics[width=8.2cm]{Fig5b} \caption{(color online) The position of hot spots on the Fermi surface of RTe$_{3}$ (a) above the CDW transition temperature, as proposed in Ref. \onlinecite{FalikovLinearMR}, and (b) below it (this work).} \label{F5} \end{figure} If the electron scattering is dominated by the scattering in the hot spots, instead of Eq. (\ref{tauH}) we have $\tau \left( H\right) \approx \tau _{hs}\propto 1/H$, and Eq. (\ref{RDrude}) gives $R_{xx}\propto H$. However, in real compounds the total scattering rate $\tau ^{-1}$ is a sum of contributions from various mechanisms, including those from the scattering by impurities, $\tau _{i}^{-1}$, and by phonons, $\tau _{ph}^{-1}$. The scattering by phonons at high temperature is somewhat similar to the scattering by short-range impurities, as in both cases the momentum transfer during each scattering event is large and comparable to the Fermi momentum. In a rather weak magnetic field, when the Landau levels are not separated and the magnetic quantum oscillations can be neglected,\cite{Comment1} the scattering rates $\tau _{i}^{-1}$ and $\tau _{ph}^{-1}$ depend on magnetic field according to Eq. (\ref{tauH}).
Then $\tau ^{-1}\approx \tau _{i}^{-1}+\tau _{ph}^{-1}+\tau _{hs}^{-1}$, and the linear MR appears only in a rather strong magnetic field, when $\tau _{hs}^{-1}>\tau _{i}^{-1}+\tau _{ph}^{-1}\equiv \tau _{i+ph}^{-1}$, i.e. when $\omega _{c}\tau _{i+ph}\gtrsim 2\pi /w_{hs}n_{hs}$, or when the quadratic magnetoresistance saturates. At high temperature, when the scattering rate by phonons $\tau _{ph}^{-1}$ becomes larger than $\tau _{hs}^{-1}\approx \omega _{c}w_{hs}n_{hs}/2\pi $, one should observe the usual quadratic MR. On the contrary, at lower temperature and in higher field in the CDW state, when $\tau _{hs}^{-1}>\tau _{i+ph}^{-1}$, the linear MR should be observed as a general phenomenon. This crossover is clearly seen in the experimental data in Figs. \ref{F1}-\ref{F3}, and, according to the theoretical model, the crossover field increases with temperature, as shown in Fig. \ref{F4}. Let us discuss the possible microscopic origin of the electron scattering in the hot spots in more detail. The mechanism of linear MR above the CDW transition temperature $T_{CDW}$, proposed in Ref. \cite{FalikovLinearMR}, assumes a strong scattering by soft phonons with a wave vector close to the nesting vector $Q_{N}$ due to the Peierls instability. Then the hot spots are those connected by the CDW wave vector, as shown in Fig. \ref{F5}a. However, in our experiment the linear MR is observed much below $T_{CDW}$. Then the expected hot spots are the ends of the ungapped FS parts, as shown in Fig. \ref{F5}b. In these hot spots the FS reconstruction due to the CDW is the strongest, and the electron dispersion depends strongly on the CDW order parameter. Therefore, electrons in such hot spots may be easily scattered by CDW fluctuations.
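To make the linear regime explicit, one may insert the closed-orbit estimate $\tau _{hs}\simeq T_{L}/(w_{hs}n_{hs})$ into Eq. (\ref{RDrude}); this is only a sketch, since the product $w_{hs}n_{hs}$ depends on FS details that are not fixed by our data:

```latex
% Sketch: linear MR when hot-spot scattering dominates, \tau \approx \tau_{hs};
% it uses \omega_c = eH/m^{\ast}c, so the effective mass drops out at the end.
\begin{equation*}
R_{xx} \approx \frac{m^{\ast }}{n_{e}e^{2}}\, \tau _{hs}^{-1}
\approx \frac{m^{\ast }}{n_{e}e^{2}} \cdot
\frac{\omega _{c} w_{hs} n_{hs}}{2\pi }
= \frac{w_{hs} n_{hs}}{2\pi \, n_{e} e c}\, H .
\end{equation*}
```

The crossover condition $\tau _{hs}^{-1}\approx \tau _{i+ph}^{-1}$ then gives $B^{\ast }\sim 2\pi m^{\ast }c/(e\,\tau _{i+ph}\,w_{hs}n_{hs})$, which grows as $\tau _{i+ph}$ shortens with increasing temperature, in qualitative agreement with Fig. \ref{F4}.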
Unfortunately, the available experimental data do not give detailed information about the Fermi surface in RTe$_{3}$ compounds in the CDW state: the ARPES data \cite{Brouet08} do not have sufficient resolution to determine even the FS topology in the CDW state, while the magnetic quantum oscillations in the CDW state are complicated by the second CDW transition in some RTe$_{3}$ materials and by magnetic breakdown. Therefore, there are possibly other hot spots in the ungapped FS parts, and without detailed information about the FS in the CDW state we cannot predict the coefficient in the dependence $\tau _{hs}\propto 1/H$. There are two types of CDW fluctuations: amplitude and phase fluctuations. Both may arise, e.g., due to the CDW pinning by crystal defects or local inhomogeneities. The amplitude CDW defects may strongly scatter the conducting electrons, e.g., due to the inhomogeneous magnetic breakdown (MB),\cite{KartsLTP2014} because the MB probability depends exponentially \cite{FalikovLinearMR,KaganovSlutskinReview} on the gap opened in the electron spectrum due to the CDW. At the ends of the gapped region the gap values decrease, as explicitly shown by the ARPES data \cite{Brouet08}, and the magnetic breakdown becomes possible even in a low field. Moreover, the amplitude fluctuations of the CDW gap may lead to spatial variations of the boundary between the gapped and ungapped FS parts. Therefore, in Fig. \ref{F5}b we place the hot spots at the ends of the gapped region. The MB amplitude depends strongly not only on the gap value, but also on the electron velocity and dispersion in the MB region,\cite{KaganovSlutskinReview} which is also affected by the amplitude fluctuations of the CDW. The inhomogeneous MB probability leads to strong electron scattering in the MB regions, which play the role of hot spots.
Note that this strong scattering mechanism may be important not only for the transverse but also for the longitudinal MR, and may even lead to the phase inversion of magnetic quantum oscillations \cite{KartsLTP2014}. The CDW phase fluctuations mean local variations of the CDW wave vector, which also change the electron dispersion in our hot spots and affect the MB probability, in addition to the direct scattering in these hot spots by the CDW periodic potential with an inhomogeneous wave vector. Hence, the CDW fluctuations may indeed lead to strong electron scattering in the hot spots of the Fermi surface and, consequently, to linear MR. Our theoretical model is in many aspects similar to that of Ref. \cite{FalikovLinearMR}, but there are some important differences. First, the model in Ref. \cite{FalikovLinearMR} is developed and applied only slightly above the CDW transition temperature $T_{CDW}$, while our model can be applied much below $T_{CDW}$. Second, the model in Ref. \cite{FalikovLinearMR} is applied only to the unreconstructed FS, while we apply our model to the strongly reconstructed FS. Third, in Ref. \cite{FalikovLinearMR} the CDW fluctuations have a wave vector equal to the nesting vector, while in our model any Fermi-surface reconstruction due to the CDW leads to scattering by CDW fluctuations, even if there are no ungapped Fermi-surface points connected by the CDW wave vector. Fourth, we propose a temperature-driven crossover between the quadratic and linear magnetoresistance, which cannot be found in the model of Ref. \cite{FalikovLinearMR}, where the CDW fluctuations are considered only in the vicinity of the transition temperature. In conclusion, we have shown that in the CDW state of RTe$_{3}$ compounds there is a crossover from linear magnetoresistance at low temperature to the usual quadratic magnetoresistance at higher temperature.
We propose a general explanation of this phenomenon as being related to the electron scattering in the hot spots of the Fermi surface due to the spatial fluctuations or inhomogeneity of the charge-density-wave order parameter. \acknowledgements The authors gratefully acknowledge the RFBR-CNRS grant 17-52-150007. P.G. acknowledges the financial support of the Ministry of Education and Science of the Russian Federation in the framework of the Increase Competitiveness Program of MISiS. The theoretical part is supported by the Russian Science Foundation (grant \# 16-42-01100).
\section{Introduction} The ability to monitor the gamma-ray sky continuously is extremely important. The majority of gamma-ray sources are variable, exhibiting flares and transient outbursts on time scales from seconds to years. While there are currently several all-sky monitors in the hard X-ray energy range providing daily light curves, e.g., the All-Sky Monitor (ASM) on the {\it Rossi X-ray Timing Explorer} ({\it RXTE}) from 2--10 keV \citep{Levine1996}, the Gas Slit Camera (GSC) on the {\it Monitor of All-sky X-ray Image} ({\it MAXI}) from 1.5--20 keV \citep{Matsuoka2009}, and the Burst Alert Telescope (BAT) on {\it Swift} from 15--50 keV \citep{Gehrels2004}, there has not been an all-sky monitor in the low energy gamma-ray region since the Burst and Transient Source Experiment (BATSE) instrument on the {\it Compton Gamma-Ray Observatory} ({\it CGRO}), which was sensitive from 20--1800 keV \citep{Fishman1989}. The gamma-ray satellite {\it Fermi} was launched on 2008 June 11 and commenced science operations on 2008 August 12. {\it Fermi} contains two instruments: the Large Area Telescope (LAT), sensitive to gamma rays from $\sim20$ MeV to $\sim300$ GeV \citep{Atwood2009}; and the Gamma-ray Burst Monitor (GBM), which is sensitive to X-rays and gamma rays from 8 keV to 40 MeV \citep{Meegan2009}. With its wide field of view, GBM can be used to provide nearly continuous full-sky coverage in the hard X-ray/soft gamma-ray energy range. It is the only instrument currently in orbit that can perform all-sky monitoring above 100 keV (but below the 30 MeV threshold of the {\it Fermi} LAT) with usable sensitivity. The {\it Swift}/BAT sensitivity drops off rapidly above 100 keV, and its energy range effectively ends at 195 keV. {\it INTEGRAL}, which has a relatively narrow field of view, cannot make continuous observations of a large number of individual sources.
Also, GBM is not limited by solar pointing constraints, as are most other instruments, which allows the monitoring of sources at times during which other instruments cannot observe. The Earth occultation technique, used very successfully with BATSE, has been adapted to GBM to obtain fluxes for an input catalog of known or potential sources. A catalog of 82 sources is currently being monitored and regularly updated. This catalog contains predominantly Galactic X-ray binaries, but also includes the Crab, the Sun, two magnetars, five active galactic nuclei (AGN), and two cataclysmic variables. At energies above 100 keV, six persistent sources (the Crab, Cyg X-1, SWIFT J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105) and two transient sources (GX 339-4 and XTE J1752-223) have been detected in the first two years of observations. In section~\ref{sec:obs}, we briefly describe the GBM instrument and outline the Earth occultation technique as applied to GBM; in section~\ref{sec:res} we present the light curves for the eight sources; and in section~\ref{sec:dis} we discuss the results, the GBM capabilities, and future work. \section{GBM and the Earth Occultation Technique \label{sec:obs}} GBM consists of 14 detectors: 12 NaI detectors, each 12.7 cm in diameter and 1.27 cm thick; and two BGO detectors, 12.7 cm in diameter and 12.7 cm thick. The NaI detectors are located on the corners of the spacecraft, with six detectors oriented such that the normals to their faces are perpendicular to the z-axis of the spacecraft (the LAT is pointed in the $+z$-direction), four detectors pointed at $45^{\circ}$ from the z-axis, and two detectors pointed $20^{\circ}$ off the z-axis. Together, these 12 detectors provide nearly uniform coverage of the unocculted sky in the energy range from 8 keV to 1 MeV. The two BGO detectors are located on opposite sides of the spacecraft and view a large part of the sky in the energy range $\sim150$ keV to $\sim40$ MeV.
It should be noted that none of the GBM detectors have direct imaging capability. \begin{figure}[t] \includegraphics[width=77mm]{figure1.eps} \caption{\label{crabstep}Single Crab occultation step seen in the CTIME raw counts data of a single GBM NaI detector (NaI 2) in the 12--25 keV band with 2.048-second time bins. The Crab was $4.5^{\circ}$ from the normal to the detector. The time window is centered on the calculated occultation time for 100 keV.} \end{figure} Known sources of gamma-ray emission can be monitored with non-imaging detectors using the Earth occultation technique, as was successfully demonstrated with BATSE \citep{Ling2000,Harmon2002,Harmon2004}. When a source of gamma rays is occulted by the Earth, the count rate measured by a detector will drop, producing a step-like feature. When the source reappears from behind the Earth's limb, the count rate will increase, producing another step. The occultation has a finite transition time due to the effect of absorption in the Earth's atmosphere. The shapes of the individual occultation steps depend on energy and occultation angle. The occultation angle, $\beta$, is defined as the elevation angle of the source being occulted with respect to the plane of the {\it Fermi} orbit. The transmission through the atmosphere as a function of time is modeled as $T(t) = \exp[-\mu(E) A(h)]$, where $\mu(E)$ is the mass attenuation coefficient of gamma rays at energy $E$ in air and $A(h)$ is the air mass along the line of sight at a given altitude $h(t)$, based on the atmospheric model of \citet{atm1976}. This requires instantaneous knowledge of the spacecraft position, the direction to the source of interest as seen from the spacecraft, and a model of the Earth that includes its oblateness. {\it Fermi} was launched into an $i = 25.6^\circ$ inclination orbit at an altitude of 555 km. The orbital period is 96 minutes, and individual occultation steps last for $\sim8/\cos\beta$ seconds.
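The transmission model $T(t) = \exp[-\mu(E) A(h)]$ can be sketched numerically. The sketch below replaces the \citet{atm1976} atmosphere with a simple exponential column density and uses placeholder attenuation coefficients, so the numbers are illustrative only; it shows why a lower-energy step occurs earlier for a setting source (its 50\% transmission point is reached at a higher tangent altitude):

```python
import math

# Placeholder mass attenuation coefficients of air, cm^2/g
# (illustrative values only; mu decreases with photon energy).
MU = {25: 0.50, 100: 0.15}

def air_column(h_km, a0=500.0, scale_km=7.0):
    """Schematic line-of-sight air mass (g/cm^2) vs. tangent
    altitude, modeled as a simple exponential atmosphere."""
    return a0 * math.exp(-h_km / scale_km)

def transmission(energy_kev, h_km):
    """T = exp(-mu(E) * A(h)) for the schematic atmosphere."""
    return math.exp(-MU[energy_kev] * air_column(h_km))

def half_transmission_altitude(energy_kev):
    """Tangent altitude (km) where T = 0.5, found by bisection
    (transmission rises monotonically with altitude)."""
    lo, hi = 0.0, 200.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmission(energy_kev, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h25 = half_transmission_altitude(25)
h100 = half_transmission_altitude(100)
# A setting source crosses high tangent altitudes first, so the
# 25 keV step occurs earlier than the 100 keV step.
print(h25, h100)
```

Because the line of sight of a setting source sweeps downward through the atmosphere, the higher 50\% altitude of the softer band translates directly into the earlier setting step visible in Fig.~\ref{crabstep}.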
Figure~\ref{crabstep} shows a single step due to a Crab occultation in the count rate in the 12--25 keV band of a single GBM NaI detector observing the occultation nearly face-on. The diameter of the Earth as seen from {\it Fermi} is $\approx 135^\circ$, so roughly 30\% of the sky is occulted by the Earth at any one time. One complete orbit of the spacecraft allows over 85\% of the sky to be observed. The precession of the orbital plane allows the entire sky to be occulted every $\sim26$ days (half the precession period for the {\it Fermi} orbit), though the exposure is not uniform. The Earth occultation technique was developed for BATSE using two separate approaches, one at the NASA Marshall Space Flight Center \citep{Harmon2002} and the other at the NASA Jet Propulsion Laboratory \citep{Ling2000}. For GBM, we follow the \citet{Harmon2002} approach. The primary difference in the implementation of the occultation technique between GBM and BATSE arises from the different pointing schemes of the respective missions. {\it CGRO} was three-axis stabilized for each viewing period, which typically lasted for two weeks. This meant that a source remained at a fixed orientation with respect to the detectors through an entire viewing period. In contrast, {\it Fermi} scans the sky by pointing in a direction $35^\circ$ (August 2008--September 2009) or $50^\circ$ (October 2009--present) north of the zenith for one orbit; it then rocks to $35^\circ$ or $50^\circ$ south for the next orbit, continuing to alternate every orbit unless the spacecraft goes into a pointed mode (which occurs rarely). In addition, the spacecraft performs a roll about the z-axis as it orbits. Because the orientation of a source with respect to the GBM detectors varies as a function of time, the detector response as a function of angle must be accounted for. 
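The geometric numbers quoted above follow directly from the orbit altitude. A quick check with a spherical Earth of radius 6371 km (the oblateness and atmospheric-absorption corrections used in the real calculation are ignored here, which is why the disk comes out slightly below the quoted $\approx 135^\circ$):

```python
import math

R_EARTH = 6371.0   # km, spherical approximation
ALT = 555.0        # km, Fermi orbit altitude

# Half-angle of the Earth's disk as seen from the spacecraft
half_angle = math.asin(R_EARTH / (R_EARTH + ALT))
diameter_deg = 2.0 * math.degrees(half_angle)   # ~134 degrees

# Fraction of the 4*pi sky blocked by the Earth (spherical cap),
# consistent with the ~30% quoted in the text
occulted_fraction = (1.0 - math.cos(half_angle)) / 2.0

# Occultation-step duration scales as 1/cos(beta) with the
# occultation angle beta; ~8 s for a source in the orbital plane
def step_duration(beta_deg, t0=8.0):
    return t0 / math.cos(math.radians(beta_deg))

print(diameter_deg, occulted_fraction, step_duration(60.0))
```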
A detailed instrument modeling and measurement program has been used to develop the GBM instrument response as a function of direction \citep{Hoover2008,Bissaldi2009}, which is incorporated into the occultation analysis. It should be noted that the GBM occultation sensitivity exceeds that of BATSE at energies below $\sim25$ keV and above $\sim1.5$ MeV, since GBM has only a Be window on the NaI detectors instead of the plastic scintillator, aluminum honeycomb, and aluminum window that covered the front of the BATSE scintillators \citep{Case2007}. Due to its larger area, BATSE was more sensitive than GBM between 25 keV and 1.5 MeV. GBM has two continuous data types: CTIME data with nominal 0.256-second time resolution and 8-channel spectral resolution and CSPEC data with nominal 4.096-second time resolution and 128-channel spectral resolution. The results presented in this paper use the low-spectral resolution CTIME data. Detailed spectral analyses using the higher resolution CSPEC data are reserved for future work. The data are selected to remove gamma-ray bursts, solar flares, electron precipitation events, cosmic ray events, terrestrial gamma-ray flashes, etc. The occultation technique requires an input catalog of predetermined source locations, and currently we are monitoring 82 sources. For each day, the occultation times for each source are calculated using the known spacecraft positions. The time of the occultation step is taken to be the time for which the transmission of a 100 keV gamma ray through the atmospheric column is 50\%. The time at which the atmospheric transmission reaches 50\% is energy dependent, e.g. for energies less than 100 keV, a setting step will occur earlier (see Fig.~\ref{crabstep}). This energy dependence is accounted for in the calculation of the atmospheric transmission function, $T(t)$. For each occultation step, a 4-minute window is defined that is centered on the 100 keV occultation time. 
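The per-window fit described below combines a quadratic background with a scaled atmospheric-transmission step for each source, which for fixed step shapes is a linear least-squares problem. A minimal single-detector, single-source sketch on synthetic data (all rates and shapes are invented for illustration, and the logistic step stands in for the real energy-dependent transmission):

```python
import numpy as np

rng = np.random.default_rng(0)

# Time within a 4-minute window, seconds, 2.048-s CTIME-like bins
t = np.arange(-120.0, 120.0, 2.048)

# Schematic atmospheric transmission for a setting source:
# smooth 1 -> 0 step around the occultation time t = 0.
transmission = 1.0 / (1.0 + np.exp(t / 3.0))

# Synthetic counts: quadratic background + source step + noise
true_rate = 50.0
background = 300.0 + 0.05 * t + 1e-3 * t**2
counts = rng.poisson(background + true_rate * transmission)

# Design matrix [1, t, t^2, T(t)]: the coefficient of the
# transmission column is the fitted source count rate.
A = np.column_stack([np.ones_like(t), t, t**2, transmission])
coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
fitted_rate = coeffs[3]
print(fitted_rate)
```

In the actual analysis each interfering source contributes its own transmission column, the model count rates fold in the time-dependent detector response and an assumed spectrum, and the scaling factors are fit jointly across all detectors viewing the source.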
For each energy band, the count rates in the window are fit separately for each detector viewing the source of interest. In each of these detectors, the count rates are fitted with a quadratic background plus source models for the source of interest and each interfering source that appears in the window. The source models consist of $T(t)$ and a time-dependent model count rate, derived from the time-dependent detector response convolved with an assumed source spectrum. Each source model is multiplied by a scaling factor, and the source flux is then computed by a joint fit to the scaling factors across all detectors in the fit. The best-fit scaling factor is then multiplied by the assumed source flux model integrated over the energy band to obtain the photon flux. Up to 31 occultation steps are possible for a given source in a day, and these steps are summed to get a single daily average flux. This technique can be used with either the NaI or BGO detectors, though the analysis presented here uses only the NaI detectors. A more complete description of the GBM implementation of the occultation technique will be given in \citet{Wilson2010}. \section{Results \label{sec:res}} \begin{deluxetable}{lccccccccc} \tabletypesize{\footnotesize} \tablecaption{Fluxes and Significances in GBM Broad High Energy Bands\label{Flux_table}} \tablehead{ \colhead{} & \multicolumn{3}{c}{50--100 keV} & \multicolumn{3}{c}{100--300 keV} & \multicolumn{3}{c}{300--500 keV} \\ & Flux & Error & Signif. & Flux & Error & Signif. & Flux & Error & Signif. 
\\ & (mCrab) & (mCrab) & ($\sigma$) & (mCrab) & (mCrab) & ($\sigma$) & (mCrab) & (mCrab) & ($\sigma$) } \startdata Cyg X-1 & 1151.0 & 3.7 & 312 & 1130.7 & 6.9 & 163 & 529.0 & 49.5 & 10.7 \\ Crab & 1000.0 & 3.3 & 307 & 1000.0 & 6.3 & 158 & 1000.0 & 48.0 & 20.9 \\ XTE J1752-223\tablenotemark{a} & 730.8 & 14.2 & 51 & 563.1 & 26.7 & 21 & 226.1 & 204.4 & 1.1 \\ Cen A & 72.4 & 3.6 & 20 & 104.2 & 6.7 & 16 & $<96.2$\tablenotemark{c} & \ldots & \ldots \\ SWIFT J1753.5-0127 & 121.0 & 4.4 & 28 & 126.8 & 8.2 & 15 & 104.5 & 62.0 & 1.7 \\ 1E 1740-29 & 116.3 & 4.7 & 25 & 92.3 & 8.8 & 11 & 126.8 & 65.0 & 2.0 \\ GRS 1915+105 & 128.1 & 3.6 & 35 & 54.9 & 6.8 & 8.0 & $<101.2$\tablenotemark{c} & \ldots & \ldots \\ GX 339-4\tablenotemark{b} & 399.4 & 18.3 & 22 & 249.5 & 33.8 & 7.4 & $<507.4$\tablenotemark{c} & \ldots & \ldots \\ \enddata \tablenotetext{a}{Fluxes are given for MJD 55129--55218, when XTE J1752-223 was flaring.} \tablenotetext{b}{Fluxes are given for MJD 55244--55289, when GX 339-4 was flaring.} \tablenotetext{c}{$2\sigma$ upper limit.} \end{deluxetable} In \citet{Wilson2009a}, the measured GBM 12--50 keV light curves are compared to the {\it Swift}/BAT 15--50 keV light curves for several sources over the same time intervals, and it is seen that the fluxes measured by the two instruments compare well. At energies above the $\sim195$ keV upper energy limit of the {\it Swift} 22-month catalog \citep{Tueller2010}, however, the GBM observations provide the only wide-field monitoring available for the low energy gamma-ray sky. Of the catalog sources being monitored with GBM, six persistent sources have been detected above 100 keV with a statistical significance of at least $7\sigma$ after two years of observations, as well as two transient sources.
Table \ref{Flux_table} gives the fluxes averaged over all 730 days from 2008 August 12 (MJD 54690, the beginning of science operations) to 2010 August 11 (MJD 55419) for the persistent sources, and over all of the days of the flares for the transient sources. Also given are the significances for each energy band. The errors are statistical only. The sources are sorted by their detection significance in the 100--300 keV band. \subsection{Persistent Sources} The six persistent sources Crab, Cyg X-1, Cen A, GRS 1915+105, 1E 1740-29, and Swift J1753.5-0127 are detected by GBM at energies above 100 keV. In Figures~\ref{Crab}--\ref{Swift} we show light curves for these sources generated from the GBM data in several broad energy bands with five-day resolution. These persistent sources demonstrate the capabilities of the GBM Earth occultation monitoring. \subsubsection{Crab} The Crab emission in the hard X-ray/low energy gamma-ray regime contains a combination of pulsar and pulsar wind nebula contributions. Figure~\ref{Crab} shows the light curves measured by GBM in four broad energy bands from 12 keV up to 500 keV. The spectrum in this regime has been shown by analysis of BATSE occultation data \citep{Much1996,Ling2003b} and data from SPI on board {\it INTEGRAL} \citep{Jourdain2009} to agree with the spectrum measured with other instruments at lower X-ray energies, and then to steepen near 100 keV. Results of the BATSE analysis can be described by a broken power law, while results of the SPI analysis suggest a smoothly steepening spectrum. The BATSE analysis further noted a distinct hardening of the spectrum near 650 keV, although this has not been confirmed by {\it INTEGRAL} or the COMPTEL instrument on {\it CGRO} \citep{Kuiper2001}. \begin{figure}[t] \includegraphics[width=78mm]{figure2.eps}% \caption{\label{Crab}GBM light curve for the Crab. The horizontal scale is in modified Julian days over the 730 day GBM exposure period, and has been binned 5 days per data point. 
The dashed horizontal lines show the average flux in each of four energy bands increasing from top to bottom. In the bottom plot, the solid line marks the zero flux level. Note that the apparent ``flare'' near MJD 55180 is due to a giant outburst in the nearby accreting pulsar A0535+262.} \end{figure} The {\it INTEGRAL} spectral measurements are consistent with either a smoothly steepening spectrum or a spectrum of the form $F = 6.6 \times 10^{-4} (E/100 {\rm ~keV})^{-\alpha}$ photons cm$^{-2}$ s$^{-1}$ keV$^{-1}$, where $\alpha = 2.07 \pm 0.01$ for $E < 100$ keV and $\alpha = 2.23 \pm 0.02$ for $E > 100$ keV \citep{Jourdain2009}. This corresponds to a (50--100 keV)/(12--50 keV) flux ratio of $R_{50} = 0.145$ for {\it INTEGRAL}, compared to $0.142 \pm 0.001$ for GBM. The (100--300 keV)/(12--50 keV) flux ratio $R_{100} = 0.041$ corresponding to the {\it INTEGRAL} spectrum compares to the GBM value of $0.076 \pm 0.001$, and the (300--500 keV)/(12--50 keV) flux ratio $R_{300} = 0.007$ for {\it INTEGRAL} corresponds to the GBM value of $0.013 \pm 0.001$. The GBM measurements suggest a somewhat flatter spectrum than that derived from {\it INTEGRAL}, particularly above 100 keV, and are best described by a spectrum with $\alpha \sim 2.09-2.12$. These results demonstrate that GBM is able to see significant emission above 300 keV at a level consistent with the canonical hard spectrum of the Crab. Future analysis with the finer spectral resolution of the GBM CSPEC data will allow a better determination of the break energy and spectral index above the break. \subsubsection{Cyg X-1} \begin{figure}[t] \includegraphics[width=78mm]{figure3.eps} \caption{\label{CygX1} The GBM light curve for Cyg X-1 over 730 days. The light curve has been binned 5 days per data point.
The fluxes are in Crab units, and the dashed and solid lines mark the average flux and zero flux levels, respectively.} \end{figure} Cygnus X-1 is a high-mass X-ray binary and was one of the first systems determined to contain a black hole \citep{Bolton1972,Paczynski1974}. The X-ray emission is bimodal, with the $>10$ keV emission anticorrelated with the $<10$ keV emission \citep{Dolan1977}. Significant emission has been observed above 100 keV, including a power law tail extending out to greater than 1 MeV \citep{McConnell2000,Ling2005b}. Based on BATSE occultation analysis, \citet{Ling2005b} have shown that in the high gamma-ray intensity (hard) state, the spectrum consists of a Comptonized shape below 200--300 keV with a soft ($\Gamma > 3$) power-law tail extending to at least 1 MeV. In the low-intensity (soft) state, however, the spectrum takes on a different shape: in this case, the entire spectrum from 30 keV to 1 MeV is characterized by a single power law with a harder index $\Gamma \sim 2 - 2.7$. Similar behavior has been reported for two low-mass X-ray binaries (LMXBs) that also contain black holes, the transient gamma-ray sources GRO J0422+32 \citep{Ling2003a} and GRO J1719-24 \citep{Ling2005a}. Figure \ref{CygX1} shows the GBM light curves. The light curves show significant variability, with emission above 300 keV up until about MJD 55355. Starting at about MJD 55355, the 100--300 keV band emission began to decrease \citep{WilsonCase2010}, dropping from an average level of about 1200 mCrab down to nearly undetectable levels on MJD 55405--55406. On MJD 55374 MAXI detected a rapid rise in the soft 2--4 keV band \citep{Negoro2010b}, which combined with the decrease in the low energy gamma-ray flux indicated a transition to a thermally-dominated soft state.
As of MJD 55419 the one-day average GBM light curves show that the 12--50 keV flux has begun to rise, while the 100--300 keV flux remains at a low level of $\approx 150$ mCrab, consistent with the $\gamma_0$ state of \citet{Ling1997}. We will continue to monitor Cyg X-1 during the thermally-dominated state and follow its transition back to the low/hard state. The GBM light curves (Fig.~\ref{CygX1}) reveal significant emission above 300 keV, consistent with the power law tail observed when Cyg X-1 is in its low/hard state. The 50--100 keV flux level observed by GBM over the full observation period (Table~\ref{Flux_table}) is 1.15 Crab, consistent with the BATSE high gamma-ray state ($\gamma_2$ of \citet{Ling1987,Ling1997}). The observed GBM flux ratios are $R_{50} = 0.226 \pm 0.001$, $R_{100} = 0.119 \pm 0.001$, and $R_{300} = 0.010 \pm 0.001$, inconsistent with a single power law. A single power law with $\Gamma = 2$ would yield flux ratios 0.158, 0.105, and 0.021. Instead, the GBM ratios suggest a spectrum that appears significantly flatter at low energies and steeper at high energies, consistent with the behavior reported earlier from the BATSE analysis. \subsubsection{Cen A} \begin{figure}[t] \includegraphics[width=78mm]{figure4.eps}% \caption{\label{CenA} The GBM light curve for Cen A. The light curve has been binned 5 days per data point. The fluxes are in Crab units, and the dashed and solid lines mark the average flux and zero flux levels, respectively.} \end{figure} The relatively nearby radio galaxy Cen A is a Seyfert 2 galaxy that is the brightest AGN in hard X-rays/low energy gamma rays. It has powerful jets aligned at approximately $70^\circ$ from the line of sight and is seen to vary on time scales of tens of days to years. It has been observed at hard X-ray energies by {\it OSSE} \citep{Kinzer1995}, {\it INTEGRAL} and {\it RXTE} \citep{Rothschild2006}, and at energies $>1$ MeV by COMPTEL \citep{Steinle1998}. 
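The single-power-law flux ratios used for comparison in this section (the $\Gamma = 2$ values 0.158, 0.105, and 0.021 quoted above for Cyg X-1, and the $\Gamma = 1.9$ values quoted below for Cen A) follow from simple band integrals of a photon spectrum $N(E) \propto E^{-\Gamma}$, with no detector response folding; a quick check:

```python
def band_flux(gamma, e_lo, e_hi):
    """Photon flux of N(E) = E**(-gamma) integrated over a band
    in keV (arbitrary normalization, which cancels in ratios)."""
    p = 1.0 - gamma
    return (e_hi**p - e_lo**p) / p

def flux_ratios(gamma):
    """(50-100, 100-300, 300-500 keV) bands relative to 12-50 keV."""
    ref = band_flux(gamma, 12.0, 50.0)
    bands = [(50.0, 100.0), (100.0, 300.0), (300.0, 500.0)]
    return [band_flux(gamma, lo, hi) / ref for lo, hi in bands]

print([round(r, 3) for r in flux_ratios(2.0)])  # Cyg X-1 comparison
print([round(r, 3) for r in flux_ratios(1.9)])  # Cen A comparison
```

The same helper reproduces the $\Gamma \sim 3$ ratios used in the GRS 1915+105 discussion; steeper spectra concentrate the photon flux in the softest band, driving all three ratios down.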
The observations below 150 keV are consistent with a hard spectrum with a power law index $\Gamma \sim 1.8-1.9$. The combined OSSE and COMPTEL data are consistent with a steepening of the spectrum at 150 keV to $\Gamma \sim 2.3$, with the spectrum then extending unbroken to beyond 10 MeV. The GBM light curve for Cen A is shown in Fig.~\ref{CenA}. Because Cen A is relatively far below the equatorial plane, with a declination $\delta = -43^{\circ}$, its beta angle (which ranges between $\delta \pm i$) can be larger than the half-angle size of the Earth as seen from {\it Fermi} ($\beta_{\rm{earth}} \approx \pm 67^{\circ}$). When this happens, Cen A is not occulted. This causes periodic gaps in the light curve, with the period of the gaps equal to the precession period of the orbit. The fluxes as measured by GBM and given in Table \ref{Flux_table} are consistent with the hard spectrum measured by previous instruments. The flux ratios measured by GBM are $R_{50} = 0.168 \pm 0.008$ and $R_{100} = 0.134 \pm 0.008$, respectively. An unbroken $\Gamma = 1.9$ power law up to 300 keV would result in flux ratios of 0.178 and 0.129, respectively. A $\Gamma = 1.9$ power law extending up to 500 keV would result in a (300--500 keV)/(12--50 keV) flux ratio of 0.028, while GBM measures essentially no flux above 300 keV, $R_{300} = 0.006 \pm 0.010$, consistent with a steepening or cutoff somewhere near 300 keV. The GBM results appear to be consistent with the steepening seen in the OSSE-COMPTEL spectra. \subsubsection{GRS 1915+105} \begin{figure}[t] \includegraphics[width=78mm]{figure5.eps}% \caption{\label{GRS1915}GRS 1915+105 light curve. The light curve has been binned 5 days per data point. The fluxes are in Crab units, and the dashed and solid lines mark the average flux and zero flux levels, respectively.} \end{figure} The galactic microquasar GRS 1915+105 is a LMXB with the compact object being a massive black hole \citep{Greiner2001}.
It was highly variable over the 9-year observation period of the {\it CGRO} mission \citep{Paciesas1995,Case2005} with significant emission observed out to $\sim 1$ MeV \citep{Zdziarski2001,Case2005}. Combining the BATSE data from multiple outbursts yields a spectrum best fit by a broken power law, with spectral index $\Gamma \sim 2.7$ below 300 keV flattening to $\Gamma \sim 1.5$ above 300 keV. The spectrum derived from BATSE data shows no evidence of the thermal spectrum seen in Cyg X-1. By contrast, observations with {\it INTEGRAL}/SPI in the 20--500 keV energy range \citep{Droulans2009} showed evidence for a time-variable thermal Comptonization component below $\sim 100$ keV along with a relatively steady, hard power law at higher energies, indicating that different emission regions are likely responsible for the soft and hard emission. The GBM daily fluxes integrated over 730 days (Table \ref{Flux_table}) show significant emission above 100 keV, consistent with the relatively hard power law spectrum seen in BATSE and SPI data. The GBM light curve (Fig.~\ref{GRS1915}) shows distinct variability below 100 keV, with statistics above 100 keV insufficient to determine the level of variability of the emission. The flux ratios observed by GBM ($R_{50} = 0.044 \pm 0.001$ and $R_{100} = 0.010 \pm 0.001$) are close to the flux ratios expected from a power law spectrum with $\Gamma \sim 3$ ($R_{50} = 0.046$ and $R_{100} = 0.014$). \subsubsection{1E 1740-29} \begin{figure}[t] \includegraphics[width=78mm]{figure6.eps}% \caption{\label{1e1740}The GBM light curve for 1E 1740-29. The light curve has been binned 5 days per data point. The fluxes are in Crab units, and the dashed and solid lines mark the average flux and zero flux levels, respectively.} \end{figure} The black hole candidate 1E 1740-29 (also known as 1E 1740.7-2942) is a LMXB very near the Galactic Center. 
With a large double-ended radio jet, it was the first source identified as a microquasar, and spends most of its time in the low/hard state \citep{Mirabel1992}. {\it INTEGRAL} observations indicate the presence of significant emission up to at least 500 keV with a steepening of the spectrum near 140 keV \citep{Bouchet2009}. The spectrum can be modeled either with a thermalized Compton spectrum and a high energy power law tail, or with two superimposed thermal Compton components. Evidence for a broad 511 keV line observed by SIGMA \citep{Bouchet1991, Sunyaev1991} suggests that 1E 1740-29 may be a source of positrons. The GBM results (Fig.~\ref{1e1740}) are consistent with the high energy component observed when 1E 1740-29 is in the low/hard state. Below 100 keV and above 300 keV, GBM sees approximately $20-50\%$ higher flux than {\it INTEGRAL}, while in the 100--300 keV band, GBM observes approximately 90\% of the level reported by {\it INTEGRAL}. \subsubsection{SWIFT J1753.5-0127} \begin{figure}[t] \includegraphics[width=78mm]{figure7.eps}% \caption{\label{Swift} The GBM light curve for SWIFT J1753.5-0127. The light curve has been binned 5 days per data point. The fluxes are in Crab units, with the average flux (dashed lines) and zero flux (solid lines) levels shown.} \end{figure} The X-ray nova SWIFT J1753.5-0127 (Fig.~\ref{Swift}) is a LMXB with the compact object likely being a black hole \citep{Miller2006,Bel2007}. {\it Swift} discovered this source when it observed a large flare in 2005 July \citep{Palmer2005}. The source did not return to quiescence but settled into a low intensity hard state \citep{Miller2006}. {\it INTEGRAL} observations \citep{Bel2007} showing emission up to $\sim 600$ keV were compatible with thermal Comptonization modified by reflection, with evidence for separate contributions from a jet, disk, and corona. BATSE occultation measurements from 1991--2000 showed no significant emission from this source above 25 keV \citep{Case2010}. 
The GBM results are consistent with this source still remaining in a hard state, with significant emission in excess of 100 mCrab above 100 keV. The light curves show that the emission from the higher energy bands declined beginning about MJD 55200, and that it increased again beginning about MJD 55325 and is currently at or just below its two-year average. The spectrum is inconsistent with a single power law, and future work using the GBM CSPEC data will allow a more detailed analysis of the spectrum. We will continue to monitor this source while it is in the low/hard state. \subsection{Transient Sources} \subsubsection{XTE J1752-223} \begin{figure}[t] \includegraphics[width=78mm]{figure8.eps}% \caption{\label{XTEJ1752} The GBM light curve for XTE J1752-223. The light curve has been binned 5 days per data point. The vertical dashed lines at MJD 55129 and MJD 55218 mark the flaring region used to derive the average fluxes in Table~\ref{Flux_table}. The fluxes are in Crab units, and the horizontal solid lines mark the zero flux levels.} \end{figure} The new transient black hole candidate XTE J1752-223, discovered by {\it RXTE} \citep{Markwardt2009b}, was observed by GBM to rise from undetectable on 2009 October 24 (MJD 55128) to $511\pm 50$ mCrab (12--25 keV), $570\pm 70$ mCrab (25--50 keV), $970\pm 100$ mCrab (50--100 keV), and $330\pm 100$ mCrab (100--300 keV) on 2009 November 2 \citep{Wilson2009a,Wilson2009b}. The light curve is variable, especially in the 12--25 keV band, where the flux initially rose to about 240 mCrab (October 25--28), suddenly dropped to non-detectable on October 29--30, then rose again during the period October 31 to November 2 (MJD 55135--55137). The flux remained relatively constant until November 25 (MJD 55160), when it began to rise again, peaking in the high energies on 2009 December 20 (MJD 55185). After an initial slow decline, the high energy flux rapidly declined back to the pre-flare levels.
The light curve for the entire mission to date, with 5-day resolution, is shown in Fig.~\ref{XTEJ1752}. The fluxes for XTE J1752-223 in Table~\ref{Flux_table} are integrated over the days when XTE J1752-223 was observed to be in a high gamma-ray intensity state, MJD 55129--55218. {\it RXTE} measurements indicate a black hole low/hard spectrum with a power law component ($\Gamma \sim 1.4$) superimposed on a weak black body (kT $\sim 0.8$ keV). A 6.4 keV iron line is also seen, with combined spectral and timing properties similar to those observed in Cyg X-1 and GX 339-4 \citep{Shaposhnikov2009}. Results from {\it RXTE}/HEXTE analysis have shown evidence for emission up to 200 keV \citep{Munoz-Darias2010b}, best fit with a broken power law with a break energy near 130 keV, again markedly similar to Cyg X-1. The flux ratios measured with GBM ($R_{50} = 0.218 \pm 0.005$, $R_{100} = 0.090 \pm 0.005$, $R_{300} = 0.006 \pm 0.006$) are similar to those observed for Cyg X-1 and are consistent with a $\Gamma \sim 1.7$ spectrum steepening to at least $\Gamma \sim 1.9$ above 100 keV. \subsubsection{GX 339-4} \begin{figure}[t] \includegraphics[width=78mm]{figure9.eps}% \caption{\label{GX339} The GBM light curve for GX 339-4. The light curve has been binned 5 days per data point. The vertical dashed lines at MJD 55244 and MJD 55289 mark the flaring region used to derive the average fluxes in Table~\ref{Flux_table}. The fluxes are in Crab units, and the horizontal solid lines mark the zero flux levels.} \end{figure} The highly variable LMXB and black hole candidate GX 339-4 \citep{Samimi1979,Doxsey1979} is characterized by rapid time variability and low/hard X-ray states similar to those of Cyg X-1 \citep{Harmon1994,Cowley2002}.
The results of analysis of both BATSE \citep{Case2008} and {\it INTEGRAL} \citep{Garcia2009} data have indicated the presence of high energy emission above 200 keV during previous outbursts, with the {\it INTEGRAL} spectrum fitted by a thermal Comptonization component together with synchrotron or self-synchrotron emission possibly originating at the base of a jet. GX 339-4 was observed by MAXI to begin a large flare event starting on 2010 January 3 \citep{Yamaoka2010}. The flux observed by GBM began to increase in early January 2010 and continued to increase up to a level of $\sim400$ mCrab (12--25 keV), $\sim650$ mCrab (25--50 keV), $\sim 800$ mCrab (50--100 keV), and $\sim 550$ mCrab (100--300 keV) by early April 2010, after which it began to rapidly decrease. It returned to quiescence in the higher energy bands by mid-April and in the 12--50 keV band by the end of April. The fluxes for GX 339-4 in Table~\ref{Flux_table} are integrated over the days when GX 339-4 was observed to be in a high gamma-ray state, MJD 55244--55289. Similar to Cyg X-1 and XTE J1752-223, the flux ratios measured by GBM ($R_{50} = 0.223 \pm 0.012$ and $R_{100} = 0.075 \pm 0.011$, with no measurable intensity above 300 keV) appear consistent with a $\Gamma \sim 1.7$ power law steepening above 100 keV to at least $\Gamma \sim 1.9$. Note that there was a weaker double-peaked flare starting around MJD 54888 \citep{Markwardt2009a} and lasting until around MJD 55000 (see Fig.~\ref{GX339}). While the light curve in the 100--300 keV band is suggestive of positive emission in this band, it is only marginally significant ($\sim5\sigma$) with an average flux of $162 \pm 31$ mCrab.
\section{Conclusions and Future Prospects \label{sec:dis}} Using the Earth occultation technique, the GBM instrument on {\it Fermi} has been monitoring the gamma-ray sky in the $\sim8-1000$ keV energy range, providing daily measurements for a catalog of 82 sources, including 72 x-ray binaries, five AGN, two magnetars, two cataclysmic variables, the Crab, and the Sun. After the first two years of the {\it Fermi} mission, the Earth occultation technique applied to the GBM CTIME data has been used to detect six persistent sources and two transients at energies above 100 keV with a significance greater than $7\sigma$, demonstrating the capability of GBM to observe and monitor such sources. Two of the sources, the Crab and Cyg X-1, were detected with high significance above 300 keV. Light curves of all eight sources were presented in four broad energy bands from 12--500 keV. The outbursts from the transient sources XTE J1752-223 and GX 339-4 were clearly visible in the 12--50 keV, 50--100 keV, and 100--300 keV broad bands. XTE J1752-223 was a previously unknown source, and the GBM light curves in the hard x-ray/low energy gamma-ray energy bands are consistent with the initial classification of this object as a black hole candidate in a bright low/hard state. The steep decline of the hard x-ray emission starting around MJD 55215 corresponded to an increase in the soft x-ray flux \citep{Homan2010,Negoro2010a}, indicating the transition from the low/hard state to a soft state. When XTE J1752-223 returned to the low/hard state around MJD 55282 \citep{Munoz-Darias2010a}, the hard x-ray emission was below the sensitivity limit of GBM. The hard emission seen from GX 339-4 is consistent with the bright hard states seen in previous outbursts from this object. Monitoring of Cyg X-1 at the onset of a recent state transition showed a steady decrease in the 100--300 keV flux that began about 19 days before the soft x-ray flux began to rise. 
As of MJD 55419, Cyg X-1 remains in a soft state, and we will continue to monitor Cyg X-1 in anticipation of the transition back to the canonical hard state. While the GBM CTIME data used here does not have enough spectral resolution to produce detailed spectra, the flux ratios between the 12--50 keV broad band and the 50--100, 100--300, and 300--500 keV broad bands for the Crab are generally consistent with those inferred from the measured {\it INTEGRAL} spectrum, with the GBM results suggesting a slightly harder spectrum. The flux ratios observed for the transient sources XTE J1752-223 and GX 339-4 are similar to Cyg X-1 when it is in its canonical low/hard state, again consistent with these transients being observed in bright low/hard states. Future work will use the GBM CSPEC data, with its finer energy binning, to examine the detailed spectra for all of these sources, with particular emphasis on the low energy gamma-ray energy range. Also, the BGO detectors, with their greater sensitivity at higher energies, will be used to obtain additional measurements at energies above 150 keV. Several of the detected sources have spectral breaks or cutoffs in the 100--300 keV range, and we will look for these features and monitor their evolution over time. We will continue to add to the list of sources being monitored as appropriate. We have detected Cen A with GBM in all energy bands up to 300 keV. BATSE detected several BL Lac objects known to exhibit flaring activity on the time scale of days \citep{Connaughton1999}. The flaring behavior of blazars is sometimes accompanied by a shift upwards in the peak of the synchrotron spectrum (e.g. above 100 keV in the 1997 Mrk 501 flare \citep{Petry2000}), making detection of these sources possible with GBM. While none have been detected so far, we will continue to expand our monitoring program to include the blazars with flares detected in other wavelengths. We plan to expand our monitoring program to include other AGN as well.
We continue to fine tune the algorithms and work to reduce the systematic errors in the flux determination. In the case of the BATSE Earth occultation analysis, there was clear evidence for the presence of sources in the data which were not in the occultation input catalog and which caused uncertainty in the assignment of fluxes to the sources that were in the input catalog. We see similar evidence for these uncataloged sources with GBM, especially below about 50 keV. To address this, our approach is two-fold: (1) We compare our GBM measurements with overlapping energy bands from other operating missions and regularly update our catalog as new transient outbursts are observed with GBM or other instruments. Light curves are regenerated if needed after catalog updates. (2) We are developing an imaging technique for GBM to produce an all-sky map of hard X-ray/soft gamma-ray sources. This map will then be used to identify sources not currently in the GBM occultation catalog, to expand the catalog, and to reduce the uncertainties in the measured fluxes. \acknowledgments This work is supported by the NASA Fermi Guest Investigator program. At LSU, additional support is provided by NASA/Louisiana Board of Regents Cooperative Agreement NNX07AT62A. A.C.A. wishes to thank the Spanish Ministerio de Ciencia e Innovaci\'on for support through the 2008 postdoctoral program MICINN/Fulbright under grant 2008-0116.
\section{The group action case}\label{sec:actioncase} In this section we consider the action of a compact Lie group $G$ on a complete bornological algebra $A$ and then specialize to the case where $A$ is the algebra of smooth functions on a smooth $G$-manifold $M$. The general assumption hereby is always that the action $\alpha: G\times A \to A$, $(g,a)\mapsto g\cdot a$ is smooth in the sense of \cite{KriMicCSGA}, that is, each smooth curve in $G\times A$ is mapped by $\alpha$ to a smooth curve in $A$. This is automatically guaranteed when $G$ acts by diffeomorphisms on the manifold $M$ and $A=\mathcal{C}^\infty (M)$. Under the assumptions made the associated \emph{smooth crossed product} $G\ltimes A$ is given by $\mathcal{C}^\infty(G,A)$ equipped with the product \begin{equation} \label{eq:convolution-product-left-action-algebra} (f_1\ast f_2)(g):=\int_Gf_1(h) \, (h\cdot f_2(h^{-1}g)) \, dh \ ,\quad f_1,f_2\in \mathcal{C}^\infty(G,A), \: g \in G \ . \end{equation} \input{brylinskiqism} \subsection{The $G$-manifold case} \label{sec:group-manifold-case} Let $M$ be a manifold endowed with a smooth left $G$-action. Denote by $X = M/G$ the space of $G$-orbits in $M$ and by $\pi : M \to X$ the canonical projection. We consider the action groupoid $\mathsf{G} = G\ltimes M\rightrightarrows M$ and the corresponding convolution sheaf $\mathcal{A} = \mathcal{A}_{G\ltimes M}$ over $X$. It is straightforward to check that in the case of $A=\mathcal{C}^\infty (M)$ the product defined by Eq.~\eqref{eq:convolution-product-left-action-algebra} coincides with the convolution product on $\mathcal{A}(M/G) \cong \mathcal{C}^\infty ( G\ltimes M) \cong \mathcal{C}^\infty ( G,A) $ given by Eq.~\eqref{eq:convolution-product}. Hence $\mathcal{A}(M/G)$ coincides with the smooth crossed product $G\ltimes A$.
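For a finite group the Haar integral in the convolution product becomes a sum over $G$, which makes the crossed product easy to realize numerically. The following sketch is our illustration, not part of the text: we take $G = {\mathbb Z}/3$ acting on $M = {\mathbb Z}/3$ by translation, store elements of $\mathcal{C}(G,A)$ as $3\times 3$ arrays $f[g][m] = f(g)(m)$, and check associativity of the product.

```python
import numpy as np

# Crossed product for the finite group G = Z/3 acting on M = Z/3 by
# translation, A = functions on M (counting measure replaces dh).
n = 3

def act(h, a):
    """(h . a)(m) = a(h^{-1} m) = a(m - h mod n)."""
    return np.roll(a, h)  # np.roll(a, h)[m] == a[(m - h) % n]

def convolve(f1, f2):
    """(f1 * f2)(g) = sum_h f1(h) . (h . f2(h^{-1} g)), pointwise in A."""
    out = np.zeros((n, n))
    for g in range(n):
        for h in range(n):
            out[g] += f1[h] * act(h, f2[(g - h) % n])
    return out

rng = np.random.default_rng(0)
f1, f2, f3 = (rng.standard_normal((n, n)) for _ in range(3))
# Associativity holds because G acts by algebra automorphisms of A ...
assert np.allclose(convolve(convolve(f1, f2), f3),
                   convolve(f1, convolve(f2, f3)))
# ... while the crossed product is noncommutative:
assert not np.allclose(convolve(f1, f2), convolve(f2, f1))
```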
According to Proposition \ref{prop:qismequivariantcplx} and Remark \ref{rem:qismequivariantcplx}, we then have for each $G$-invariant open $V \subset M$ a quasi-isomorphism between Hochschild chain complexes \[ \widehat{\phantom{F}}|_{V/G} : C_\bullet \big( \mathcal{A} ( V/G ) \big) \to C_\bullet^G \big(\mathcal{C}^\infty (V)\big) \cong C_\bullet \big(\mathcal{C}^\infty (V), \mathcal{A} ( V/G )\big) \ . \] To compute the Hochschild homology $HH_\bullet \big( \mathcal{A} ( V/G ) \big)$ it therefore suffices to determine the homology of the complex $C_\bullet \big(\mathcal{C}^\infty (V), \mathcal{A} ( V/G )\big)$ which we will consider in the following. Recall that $\mathcal{A} (V/G)$ is isomorphic as a bornological vector space to the completed tensor product $\mathcal{C}^\infty (G) \hat{\otimes} \mathcal{C}^\infty (V)$ and that $\mathcal{A} (V/G)$ carries the (twisted) $\mathcal{C}^\infty (V)$-bimodule structure \[ \mathcal{C}^\infty (V) \hat{\otimes} \mathcal{A}(V/G) \hat{\otimes} \mathcal{C}^\infty (V) \to \mathcal{A}(V/G), \: f \otimes a \otimes f' \mapsto \Big( G \times V \ni (g,v) \mapsto f(g v) \, a(g,v) f'(v) \in {\mathbb R} \Big). \] Since the bimodule structure is compatible with restrictions $r^U_V$ for $G$-invariant open subsets $V \subset U \subset M$ one obtains a complex of presheaves which assigns to every open $V/G$ with $V \subset M$ open and $G$-invariant the complex $C_\bullet (\mathcal{C}^\infty (V),\mathcal{A} (V/G))$. Sheafification gives rise to a sheaf complex which we denote by $\hat{\mathscr{C}}_\bullet \big( \mathcal{C}^\infty_M , \mathcal{A} \big)$. 
Since $C_\bullet \big( \mathcal{C}^\infty (V) , \mathcal{A} (V/G)\big)\cong\mathcal{A} (V/G) \hat{\otimes} C_\bullet (\mathcal{C}^\infty (V))$ for all $G$-invariant open $V\subset M$, this sheaf complex can be written as \[ \hat{\mathscr{C}}_\bullet \big( \mathcal{C}^\infty_M , \mathcal{A}\big) = \mathcal{A} \hat{\otimes} \pi_* \hat{\mathscr{C}}_\bullet \big( \mathcal{C}^\infty_M \big) \ , \] where, as before, $\hat{\mathscr{C}}_\bullet \big( \mathcal{C}^\infty_M \big)$ denotes the Hochschild sheaf complex of $\mathcal{C}^\infty_M$. We now have the following result. \begin{proposition} Assume to be given a $G$-manifold $M$, let $\mathcal{A}$ be the convolution sheaf of the associated action groupoid $G\ltimes M \rightrightarrows M$ on the orbit space $X =M/G$, and put $A = \mathcal{A}(X)$. Then the chain map \[ \varrho : C_\bullet \big(\mathcal{C}^\infty (M), A \big) \rightarrow \Gamma \big( X, \hat{\mathscr{C}}_\bullet ( \mathcal{C}^\infty_M , \mathcal{A})\big), \enspace c \mapsto ([c]_{\scriptstyle\mathboondoxcal{O}})_{{\scriptstyle\mathboondoxcal{O}}\in X} \] which associates to every $k$-chain $c\in C_k\big( \mathcal{C}^\infty (M), A \big) $ the section $([c]_{\scriptstyle\mathboondoxcal{O}})_{{\scriptstyle\mathboondoxcal{O}}\in X}$, where $[c]_{\scriptstyle\mathboondoxcal{O}}$ denotes the germ of $c$ in the stalk $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{C}^\infty_M ,\mathcal{A})$, is a quasi-isomorphism. \end{proposition} \begin{proof} Observe that the sheaves $\hat{\mathscr{C}}_k ( \mathcal{C}^\infty_M , \mathcal{A})$ are fine and that $\varrho_0 : C_0(\mathcal{C}^\infty (M),A)\to\Gamma \big( X, \hat{\mathscr{C}}_0 (\mathcal{C}^\infty_M ,\mathcal{A})\big)$ is the identity morphism. Using again the homotopies from Section \ref{sec:localization-homotopies}, the proof is completely analogous to the one of Proposition \ref{prop:isomorphism-global-section-space}, hence we skip the details. 
\end{proof} Next, we compare the sheaf complex $\hat{\mathscr{C}}_\bullet ( \mathcal{C}^\infty_M , \mathcal{A})$ with the complex of relative forms by constructing a morphism of sheaf complexes between them. \begin{proposition} \label{prop:chain-map-horizontal-relative-forms} Under the assumptions of the preceding proposition define for each open $G$-invariant subset $V \subset M$ and $k\in {\mathbb N}$ a $\mathcal{C}^\infty (V/G)$-module map by \[ \begin{split} \Phi_{k,V/G} : \: C_k \big( \mathcal{C}^\infty (V) , \mathcal{A} (V/G)\big) \cong \mathcal{A}(V/G) \hat{\otimes} \, C_k \big( \mathcal{C}^\infty (V) \big) & \to \rOmega{k}{\Lambda_0} \big( \Lambda_0 (G\ltimes V) \big), \\ f_0 \otimes f_1 \otimes \ldots \otimes f_k & \mapsto \big[ f_0 \, d (s_{G\ltimes V}^* f_1 )\wedge \ldots \wedge d (s_{G\ltimes V}^* f_k ) \big]_{\Lambda_0} \ . \end{split} \] Then the $\Phi_{k,V/G}$ are the components of a morphism of sheaf complexes \[ \Phi_\bullet: \hat{\mathscr{C}}_\bullet ( \mathcal{C}^\infty_M , \mathcal{A}) \to \pi_* (s_{|\Lambda_0})_* \rOmega{\bullet}{\Lambda_0} \ , \] where the differential on $\rOmega{\bullet}{\Lambda_0}$ is given by the zero differential. The image of a cycle under $\Phi_\bullet$ lies in the sheaf complex of horizontal relative forms $\hrOmega{\bullet}{\Lambda_0}$. \end{proposition} \begin{proof} Let $f_0 \in \mathcal{A} (V/G)$ and $f_1 , \ldots ,f_k \in \mathcal{C}^\infty (V)$. Observe first that $\Phi_{k,V/G} ( f_0 \otimes f_1 \otimes \ldots \otimes f_k )$ is indeed a relative form since $d(s_{G\ltimes V}^*f) \in \Omega^1_{G\ltimes V \to G} (G\ltimes V)$ for all $f \in \mathcal{C}^\infty (V)$.
Now let $(g,p) \in \Lambda_0(G\ltimes V)$ and compute: \begin{displaymath} \begin{split} & \Phi_{k-1,V/G} \, b \, ( f_0 \otimes f_1 \otimes \ldots \otimes f_k ) (g,p) = f_0 (g,p) f_1 (p) \,\big[ d(s_{G\ltimes V}^* f_2) \wedge \ldots \wedge d (s_{G\ltimes V}^* f_k)\big]_{(g,p)} +\\ & \: + \sum_{i=1}^{k-1} (-1)^i f_0 (g,p) f_i(p) \big[ d(s_{G\ltimes V}^* f_1) \wedge \ldots \wedge d (s_{G\ltimes V}^* f_{i-1} ) \wedge d (s_{G\ltimes V}^* f_{i+1} ) \wedge \ldots \wedge d (s_{G\ltimes V}^* f_k) \big]_{(g,p)} + \\ & \: + \sum_{i=1}^{k-1} (-1)^i f_0 (g,p) f_{i+1}(p) \big[ d(s_{G\ltimes V}^* f_1) \wedge \ldots \wedge d (s_{G\ltimes V}^* f_i ) \wedge d (s_{G\ltimes V}^* f_{i+2} ) \wedge \ldots \wedge d(s_{G\ltimes V}^* f_k) \big]_{(g,p)} + \\ & \: + (-1)^k f_k (gp) f_0 (g,p) \, \big[ d(s_{G\ltimes V}^* f_1) \wedge \ldots \wedge d (s_{G\ltimes V}^* f_{k-1})\big]_{(g,p)} = 0 \ . \end{split} \end{displaymath} Hence $\Phi_{\bullet,V/G}$ is a chain map in the sense that it intertwines the Hochschild boundary with the zero differential. It remains to show that the image of $\Phi_{\bullet,V/G}$ is in the space of horizontal relative forms. To this end assume for a moment that $V$ is a $G$-invariant open ball around the origin in some euclidean space ${\mathbb R}^n$ which is assumed to carry an orthogonal $G$-action. Consider the Connes--Koszul resolution of $\mathcal{C}^\infty(V)$ provided in \eqref{eq:ResSmoothFunc}. A chain map between the Connes--Koszul resolution and the Bar resolution of $\mathcal{C}^\infty(V)$ over the identity map $\operatorname{id}_{\mathcal{C}^\infty(V)}$ in degree $0$ is given by the family of maps \[ \begin{split} \Psi_{k,V} : \: \Gamma^\infty (V\times V, E_k) & \to B_k \big( \mathcal{C}^\infty(V) \big) = \mathcal{C}^\infty(V\times V) \hat{\otimes} \mathcal{C}^\infty(V^k),\\ \omega & \mapsto \Big( (v,w,x_1,\ldots,x_k) \mapsto \omega_{(v,w)}\big(Y(x_1,w), \ldots , Y(x_k,w) \big) \Big) \ . 
\end{split} \] Tensoring the Connes--Koszul resolution of $\mathcal{C}^\infty (V)$ with $\mathcal{A} (V/G)$ results in the following complex: \begin{equation} \label{eq:ConvolutionConnesKoszulChainCpl} \Omega^{\dim V}_{G\ltimes V \to G} (G\ltimes V) \overset{i_{Y_{G\ltimes V}}}{\longrightarrow} \ldots \overset{i_{Y_{G\ltimes V}}}{\longrightarrow} \Omega^1_{G\ltimes V \to G} (G\ltimes V) \overset{i_{Y_{G\ltimes V}}}{\longrightarrow} \mathcal{C}^\infty (G\ltimes V) \longrightarrow 0 \ , \end{equation} where $Y_{G\ltimes V} : G\ltimes V \to s^*TV$ is defined by $Y_{G\ltimes V}(g,v) =v -gv$. The composition of $\operatorname{id}_{\mathcal{A} (V/G)} \hat{\otimes} \Psi_{k,V} $ with $\Phi_{k,V/G}$ then is the map which associates to each relative form $\omega \in \Omega^k_{G \ltimes V \to G} (G\ltimes V)$ its restriction $[\omega]_{\Lambda_0}$ to the loop space. It therefore suffices to show that for $\omega \in \Omega^k_{G \ltimes V \to G} (G\ltimes V)$ with $i_{Y_{G\ltimes V}} \omega =0$ the restriction to the loop space is a horizontal relative form. To verify this let $\xi$ be an element of the Lie algebra $\mathfrak{g}$ of $G$ and again $(g,v)\in \Lambda_0(G\ltimes V)$. Then \[ 0 = \left. \frac{d}{dt} \big( i_{Y_{G\ltimes V}} \omega\big)_{(e^{-t\xi} g , v)} \right|_{t=0} = \big( - i_{Y_{G\ltimes V}} i_{\xi_G} d^G\omega + i_{\xi_V} \omega \big)_{(g,v)} = \big( i_{\xi_V} \omega \big)_{(g,v)} \ , \] where $d^G$ denotes the exterior differential with respect to $G$ and $\xi_G$ and $\xi_V$ are the fundamental vector fields of $\xi$ on $G$ and $V$, respectively. So $ i_{\xi_V} \omega \in \mathcal{J}(V) \Omega^{k-1}_{G \ltimes V \to G} (G\ltimes V)$, which means that $[\omega]_{\Lambda_0} \in \hrOmega{k}{\Lambda_0}\big(\Lambda_0(G\ltimes V)\big)$. \end{proof} \begin{proposition}\label{prop:manifold-one-isotropy-type} Let $M$ be a $G$-manifold with only one isotropy type and assume that the orbit space $M/G$ is connected. Then the following holds true.
\begin{enumerate}[{\rm (1)}] \item\label{ite:quotient-manifold} The quotient space $M/G$ carries a unique structure of a smooth manifold such that $\pi : M \to M/G$ is a submersion. \item\label{ite:loop-space-manifold} The loop space $\Lambda_0 (G\ltimes M)$ is a smooth submanifold of $G\times M$. \item\label{ite:isomorphism-basic-relative-forms-slice} Let $p\in M$ be a point and $V_p \subset M$ a slice to the orbit through $p$ that is \begin{enumerate}[\rm (SL1)] \item $V_p$ is a $G_p$-invariant submanifold which is transverse to the orbit $\mathboondoxcal{O}_p := Gp$ at $p$, \item $V := GV_p$ is an open neighborhood of the orbit $\mathboondoxcal{O}_p$ and $V_p$ is closed in $V$, \item there exists a $G$-equivariant diffeomorphism $\eta: N\mathboondoxcal{O}_p \to V$ mapping the normal space $N_p= T_pM/T_p\mathboondoxcal{O}_p$ onto $V_p$. \end{enumerate} Then for every $k$ the map \[ \Psi_{k,V_p/G_p}:\brOmega{k}{\Lambda_0}\big(\Lambda_0(G\ltimes GV_p)\big) \to \brOmega{k}{\Lambda_0} \big(\Lambda_0 (G_p\ltimes V_p)\big) \quad \omega \mapsto \omega_{|\Lambda_0(G_p\ltimes V_p)} \] is an isomorphism and the space of basic relative $k$-forms $\brOmega{k}{\Lambda_0} \big(\Lambda_0 (G_p\ltimes V_p)\big)$ coincides naturally with $\mathcal{C}^\infty(G_p)^{G_p} \hat{\otimes} \Omega^k (V_p)$. \item\label{ite:quism-twisted-hochschild-homology} The chain map \[ \Phi_{\bullet,M/G} : C_\bullet \big( \mathcal{C}^\infty (M) , \mathcal{A} (M/G)\big) \to \hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (G\ltimes M)\big) \] is a quasi-isomorphism when the graded module $\hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (G\ltimes M)\big)$ is endowed with the zero differential. \end{enumerate} \end{proposition} \begin{proof} \textit{ad} (\ref{ite:quotient-manifold}). 
It is a well-known result about group actions on manifolds that under the assumptions made the quotient space $M/G$ carries a unique manifold structure such that $\pi : M \to M/G$ is a submersion; see e.g.\ \cite[Sec.~IV.3]{BreICTG} or \cite[Thm.~4.3.10]{PflAGSSS}. \\ \textit{ad} (\ref{ite:loop-space-manifold}). This has been proved in \cite[Prop.~4.4]{FarPflSeaSISCLGA}. Let us outline the argument since we need it for the following claims, too. By the assumptions made there exists a compact subgroup $K\subset G$ such that every point of $M$ has isotropy type $(K)$. Let $p\in M$ be a point and $G_p$ its isotropy group. Without loss of generality we can assume that $G_p=K$. Let $V_p\subset M $ be a slice to the orbit $\mathboondoxcal{O}$ through $p$. The isotropy group of an element $q\in V_p$ then has to coincide with $K$, so $V_p^K = V_p$. Therefore the map \[ \tau : G/K\times V_p \to M ,\quad (gK,q) \mapsto gq \] is a $G$-equivariant diffeomorphism onto a neighborhood of $\mathboondoxcal{O}$. Now choose a small enough open neighborhood $U$ of $eK$ in $G/K$ and a smooth section $\sigma : U \to G$ of the fiber bundle $G\to G/K$. The map \[ \widetilde{\tau} : G \times U \times V_p \to G \times \tau(U\times V_p) , \quad (h, gK,q) \mapsto\big(\sigma(gK)h \sigma(gK)^{-1},\sigma (gK) q \big) \] then is a diffeomorphism onto the open set $G\times \tau (U\times V_p)$ of $G\times M$. One observes that \[ \widetilde{\tau} ( K \times U \times V_p) = \left( G \times \tau(U\times V_p) \right) \cap \Lambda_0 (G\ltimes M) \ , \] which shows that $ \Lambda_0 (G\ltimes M)$ is indeed a submanifold of $G\times M$. \\ \textit{ad} (\ref{ite:isomorphism-basic-relative-forms-slice}). Put $K=G_p$ as before, let $N = GV_p$, and denote by $\mathfrak{g}$ and $\mathfrak{k}$ the Lie algebras of $G$ and $K$, respectively. Choose an $\operatorname{Ad}$-invariant inner product on $\mathfrak{g}$ and let $\mathfrak{m}$ be the orthogonal complement of $\mathfrak{k}$ in $\mathfrak{g}$.
Next choose for each $q\in N$ an element $h_q \in G$ such that $h_q q \in V_p$. Then \[ \pi^N : N\to \mathboondoxcal{O}_p , \quad q \mapsto h_q^{-1}p \] is an equivariant fiber bundle. Let $TN\to N$ be the tangent bundle of the total space and $VN \to N$ the vertical bundle. Note that $TN$ and $VN$ inherit from $N$ the equivariant bundle structures. Now put for $q\in N$ \[ H_qN := \operatorname{span} \left\{ \left(\operatorname{Ad}_{h_q^{-1}} (\xi)\right)_N (q) \in T_qN \mid \xi \in \mathfrak{m}\right\} \ , \] where $\xi_N$ denotes the fundamental vector field of $\xi$ on $N$. Then $HN \to N$ becomes an equivariant vector bundle complementary to $VN \to N$. Let $P^{\textup{v}}: TN \to VN$ be the corresponding fiberwise projection along $HN$. By construction, $P^{\textup{v}}$ is $G$-equivariant. After these preliminary considerations let $\omega \in \brOmega{k}{\Lambda_0}\big(\Lambda_0 (G\ltimes GV_p)\big)$. The restriction $\omega|_{\Lambda_0 (K\ltimes V_p)}$ then is a basic relative form again, so $\Psi_{k,V_p/K}$ is well defined. Let us show that it is surjective. Assume that $\varrho \in \brOmega{k}{\Lambda_0}\big(\Lambda_0 (K \ltimes V_p)\big)$. We then put for $(g,q) \in \Lambda_0 (G \ltimes N)$ and $X_1,\ldots ,X_k \in T_q N$ \begin{equation} \label{eq:relation-relform-restriction} \omega_{(g,q)} (X_1,\ldots ,X_k) := \varrho_{(h_qgh_q^{-1},h_qq)} \big( Th_q (P^{\textup{v}} (X_1)),\ldots , Th_q (P^{\textup{v}} (X_k))\big) \ , \end{equation} where $Th : TN \to TN$ for $h\in G$ denotes the derivative of the action of $h$ on $N$. Since $Tk$ for $k\in K$ acts as identity on $TV_p\subset VN$, the value $\omega_{(g,q)} (X_1,\ldots ,X_k)$ does not depend on the particular choice of a group element $h_q$ such that $h_qq \in V_p$. Moreover, since for fixed $q_0 \in N$ one can find a small enough neighborhood $U$ and choose $h_q$ to depend smoothly on $q\in U$, $\omega$ is actually a smooth differential form on $N$. By construction, it is a relative form. 
If $X_l \in H_qN$ for some $l$, then $\omega_{(g,q)} (X_1,\ldots ,X_k)=0$ by definition. If $X_l = \big(\operatorname{Ad}_{h_q^{-1}} (\xi)\big)_N (q)$ for some $\xi \in \mathfrak{k}$, then $P^{\textup{v}} X_l = X_l$ and $Th_q X_l (q) =\xi_N (h_q q)$, which entails by \eqref{eq:relation-relform-restriction} that $\omega_{(g,q)} (X_1,\ldots ,X_k)=0$ again since $\varrho $ is a horizontal form. So $\omega$ is a horizontal form. It remains to show that it is $G$-invariant. Let $h\in G$ and $(g,q)$ and $X_1,\ldots ,X_k$ as before. Then \begin{equation*} \begin{split} \omega_{(hgh^{-1},hq)} (Th X_1,\ldots ,Th X_k) \, & = \varrho_{(h_qg h_q^{-1},h_qq)} \big( Th_q Th^{-1} (P^{\textup{v}} (Th X_1)),\ldots , Th_q Th^{-1} (P^{\textup{v}} (Th X_k))\big) \\ & = \omega_{(g,q)} (X_1,\ldots ,X_k) \ , \end{split} \end{equation*} so $\omega$ is $G$-invariant and therefore a basic relative form. Hence $\Psi_{k,V_p/K}$ is surjective. To prove injectivity of $\Psi_{k,V_p/K}$ observe that if $\omega\in\brOmega{k}{\Lambda_0}\big(\Lambda_0(G\ltimes GV_p)\big)$ and $\varrho$ is the restriction $\omega|_{\Lambda_0 (K\ltimes V_p)}$, then Eq.~\eqref{eq:relation-relform-restriction} holds true since $\omega$ is $G$-invariant and horizontal. But this implies that if $\omega|_{\Lambda_0 (K\ltimes V_p)}=0$, then $\omega$ must be $0$ as well, so $\Psi_{k,V_p/K}$ is injective. It remains to show \[ \brOmega{k}{\Lambda_0} \big(\Lambda_0 (K \ltimes V_p)\big)\cong \mathcal{C}^\infty(K)^{K} \hat{\otimes} \Omega^k (V_p) \ . \] To this end observe that $\Lambda_0 (K \ltimes V_p) = K \times V_p$ since $V_p^K =V_p$, which in other words means that every $K$-orbit in $V_p$ is a singleton. The claim now follows immediately. \\ \textit{ad} (\ref{ite:quism-twisted-hochschild-homology}). By Theorem \ref{thm:hochschild-homology-global-sections} it suffices to verify the claim for the case where $M =GV_p$, where $p$ is a point and $V_p$ a slice to the orbit $\mathboondoxcal{O}$ through $p$.
As before let $K$ be the isotropy group $G_p$. By the slice theorem there exists a $K$-equivariant diffeomorphism $\varphi : V_p \to \widetilde{V}_p\subset N_p\mathboondoxcal{O}$ onto an open zero neighborhood of the normal space $N_p\mathboondoxcal{O}$. Choose a $K$-invariant inner product on $N_p\mathboondoxcal{O}$ and a $G$-invariant inner product on the Lie algebra $\mathfrak{g}$. Again as before let $\mathfrak{m}$ be the orthogonal complement of the Lie algebra $\mathfrak{k}$ in $\mathfrak{g}$. The inner product on $\mathfrak{g}$ induces a $G$-invariant riemannian metric on $G$ which then induces a $G$-invariant riemannian metric on the homogeneous space $G/K$ by the requirement that $G \to G/K$ is a riemannian submersion. Now observe that the map $G/K \times V_p \to M$, $(gK,v) \mapsto gv$ is a $G$-equivariant diffeomorphism, so we can identify $M$ with $G/K \times V_p$. The chosen riemannian metrics on $G/K$ and $V_p$ then induce a $G$-invariant metric on $M$. Since ${\mathbb C}$ is faithfully flat over ${\mathbb R}$ we can assume without loss of generality now that smooth functions and forms on $M$ and $G\ltimes M$ are all complex valued, including elements of the convolution algebra. Let $e\in N_p\mathboondoxcal{O} \cong T_pV_p$ be a vector of unit length, and let $Z$ be the vector field on $M$ which maps every point to $e$ (along the canonical parallel transport). Next choose a symmetric open neighborhood $U$ of the diagonal of $G/K \times G/K$ such that for each pair $(gK,hK) \in U$ there is a unique $\xi \in \operatorname{Ad}_h(\mathfrak{m})$ such that $gK = \exp(\xi)hK$. Denote that $\xi$ by $\exp_{hK}^{-1} (gK)$. Let $\chi : G/K \times G/K \to [0,1]$ be a smooth function with support contained in $U$ and such that $\chi =1 $ on a neighborhood of the diagonal.
Now define the vector field $Y :M\times M \to \operatorname{pr}^*_2 (TM)$ by \[ Y\big( (gK,v),(hK,w) \big) = \chi (gK,hK) \, \left( \exp_{hK}^{-1} (gK), v-w\right) + \sqrt{-1} \chi' (gK,hK) \, Z\big( (gK,v),(hK,w) \big) \ , \] where $ \operatorname{pr}_2:M\times M \to M$ is projection onto the second coordinate and where the smooth cut-off function $\chi' : G/K \times G/K \to [0,1]$ vanishes on a neighborhood of the diagonal and is identically $1$ on the locus where $\chi \neq 1$. Finally put $E_k:= \operatorname{pr}^*_2 (\bigwedge^k T^*M)$. Then, by \cite[Lemma 44]{ConNDG}, the complex \[ \Gamma^\infty (M\times M ,E_{\dim M}) \stackrel{i_Y}{\longrightarrow} \ldots \stackrel{i_Y}{\longrightarrow}\Gamma^\infty (M\times M ,E_1) \stackrel{i_Y}{\longrightarrow} \mathcal{C}^\infty (M \times M )\longrightarrow \mathcal{C}^\infty (M) \] is a (topologically) projective resolution of $\mathcal{C}^\infty (M)$ as a $\mathcal{C}^\infty (M)$-bimodule. Tensoring this resolution with the convolution algebra $\mathcal{A} (M/G)$ gives the following complex of relative forms: \begin{equation} \label{eq:quasiisomorphic-equivariant-chain-cplxc} \Omega^{\dim M}_{G\ltimes M\to G} (G\ltimes M) \stackrel{i_{Y_G}}{\longrightarrow} \ldots \stackrel{i_{Y_G}}{\longrightarrow} \Omega^1_{G\ltimes M\to G} (G\ltimes M) \stackrel{i_{Y_G}}{\longrightarrow} \mathcal{C}^\infty (G\ltimes M) \ , \end{equation} where $Y_G: G \times M \to \operatorname{pr}^*_2 TM $ is the vector field \[ (g,(hK,v))\mapsto \chi (ghK,hK) \, \left( \exp_{hK}^{-1} (ghK), 0\right)+\sqrt{-1} \chi' (ghK,hK) \, Z\big( (ghK,v),(hK,v) \big)\ . \] The vector field $Y_G$ vanishes at $(g,(hK,v))$ if and only if $g \in hKh^{-1}$, that is, if and only if $(g,(hK,v))\in \Lambda_0(G\ltimes M)$.
We will use the parametric Koszul resolution of Proposition \ref{prop:parametrizedkoszuleulervectorfield} to show that the complex \eqref{eq:quasiisomorphic-equivariant-chain-cplxc} is quasi-isomorphic to the complex of horizontal relative forms \begin{equation} \label{eq:quasiisomorphic-horizontal-relative-chain-cplxc} \hrOmega{\dim M}{\Lambda_0}(\Lambda_0(G\ltimes M)) \stackrel{0}{\longrightarrow} \ldots \stackrel{0}{\longrightarrow} \hrOmega{1}{\Lambda_0}(\Lambda_0(G\ltimes M)) \stackrel{0}{\longrightarrow} \mathcal{C}^\infty (\Lambda_0(G\ltimes M)) \ . \end{equation} This will then entail the claim. So it remains to show that \eqref{eq:quasiisomorphic-equivariant-chain-cplxc} and \eqref{eq:quasiisomorphic-horizontal-relative-chain-cplxc} are quasi-isomorphic. We first consider the case where $V_p$ consists just of a point. Then $M$ coincides with the homogeneous space $G/K$ and $Y_G$ is an Euler-like vector field on its set of zeros \[ S = \{ (g,hK)\in G \times G/K \mid g \in hKh^{-1} \} \subset G \times G/K \ . \] Note that $S$ is a submanifold of $G \times G/K$. That $Y_G$ is Euler-like on $S$ indeed follows from the equality \[ \left.\frac{d}{dt}\exp_{hK}^{-1}\big( \exp(t\xi) ghK \big) \right|_{t=0} = \left.\frac{d}{dt}\exp_{hK}^{-1}\big( \exp(t\xi) hK \big) \right|_{t=0} = \xi \] for all $(g,hK)\in S$, $\xi \in \operatorname{Ad}_{gh} (\mathfrak{m}) = \operatorname{Ad}_h (\mathfrak{m})$. Hence, by Proposition \ref{prop:parametrizedkoszuleulervectorfield}, the complex \begin{equation*} \Omega^{\dim G/K}_{G\ltimes G/K\to G} (G\ltimes G/K) \stackrel{i_{Y_G}}{\longrightarrow} \ldots \stackrel{i_{Y_G}}{\longrightarrow} \Omega^1_{G\ltimes G/K \to G} (G\ltimes G/K) \stackrel{i_{Y_G}}{\longrightarrow} \mathcal{C}^\infty (G\ltimes G/K) \end{equation*} is quasi-isomorphic to \[ 0 \longrightarrow \ldots \longrightarrow 0\longrightarrow \mathcal{C}^\infty (S) \ . \] Since $\hrOmega{k}{\Lambda_0} (\Lambda_0 (G\ltimes G/K)) = 0$ for $k \geq 1$, the claim follows in the case $V_p= \{ p\}$.
Now consider the case $M = G/K \times V_p$ with $V_p$ an arbitrary manifold on which $K$ acts trivially. Observe that in this situation \[ \Omega^k_{G\ltimes M\to G} (G\ltimes M)\cong \bigoplus_{0\leq l \leq k} \Omega^l_{G\ltimes G/K \to G} (G\ltimes G/K)\hat{\otimes}\Omega^{k-l} (V_p) \] and that $Y_G$ acts, near its zero set $S = \Lambda_0(G\ltimes M)$, only on the first components \[ \Omega^l_{G\ltimes G/K \to G} (G\ltimes G/K)\ .\] Hence the chain complex \eqref{eq:quasiisomorphic-equivariant-chain-cplxc} is quasi-isomorphic to the chain complex \[ \mathcal{C}^\infty(\Lambda_0 (G\ltimes G/K )) \hat{\otimes} \Omega^\bullet (V_p) \] with zero differential. But since \[ \hrOmega{k}{\Lambda_0} (\Lambda_0 (G\ltimes M)) \cong \mathcal{C}^\infty(\Lambda_0 (G\ltimes G/K )) \hat{\otimes} \Omega^k (V_p) \] the claim is now proved. \end{proof} \begin{conjecture}[Brylinski {\cite[Prop.~3.4]{BryAAGAH}\& \cite[p.~24, Prop.]{BryCHET}}]\label{BryConjBHF} Let $M$ be a $G$-manifold and regard $\hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (G\ltimes M)\big)$ as a chain complex endowed with the zero differential. Then the chain map \[ \Phi_{\bullet,M/G} : C_\bullet \big( \mathcal{C}^\infty (M) , \mathcal{A} (M/G)\big) \to \hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (G\ltimes M)\big) \] is a quasi-isomorphism. \end{conjecture} \begin{remark} Proposition \ref{prop:manifold-one-isotropy-type} shows that Brylinski's conjecture holds true for $G$-manifolds having only one isotropy type. Corollary \ref{cor:finitegroup} tells us that Brylinski's conjecture is true for finite group actions. In the following section we will verify it for circle actions. \end{remark} \section{Basic relative forms} Let $M$ be a smooth manifold equipped with a left action of a compact Lie group $G$ which we write as $(g,x)\mapsto gx$, for $g\in G, x\in M$.
Associated to this action is the Lie groupoid $G\ltimes M\rightrightarrows M$ with source map given by the projection $(g,x)\mapsto x$ and target given by the action $(g,x)\mapsto gx$. The {\em loop space} $\Lambda_0(G\ltimes M)\subset G\times M$ coincides in this case with the disjoint union of all fixed point sets $M^g\subset M$ for $g\in G$: \[ \Lambda_0(G\ltimes M):=\big\{(g,p)\in G\times M \mid gp=p\big\}= \bigcup_{g\in G}\{g\}\times M^g \ . \] For fixed $g\in G$, the fixed point subset $M^g\subset M$ is a closed submanifold but it can vary wildly as $g$ varies over $G$. Therefore, the loop space $\Lambda_0(G\ltimes M)$ is a singular subset of $G\times M$. If we let $G$ act on $G\times M$ by \[ h\cdot (g,p):=(hgh^{-1},hp), \quad h\in G , \: (g,p)\in G \times M \ , \] this action preserves $\Lambda_0(G\ltimes M)\subset G\times M$, sending $M^g$ to $M^{hgh^{-1}}$. In \cite{BryAAGAH,BryCHET}, Brylinski introduces the notion of {\em basic relative forms}. Intuitively, a basic relative $k$-form is a smooth family $(\omega_g)_{g\in G} \in \prod_{g\in G} \Omega^k(M^g)$ of differential forms on fixed point subspaces which are \begin{enumerate}[(i)] \item\label{ite:horizontal} {\em horizontal}, that is, $i_{\xi_{M^g}}\omega_g=0$ for all $g\in G$ and $\xi\in {\rm Lie}(G_g)$, and \item\label{ite:invariant} $G$-{\em invariant}, which means that $h^*\omega_g=\omega_{h^{-1}gh}$ for all $g,h \in G$. \end{enumerate} Here, $G_g:=Z_G(g)$ denotes the centralizer of $g\in G$, which acts on $M^g$. Because of the singular nature of $\Lambda_0$, one needs to make sense of what exactly is meant by a {\em smooth} family of differential forms.
There are two solutions for this: \subsection*{(A) Sheaf theory} In the sense of Grauert--Grothendieck and following Brylinski \cite{BryCHET}, we define the sheaf of relative forms on $\Lambda_0(G\ltimes M)$ as the quotient sheaf \[ \rOmega{k}{\Lambda_0}:=\iota^{-1}\left(\Omega^k_{G\ltimes M\to G}\slash \left( \mathcal{J} \Omega^k_{G\ltimes M\to G} + d_\textup{rel}\mathcal{J} \wedge \Omega^{k-1}_{G\ltimes M\to G} \right) \right) \ . \] Here, $\Omega^k_{G\ltimes M\to G}$ denotes the sheaf of $k$-forms on $G\times M$ relative to the projection ${\rm pr}_1:G\times M\to G$ and $\iota$ the canonical injection $\Lambda_0(G\ltimes M) \hookrightarrow G\ltimes M$. A form $\omega\in \Omega^k_{G\ltimes M\to G} (\widetilde{U})$ for $\widetilde{U}\subset G\ltimes M$ open is given by a smooth section of the vector bundle $s^*\bigwedge^kT^*M$ over $\widetilde{U}$, that is, by an element $\omega \in \Gamma^\infty (\widetilde{U},s^*\bigwedge^kT^*M)$. The de Rham differential on $M$ defines a differential $d_\textup{rel}: \Omega^k_{G\ltimes M\to G}\to \Omega^{k+1}_{G\ltimes M\to G}$. Finally, $\mathcal{J}$ denotes the vanishing ideal of smooth functions on $G\times M$ that restrict to zero on $\Lambda_0(G\ltimes M)\subset G\times M$. Note that $\mathcal{J} \Omega^\bullet_{G\ltimes M\to G} + d_\textup{rel}\mathcal{J} \wedge \Omega^\bullet_{G\ltimes M\to G} $ is a differential graded ideal in the sheaf complex $\left( \Omega^\bullet_{G\ltimes M\to G}, d_\textup{rel}\right)$, so $\rOmega{\bullet}{\Lambda_0}$ becomes a sheaf of differential graded algebras on the loop space. For open $U\subset \Lambda_0(G\ltimes M)$, an element of $\rOmega{k}{\Lambda_0} (U)$ can now be understood as an equivalence class $[\omega]_{\Lambda_0}$ of forms $\omega \in \Omega^k_{G\ltimes M\to G} (\widetilde{U})$ defined on some open $\widetilde{U}\subset G\ltimes M$ such that $U = \widetilde{U}\cap \Lambda_0(G\ltimes M)$.
This explains the definition of the sheaf complex of relative forms on the singular space $\Lambda_0(G\ltimes M)$; confer also \cite{PflPosTanGGCDSSCB}. Next observe that the map which associates to each $p\in M$ the conormal space $N^*_p := \big( T_pM / T_p\mathcal{O}_p \big)^* $ is a generalized subdistribution of the cotangent bundle $T^*M$ in the sense of Stefan--Sussmann, cf.~\cite{SteISVF,SusOFVFID,JotRatSniSRDS}. In the language of \cite{DraLeeParRicSDFG}, $N^*$ is a cosmooth generalized distribution. The restriction of $N^*$ to each orbit, and even to each stratum of $M$ of a fixed isotropy type, is a vector bundle, cf.~\cite{PflPosTanGOSPLG}. Consequently, the pullback distribution $s^*\bigwedge^k N^*$ is naturally a cosmooth generalized subdistribution of $\bigwedge^k T^*G\ltimes M$. We define the space $\hrOmega{k}{\Lambda_0} (U)$ of \emph{horizontal relative $k$-forms on the loop space} (over $U$) as the subspace \[ \hrOmega{k}{\Lambda_0} (U) := \big\{ [\omega]_{\Lambda_0} \in \rOmega{k}{\Lambda_0} (U) \mid \omega_{(g,p)} \in {\bigwedge}^kN^*_p \text{ for all } (g,p)\in U \big\} \, . \] This implements the above condition \eqref{ite:horizontal}. Observe that the action of $G$ on $TM$ leaves the orbits invariant, hence also induces an action on the conormal distribution $N^*$ in a canonical way \cite[Sec.~3]{PflPosTanGOSPLG}. Call a section $[\omega]_{\Lambda_0} \in \hrOmega{k}{\Lambda_0} (U)$ \emph{invariant}, if \begin{equation} \label{eq:DefInvHorForms} \omega_{hgh^{-1},hp} (hv_1 , \ldots , hv_k ) = \omega_{(g,p)} (v_1 , \ldots , v_k ) \end{equation} for all $(g,p)\in U \subset \Lambda_0 (G\ltimes M) $, $h\in G $ such that $(hgh^{-1},hp)\in U$ and $v_1 , \ldots , v_k \in N_p$. Note that the invariance of $[\omega]_{\Lambda_0}$ does not depend on the particular choice of the representative $\omega$ such that $\omega_{(g,p)} \in \bigwedge^kN^*_p$.
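For instance, if $G$ is abelian, as is the case for the circle actions considered in Section \ref{sec:circleaction}, condition \eqref{eq:DefInvHorForms} simplifies to \[ \omega_{(g,hp)} (hv_1 , \ldots , hv_k ) = \omega_{(g,p)} (v_1 , \ldots , v_k ) \ , \] so for every fixed $g$ the member of the family sitting over $\{g\}\times M^g$ is required to be invariant under the action of all of $G$ on $M^g$.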
Condition \eqref{ite:invariant} is covered by defining the space $\brOmega{k}{\Lambda_0} (U)$ of \emph{basic relative $k$-forms on the loop space} (over $U$) now as the space of all invariant horizontal relative $k$-forms $[\omega]_{\Lambda_0} \in \hrOmega{k}{\Lambda_0} (U)$. Obviously, one thus obtains sheaves $\hrOmega{k}{\Lambda_0}$ and $\brOmega{k}{\Lambda_0}$ on the loop space $\Lambda_0(G\ltimes M)$. We will call the push forward $\pi_* s_* \brOmega{k}{\Lambda_0}$ by the source map $s$ and the canonical projection $\pi:M \to X =M/G$ the sheaf of basic relative forms as well and denote it also by the symbol $\brOmega{k}{\Lambda_0}$. This will not lead to any confusion. The interpretation of basic relative forms as smooth families of forms on the fixed point manifolds is still missing, but will become visible in the following approach. \subsection*{(B) Differential Geometry} From a more differential geometric perspective, we consider the family of vector bundles $F\to \Lambda_0$ defined by $F_{(g,p)}:=T_p^*M^g$ for $(g,p)\in \Lambda_0(G\ltimes M)$. Of course, this does not define a (topological) vector bundle over the inertia space $\Lambda_0 (G\ltimes M)$ because in general the rank jumps discontinuously, but it is again a cosmooth generalized distribution. Using the canonical projection $s^*T^*M|_{\Lambda_0}\to F$ we say that a local section $\omega\in\Gamma(U,\bigwedge^kF)$ over $U\subset\Lambda_0$ is {\em smooth} if for each $(g,p)\in U$ there exist open neighborhoods $O\subset G$ of $g$ and $V\subset M$ of $p$ together with a \emph{locally representing} smooth $k$-form $\omega_{O,V} \in \Gamma^\infty (O \times V, \bigwedge^k s^* T^*M)$ such that $(O \times V) \cap \Lambda_0 \subset U $ and $\omega_{(h,q)} = \big[\omega_{O,V}\big]_{(h,q)}$ for all $(h,q) \in (O \times V) \cap \Lambda_0(G\ltimes M)$.
Hence a smooth section $\omega$ can be identified with the smooth family $(\omega_g)_{g\in \operatorname{pr}_G(U)}$ of forms $\omega_g \in \Omega^k \Big( s \big( U \cap (\{ g \} \times M^g)\big)\Big)$ which are uniquely defined by the condition that ${\omega_g}_{|V^g} = \iota_{V^g}^* \omega_{O,V}$ for all $g \in O$ and all pairs $(O,V)$ with locally representing forms $\omega_{O,V}$ as before. The $\iota_{V^g} : V^g \hookrightarrow V$ hereby are the canonical embeddings of the fixed point manifolds $V^g$. We denote the space of all smooth sections of $\bigwedge^k F$ over $U$ by $\Gamma^\infty (U,\bigwedge^k F )$ or $\Gamma^\infty_{\bigwedge^k F} (U)$. Obviously, $\Gamma^\infty_{\bigwedge^k F}$ becomes a sheaf on $\Lambda_0$. \begin{proposition} \label{prop:factorization-relative-forms} The canonical sheaf morphism $\theta^k:\iota^{-1}\Gamma^\infty_{\bigwedge^k s^* T^*M} \to \Gamma^\infty_{\bigwedge^k F}$ factors through a unique epimorphism of sheaves $\Theta^k:\rOmega{k}{\Lambda_0} \to \Gamma^\infty_{\bigwedge^k F}$ making the following diagram commutative: \[ \xymatrix{\iota^{-1}\Gamma^\infty_{\bigwedge^k s^* T^*M} \ar[rr]^{\theta^k}\ar[d]&&\Gamma^\infty_{\bigwedge^k F}\\ \rOmega{k}{\Lambda_0}\ar[rru]_{\Theta^k}&&} \] \end{proposition} \begin{proof} The claim follows by showing that for open $\widetilde{U} \subset G \times M$ and $U := \widetilde{U} \cap \Lambda_0 (G \ltimes M)$ the canonical map $\theta^k_{\widetilde{U}}: \Gamma^\infty (\widetilde{U} , \bigwedge^k s^* T^*M) \to \Gamma^\infty (U ,\bigwedge^k F)$, $\omega \mapsto [\omega]$ is surjective and has \[ \mathcal{K} (\widetilde{U}) := \mathcal{J} (\widetilde{U}) \, \Gamma^\infty (\widetilde{U} , {\bigwedge}^k s^* T^*M) + d_\textup{rel}\mathcal{J} (\widetilde{U}) \wedge \Gamma^\infty (\widetilde{U},{\bigwedge}^{k-1} s^* T^*M) \] contained in its kernel. The sheaf $\Gamma^\infty_{\bigwedge^k F}$ is a $\mathcal{C}^\infty_{\Lambda_0}$-module sheaf, hence a soft sheaf.
This entails surjectivity of $\theta^k_{\widetilde{U}}$. Assume that $\omega \in \Gamma^\infty (\widetilde{U},\bigwedge^k s^* T^*M)$ is of the form $\omega = f\, \varrho$ for some $f \in \mathcal{J} (\widetilde{U})$ and $\varrho \in \Gamma^\infty (\widetilde{U},\bigwedge^k s^* T^*M)$. Then \[ \theta^k_{\widetilde{U}} (\omega)_{(g,p)} = \theta^k_{\widetilde{U}}(f\varrho)_{(g,p)} = f(g,p) \, \varrho_{(g,p)} =0 \quad \text{for all } (g,p)\in U \ . \] Now assume $\omega = d_\textup{rel}f \wedge \varrho$ with $f$ as before and $\varrho \in \Gamma^\infty (\widetilde{U},\bigwedge^{k-1} s^* T^*M)$. To prove that $ \theta^k_{\widetilde{U}}(\omega) =0 $ it suffices to show that $ \iota^*_{U^g_g} \omega =0$ for all $g \in \operatorname{pr}_G(U)$. Fix some $g \in \operatorname{pr}_G(U)$ and $p \in U^g_g$ and choose an open coordinate neighborhood $V \subset M$ with coordinates $(x_1, \ldots , x_d): V \hookrightarrow {\mathbb R}^d$ such that $V \subset U_g$, $({x_1}_{|V^g}, \ldots , {x_k}_{|V^g}): V^g \hookrightarrow {\mathbb R}^k$ is a local coordinate system of $M^g$ over $V^g$ and such that $V^g$ is the zero locus of the coordinate functions $(x_{k+1},\ldots , x_d): V \hookrightarrow {\mathbb R}^{d-k}$. After possibly shrinking $V$ there exists an open neighborhood $O$ of $g$ in $G$ such that $O \times V \subset \widetilde{U}$. Extend the coordinate functions $(x_1,\ldots,x_d)$ to smooth functions on $O\times V$ constant along the fibers of the source map. Then we have $d_\textup{rel}f = \sum_{l=1}^d \frac{\partial f}{\partial x_l} dx_l $.
Since $\frac{\partial f}{\partial x_l} (g,p) =0$ for $p\in V^g$ and $1 \leq l \leq k$ and since $\iota^*_{V^g} dx_l=0$ for $k < l \leq d$ one gets \[ \iota^*_{V^g}\iota^*_{U^g_g} \omega = \iota^*_{V^g} \big( d_\textup{rel}f \wedge \varrho \big) = \sum_{l=1}^d \Big( \iota^*_{V^g} \frac{\partial f}{\partial x_l}\Big) \, \big(\iota^*_{V^g} dx_l \big) \wedge \big(\iota^*_{V^g} \varrho \big) = 0 \ , \] where, by slight abuse of notation, we have also used the symbol $\iota_{V^g}$ for the embedding $V^g \hookrightarrow U$, $p \mapsto (g,p)$. So $\iota^*_{U^g_g} \omega = 0$ and $\mathcal{K}(\widetilde{U})$ is in the kernel of $\theta^k_{\widetilde{U}}$. Hence $\theta^k_{\widetilde{U}}$ factors through some linear map \[ \Theta^k_U: \rOmega{k}{\Lambda_0}(U) \to \Gamma^\infty (U,{\bigwedge}^k F) \ . \] This proves the claim. \end{proof} \begin{remark} Conjecturally, the morphism $\Theta^k$ is an isomorphism, showing that the sheaf theoretic approach (A) and the differential geometric approach (B) above lead to the same definition of basic relative forms. Below, in Section \ref{sec:circleaction}, we prove this conjecture for the case of an $S^1$-action. In the general case this conjecture remains open. Note that the image of the sheaf of horizontal relative $k$-forms under $\Theta^k$ coincides exactly with those families of forms $(\omega_g)_{g\in \operatorname{pr}_G(U)}$ fulfilling condition \eqref{ite:horizontal} above. Since $G$ naturally acts on the generalized distribution $F$ and $\Theta^k$ is obviously equivariant by construction, the original conditions by Brylinski are now recovered in the differential geometric picture of relative forms as well. \end{remark} \begin{remark} \label{rem:block-getzler} In \cite{BloGetECHED}, Block and Getzler define a sheaf on $G$ whose stalk at $g\in G$ is given by the space of $G_g$-equivariant differential forms on $M^g$.
There are two differentials on this sheaf, $d$ and $\iota$, together constituting the equivariant differential $D:=d+\iota$, which, under an HKR-type map, correspond to the Hochschild and cyclic differential on the crossed product algebra $G\ltimes C^\infty(M)$. Taking cohomology with respect to $\iota$ alone leads to a definition of basic relative forms very similar to the one above; note, however, that the basic relative forms defined above form a sheaf over the quotient $M\slash G$, not over the group $G$. \end{remark} \subsection{The equivariant Hochschild complex} To compute the Hochschild homology of the smooth crossed product $G\ltimes A$, consider the bigraded vector space \[ C= \bigoplus_{p,q\geq 0} C_{p,q},\quad \text{with} \quad C_{p,q}:=\mathcal{C}^\infty(G^{(p+1)},A^{\otimes(q+1)}). \] There exists a bi-simplicial structure on $C$ given by face maps $\delta_i^v:C_{p,q}\to C_{p,q-1}$, $0\leq i\leq q$ and $\delta_j^h:C_{p,q}\to C_{p-1,q}$, $0\leq j\leq p$ defined as follows. The vertical maps are given by \[ \delta_i^v(F)(g_0,\ldots,g_p):= \begin{cases} b_i(F( g_0,\ldots,g_p))&\text{for } 0\leq i\leq q-1,\\ b^{(g_0\cdots g_p)^{-1}}_q(F(g_0,\ldots,g_p))&\text{for } i = q, \end{cases} \] where the $b_i$ for $0\leq i\leq q-1$ are the first $q$ simplicial maps multiplying the $i$-th and $(i+1)$-th entries in $A^{\otimes(q+1)}$ underlying the Hochschild chain complex of $A$, and $b_q^g$ is the $g$-twisted version of the last one: \[ b^g_q(a_0\otimes\ldots\otimes a_q):=(g\cdot a_q)a_0\otimes a_1\otimes\ldots\otimes a_{q-1}\ , \quad a_0,\ldots ,a_q \in A , \: g\in G \ . \] The horizontal maps are defined by \[ \delta_j^h(F)(g_0,\ldots,g_{p-1}):= \begin{cases}\int_GF(g_0,\ldots, h,h^{-1}g_j,\ldots, g_{p-1})\, dh&\text{for } 0\leq j\leq p-1,\\ \int_G h\cdot F(h^{-1}g_0,g_1,\ldots,g_{p-1},h)\, dh & \text{for }j=p, \end{cases} \] where, in the second line, $h$ acts diagonally on $A^{\otimes(q+1)}$. The following observations now hold true.
\paragraph{$(i)$} The diagonal complex $\operatorname{diag}(C_{\bullet,\bullet}):=\bigoplus_{k\geq 0}C_{k,k}$ equipped with the differential \[ d_\textup{diag}:=\sum_i(-1)^i\delta^h_i\delta^v_i \] is isomorphic to the Hochschild complex $C_k(G\ltimes A)=\mathcal{C}^\infty\big( G^{(k+1)},A^{\otimes(k+1)}\big)$ of the smooth crossed product algebra $G \ltimes A$ via the isomorphism $\overline{\phantom{F}}: \operatorname{diag}(C_{\bullet,\bullet}) \to C_\bullet(G \ltimes A)$, $F\mapsto \overline{F}$ defined by \begin{equation} \label{iso-c} \overline{F}(g_0,\ldots,g_k):=(g_k^{-1}\cdots g_0^{-1}\otimes g_k^{-1}\cdots g_1^{-1}\otimes\ldots\otimes g_k^{-1}) \cdot F(g_0,\ldots,g_k), \quad F\in C_{k,k} \ , \end{equation} where the pre-factor on the right hand side acts componentwise via the action of $G$ on $A$. \paragraph{$(ii)$} The vertical differential $\delta^v$ in the total complex is given by a twisted version of the standard Hochschild differential of the algebra $A$. The horizontal differential $\delta^h$ in the $q$-th row can be interpreted as the Hochschild differential of the convolution algebra $\mathcal{C}^\infty(G)$ with values in the $G$-bimodule $\mathcal{C}^\infty(G,A^{\otimes (q+1)})$ with bimodule structure \[ (g\cdot f)(h):=g\cdot \big(f(g^{-1}h)\big),\quad (f\cdot g)(h):=f(hg),\quad f\in \mathcal{C}^\infty(G,A^{\otimes (q+1)}), \: g,h \in G \ . \] The homology of this complex is isomorphic to the group homology of $G$ with values in the adjoint module $\mathcal{C}^\infty(G,A^{\otimes (q+1)})_{\rm ad}$ given by $\mathcal{C}^\infty(G,A^{\otimes (q+1)})$ equipped with the diagonal action: \[ H_\bullet \big( \mathcal{C}^\infty(G),\mathcal{C}^\infty(G,A^{\otimes (q+1)}) \big)\cong H_\bullet^{\rm diff}\big( G , \mathcal{C}^\infty(G,A^{\otimes (q+1)})_{\rm ad} \big) \ .
\] Because $G$ is a compact Lie group, its group homology vanishes except in degree zero: \[ H_p^{\rm diff}\big( G,\mathcal{C}^\infty(G,A^{\otimes (q+1)})_{\rm ad} \big)= \begin{cases} \mathcal{C}^\infty(G,A^{\otimes (q+1)})_{\rm ad}^{\rm inv} & \text{for } p=0 , \\ 0&\text{for } p >0. \end{cases} \] \paragraph{$(iii)$} Filtering the total complex by rows, we obtain a spectral sequence with $E^2$-terms \[ E^2_{0,q}\cong \mathcal{C}^\infty(G,A^{\otimes(q+1)})^{\rm inv},\qquad E^2_{p,q}=0 \text{ for } p\geq 1. \] The spectral sequence therefore collapses and the homology of the total complex is computed by the complex $C^G_\bullet (A):=\mathcal{C}^\infty(G,A^{\otimes(\bullet+1)})^{\rm inv}$ equipped with the twisted Hochschild differential \[ (b_{\rm tw}f)(g):=\sum_{i=0}^q(-1)^ib_i(f(g))+(-1)^{q+1}b_{q+1}^{g^{-1}}(f(g))\ , \quad f \in \mathcal{C}^\infty(G,A^{\otimes(q +2)}) , \: g \in G \ . \] This complex is called the {\em equivariant Hochschild complex} in \cite{BloGetECHED}. \paragraph{$(iv)$} By the Eilenberg--Zilber theorem, the diagonal complex is quasi-isomorphic to the total complex ${\rm Tot}(C_{\bullet,\bullet})$ with $\delta_{\rm Tot}:=\delta^h+\delta^v$ where the horizontal and vertical differentials are given by the usual formulas $\delta^{h,v}:=\sum_i(-1)^i\delta_i^{h,v}$. There is an explicit formula for the map $EZ: {\rm diag}(C_{\bullet,\bullet})\to {\rm Tot}(C_{\bullet,\bullet})$ implementing this quasi-isomorphism. Combining items $(i)$--$(iv)$ above we conclude that the following holds. \begin{proposition} \label{prop:qismequivariantcplx} Given a complete bornological algebra $A$ with a smooth left $G$-action, the composition \[ \widetilde{\phantom{F}}: C_\bullet(G \ltimes A )\stackrel{\overline{\phantom{F}}}{\longrightarrow} {\rm diag}(C)_\bullet\stackrel{EZ}{\longrightarrow}{\rm Tot}(C_{\bullet,\bullet})\longrightarrow C^G_\bullet(A) \] is a quasi-isomorphism of complexes.
The explicit formula is given by mapping a chain $F\in C_k(\mathcal{C}^\infty(G,A))$ to the equivariant Hochschild chain $\widetilde{F}\in C^G_{k}(A)$ defined by \[ \widetilde{F}(g):=\int_{G^{k}}(g^{-1}h_1\cdots h_k\otimes 1\otimes h_1\otimes \ldots\otimes h_1\cdots h_{k-1})F(h_k^{-1}\cdots h_1^{-1}g,h_1,\ldots,h_k)dh_1\cdots dh_k. \] \end{proposition} \begin{remark}\label{rem:qismequivariantcplx} This result was originally proved by Brylinski in \cite{BryAAGAH,BryCHET}. Observe that a right $G$-action $\beta$ on an algebra $A$ can be changed to a left $G$-action $\alpha$ on $A$ by $\alpha(g)(a):=\beta(g^{-1})(a)$. Let $A^{\operatorname{op}}$ be the opposite algebra of $A$ and assume that $\beta$ defines a right $G$-action on $A^{\operatorname{op}}$. Use $A^{\operatorname{op}}\rtimes_\beta G$ to denote the (right) crossed product algebra defined by the right $G$-action on $A^{\operatorname{op}}$. Define a map $\Phi: G{_\alpha \ltimes} A\to A^{\operatorname{op}}\rtimes_\beta G$ by $\Phi(f)(g):=f(g^{-1})$. One directly checks the identity \[ \Phi (f_1\ast_{G{_\alpha \ltimes} A} f_2)= \Phi (f_2)\ast_{A^{\operatorname{op}}\rtimes_\beta G}\Phi(f_1), \] and concludes that the map $\Phi$ induces an isomorphism of algebras \[ G{_\alpha \ltimes} A \cong \big(A^{\operatorname{op}}\rtimes_\beta G\big)^{\operatorname{op}}. \] Furthermore notice that for a general algebra $\mathfrak{A}$, the algebra $\mathfrak{A}\otimes \mathfrak{A}^{\operatorname{op}}$ is naturally isomorphic to $\mathfrak{A}^{\operatorname{op}}\otimes \mathfrak{A}$ and therefore $HH_\bullet(\mathfrak{A})\cong HH_\bullet(\mathfrak{A}^{\operatorname{op}})$ since the corresponding bar resolutions coincide.
Applying this observation to $\big(A^{\operatorname{op}}\rtimes_\beta G\big)^{\operatorname{op}}$, one concludes that \[ HH_\bullet(G{_\alpha \ltimes} A )\cong HH_\bullet\big(A^{\operatorname{op}}\rtimes_\beta G\big), \] and that Proposition \ref{prop:qismequivariantcplx} also holds true for a smooth right $G$-action on an algebra $A$, meaning that there is a quasi-isomorphism of chain complexes \[ \widehat{\phantom{F}}: C_\bullet( A\rtimes G ) \longrightarrow C^G_\bullet(A^{\operatorname{op}}) \ . \] Note that for a right $G$-action the convolution product on $\mathcal{C}^\infty (G,A)$ is given by \begin{equation} \label{eq:convolution-product-right-action-algebra} (f_1\ast f_2)(g):=\int_G ( f_1(h) \cdot (h^{-1}g)) \, f_2(h^{-1}g) \, dh \ ,\quad f_1,f_2\in \mathcal{C}^\infty(G,A), \: g \in G \ . \end{equation} Throughout this paper, as it is more natural to have a left $G$-action on a manifold $M$, we will work with a right $G$-action on $\mathcal{C}^\infty(M)$. \end{remark} \section{The circle action case} \label{sec:circleaction} \subsection{Rotation in a plane} Let us consider the case of the natural $S^1$-action on ${\mathbb R}^2$ by rotation. First we describe the ideal sheaf $\mathcal{J} \subset \mathcal{C}^\infty_{S^1 \ltimes {\mathbb R}^2} $ which consists of smooth functions on open sets of $S^1\times {\mathbb R}^2$ vanishing on $\Lambda_0(S^1 \ltimes {\mathbb R}^2)$. To this end denote by $x_j : S^1 \times {\mathbb R}^2 \to {\mathbb R}$, $j=1,2$, the projections onto the first and second coordinate of ${\mathbb R}^2$, respectively, and by $\tau : (S^1\setminus \{ - 1 \}) \times {\mathbb R}^2 \to (-\pi,\pi)$ the coordinate map $(g,v) \mapsto \operatorname{Arg} (g)$. By $r = \sqrt{x_1^2 + x_2^2}$ we denote the radial coordinate and by $B_\varrho (v)$ the open disc of radius $\varrho >0$ around a point $v \in {\mathbb R}^2$.
Note that the loop space $\Lambda_0(S^1 \ltimes {\mathbb R}^2)$ is the disjoint union of the strata $\{ (1,0) \}$, $ \{ 1\} \times ({\mathbb R}^2\setminus \{0\})$, and $( S^1\setminus \{1\}) \times \{0\}$ and that the loop space is smooth outside the singular point $(1,0)$. \begin{proposition}\label{prop:vanishingideal} Around the point $(1,0)$, the vanishing ideal $ \mathcal{J} \big( (S^1\setminus \{ -1 \}) \times B_\varrho(0) \big) $ consists of all smooth $f : (S^1\setminus \{ -1 \}) \times B_\varrho(0) \to {\mathbb R}$ which can be written in the form \begin{equation} \label{eq:expansion-functions-vanishing-ideal} f = f_1 \tau x_1 + f_2 \tau x_2 , \quad \text{where } f_1,f_2 \in \mathcal{C}^\infty\big( (S^1\setminus \{ -1 \}) \times B_\varrho(0) \big) \ . \end{equation} Around the stratum $ \{ 1\} \times ({\mathbb R}^2\setminus \{0\})$, a function $f \in \mathcal{C}^\infty \big( (S^1\setminus \{ -1 \}) \times ({\mathbb R}^2\setminus \{0\})\big)$ lies in the ideal $ \mathcal{J} \big( ( S^1\setminus \{ -1 \}) \times ({\mathbb R}^2\setminus \{0\})\big)$ if and only if $f$ is of the form $h \tau $ for some $h \in \mathcal{C}^\infty \big( (S^1\setminus \{ -1 \}) \times ({\mathbb R}^2\setminus \{0\})\big)$. Finally, around the stratum $( S^1\setminus \{1\}) \times \{0\}$, a function $f \in \mathcal{C}^\infty \big( (S^1 \setminus \{ 1\}) \times {\mathbb R}^2\big)$ vanishes on $\Lambda_0(S^1 \ltimes {\mathbb R}^2)$ if and only if it is of the form $f_1 x_1 + f_2 x_2$ with $f_1,f_2 \in \mathcal{C}^\infty \big( (S^1 \setminus \{ 1\}) \times {\mathbb R}^2\big)$. \end{proposition} \begin{proof} Since the loop space is smooth at points of the strata $ \{ 1\} \times ({\mathbb R}^2\setminus \{0\})$ and $( S^1\setminus \{1\}) \times \{0\}$, only the case where $f$ is defined on a neighborhood of the singular point $(1,0)$ is non-trivial. 
So let us assume that $ f \in \mathcal{C}^\infty \big( (S^1\setminus \{ -1 \}) \times B_\varrho(0) \big) $ vanishes on $\Lambda_0(S^1 \ltimes {\mathbb R}^2)$. Using the coordinate functions we can consider $f$ as a function of $t \in (-\pi,\pi)$ and $x \in {\mathbb R}^2$. By the Malgrange preparation theorem one then has an expansion \[ f(t, x) + t = c(t,x) (t + a_0 (x)) , \] where $c$ and $a_0$ are smooth and $a_0(0)=0$. Since $f$ vanishes on $(-\pi,\pi)\times \{0\}$, one has $t = c(t,0)\,t$ for all $t\in (-\pi,\pi)$, hence $c (t,0)=1$. Putting $t=0$ and using that $f$ vanishes on $\{0\}\times B_\varrho(0)$ gives $0= c(0,x)a_0(x)$ for all $x \in B_\varrho(0)$. Since $c(0,0)=1$, one obtains $a_0(x)= 0$ for all $x$ in a neighborhood of the origin. After possibly shrinking $B_\varrho (0)$ we can assume that $a_0=0$. Hence \begin{equation} \label{eq:intermediate-expansion} f(t, x) = (c(t,x) - 1 ) t \ . \end{equation} Parametric Taylor expansion of $c(t,x)-1$ around $x=0$ gives \[ c(t,x) - 1 = x_1 r_1 (t,x) + x_2 r_2(t,x), \quad \text{where } r_j (t,x) = \int_0^1 \partial_j c (t, sx) \, ds , \: j=1,2 \ . \] Since the functions $r_j$ are smooth, this expansion together with \eqref{eq:intermediate-expansion} entails \eqref{eq:expansion-functions-vanishing-ideal}.
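As an illustration of \eqref{eq:expansion-functions-vanishing-ideal} consider the function $f = \sin (\tau) \, x_1$, which vanishes on $\Lambda_0(S^1 \ltimes {\mathbb R}^2)$ near $(1,0)$. Indeed, $f = \frac{\sin \tau}{\tau} \, \tau x_1$, where $\frac{\sin \tau}{\tau}$ extends to a smooth function on $(-\pi,\pi)$ with value $1$ at the origin, so $f$ is of the form \eqref{eq:expansion-functions-vanishing-ideal} with $f_1 = \frac{\sin \tau}{\tau}$ and $f_2 = 0$.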
\end{proof} \begin{lemma}\label{lem:coordinate-representations-vector-fields} The vector fields \begin{equation*} \begin{split} Y = Y_{S^1\ltimes {\mathbb R}^2} :\: S^1\times {\mathbb R}^2 \to {\mathbb R}^2 , (g,x) \mapsto x - g x \quad \text{and} \quad Z = Z_{S^1\ltimes {\mathbb R}^2} :\: S^1\times {\mathbb R}^2 \to {\mathbb R}^2 , (g,x) \mapsto x + gx \end{split} \end{equation*} have coordinate representations $Y = Y_1 \frac{\partial}{\partial x_1} + Y_2 \frac{\partial}{\partial x_2}$ and $Z = Z_1 \frac{\partial}{\partial x_1} + Z_2 \frac{\partial}{\partial x_2}$ with coefficients given by \begin{equation} \label{eq:expansionY} \begin{split} Y_1 = x_1 (1-\cos \tau) - x_2 \sin \tau \quad \text{and} \quad Y_2 = x_2 (1 -\cos \tau ) + x_1 \sin \tau \end{split} \end{equation} respectively by \begin{equation} \label{eq:expansionZ} \begin{split} Z_1 = x_1 (1+\cos \tau) + x_2 \sin \tau \quad \text{and} \quad Z_2 = x_2 (1+\cos \tau )- x_1 \sin \tau \ . \end{split} \end{equation} Moreover, the vector fields $Y$ and $Z$ have square norms \begin{equation} \label{eq:squarenorms} \| Y \|^2 = 2 r^2 \, (1 - \cos \tau ) = r^2\tau^2 (\xi \circ\tau) \quad \text{and} \quad \| Z \|^2 = 2 r^2 \, (1+\cos \tau ) \ , \end{equation} where $\xi$ is real analytic with positive values over $(-\pi,\pi)$ and value $1$ at the origin. \end{lemma} \begin{proof} The representations \begin{equation*} \begin{split} Y|_{(S^1 \setminus \{ -1\})\times {\mathbb R}^2} & = \left( x_1 (1-\cos \tau) - x_2 \sin \tau \right) \frac{\partial}{\partial x_1} + \left( x_2 (1 -\cos \tau ) + x_1 \sin \tau \right) \frac{\partial}{\partial x_2} \quad \text{and} \\ Z|_{(S^1 \setminus \{ -1\})\times {\mathbb R}^2} & = \left( x_1 (1+\cos \tau) + x_2 \sin \tau \right) \frac{\partial}{\partial x_1} + \left( x_2 (1+\cos \tau )- x_1 \sin \tau \right) \frac{\partial}{\partial x_2} \end{split} \end{equation*} are immediate by definition of $Y$ and $Z$ and since $S^1$ acts by rotation.
Note that these formulas still hold true when extending $\tau$ to the whole circle by putting $\tau (-1) = \pi$. The extended $\tau$ is then no longer continuous at $g=-1$, but its compositions with the trigonometric functions $\cos$ and $\sin$ are smooth on $S^1$. For the norms of $Y$ and $Z$ one now obtains \[ \| Y \|^2 = x_1^2 (1 -\cos \tau)^2 + x_2^2 \sin^2 \tau \, + x_2^2 (1-\cos \tau)^2 + x_1^2 \sin^2 \tau = 2 r^2 \, (1 - \cos \tau ) \] and \[ \| Z \|^2 = x_1^2 (1+\cos \tau)^2 + x_2^2 \sin^2 \tau + x_2^2 (1+\cos \tau)^2 + x_1^2 \sin^2 \tau = 2 r^2 \, (1+\cos \tau ) \ , \] where in both computations the mixed terms cancel. By power series expansion of $1 -\cos t$ one obtains the statement about $\xi$. \end{proof} \begin{lemma} For all open subsets $U$ of the loop space $\Lambda_0 = \Lambda_0 (S^1 \ltimes {\mathbb R}^2)$ and all $k\in {\mathbb N}$ the map \[ \Theta^k_U : \rOmega{k}{\Lambda_0}(U) \to \Gamma^\infty (U,\wedge^k F) \] from Prop.~\ref{prop:factorization-relative-forms} is injective. \end{lemma} \begin{proof} Since $\rOmega{0}{\Lambda_0}(U) = \mathcal{C}^\infty (U) = \Gamma^\infty (U,\wedge^0 F)$ and $\Theta^0_U = \operatorname{id}$, we only need to prove the claim for $k\geq 1$. To this end we have to show that for $\omega \in \Gamma^\infty (\widetilde{U}, \wedge^k s^* T^*M)$ with $[\omega]_F=0$ the relation $[\omega]_{\Lambda_0} =0$ holds true. Here, as before, $\widetilde{U}\subset S^1\times {\mathbb R}^2$ is an open subset such that $U = \widetilde{U} \cap \Lambda_0(S^1\ltimes {\mathbb R}^2)$. In other words we have to show that each such $\omega$ has the form \[ \omega = \sum_{l \in L} f_l \omega_l + \sum_{j\in J} d_\textup{rel} h_j \wedge \eta_j \ , \] where $L,J$ are finite index sets, $f_l,h_j \in \mathcal{J} (\widetilde{U})$, $\omega_l \in \Gamma^\infty (\widetilde{U},\wedge^k s^* T^*M)$, and $\eta_j \in \Gamma^\infty (\widetilde{U},\wedge^{k-1}s^* T^*M)$. Since the involved sheaves are fine, we need to show the claim only locally. So let $(g,v) \in \Lambda_0(S^1\ltimes {\mathbb R}^2)$.
Choose $\varrho>0$ and $\varepsilon >0$ with $\varepsilon < \pi $ such that $0 \notin B_\varrho(v) $ if $v \neq 0$ and such that $e^{\sqrt{-1} t}g \neq 1 $ for all $t$ with $|t|< \varepsilon$ if $g \neq 1$. Let \[ \widetilde{U}= \left\{ (e^{\sqrt{-1} t}g, w) \in S^1\times {\mathbb R}^2\mid |t| < \varepsilon \: \& \: \| v - w \| < \varrho \right\} \ . \] Using the coordinate maps $\tau,x_1,x_2$ we now consider three cases. \textit{1.~Case:} $g=1$ and $v=0$. Then $F_{(1,w)} = T^*_w {\mathbb R}^2$, hence $\omega_{(1,w)} =0$ for all $w$ such that $(1,w) \in \widetilde{U} \cap \Lambda_0$. Hence, by the Hadamard lemma, \[ \omega = \tau \sum_{1\leq i_1< \ldots < i_k \leq 2} \omega_{i_1,\ldots , i_k} dx_{i_1}\wedge \ldots \wedge dx_{i_k} \] with $\omega_{i_1,\ldots , i_k} \in \mathcal{C}^\infty (\widetilde{U})$. Now observe that $\tau x_j \in \mathcal{J} (\widetilde{U})$ for $j=1,2$ and that $d_\textup{rel} (\tau x_j) = \tau dx_j$. Therefore $\omega \in d_\textup{rel} \mathcal{J} (\widetilde{U}) \wedge \Gamma^\infty (\widetilde{U}, \wedge^{k-1} s^* T^*M)$. \textit{2.~Case:} $g \neq 1$ and $v=0$. Then $F_{(h,0)} = 0$ for all $h \in S^1$ with $(h,0) \in \widetilde{U}\cap \Lambda_0$. Hence $\omega$ can be any $k$-form on $\widetilde{U}$. But over $\widetilde{U}$ one has $x_1, x_2\in \mathcal{J}(\widetilde{U})$, which entails that \[ \omega = \sum_{1\leq i_1< \ldots < i_k \leq 2} \omega_{i_1,\ldots , i_k} dx_{i_1}\wedge \ldots \wedge dx_{i_k} \in d_\textup{rel} \mathcal{J} (\widetilde{U}) \wedge \Gamma^\infty (\widetilde{U}, \wedge^{k-1} s^* T^*M). \] \textit{3.~Case:} $g=1$ and $v \neq 0$. Then $F_{(1,w)} = T^*_w{\mathbb R}^2$ for all $w$ such that $(1,w) \in \widetilde{U}\cap \Lambda_0$, hence $\omega_{(1,w)} = 0$ for all such $w$. As in the first case one obtains \[ \omega = \tau \sum_{1\leq i_1< \ldots < i_k \leq 2} \omega_{i_1,\ldots , i_k} dx_{i_1}\wedge \ldots \wedge dx_{i_k} \] with $\omega_{i_1,\ldots , i_k} \in \mathcal{C}^\infty (\widetilde{U})$.
Since $\tau\in \mathcal{J}(\widetilde{U})$ one obtains $\omega \in \mathcal{J}(\widetilde{U}) \Gamma^\infty (\widetilde{U}, \wedge^k s^* T^*M)$. So in all three cases $\omega$ is in the differential graded ideal \[ \mathcal{J}(\widetilde{U}) \Gamma^\infty (\widetilde{U}, \wedge^k s^* T^*M) + d_\textup{rel} \mathcal{J}(\widetilde{U}) \wedge \Gamma^\infty (\widetilde{U}, \wedge^{k-1} s^* T^*M) \] and $[\omega]_{\Lambda_0} =0$. Hence $\Theta^k_U$ is injective. \end{proof} \begin{lemma} For every $S^1$-invariant open $V \subset {\mathbb R}^2$ the restriction morphism \[ [-]_{\Lambda_0}: \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ) \to \rOmega{\bullet}{\Lambda_0} \big(\Lambda_0(S^1 \ltimes V )\big) \] maps the space of cycles $Z_k \big( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ) , Y\lrcorner \big)$ onto the space $\hrOmega{k}{\Lambda_0} \big(\Lambda_0(S^1 \ltimes V )\big)$ of horizontal relative forms. \end{lemma} \begin{proof} Since the sheaf $\hrOmega{\bullet}{\Lambda_0}$ is fine it suffices to verify this claim for $V \subset {\mathbb R}^2$ of the form $V = B_\varrho(0) $ or $ V = B_\varrho (0) \setminus \overline{B}_\sigma (0)$ where $0 < \sigma < \varrho$. So assume that $1\leq k\leq 2$ and $[\omega]_{\Lambda_0} \in \hrOmega{k}{\Lambda_0} \big(\Lambda_0(S^1 \ltimes V )\big)$ for some relative form $\omega \in \Omega^k_{S^1 \ltimes V \to S^1} (S^1 \ltimes V )$. Now observe that $N_v^* = {\mathbb R} dr$ for all $v \in {\mathbb R}^2\setminus \{ 0 \}$, where $dr = \frac{1}{r} (x_1 \, dx_1 + x_2 \, dx_2)$. Hence, $\omega|_{\{ 1\} \times V} = 0$ if $k=2$ and $\omega|_{\{ 1\} \times (V\setminus \{ 0\})} = \varphi \, dr$ with $\varphi \in \mathcal{C}^\infty (V \setminus \{ 0 \}) $ if $k=1$. Since the claim for $k=2$ is thereby proved, we assume from now on that $k=1$. In Cartesian coordinates, $\omega = \omega_1 dx_1 + \omega_2 dx_2$ with $\omega_j \in \mathcal{C}^\infty \big(S^1 \times (V \setminus \{ 0 \})\big)$, $j=1,2$.
Comparing with the expansion in polar coordinates gives the following equality over $V \setminus \{ 0 \}$ \begin{equation} \label{eq:comparison-polar-coordinates} \omega_j ( 1, - ) = \frac{\varphi}{r} x_j \quad \text{for } j=1,2 \ . \end{equation} Note that if the origin is an element of $V$, then $\omega_{(1,0)} =0$, hence $(\omega_j)_{(1,0)} =0$, $j=1,2$. Choose a smooth cut-off function $\chi : S^1 \to [0,1]$ such that $\chi$ is identically $1$ on a neighborhood of $1$ and identically $0$ on a neighborhood of $-1$. Now define the $k$-form $\widehat{\omega} \in \Omega^k ( S^1 \times V ) $ by \[ \widehat{\omega}_{(g,x)} = \begin{cases} \frac{\chi (g) \, \varphi (x)}{\| Z(g,x)\|} \, \left\langle Z(g,x) ,- \right\rangle : {\mathbb R}^2 \to {\mathbb R} & \text{for } g \in \operatorname{supp} \chi \text{ and } x \in V \setminus \{ 0 \} \ , \\ 0 & \text{for } g \in S^1 \setminus \operatorname{supp} \chi \text{ or } x \in V \cap \{ 0 \} \ , \end{cases} \] where $\langle - , - \rangle $ is the Euclidean inner product on ${\mathbb R}^2$. It needs to be verified that $\widehat{\omega}$ is smooth on a neighborhood of $S^1\times \{ 0 \}$ in case the origin is in $V$. To simplify notation we denote the composition of a function $f : V \to {\mathbb R}$ with the projection $S^1 \times V \to V$ again by $f$ and likewise for a function $\widetilde{f}: S^1 \to {\mathbb R}$.
With this notational agreement the formula for $Z$ in \eqref{eq:expansionZ} entails by \eqref{eq:comparison-polar-coordinates} over $(S^1 \setminus \{ -1 \}) \times (V \setminus \{ 0 \})$ \begin{equation*} \begin{split} \widehat{\omega}&|_{(S^1 \setminus \{ -1 \}) \times (V \setminus \{ 0 \})} \, = \\ & = \frac{\chi \, \varphi}{r \, \sqrt{2(1+\cos \tau)}} \, \Big( \left( (1+\cos \tau)x_1 + \sin \tau \, x_2 \right) dx_1 + \left( (1+\cos \tau)x_2 - \sin \tau \, x_1 \right) dx_2 \Big) = \\ & = \frac{\chi}{\sqrt{2(1+\cos \tau)}} \, \Big( \left( (1+\cos \tau)\omega_1 + \sin \tau \, \omega_2 \right) dx_1 + \left( (1+\cos \tau) \omega_2 - \sin \tau \, \omega_1 \right) dx_2 \Big) \ . \end{split} \end{equation*} The right hand side can be extended by $0$ to a smooth form on $S^1 \times V$, hence $ \widehat{\omega}$ is smooth. Moreover, the restriction of $ \widehat{\omega}$ to $\{ 1 \} \times V$ coincides with the restriction $\omega|_{\{ 1 \} \times V}$. Finally check that for $x \neq 0$ and $g\in S^1\setminus \{ -1 \}$ \[ Y (g,x) \lrcorner \, \widehat{\omega}_{(g,x)} = \frac{\chi (g) \, \varphi (x)}{\|x + gx\|} \, \left\langle x + gx , x - gx \right\rangle = \frac{\chi (g) \, \varphi (x)}{\|x + gx\|} \, \big( \|x\|^2 - \|gx\|^2 \big) = 0 \ , \] since $g$ acts isometrically on ${\mathbb R}^2$. Hence $\widehat{\omega} \in Z_k \big( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ) , Y\lrcorner \big)$ and $[ \widehat{\omega}]_{\Lambda_0} = [ \omega]_{\Lambda_0}$. \end{proof} \begin{proposition} For each $S^1$-invariant open $V \subset {\mathbb R}^2$ the chain map \[ \big( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ), Y \lrcorner \big) \to \big( \hrOmega{\bullet}{\Lambda_0} (\Lambda_0(S^1 \ltimes V )), 0 \big) \] is a quasi-isomorphism. \end{proposition} \begin{proof} It remains to prove that every $\omega \in Z_k \big( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ), Y \lrcorner \big)$ which satisfies the condition $ [ \omega]_{\Lambda_0} = 0$ is of the form $\omega = Y \lrcorner \, \eta $ for some $\eta \in \Omega^{k+1}_{S^1 \ltimes V \to S^1} (S^1 \ltimes V )$.
Let us show this. We consider the three non-trivial cases $k=0,2,1$ separately. \textit{1.~Case:} $k=0$. Then $\omega$ is a smooth function on $S^1 \ltimes V$ vanishing on $\Lambda_0$. By Prop.~\ref{prop:vanishingideal}, the function $\omega$ can be expanded over $(S^1\setminus \{ -1\}) \times V$ in the form \[ \omega|_{(S^1\setminus \{ -1\}) \times V} = \omega_1\tau x_1 + \omega_2\tau x_2 \ ,\quad\text{where } \omega_1,\omega_2 \in \mathcal{C}^\infty \big((S^1\setminus \{ -1\}) \times V\big) \ . \] Moreover, the interior product of a form $\eta = \eta_1 dx_1 + \eta_2 dx_2 \in\Omega^1_{S^1 \ltimes V \to S^1} (S^1 \ltimes V )$ with the vector field $Y$ has the form \[ Y \lrcorner\, \eta = Y_1 \eta_1 + Y_2\eta_2 = (x_1(1- \cos\tau)-x_2\sin \tau)\eta_1 + (x_2(1-\cos\tau)+x_1\sin \tau)\eta_2 \ . \] This means that it suffices to find $\eta_1,\eta_2 \in \mathcal{C}^\infty (S^1 \ltimes V) $ which solve the system of equations \begin{equation} \label{eq:system-equation-coefficient-fcts} \begin{split} \omega_1\tau& = (1- \cos\tau) \eta_1 + (\sin \tau) \eta_2 \ , \\ \omega_2\tau & = - (\sin \tau) \eta_1 + (1 -\cos \tau) \eta_2 \ . \end{split} \end{equation} The $1$-form $\eta = \eta_1 dx_1 + \eta_2 dx_2$ will then satisfy $Y \lrcorner\, \eta = \omega$ which will prove the first case. The functions \begin{equation*} \label{eq:linear-equation-coefficient-fcts} \begin{split} \eta_1& = \frac{\tau(1-\cos \tau)}{(1-\cos \tau)^2+\sin^2\tau} \omega_1 - \frac{\tau\sin \tau}{(1-\cos \tau)^2+\sin^2\tau} \omega_2 = \frac{\tau}{2} \omega_1 - \frac{\tau\sin \tau}{2 (1-\cos \tau)} \omega_2 \ \\ \eta_2 & = \frac{\tau \sin \tau}{(1-\cos \tau)^2+\sin^2\tau} \omega_1 + \frac{\tau (1-\cos \tau)}{(1-\cos \tau)^2+\sin^2\tau} \omega_2 = \frac{\tau\sin \tau}{2 (1-\cos \tau)} \omega_1 + \frac{\tau}{2} \omega_2 \end{split} \end{equation*} are now well-defined and smooth over $(S^1 \times V )\setminus (\{ 1 \} \times {\mathbb R}^2)$. They also solve \eqref{eq:system-equation-coefficient-fcts}.
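Indeed, inserting these expressions into \eqref{eq:system-equation-coefficient-fcts} the mixed terms in $\omega_2$ respectively $\omega_1$ cancel, and one obtains for the first equation \[ (1- \cos\tau)\, \eta_1 + (\sin \tau)\, \eta_2 = \frac{\tau \big( (1-\cos \tau)^2 + \sin^2\tau \big)}{(1-\cos \tau)^2+\sin^2\tau} \, \omega_1 = \tau\, \omega_1 \ , \] and analogously $- (\sin \tau)\, \eta_1 + (1 -\cos \tau)\, \eta_2 = \tau\, \omega_2$.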
We are done when we can show that they can be extended smoothly to the whole domain $S^1 \times V$. But this is clear since the function $(-\pi, \pi)\setminus \{ 0 \} \to {\mathbb R}$, $t\mapsto \frac{t \sin t}{2 (1-\cos t)}$ has a holomorphic extension near the origin as one verifies by power series expansion. \textit{2.~Case:} $k=2$. Let $\omega \in \Omega^2_{S^1 \ltimes V \to S^1} (S^1 \ltimes V )$ and $Y \lrcorner \, \omega =0$. Then $\omega = \varphi dx_1 \wedge dx_2$ for some smooth function $\varphi \in \mathcal{C}^\infty (S^1 \ltimes V)$. Now compute using \eqref{eq:expansionY} \begin{equation*} 0=Y\lrcorner\,\omega=\varphi\cdot \big( Y_1 \, dx_2 - Y_2 \, dx_1 \big) \ , \end{equation*} hence $\varphi\, Y_1 = \varphi\, Y_2 = 0$. Since $Y$ vanishes nowhere on the complement of the loop space $\Lambda_0$, which is dense in $S^1 \times V$, continuity entails $\varphi=0$ and $\omega=0$. \textit{3.~Case:} $k=1$. Observe that in this case $\omega$ can be written in the form $\omega= \omega_1 dx_1 + \omega_2 dx_2$ with $\omega_1,\omega_2 \in \mathcal{J} ( S^1 \times V ) \subset \mathcal{C}^\infty( S^1 \times V )$. By Lemma \ref{eq:expansion-functions-vanishing-ideal}, $\omega_j|_{(S^1 \setminus \{ -1\}) \times V} = \tau \Omega_j$ for $j=1,2$ and functions $\Omega_j \in \mathcal{C}^\infty((S^1 \setminus \{ -1\}) \times V) $. The condition $Y \lrcorner \, \omega = 0 $ implies \begin{equation} \label{eq:cycle-condition-one-form} Y_1 \Omega_1 + Y_2 \Omega_2 = Y_1 \omega_1 + Y_2 \omega_2 = 0 \ . \end{equation} Now define the function $\varphi : (S^1 \times V )\setminus \Lambda_0 \to {\mathbb R}$ by $ \varphi = \left. \frac{1}{\|Y\|^2} (-Y_2 \omega_1 + Y_1 \omega_2 )\right|_{(S^1\times V) \setminus \Lambda_0} $. Since $\|Y\|^2 = 2 r^2 (1-\cos \tau)$, the vector field $Y$ vanishes nowhere on $ (S^1 \times V )\setminus \Lambda_0$, so $\varphi$ is well-defined and smooth.
By \eqref{eq:cycle-condition-one-form} one computes \begin{equation*} \varphi (g,x) = \begin{cases} \frac{ \omega_2}{Y_1} (g,x)& \text{ if } g \neq 1, x \neq 0 \text{ and } Y_1(g,x) \neq 0 \ , \\ \frac{- \omega_1}{Y_2} (g,x) & \text{ if } g \neq 1, x \neq 0 \text{ and } Y_2(g,x) \neq 0 \ . \end{cases} \end{equation*} Assume that $\varphi$ can be extended smoothly to $S^1 \times V $. Then $\eta = \varphi dx_1 \wedge dx_2 $ is a smooth form on $S^1 \times V$ which satisfies \[ Y \lrcorner \, \eta = \varphi ( Y_1 dx_2 - Y_2 dx_1 ) = \omega \ . \] So it remains to verify that $\varphi$ can be smoothly extended to $S^1 \times V$. To this end we use the complex coordinate $z= x_1 + \sqrt{-1} x_2$ of $V$ and introduce the complex valued function $\Omega = \Omega_1 + \sqrt{-1} \Omega_2$. Moreover, we define $y: S^1 \times V \to {\mathbb C} $, $(g,z) \mapsto z - gz $. Then \begin{equation} \label{eq:representation-vector-field} y = (1 - e^{\sqrt{-1}\tau})z = Y_1 + \sqrt{-1}Y_2 \end{equation} and, by Eq.~\eqref{eq:cycle-condition-one-form}, \begin{equation} \label{eq:complex-cycle-condition} \frac{1}{2}( y\overline{\Omega}+\overline{y}\Omega) = Y_1\Omega_1+Y_2\Omega_2 = 0 \ . \end{equation} Next observe that $1-e^{\sqrt{-1}\tau} = -\sqrt{-1}\tau \big(1-\sqrt{-1}\tau (\zeta \circ \tau)\big)$ for some holomorphic $\zeta :{\mathbb C} \to {\mathbb C}$ which fulfills $\zeta (0)=\frac 12$. Then Eq.~\eqref{eq:complex-cycle-condition} entails \[ \big(1-\sqrt{-1}\tau (\zeta \circ \tau)\big) z \overline{\Omega} = \big(1 + \sqrt{-1}\tau (\overline{\zeta} \circ \tau)\big) \overline{z} \Omega \ . \] By power series expansion it follows that $\left. \frac{\partial^k\Omega}{\partial \overline{z}^k} \right|_{z=0} = 0$ for all $k \in {\mathbb N}$. Hence, by Taylor's Theorem $\Omega = z \Phi $ for some smooth $\Phi : S^1 \times V \to {\mathbb C}$.
Since by Lemma \ref{lem:coordinate-representations-vector-fields} $\|Y\|^2 = r^2 \tau^2 (\xi \circ \tau)$ for some holomorphic function $\xi$ not vanishing on $(-\pi,\pi)$ the following equality holds over $(S^1 \setminus \{ \pm 1\}) \times (V \setminus \{ 0\})$ \begin{equation*} \begin{split} \varphi & = \frac{1}{\tau r^2(\xi \circ \tau)} (- Y_2 \Omega_1 + Y_1 \Omega_2) = \frac{\sqrt{-1}}{2\tau r^2(\xi \circ \tau)}\big(y\overline{\Omega}-\overline{y}\Omega \big) = \\ & = \frac{1}{2r^2(\xi \circ \tau)} \Big( \big(1-\sqrt{-1}\tau (\zeta \circ \tau)\big) z \overline{z} \overline{\Phi} + \big(1+\sqrt{-1}\tau (\overline{\zeta} \circ \tau)\big) z \overline{z} \Phi \Big) = \\ & = \left.\frac{1}{(\xi \circ \tau)} \big(1-\sqrt{-1}\tau (\zeta \circ \tau)\big) \overline{\Phi} \right|_{(S^1 \setminus \{ \pm 1\}) \times (V \setminus \{ 0\})} \ . \end{split} \end{equation*} Since the right hand side has a smooth extension to $(S^1\setminus \{ -1 \}) \times V$, the function $\varphi$ can be smoothly extended to $S^1 \times V$ and the claim is proved. \end{proof} \input{stitching} \section{The convolution sheaf of a proper Lie groupoid} Throughout this paper, $\mathsf{G}\rightrightarrows M$ denotes a Lie groupoid over a base manifold $M$. Elements of $M$ are called points of the groupoid, those of $\mathsf{G}$ its arrows. The symbols $s,t:\mathsf{G}\to M$ denote the source and target map, respectively, and $u :M \rightarrow \mathsf{G}$ the unit map. By definition of a Lie groupoid, $s$ and $t$ are assumed to be smooth submersions. This implies that the space of $k$-tuples of composable arrows \[ \mathsf{G}_k:=\{(g_1,\ldots,g_k)\in\mathsf{G}^k \mid s(g_i)=t(g_{i+1}) \text{ for $i=1,\ldots,k-1$}\} \] is a smooth manifold, and multiplication of arrows \[ m: \mathsf{G}_2\to\mathsf{G}, \: (g_1,g_2) \mapsto g_1 \, g_2 \] a smooth map.
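A basic example to keep in mind is the action groupoid $G \ltimes M \rightrightarrows M$ associated to a smooth action of a Lie group $G$ on a manifold $M$. Its space of arrows is $G \times M$, and its structure maps read \[ s(g,x) = x, \quad t(g,x) = gx, \quad u(x) = (e,x), \quad m\big( (g,hx), (h,x) \big) = (gh, x) \ . \] The groupoid $S^1 \ltimes V$ considered in the preceding sections is of this form.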
If $g\in \mathsf{G}$ is an arrow with $s(g)=x $ and $t(g)=y$, we sometimes denote such an arrow by $g:y \leftarrow x$, and write $\mathsf{G} (y,x)$ for the space of arrows with source $x$ and target $y$. The $s$-fiber over $x$, i.e.~the manifold $s^{-1} (x)$, will be denoted by $\mathsf{G} ( - , x)$, the $t$-fiber over $y$ by $\mathsf{G} (y, -)$. Note that for each object $x\in M$ multiplication of arrows induces on $\mathsf{G}(x,x)$ a group structure. This group is called the \emph{isotropy group} of $x$ and is denoted by $\mathsf{G}_x$. The union of all isotropy groups \[ \Lambda_0 \mathsf{G} := \bigcup_{x\in M} \mathsf{G}_x = \{ g\in \mathsf{G} \mid s(g)=t(g) \} \] will be called the \emph{loop space} of $\mathsf{G}$. Given a Lie groupoid $\mathsf{G}\rightrightarrows M$ two points $x,y \in M$ are said to lie in the same orbit if there is an arrow $g: y \leftarrow x$. In the following, we will always write $\mathcal{O}_x$ for the orbit containing $x$, and $M/\mathsf{G}$ for the space of orbits of the groupoid $\mathsf{G}$. The orbit space will always carry the quotient topology with respect to the canonical map $\pi: M \to M/\mathsf{G}$. Note that $M/\mathsf{G}$ need not be Hausdorff unless $\mathsf{G}$ is a proper Lie groupoid, which means that the map $(s,t):\mathsf{G} \rightarrow M\times M$ is a proper map. Sometimes, we need to specify to which groupoid a particular structure map belongs. In such a situation we will write $s_\mathsf{G}$, $m_\mathsf{G}$, $\pi_\mathsf{G}$ and so on. In the following, we will define a sheaf of algebras $\mathcal{A}$ on $M/\mathsf{G}$ in such a way that the algebra $\mathcal{A}_\textup{c}(M/\mathsf{G})$ of compactly supported global sections of $\mathcal{A}$ coincides with the smooth convolution algebra of the groupoid. To this end, we use a smooth left Haar measure on $\mathsf{G}$.
Recall that by a smooth left Haar measure on $\mathsf{G}$ one understands a family of measures $(\lambda^x)_{x\in M}$ such that the following properties hold true: \begin{enumerate}[(H1)] \item For every $x \in M$, $\lambda^x$ is a positive measure on $\mathsf{G}(x,-)$ with $\operatorname{supp} \lambda^x = \mathsf{G}(x,-)$. \item For every $g\in \mathsf{G}$, the family $(\lambda^x)_{x\in M}$ is invariant under left multiplication \[ L_g : \mathsf{G}(s(g),-) \to \mathsf{G}(t(g),-), \: h \mapsto gh \] or in other words \[ \int\limits_{\mathsf{G} (s(g),-) } \! f(gh) \, d\lambda^{s(g)} (h) = \int\limits_{\mathsf{G} (t(g),-) } \! f(h) \, d\lambda^{t(g)} (h) \quad \text{for all $ f\in \mathcal{C}^\infty_\textup{c} (\mathsf{G})$}. \] \item The system is smooth in the sense that for every $f \in \mathcal{C}^\infty_\textup{c} (\mathsf{G})$ the map \[ M \to {\mathbb C}, \: x \mapsto \int\limits_{\mathsf{G} (x,-) } \! f (h) \, d\lambda^x (h) \] is smooth. \end{enumerate} Let us fix a smooth left Haar measure $(\lambda^x)_{x\in M}$ on $\mathsf{G}$. Given an open set $U \subset M/\mathsf{G}$ we first put \begin{equation} \label{eq:DefPreImgsOpen} U_0 := \pi^{-1}(U), \quad U_1 := s^{-1} (U_0) \subset \mathsf{G}_1 \quad\text{and} \quad U_{k+1} := \bigcap_{i=1}^k \sigma_i^{-1} ( U_k) \subset \mathsf{G}_{k+1} \text{ for all $k\in {\mathbb N}^*$}, \end{equation} where $\sigma_i :\mathsf{G}_{k+1} \rightarrow \mathsf{G}_k$, $(g_1,\ldots , g_{k+1}) \mapsto (g_1,\ldots , g_ig_{i+1}, \ldots , g_{k+1})$. Then we define \begin{equation}\label{eq:calAU} \mathcal{A}(U):= \big\{ f \in \mathcal{C}^\infty \big( U_1 \big) \mid \operatorname{supp} f \text{ is longitudinally compact} \big\} \ . \end{equation} Hereby, a subset $K \subset \mathsf{G}$ is called \emph{longitudinally compact}, if for every compact subset $C \subset M/\mathsf{G}$ the intersection $K \cap s^{-1} \pi^{-1} (C)$ is compact.
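For instance, if $\mathsf{G} = G \ltimes M$ is the action groupoid of a compact Lie group $G$ acting smoothly on $M$, so that $s(g,x) = x$ and $t(g,x) = gx$, then every $t$-fiber \[ \mathsf{G}(x,-) = \big\{ (g, g^{-1}x) \mid g \in G \big\} \] can be identified with $G$ itself. Transporting the normalized Haar measure of $G$ to these fibers yields a smooth left Haar measure $(\lambda^x)_{x\in M}$ on $\mathsf{G}$; property (H2) then amounts precisely to the left invariance of the Haar measure of $G$.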
Obviously, every $\mathcal{A}(U)$ is a linear space, and the map which assigns to an open $U\subset M/\mathsf{G}$ the space $\mathcal{A}(U)$ forms a sheaf on $M/\mathsf{G}$ which in the following will be denoted by $\mathcal{A}$ or by $\mathcal{A}_\mathsf{G}$ if we want to emphasize the underlying groupoid. The section space $\mathcal{A}(U)$ over $U\subset M/\mathsf{G}$ open becomes an associative algebra with the \emph{convolution product} \begin{equation} \label{eq:convolution-product} f_1 \ast f_2 \, (g) := \int\limits_{\mathsf{G} (t(g),-)} \! f_1 (h) \, f_2 (h^{-1}g) \, d\lambda^{t(g)} (h) \ , \quad f_1,f_2 \in \mathcal{A}(U), \: g \in \mathsf{G} \ . \end{equation} The convolution product is compatible with the restriction maps, hence $\mathcal{A}$ becomes a sheaf of algebras on $M/\mathsf{G}$. Let us assume from now on that the groupoid $\mathsf{G}$ is proper. Recall from \cite{PflPosTanGOSPLG} that then the orbit space $M/\mathsf{G}$ carries the structure of a differentiable stratified space in a canonical way. The structure sheaf $\mathcal{C}^\infty_{M/\mathsf{G}}$ coincides with the sheaf of continuous functions $\varphi :U \to {\mathbb R}$ with $U \subset M/\mathsf{G}$ open such that $\varphi \circ \pi \in \mathcal{C}^\infty ( U_0 )$. Now observe that the action \[ \mathcal{C}^\infty_{M/\mathsf{G}} (U) \times \mathcal{A} (U) \rightarrow \mathcal{A} (U), \: (\varphi , f ) \mapsto \varphi f := \Big( U_1 \ni g \mapsto \varphi \big(\pi s(g)\big) f (g) \in {\mathbb R} \Big) \] commutes with the convolution product, and turns $\mathcal{A}$ into a $\mathcal{C}^\infty_{M/\mathsf{G}}$-module sheaf. \begin{propdef}\label{propdefn:sheaf} Given a proper Lie groupoid $\mathsf{G} \rightrightarrows M$, the associated sheaf $\mathcal{A}$ is a fine sheaf of algebras over the orbit space $M/\mathsf{G}$ which in addition carries the structure of a $\mathcal{C}^\infty_{M/\mathsf{G}}$-module sheaf.
The space $\mathcal{A}_\textup{c}(M/\mathsf{G})$ of global sections of $\mathcal{A}$ with compact support coincides with the \emph{smooth convolution algebra} of $\mathsf{G}$. We call $\mathcal{A}$ the \emph{convolution sheaf} of $\mathsf{G}$. \end{propdef} For later purposes, we equip the spaces $\mathcal{A} (U)$ with a locally convex topology and a convex bornology. To this end, observe first that for every longitudinally compact subset $K\subset U_1$ the space \[ \mathcal{A} (M/\mathsf{G};K) := \big\{ f \in \mathcal{C}^\infty (\mathsf{G}) \mid \operatorname{supp} f \subset K \big\} \] inherits from $\mathcal{C}^\infty (\mathsf{G})$ the structure of a Fr\'echet space. Moreover, since $\mathcal{C}^\infty (\mathsf{G})$ is nuclear, $\mathcal{A} (M/\mathsf{G};K)$ has to be nuclear as well by \cite[Prop.~50.1]{TreTVSDK}. By separability of $U$ there exists a (countable) exhaustion of $U_1$ by longitudinally compact sets, i.e.~a family $(K_n)_{n\in {\mathbb N}}$ of longitudinally compact subsets of $U_1$ such that $K_n \subset K_{n+1}^\circ$ for all $n\in {\mathbb N}$, and such that $\bigcup_{n\in {\mathbb N}} K_n = U_1$. The space $\mathcal{A} (U)$ can then be identified with the inductive limit of the strict inductive system of nuclear Fr\'echet spaces $\big( \mathcal{A} (M/\mathsf{G};K_n) \big)_{n\in {\mathbb N}}$. It is straightforward to check that the resulting inductive limit topology on $\mathcal{A} (U)$ does not depend on the particular choice of the exhaustion $(K_n)_{n\in {\mathbb N}}$. Thus, $\mathcal{A} (U)$ becomes a nuclear LF-space, where nuclearity follows from \cite[Prop.~50.1]{TreTVSDK}. As an LF-space, $\mathcal{A} (U)$ carries a natural bornology given by the von Neumann bounded sets, i.e.~by the sets $S \subset \mathcal{A} (U)$ which are absorbed by each neighborhood of $0$.
In other words, a subset $S \subset \mathcal{A} (U)$ is bounded if all $f\in S$ are supported in a fixed longitudinally compact subset $K\subset U_1$, and if the set of functions $D(S)$ is uniformly bounded for every compactly supported differential operator $D$ on $U_1$. The bornological point of view is particularly convenient when considering tensor products. In particular one has the following fundamental property. \begin{proposition} \label{Prop:ProdConvSheaves} Let $\mathsf{G} \rightrightarrows M$, and $\mathsf{H}\rightrightarrows N$ be proper Lie groupoids. Denote by $M/\mathsf{G}$ and $N/\mathsf{H}$ their respective orbit spaces. Then $M/\mathsf{G}\times N/\mathsf{H}$ is diffeomorphic as a differentiable stratified space to the orbit space of the product groupoid $\mathsf{G} \times \mathsf{H} \rightrightarrows M\times N$. Moreover, there is a natural isomorphism \begin{equation} \label{eq:tprodconvalg} \mathcal{A}_\mathsf{G} (U) \hat{\otimes} \mathcal{A}_\mathsf{H} (V) \cong \mathcal{A}_{\mathsf{G}\times \mathsf{H}} (U \times V) \end{equation} for any two open sets $U \subset M/\mathsf{G}$ and $V\subset N/\mathsf{H}$. \end{proposition} \begin{proof} The first claim is a consequence of the fact that two elements $(x,y), (x',y') \in M\times N$ lie in the same $(\mathsf{G} \times \mathsf{H})$-orbit if and only if $x$ and $x'$ lie in the same $\mathsf{G}$-orbit and $y$ and $y'$ lie in the same $\mathsf{H}$-orbit. Let us prove the second claim. Let $(K_n)_{n\in {\mathbb N}}$ be an exhaustion of $U_1 := s_\mathsf{G}^{-1} \pi_\mathsf{G}^{-1} (U) $ by longitudinally compact subsets and $(L_m)_{m\in {\mathbb N}}$ an exhaustion of $V_1:= s_\mathsf{H}^{-1} \pi_\mathsf{H}^{-1} (V)$ by such sets. 
Since $\mathcal{A}_\mathsf{G} (U)$ coincides with the inductive limit $\operatornamewithlimits{colim}\limits_{n\in {\mathbb N}}\mathcal{A}_\mathsf{G} (M/\mathsf{G};K_n)$ and $\mathcal{A}_\mathsf{H} (V)$ with $\operatornamewithlimits{colim}\limits_{m\in {\mathbb N}}\mathcal{A}_\mathsf{H} (N/\mathsf{H};L_m)$, \cite[Cor.~2.30]{MeyACH} entails that \begin{equation} \label{eq:IndLimitTensorProduct} \mathcal{A}_\mathsf{G}(U) \hat{\otimes} \mathcal{A}_\mathsf{H} (V) \cong \operatornamewithlimits{colim}\limits_{n\in {\mathbb N}} \mathcal{A}_\mathsf{G} (M/\mathsf{G};K_n) \hat{\otimes} \mathcal{A}_\mathsf{H} (N/\mathsf{H};L_n) . \end{equation} Now observe that $\mathcal{A}_\mathsf{G} (M/\mathsf{G};K_n) \hat{\otimes} \mathcal{A}_\mathsf{H} (N/\mathsf{H};L_m) \cong \mathcal{A}_{\mathsf{G}\times \mathsf{H}}(M/\mathsf{G}\times N/\mathsf{H}; K_n \times L_m)$ by \cite[Prop.~51.6]{TreTVSDK}, and that $( K_n \times L_n)_{n\in {\mathbb N}}$ is an exhaustion of $(U \times V)_1 = U_1 \times V_1$ by longitudinally compact subsets. Together with Eq.~\eqref{eq:IndLimitTensorProduct} this proves the claim. \end{proof} \section{The cyclic homology of bornological algebras} \label{AppCyHomBornAlg} \subsection{Bornological vector spaces and tensor products} \label{AppBornVecSp} We recall some basic notions from the theory of bornological vector spaces and their tensor products. For details we refer to \cite{HogBFA} and \cite[Chap.~1]{MeyLACH}. \begin{definition}[cf.~{\cite[Chap.~I, 1:1 Def.]{HogBFA}}] By a \emph{bornology} on a set $X$ one understands a set $\mathscr{B}$ of subsets of $X$ such that the following conditions hold true: \begin{enumerate}[(BS)] \item $\mathscr{B}$ is a covering of $X$, $\mathscr{B}$ is hereditary under inclusions, and $\mathscr{B}$ is stable under finite unions.
\end{enumerate} A map $f: X \to Y$ from a set $X$ with bornology $\mathscr{B}$ to a set $Y$ carrying a bornology $\mathscr{D}$ is called \emph{bounded}, if the following is satisfied: \begin{enumerate}[(BM)] \item The map $f$ preserves the bornologies, i.e.~$f(B) \in \mathscr{D}$ for all $B \in \mathscr{B}$. \end{enumerate} If $V$ is a vector space over $\Bbbk = {\mathbb R}$ or $\Bbbk = {\mathbb C}$, a bornology $\mathscr{B}$ is called a \emph{convex vector bornology} on $V$, if the following additional properties hold true: \begin{enumerate}[(BV)] \item The bornology $\mathscr{B}$ is stable under addition, under scalar multiplication, under forming balanced hulls, and finally under forming convex hulls. \end{enumerate} A set together with a bornology is called a \emph{bornological set}, a vector space with a convex vector bornology a \emph{bornological vector space}. For clarity, we sometimes denote a bornological vector space as a pair $(V,\mathscr{B})$, where $V$ is the underlying vector space, and $\mathscr{B}$ the corresponding convex vector bornology. A bornological vector space $(V,\mathscr{B})$ is called \emph{separated}, if the condition (S) below is satisfied. If in addition condition (C) holds true as well, $(V,\mathscr{B})$ is called \emph{complete}. \begin{enumerate} \item[(S)] The subspace $\{ 0 \}$ is the only bounded subvector space of $V$. \item[(C)] Every bounded set is contained in a completant bounded disk, where a disk $D \subset V$ is called \emph{completant}, if the space $V_D$ spanned by $D$ and semi-normed by the gauge of $D$ is a Banach space. \end{enumerate} \end{definition} As for the category of topological vector spaces there exist functors of separation and completion within the category of bornological vector spaces. \begin{example} Let $V$ be a locally convex topological vector space. 
The \emph{von Neumann bornology} on $V$ consists of all (von Neumann) bounded subsets of $V$, i.e.~of all $B\subset V$ which are absorbed by every $0$-neighborhood. One immediately checks that the von Neumann bornology is a convex vector bornology on $V$. We sometimes denote this bornology by $\mathscr{B}_\textup{vN}$. \end{example} Similarly to the topological case, the bornological tensor product is defined by a universal property. \begin{definition} \label{DefProjBorTensorProd} The (\emph{projective}) \emph{bornological tensor product} of two bornological vector spaces $\big( V_1 , \mathscr{B}_1 \big)$ and $\big( V_2 , \mathscr{B}_2 \big)$ is defined as the up to isomorphism uniquely defined bornological vector space $\big( V_1 \otimes V_2 , \mathscr{B}_1 \otimes \mathscr{B}_2 \big)$ together with a bounded map $V_1 \times V_2 \to V_1 \otimes V_2$ such that for each bornological vector space $( W , \mathscr{D} ) $ and bounded bilinear map $\lambda : V_1 \times V_2 \to W$ there is a unique bounded map $\overline{ \lambda}:V_1 \otimes V_2 \to W$ making the diagram \begin{displaymath} \xymatrix{ V_1 \times V_2 \ar[r]^\lambda\ar[d] & W \\ V_1\otimes V_2 \ar[ur]_{\overline{\lambda}} } \end{displaymath} commute. The completion of the bornological tensor product will be denoted by $V_1 \hat{\otimes} V_2$. \end{definition} \begin{remark} \begin{enumerate} \item Note that the underlying vector space of the bornological tensor product coincides with the algebraic tensor product $V_1 \otimes V_2$ of the underlying vector spaces.
\item Since tensor products of topological vector spaces are also needed in this paper, let us briefly recall that the complete projective (resp.~inductive) topological tensor product $\hat{\otimes}_\pi$ (resp.~$\hat{\otimes}_\iota$) can be defined as the (up to isomorphism) unique bifunctor on the category of complete locally convex topological vector spaces which is universal with respect to jointly (resp.~separately) continuous bilinear maps with values in complete locally convex topological vector spaces. For Fr\'echet spaces, the complete projective and complete inductive tensor products coincide, since separately continuous bilinear maps on Fr\'echet spaces are automatically jointly continuous. See \cite{GroPTTEN} and \cite{MeyLACH} for details. \end{enumerate} \end{remark} \subsection{The Hochschild chain complex} In this section we recall the construction of the cyclic bicomplex associated to a complete bornological algebra $A$ which not necessarily is assumed to be unital. To this end observe first that the space of Hochschild $k$-chains $C_k (A) := A^{\hat{\otimes} (k+1)} $ is defined using the complete projective bornological tensor product $\hat{\otimes}$. Together with the face maps \[ b_{k,i} : C_k (A) \to C_{k-1} (A), \: a_0 \otimes \ldots \otimes a_k \mapsto \begin{cases} a_0 \otimes \ldots \otimes a_ia_{i+1} \otimes \ldots \otimes a_k, & \text{if $0\leq i<k$},\\ a_ka_0 \otimes \ldots \otimes a_{k-1}, & \text{if $i =k$}, \end{cases} \] and the cyclic operators \[ t_k : C_k (A) \to C_k (A), \: a_0 \otimes \ldots \otimes a_k \mapsto (-1)^k \, a_k \otimes a_0 \otimes \ldots \otimes a_{k-1} \] the graded linear space of Hochschild chains $C_\bullet ( A ) := \big( C_k (A) \big)_{k\in {\mathbb N}}$ then becomes a pre-cyclic object (see for example \cite{LodCH} for the precise commutation relations of the face and cyclic operators). 
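In lowest degrees these operators take a particularly transparent form: on $C_1(A) = A \hat{\otimes} A$ one has \[ b_{1,0} (a_0 \otimes a_1) = a_0 a_1, \quad b_{1,1} (a_0 \otimes a_1) = a_1 a_0, \quad t_1 (a_0 \otimes a_1) = - a_1 \otimes a_0 \ , \] so the alternating sum of the face maps is the commutator map $a_0 \otimes a_1 \mapsto a_0 a_1 - a_1 a_0$. Note also that $t_k^{k+1} = (-1)^{k(k+1)} \operatorname{id}_{C_k(A)} = \operatorname{id}_{C_k(A)}$ for all $k$.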
From the pre-cyclic structure one obtains two boundary maps, namely the one of the Bar complex $b' : C_k (A) \to C_{k-1} (A)$, $b' := \sum_{i=0}^{k-1} (-1)^i b_{k,i}$ and the Hochschild boundary $b : C_k (A) \to C_{k-1} (A)$, $b := b' + (-1)^k b_{k,k}$. The commutation relations for the $b_{k,i}$ immediately entail $b^2 = (b')^2 = 0$. This gives rise to the following two-column bicomplex. \begin{displaymath} \xymatrix{ \vdots \ar[d] & \vdots \ar[d] \\ C_2 (A) \ar[d]_b & \ar[l]_{1-t} C_2(A) \ar[d]_{-b'}\\ C_1(A) \ar[d]_b & \ar[l]_{1-t} C_1(A) \ar[d]_{-b'} \\ C_0(A) & \ar[l]_{1-t} C_0(A) } \end{displaymath} We will denote this two-column bicomplex by $C_{\bullet,\bullet} (A)^{\{2\}}$. By definition, the homology of its total complex is the Hochschild homology \begin{equation} \label{eq:DefHochschildHom} HH_\bullet (A ) := H_\bullet \big(\operatorname{Tot}_\bullet\big( C_{\bullet,\bullet}(A)^{\{2\}}\big)\big) \: . \end{equation} \subsection{A twisted version of the theorem by Hochschild--Kostant--Rosenberg and Connes} The classical theorem by Hochschild--Kostant--Rosenberg identifies the Hochschild homology of the algebra of regular functions on a smooth affine variety with the graded module of K\"ahler forms of that algebra \cite{HocKosRosDFRAA}. In his seminal paper \cite{ConNDG}, Connes proved that for compact smooth manifolds an analogous result holds true: the (continuous) Hochschild homology of the algebra of smooth functions on a manifold coincides naturally with the complex of differential forms over the manifold (see \cite{PflCHHCG} for the non-compact case of that result). Here we show a twisted version of this theorem. That result appears to be folklore, cf.~also \cite{BroDavNis}. Assume that $h$ is an orthogonal transformation acting on some Euclidean space ${\mathbb R}^d$. Let $V$ be an open ball around the origin of ${\mathbb R}^d$.
Then we denote by $\ltwist{\mathcal{C}^\infty (V)}{h}$ the space $\mathcal{C}^\infty (V)$ with the $h$-twisted $\mathcal{C}^\infty (V)$-bimodule structure \begin{displaymath} \mathcal{C}^\infty (V) \hat{\otimes} \, \ltwist{\mathcal{C}^\infty (V)}{h} \hat{\otimes} \mathcal{C}^\infty (V) \to \ltwist{\mathcal{C}^\infty(V)}{h} , \: f \otimes a \otimes f' \mapsto \Big( V \ni v \mapsto f(h v) \, a(v) f'(v) \in {\mathbb R} \Big) \ . \end{displaymath} In the following we compute the \emph{twisted} Hochschild homology $H_\bullet \big(\mathcal{C}^\infty (V), \ltwist{\mathcal{C}^\infty(V)}{h}\big)$. Denote by $\langle -,-\rangle$ the Euclidean inner product on ${\mathbb R}^d$. By the orthogonality assumption $\langle -,-\rangle$ is $h$-invariant, hence $V$ is $h$-invariant, too. Recall that for every topological projective resolution $R_\bullet \to \mathcal{C}^\infty (V)$ of $\mathcal{C}^\infty (V)$ as a $\mathcal{C}^\infty (V)$-bimodule the Hochschild homology groups $H_k \big(\mathcal{C}^\infty (V), \ltwist{\mathcal{C}^\infty(V)}{h}\big)$ are naturally isomorphic to the homology groups $H_k \big( R_\bullet , \ltwist{\mathcal{C}^\infty(V)}{h} \big)$, see \cite{HelHBTA}.
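To illustrate the twist in the simplest non-trivial case, let $d=1$ and $h = -\operatorname{id}_{\mathbb R}$, so that $V = (-\varrho,\varrho)$ for some $\varrho > 0$ and $V^h = \{ 0 \}$. The resolution recalled below then leads to the two-term complex \[ \Omega^1 (V) \longrightarrow \mathcal{C}^\infty (V), \quad f \, dv \mapsto 2v\,f \ , \] since the relevant vector field is $v \mapsto v - hv = 2v$. The map $f \mapsto 2vf$ is injective, and by the Hadamard lemma every $g \in \mathcal{C}^\infty (V)$ can be written as $g = g(0) + 2v f$ with $f$ smooth, so the cokernel is ${\mathbb R}$. Hence the twisted Hochschild homology equals ${\mathbb R} = \Omega^0 (V^h)$ in degree $0$ and vanishes in degree $1$, in accordance with Theorem~\ref{thm:twisted-hochschild-homology} below.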
Recall further that a topological projective resolution of the $\mathcal{C}^\infty (V)$-bimodule $\mathcal{C}^\infty (V)$ is given by the Connes-Koszul resolution \cite[p.~127ff]{ConNDG} \begin{equation} \label{eq:ResSmoothFunc} \Gamma^\infty (V \times V , E_d ) \overset{i_Y}{\longrightarrow} \ldots \overset{i_Y}{\longrightarrow} \Gamma^\infty (V \times V , E_1 ) \overset{i_Y}{\longrightarrow} \mathcal{C}^\infty (V \times V) \longrightarrow \mathcal{C}^\infty (V) \longrightarrow 0, \end{equation} where $E_k$ is the pull-back bundle $\operatorname{pr}_2^* \big(\Lambda^k T^*{\mathbb R}^d\big) $ along the projection $\operatorname{pr}_2 : {\mathbb R}^d \times {\mathbb R}^d \to {\mathbb R}^d$, $(v,w) \mapsto w$, and $i_Y$ denotes contraction with the vector field $Y : V \times V \rightarrow \operatorname{pr}_2^* (T{\mathbb R}^d)$, $(v,w) \mapsto w-v$. By tensoring the Connes-Koszul resolution with $\ltwist{\mathcal{C}^\infty(V)}{h}$ one obtains the chain complex \begin{equation} \label{eq:TwistedChainCpl} \Omega^d(V) \overset{i_{Y_h}}{\longrightarrow} \ldots \overset{i_{Y_h}}{\longrightarrow} \Omega^1(V) \overset{i_{Y_h}}{\longrightarrow} \mathcal{C}^\infty (V) \longrightarrow 0 , \end{equation} where the vector field $Y_h : V \rightarrow T{\mathbb R}^d$ is given by $Y_h(v)= v -hv$. Denote by $V^h$ the fixed point set of $h$ in $V$, let $\iota_h : V^h \hookrightarrow V$ be the canonical embedding, and $\pi_h : V \rightarrow V^h$ the restriction of the orthogonal projection onto the fixed point space $({\mathbb R}^d)^h$. One obtains the following commutative diagram. 
\begin{equation} \label{eq:DiagQuismsTwisted} \vcenter{\vbox{ \xymatrix{ \Omega^d(V) \ar[r]^{\hspace{2em}i_{Y_h}} \ar[d]_{\iota_h^*} & \ldots \ar[r]^{\hspace{-1em}i_{Y_h}} & \Omega^1(V) \ar[r]^{i_{Y_h}} \ar[d]_{\iota_h^*} & \mathcal{C}^\infty (V) \ar[d]_{\iota_h^*} \\ \Omega^d(V^h) \ar[r]^{\hspace{2em}0} \ar[d]_{\pi_h^*} & \ldots \ar[r]^{\hspace{-1em}0} & \Omega^1(V^h) \ar[r]^{0}\ar[d]_{\pi_h^*} & \mathcal{C}^\infty (V^h) \ar[d]_{\pi_h^*}\\ \Omega^d(V) \ar[r]^{\hspace{2em}i_{Y_h}} & \ldots \ar[r]^{\hspace{-1em}i_{Y_h}} & \Omega^1(V) \ar[r]^{i_{Y_h}}& \mathcal{C}^\infty (V) }}} \end{equation} \begin{proposition}\label{prop:homotopy} The chain maps $\iota_h^*$ and $\pi_h^*$ are quasi-isomorphisms. \end{proposition} \begin{proof} Since the restriction of the vector field $Y_h$ to $V^h$ vanishes, the diagram \eqref{eq:DiagQuismsTwisted} commutes, and $\iota_h^*$ and $\pi_h^*$ are indeed chain maps. Let $W$ be the orthogonal complement of $({\mathbb R}^d)^h$ in ${\mathbb R}^d$, $m = \dim W$, and $\pi_W := \operatorname{id}_V -\pi_h$ the orthogonal projection onto $W$. Since the $h$-action on $W$ is orthogonal and has the origin as its only fixed point, there exists an orthonormal basis $w_1, \ldots ,w_m$ of $W$, a natural number $l\leq \frac{m}{2}$, and $\theta_1, \ldots , \theta_l \in (-\pi,\pi) \setminus \{ 0 \}$ such that the following holds: \[ hw_k = \begin{cases} \cos \theta_i \, w_{2i-1} + \sin \theta_i \, w_{2i} & \text{if $ k = 2i -1 $ with $i \leq l$}, \\ - \sin \theta_i \, w_{2i-1} + \cos \theta_i \, w_{2i} & \text{if $k=2i$ with $i \leq l$}, \\ - w_k & \text{if $2l < k \leq m$}. \end{cases} \] Denote by $\varphi_t : {\mathbb R}^d \to {\mathbb R}^d$, $t \in {\mathbb R}$ the flow of the complete vector field $Y_h$ or in other words the solution of the initial value problem $\frac{d}{dt} \varphi_t = ( \operatorname{id}_V - h ) \varphi_t $, $\varphi_0 = \operatorname{id}_V$.
Then $\varphi_t v = v$ for all $v \in ({\mathbb R}^d)^h$, and \begin{equation} \label{eq:FundSolutions} \varphi_t (w_k) = \begin{cases} e^{(1-\cos \theta_i)t}\big( \cos ( t \sin \theta_i ) \, w_{2i-1} + \sin ( t \sin \theta_i ) \, w_{2i} \big), & \text{if $k=2i-1$ with $i \leq l$}, \\ e^{(1-\cos \theta_i)t}\big( - \sin ( t \sin \theta_i ) \, w_{2i-1} + \cos ( t \sin \theta_i ) \, w_{2i} \big), & \text{if $ k = 2i $ with $i \leq l$}, \\ e^{2t} w_k, & \text{if $2l < k \leq m$}. \end{cases} \end{equation} Now let $v_1,\ldots , v_n$ be a basis of the fixed point space $({\mathbb R}^d)^h$, and denote by $v^1,\ldots,v^n,w^1, \ldots ,w^m$ the basis of $({\mathbb R}^d)'$ dual to $v_1,\ldots , v_n, w_1, \ldots ,w_m$. Then every $k$-form $\omega$ on $V$ is the sum of monomials $dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \omega_{i_1 , \ldots , i_l}$, where $1 \leq i_1 < \ldots < i_l \leq n$ and $\omega_{i_1 , \ldots , i_l} = i_{v_{i_1} \wedge \ldots \wedge v_{i_l}} \omega \in \Gamma^\infty \big(\pi_W^* \Lambda^{k-l} T^*W\big)$. Let $d_W$ be the restriction of the exterior differential to $\Gamma^\infty \big(\pi_W^* \Lambda^\bullet T^*W\big)$ and define $S: \Omega^k (V) \to \Omega^{k+1}(V) $ by its action on the monomials: \[ S\omega = \sum_{l=0}^k \sum_{1 \leq i_1 < \ldots < i_l \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \int_{-\infty}^0 \varphi_t^* (d_W\omega_{i_1 , \ldots , i_l}) \, dt . \] Note that the integral is well-defined since $\varphi_t ( V ) \subset V$ for all $t \leq 0$ by Eq.~\eqref{eq:FundSolutions}. Observe that ${\varphi_t}_* Y_h = Y_h$ by construction of $\varphi_t$ and that the fibers of the projection $\pi_h$ are left invariant by $\varphi_t$.
Hence one concludes by Cartan's magic formula \begin{equation} \label{eq:HomProp} \begin{split} (S i_{Y_h} + i_{Y_h} S) \omega = &\, \sum_{l=0}^k\sum_{1 \leq i_1 < \ldots < i_l \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \int_{-\infty}^0 ( d_W i_{Y_h} + i_{Y_h} d_W) \varphi_t^* \omega_{i_1 , \ldots , i_l} \, dt =\\ = &\, \sum_{l=0}^k\sum_{1 \leq i_1 < \ldots < i_l \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \int_{-\infty}^0 \mathcal{L}_{Y_h} \varphi_t^* \omega_{i_1 , \ldots , i_l} \, dt = \\ = &\, \sum_{l=0}^k\sum_{1 \leq i_1 < \ldots < i_l \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \int_{-\infty}^0 \frac{d}{dt} \varphi_t^* \omega_{i_1 , \ldots , i_l} \, dt = \\ = &\, \sum_{l=0}^{k-1}\sum_{1 \leq i_1 < \ldots < i_l \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_l} \wedge \omega_{i_1 , \ldots , i_l} \: +\\ & + \sum_{1 \leq i_1 < \ldots < i_k \leq n} dv^{i_1} \wedge \ldots \wedge dv^{i_k} \wedge (\omega_{i_1 , \ldots , i_k} - \pi_h^* \iota_h^* \omega_{i_1 , \ldots , i_k} ) =\\ = & \, \omega - \pi_h^* \iota_h^* \omega . \end{split} \end{equation} To verify the penultimate equality, observe that the $\omega_{i_1 , \ldots , i_k}$ are smooth functions which satisfy $$ \lim_{t \rightarrow -\infty} \varphi_t^* \omega_{i_1 , \ldots , i_k} = \pi_h^* \iota_h^* \omega_{i_1 , \ldots , i_k} . $$ Eq.~\eqref{eq:HomProp} proves the claim. \end{proof} The proposition immediately entails the following twisted version of the theorem by Hochschild--Kostant--Rosenberg and Connes. \begin{theorem}\label{thm:twisted-hochschild-homology} Let $h:{\mathbb R}^d \to{\mathbb R}^d$ be an orthogonal linear transformation and $V \subset {\mathbb R}^d$ an open ball around the origin. Then the Hochschild homology $H_\bullet \big(\mathcal{C}^\infty (V), \ltwist{\mathcal{C}^\infty(V)}{h}\big)$ is naturally isomorphic to $\Omega^\bullet (V^h)$, where $V^h$ is the fixed point manifold of $h$ in $V$.
A quasi-isomorphism inducing this identification is given by \[ \ltwist{\mathcal{C}^\infty(V)}{h} \hat{\otimes} \, C_k(\mathcal{C}^\infty (V) ) \to \Omega^k (V^h), \: f_0 \otimes f_1 \otimes \ldots \otimes f_k \mapsto {f_0}_{|V^h} \, d{f_1}_{|V^h} \wedge \ldots \wedge d {f_k}_{|V^h} \ . \] \end{theorem} We consider a finite subgroup $\Gamma$ of the orthogonal linear transformation group of ${\mathbb R}^d$. Let $V\subset {\mathbb R}^d$ be an open ball around the origin that is invariant under the $\Gamma$-action on ${\mathbb R}^d$. We can apply the quasi-isomorphism from Section \ref{sec:group-manifold-case} to compute $HH_\bullet \big(\mathcal{C}^\infty(V)\rtimes \Gamma\big)$ via the homology of the complex $ C^\Gamma_\bullet(\mathcal{C}^\infty(V))$. Since $\Gamma$ is a finite group, the homology of $ C^\Gamma_\bullet(\mathcal{C}^\infty(V))$ is computed by \[ \Big(\bigoplus_{\gamma\in \Gamma} H_\bullet \big(\mathcal{C}^\infty (V), \ltwist{\mathcal{C}^\infty(V)}{\gamma}\big)\Big)^\Gamma. \] As a corollary to Theorem \ref{thm:twisted-hochschild-homology}, we thus obtain the following computation of the Hochschild homology of $\mathcal{C}^\infty(V)\rtimes \Gamma$. \begin{corollary}\label{cor:finitegroup} The Hochschild homology $HH_\bullet \big(\mathcal{C}^\infty(V)\rtimes \Gamma\big)$ is naturally isomorphic to \[ \left(\bigoplus_{\gamma \in \Gamma} \Omega^\bullet (V^\gamma) \right)^\Gamma, \] where $\Gamma$ acts on the disjoint union $\coprod_{\gamma \in \Gamma} V^\gamma$ by $\gamma'(\gamma,x)=(\gamma' \gamma(\gamma')^{-1}, \gamma' x)$. \end{corollary} In the case of a smooth affine algebraic variety, Corollary \ref{cor:finitegroup} was proved in \cite[Thm.~2.19]{BroDavNis}. We refer the reader to \cite{BryNis, FeiTsy,Wassermann, Ponge} for related developments. We end with a generalisation of Proposition \ref{prop:homotopy} which is a useful tool in our computations.
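Before turning to this generalisation, let us spell out Corollary \ref{cor:finitegroup} in the simplest nontrivial case; the following computation is merely an illustration and is not used in the sequel. Take $\Gamma = \mathbb{Z}/2 = \{1,\sigma\}$ acting on $V=(-1,1)\subset {\mathbb R}$ by the reflection $\sigma v = -v$. Then $V^1 = V$ and $V^\sigma = \{0\}$, the conjugation action on the index set is trivial, and $\sigma$ acts on the summand $\Omega^\bullet(V)$ by pullback, so
\begin{align*}
HH_0\big(\mathcal{C}^\infty(V)\rtimes \mathbb{Z}/2\big) & \cong \{ f\in\mathcal{C}^\infty(V) \mid f \text{ even}\} \oplus {\mathbb C}, \\
HH_1\big(\mathcal{C}^\infty(V)\rtimes \mathbb{Z}/2\big) & \cong \{ f\, dv \mid f\in\mathcal{C}^\infty(V) \text{ odd}\},
\end{align*}
since $\sigma^*(f\,dv) = -f(-v)\,dv$ and $\Omega^\bullet(\{0\}) = {\mathbb C}$ is concentrated in degree $0$.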
Observe that in the complex \eqref{eq:TwistedChainCpl} \[ \Omega^d(V) \overset{i_{Y_h}}{\longrightarrow} \ldots \overset{i_{Y_h}}{\longrightarrow} \Omega^1(V) \overset{i_{Y_h}}{\longrightarrow} \mathcal{C}^\infty (V) \longrightarrow 0 , \] the vector field $Y_h$ can be replaced by a more general linear vector field $Y_H: \mathbb{R}^d\to T\mathbb{R}^d$ of the form $Y_H(v)=H(v) \in T_v \mathbb{R}^d$ where $H: \mathbb{R}^d\to \mathbb{R}^d$ is a diagonalizable linear map. A construction similar to the homotopy operator $S$ in the proof of Proposition \ref{prop:homotopy} (see also \cite{Wassermann}) computes the homology of $(\Omega^\bullet(V), i_{Y_H})$ to be $(\Omega^\bullet(V^H), 0)$ where $V^H=V\cap\ker(H)$. Furthermore, a smooth family $H:S\to \operatorname{End}(\mathbb{R}^d)$ of diagonalizable linear operators parametrized by a smooth manifold $S$ is called regular if it satisfies the following properties: \begin{enumerate} \item the kernel $\ker(H):=\{\ker(H(s))\}_{s\in S}\subset S\times \mathbb{R}^d$ is a smooth subbundle of the trivial vector bundle $S\times \mathbb{R}^d$; \item near every $s_0\in S$, there is a local frame of $S\times \mathbb{R}^d$ on a neighborhood $U_{s_0}$ of $s_0$ in $S$ consisting of $\xi_1, \cdots, \xi_d$ such that \begin{itemize} \item the collection $\{\xi_1, \cdots, \xi_k\}$ is a local frame of the subbundle $\ker(H)$ on $U_{s_0}$, \item for every $j=k+1,\cdots, d$, there is a smooth eigenfunction $\lambda_j(s)$ defined on $U_{s_0}$ satisfying $H(s)\xi_j(s)=\lambda_j(s)\xi_j(s)$ and $\lambda_j(s)\ne 0,\ \forall s\in U_{s_0}$. \end{itemize} \end{enumerate} The proof of Proposition \ref{prop:homotopy} generalizes to the following result. \begin{proposition}\label{prop:parametrizedkoszul} Let $H: S\to \operatorname{End}(\mathbb{R}^d)$ be a smooth family of diagonalizable linear operators parametrized by a smooth manifold $S$. Assume that $H$ is regular.
Let $i_{\ker(H)}: \ker(H)\to S\times \mathbb{R}^d$ be the canonical embedding, and $\Omega^\bullet \big(\ker(H)\big)$ the restriction of $\mathcal{C}^{\infty}(S, \Omega^\bullet(V))$ to $\ker(H)$ along $i_{\ker(H)}$. Then the restriction map $R_{\ker(H)}:\big(\mathcal{C}^{\infty}(S, \Omega^\bullet(V)), i_{Y_H}\big)\to \Big(\Omega^\bullet \big( \ker(H)\big), 0\Big)$ is a quasi-isomorphism. \end{proposition} In a certain sense, the final result is a variant of the latter. To formulate it, recall that by an Euler-like vector field for an embedded smooth manifold $S \hookrightarrow M$ one understands a vector field $Y:M\to TM$ such that $S$ is the zero set of $Y$ and such that for each $f\in \mathcal{C}^\infty (M)$ vanishing on $S$ the function $Yf -f$ vanishes to second order on $S$; cf.~\cite[Def.~1.1]{SadHigEulerLike}. \begin{proposition}\label{prop:parametrizedkoszuleulervectorfield} Let $M$ be a smooth manifold of dimension $d$, $S\hookrightarrow M$ an embedded submanifold and $Y: M\to TM$ a smooth vector field which is Euler-like with respect to $S$. Then the complex \begin{equation}\label{eq:parametrizedkoszuleulervectorfield} \Omega^d(M) \overset{i_{Y}}{\longrightarrow} \ldots \overset{i_{Y}}{\longrightarrow} \Omega^1(M) \overset{i_{Y}}{\longrightarrow} \mathcal{C}^\infty (M) \longrightarrow \mathcal{C}^\infty (S) \longrightarrow 0 , \end{equation} is exact and will be called the \emph{parametrized Koszul resolution} of $\mathcal{C}^\infty (S)$. \end{proposition} \begin{proof} The claim is an immediate consequence of the Koszul resolution as for example stated in \cite{Wassermann}.
\end{proof} \section{Tools from singularity theory} \label{AppTollsSingularityTheory} \subsection{Differentiable stratified spaces} \label{AppDifferentiableStratifiedSpaces} Recall that for every locally closed subset $X\subset {\mathbb R}^n$ of euclidean space the sheaf $\mathcal{C}_X^\infty$ of smooth functions on $X$ is defined as the quotient sheaf $\mathcal{C}_U^\infty/\mathcal{J}_{X,U}$, where $U\subset {\mathbb R}^n$ is an open subset such that $X \subset U$ is relatively closed, $\mathcal{C}_U^\infty$ is the sheaf of smooth functions on $U$, and $\mathcal{J}_{X,U}$ the ideal sheaf of smooth functions on open subsets of $U$ vanishing on $X$. Note that $\mathcal{C}_X^\infty$ does not depend on the particular choice of the ambient open subset $U\subset {\mathbb R}^n$. \begin{definition} A commutative locally ringed space $(A,\mathcal{O})$ is called an \emph{affine differentiable space} if there is a closed subset $X\subset {\mathbb R}^n$ and an isomorphism of ringed spaces $(f,F) : (A,\mathcal{O}) \to (X,\mathcal{C}_X^\infty)$. By a \textit{differentiable stratified space} we understand a commutative locally ringed space $(X,\mathcal{C}^\infty)$ consisting of a separable locally compact topological Hausdorff space $X$ equipped with a stratification $\mathcal{S}$ on $X$ in the sense of Mather \cite{MatSM} (cf.~also \cite[Sec.~1.2]{PflAGSSS}) and a sheaf $\mathcal{C}^\infty$ of commutative local ${\mathbb C}$-rings on $X$ such that for every point $x\in X$ there is an open neighborhood $U$ together with $\varphi_1, \ldots , \varphi_n \in \mathcal{C}^\infty (U) $ having the following properties: \begin{enumerate}[(DS1)] \item The map $\varphi: U \rightarrow {\mathbb R}^n$, $y \mapsto (\varphi_1 (y) , \ldots , \varphi_n (y) ) $ is a homeomorphism onto a locally closed subset $\widetilde U := \varphi (U) \subset {\mathbb R}^n$.
\item The map $\varphi$ endows $(U ,\mathcal{C}^\infty_{|U})$ with the structure of an affine differentiable space, which means that $(\varphi,\varphi^*): (U ,\mathcal{C}^\infty_{|U}) \to (\widetilde U, \mathcal{C}^\infty_{\widetilde U})$ is an isomorphism of ringed spaces, where $\mathcal{C}^\infty_{\widetilde U}$ denotes the sheaf of smooth functions on $\widetilde U$ as defined above. \item For each stratum $S\subset U$, $\varphi_{| S\cap U}$ is a diffeomorphism of $S\cap U$ onto a submanifold $\varphi (S\cap U) \subset {\mathbb R}^n$. \end{enumerate} A map $\varphi: U \rightarrow {\mathbb R}^n$ fulfilling the axioms (DS1) to (DS3) will often be called a \emph{singular chart} of $X$ (cf.~\cite[Sec.~1.3]{PflAGSSS}). \end{definition} A differentiable stratified space is in particular a reduced differentiable space in the sense of Spallek \cite{SpaDR} or Gonz\'ales--de Salas \cite{GonSalDS}. Moreover, differentiable stratified spaces defined as above coincide with the stratified spaces with smooth structure as in \cite{PflAGSSS}. \begin{proposition}[cf.~{\cite[Thm.~1.3.13]{PflAGSSS}}] The structure sheaf of a differentiable stratified space is fine. \end{proposition} To formulate the next result, we introduce the commutative ringed space $({\mathbb R}^\infty, \mathcal{C}^\infty_{{\mathbb R}^\infty})$. It is defined as the colimit of the direct system of ringed spaces $\big( ({\mathbb R}^n , \mathcal{C}^\infty_{{\mathbb R}^n}) , \iota_{nm} \big)_{n,m\in {\mathbb N}, \, n\leq m} $, where $\iota_{nm} :{\mathbb R}^n \hookrightarrow {\mathbb R}^m$ is the embedding given by \[ \iota_{nm} (v_1,\cdots , v_n) =(v_1,\cdots , v_n,0,\cdots ,0). \] Note that for each open set $U\subset {\mathbb R}^\infty$ the section space $\mathcal{C}^\infty_{{\mathbb R}^\infty} (U)$ coincides with the inverse limit of the projective system of nuclear Fr\'echet algebras $\big( \mathcal{C}^\infty_{{\mathbb R}^n} (U\cap {\mathbb R}^n) , \iota_{nm}^* \big)_{n,m\in {\mathbb N}, \, n\leq m} $.
Hence the $\mathcal{C}^\infty_{{\mathbb R}^\infty} (U)$ and in particular $\mathcal{C}^\infty_{{\mathbb R}^\infty} ({\mathbb R}^\infty)$ are nuclear Fr\'echet algebras by \cite[Prop.~50.1]{TreTVSDK}. \begin{proposition} \label{ThmEmbAppendix} For every differentiable stratified space $(X,\mathcal{C}^\infty)$ there exists a proper embedding $\varphi: (X,\mathcal{C}^\infty) \hookrightarrow ({\mathbb R}^\infty, \mathcal{C}^\infty_{{\mathbb R}^\infty})$. \end{proposition} \begin{proof} Since $X$ is separable and locally compact, there exists a compact exhaustion, that is a family $(K_k)_{k\in {\mathbb N}}$ of compact subsets $K_k\subset X$ such that $K_k\subset K_{k+1}^\circ$ for all $k\in {\mathbb N}$ and such that $\bigcup_{k\in {\mathbb N}}K_k = X$. By \cite[Lem.~1.3.17]{PflAGSSS} there then exists an inductively embedding atlas, that is a family $(\varphi_k)_{k\in {\mathbb N}}$ of singular charts $\varphi_k : K_{k+1}^\circ \to {\mathbb R}^{n_k}$ together with a family $(U_k)_{k\in {\mathbb N}}$ of relatively compact open subsets $U_k \subset\subset K_{k+1}^\circ$ such that $K_k \subset U_k$ and $\varphi_{k+1}|_{U_k} = \iota_{n_k n_{k+1}}\circ \varphi_k|_{U_k}$ for all $k\in {\mathbb N}$. Now define $\varphi : X \to {\mathbb R}^\infty $ by $\varphi (x) = \varphi_k (x)$ whenever $x \in U_k$. Then $\varphi$ is well defined and an embedding by construction. By a straightforward partition of unity argument one constructs a smooth function $\psi:X \to {\mathbb R}$ such that $\psi(x)\geq k $ for all $x \in K_{k+1} \setminus K_k^\circ$. The embedding $(\varphi,\psi):X \to {\mathbb R}^\infty \times {\mathbb R} \cong {\mathbb R}^\infty$ is then proper. \end{proof} \begin{corollary} \label{Cor:SmoothMetricAppendix} Let $(X,\mathcal{C}^\infty)$ be a differentiable stratified space. Then there exists a complete metric $d:X\times X \to {\mathbb R}$ such that $d^2 \in \mathcal{C}^\infty (X \times X)$.
\end{corollary} \begin{proof} The euclidean inner product $\langle -,- \rangle_{{\mathbb R}^n}$ extends in a unique way to an inner product $\langle -,- \rangle_{{\mathbb R}^\infty}$ on ${\mathbb R}^\infty$ such that $\langle j_n (x),j_n(y) \rangle_{{\mathbb R}^\infty} = \langle x,y \rangle_{{\mathbb R}^n}$ for all $n\in {\mathbb N} $ and $x,y\in {\mathbb R}^n$, where $j_n: {\mathbb R}^n \hookrightarrow {\mathbb R}^\infty$ is the canonical embedding $(x_1,\ldots ,x_n) \mapsto (x_1,\ldots ,x_n, 0, \ldots,0,\ldots )$. The associated metric $d_{{\mathbb R}^\infty}: {\mathbb R}^\infty \times {\mathbb R}^\infty \to {\mathbb R} $, $(x,y) \mapsto \sqrt{\langle x-y,x-y \rangle_{{\mathbb R}^\infty}}$ is then related to the euclidean metric $d_{{\mathbb R}^n}$ by $d_{{\mathbb R}^\infty} \big( j_n (x),j_n(y) \big) = d_{{\mathbb R}^n}(x,y)$ for $x,y\in {\mathbb R}^n$. Now choose a proper embedding $X \hookrightarrow {\mathbb R}^\infty$ and denote the restriction of $d_{{\mathbb R}^\infty}$ to $X$ by $d$. By construction, $d^2$ is then smooth. Moreover, $d$ is a complete metric since the embedding is proper and each of the metrics $d_{{\mathbb R}^n}$ is complete. \end{proof} \section*{Introduction} \label{Sec:Intro} Let $M$ be a smooth manifold, and $\mathcal{C}^\infty(M)$ be the algebra of smooth functions on $M$. Connes' version \cite{ConNDG} of the seminal Hochschild-Kostant-Rosenberg theorem \cite{HocKosRosDFRAA} states that the Hochschild homology of $\mathcal{C}^\infty(M)$ is isomorphic to the graded vector space of differential forms on $M$. In this paper, we aim to establish tools for a general Hochschild-Kostant-Rosenberg type theorem for proper Lie groupoids. Recall that a Lie groupoid $\mathsf{G}\rightrightarrows M$ is proper if the map $\mathsf{G} \to M\times M$, $g\mapsto (s(g), t(g))$ is a proper map, where $s(g)$ and $t(g)$ are the source and target of $g\in \mathsf{G}$.
When the source and target maps are both local diffeomorphisms, the groupoid $\mathsf{G}\rightrightarrows M$ is called \'etale. Through the efforts of many authors, e.g. \cite{BroDavNis, BryNis, ConNG, CrainicCyc,FeiTsy, Ponge, Wassermann}, the Hochschild and cyclic homology theory of \'etale Lie groupoids has been unveiled. The Hochschild and cyclic homology of a proper \'etale Lie groupoid was explicitly computed by Brylinski and Nistor \cite{BryNis}. Let us explain this result in the case of the transformation groupoid $\Gamma\ltimes M\rightrightarrows M$ associated to the action of a finite group $\Gamma$ on a smooth manifold $M$. The convolution groupoid algebra associated to the transformation groupoid $\Gamma\ltimes M\rightrightarrows M$ is the crossed product algebra $\mathcal{C}^\infty(M)\rtimes \Gamma$, which consists of $\mathcal{C}^\infty(M)$-valued functions on $\Gamma$ equipped with the convolution product, i.e.\ for $f,g\in \mathcal{C}^\infty(M)\rtimes \Gamma$, \[ f\ast g(\gamma)=\sum_{\alpha\beta=\gamma} \beta^*\big(f(\alpha)\big)\cdot g(\beta). \] The algebra $\mathcal{C}^\infty(M)\rtimes \Gamma$ is naturally a Fr\'echet algebra. The Hochschild homology of the algebra $\mathcal{C}^\infty(M)\rtimes \Gamma$ as a bornological algebra is given by the following formula, the proof of which is recalled in Corollary \ref{cor:finitegroup}: \begin{equation*}\label{eq:finitegroup} HH_\bullet\big( \mathcal{C}^\infty(M)\rtimes \Gamma \big) \cong \left(\bigoplus_{\gamma \in \Gamma} \Omega^\bullet (M^\gamma) \right)^\Gamma, \end{equation*} where $M^\gamma$ is the $\gamma$-fixed point submanifold, and $\Gamma$ acts on the disjoint union $\coprod_{\gamma \in \Gamma} M^\gamma$ by $\gamma'(\gamma,x)=(\gamma' \gamma(\gamma')^{-1}, \gamma' x)$.
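To make the convolution product defined above concrete, it may help to write it out in the smallest nontrivial case; the following two identities are an illustration on our part and follow directly from the displayed formula for $\Gamma = \mathbb{Z}/2 = \{e,\sigma\}$:
\begin{align*}
(f\ast g)(e) &= f(e)\cdot g(e) + \sigma^*\big(f(\sigma)\big)\cdot g(\sigma), \\
(f\ast g)(\sigma) &= \sigma^*\big(f(e)\big)\cdot g(\sigma) + f(\sigma)\cdot g(e).
\end{align*}
In particular, the product is not pointwise in the $\Gamma$-variable, but twists the first factor by the pullback along the second group element.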
Recall that the so-called loop space $\Lambda_0(\Gamma, M)$ of the transformation groupoid $\Gamma \ltimes M\rightrightarrows M$ is defined as \[ \Lambda_0(\Gamma, M):=\coprod_{\gamma \in \Gamma} M^\gamma, \] equipped with the same action of $\Gamma$ as above. In other words, the Hochschild homology of $\mathcal{C}^\infty(M)\rtimes \Gamma$ is the space of differential forms on the quotient $\Lambda_0(\Gamma, M)/\Gamma$, which is called the associated inertia orbifold. We would like to remark that just as for the classical Hochschild-Kostant-Rosenberg theorem, the above identification can be realized as an isomorphism of sheaves over the quotient $M/\Gamma$. This makes Hochschild and cyclic homology of $\mathcal{C}^\infty(M)\rtimes \Gamma$ the right object to work with in the study of orbifold index theory, see e.g.\ \cite{PflPosTanCyc}. Our goal in this project is to extend the study of Hochschild homology of proper \'etale groupoids to general proper Lie groupoids, which are natural generalizations of transformation groupoids for proper Lie group actions. The key new challenge compared to the study of (proper) \'etale groupoids is that orbits of a general proper Lie groupoid have different dimensions. This turns the orbit space of a proper Lie groupoid into a stratified space with a significantly more complicated singularity structure than an orbifold. Our main result is to introduce a sheaf $ \mathscr{H}\mathscr{H}_\bullet$ on the orbit space $X:=M/\mathsf{G}$ of a proper Lie groupoid $\mathsf{G}\rightrightarrows M$, whose space of global sections computes the Hochschild homology of the convolution algebra of $\mathsf{G}$. To achieve this, we start by introducing a sheaf $\mathcal{A}$ of convolution algebras on the orbit space $X$ in Definition \ref{propdefn:sheaf}. Using the localization method from \cite{BraPflHAWFSS}, we introduce the Hochschild homology sheaf $\mathscr{H}\mathscr{H}_\bullet(\mathcal{A})$ for $\mathcal{A}$, viewed as a sheaf of bornological algebras over $X$.
Moreover, we prove the following sheafification theorem for the Hochschild homology of the convolution algebra $\mathcal{A}$ of the groupoid $\mathsf{G}$. \vspace{2mm} \noindent\textbf{Theorem \ref{thm:hochschild-homology-global-sections}.} \textit{Let $\mathcal{A}$ be the convolution sheaf of a proper Lie groupoid $\mathsf{G}$. Then the natural map in Hochschild homology \[ HH_\bullet \big( \mathcal{A} (X)\big) \to \mathscr{H}\mathscr{H}_\bullet (\mathcal{A}) (X) = \Gamma \big(X, \mathscr{H}\mathscr{H}_\bullet (\mathcal{A}) \big) \] is an isomorphism.}\vspace{2mm} To determine the homology sheaf $\mathscr{H}\mathscr{H}_\bullet(\mathcal{A})$, we study its stalk at an orbit $\mathcal{O}\in X$. Using the linearization result for proper Lie groupoids developed by Weinstein and Zung (cf.~\cite{CraStrLTPLG, delHFerRMLG,PflPosTanGOSPLG, WeiLRPG, ZunPGMMLAC}), we obtain a linear model of the stalk $\mathscr{H}\mathscr{H}_{\bullet, \mathcal{O}}(\mathcal{A})$ in Proposition \ref{prop:local-model}, given by a compact group acting linearly on a vector space. This result leads us to focus on the Hochschild homology of the convolution algebra $\mathcal{C}^\infty (M) \rtimes G $ associated to a compact Lie group action on a smooth manifold $M$ in the second part of this article. The Hochschild homology of compact Lie group actions was studied by several authors, e.g. \cite{BloGetECHED}, \cite{BryAAGAH,BryCHET}. Brylinski \cite{BryAAGAH,BryCHET} proposed a geometric model for the Hochschild homology in terms of basic relative forms, following the idea of Grauert-Grothendieck forms. However, a major part of the proof is missing in \cite{BryAAGAH,BryCHET}. We decided to turn this result into the main conjecture of this paper in Section \ref{sec:actioncase}.
\vspace{2mm} \noindent\textbf{Conjecture \ref{BryConjBHF}.} \textit{The Hochschild homology of the crossed product algebra $\mathcal{C}^\infty(M)\rtimes G$ associated to a compact Lie group action on a smooth manifold $M$ is isomorphic to the space of basic relative forms on the loop space $\Lambda_0 (G\ltimes M) =\{(g,p)\in G\times M\mid gp = p\}$.}\vspace{2mm} Block and Getzler \cite{BloGetECHED} introduced an interesting Cartan model for the cyclic homology of the crossed product algebra $\mathcal{C}^\infty(M)\rtimes G$. However, the Block-Getzler model is not a sheaf on the orbit space $M/G$, but a sheaf on the space of conjugacy classes of $G$. This makes it impossible to localize the sheaf to an orbit of the group action in the orbit space. It is worth pointing out that the truncation of the Block-Getzler Cartan model at the $E^1$-page provides a complex to compute the Hochschild homology of $\mathcal{C}^\infty(M)\rtimes G$. However, the differential $\iota$ introduced in \cite[Section 1]{BloGetECHED} is nontrivial, and makes it challenging to explicitly identify the Hochschild homology of $\mathcal{C}^\infty(M)\rtimes G$ as the space of basic relative forms. We refer the reader to Remark \ref{rem:block-getzler} for a more detailed discussion of the Block-Getzler model. In the last part of this paper, we prove Conjecture \ref{BryConjBHF} in the case where the group $G$ is $S^1$; see Proposition \ref{prop:equivariant-koszul}. Our proof relies on a careful study of the stratification of the loop space $\Lambda_0(S^1\ltimes M)\subset S^1\times M$. The crucial property we use in our computation is that at a singular point, $\Lambda_0(S^1\ltimes M)$ locally looks like the union of the hyperplane $\{x_0=0\}$ and the line $\{x_1=\cdots =x_n=0\}$ in $\mathbb{R}^{n+1}$, which are transverse to each other. The loop space $\Lambda_0(G\ltimes M)$ for a general $G$-manifold $M$ is much more complicated to describe.
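The local picture for $S^1$-actions can be seen already in the smallest example; the following computation is our own illustration, with ad hoc notation, and is not needed for the proofs. Let $S^1$ act on $M=\mathbb{C}$ by rotation. Then
\[
\Lambda_0(S^1\ltimes \mathbb{C}) = \big\{(e^{i\theta}, z)\in S^1\times \mathbb{C} \mid e^{i\theta} z = z\big\} = \big(\{1\}\times \mathbb{C}\big)\cup \big(S^1\times\{0\}\big),
\]
since $e^{i\theta}z=z$ forces $z=0$ or $e^{i\theta}=1$. Near the singular point $(1,0)$, in the local coordinates $x_0=\theta$ and $(x_1,x_2)=(\operatorname{Re} z, \operatorname{Im} z)$, this set is exactly the union of the hyperplane $\{x_0=0\}$ and the line $\{x_1=x_2=0\}$ in $\mathbb{R}^3$, i.e.\ the case $n=2$ of the transversal local model just described.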
This complexity has prevented us from extending our result for $S^1$-actions to more general compact group actions. It is foreseeable that some combinatorial structures describing the stratifications of the loop spaces and real algebraic geometry tools characterizing basic relative forms on the loop spaces are needed to solve Conjecture \ref{BryConjBHF} in full generality. We plan to come back to this problem in the near future. As mentioned above, the study of Hochschild and cyclic homology of the convolution algebra of a proper Lie groupoid is closely related to groupoid index theory, see e.g. \cite{PflPosTanCyc}, \cite{PflPosTanLLIT}. We expect that the study of the Hochschild homology and the generalized Hochschild-Kostant-Rosenberg theorem will eventually lead to the correct definition of basic relative forms for proper Lie groupoids, in terms of which the appropriate index theorem can be established. \\ \noindent{\bf Acknowledgements}: We would like to thank Marius Crainic, Ralf Meyer, Rapha\"el Ponge and Michael Puschnigg for inspiring discussions. Pflaum's research is partially supported by Simons Foundation award number 359389 and NSF award OAC 1934725. Tang's research is partially supported by the NSF awards DMS 1800666, 1952551. \section{Computation at a stalk} Recall that $\mathsf{G}\rightrightarrows M$ is a proper Lie groupoid, $X$ is its orbit space, and $\mathcal{A}_\mathsf{G}$ is the convolution sheaf of $\mathsf{G}$ (Definition \ref{propdefn:sheaf}). Given an orbit $\mathboondoxcal{O}\in X$ of $\mathsf{G}$, we introduce in this section a linear model of the groupoid around the orbit and use it in Proposition \ref{prop:local-model} to construct a quasi-isomorphism between the stalk complex $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$ and the corresponding complex of the linear model. We divide the construction into two steps.
\subsection{Reduction to the linear model} Let us recall the linearization result for the groupoid $\mathsf{G}$ around an orbit $\mathboondoxcal{O}$. Let $N\mathboondoxcal{O}\to \mathboondoxcal{O}$ be the normal bundle of the closed submanifold $\mathboondoxcal{O}$ in $M$, and $\mathsf{G}_{|\mathboondoxcal{O}} \rightrightarrows \mathboondoxcal{O}$ be the restriction of the groupoid $\mathsf{G}$ to $\mathboondoxcal{O}$. The groupoid $\mathsf{G}_{|\mathboondoxcal{O}}$ acts on $N\mathboondoxcal{O}$ canonically, and we use $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}\rightrightarrows N\mathboondoxcal{O}$ to denote the associated transformation groupoid. As in Definition \ref{propdefn:sheaf}, let $\mathcal{A}_{N\mathboondoxcal{O}}$ be the sheaf of convolution algebras on $X_{N\mathboondoxcal{O}}=N\mathboondoxcal{O}/ \mathsf{G}_{|\mathboondoxcal{O}}$, the orbit space associated to the groupoid $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$. Accordingly, we can consider the presheaf of chain complexes $\mathscr{C}_\bullet(\mathcal{A}_{N\mathboondoxcal{O}})$ and the associated sheaf complex $\hat{\mathscr{C}}_\bullet(\mathcal{A}_{N\mathboondoxcal{O}})$ as in Proposition \ref{prop:isomorphism-global-section-space}. In the rest of this subsection, we identify the stalk $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$ with the linearized model $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_{N\mathboondoxcal{O}})$, which is the stalk of the sheaf $\hat{\mathscr{C}}_\bullet(\mathcal{A}_{N\mathboondoxcal{O}})$ at the zero section of $N\mathboondoxcal{O}$. The main tool to identify the above two stalks is the linearization result of proper Lie groupoids developed by Weinstein \cite{WeiLRPG} and Zung \cite{ZunPGMMLAC} (see also \cite{CraStrLTPLG,PflPosTanGOSPLG,delHFerRMLG}). The particular approach we take below is from \cite{PflPosTanGOSPLG}.
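Before giving the details, it may help to keep a minimal example in mind; the following is merely our illustration and is not used below. Let $\mathsf{G}=S^1\ltimes \mathbb{R}^2$ be the transformation groupoid of the rotation action of $S^1$ on $\mathbb{R}^2$, and let $\mathboondoxcal{O}$ be the orbit given by the circle of radius $r>0$. The normal bundle is the trivial line bundle $N\mathboondoxcal{O}\cong \mathboondoxcal{O}\times \mathbb{R}$, trivialized by the outward unit normal, and for the euclidean metric and any constant $\delta$ with $0<\delta<r$ the exponential map reads
\[
\exp\colon T^{\delta}_{\mathboondoxcal{O}, N\mathboondoxcal{O}} \longrightarrow T^{\delta}_{\mathboondoxcal{O}}, \qquad (p,s)\longmapsto \Big(1+\frac{s}{r}\Big)\, p,
\]
a diffeomorphism from the $\delta$-neighborhood of the zero section onto the open annulus $\{x\in \mathbb{R}^2 \mid r-\delta<|x|<r+\delta\}$. Since this map intertwines the rotation actions, it lifts to an isomorphism of transformation groupoids as in \eqref{eq:tubular} in this elementary situation.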
Fix a transversely invariant metric $g$ on $M$. Given a function $\delta: \mathboondoxcal{O}\to \mathbb{R}_{>0}$, let $T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}}$ be the $\delta$-neighborhood of the zero section in $N\mathboondoxcal{O}$. According to \cite[Theorem 4.1]{PflPosTanGOSPLG}, there exists a continuous map $\delta: \mathboondoxcal{O}\to \mathbb{R}_{>0}$ such that the exponential map $\exp_{|T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}}}: T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}}\to T^\delta_\mathboondoxcal{O}:=\exp(T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}})\subset M$ is a diffeomorphism. Furthermore, the exponential map $\exp_{|T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}}}$ lifts to an isomorphism $\Theta$ of the following groupoids \begin{equation}\label{eq:tubular} \Theta:\big(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O} \big)_{|T^\delta_{\mathboondoxcal{O}, N\mathboondoxcal{O}}}\to \mathsf{G}_{|T^\delta_\mathboondoxcal{O}}. \end{equation} \begin{lemma}\label{prop:stalk-linearization} For each orbit $\mathboondoxcal{O} \subset M$, the pullback map $\Theta^*$ defines a quasi-isomorphism $\Theta_{\bullet,{\scriptstyle\mathboondoxcal{O}}}$ from the stalk complex $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$ to the stalk complex $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_{N\mathboondoxcal{O}})$. \end{lemma} \begin{proof} We explain how $\Theta_{\bullet,{\scriptstyle\mathboondoxcal{O}}}$ is defined on $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$. Let $[f_0\otimes \cdots \otimes f_k]\in \hat{\mathscr{C}}_{k,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$ be a germ of a $k$-chain at $\mathboondoxcal{O}\in X$. 
Let $U$ be a neighborhood of $\mathboondoxcal{O}$ in $X$ such that $f_0\otimes \cdots \otimes f_k$ is a section of $\mathscr{C}_k(\mathcal{A}(U))$ which is mapped to $[f_0\otimes \cdots \otimes f_k]$ in the stalk complex $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}} (\mathcal{A}_\mathsf{G})$ under the canonical map $\eta$ from Proposition \ref{prop:isomorphism-global-section-space}. By (\ref{eq:calAU}), the support of each of the maps $f_0, \cdots, f_k$ is longitudinally compact. In particular, $\operatorname{supp}(f_i)\cap s^{-1}(\mathboondoxcal{O})$ ($i=0,\cdots, k$) is compact. Therefore, \[ s\big(\operatorname{supp}(f_i)\cap s^{-1}(\mathboondoxcal{O})\big)=t\big(\operatorname{supp}(f_i)\cap s^{-1}(\mathboondoxcal{O})\big)\] and the union $K_{f_0, \cdots, f_k}:=\bigcup_{i=0}^ks\big(\operatorname{supp}(f_i)\cap s^{-1}(\mathboondoxcal{O})\big)$ is also compact in $\mathboondoxcal{O}$. Let $K$ be a precompact open subset of $\mathboondoxcal{O}$ containing $K_{f_0, \cdots, f_k}$ as a proper subset. Observe that the closure of $K$ is compact in $\mathboondoxcal{O}$. Hence, there is a positive constant $\varepsilon$ such that the $\varepsilon$-neighborhood $T^\varepsilon_{K}$ of $K$ is contained inside the $\delta$-neighborhood $T^\delta_{\mathboondoxcal{O}}$, the range of the linearization map $\Theta$ in (\ref{eq:tubular}). Applying the homotopy map $\Psi_ \varepsilon$ defined in Lemma \ref{lem:localization-homotopies} to $f_0\otimes \cdots \otimes f_k$, we may assume without loss of generality that the support of $f_0,\cdots, f_k$ is contained inside $T^\varepsilon_{K}$, and therefore inside the $\delta$-neighborhood $T^\delta_{\mathboondoxcal{O}}$.
Accordingly, the pullback function $\Theta^*(f_0\otimes \cdots \otimes f_k)$ is well defined and supported in \[ \big(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}\big)_{|\Theta^{-1}(T^\varepsilon_{K})}\times \cdots \times \big(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}\big)_{|\Theta^{-1}(T^\varepsilon_{K})}. \] Let $U^\varepsilon_{\mathboondoxcal{O}}$ be the $\varepsilon$-neighborhood of $\mathboondoxcal{O}$ in $N\mathboondoxcal{O}/\mathsf{G}_{|\mathboondoxcal{O}}$. By the definition of $\Theta$, it is not difficult to check that $\Theta^*(f_i)$ is supported inside $\big(\mathsf{G}_{|\mathboondoxcal{O}} \ltimes N\mathboondoxcal{O}\big)_{|\Theta^{-1}(T^\varepsilon_{K})}$ for $i=0,\cdots, k$, and therefore $\Theta^*(f_0\otimes \cdots \otimes f_k )$ is a well defined $k$-chain in $\mathscr{C}_k\big(\mathcal{A}_{N\mathboondoxcal{O}}(U^\varepsilon_{\mathboondoxcal{O}})\big).$ Define $\Theta_{\bullet,\mathboondoxcal{O}}\big( [f_0\otimes\cdots \otimes f_k]\big)\in \hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_{N\mathboondoxcal{O}})$ to be the germ of $\Theta^*(f_0\otimes \cdots \otimes f_k )$ at the point $\mathboondoxcal{O}$ in the orbit space $X_{N\mathboondoxcal{O}} = N\mathboondoxcal{O}/\mathsf{G}_{|\mathboondoxcal{O}}$. It is worth pointing out that the construction of $\Theta_{\bullet,\mathboondoxcal{O}} \big( [f_0\otimes\cdots \otimes f_k]\big)$ is independent of the choices of the subset $K$ and the constant $\varepsilon$. Analogously, using the inverse map $\Theta^{-1}$, we can construct the inverse morphism $(\Theta^{-1})_{\bullet,\mathboondoxcal{O}}$ from $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_{N\mathboondoxcal{O}})$ to $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$, and therefore prove that $\Theta_{\bullet,\mathboondoxcal{O}}$ is a quasi-isomorphism. We leave the details to the diligent reader.
\end{proof} \subsection{Computation of the linear model} We compute in this subsection the homology of $C_\bullet(\mathcal{A}_{N\mathboondoxcal{O}})$. Our method is inspired by the work of Crainic and Moerdijk \cite{CraMoeFGCH}. To start with, recall that we prove in \cite[Cor.~3.11]{PflPosTanGOSPLG} that for a proper Lie groupoid $\mathsf{G}\rightrightarrows M$ and a point $x\in M$ there is a neighborhood $U$ of $x$ in $M$ diffeomorphic to $O\times V_x$, where $O$ is an open ball in the orbit $\mathboondoxcal{O}$ through $x$ centered at $x$, and $V_x$ is an open ball in $N_x \mathboondoxcal{O}$ centered at the origin which is invariant under the isotropy group $\mathsf{G}_x$ of $\mathsf{G}$ at $x$. Under this diffeomorphism, $\mathsf{G}_{|U}$ is isomorphic to the product of the pair groupoid $O\times O\rightrightarrows O$ and the transformation groupoid $\mathsf{G}_x\ltimes V_x\rightrightarrows V_x$. Applying this result to the transformation groupoid $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}\rightrightarrows N\mathboondoxcal{O}$, we conclude that given any $x\in \mathboondoxcal{O}$, there is an open ball $O$ around $x$ in $\mathboondoxcal{O}$ such that the restricted normal bundle $U_x:=N\mathboondoxcal{O}_{|O}$ is diffeomorphic to $N_x\mathboondoxcal{O} \times O$ and $\big(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}\big)_{|U_x}$ is isomorphic to the product of the pair groupoid $O\times O$ and the transformation groupoid $\mathsf{G}_x\ltimes N_x\mathboondoxcal{O}$. Following this local description of $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$, we choose a covering $( O_x)_{x\in \mathboondoxcal{O}}$ of the orbit $\mathboondoxcal{O}$, and therefore also a covering $\mathfrak{U}:= ( U_x)_{x\in \mathboondoxcal{O}}$, $U_x:=O_x\times N_x\mathboondoxcal{O}$, of $N\mathboondoxcal{O}$.
We choose a locally finite countable subcovering $(O_i)_{i\in I}$ of $\mathboondoxcal{O}$ and the associated covering $(U_i)_{i\in I}$ of $N\mathboondoxcal{O}$. Choose functions $\varphi_i\in \mathcal{C}_\textup{c}^\infty(\mathboondoxcal{O})$ such that $(\varphi_i^2)_{i\in I}$ is a partition of unity subordinate to the open covering $( O_i)_{i\in I}$ of $\mathboondoxcal{O}$. Lift each $\varphi_i\in \mathcal{C}_\textup{c}^\infty(\mathboondoxcal{O})$ to $\tilde{\varphi}_i\in \mathcal{C}^\infty (N\mathboondoxcal{O})$ by letting it be constant along the fiber direction. As $\varphi_i$ is compactly supported, $\tilde{\varphi}_i$ is longitudinally compactly supported and therefore belongs to $\mathcal{A}_{N\mathboondoxcal{O}}$. Now consider the groupoid $\mathsf{H}_{\mathfrak{U}}$ over the disjoint union $\sqcup U_i$ whose arrows from $U_i$ to $U_j$ are the arrows in $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$ starting in $U_i$ and ending in $U_j$. Composition of arrows in $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$ equips $\mathsf{H}_{\mathfrak{U}}$ with a natural Lie groupoid structure that is Morita equivalent to $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$. As a consequence, the orbit spaces of the groupoids $\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}$ and $\mathsf{H}_{\mathfrak{U}}$ are naturally homeomorphic, actually even diffeomorphic in the sense of differentiable spaces. We therefore identify them. The following lemma is essentially due to Crainic and Moerdijk \cite{CraMoeFGCH}.
\begin{lemma}\label{lem:covering} The map $\Lambda: A(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}):= \Gamma\big(\mathcal{A}_{\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}}\big)\to A(\mathsf{H}_{\mathfrak{U}}):=\Gamma\big(\mathcal{A}_{\mathsf{H}_{\mathfrak{U}}}\big)$ defined by \[ \Lambda(f):=(\tilde{\varphi}_i f \tilde{\varphi}_j)_{i,j} \] is an algebra homomorphism which induces a quasi-isomorphism $\Lambda_\bullet$ from $C_\bullet\big(A(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O})\big)$ to $C_\bullet\big(A(\mathsf{H}_{\mathfrak{U}})\big)$. In addition, $\Lambda$ induces a quasi-isomorphism of sheaf complexes \[ \Lambda_\bullet: \hat{\mathscr{C}}_{\bullet} (\mathcal{A}_{\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}}) \to \hat{\mathscr{C}}_{\bullet} (\mathcal{A}_{\mathsf{H}_{\mathfrak{U}}}) \] over their joint orbit space $ N\mathboondoxcal{O}/\mathsf{G}_{|\mathboondoxcal{O}} \cong (\mathsf{H}_{\mathfrak{U}})_0/\mathsf{H}_{\mathfrak{U}}$. \end{lemma} \begin{proof} The proof of the claim is a straightforward generalization of the one of \cite[Lemma 5]{CraMoeFGCH}. The slight difference here is that we work with the algebras $A(\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O})$ and $A(\mathsf{H}_{\mathfrak{U}})$ instead of the algebra of compactly supported functions. We skip the proof here to avoid repetition. \end{proof} Next, the groupoid $\mathsf{H}_{\mathfrak{U}}$ can be described more explicitly as follows. Firstly, index the open sets in the covering $(U_i)_{i\in I}$ by natural numbers; in other words, assume $I \subset{\mathbb N}^*$. After possibly reindexing again, one can assume that if $k\in I$, then $l\in I$ for all $1\leq l\leq k$. Secondly, given $i$, write $x \in U_i$ as $(x_{\textup{v}}, x_{\textup{o}})$, where $x_{\textup{v}}\in N_{x_i}\mathboondoxcal{O}$ and $x_{\textup{o}}\in O_i$. Choose a diffeomorphism $\psi_i: O_i\to \mathbb{R}^k$, where $k=\dim(\mathboondoxcal{O})$.
Thirdly, for any $1< i\in I$, choose an arrow $g_i\in \mathsf{G}$ from $x_1$ to $x_i$. The arrow $g_i$ induces an isomorphism between $N_{x_1}\mathboondoxcal{O}$ and $N_{x_i}\mathboondoxcal{O}$, and conjugation by $g_i$ defines an isomorphism from $\mathsf{G}_{x_i}$ to $\mathsf{G}_{x_1}$. Accordingly, $g_i$ induces a groupoid isomorphism between $\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}$ and $\mathsf{G}_{x_i}\ltimes N_{x_i}\mathboondoxcal{O}$. \begin{lemma}\label{lem:productgroupoid} The groupoid $\mathsf{H}_{\mathfrak{U}}$ is isomorphic to the product groupoid \[ \big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I) \times (\mathbb{R}^k\times \mathbb{R}^k) \ . \] \end{lemma} \begin{proof} We define groupoid morphisms \[ \Phi: \: \mathsf{H}_{\mathfrak{U}}\to \big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k) \] and \[ \Psi: \: \big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)\to \mathsf{H}_{\mathfrak{U}}. \] Given an arrow $h\in \mathsf{H}_{\mathfrak{U}}$ with source in $U_i$ and target in $U_j$, we consider $(s(h)_{\textup{o}}, x_i)\in O_i\times O_i$ and $(t(h)_{\textup{o}}, x_j)\in O_j\times O_j$. Define $h_{x_i}\in (\mathsf{G}_{x_i}\ltimes N_{x_i}\mathboondoxcal{O})\times (O_i\times O_i)$ (and $h_{x_j}\in (\mathsf{G}_{x_j}\ltimes N_{x_j}\mathboondoxcal{O})\times (O_j\times O_j)$) by $h_{x_i}=\big((\operatorname{id}, 0), (s(h)_{\textup{o}}, x_i)\big)$ (and $h_{x_j}=\big((\operatorname{id}, 0), (t(h)_{\textup{o}}, x_j)\big)$). The arrow $g_j^{-1}h_{x_j}^{-1}h h_{x_i}g_i$ belongs to ${\mathsf{H}_\mathfrak{U}}_{|U_1}$ and its component in $O_1\times O_1$ is $(x_1, x_1)$.
The arrow $\Phi (h)$ now is defined to be \[ \Phi(h):=\big(g_j^{-1}h_{x_j}^{-1}h h_{x_i}g_i,\, (i,j),\, \big(\psi_i(s(h)_{\textup{o}}), \psi_j(t(h)_{\textup{o}})\big)\big)\in \big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k). \] Similarly, given $(k, (i,j), (y_i, y_j))\in \big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)$, define \[ h_{y_i}:=\big((\operatorname{id}, 0), (\psi_i^{-1}(y_i), x_i)\big)\in \mathsf{G}_{|U_i},\qquad h_{y_j}:=\big((\operatorname{id}, 0), (\psi^{-1}_j(y_j), x_j)\big)\in \mathsf{G}_{|U_j}, \] and $h_1:=\big(k, (x_1, x_1)\big)\in \mathsf{G}_{|U_1}$. Notice $g_{j}h_1g_{i}^{-1}$ is an arrow in $\mathsf{H}_{\mathfrak{U}}$ starting from $x_i$ and ending at $x_j$. We can now define $\Psi(k,(i,j), (y_i, y_j))$ to be \[ \Psi(k,(i,j), (y_i, y_j)):=h_{y_j}g_{j}h_1g_{i}^{-1}h_{y_i}^{-1}\in \mathsf{H}_{\mathfrak{U}}. \] It is straightforward to check that $\Phi$ and $\Psi$ are groupoid morphisms and inverse to each other. \end{proof} Let $A\big((\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k) \big)$ be the space of global sections of the convolution sheaf $\mathcal{A}_{(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)}$. With the maps $\Phi$ and $\Psi$ introduced in Lemma \ref{lem:productgroupoid}, we have the following induced isomorphisms of chain complexes, \[ \begin{split} \Phi_\bullet \!:\:\, &C_\bullet \Big(A\big(\big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k) \big)\Big)\to C_\bullet\big(A(\mathsf{H}_{\mathfrak{U}})\big),\\ \Psi_\bullet \!:\:\, &C_\bullet\big(A(\mathsf{H}_{\mathfrak{U}})\big)\to C_\bullet \Big(A\big(\big(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}\big)\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k) \big)\Big).
\end{split} \] Since they are induced by an isomorphism of groupoids, we also obtain a pair of mutually inverse isomorphisms of complexes of sheaves which are denoted by the same symbols, \[ \begin{split} \Phi_\bullet \!:\:\, &\hat{\mathscr{C}}_{\bullet} \left(\mathcal{A}_{(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)}\right)\to \hat{\mathscr{C}}_\bullet\left(\mathcal{A}_{\mathsf{H}_{\mathfrak{U}}}\right),\\ \Psi_\bullet \!:\:\, &\hat{\mathscr{C}}_\bullet\left(\mathcal{A}_{\mathsf{H}_{\mathfrak{U}}}\right)\to \hat{\mathscr{C}}_\bullet \left(\mathcal{A}_{(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)}\right). \end{split} \] Observe that both groupoids $I\times I$ and $\mathbb{R}^k\times \mathbb{R}^k$ have only one orbit. Therefore, longitudinally compactly supported functions on them are the same as compactly supported functions. Note that here $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ denotes the algebra of longitudinally compactly supported smooth functions on $\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}$. By Lemma \ref{lem:productgroupoid}, the groupoid algebra $A(\mathsf{H}_{\mathfrak{U}})$ is isomorphic to $A\big((\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k) \big)$. The latter can be identified with $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\,\hat{\otimes}\, {\mathbb R}^{I\times I}\,\hat{\otimes}\, \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)$, where ${\mathbb R}^{I\times I}$ is the space of finitely supported functions on $I\times I$.
Note that $I\times I$ and $\mathbb{R}^k\times \mathbb{R}^k$ both carry the structure of a pair groupoid, so the corresponding products on ${\mathbb R}^{I\times I}$ and $\mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)$ are given in both cases by convolution which we denote as usual by $*$. Let $\tau_I$ be the trace on ${\mathbb R}^{I\times I}$ defined by \[ \tau_I(d) :=\sum_i d_{ii} \ , \quad d = (d_{ij})_{i,j \in I} \in {\mathbb R}^{I\times I} \] and let $\tau_{\mathbb{R}^k}$ be the trace on $\mathcal{C}_\textup{c}^\infty (\mathbb{R}^k\times \mathbb{R}^k)$ given by \[ \tau_{\mathbb{R}^k}(\alpha):=\int_{\mathbb{R}^k} \alpha(x,x)dx \ , \quad \alpha \in \mathcal{C}_\textup{c}^\infty (\mathbb{R}^k\times \mathbb{R}^k) \ , \] where $dx$ is the Lebesgue measure on $\mathbb{R}^k$. Define a map \[ \tau_m: C_m\big(\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\,\hat{\otimes}\, {\mathbb R}^{I\times I}\,\hat{\otimes}\, \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)\big)\to C_m\big(\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\big) \] as follows: \[ \begin{split} \tau_m \, & \big( ( f_0\otimes \cdots \otimes f_m)\otimes (d_0\otimes \cdots\otimes d_m) \otimes (\alpha_0\otimes \cdots \otimes \alpha_m)\big)\\ &:=\tau_I(d_0 * \cdots * d_m) \tau_{\mathbb{R}^k}(\alpha_0 * \cdots * \alpha_m) \, f_0\otimes \cdots \otimes f_m\ , \\ &f_0 ,\cdots , f_m \in \mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}) , \: d_0, \cdots , d_m \in {\mathbb R}^{I\times I}, \: \alpha_0,\cdots,\alpha_m\in\mathcal{C}_\textup{c}^\infty (\mathbb{R}^k\times \mathbb{R}^k) \ . \end{split} \] It is easy to check using the tracial property of $\tau_I$ and $\tau_{\mathbb{R}^k}$ that $\tau_\bullet$ is a chain map. 
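To make the tracial argument concrete, note that under the pair groupoid structures the convolution products are just matrix multiplication on ${\mathbb R}^{I\times I}$ and composition of smoothing kernels on $\mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)$, so $\tau_I$ and $\tau_{\mathbb{R}^k}$ are indeed traces. In degree one, for instance, the chain map property of $\tau_\bullet$ amounts to the following direct check, where $c = (f_0\otimes f_1)\otimes (d_0\otimes d_1)\otimes (\alpha_0\otimes \alpha_1)$:
\[
\begin{split}
 \tau_0 (b c) & = \tau_I(d_0 * d_1)\, \tau_{\mathbb{R}^k}(\alpha_0 * \alpha_1)\, f_0 f_1 - \tau_I(d_1 * d_0)\, \tau_{\mathbb{R}^k}(\alpha_1 * \alpha_0)\, f_1 f_0 \\
 & = \tau_I(d_0 * d_1)\, \tau_{\mathbb{R}^k}(\alpha_0 * \alpha_1)\, \big( f_0 f_1 - f_1 f_0 \big) = b\, \tau_1 (c) \ ,
\end{split}
\]
where the middle equality uses the tracial identities $\tau_I(d_1 * d_0) = \tau_I(d_0 * d_1)$ and $\tau_{\mathbb{R}^k}(\alpha_1 * \alpha_0) = \tau_{\mathbb{R}^k}(\alpha_0 * \alpha_1)$. The verification in higher degrees is completely analogous.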
Moreover, observe that the whole argument works not only for the global section algebra $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ but for any of the section algebras $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes V)$ with $V \subset N_{x_1}\mathboondoxcal{O}$ an open $\mathsf{G}_{x_1}$-invariant subspace. So eventually we obtain a morphism of sheaf complexes \[ \tau_\bullet : \hat{\mathscr{C}}_\bullet \big(\mathcal{A}_{\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\, \hat{\otimes}\, {\mathbb R}^{I\times I} \, \hat{\otimes} \, \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)}\big) \to \hat{\mathscr{C}}_\bullet \big(\mathcal{A}_{\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})}\big) \] over the orbit space $N_{x_1}\mathboondoxcal{O}/\mathsf{G}_{x_1}$. \begin{lemma}\label{lem:linearmodel} The chain map \[ \tau_\bullet : C_\bullet \big(\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\, \hat{\otimes}\, {\mathbb R}^{I\times I} \, \hat{\otimes} \, \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)\big)\to C_\bullet \big(\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\big) \] is a quasi-isomorphism. More generally, \[ \tau_\bullet : \hat{\mathscr{C}}_\bullet \big(\mathcal{A}_{\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\, \hat{\otimes}\, {\mathbb R}^{I\times I} \, \hat{\otimes} \, \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)}\big) \to \hat{\mathscr{C}}_\bullet \big(\mathcal{A}_{\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})}\big) \] is an isomorphism of complexes of sheaves. \end{lemma} \begin{proof} Choose a function $\beta\in \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k)$ such that \[ \int_{\mathbb{R}^k} \beta^2(x)dx=1. \] Let $\alpha\in \mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k)$ be the function $\beta\otimes \beta$.
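Note that the normalization of $\beta$ makes $\alpha$ an idempotent of trace one with respect to the convolution product: for all $x,y\in \mathbb{R}^k$,
\[
 (\alpha * \alpha)(x,y) = \int_{\mathbb{R}^k} \beta(x)\beta(z)\,\beta(z)\beta(y)\, dz = \beta(x)\beta(y) \int_{\mathbb{R}^k} \beta^2(z)\, dz = \alpha(x,y)
 \quad\text{and}\quad
 \tau_{\mathbb{R}^k}(\alpha) = \int_{\mathbb{R}^k} \beta^2(x)\, dx = 1 \ .
\]
These two properties are exactly what is needed below.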
Define an algebra morphism \[ j_\alpha: \mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\to \mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\,\hat{\otimes}\,{\mathbb R}^{I\times I} \, \hat{\otimes} \,\mathcal{C}_\textup{c}^\infty(\mathbb{R}^k\times \mathbb{R}^k) \] by \[ j_\alpha(f)=f\otimes \delta_{(1,1)}\otimes \alpha \ , \] where $\delta_{(1,1)}$ is the function on $I\times I$ that is $1$ on $(1,1)$ and $0$ otherwise. Let $j_{\alpha,\bullet}$ denote the induced map on the chain complex. It is easy to check that $\tau_\bullet \circ j_{\alpha,\bullet}=\operatorname{id}$. Applying $j_{\alpha, \bullet}\circ \tau_\bullet$ to \[ (f_0\otimes \cdots \otimes f_m)\otimes (d_0\otimes \cdots \otimes d_m)\otimes (\alpha_0\otimes \cdots \otimes \alpha_m)\] gives \[ \tau_I (d_0 * \cdots * d_m)\tau_{\mathbb{R}^k}(\alpha_0 * \cdots * \alpha_m)\big( f_0\otimes \cdots \otimes f_m\big)\otimes \big(\delta_{(1,1)}\otimes\cdots \otimes \delta_{(1,1)}\big)\otimes \big(\alpha\otimes \cdots \otimes \alpha\big). \] Following the proof of Lemma \ref{lem:localization-homotopies}, we consider the unital algebra $\widetilde{\mathcal{C}}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ which is the direct sum of $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ with $\mathcal{C}^\infty(N_{x_1}\mathboondoxcal{O})^{\mathsf{G}_{x_1}}$ and product structure given by Eq.~\eqref{eq:product}. We then have the following split exact sequence in the category of bornological algebras \begin{equation}\label{eq:unitalization} 0\rightarrow \mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\rightarrow \widetilde{\mathcal{C}}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})\rightarrow \mathcal{C}^\infty(N_{x_1}\mathboondoxcal{O})^{\mathsf{G}_{x_1}} \rightarrow 0.
\end{equation} It is not hard to see that the chain maps $\tau_\bullet$ and $j_{\alpha, \bullet}$ extend to the corresponding versions of the algebras $ \widetilde{\mathcal{C}}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ and $\mathcal{C}^\infty(N_{x_1}\mathboondoxcal{O})^{\mathsf{G}_{x_1}}$. As both algebras are unital, the homotopy maps constructed in the proof of \cite[Lemma 6]{CraMoeFGCH} can be applied to conclude that $j_{\alpha, \bullet}\tau_\bullet$ is a quasi-isomorphism for $ \widetilde{\mathcal{C}}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ and $\mathcal{C}^\infty(N_{x_1}\mathboondoxcal{O})^{\mathsf{G}_{x_1}}$. As the algebra $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ is $H$-unital, we consider the long exact sequence associated to the short exact sequence (\ref{eq:unitalization}). As $j_{\alpha, \bullet}$ and $\tau_{\bullet}$ are quasi-isomorphisms on $ \widetilde{\mathcal{C}}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$ and $\mathcal{C}^\infty(N_{x_1}\mathboondoxcal{O})^{\mathsf{G}_{x_1}}$, we conclude by the five lemma that $\tau_\bullet$ and $j_{\alpha, \bullet}$ are also quasi-isomorphisms for $\mathcal{C}^\infty(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O})$. The argument generalizes immediately to the sheaf case. \end{proof} Summarizing Proposition \ref{prop:stalk-linearization} -- Lemma \ref{lem:linearmodel}, we thus obtain the following local model for the stalk complex $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}}(\mathcal{A}_\mathsf{G})$.
\begin{proposition}\label{prop:local-model} For every orbit $\mathboondoxcal{O}\in X$ the composition $L_{\bullet,\mathboondoxcal{O}}:=\tau_{\bullet,0} \circ \Psi_{\bullet,0}\circ \Lambda_{\bullet,0}\circ \Theta_{\bullet,\mathboondoxcal{O}}$, where $\tau_{\bullet,0}$, $\Psi_{\bullet,0}$, and $\Lambda_{\bullet,0}$ denote the respective sheaf morphisms localized at the zero sections, is a quasi-isomorphism, \[ \begin{split} L_{\bullet,\mathboondoxcal{O}}: \hat{\mathscr{C}}_{\bullet,\mathboondoxcal{O}}(\mathcal{A}_{\mathsf{G}})&\stackrel{\Theta_{\bullet,\mathboondoxcal{O}}}{\longrightarrow}\hat{\mathscr{C}}_{\bullet,\mathboondoxcal{O}} \big(\mathcal{A}_{\mathsf{G}_{|\mathboondoxcal{O}}\ltimes N\mathboondoxcal{O}}\big)\stackrel{\Lambda_{\bullet,0}}{\longrightarrow}\hat{\mathscr{C}}_{\bullet,0} \Big(\mathcal{A}_{\mathsf{H}_{\mathfrak{U}}}\Big) \\ &\stackrel{\Psi_{\bullet,0}}{\longrightarrow} \hat{\mathscr{C}}_{\bullet,0} \big( \mathcal{A}_{(\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}) \times (I\times I)\times (\mathbb{R}^k\times \mathbb{R}^k)}\big)\stackrel{\tau_{\bullet,0}}{\longrightarrow}\hat{\mathscr{C}}_{\bullet,0} \big(\mathcal{A}_{\mathsf{G}_{x_1}\ltimes N_{x_1}\mathboondoxcal{O}}\big). \end{split} \] \end{proposition} \section{Localization of the Hochschild chain complex} \label{Sec:LHCH} In this section, we apply the localization method in Hochschild homology theory, partially following \cite{BraPflHAWFSS}, to the Hochschild chain complex of the convolution algebra. \subsection{Sheaves of bornological algebras over a differentiable space} We start with a (reduced separated second countable) differentiable space $(X, \mathcal{C}^\infty)$ and assume that $\mathcal{A}$ is a sheaf of ${\mathbb R}$-algebras on $X$. We will denote by $A=\mathcal{A} (X)$ its space of global sections. 
We assume further that $\mathcal{A}$ is a $\mathcal{C}^\infty_X$-module sheaf and that every section space $\mathcal{A} (U)$ with $U\subset X$ open carries the structure of a nuclear LF-space such that each of the restriction maps $\mathcal{A}(U) \to \mathcal{A} (V)$ is continuous and multiplication in $\mathcal{A} (U)$ is separately continuous. Finally, it is assumed that the action $\mathcal{C}^\infty (U) \times \mathcal{A} (U) \to \mathcal{A}(U)$ is continuous. As a consequence of our assumptions, each of the spaces $\mathcal{A}(U)$ carries a natural bornology, namely the one consisting of all von Neumann bounded subsets, i.e.~of all subsets $B \subset \mathcal{A}(U)$ which are absorbed by every neighborhood of the origin. Moreover, by \cite[Lemma 1.30]{MeyLACH}, separate continuity of multiplication in $\mathcal{A}(U)$ entails that the product map is a jointly bounded map, hence induces a bounded map $\mathcal{A}(U) \hat{\otimes} \mathcal{A}(U) \to \mathcal{A} (U)$ on the complete (projective) bornological tensor product of $\mathcal{A}(U)$ with itself. \begin{remark} \begin{enumerate} \item We refer to Appendix \ref{AppCyHomBornAlg} for basic definitions and to \cite{MeyLACH} for further details on bornological vector spaces, their (complete projective) tensor products, and the use of these concepts within cyclic homology theory. We always assume the bornologies in this paper to be convex vector bornologies. \item In this paper, we will often silently make use of the fact that for two nuclear LF-spaces $V_1$ and $V_2$ their complete bornological tensor product $V_1 \hat{\otimes} V_2$ coincides (up to natural equivalence) with the complete inductive tensor product $V_1 \hat{\otimes}_\iota V_2 $ endowed with the bornology of von Neumann bounded sets. Moreover, $V_1 \hat{\otimes}_\iota V_2 $ is again a nuclear LF-space. We refer to \cite[A.1.4]{MeyACH} for a proof of these statements.
Note that for Fr\'echet spaces the projective and inductive topological tensor product coincide. \end{enumerate} \end{remark} \begin{definition} A sheaf of algebras $\mathcal{A}$ defined over a differentiable space $(X,\mathcal{C}_X^\infty)$ such that the above assumptions are fulfilled will be called a \emph{sheaf of bornological algebras over} $(X,\mathcal{C}_X^\infty)$. If all $\mathcal{A} (U)$ are unital and the restriction maps $\mathcal{A} (U) \rightarrow \mathcal{A} (V)$ are unital homomorphisms, we say that $\mathcal{A}$ is a \emph{sheaf of unital bornological algebras} or just that $\mathcal{A} (U)$ is \emph{unital}. If every section space $\mathcal{A} (U)$ is an H-unital algebra, we call $\mathcal{A}$ a \emph{sheaf of H-unital bornological algebras} or briefly \emph{H-unital}. Finally, we call $\mathcal{A}$ an \emph{admissible sheaf of bornological algebras} if $\mathcal{A}$ is H-unital and if for each $k\in {\mathbb N}^*$ the presheaf assigning to an open $U\subset X$ the $k$-times complete bornological tensor product $\mathcal{A}(U)^{\hat{\otimes}k}$ is even a sheaf on $X$. \end{definition} \begin{example} \begin{enumerate} \item The structure sheaf $\mathcal{C}_X^\infty$ of a differentiable space $(X,\mathcal{C}_X^\infty)$ is an example of an admissible sheaf of unital bornological algebras over $(X,\mathcal{C}_X^\infty)$. \item Given a proper Lie groupoid $\mathsf{G}$, the convolution sheaf $\mathcal{A}$ is an admissible sheaf of bornological algebras over the orbit space $(X,\mathcal{C}_X^\infty)$ of the groupoid. This follows by construction of $\mathcal{A}$, Prop.~\ref{Prop:ProdConvSheaves} and \cite[Prop.~2]{CraMoeFGCH}, which entails H-unitality of each of the section spaces $\mathcal{A}(U)$. \end{enumerate} \end{example} \subsection{The Hochschild homology sheaf} Assume that $\mathcal{A}$ is a sheaf of bornological algebras over the differentiable space $(X,\mathcal{C}_X^\infty)$.
We will construct the Hochschild homology sheaf $\mathscr{H}\mathscr{H}_\bullet (\mathcal{A})$ associated to $\mathcal{A}$ as a generalization of Hochschild homology for algebras; see \cite{LodCH} for the latter and Appendix \ref{AppCyHomBornAlg} for basic definitions and notation used. For each $k\in {\mathbb N}^*$ let $\mathscr{C}_k (\mathcal{A})$ denote the presheaf on $X$ which assigns to an open $U\subset X$ the $(k+1)$-times complete bornological tensor product $\mathcal{A}(U)^{\hat{\otimes}(k+1)}$. Note that in general, $\mathscr{C}_k (\mathcal{A})$ is not a sheaf. We denote by $\hat{\mathscr{C}}_k (\mathcal{A})$ the sheafification of $\mathscr{C}_k (\mathcal{A})$. Observe that for $V\subset U \subset X$ open the Hochschild boundary \[ b : \mathscr{C}_k (\mathcal{A}) (U) \to \mathscr{C}_{k-1} (\mathcal{A}) (U) \] commutes with the restriction maps $r^U_V: \mathscr{C}_k (\mathcal{A}) (U) \to \mathscr{C}_k (\mathcal{A}) (V)$, hence we obtain a complex of presheaves $\big( \mathscr{C}_\bullet (\mathcal{A}) , b \big)$ and by the universal property of the sheafification a sheaf complex $\big( \hat{\mathscr{C}}_\bullet (\mathcal{A}) , b \big)$. The Hochschild homology sheaf $\mathscr{H}\mathscr{H}_\bullet (\mathcal{A})$ is now defined as the homology sheaf of $\big( \hat{\mathscr{C}}_\bullet (\mathcal{A}) , b \big)$, that is \[ \mathscr{H}\mathscr{H}_k (\mathcal{A}) := \ker \big( b : \hat{\mathscr{C}}_k (\mathcal{A}) \to \hat{\mathscr{C}}_{k-1} (\mathcal{A}) \big) / \operatorname{im} \big( b : \hat{\mathscr{C}}_{k+1} (\mathcal{A}) \to \hat{\mathscr{C}}_k (\mathcal{A}) \big) . \] By construction, the stalk $\mathscr{H}\mathscr{H}_k (\mathcal{A})_{\scriptstyle\mathboondoxcal{O}}$ at a point ${\scriptstyle\mathboondoxcal{O}} \in X$ coincides with the $k$-th Hochschild homology $HH_k (\mathcal{A}_{\scriptstyle\mathboondoxcal{O}} )$ of the stalk $\mathcal{A}_{\scriptstyle\mathboondoxcal{O}}$.
On the other hand, $HH_k (\mathcal{A} (X) )$ need in general not coincide with the space $\mathscr{H}\mathscr{H}_k (\mathcal{A}) (X)$ of global sections of the $k$-th Hochschild homology sheaf. The main goal of this section is to prove the following result, which is crucial for our study of the Hochschild homology of the convolution algebra of a proper Lie groupoid, but might also be interesting in its own right. Its proof will cover the remainder of Section \ref{Sec:LHCH}. \begin{theorem} \label{thm:hochschild-homology-global-sections} Let $\mathcal{A}$ be the convolution sheaf of a proper Lie groupoid $\mathsf{G}$. Then the natural map in Hochschild homology \[ HH_\bullet \big( \mathcal{A} (X)\big) \to \mathscr{H}\mathscr{H}_\bullet (\mathcal{A}) (X) = \Gamma \big(X, \mathscr{H}\mathscr{H}_\bullet (\mathcal{A}) \big) \] is an isomorphism. \end{theorem} Before we can spell out the proof we need several auxiliary tools and results. \subsection{The localization homotopies}\label{sec:localization-homotopies} Throughout this paragraph we assume that $\mathcal{A}$ is an admissible sheaf of bornological algebras over the differentiable space $(X,\mathcal{C}^\infty_X)$. To construct the localization morphisms, observe that the complex $C_\bullet (A)$ inherits from $A=\mathcal{A}(X)$ the structure of a $\mathcal{C}^\infty (X)$-module. More precisely, the corresponding action is given by \begin{equation} \label{eq:DefActionSmoothFctChains1} \mathcal{C}^\infty (X) \times C_k(A) \rightarrow C_k(A), \quad (\varphi , a_0 \otimes \ldots \otimes a_k) \mapsto ( \varphi a_0 ) \otimes a_1 \otimes \ldots \otimes a_k \: . \end{equation} By definition, it is immediate that the $\mathcal{C}^\infty (X)$-action commutes with the operators $b$ and $b'$, hence induces a chain map $ \mathcal{C}^\infty (X) \times C_\bullet (A) \to C_\bullet (A)$.
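In the lowest degree this commutation can be checked by hand, using that by the $\mathcal{C}^\infty_X$-module sheaf assumption multiplication in $A$ is $\mathcal{C}^\infty (X)$-bilinear: for $\varphi\in \mathcal{C}^\infty (X)$ and $a_0\otimes a_1\in C_1(A)$ one finds
\[
 b\big( \varphi \cdot (a_0 \otimes a_1) \big) = (\varphi a_0)\, a_1 - a_1\, (\varphi a_0) = \varphi\, (a_0 a_1) - \varphi\, (a_1 a_0) = \varphi \cdot b ( a_0 \otimes a_1 ) \ ,
\]
and the computation in higher degrees is analogous.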
In a similar fashion we define an action of $\mathcal{C}^\infty (X^{k+1}) \cong \big( \mathcal{C}^\infty (X)\big)^{\hat{\otimes} (k+1)}$ on $C_k(A)$ by \begin{equation} \label{eq:DefActionSmoothFctChains2} (\varphi_0 \otimes \ldots \otimes \varphi_k , a_0 \otimes \ldots \otimes a_k) \mapsto ( \varphi_0 a_0 ) \otimes \ldots \otimes (\varphi_k a_k ). \end{equation} This allows us to speak of the \emph{support} of a chain $c \in C_k(A)$. It is defined as the complement of the largest open subset $U$ in $ X^{k+1}$ such that $\varphi \cdot c = 0$ for all $\varphi \in \mathcal{C}^\infty (X^{k+1})$ with $\operatorname{supp} \varphi \subset U$. Next choose a metric $d: X \times X \to {\mathbb R}$ such that the function $d^2$ lies in $\mathcal{C}^\infty (X\times X)$. Such a metric exists by Corollary \ref{Cor:SmoothMetricAppendix}. Then fix a smooth function $\varrho : {\mathbb R} \rightarrow [0,1]$ which has support in $(-\infty , \frac 34]$ and satisfies $\varrho (r) =1$ for $r\leq \frac 12$. For $\varepsilon >0$ we denote by $\varrho_\varepsilon$ the rescaled function $r \mapsto \varrho \big(\frac{r}{\varepsilon^2} \big)$. Now define functions $\Psi_{k,i,\varepsilon} \in \mathcal{C}^\infty (X^{k+1})$ for $k\in {\mathbb N}$ and $i= 0, \ldots, k+1$ by \begin{equation} \Psi_{k,i,\varepsilon} (x_0, \ldots, x_k) = \prod_{j=0}^{i-1} \varrho_\varepsilon \big( d^2( x_j , x_{j+1}) \big), \quad \text{where $x_0,\ldots , x_k \in X$ and $x_{k+1}:= x_0$} \: . \end{equation} Moreover, put $\Psi_{k,\varepsilon} := \Psi_{k,k+1,\varepsilon}$. Using the $\mathcal{C}^\infty (X^{k+1})$-action on $C_k(A)$ we obtain for each $\varepsilon >0$ a graded map of degree $0$ \[ \Psi_\varepsilon : C_\bullet (A) \to C_\bullet(A), \: C_k(A) \ni c \mapsto \Psi_{k,\varepsilon} c \: . \] One immediately checks that $\Psi_\varepsilon$ commutes with the face maps $b_i$ and the cyclic operator $t_k$. Hence, $\Psi_\varepsilon$ is a chain map. One even has more.
\begin{lemma}\label{lem:localization-homotopies} Let $\mathcal{A}$ be an admissible sheaf of bornological algebras over the differentiable space $(X,\mathcal{C}^\infty)$, and put $A:= \mathcal{A}(X)$. Let $d$ be a metric on $X$ such that $d^2$ is smooth and fix a smooth map $\varrho:{\mathbb R}\to [0,1]$ with support in $(-\infty,\frac 34]$ such that $\varrho|_{(-\infty,\frac12]}=1$. Then, for each $\varepsilon >0$, the chain map $\Psi_\varepsilon: C_\bullet (A) \to C_\bullet(A)$ is homotopic to the identity morphism on $C_\bullet (A)$. \end{lemma} \begin{proof} Let us first consider the case where $\mathcal{A}$ is a sheaf of unital algebras. The Hochschild chain complex then is a simplicial module with face maps $b_i$ and the degeneracy map \[ s_{k,i} : C_k (A) \rightarrow C_{k+1} (A), \; a_0 \otimes \ldots \otimes a_k \mapsto a_0 \otimes \ldots \otimes a_i \otimes 1 \otimes a_{i+1} \otimes \ldots \otimes a_k \ , \] where $k\in {\mathbb N}$, $i = 0, \ldots , k$. Define $\mathcal{C}^\infty (X)$-module maps $\eta_{k,i,\varepsilon} : C_k (A) \rightarrow C_{k+1} (A)$ for $k\in {\mathbb N}$, $i=1,\cdots,k+2$ and $\varepsilon >0$ by \begin{equation} \eta_{k,i,\varepsilon} (c) := \begin{cases} \Psi_{k+1,i,\varepsilon}\cdot ( s_{k,i-1} c )& \text{for $i\leq k+1$},\\ 0 &\text{for $i=k+2$}. \end{cases} \end{equation} Moreover, put $C_{-1} (A) := \{ 0\}$ and let $\eta_{-1,1,\varepsilon}: C_{-1} (A) \to C_0 (A)$ be the $0$-map. For $k\geq 1$ and $i = 2,\cdots, k$ one then computes \begin{equation*} \begin{split} ( b \eta_{k,i,\varepsilon} + \eta_{k-1,i,\varepsilon} b) c = \, & (-1)^{i-1} \Psi_{k,i-1,\varepsilon} c \, + \Psi_{k,i-1,\varepsilon} \sum_{j=0}^{i-2} \, (-1)^j \, s_{k-1,i-2} b_{k,j} c \, + \\ & + (-1)^i \Psi_{k,i,\varepsilon} c \, + \Psi_{k,i,\varepsilon} \sum_{j=0}^{i-1} \, (-1)^j \, s_{k-1,i-1} b_{k,j} c \ .
\end{split} \end{equation*} For the case $i=1$ one obtains \begin{equation*} ( b \eta_{k,1,\varepsilon} + \eta_{k-1,1,\varepsilon} b) c = c \, - \, \Psi_{k,1,\varepsilon} c \, + \, \Psi_{k,1,\varepsilon} s_{k-1,0} b_{k,0} c \ , \end{equation*} and for $i=k+1$ \begin{equation*} ( b \eta_{k,k+1,\varepsilon} + \eta_{k-1,k+1,\varepsilon} b ) c = \Psi_{k,k,\varepsilon} (-1)^k c \, + \, \Psi_{k,k,\varepsilon} \sum_{j=0}^{k-1} \, (-1)^j \, s_{k-1,k-1} b_{k,j} c \, + \, (-1)^{k+1} \Psi_{k,\varepsilon} \, c . \end{equation*} Finally, one checks for $k=0$ and $i=1$ \begin{equation*} ( b \eta_{0,1,\varepsilon} + \eta_{-1,1,\varepsilon} b) c = b \eta_{0,1,\varepsilon} c = 0 \ . \end{equation*} These formulas immediately entail that the maps \begin{displaymath} \begin{split} H_{k,\varepsilon} & = \sum_{i=1}^{k+1} \, (-1)^{i+1} \, \eta_{k,i,\varepsilon} : C_k (A)\rightarrow C_{k+1} (A) \end{split} \end{displaymath} form a homotopy between the identity and the localization morphism $\Psi_\varepsilon$. More precisely, \begin{align} \label{EqHomHom} \big( b H_{k,\varepsilon} + H_{k-1,\varepsilon} b \big) c & = c - \Psi_\varepsilon \, c \quad \text{for all } k\in {\mathbb N} \text{ and } c\in C_k (A) \ . \end{align} This finishes the proof of the claim in the unital case. Now let us consider the general case, where $\mathcal{A}$ is assumed to be a sheaf of H-unital but not necessarily unital algebras. Consider the direct sum of sheaves $\mathcal{A} \oplus \mathcal{C}^\infty_X$, denote it by $\widetilde{\mathcal{A}}$, and put $\widetilde{A} :=\widetilde{\mathcal{A}}(X)$. We turn $\widetilde{\mathcal{A}}$ into a sheaf of unital bornological algebras by defining the product of $(f_1,h_1), (f_2,h_2) \in \widetilde{\mathcal{A}}(U)$ as \begin{equation} \label{eq:product} (f_1,h_1)\cdot (f_2,h_2) := (h_2\, f_1 + h_1\, f_2 + f_1\, f_2 , h_1\, h_2) .
\end{equation} One obtains a split short exact sequence in the category of bornological algebras \begin{displaymath} \xymatrix{ 0 \ar[r] & A \ar[r] & \widetilde{A} \ar[r]^q & \mathcal{C}^\infty (X) \ar@<1ex>@{-->}[l]^i \ar[r] & 0 } \ . \end{displaymath} This gives rise to a diagram of chain complexes and chain maps \begin{equation} \label{diag} \xymatrix{ 0 \ar[r] & \ker_\bullet q_* \ar@{^{(}->}[r] \ar@<1ex>@{-->}[d]^{\kappa} & C_\bullet (\widetilde{A}) \ar[r]^{q_*} & C_\bullet (\mathcal{C}^\infty (X)) \ar@<1ex>@{-->}[l]^{i_*} \ar[r] & 0\\ & C_\bullet (A) \ar[u]^{\iota} , } \end{equation} where the row is split exact, and $\iota$ denotes the canonical embedding. Since $A$ is H-unital, $\iota$ is a quasi-isomorphism. Because the chain complexes $\ker_\bullet q_*$ and $C_\bullet (A)$ are bounded from below, there exists a chain map $\kappa$ which is left inverse to $\iota$. Note that the components $\kappa_k$ need not be bounded maps between bornological spaces. By construction, $\Psi_\varepsilon$ acts on each of the chain complexes within the diagram, and all chain maps (besides possibly $\kappa$) commute with this action. By the first part of the proof we have an algebraic homotopy $H : C_\bullet (\widetilde{A}) \to C_{\bullet+1} (\widetilde{A})$ such that \[ \operatorname{id} - \Psi_\varepsilon = b H + H b \: . \] Define $F : C_\bullet (A) \to C_{\bullet+1} (A)$ by $F := \kappa (\operatorname{id} -i_*q_*) H \iota$. Note that $F$ is indeed well-defined, since $q_*(\operatorname{id} -i_*q_*) =0$. Now compute for $c \in C_k (A)$ \[ (bF +Fb)c = \kappa (\operatorname{id} -i_*q_*) (bH + H b) \iota c = \kappa (\operatorname{id} -i_*q_*) (\iota c -\Psi_\varepsilon \iota c) = c -\Psi_\varepsilon \, c \: . \] Hence $F$ is a homotopy between the identity and $\Psi_\varepsilon$, and the claim is proved. 
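In the unital case the computation for $i=1$ above reduces, once all localization factors $\Psi_{k,i,\varepsilon}$ are replaced by the identity, to the purely simplicial identity $(b\, s_{k,0} + s_{k-1,0}\, b)\, c = s_{k-1,0}\, b_{k,0}\, c$. This identity, together with $b^2 = 0$, can be checked mechanically on formal Hochschild chains; the following sketch does so over the algebra of $2\times 2$ integer matrices (the dictionary encoding of chains is of course only illustrative).

```python
def mmul(A, B):
    # product of 2x2 integer matrices encoded as nested tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

ONE = ((1, 0), (0, 1))

def add(out, tensor, coeff):
    out[tensor] = out.get(tensor, 0) + coeff
    if out[tensor] == 0:
        del out[tensor]

def face(tensor, j):
    # j-th Hochschild face map; j == len(tensor) - 1 is the cyclic face
    k = len(tensor) - 1
    if j < k:
        return tensor[:j] + (mmul(tensor[j], tensor[j + 1]),) + tensor[j + 2:]
    return (mmul(tensor[k], tensor[0]),) + tensor[1:k]

def b(chain):
    # Hochschild boundary of a formal chain {tensor: coefficient}
    out = {}
    for tensor, coeff in chain.items():
        for j in range(len(tensor)):
            add(out, face(tensor, j), (-1) ** j * coeff)
    return out

def s0(chain):
    # degeneracy s_{k,0}: insert the unit after the zeroth slot
    out = {}
    for tensor, coeff in chain.items():
        add(out, tensor[:1] + (ONE,) + tensor[1:], coeff)
    return out

def b0(chain):
    # the zeroth face map b_{k,0} alone
    out = {}
    for tensor, coeff in chain.items():
        add(out, face(tensor, 0), coeff)
    return out

def plus(x, y):
    out = dict(x)
    for tensor, coeff in y.items():
        add(out, tensor, coeff)
    return out

# a generic 2-chain a0 (x) a1 (x) a2
a0, a1, a2 = ((1, 2), (3, 4)), ((0, 1), (5, 2)), ((7, 1), (1, 3))
c = {(a0, a1, a2): 1}
assert plus(b(s0(c)), s0(b(c))) == s0(b0(c))   # (b s_0 + s_0 b) c = s_0 b_0 c
assert b(b(c)) == {}                           # b is a differential
```

The two cancelling terms $b_0 s_0 = b_1 s_0 = \operatorname{id}$ are removed automatically by the zero-coefficient cleanup in `add`, mirroring the cancellation in the displayed computation.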
\end{proof} \begin{lemma} \label{lem:homotopy-subordinate-partition-unity} Let $\mathcal{A}$ be an admissible sheaf of bornological algebras over the differentiable space $(X,\mathcal{C}^\infty)$, put $A:= \mathcal{A}(X)$, and let the metric $d$ and the cut-off function $\varrho$ be as in the preceding lemma. Assume that $(\varphi_l)_{l\in {\mathbb N}}$ is a smooth locally finite partition of unity and that $(\varepsilon_l)_{l\in {\mathbb N}}$ is a sequence of positive real numbers. Then \begin{equation} \label{eq:def-localization-morphism} \Psi: C_\bullet (A) \to C_\bullet (A),\enspace C_k (A)\ni c \mapsto \sum_{l\in {\mathbb N}} \varphi_l\Psi_{\varepsilon_l}c \end{equation} is a chain map and there exists a homotopy between the identity on $C_\bullet (A)$ and $\Psi$. \end{lemma} \begin{proof} Recall that the action of $\mathcal{C}^\infty(X)$ commutes with the Hochschild boundary and that each $\Psi_{\varepsilon_l}$ is a chain map. Since $(\varphi_l)_{l\in {\mathbb N}}$ is a locally finite smooth partition of unity, $\Psi$ is then a chain map by construction. Now assume that $\mathcal{A}$ is a sheaf of unital algebras. Let $H_{\bullet,\varepsilon_l}:C_\bullet (A)\to C_{\bullet+1} (A)$ be the homotopy from the preceding lemma which fulfills Equation \eqref{EqHomHom} with $\varepsilon = \varepsilon_l$. For all $k\in {\mathbb N}$ let $H_k$ be the map \[ H_k : C_k (A) \to C_{k+1} (A), \enspace c \mapsto \sum_{l\in {\mathbb N}} H_{k,\varepsilon_l} \varphi_l \, c \ . \] Then \begin{align} \label{EqHomHom2} \big( b H_k + H_{k-1} b \big) c & = \sum_{l\in {\mathbb N}} \left( \varphi_l c - \Psi_{\varepsilon_l} \varphi_l \, c \right) = c - \Psi \, c \quad \text{for all } k\in {\mathbb N} \text{ and } c\in C_k (A) \ . \end{align} Hence $H$ is a homotopy between the identity and $\Psi$, which proves the claim in the unital case. 
In the non-unital case define the unitalizations $\widetilde{\mathcal{A}}$ and $\widetilde{A}$ as before and let $q_*$, $i_*$, $\iota$, $\kappa$ denote the chain maps as in Diagram \eqref{diag}. Let $H : C_\bullet (\widetilde{A}) \to C_{\bullet+1} (\widetilde{A})$ be the algebraic homotopy constructed for the unital case. In particular this means that \[ \operatorname{id} - \Psi = b H + H b \: . \] Defining $F : C_\bullet (A) \to C_{\bullet+1} (A)$ by $F := \kappa (\operatorname{id} -i_*q_*) H \iota$ then gives a homotopy between the identity on $C_\bullet (A)$ and $\Psi$. \end{proof} \begin{lemma} \label{lem:cycle-not-meeting-diagonal} Let $\mathcal{A}$ be an admissible sheaf of bornological algebras over the differentiable space $(X,\mathcal{C}^\infty)$, put $A:= \mathcal{A}(X)$ and let $c \in C_k(A)$ be a Hochschild cycle. If the support of $c$ does not meet the diagonal, then $c$ is a Hochschild boundary. \end{lemma} \begin{proof} Assume that the support of the Hochschild cycle $c$ does not meet the diagonal and let $U = X^{k+1} \setminus \operatorname{supp} c$. Then $U$ is an open neighborhood of the diagonal. By Corollary \ref{Cor:SmoothMetricAppendix} there exists a complete metric $d:X \times X \to {\mathbb R}$ such that $d^2 \in \mathcal{C}^\infty (X \times X)$. Choose a compact exhaustion $(K_n)_{n\in {\mathbb N}}$ of $X$ which means that each $K_n$ is compact, $K_n \subset K_{n+1}^\circ$ for all $n\in {\mathbb N}$ and $\bigcup_{n\in {\mathbb N}}K_n =X$. For each $n\in {\mathbb N}$ there then exists an $\varepsilon_n >0$ such that all $(x_0,\ldots,x_k) \in K_n^{k+1}$ are in $U$ whenever $d(x_j,x_{j+1}) < \varepsilon_n$ for $j=0, \ldots , k$ and $x_{k+1}:=x_0$. Choose a locally finite smooth partition of unity $(\varphi_l)_{l\in {\mathbb N}}$ subordinate to the open covering $(K_n^\circ)_{n\in {\mathbb N}}$ and let $\Psi: C_\bullet (A) \to C_\bullet (A)$ be the associated chain map defined by \eqref{eq:def-localization-morphism}. 
According to Lemma \ref{lem:homotopy-subordinate-partition-unity} there then exists a chain homotopy $H$ between the identity on $C_\bullet (A)$ and $\Psi$. Since by the choice of the $\varepsilon_n$ the cut-off functions entering $\Psi$ vanish on the support of $c$, one obtains $\Psi c = 0$ and hence \[ c = c - \Psi c = bH (c) \ , \] so $c$ is a Hochschild boundary indeed. \end{proof} \begin{proposition} \label{prop:isomorphism-global-section-space} Assume to be given a proper Lie groupoid with orbit space $X$ and convolution sheaf $\mathcal{A}$. Let $A = \mathcal{A} (X)$ and $\hat{\mathscr{C}}_\bullet (\mathcal{A} )$ be the sheaf complex of Hochschild chains. Denote for each ${\scriptstyle\mathboondoxcal{O}}\in X$ and each chain $c \in C_\bullet \big(\mathcal{A} (U) \big)$ defined on a neighborhood $U\subset X$ of ${\scriptstyle\mathboondoxcal{O}}$ by $[c]_{{\scriptstyle\mathboondoxcal{O}}}$ the germ of $c$ at ${\scriptstyle\mathboondoxcal{O}}$, that is, the image of $c$ in the stalk $\hat{\mathscr{C}}_{\bullet,{\scriptstyle\mathboondoxcal{O}}} (\mathcal{A}) = \operatornamewithlimits{colim}\limits_{V \in \mathcal{N}({\scriptstyle\mathboondoxcal{O}})} C_\bullet (\mathcal{A} (V))$, where $\mathcal{N} ({\scriptstyle\mathboondoxcal{O}})$ denotes the filter basis of open neighborhoods of ${\scriptstyle\mathboondoxcal{O}}$. Then the chain map \[ \eta : C_\bullet ( A ) \to \Gamma \big( X , \hat{\mathscr{C}}_\bullet (\mathcal{A})\big), \enspace c \mapsto \big( [c]_{{\scriptstyle\mathboondoxcal{O}}}\big)_{{\scriptstyle\mathboondoxcal{O}}\in X} \] is a quasi-isomorphism. \end{proposition} \begin{proof} Consider a section $s \in \Gamma \big( X , \hat{\mathscr{C}}_k (\mathcal{A})\big)$. Then there exists a (countable) open covering $(U_i)_{i\in I}$ of the orbit space $X$ and a family $(c_i)_{i\in I}$ of $k$-chains $c_i \in C_k \big(\mathcal{A} (U_i) \big)$ such that $[c_i]_{{\scriptstyle\mathboondoxcal{O}}} = s({\scriptstyle\mathboondoxcal{O}})$ for all $i\in I$ and ${\scriptstyle\mathboondoxcal{O}} \in U_i$. 
After possibly passing to a finer (still countable) and locally finite covering one can assume that there exists a partition of unity $(\varphi_i)_{i\in I}$ by functions $\varphi_i \in \mathcal{C}^\infty (X)$ such that $\operatorname{supp} \varphi_i \subset \subset U_i$ for all $i\in I$. If $s$ is a cycle, then we can achieve after possibly passing to an even finer locally finite covering that each $c_i$ is a Hochschild cycle as well. Choose a metric $d : X \times X \to {\mathbb R}$ such that $d^2 \in \mathcal{C}^\infty (X\times X)$. For each $i$ there then exists $\varepsilon_i >0$ such that the set of all ${\scriptstyle\mathboondoxcal{O}}\in X$ with $d ({\scriptstyle\mathboondoxcal{O}}, \operatorname{supp} \varphi_i) \leq (k+1) \varepsilon_i$ is a compact subset of $U_i$. The chain $\Psi_{\varepsilon_i} (\varphi_i c_i)$ then has compact support in $U_i^{k+1}$. Extend it by $0$ to a smooth function on $X^{k+1}$ and denote the thus obtained $k$-chain also by $\Psi_{\varepsilon_i} (\varphi_i c_i)$. Now put \begin{equation} \label{eq:definition-lifting-chain} c := \sum_{i\in I} \Psi_{\varepsilon_i} (\varphi_i c_i) \ . \end{equation} Then $c \in C_k ( A) $ is well-defined since the sum in the definition of $c$ is locally finite. For every ${\scriptstyle\mathboondoxcal{O}} \in X$ now choose an open neighborhood $W_{\scriptstyle\mathboondoxcal{O}}$ meeting only finitely many of the elements of the covering $(U_i)_{i\in I}$. Denote by $I_{\scriptstyle\mathboondoxcal{O}}$ the set of indices $i\in I$ such that $U_i \cap W_{\scriptstyle\mathboondoxcal{O}}\neq \emptyset$. Then each $I_{\scriptstyle\mathboondoxcal{O}}$ is finite. Next let $H_i : C_\bullet \big( \mathcal{A} (U_i)\big) \to C_{\bullet+1} \big( \mathcal{A} (U_i)\big)$ be the homotopy operator constructed in the proof of Lemma \ref{lem:localization-homotopies} such that \[ bH_i +H_i b = \operatorname{id} - \Psi_{\varepsilon_i} \: . 
\] Let $e_i = H_i (\varphi_i c_i)$ for $i \in I_{\scriptstyle\mathboondoxcal{O}}$ and put $e_{\scriptstyle\mathboondoxcal{O}} = \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}} e_i|_{W_{\scriptstyle\mathboondoxcal{O}}^{k+2}}$. Then $e_{\scriptstyle\mathboondoxcal{O}} \in C_{k+1} \big( \mathcal{A} (W_{\scriptstyle\mathboondoxcal{O}})\big)$. Now compute for ${\scriptstyle\mathboondoxcal{Q}} \in W_{\scriptstyle\mathboondoxcal{O}}$ \begin{equation*} \begin{split} s({\scriptstyle\mathboondoxcal{Q}}) - [c]_{\scriptstyle\mathboondoxcal{Q}} \, & = \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}} \left[\varphi_i c_i \right]_{\scriptstyle\mathboondoxcal{Q}}-\left[\Psi_{\varepsilon_i}(\varphi_i c_i)\right]_{\scriptstyle\mathboondoxcal{Q}} = \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}}\left[ b e_i \right]_{\scriptstyle\mathboondoxcal{Q}}+\left[ H_i(\varphi_i b c_i )\right]_{\scriptstyle\mathboondoxcal{Q}}=\\ & = \left[ b e_{\scriptstyle\mathboondoxcal{O}} \right]_{\scriptstyle\mathboondoxcal{Q}}+\sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}}\left[ H_i (\varphi_i b c_i)\right]_{\scriptstyle\mathboondoxcal{Q}} \ . \end{split} \end{equation*} Hence one obtains, whenever $s$ is a cycle, \[ s({\scriptstyle\mathboondoxcal{Q}}) - [c]_{\scriptstyle\mathboondoxcal{Q}} = \left[ b e_{\scriptstyle\mathboondoxcal{O}} \right]_{\scriptstyle\mathboondoxcal{Q}} \quad \text{for all } {\scriptstyle\mathboondoxcal{O}} \in X, \: {\scriptstyle\mathboondoxcal{Q}} \in W_{\scriptstyle\mathboondoxcal{O}}\ . \] This means that $s$ and $\eta (c)$ define the same homology class. So the induced morphism between homologies $H_\bullet\eta : HH_\bullet(A)\to H_\bullet\big(\Gamma\big(X,\hat{\mathscr{C}}_\bullet (\mathcal{A})\big)\big)$ is surjective. It remains to show that $H_\bullet\eta$ is injective. To this end assume that $e \in C_k (A)$ is a cycle such that $H_\bullet\eta (e) = 0$. Then $\eta (e) = bs $ for some $s \in \Gamma \big( X , \hat{\mathscr{C}}_{k+1} (\mathcal{A})\big)$. 
As before, associate to $s$ a sufficiently fine locally finite open cover $(U_i)_{i\in I}$ together with a subordinate smooth partition of unity $(\varphi_i)_{i\in I}$ and $c_i \in C_{k+1}(\mathcal{A}(U_i))$ such that $[c_i]_{\scriptstyle\mathboondoxcal{O}} = s({\scriptstyle\mathboondoxcal{O}})$ for all ${\scriptstyle\mathboondoxcal{O}} \in U_i$. Let $W_{\scriptstyle\mathboondoxcal{O}}$ and $I_{\scriptstyle\mathboondoxcal{O}}$ also be as above. Define $c \in C_{k+1}(A)$ by Eq.~\eqref{eq:definition-lifting-chain}. Now compute for ${\scriptstyle\mathboondoxcal{Q}}\in W_{\scriptstyle\mathboondoxcal{O}}$ \begin{equation*} \begin{split} [bc - e ]_{\scriptstyle\mathboondoxcal{Q}} \,&= \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}}\left[ b \Psi_{\varepsilon_i}(\varphi_i c_i)\right]_{\scriptstyle\mathboondoxcal{Q}} - [\varphi_i e]_{\scriptstyle\mathboondoxcal{Q}} =\sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}}\left[\Psi_{\varepsilon_i} (\varphi_i b c_i) \right]_{\scriptstyle\mathboondoxcal{Q}} - [\varphi_i e]_{\scriptstyle\mathboondoxcal{Q}} = \\ & = \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}} \left[ \varphi_i b c_i \right]_{\scriptstyle\mathboondoxcal{Q}} - [\varphi_i e]_{\scriptstyle\mathboondoxcal{Q}} = \sum_{i\in I_{\scriptstyle\mathboondoxcal{O}}} (\varphi_i b s)({\scriptstyle\mathboondoxcal{Q}}) - (\varphi_i bs)({\scriptstyle\mathboondoxcal{Q}}) = 0 \ . \end{split} \end{equation*} Therefore, $bc - e\in C_k(A)$ is a $k$-cycle such that its support does not meet the diagonal. By Lemma \ref{lem:cycle-not-meeting-diagonal}, $bc - e$ is a boundary, which means that the homology class of $e$ is trivial. Hence $H_\bullet\eta$ is an isomorphism. \end{proof} Now we have all the tools to verify our main localization result. 
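The mechanism behind Lemma \ref{lem:cycle-not-meeting-diagonal}, namely that localization annihilates chains supported away from the diagonal, can be illustrated numerically. The sketch below models $\Psi_\varepsilon$ on $1$-chains over a one-dimensional grid as multiplication by $\varrho(d(x_0,x_1)/\varepsilon)$, with a piecewise linear stand-in for the cut-off $\varrho$; the precise smooth definition of $\Psi_\varepsilon$ is fixed earlier in the paper, so this discrete model is only an assumption for illustration.

```python
def rho(s):
    # piecewise linear stand-in for the cut-off: 1 on (-inf, 1/2], 0 on [3/4, inf)
    if s <= 0.5:
        return 1.0
    if s >= 0.75:
        return 0.0
    return (0.75 - s) / 0.25

def psi(chain, eps):
    # toy model of the localization Psi_eps on 1-chains c(x0, x1):
    # multiply by rho(|x0 - x1| / eps)
    return {(x0, x1): v * rho(abs(x0 - x1) / eps)
            for (x0, x1), v in chain.items()}

grid = [i * 0.25 for i in range(9)]            # grid points 0, 0.25, ..., 2
far = {(x0, x1): 1.0 for x0 in grid for x1 in grid if abs(x0 - x1) >= 1.0}
near = {(x0, x0): 1.0 for x0 in grid}          # supported on the diagonal

# chains supported away from the diagonal are annihilated once eps is small,
# while chains on the diagonal are left untouched
assert all(v == 0.0 for v in psi(far, 1.0).values())
assert psi(near, 1.0) == near
```

This is exactly the dichotomy used in the surjectivity argument: localization does not change germs along the diagonal, yet kills everything whose support stays at a fixed distance from it.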
\begin{proof}[Proof of Theorem \ref{thm:hochschild-homology-global-sections}] First note that we can regard every chain complex of sheaves $\mathcal{D}_\bullet$ as a cochain complex of sheaves via the convention $\mathcal{D}^n := \mathcal{D}_{-n}$ for all integers $n$. We therefore have the hypercohomology $\mathbb{H}_{n} (X, \mathcal{D}_\bullet) := \mathbb{H}^{-n} (X, \mathcal{D}^{\bullet})$; see \cite[Appendix]{WeiCHS}, where the case of cochain complexes of sheaves which, as here, are not necessarily bounded below is considered. Observe that $\big( \hat{\mathscr{C}}_\bullet (\mathcal{A}) , b \big)$ and $\big( \mathscr{H}\mathscr{H}_\bullet (\mathcal{A}),0\big)$ are quasi-isomorphic sheaf complexes, hence their hypercohomologies coincide. Recall that for a cochain complex of fine sheaves $\mathcal{D}^\bullet$ \[ \mathbb{H}^{n} (X, \mathcal{D}^{\bullet}) = H^{n} \big( \Gamma (X , \mathcal{D}^{\bullet})\big) \ . \] Since both $\hat{\mathscr{C}}_\bullet (\mathcal{A})$ and $\mathscr{H}\mathscr{H}_\bullet (\mathcal{A})$ are complexes of fine sheaves, these observations together with Proposition \ref{prop:isomorphism-global-section-space} now entail for all $n \in {\mathbb N}$ \begin{equation*} \begin{split} HH_n \big( \mathcal{A} (X)\big) \, & = H_n \big( \Gamma (X , \hat{\mathscr{C}}_\bullet (\mathcal{A}))\big) = \mathbb{H}_{n} (X, \hat{\mathscr{C}}_\bullet (\mathcal{A})) = \\ & = \mathbb{H}_{n} (X, \mathscr{H}\mathscr{H}_{\bullet} (\mathcal{A}) ) = H_n \big( \Gamma (X , \mathscr{H}\mathscr{H}_{\bullet} (\mathcal{A}))\big) = \Gamma \big(X, \mathscr{H}\mathscr{H}_n (\mathcal{A}) \big) \ . \end{split} \end{equation*} This is the claim. \end{proof} \subsection{$S^1$ rotation in $\mathbb{R}^{2m}$} In this subsection, we work with complex-valued functions and differential forms over the complex numbers. Since tensoring an $\mathbb{R}$-vector space with $\mathbb{C}$ is a faithfully flat functor, our results in this section still hold true for the algebra of real-valued functions. 
We consider a linear representation of $S^1$ on $\mathbb{R}^{2m}$. We identify $\mathbb{R}^{2m}$ with $\mathbb{C}^m$, and decompose $\mathbb{C}^m$ into the following two subspaces, i.e. \begin{equation}\label{eq:action} \mathbb{C}^m=V_0 \oplus V_1, \end{equation} where $V_0$ is the subspace of $\mathbb{C}^m$ on which $S^1$ acts trivially, and $V_1$ is the $S^1$-invariant subspace of $\mathbb{C}^m$ orthogonal to $V_0$ with respect to an $S^1$-invariant hermitian metric on $\mathbb{C}^m$. Furthermore, $V_1$ is decomposed into irreducible unitary representations of $S^1$, i.e. \[ V_1=\bigoplus_{j=1}^{t} \mathbb{C}_{w_j}, \] where $\mathbb{C}_{w_j}$ is an irreducible representation $\rho_{w_j}$ of $S^1$ with the weight $0\neq w_j\in \mathbb{Z}$, i.e. \[ \rho_{w_j}\big(\exp(2\pi \sqrt{-1}t)\big)\big(z\big):=\exp(2w_j \pi \sqrt{-1}t)z. \] We observe that $\mathcal{C}^\infty(\mathbb{C}^m)\rtimes S^1$ is isomorphic to $\big(\mathcal{C}^\infty(V_0)\otimes \mathcal{C}^\infty(V_1)\big)\rtimes S^1$. As $S^1$ acts on $V_0$ trivially, we have \[ \mathcal{C}^\infty(\mathbb{C}^m)\rtimes S^1\cong \mathcal{C}^\infty(V_0)\otimes \big( \mathcal{C}^\infty(V_1)\rtimes S^1 \big). \] The K\"unneth formula for Hochschild homology \cite[Theorem 4.2.5]{LodCH} gives \[ HH_{\bullet}\Big( \mathcal{C}^\infty(\mathbb{C}^m)\rtimes S^1\Big)=HH_\bullet\big( \mathcal{C}^\infty(V_0)\big)\otimes HH_\bullet \big( \mathcal{C}^\infty(V_1)\rtimes S^1 \big). \] The Hochschild-Kostant-Rosenberg theorem shows $HH_\bullet\big( \mathcal{C}^\infty(V_0)\big)=\Omega^\bullet (V_0)$. Hence, we have reduced the computation of $HH_\bullet \big( \mathcal{C}^\infty(\mathbb{C}^m)\rtimes S^1 \big)$ to $HH_\bullet \big( \mathcal{C}^\infty(V_1)\rtimes S^1 \big)$. Without loss of generality, we assume in the rest of this subsection that $\mathbb{C}^m=V_1$, i.e. \[ \mathbb{C}^m=\bigoplus_{j=1}^{m} \mathbb{C}_{w_j},\ 0\neq w_j\in \mathbb{Z}. \] Let $w$ be the least common multiple of $w_1, ..., w_m$. 
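The combinatorics of the weights can be traced in a small computation. In the sketch below the weight vector is an arbitrary example; the point is that $z_k$ is fixed by $\exp(2\pi\sqrt{-1}\,j/w)$ precisely when $w$ divides $j w_k$.

```python
from math import gcd
from functools import reduce

def lcm(x, y):
    return abs(x * y) // gcd(x, y)

weights = [2, 3, -4]               # hypothetical weights w_1, ..., w_m
w = reduce(lcm, weights)           # least common multiple (up to sign)

def fixed_coords(j):
    # indices k on whose line C_{w_k} the element exp(2*pi*i*j/w) acts
    # trivially, i.e. w divides j * w_k
    return [k for k, wk in enumerate(weights) if (j * wk) % w == 0]

print(w)                  # 12
print(fixed_coords(0))    # [0, 1, 2] : at t = 0 all of C^3 is fixed
print(fixed_coords(4))    # [1]       : at t = 1/3 only the weight-3 line survives
print(fixed_coords(1))    # []        : a generic t fixes only the origin
```

This reproduces, for the chosen weights, the stratification of fixed-point subspaces described next.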
We observe that for $t\in [0, 1)$, if $t \neq \frac{j}{w}$, $j=0, ..., w-1$, the fixed point subspace of $\exp(2\pi \sqrt{-1}t)$ is $\{0\}$; if $t=\frac{j}{w}$, the fixed point subspace of $\exp(2\pi \sqrt{-1}t)$ is \[ \mathbb{C}_{w_{k_1}}\oplus \cdots \oplus \mathbb{C}_{w_{k_l}}, \] where $w_{k_1}, ..., w_{k_l}$ are precisely those weights for which $w$ divides $jw_{k_1},\cdots, jw_{k_l}$. Hence the loop space $\Lambda_0(S^1\ltimes \mathbb{C}^m)$ has the following form, \[ \begin{split} \Lambda_0(S^1\ltimes \mathbb{C}^m)=\Big\{ \big(\exp(2\pi \sqrt{-1}t), & (0,\cdots, z_{w_{k_1}}, \cdots, z_{w_{k_l}},0,\cdots) \big)\Big{|} \\ &(0,\cdots, z_{w_{k_1}}, \cdots, z_{w_{k_l}},0,\cdots)\in \mathbb{C}^m, t w_{k_1}, \cdots, tw_{k_l}\in \mathbb{Z} w\Big\}. \end{split} \] Let $\sigma: \Lambda_0(S^1\ltimes \mathbb{C}^m)\to S^1$ be the forgetful map mapping $(\exp(2\pi \sqrt{-1}t), z)\in \Lambda_0(S^1\ltimes \mathbb{C}^m)$ to $\exp(2\pi \sqrt{-1}t)$. Following Proposition \ref{prop:chain-map-horizontal-relative-forms} and Eq. (\ref{eq:ConvolutionConnesKoszulChainCpl}), the Hochschild homology of $\mathcal{C}^\infty(\mathbb{C}^m)\rtimes S^1$ is computed by the $S^1$-invariant part of the cohomology of the following Koszul type complex, \begin{equation}\label{eq:S1koszul} \Omega^{2m}_{S^1\ltimes \mathbb{C}^m \to S^1} (S^1\ltimes \mathbb{C}^m) \overset{i_{Y_{S^1\ltimes \mathbb{C}^m}}}{\longrightarrow} \ldots \overset{i_{Y_{S^1 \ltimes \mathbb{C}^m }}}{\longrightarrow} \Omega^1_{S^1 \ltimes \mathbb{C}^m \to S^1} (S^1\ltimes \mathbb{C}^m) \overset{i_{Y_{S^1 \ltimes \mathbb{C}^m}}}{\longrightarrow} \mathcal{C}^\infty (S^1 \ltimes \mathbb{C}^m) \longrightarrow 0 \ , \end{equation} where $Y_{S^1 \ltimes \mathbb{C}^m } : S^1\ltimes \mathbb{C}^m \to s^*T\mathbb{C}^m$ is defined by $Y_{S^1\ltimes \mathbb{C}^m}(g,v) =v -gv$. Below we sometimes abbreviate, by abuse of notation, the symbol $Y_{S^1 \ltimes \mathbb{C}^m }$ by $Y$. Fix a choice of coordinates $(z_1, \cdots, z_m)$ for $z_j\in \mathbb{C}_{w_j}$. 
The vector field $Y:=Y_{S^1\ltimes \mathbb{C}^m}$ at the point $(\exp(2\pi \sqrt{-1}t),z)$ is written as \[ Y(\exp(2\pi \sqrt{-1}t),z)=\sum_{k=1}^m \big(\exp(2\pi \sqrt{-1} w_kt)-1\big)z_k\frac{\partial}{\partial z_k}+ \big(\exp(-2\pi \sqrt{-1} w_kt)-1\big)\bar{z}_k\frac{\partial}{\partial \bar{z}_k} \ . \] Define an analytic function $a(z)$ on $\mathbb{C}$ by \[ a(z):=\frac{\exp(2\pi \sqrt{-1}z)-1}{z} \ . \] Then we have \[ \begin{split} \exp(2\pi \sqrt{-1}w_k t)-1&= w_k t a(w_k t),\\ \exp(-2\pi \sqrt{-1}w_k t)-1&=w_k t \bar{a}(w_k t), \end{split} \] where $\bar{a}$ denotes the analytic function $z\mapsto \big(\exp(-2\pi \sqrt{-1}z)-1\big)/z$. Observe that for $t\in \mathbb{R}$, $\bar{a}(t)=\overline{a(t)}$, and $a(t)\ne 0$ for all $t$ sufficiently close to $0$. For a sufficiently small $\epsilon$, the vector field $Y$ on $(-\epsilon, \epsilon)\times \mathbb{C}^m$ is of the following form \[ Y= t \sum_{k=1}^m w_k \left( a(w_k t) z_k \frac{\partial}{\partial z_k} + \overline{a(w_k t)}\bar{z}_k \frac{\partial }{\partial \bar{z}_k}\right). \] This leads to the following property of the vector field $Y$. \begin{lemma}\label{lem:vectorfieldY} The vector field $Y: S^1\times \mathbb{C}^m\to \mathbb{C}^m,\ (g, z)\mapsto z-gz$ has a coordinate representation $Y=\sum_{k=1}^m Y^k z_k \frac{\partial }{\partial z_k}+\overline{Y}^k \bar{z}_k\frac{\partial }{\partial \bar{z}_k}$ with coefficients given by \[ Y^k\big(\exp(2\pi \sqrt{-1}t)\big)=\exp (2\pi \sqrt{-1}w_k t)-1. \] Denote $w=l.c.m.(w_1, \cdots, w_m)$. When $t_0=\frac{j}{w}$, for $0\leq j<w$, there is a sufficiently small $\epsilon>0$ such that on $ (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)$, $Y^k$ is of the following form, \[ Y^k\big( \exp(2\pi \sqrt{-1}t)\big)= w_k (t-\frac{j}{w}) a\big(w_k( t-\frac{j}{w})\big),\ \text{for}\ w_k j \in \mathbb{Z}w, \] where $a\big(w_k( t-\frac{j}{w})\big)\ne 0$ for all $t\in (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)$. And for $k$ with $w_k j \notin \mathbb{Z}w$, $Y^k\big(\exp(2\pi \sqrt{-1}t)\big)\ne 0$ for all $t\in (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)$. 
When $t_0\neq\frac{j}{w}$, there is a sufficiently small $\epsilon>0$ such that on $(t_0-\epsilon, t_0+\epsilon)$, $Y^k\big(\exp(2\pi \sqrt{-1}t)\big)\ne 0$ for all $t\in(t_0-\epsilon, t_0+\epsilon) $. \end{lemma} Analogous to the expression of the vector field $Y$, we study in the following lemma the local expression of the vanishing ideal $\mathcal{J}$ of the loop space $\Lambda_0(S^1\ltimes \mathbb{C}^m)$ for the $S^1$ action on $\mathbb{C}^m$ defined by Equation (\ref{eq:action}). \begin{lemma} \label{lem:vanishingideal} The vanishing ideal $\mathcal{J}$ of $\Lambda_0(S^1\ltimes \mathbb{C}^m)$ has the following local form. \begin{itemize} \item Near $\big(\exp(2\pi \sqrt{-1}\frac{j}{w}), 0\big)\in S^1\times \mathbb{C}^m$, the vanishing ideal $\mathcal{J}\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(0)\big)$ for a sufficiently small $\epsilon>0$ and a ball $B_{\varrho}(0)\subset \mathbb{C}^m$ centered at $0$ with a sufficiently small radius $\varrho>0$ consists of all smooth functions $f\in \mathcal{C}^\infty\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(0)\big)$ which can be written in the form \[ f=(t-\frac{j}{w})\sum _{k, w_k j\in w\mathbb{Z} } (z_k f_k+\bar{z}_k g_k)+\sum_{k, w_kj\notin w\mathbb{Z}} (z_k f_k+\bar{z}_k g_k), \] for $f_k, g_k\in \mathcal{C}^\infty\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(0)\big)$. 
\item Near $\big(\exp(2\pi \sqrt{-1}\frac{j}{w}), Z\big)\in S^1\times \mathbb{C}^m$ with $Z\ne 0$ and $\exp(2\pi \sqrt{-1}\frac{j}{w}) Z=Z$, the vanishing ideal $\mathcal{J}\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(Z)\big)$ for a sufficiently small $\epsilon>0$ and a ball $B_{\varrho}(Z)\subset \mathbb{C}^m$ centered at $Z$ with a sufficiently small radius $\varrho>0$ consists of all smooth functions $f\in \mathcal{C}^\infty\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(Z)\big)$ which can be written in the form \[ f=(t-\frac{j}{w})f_0+\sum_{k, w_kj\notin w\mathbb{Z}} (z_k f_k+\bar{z}_k g_k), \] for $f_0, f_k, g_k\in \mathcal{C}^\infty\big( (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)\times B_{\varrho}(Z)\big)$. \item Near $\big(\exp(2\pi \sqrt{-1}t_0), 0\big)\in S^1\times \mathbb{C}^m$ such that $t_0\ne \frac{j}{w}$ for all $j$ and $0\in \mathbb{C}^m$, the vanishing ideal $\mathcal{J}\big( (t_0-\epsilon, t_0+\epsilon)\times B_{\varrho}(0)\big)$ for a sufficiently small $\epsilon>0$ and a ball $B_{\varrho}(0)\subset \mathbb{C}^m$ centered at $0$ with a sufficiently small radius $\varrho>0$ consists of all smooth functions $f\in \mathcal{C}^\infty\big( (t_0-\epsilon, t_0+\epsilon)\times B_{\varrho}(0)\big)$ which can be written in the form \[ f=\sum_{k=1}^m (z_k f_k+\bar{z}_k g_k), \] for $f_k, g_k\in \mathcal{C}^\infty\big( (t_0-\epsilon, t_0+\epsilon)\times B_{\varrho}(0)\big)$. \end{itemize} \end{lemma} \begin{proof} We will prove the case around the most singular point $(1,0)\in S^1\times \mathbb{C}^m$. A similar proof works for the other points. We leave the details to the reader. For $(1, 0)\in S^1\times \mathbb{C}^m$, choose a sufficiently small $\epsilon>0$ such that there is no other point in the interval $(-\epsilon, \epsilon)$ of the form $\frac{j}{w}$ for an integer $0<j<w$. We identify $(-\epsilon, \epsilon)$ with a neighborhood of $1$ in $S^1$ via the exponential map. 
For a positive $\varrho$, the loop space $\Lambda_0(S^1\ltimes \mathbb{C}^m)$ in $ (-\epsilon, \epsilon)\times B_{\varrho}(0)$ is of the form \[ \Lambda_0(S^1\ltimes \mathbb{C}^m)\cap \big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big)=\{ (0, z)\mid z\in B_{\varrho}(0)\}\cup \{(t, 0)\mid t\in (-\epsilon, \epsilon)\}. \] A smooth function $f$ on $(-\epsilon, \epsilon)\times B_{\varrho}(0)$ belongs to $\mathcal{J}\big((-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$ if and only if \[ f(0,z)=f(t,0)=0 \quad \text{for all } t\in (-\epsilon, \epsilon),\ z\in B_{\varrho}(0). \] We consider $f$ as a function of $t\in (-\epsilon, \epsilon)$. By the Malgrange preparation theorem, we have the expansion \[ f(t,z)+t=c(t, z)(t+a_0(z)), \] where $c(t, z)$ and $a_0(z)$ are smooth and $a_0(0)=0$. Since $t=c(t,0)t$ for all $t\in (-\epsilon, \epsilon)$, $c(t,0)=1$. Putting $t=0$ gives $0=c(0,z)a_0(z)$ for all $z\in B_{\varrho}(0)$. Recall that $c(0,0)=1$. Therefore, $a_0(z)=0$ for all $z$ in a neighborhood of $0$. After possibly shrinking $\varrho$, we can assume that $a_0(z)=0$ on $B_{\varrho}(0)$. Hence, we conclude that \[ f(t,z)=t(c(t,z)-1). \] Taking the parametric Taylor expansion of $c(t,z)-1$ gives \[ c(t,z)-1=\sum_{j=1}^m z_jf_j(t,z)+\bar{z}_j g_j(t,z), \] where $f_j$ and $g_j$ are smooth functions on $(-\epsilon, \epsilon)\times B_{\varrho}(0)$. \end{proof} In the following, we compute the cohomology of the complex (\ref{eq:S1koszul}). We observe that the complex $(\Omega^\bullet_{S^1\ltimes \mathbb{C}^m\to S^1}(S^1\ltimes \mathbb{C}^m), i_{Y})$ for $Y:=Y_{S^1 \ltimes \mathbb{C}^m } $ forms a sheaf of complexes over $S^1$ via the map $\sigma: \Lambda_0(S^1\ltimes \mathbb{C}^m)\to S^1$. Accordingly, we compute the cohomology of the complex $\big( \Omega^{\bullet}_{S^1\ltimes \mathbb{C}^m \to S^1}(S^1 \ltimes \mathbb{C}^m ), i_{Y}\big)$ as a sheaf over $S^1$. 
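The factorization $Y^k\big(\exp(2\pi\sqrt{-1}t)\big) = w_k(t-\tfrac{j}{w})\, a\big(w_k(t-\tfrac{j}{w})\big)$ from Lemma \ref{lem:vectorfieldY} is easy to confirm numerically; in the sketch below the values of $w$, $j$ and $w_k$ are illustrative data satisfying $w \mid j w_k$.

```python
import cmath

def a(z):
    # a(z) = (exp(2*pi*i*z) - 1)/z with the removable singularity a(0) = 2*pi*i
    return 2j * cmath.pi if z == 0 else (cmath.exp(2j * cmath.pi * z) - 1) / z

w, j, wk = 6, 2, 3        # hypothetical data with w | j*wk (6 divides 6)
for s in (0.01, -0.03, 0.1):
    t = j / w + s
    lhs = cmath.exp(2j * cmath.pi * wk * t) - 1      # Y^k(exp(2 pi i t))
    rhs = wk * s * a(wk * s)                         # factorized form of Lemma
    assert abs(lhs - rhs) < 1e-12
    assert abs(a(wk * s)) > 1.0                      # a stays away from 0 near 0
print("factorization of Y^k verified")
```

The nonvanishing of $a$ near $0$ is what makes the factor $a\big(w_k(t-\tfrac jw)\big)$ invertible on a small interval, so that only the linear factor $t-\tfrac jw$ contributes to the vanishing locus.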
\begin{proposition} \label{prop:rel-F} For all open subsets $U$ of the loop space $\Lambda_0 = \Lambda_0 (S^1 \ltimes \mathbb{C}^m)$ and all $k\in {\mathbb N}$ the map \[ \Theta^k_U : \rOmega{k}{\Lambda_0}(U) \to \Gamma^\infty (U,\bigwedge^k F) \] from Prop.~\ref{prop:factorization-relative-forms} is injective. \end{proposition} \begin{proof} We will prove the case around the most singular point $(1,0)\in S^1\times \mathbb{C}^m$. A similar proof works for the other points. We leave the details to the reader. Recall that we showed in Lemma \ref{lem:vanishingideal} that near $(1,0)$, the vanishing ideal $\mathcal{J}\big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$ for a sufficiently small $\epsilon>0$ and a ball $B_{\varrho}(0)\subset \mathbb{C}^m$ centered at $0$ with a sufficiently small radius $\varrho>0$ consists of all smooth functions $f\in \mathcal{C}^\infty\big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$ which can be written in the form \[ f=t\sum _{k=1}^m (z_k f_k+\bar{z}_k g_k), \] for $f_k, g_k\in \mathcal{C}^\infty\big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$. Recall that by definition, $\rOmega{p}{\Lambda_0} \big( (-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$ is the quotient \[ \Omega^p_{S^1\ltimes \mathbb{C}^m\to S^1} \big( (-\epsilon, \epsilon)\times B_{\varrho} (0) \big) / \big( \mathcal{J}\, \Omega^p_{S^1\ltimes \mathbb{C}^m\to S^1} + d\mathcal{J} \wedge \Omega^{p-1}_{S^1\ltimes \mathbb{C}^m\to S^1} \big) \big( (-\epsilon, \epsilon)\times B_{\varrho}(0) \big) \ . 
\] In the following, we will describe $\rOmega{p}{\Lambda_0} \big( (-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$ in more detail and, for ease of notation, will use the symbols $\Omega^p_{S^1\ltimes \mathbb{C}^m\to S^1}$ and $\rOmega{p}{\Lambda_0}$ to stand for $\Omega^p_{S^1\ltimes \mathbb{C}^m\to S^1}\big((-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$ and $\rOmega{p}{\Lambda_0}\big((-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$, respectively, and $\mathcal{J}$ for the vanishing ideal $\mathcal{J}\big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$. In degree $p=0$, $\rOmega{0}{\Lambda_0}$ coincides with the quotient of $\mathcal{C}^\infty\big((-\epsilon, \epsilon)\times B_{\varrho}(0)\big)$ by $\mathcal{J}\big( (-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$. In degree $p=1$, we know by Lemma \ref{lem:vanishingideal} that $d\mathcal{J}$ consists of $1$-forms which can be expressed as follows: \[ t\sum_{k=1}^m (f_k dz_k+g_k d\bar{z}_k),\ f_k, g_k\in \mathcal{C}^\infty\big( (-\epsilon, \epsilon)\times B_{\varrho}(0)\big). \] Hence, $d\mathcal{J}$ is of the form $t \Omega^1_{S^1\ltimes \mathbb{C}^m\to S^1}$, which contains $\mathcal{J} \Omega^1_{S^1\ltimes \mathbb{C}^m\to S^1}$. Notice that for $(0, z)\in S^1\times \mathbb{C}^m$, $F_{(0, z)}$ coincides with $T^*_z\mathbb{C}^m$. For $\omega=\sum_{k=1}^m f_k dz_k+g_k d\bar{z}_k \in \rOmega{1}{\Lambda_0}$, if $\Theta(\omega)=0$, then $f_k(0,z)=g_k(0,z)=0$ for $1\leq k\leq m$. Therefore, taking the parametric Taylor expansion of $f_k, g_k$ at $(0,z)$, we have that there are $\tilde{f}_k$ and $\tilde{g}_k$ in $\mathcal{C}^\infty\big((-\epsilon, \epsilon)\times B_{\varrho}(0) \big)$ such that $f_k=t\tilde{f}_k$ and $g_k=t\tilde{g}_k$. Hence, $\omega=t\sum_{k=1}^m \tilde{f}_k dz_k +\tilde{g}_k d\bar{z}_k\in d\mathcal{J}$ and $[\omega]=0$ in $\rOmega{1}{\Lambda_0}$. In degree $p>1$, the above description of $\rOmega{1}{\Lambda_0}$ generalizes by means of the above expression for $d\mathcal{J}$. 
As $\Omega^k_{S^1\ltimes \mathbb{C}^m\to S^1}$ is of the form \[ \sum_{j}dz_j\wedge \Omega^{k-1}_{S^1\ltimes \mathbb{C}^m\to S^1}+d\bar{z}_j \wedge \Omega^{k-1}_{S^1\ltimes \mathbb{C}^m\to S^1}, \] we conclude that $d\mathcal{J} \wedge\Omega^{k-1}_{S^1\ltimes \mathbb{C}^m\to S^1}$ can be identified with $t\Omega^{k}_{S^1\ltimes \mathbb{C}^m\to S^1}$, which contains $\mathcal{J} \Omega^{k}_{S^1\ltimes \mathbb{C}^m\to S^1}$ as a subspace. We notice that at $(0,z)\in S^1\times \mathbb{C}^m$, $\bigwedge^k F_{(0,z)}$ is $\bigwedge^k T^*_{z}\mathbb{C}^m$. For $\omega=\sum_{I,J}f_{I,J}dz_{I_1}\wedge\cdots \wedge dz_{I_s}\wedge d\bar{z}_{J_{s+1}}\wedge \cdots \wedge d\bar{z}_{J_{k}}$, with $1\leq I_1<\cdots <I_s\leq m$ and $1\leq J_{s+1}<\cdots<J_{k}\leq m$, if $\Theta(\omega)=0$, we then get $f_{I,J}(0,z)=0$ for all $I,J$. From the parametric Taylor expansion we conclude that there exist $\tilde{f}_{I,J}$ such that $f_{I,J}=t\tilde{f}_{I,J}$, and $\omega=t\sum_{I, J} \tilde{f}_{I,J}dz_{I_1}\wedge\cdots \wedge dz_{I_s}\wedge d\bar{z}_{J_{s+1}}\wedge \cdots \wedge d\bar{z}_{J_{k}}$, which is an element in $d\mathcal{J} \wedge \Omega^{k-1}_{S^1\ltimes \mathbb{C}^m\to S^1}$. Therefore, $[\omega]=0$ in $\rOmega{k}{\Lambda_0}$. \end{proof} \begin{proposition}\label{prop:equivariant-koszul} For each $S^1$-invariant open $V \subset \mathbb{C}^m$ the chain map \[ \mathfrak{R}: \big( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ), Y \lrcorner \big) \to \big( \hrOmega{\bullet}{\Lambda_0} (\Lambda_0(S^1 \ltimes V )), 0 \big) \] is a quasi-isomorphism. \end{proposition} \begin{proof} We consider both sides as sheaves over $S^1$, and prove that $\mathfrak{R}$ is a quasi-isomorphism of sheaves over $S^1$. It is sufficient to prove that $\mathfrak{R}$ is a quasi-isomorphism at each stalk. 
We split our proof into two parts according to the point $t_0$ in $S^1$, \begin{enumerate} \item at $\exp(2\pi \sqrt{-1}t_0)$ with $t_0\ne \frac{j}{w}$ for $0\leq j<w$ and $t_0\in [0,1)$, \item at $\exp(2\pi \sqrt{-1}\frac{j}{w})$ for $0\leq j<w$. \end{enumerate} \noindent{\textit{Case (1)}}. We prove that \[ \mathfrak{R}_{\exp(2\pi \sqrt{-1}t_0)}: \big( \Omega^\bullet_{S^1 \ltimes V \to S^1, \exp(2\pi \sqrt{-1}t_0)} (S^1 \ltimes V ), Y \lrcorner \big)\to {\hrOmega{\bullet}{\Lambda_0 ,\exp(2\pi \sqrt{-1}t_0)}} (\Lambda_0(S^1 \ltimes V )) \] is a quasi-isomorphism for $t_0\ne \frac{j}{w}$ for $0\leq j<w$ and $t_0\in [0,1)$. It is crucial to observe that for a sufficiently small $\epsilon>0$, on $(t_0-\epsilon, t_0+\epsilon)\times \mathbb{C}^m$, the vector field $Y$ is of the form \[ Y=\sum_{j=1}^m \big(\exp(2\pi \sqrt{-1}w_j t)-1\big)z_j\frac{\partial}{\partial z_j}+ \big(\exp(-2\pi \sqrt{-1}w_j t)-1\big)\bar{z}_j\frac{\partial}{\partial \bar{z}_j}. \] Observe that the vector field $Y$ vanishes exactly at $(t, 0)$. Moreover, \[ \big( \Omega^\bullet_{S^1 \ltimes V \to S^1, \exp(2\pi \sqrt{-1}t_0)} \big((t_0-\epsilon, t_0+\epsilon)\times \mathbb{C}^m \big), Y \lrcorner \big) \] is a smooth family of generalized Koszul complexes over $t\in (t_0-\epsilon, t_0+\epsilon)$. Its cohomology can be computed using Proposition \ref{prop:parametrizedkoszul} as \[ H^\bullet \big( \Omega^\bullet_{S^1 \ltimes V \to S^1, \exp(2\pi \sqrt{-1}t_0)} \big((t_0-\epsilon, t_0+\epsilon)\times \mathbb{C}^m \big), Y \lrcorner \big)=\left\{ \begin{array}{ll} \mathcal{C}^\infty\big(t_0-\epsilon, t_0+\epsilon\big),& \bullet=0,\\ 0,&\text{otherwise}. \end{array} \right. \] At the same time, for every $t$ in $(t_0-\epsilon, t_0+\epsilon)$, the fixed point set of $\exp(2\pi \sqrt{-1}t)$ is $\{0\}$ in $\mathbb{C}^m$. 
Therefore, the complex $ \hrOmega{\bullet}{\Lambda_0}\big( (t_0-\epsilon, t_0+\epsilon)\times \mathbb{C}^m\big)$, identified with $\Gamma^\infty\big((t_0-\epsilon, t_0+\epsilon)\times \{0\}, \bigwedge^\bullet F\big)$, is computed as follows, \[ \Gamma^\infty\big((t_0-\epsilon, t_0+\epsilon)\times \{0\}, {\bigwedge}^\bullet F\big)=\left\{ \begin{array}{ll} \mathcal{C}^\infty\big(t_0-\epsilon, t_0+\epsilon\big),& \bullet=0,\\ 0,&\text{otherwise}. \end{array} \right. \] From the above computation, it is straightforward to conclude that $\mathfrak{R}_{\exp(2\pi \sqrt{-1}t_0)}$ is a quasi-isomorphism. \noindent{\textit{Case (2)}}. We prove that at $\exp(2\pi \sqrt{-1}\frac{j}{w})$, the morphism $\mathfrak{R}_{\exp(2\pi \sqrt{-1}\frac{j}{w})}$ is a quasi-isomorphism. Following Lemma \ref{lem:vectorfieldY}, we write the vector field $Y$ as a sum of two components \[ \begin{split} Y&=Y_1+Y_2\\ Y_1&=\sum_{k, kj\notin w\mathbb{Z}} Y^k z_k\frac{\partial}{\partial z_k}+\overline{Y}^k \bar{z}_k \frac{\partial}{\partial \bar{z}_k}\\ Y_2&=(t-\frac{j}{w}) \sum_{k, kj\in w\mathbb{Z}} w_k(a_k z_k\frac{\partial}{\partial z_k}+\bar{a}_k \bar{z}_k\frac{\partial}{\partial \bar{z}_k}), \end{split} \] where $a_k=a\big(w_k(t-\frac{j}{w})\big)$. Define $\widetilde{Y}_2$ to be $\sum_{k, kj\in w\mathbb{Z}} w_k(a_k z_k\frac{\partial}{\partial z_k}+\bar{a}_k \bar{z}_k\frac{\partial}{\partial \bar{z}_k})$. Then we have the following expression for $Y$, \[ Y=Y_1+(t-\frac{j}{w})\widetilde{Y}_2. \] Accordingly, we can decompose $\mathbb{C}^m$ as a direct sum of two subspaces, that is, write $\mathbb{C}^m=S_1\times S_2$ with \[ \begin{split} S_1&:=\bigoplus _{k, kj\notin w\mathbb{Z}} \mathbb{C}_{w_k},\\ S_2&:=\bigoplus_{k, kj\in w\mathbb{Z}} \mathbb{C}_{w_k}. \end{split} \] Both $S_1$ and $S_2$ are equipped with $S^1$-actions such that the above decomposition of $\mathbb{C}^m$ is $S^1$-equivariant.
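For orientation, the family of generalized Koszul complexes appearing in Case (1) can be written out completely in the simplest situation $m=1$ with a single weight $w_1$. The following display is an illustration only, with the shorthand $\Omega^k := \Omega^k_{S^1\ltimes \mathbb{C}\to S^1}$, and is not part of the argument.

```latex
% Illustration (not part of the proof): m = 1, weight w_1, on an interval of
% t's where \lambda(t) := \exp(2\pi\sqrt{-1}\, w_1 t) - 1 is nowhere zero.
\begin{align*}
  0 \to \Omega^2 \xrightarrow{\; Y \lrcorner \;} \Omega^1
    \xrightarrow{\; Y \lrcorner \;} \Omega^0 \to 0,
  \qquad
  Y = \lambda z \frac{\partial}{\partial z}
    + \bar{\lambda} \bar{z} \frac{\partial}{\partial \bar{z}},
\end{align*}
% where the contractions are given explicitly by
\begin{align*}
  i_Y\big( f\, dz + g\, d\bar{z} \big) = \lambda z f + \bar{\lambda} \bar{z} g,
  \qquad
  i_Y\big( h\, dz \wedge d\bar{z} \big)
    = \lambda z\, h\, d\bar{z} - \bar{\lambda} \bar{z}\, h\, dz.
\end{align*}
% Since \lambda is invertible, the degree-0 image is the ideal generated by z
% and \bar{z}, so H^0 consists of the smooth functions of t alone (restriction
% to \{z = 0\}), while the higher cohomology vanishes, in accordance with
% Proposition \ref{prop:parametrizedkoszul}.
```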
As our argument is local, we may assume that the open set $V$ is of the product form $V=V_1\times V_2$, where $V_1$ (resp. $V_2$) is an $S^1$-invariant neighborhood of $0$ in $S_1$ (resp. $S_2$). We consider $\left(\Omega^\bullet_{S^1 \ltimes V_l \to S^1}\left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_l \right), i_{Y_l}\right)$ for $l=1,2$. Observe that each complex $\Omega^\bullet_{S^1 \ltimes V_l \to S^1}\left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_l \right)$ is a $\mathcal{C}^\infty\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big)$-module, and their tensor product over the algebra $\mathcal{C}^\infty\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon \big)$ defines a bicomplex \[ \Omega^p_{S^1 \ltimes V_1 \to S^1} \left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_1 \right)\otimes_{\mathcal{C}^\infty\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big)} \Omega^q_{S^1 \ltimes V_2 \to S^1} \left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2 \right) \] with $i_{Y_1}\otimes 1$ being the horizontal differential and $1\otimes i_{Y_2}$ being the vertical one. The total complex of this double complex is exactly \[ \Omega^\bullet_{S^1 \ltimes V \to S^1} \left( \big( \frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V \right) \] with the differential $i_Y=i_{Y_1}\otimes 1+1\otimes i_{Y_2}$.
The $E_1$-page of the spectral sequence associated to the bicomplex \[ \Omega^\bullet_{S^1 \ltimes V_1 \to S^1} \left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_1 \right) \otimes_{\mathcal{C}^\infty\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big)} \Omega^\bullet_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right) \] is \[ H^p\left(\Omega^\bullet_{S^1 \ltimes V_1\to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_1\big), i_{Y_1}\right)\otimes _{\mathcal{C}^\infty\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big)} \Omega^q_{S^1 \ltimes V_2 \to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\big), \] with the differential $1\otimes i_{Y_2}$. We observe that $Y_1$ vanishes only at $0$ for every fixed $t$. Therefore, $\left(\Omega^\bullet_{S^1 \ltimes V_1\to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_1\big), i_{Y_1}\right)$ is a smooth family of generalized Koszul complexes. Its cohomology is computed by Proposition \ref{prop:parametrizedkoszul} as follows, \[ H^\bullet\left(\Omega^\bullet_{S^1 \ltimes V_1\to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_1\big), i_{Y_1}\right)=\left\{ \begin{array}{ll} \mathcal{C}^\infty(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)&\bullet=0\\ 0&\bullet\neq 0 \ . \end{array} \right. \] Therefore, we get the following expression of $E_1^{p,q}$, \[ E_1^{p,q}=\left\{\begin{array}{ll} \Omega^q_{S^1 \ltimes V_2 \to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\big),&p=0\\ 0,&p\neq0. \end{array} \right. \] Next we compute the cohomology of $(E_1^{0,\bullet}, i_{Y_2})$. Recall by Lemma \ref{lem:vectorfieldY} that $Y_2$ has the form $Y_2=(t-\frac{j}{w})\widetilde{Y}_2$, where $\widetilde{Y}_2$ vanishes exactly at $0$ for every fixed $t\in (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)$.
In degree $q$, if an element $\omega\in \Omega^q_{S^1 \ltimes V_2 \to S^1} ((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2)$ belongs to $\ker(i_{Y_2})$, then $(t-\frac{j}{w})i_{\widetilde{Y}_2}\omega=0$. Since $t-\frac{j}{w}$ vanishes only at $t=\frac{j}{w}$, continuity gives $i_{\widetilde{Y}_2}\omega=0$ on the whole interval, so $\omega$ belongs to $\ker(i_{\widetilde{Y}_2})$. We therefore obtain \[ \ker(i_{Y_2})=\ker(i_{\widetilde{Y}_2}). \] It is also easy to check that \[ i_{Y_2}\Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right)= \big(t-\frac{j}{w}\big)i_{\widetilde{Y}_2} \Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left( \big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2 \right). \] We conclude that the quotient $\ker(i_{Y_2})/ i_{Y_2}\Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right)$ is isomorphic to \[ \ker (i_{\widetilde{Y}_2})/(t-\frac{j}{w})i_{\widetilde{Y}_2} \Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right). \] Recall that the cohomology of $\left(\Omega^{\bullet}_{S^1 \ltimes V_2 \to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\big), i_{\widetilde{Y}_2}\right)$ is computed as follows, \[ H^q\left( \Omega^{\bullet}_{S^1 \ltimes V_2 \to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\big), i_{\widetilde{Y}_2}\right)=\left\{ \begin{array}{ll} \mathcal{C}^\infty(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon),&q=0\\ 0,&q\neq 0. \end{array} \right.
\] Therefore, for all $q\geq 1$, we conclude that \[ i_{\widetilde{Y}_2} \Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right)=\ker(i_{\widetilde{Y}_2}), \] and the quotient $\ker(i_{Y_2})/ i_{Y_2}\Omega^{q+1}_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right)$ is isomorphic to \[ \ker(i_{\widetilde{Y}_2})/\big(t-\frac{j}{w}\big) \ker(i_{\widetilde{Y}_2}). \] As the $E_1$-page is concentrated in the column $p=0$, the spectral sequence degenerates at the $E_2$-page, and we conclude that the cohomology of the total complex, which is the cohomology of $ \Omega^\bullet_{S^1 \ltimes V \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V\right)$ with the differential $i_{Y_1}\otimes 1+1\otimes i_{Y_2}$, is equal to the quotient \[ \ker(i_{\widetilde{Y}_2})/\big(t-\frac{j}{w}\big) \ker(i_{\widetilde{Y}_2}) \] for the contraction $i_{\widetilde{Y}_2}$ on $\Omega^\bullet_{S^1 \ltimes V_2 \to S^1} \left(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2\right)$. We now prove that the morphism \[ \mathfrak{R}: \left( \Omega^\bullet_{S^1 \ltimes V \to S^1} (S^1 \ltimes V ), Y \lrcorner \right) \to \big( \hrOmega{\bullet}{\Lambda_0} (\Lambda_0(S^1 \ltimes V )), 0 \big) \] is a quasi-isomorphism. The above discussion and the description of $\Lambda_0((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V)$ reduce the problem to proving that the morphism \[ \mathfrak{R}_2: \left( \Omega^\bullet_{S^1 \ltimes V_2 \to S^1} \big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2 \big), Y_2 \lrcorner \right) \to \left( \hrOmega{\bullet}{\Lambda_0} \left(\Lambda_0\big(\big(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon\big) \times V_2 \big)\right), 0 \right) \] is a quasi-isomorphism. We prove this by examining $\mathfrak{R}_2$ in each degree $q$.
In what follows we work with $\bigwedge^\bullet F$, as it is isomorphic to $\rOmega{\bullet}{\Lambda_0}$ by Proposition \ref{prop:rel-F}. \\ \noindent{ $\bullet\ q\geq 1$}. Recall that a section in $\Gamma^\infty\big((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 , \bigwedge^q F\big)$ takes its value at $(\frac{j}{w}, z)$ in the fiber $\bigwedge^q F_{(\frac{j}{w}, z)}$. We observe that the vector field $\widetilde{Y}_2$ at $t=\frac{j}{w}$ coincides with the fundamental vector field of the $S^1$ action on $V_2$. Hence, if $\phi\in \bigwedge^q F_{(\frac{j}{w}, z)}$ is horizontal, then $\phi$ satisfies the equation $i_{\widetilde{Y}_2(\frac{j}{w},z)}\phi=0$. As the cohomology of $(\Omega^\bullet(V_2), i_{\widetilde{Y}_2(\frac{j}{w},z)})$ at degree $q$ vanishes, there is a degree $q+1$ form $\psi \in \Omega^{q+1}(V_2)$ such that $i_{\widetilde{Y}_2(\frac{j}{w},z)}\psi=\phi$. Define $\omega\in \Omega^\bullet_{S^1 \ltimes V_2 \to S^1} ((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 )$ by $\omega:=i_{\widetilde{Y}_2}\psi$, where $\psi$ is viewed as an element in $\Omega^\bullet_{S^1 \ltimes V_2 \to S^1} ((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 )$ constant along the $t$ direction. Then one checks easily that $\omega$ belongs to the kernel of $i_{\widetilde{Y}_2}$ and that $\mathfrak{R}_2(\omega)=\phi$. We conclude that $\mathfrak{R}_2$ is surjective. For the injectivity of $\mathfrak{R}_2$ on cohomology, suppose that $\omega\in \ker(i_{\widetilde{Y}_2})$ satisfies $\mathfrak{R}_2(\omega)=0$, that is, $\omega(\frac{j}{w},z)=0$ for all $z$. Then by the parametrized Taylor expansion, we can find a form $\tilde{\omega}\in \Omega^\bullet_{S^1 \ltimes V_2 \to S^1} ((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 )$ such that $\omega=(t-\frac{j}{w})\tilde{\omega}$. As $0=i_{\widetilde{Y}_2}\omega=(t-\frac{j}{w})i_{\widetilde{Y}_2}\tilde{\omega}$, we get $i_{\widetilde{Y}_2}\tilde{\omega}=0$. Hence $\omega=(t-\frac{j}{w})\tilde{\omega}$ belongs to $(t-\frac{j}{w})\ker(i_{\widetilde{Y}_2})$, and $[\omega]$ is zero in the cohomology of $i_{Y_2}$.
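The parametrized Taylor expansion invoked in the injectivity argument is the standard Hadamard-type integral formula; as a sketch, in the notation above:

```latex
% Parametrized Taylor expansion: if f is smooth on
% (j/w - \epsilon, j/w + \epsilon) \times V_2 and f(j/w, z) = 0 for all z, then
\begin{align*}
  f(t,z) = \Big( t - \tfrac{j}{w} \Big)\, \tilde{f}(t,z),
  \qquad
  \tilde{f}(t,z) := \int_0^1 (\partial_t f)\Big( \tfrac{j}{w}
      + s\big( t - \tfrac{j}{w} \big),\, z \Big)\, ds,
\end{align*}
% with \tilde{f} smooth by differentiation under the integral sign. Applying
% this to every coefficient function of \omega produces the desired form
% \tilde{\omega} with \omega = (t - j/w)\,\tilde{\omega}.
```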
\noindent{$\bullet\ q=0$}. Recall that $\widetilde{Y}_2$ is of the form $\sum_{k} w_k\big(a(w_k(t-\frac{j}{w})) z_k\frac{\partial}{\partial z_k}+\bar{a}(w_k(t-\frac{j}{w})) \bar{z}_k\frac{\partial}{\partial \bar{z}_k}\big)$, where $a(w_k(t-\frac{j}{w}))\neq 0$ for all $t\in (\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon)$. Therefore, the image of $(t-\frac{j}{w})i_{\widetilde{Y}_2}$ in degree $0$ consists of the functions of the form \[ \big( t-\frac{j}{w} \big)\sum_k \big( z_k f_k +\bar{z}_k g_k \big), \] which is exactly the vanishing ideal $\mathcal{J}\big((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 \big)$. This shows that the cohomology of $ \big( \Omega^\bullet_{S^1 \ltimes V_2 \to S^1} ((\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2 ), Y_2 \lrcorner \big)$ in degree $0$ coincides with $\mathcal{C}^\infty\big(\Lambda_0(S^1\ltimes V_2)\big)|_{(\frac{j}{w}-\epsilon, \frac{j}{w}+\epsilon) \times V_2}$. One concludes that $\mathfrak{R}_2$ is an isomorphism in degree $0$. \end{proof} \subsection{Stitching it all together} We are now in a position to prove Conjecture \ref{BryConjBHF} in the case of circle actions: \begin{theorem} Let $M$ be an $S^1$-manifold and regard $\hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (S^1\ltimes M)\big)$ as a chain complex endowed with the zero differential. Then the chain map \[ \Phi_{\bullet,M/S^1} : C_\bullet \big( \mathcal{C}^\infty (M) , \mathcal{A} (M/S^1)\big) \to \hrOmega{\bullet}{\Lambda_0} \big(\Lambda_0 (S^1\ltimes M)\big) \] is a quasi-isomorphism. \end{theorem} \begin{proof} Since $\Phi_{\bullet,M/S^1}$ is obtained by taking global sections of a morphism of fine sheaves on $M/S^1$, it suffices to prove that \[ \Phi_\bullet: \hat{\mathscr{C}}_\bullet \big( \mathcal{C}^\infty_M , \mathcal{A}\big) \to \pi_* (s_{|\Lambda_0})_* \rOmega{\bullet}{\Lambda_0} \ , \] is a quasi-isomorphism, i.e., that the induced map on the stalks $\Phi_{\bullet,\mathcal{O}}$ is.
Now there are two cases, depending on the isotropies of the orbit $\mathcal{O}$: when the isotropy subgroup $\Gamma_x\subset S^1$ of a point $x\in \mathcal{O}$ is a finite group, this follows from the (proof of) Corollary \ref{cor:finitegroup}. When the isotropy group is $S^1$ itself, it follows from Proposition \ref{prop:equivariant-koszul}. \end{proof}
\section{Introduction} Generalized K\"ahler geometry and generalized Calabi-Yau structures first arose from investigations into supersymmetric sigma models \cite{Gates}. These structures were rediscovered in the work of Hitchin \cite{Hitchin}, growing out of a search for special geometries defined by volume functionals on differential forms. The relationship between these points of view was elaborated upon in the thesis of Gualtieri \cite{Gualtieri}. These structures have recently attracted enormous interest in both the physics and mathematical communities as natural generalizations of K\"ahler Calabi-Yau structures, inheriting a rich physical and geometric theory. We will focus entirely on the ``classical'' description of generalized K\"ahler geometry (cf. \cite{Gates}), i.e. not relying on the more intrinsic point of view developed by Gualtieri \cite{Gualtieri} using Courant algebroids. For our purposes a generalized K\"ahler manifold is a smooth manifold $M$ with a triple $(g, I, J)$ consisting of two complex structures $I$ and $J$ together with a metric $g$ which is Hermitian with respect to both. Moreover, the two K\"ahler forms $\omega_I$ and $\omega_J$ satisfy \begin{align*} d^c_I \omega_I = H = - d^c_J \omega_J, \qquad dH = 0, \end{align*} where the first equation defines $H$, and $d^c_I = \i (\bar{\partial} - \partial)$ with respect to the complex structure defined by $I$, and similarly for $J$. A natural notion of Ricci flow adapted to the context of generalized K\"ahler geometry was introduced in work of the author and Tian \cite{STGK}. We will call this flow \emph{generalized K\"ahler-Ricci flow} (GKRF). This evolution equation was discovered in the course of our investigations into the more general ``pluriclosed flow,'' \cite{ST2}, and has the interesting feature that the complex structures must also evolve to preserve the generalized K\"ahler condition.
Explicitly it takes the form \begin{gather} \label{GKRF} \begin{split} \dt g = - 2 \Rc^g + \frac{1}{2} \mathcal{H}, \qquad \dt H = \Delta_d H,\\ \dt I = L_{\theta_I^{\sharp}} I, \qquad \dt J = L_{\theta_J^{\sharp}} J, \end{split} \end{gather} where $\mathcal{H}_{ij} = H_{ipq} H_j^{pq}$, and $\theta_I, \theta_J$ are the Lee forms of the corresponding Hermitian structures. See \S \ref{GKRFsec} for a derivation of these equations. The metric and three-form component of the flow initially arose as the renormalization group flow for nonlinear sigma models coupled to a skew-symmetric $B$-field (cf. \cite{Polchinski}). Supersymmetry considerations eventually related this sigma model to generalized K\"ahler geometry. Given this, one might expect the renormalization group flow to preserve generalized K\"ahler geometry. The surprising observation of \cite{STGK} is that this is indeed so, but only after introducing a further evolution equation for the complex structures themselves. It remains an interesting problem to derive these evolution equations from a Lagrangian-theoretic standpoint. A central feature of generalized K\"ahler geometry, observed first by Pontecorvo \cite{Pontecorvo} and Hitchin \cite{HitchinPoisson}, is that the tensor $g^{-1} [I,J]$ defines a holomorphic Poisson structure. Previously the author studied GKRF in one of the natural ``extremes'' of generalized K\"ahler geometry, namely when this Poisson structure vanishes. In this setting it was observed that the complex structures actually remain fixed, and that the flow reduces to a nonconvex fully nonlinear parabolic equation for a scalar potential function \cite{SPCFSTB}. In this paper we focus entirely on the case when this Poisson structure is nondegenerate, in which case we will refer to the generalized K\"ahler structure itself as ``nondegenerate.'' It is trivial to note that GKRF will preserve this condition, at least for a short time, since it is an open condition.
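As a quick consistency check, which the reader may verify directly from the definitions (it is not spelled out here): when $I = J$ the system collapses to K\"ahler-Ricci flow.

```latex
% Consistency check (illustration): if I = J, the defining relation gives
% d^c_I \omega_I = H = -d^c_I \omega_I, hence H = 0, so (g, I) is Kahler
% and the Lee forms vanish. The system (\ref{GKRF}) then reduces to
\begin{align*}
  \mathcal{H} = 0, \quad \theta_I = \theta_J = 0
  \;\Longrightarrow\;
  \dt I = \dt J = 0, \qquad \dt g = - 2 \Rc^g,
\end{align*}
% which is Kahler-Ricci flow for the fixed complex structure I.
```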
Whereas in the case $[I,J] = 0$ the flow of complex structures dropped out of the system, as we will see below, the evolving complex structures essentially determine the entire GKRF in the nondegenerate setting. Our main theorem gives a complete picture of the long time existence behavior of this flow in the case of dimension $4$, together with a rough picture of the convergence. \begin{thm} \label{4dLTE} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler four-manifold. The solution to generalized K\"ahler-Ricci flow with initial data $(g,I,J)$ exists for all time. Moreover, the associated almost hyperK\"ahler structure $\{\omega_{K_i(t)}\}$ converges subsequentially in the $I$-fixed gauge to a triple of closed currents $\{\omega_{K_i}^{\infty}\}$. \end{thm} \begin{rmk} \begin{enumerate} \item See Definition \ref{aktdefn} for the definition of the associated almost hyperK\"ahler structure, and see Remark \ref{gaugermk} for the meaning of the flow in the ``$I$-fixed gauge''. \item The triple of limiting currents can be interpreted as a weak hyperK\"ahler structure. Conjecturally the flow should converge to a hyperK\"ahler metric exponentially in the $C^{\infty}$ topology, but this is not yet attainable for technical reasons. In the case of tori the strong convergence follows from (\cite{SBIPCF} Theorem 1.1). \item It had previously been observed (see Remark \ref{generalityrmk}) that one could construct large classes of nondegenerate generalized K\"ahler structures by special deformations of hyperK\"ahler structures. Theorem \ref{4dLTE} roughly indicates that this is the \emph{only} way to construct such structures. \item The solutions to GKRF in Theorem \ref{4dLTE} are never solutions to K\"ahler-Ricci flow, unless the initial data is already hyperK\"ahler, in which case the K\"ahler-Ricci flow and generalized K\"ahler-Ricci flow are both fixed.
\end{enumerate} \end{rmk} It seems natural to expect similar behavior for the generalized K\"ahler-Ricci flow in the nondegenerate setting in all dimensions $n=4k$. In particular, one might expect long time existence and convergence of the flow to a generalized Calabi-Yau structure. In dimensions greater than $4$ it does not follow directly that such a structure is hyperK\"ahler, and it would seem that more general examples should exist, although we do not know of any. While many aspects of our proof will certainly extend to higher dimensions, some key estimates exploit the low-dimensionality. One important breakthrough would be to achieve, if possible, a reduction of the flow to that of a potential function. Local constructions \cite{potential} indicate that one can express generalized K\"ahler structures in terms of a single potential function, but in the nondegenerate setting the objects are described as fully nonlinear expressions in the Hessian of the potential. Thus it remains far from clear if it is possible to reduce the GKRF to a scalar potential flow, as has been achieved in the setting of vanishing Poisson structure (cf. \cite{SPCFSTB}). Our calculations below give hope for the possibility of such a scalar reduction, as we show for instance that all curvature quantities involved in the flow equations can be expressed in terms of the angle function between the complex structures. The proof involves a number of a priori estimates derived using the maximum principle. Much of the interplay between the two complex structures in a bihermitian triple $(g,I,J)$ is captured by the so-called angle function $p = \tr(IJ)$. As we will see in \S \ref{mainproof}, a certain function $\mu$ of the angle satisfies the time-dependent heat equation precisely along the flow. This yields a priori control over the angle, and moreover a strong decay estimate for the gradient of $\mu$. This quantity controls the torsion, yielding a priori decay of the torsion along the flow.
Given this estimate, we switch points of view and study the flow merely as a solution to pluriclosed flow, and use the reduction of the flow to a parabolic flow of a $(1,0)$-form introduced in \cite{SBIPCF,ST3}. In the presence of this torsion decay we can establish upper and lower bounds for the metric depending on a certain potential function associated to the flow. This potential function can be shown to grow linearly, yielding time-dependent upper and lower bounds on the metric. We then apply the $C^{\alpha}$ estimate on the metric established in \cite{SBIPCF} to obtain full regularity of the flow. Using the decay of the torsion we can derive the weak convergence statement in the sense of currents. Here is an outline of the rest of the paper. In \S \ref{background} we establish background results and notation, and also review the generalized K\"ahler-Ricci flow. Next in \S \ref{nondegsec} we explain fundamental properties of nondegenerate generalized K\"ahler surfaces. Then in \S \ref{mainproof} we develop a number of a priori estimates for the flow. Lastly in \S \ref{convsec} we establish Theorem \ref{4dLTE}. \vskip 0.1in \textbf{Acknowledgements:} The author would like to thank Richard Bamler, Joel Fine, Hans-Joachim Hein, Gang Tian, Alan Weinstein, and Qi Zhang for helpful discussions. Also, this work owes a significant intellectual debt to the series of works \cite{Bogaerts,Gates,Hull,Offshell,linearizing,potential} arising from mathematical physics. Moreover, I have benefited from many helpful conversations with Martin Rocek in understanding these papers. The author especially thanks Marco Gualtieri for many very useful conversations on generalized K\"ahler geometry. Lastly, we benefited quite significantly from discussions with Vestislav Apostolov, who provided much help in understanding his papers on bihermitian geometry and moreover suggested obtaining convergence results in the sense of currents.
\section{Background} \label{background} \subsection{Notation and conventions} In this section we fix notation, conventions, and recall some fundamental constructions we will use in the sequel. First, given $(M^{2n}, g, J)$ a Hermitian manifold, let \begin{align*} \omega(X,Y) = g(X,JY) \end{align*} be the K\"ahler form. The Lee form is defined to be \begin{align*} \theta = d^* \omega \circ J. \end{align*} We let $\nabla$ denote the Levi-Civita connection of $g$. We will make use of two distinct Hermitian connections, in particular the Bismut connection $\nabla^B$ (see \cite{Bismut}) and the Chern connection $\nabla^C$. These are defined via \begin{gather} \label{connections} \begin{split} g(\nabla^B_X Y, Z) =&\ g(\nabla_X Y, Z) + \frac{1}{2} d^c \omega(X,Y,Z),\\ g(\nabla^C_X Y, Z) =&\ g(\nabla_X Y, Z) + \frac{1}{2} d \omega(JX,Y,Z). \end{split} \end{gather} Here $d^c = \i (\bar{\partial} - \partial)$, hence \begin{align*} d^c \omega(X,Y,Z) = - d \omega(JX,JY,JZ). \end{align*} We will denote the torsion tensors by $H$ and $T$ respectively, i.e. \begin{align*} H(X,Y,Z) =&\ g(\nabla^B_X Y - \nabla^B_Y X - [X,Y], Z)\\ T(X,Y,Z) =&\ g(\nabla^C_X Y - \nabla^C_Y X - [X,Y], Z). \end{align*} Both the Bismut and Chern connections induce unitary connections on $K_M^{-1}$, with associated Ricci-type curvatures, which we denote via \begin{align*} \rho_{B,C} (X,Y) = R_{B,C}(X,Y,J e_i, e_i). \end{align*} We now record some formulas for Hermitian surfaces needed in the sequel. \begin{lemma} \label{leeformsurfaces} Let $(M^4, g, J)$ be a Hermitian surface. Then \begin{align} \label{leeformula} H_{ijk} = - J_i^l \theta_l \omega_{jk} - J_k^l \theta_l \omega_{ij} - J_j^l \theta_l \omega_{ki}. \end{align} \begin{proof} First of all, for a complex surface we have the formula $d \omega = \theta \wedge \omega$, which we express in coordinates as \begin{align*} (d \omega)_{ijk} = \theta_i \omega_{jk} + \theta_k \omega_{ij} + \theta_j \omega_{ki}.
\end{align*} Now using that $H = - d \omega(J,J,J)$, we have \begin{align*} H_{ijk} =&\ - d \omega_{pqr} J_i^p J_j^q J_k^r\\ =&\ - \left( \theta_p \omega_{qr} + \theta_r \omega_{pq} + \theta_q \omega_{rp} \right) J_i^p J_j^q J_k^r\\ =&\ - J_i^l \theta_l \omega_{jk} - J_k^l \theta_l \omega_{ij} - J_j^l \theta_l \omega_{ki}, \end{align*} as required. \end{proof} \end{lemma} \begin{lemma} \label{chernlaplace} (cf. \cite{Gauduchon}) Let $(M^4, g, J)$ be a Hermitian surface. Then \begin{align*} \Delta f =&\ \Delta_C f - \IP{\theta, \nabla f}. \end{align*} \begin{proof} First observe that we can express in coordinates that \begin{align*} J_i^p d \omega_{pjk} =&\ J_i^p \left[ \theta_p \omega_{jk} + \theta_k \omega_{pj} + \theta_j \omega_{kp} \right] = J_i^l \theta_l \omega_{jk} + \theta_k g_{ij} - \theta_j g_{ik}. \end{align*} Hence we directly compute that \begin{align*} \Delta f =&\ g^{ij} \nabla_i \nabla_j f\\ =&\ g^{ij} \left(\partial_i \partial_j f - \Gamma_{ij}^k \partial_k f \right)\\ =&\ g^{ij} \left( \partial_i \partial_j f - (\Gamma^C)_{ij}^k \partial_k f + \left( \Gamma - \Gamma^C \right)_{ij}^k \partial_k f \right)\\ =&\ \Delta_C f + g^{ij} \left( - \frac{1}{2} J_i^p d \omega_{pjl} g^{kl} \right) \nabla_k f\\ =&\ \Delta_C f - \frac{1}{2} g^{ij} \nabla^l f \left(J_i^p \theta_p \omega_{jl} + \theta_l g_{ij} - \theta_j g_{il} \right)\\ =&\ \Delta_C f - \frac{1}{2} g^{ij} \nabla^l f \left( J_i^p \theta_p g_{j q} J_l^q + \theta_l g_{ij} - \theta_j g_{il} \right)\\ =&\ \Delta_C f - \IP{\theta, \nabla f}. \end{align*} \end{proof} \end{lemma} \begin{lemma} \label{gradIcalc} Let $(M^4, g, J)$ be a Hermitian surface. Then \begin{align*} \nabla_i J_j^k =&\ \frac{1}{2} \left[ - g^{qk} \theta_q^J \omega^J_{ij} + J_j^q \theta_q^J \delta_i^k - g^{km} J_m^q \theta_q^J g_{ij} - \theta_j^J J_i^k \right]. 
\end{align*} \begin{proof} Since $J$ is parallel with respect to the Bismut connection we compute using (\ref{leeformula}) that \begin{align*} 2 \nabla_i J_j^k =&\ H_{ij}^l J_l^k - H_{il}^k J_j^l\\ =&\ - g^{lm} \left[ J_i^q \theta^J_q \omega^J_{jm} + J_m^q \theta^J_q \omega^J_{ij} + J_j^q \theta^J_q \omega^J_{mi} \right] J_l^k + g^{km} \left[ J_i^q \theta^J_q \omega^J_{lm} + J_m^q \theta^J_q \omega^J_{il} + J_l^q \theta^J_q \omega^J_{mi} \right] J_j^l\\ =&\ g^{lm} \left[ J_i^q \theta^J_q J_j^r g_{rm} + J_m^q \theta_q^J J_i^r g_{rj} + J_j^q \theta_q^J J_m^r g_{ri} \right]J_l^k - g^{km} \left[ J_i^q \theta_q^J J_l^r g_{rm} + J_m^q \theta_q^J J_i^r g_{rl} + J_l^q \theta_q^J J_m^r g_{ri} \right] J_j^l\\ =&\ J_i^q \theta_q^J J_j^l J_l^k + g^{qk} \theta_q^J J_i^r g_{rj} + J_j^q \theta_q^J g^{rk} g_{ri} - J_i^q \theta^J_q J_l^k J_j^l - g^{km} J_m^q \theta_q^J g_{ij} + g^{km} \theta_j^J J_m^r g_{ri}\\ =&\ - J_i^q \theta^J_q \delta_j^k + g^{qk} \theta_q^J J_i^r g_{rj} + J_j^q \theta_q^J \delta_i^k + J_i^q \theta_q^J \delta_j^k - g^{km} J_m^q \theta_q^J g_{ij} - g^{km} \theta_j^J g_{rm} J_i^r\\ =&\ - g^{qk} \theta_q^J \omega^J_{ij} + J_j^q \theta_q^J \delta_i^k - g^{km} J_m^q \theta_q^J g_{ij} - \theta_j^J J_i^k. \end{align*} \end{proof} \end{lemma} \subsection{Generalized K\"ahler Ricci flow} \label{GKRFsec} In this subsection we review the construction of generalized K\"ahler Ricci flow (GKRF) from \cite{ST3}. To begin we review the pluriclosed flow \cite{ST2}. Let $(M^{2n}, g, J)$ be a Hermitian manifold as above. We say that the metric is \emph{pluriclosed} if \begin{align*} \i\partial\bar{\partial} \omega = d d^c \omega = 0.
\end{align*} In \cite{ST2} the author and Tian introduced the \emph{pluriclosed flow} equation for such a metric, \begin{align} \label{PCF} \dt \omega =&\ - (\rho_B)^{1,1} = \partial \partial^* \omega + \bar{\partial} \bar{\partial}^* \omega + \frac{\i}{2} \partial\bar{\partial} \log \det g \end{align} where $\rho_B$ is the curvature of the determinant line bundle induced by the Bismut connection as described above, and the $(1,1)$ superscript indicates the projection onto the space of $(1,1)$ forms. In \cite{ST2} we showed that this flow preserves the pluriclosed condition and agrees with K\"ahler-Ricci flow when the initial data is K\"ahler. As exhibited in (\cite{ST3} Proposition 6.3), the induced pairs of metrics and Bismut torsions $(g_t,H_t)$ satisfy \begin{gather} \label{PCFmetricevs} \begin{split} \dt g =&\ - 2 \Rc^g + \frac{1}{2} \mathcal{H} - \mathcal L_{\theta^{\sharp}} g,\\ \dt H =&\ \Delta_d H - \mathcal L_{\theta^{\sharp}} H, \end{split} \end{gather} where $\mathcal{H}_{ij} = H_{ipq} H_{j}^{pq}$. This crucial formula shows how to construct a flow which preserves generalized K\"ahler geometry. In particular consider $(M^{2n}, g, I, J)$ a generalized K\"ahler structure. Then as $(g, I)$ and $(g, J)$ are pluriclosed structures, we can construct two solutions to pluriclosed flow with these initial data, denoting the K\"ahler forms $\omega^I_t, \omega^J_t$. Then let $\phi_t^I, \phi_t^J$ denote the one parameter families of diffeomorphisms generated by $(\theta^I)^{\sharp}_t, (\theta^J)^{\sharp}_t$ respectively.
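The gauge-fixing step that follows rests on the standard variation formula for pullbacks; as a sketch, recorded here for convenience:

```latex
% Variation formula for pullbacks along the flow of a time-dependent
% vector field X_t, i.e. (d/dt)\phi_t = X_t \circ \phi_t (standard fact):
\begin{align*}
  \dt \left( \phi_t^* T_t \right)
  = \phi_t^* \left( \dt T_t + \mathcal L_{X_t} T_t \right)
  \qquad \text{for any time-dependent tensor field } T_t.
\end{align*}
% Taking X_t = (\theta^{\sharp})_t and T_t = g_t (resp. H_t), the Lie
% derivative terms in (\ref{PCFmetricevs}) are exactly cancelled by the
% pullback, while Rc, \mathcal{H} and \Delta_d are diffeomorphism-equivariant.
```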
It follows from (\ref{PCFmetricevs}) that both $((\phi_t^I)^* g^I_t, (\phi_t^I)^* H^I_t)$ and $((\phi_t^J)^* g^J_t, - (\phi_t^J)^* H^J_t)$ are solutions to \begin{gather} \label{RGflow} \begin{split} \dt g =&\ - 2 \Rc^g + \frac{1}{2} \mathcal{H},\\ \dt H =&\ \Delta_d H, \end{split} \end{gather} with the same initial conditions, since the original structure was generalized K\"ahler. It follows that \begin{align*} (\phi_t^I)^* g^I_t =&\ (\phi_t^J)^* g^J_t =: g_t\\ (\phi_t^I)^* H^I_t =&\ - (\phi_t^J)^* H^J_t =: H_t \end{align*} defines a one-parameter family of generalized K\"ahler structures. To make this more explicit, observe that by construction $g_t$ is compatible with both of the integrable complex structures $(\phi_t^I)^* I, (\phi_t^J)^* J$. Thus, in principle the two complex structures must evolve to preserve the generalized K\"ahler condition. Making this explicit we arrive at the generalized K\"ahler-Ricci flow system (cf. \cite{STGK}): \begin{gather*} \begin{split} \dt g = - 2 \Rc^g + \frac{1}{2} \mathcal{H}, \qquad \dt H = \Delta_d H,\\ \dt I = L_{\theta_I^{\sharp}} I, \qquad \dt J = L_{\theta_J^{\sharp}} J, \end{split} \end{gather*} as claimed in the introduction. \begin{rmk} \label{gaugermk} In obtaining estimates for the flow, we will exploit two different points of view, each of which makes performing certain calculations easier. Certain estimates will use the system (\ref{GKRF}) directly, which we will call a solution ``in the $B$-field gauge.'' Other times it is easier to work with pluriclosed flow directly, so we pull back the flow to the fixed complex manifold $(M^{2n}, I)$. In other words by pulling back the entire system by the family of diffeomorphisms $(\phi_t^I)^{-1}$ we return to pluriclosed flow on $(M^{2n}, I)$, which encodes everything about the GKRF except the other complex structure. But the construction above makes clear that the other complex structure is \begin{align*} J_t =&\ \left[ (\phi_t^I)^{-1} \circ \phi_t^J \right]^* J.
\end{align*} We will refer to this point of view on GKRF as occurring ``in the $I$-fixed gauge.'' \end{rmk} \section{Nondegenerate generalized K\"ahler surfaces} \label{nondegsec} In this section we record some basic properties of generalized K\"ahler surfaces with nondegenerate Poisson structure. First we derive special linear algebraic aspects of this structure related to the angle function (see Definition \ref{angledef}), which plays a central role throughout what follows. Next we record some background on the Poisson structures associated to a generalized K\"ahler manifold, and its relationship to the construction of large families of nondegenerate generalized K\"ahler structures. Then we exhibit some general identities for the curvature and torsion of these structures which further emphasize the central role of the angle function, and which are essential to the analysis to follow. \subsection{Linear algebraic structure} In this subsection we recall well-known fundamental linear algebraic properties of biHermitian four-manifolds. The low dimensionality results in some key simplifications which are central to the analysis to follow. \begin{defn} \label{angledef} Given $(M^{2n}, g, I, J)$ a biHermitian manifold, let \begin{align*} p = \frac{1}{2n} \tr (I \circ J) \end{align*} denote the \emph{angle} between $I$ and $J$. Observe that since $I$ and $J$ are both compatible with $g$, by the Cauchy-Schwarz inequality we obtain \begin{align*} \brs{p} = \frac{1}{2n} \brs{\IP{I,J}_g} \leq \frac{1}{2n} \brs{I}_g \brs{J}_g = 1. \end{align*} \end{defn} \begin{lemma} \label{anticommutelemma} Let $(M^4, g, I, J)$ be a biHermitian manifold where $I$ and $J$ induce the same orientation. Then \begin{align*} \{I,J\} = 2 p \Id. \end{align*} \begin{proof} Since $I$ and $J$ induce the same orientation, $\omega_I$ and $\omega_J$ are both self-dual forms. Fix some point $x \in M$, and consider a $g$-orthonormal basis $\omega_1, \omega_2, \omega_3$ for self-dual two forms at $x$.
Direct calculations show that the corresponding endomorphisms given by raising an index via $g$, call them $K_i$, all anticommute and satisfy the quaternion relations. Moreover, since we can express \begin{align*} \omega_I = a \omega_1 + b \omega_2 + c \omega_3, \end{align*} with $a^2 + b^2 + c^2 = 1$, it follows that $I$ (and similarly $J$) is part of this quaternionic structure. In particular we may write \begin{align*} I = a_I K_1 + b_I K_2 + c_I K_3, \qquad J = a_J K_1 + b_J K_2 + c_J K_3. \end{align*} Since the $K_i$ pairwise anticommute one then directly computes that \begin{align*} \{I,J\} =&\ - 2 (a_I a_J + b_I b_J + c_I c_J) \Id. \end{align*} That is, $\{I,J\}$ is a multiple of the identity, and since $\tr (K_i \circ K_j) = - 4 \delta_{ij}$, the definition of $p$ gives $p = - (a_I a_J + b_I b_J + c_I c_J)$, which forces the final equation. \end{proof} \end{lemma} \begin{lemma} \label{bracketcalc} Let $(M^4, g, I, J)$ be a biHermitian manifold where $I$ and $J$ induce the same orientation. Then \begin{align*} [I,J]^2 = 4 \left( p^2 - 1 \right) \Id. \end{align*} \begin{proof} We directly compute \begin{align*} [I,J]^2 =&\ (IJ - JI)(IJ - JI)\\ =&\ IJIJ + JIJI - 2 \Id\\ =&\ IJ (- JI + 2p \Id) + JI (-IJ + 2p \Id) - 2 \Id\\ =&\ 2p \{I,J\} - 4 \Id\\ =&\ 4 \left( p^2 - 1 \right) \Id. \end{align*} \end{proof} \end{lemma} Given $(M^{2n}, g, I, J)$ a generalized K\"ahler manifold, let \begin{align*} \sigma = g^{-1} [I,J] \in \Lambda^2 TM. \end{align*} It was observed by Pontecorvo \cite{Pontecorvo} that in real dimension four this defines a holomorphic Poisson structure. This was extended to higher dimensions by Hitchin \cite{HitchinPoisson}. In particular, $\sigma$ is of type $(2,0) + (0,2)$ with respect to both complex structures, and is holomorphic with respect to each. \begin{defn} \label{nondegdef} Given $(M^{2n}, g, I, J)$ a generalized K\"ahler manifold, we say that it is \emph{nondegenerate} if $\sigma$ defines a nondegenerate pairing on $TM$.
\end{defn} Observe via Lemma \ref{bracketcalc} that a generalized K\"ahler structure is nondegenerate if and only if $\brs{p} < 1$. A nondegenerate holomorphic Poisson structure defines, via its inverse, a holomorphic symplectic form $\Omega$. Complex manifolds admitting such structures are fairly rigid, and in particular for complex surfaces can only be tori or $K3$ surfaces. Moreover, a nondegenerate generalized K\"ahler structure determines an almost hyperK\"ahler structure as we next define. \begin{defn} \label{aktdefn} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler structure. The \emph{associated almost hyperK\"ahler structure} is the triple $(K_0,K_1,K_2)$ where \begin{align} \label{aktriple} K_0 =&\ q^{-1}(IJ - p \Id), \qquad K_1 = I, \qquad K_2 = q^{-1} (J + p I), \end{align} for $q = \sqrt{1-p^2}$. Note that Lemma \ref{anticommutelemma} gives $[I,J] = 2 (IJ - p \Id)$, so that $K_0 = \frac{1}{2q} [I,J]$, and then Lemma \ref{bracketcalc} yields $K_0^2 = - \Id$; similarly $K_2 = K_0 K_1$. Each $K_i$ is an almost complex structure compatible with the given conformal class, and we will equivalently refer to the associated K\"ahler forms $\{\omega_{K_i}\}$ as the almost hyperK\"ahler structure. Direct calculations show that any pair of $\omega_{K_i}$ satisfies \begin{align} \label{hktriple} \omega_{K_i} \wedge \omega_{K_i} = \omega_{K_j} \wedge \omega_{K_j}, \qquad \omega_{K_i} \wedge \omega_{K_j} = 0. \end{align} Later we will need an explicit formula for the K\"ahler form associated to $K_0$. Direct calculations show that \begin{align} \label{K0kahlerform} \omega_{K_0} =&\ \frac{1}{2q} g [I,J]. \end{align} \end{defn} As it turns out, these associated K\"ahler forms encode the almost complex structures, as made precise in the following lemma: \begin{lemma} (\cite{Apostolov} Lemma 5, cf. \cite{Geiges,LRC}) Let $M$ be an oriented $4$-manifold and $\Phi_1, \Phi_2$ a pair of nondegenerate real $2$-forms on $M$ satisfying the conditions \begin{align*} \Phi_1 \wedge \Phi_1 = \Phi_2 \wedge \Phi_2, \qquad \Phi_1 \wedge \Phi_2 = 0.
\end{align*} Then there is a unique almost-complex structure $J$ on $M$ such that the $2$-form $\Omega = \Phi_1 - \i \Phi_2$ is of type $(2,0)$ with respect to $J$. If moreover $\Phi_1$ and $\Phi_2$ are closed, then $J$ is integrable and $\Omega$ defines a holomorphic symplectic structure on $(M, J)$. \end{lemma} \subsection{Local generality} \label{generalityrmk} Given the existence of so much rigid holomorphic Poisson structure, one might think that nondegenerate generalized K\"ahler manifolds are perhaps fully rigid, with only finite-dimensional classes of examples. This is not the case, as was shown in \cite{Apostolov,Gualtieridphil}. We follow the discussion of (\cite{Gualtieridphil} Examples 6.31, 6.32). There it is shown that the specification of a nondegenerate generalized K\"ahler structure in dimension four, with both complex structures inducing the same orientation, is equivalent to specifying three closed $2$-forms $B, \omega_1, \omega_2$ such that \begin{align} \label{example10} B \wedge \omega_1 = B \wedge \omega_2 = \omega_1\wedge \omega_2 = \omega_1^2 + \omega_2^2 - 4 B^2 = 0, \qquad \omega_1^2 = \lambda \omega_2^2, \qquad \lambda > 0. \end{align} In particular, given this data, the generalized K\"ahler structure is determined by pure spinors $e^{B + \i \omega_1}, e^{-B + \i \omega_2}$ (see \cite{Gualtieridphil} for the pure spinor description of generalized K\"ahler structures). One can use this interpretation to produce non-hyperHermitian nondegenerate generalized K\"ahler structures. Specifically, start with a hyperK\"ahler triple $(M, g, I, J, K)$, and let $F_t$ be a one-parameter family of diffeomorphisms generated by an $\omega_K$-Hamiltonian vector field. For $t$ sufficiently small, the forms \begin{align*} B = \omega_K, \qquad \omega_1 = \omega_I - F_t^* \omega_J, \qquad \omega_2 = \omega_I + F_t^* \omega_J \end{align*} satisfy (\ref{example10}).
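As an aside (this verification is ours, and not part of the original argument), the pointwise linear algebra of Lemmas \ref{anticommutelemma} and \ref{bracketcalc} is easy to test numerically by realizing the quaternionic frame $K_1, K_2, K_3$ explicitly on $\mathbb{R}^4$; a minimal sketch:

```python
import numpy as np

# K1, K2, K3: left multiplication by i, j, k on H = R^4 (basis 1, i, j, k).
# These are orthogonal complex structures satisfying the quaternion
# relations, as in the proof of Lemma (anticommutelemma).
K1 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
K2 = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
K3 = K1 @ K2
Id = np.eye(4)
assert np.allclose(K1 @ K1, -Id) and np.allclose(K2 @ K2, -Id) and np.allclose(K3 @ K3, -Id)

# I and J: two unit-norm combinations of the frame, as in the quaternionic
# decomposition used in the lemma.
rng = np.random.default_rng(0)
aI, aJ = rng.normal(size=3), rng.normal(size=3)
aI, aJ = aI / np.linalg.norm(aI), aJ / np.linalg.norm(aJ)
I = aI[0] * K1 + aI[1] * K2 + aI[2] * K3
J = aJ[0] * K1 + aJ[1] * K2 + aJ[2] * K3

p = np.trace(I @ J) / 4.0            # the angle function at this point
assert np.isclose(p, -aI @ aJ)       # p = -(a_I a_J + b_I b_J + c_I c_J)
assert np.allclose(I @ J + J @ I, 2 * p * Id)          # {I, J} = 2p Id
comm = I @ J - J @ I
assert np.allclose(comm @ comm, 4 * (p**2 - 1) * Id)   # [I, J]^2 = 4(p^2 - 1) Id
```

The random unit vectors here are an arbitrary choice; any pair of unit combinations of the frame yields the same identities.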
Moreover, as shown in (\cite{Apostolov} Lemma 6), if $f$ denotes the Hamiltonian function generating the $\omega_K$-Hamiltonian vector field, then for the angle function associated to the generalized K\"ahler data one has \begin{align*} \left. \dt p \right|_{t=0} =&\ \frac{1}{2} \Delta f. \end{align*} Hence for any nonconstant $f$ one produces structures with nonconstant angle, which are therefore not hyperK\"ahler (cf. Lemma \ref{rigiditylemma} below). This same deformation was used in \cite{Apostolov} to produce non-hyperHermitian, strongly biHermitian, conformal classes on hyperHermitian Hopf surfaces. However, as exhibited in \cite{Apostolov} Corollary 2, the Gauduchon metrics in these conformal classes are never generalized K\"ahler, and so these do not play a role in the analysis of Theorem \ref{4dLTE}. Note that in this example the Poisson structure is $\omega_K$, and it is crucial to the construction of the deformation. This type of deformation, arising from the associated Poisson structure, was generalized by Goto \cite{Goto}. Hence there are large families of nondegenerate generalized K\"ahler structures. As it turns out, it is possible to give a simple characterization of when such a structure on a surface is hyperK\"ahler. This lemma is generally known (cf. \cite{Pontecorvo}), and we include a simple proof based on our curvature calculations. \begin{lemma} \label{rigiditylemma} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler surface. Then $(M^4, g)$ is hyperK\"ahler if and only if $p$ is constant. \begin{proof} It follows from direct calculations that any two complex structures which are part of a hyperK\"ahler sphere have constant angle. Conversely, if $p$ is constant, then it follows from Lemma \ref{4dthetacalc} below that $\theta^I = \theta^J = 0$, which, since we are on a complex surface, implies $d \omega_I = 0$, i.e. that the metric is K\"ahler.
It then follows from Lemma \ref{chernricci} that the metric is Calabi-Yau, hence hyperK\"ahler. \end{proof} \end{lemma} \subsection{Torsion and curvature identities} Here we record a number of useful identities for the torsion and curvature of generalized K\"ahler manifolds. Most of these identities have been previously observed in the literature, but we include the short derivations for completeness and to fix conventions/notation. \begin{lemma}(\cite{AG} Proposition 3) \label{leeforms} Let $(M^4, g, I, J)$ be a generalized K\"ahler manifold such that $I$ and $J$ induce the same orientation. Then $\theta^I = - \theta^J$. \begin{proof} Note that $\omega_I$ is self-dual. Moreover, since $I$ induces the metric orientation the action of $I$ on forms commutes with Hodge star. Hence \begin{align*} \theta^I =&\ I d^* \omega_I = I \star d \star \omega_I = I \star d \omega_I = \star I d \omega_I = \star H_I. \end{align*} Similarly one obtains $\theta^J = \star H_J$. Since $H_I = - H_J$ the result follows. \end{proof} \end{lemma} \begin{rmk} Given the result of Lemma \ref{leeforms}, to simplify notation we will adopt the convention $\theta = \theta^I$. \end{rmk} \begin{lemma} (cf. \cite{Apostolov} Lemma 7) \label{dpcalc} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler four-manifold. Then \begin{align*} d p =&\ \frac{1}{4} \left( \theta^I - \theta^J \right) [I,J] = \frac{1}{2} \theta [I,J]. \end{align*} \begin{proof} By Lemma \ref{leeformsurfaces} we have \begin{align*} H_{ijk} = - J_i^l \theta_l \omega_{jk} - J_k^l \theta_l \omega_{ij} - J_j^l \theta_l \omega_{ki}. 
\end{align*} Thus we compute \begin{align*} 4 \nabla_a p =&\ \nabla_a \left( I_r^s J_s^r \right)\\ =&\ \nabla_a I_r^s J_s^r + I_r^s \nabla_a J_s^r\\ =&\ \frac{1}{2} \left( (H^I)_{a r}^t I_t^s - (H^I)_{a t}^s I_r^t \right) J_s^r + \frac{1}{2} I_r^s \left[ (H^J)_{a s}^t J_t^r - (H^J)_{a t}^r J_s^t \right]\\ =&\ - \frac{1}{2} \left[ g^{tv} \left( I_a^w \theta^I_w \omega^I_{rv} + I_v^w \theta^I_w \omega^I_{ar} + I_r^w \theta^I_w \omega^I_{va} \right) \right] I_t^s J_s^r\\ &\ + \frac{1}{2} \left[ g^{sv} \left( I_a^l \theta^I_l \omega^I_{tv} + I_v^w \theta^I_w \omega^I_{at} + I_t^w \theta^I_w \omega^I_{va} \right) \right] I_r^t J_s^r\\ &\ - \frac{1}{2} I_r^s \left[ g^{tv} \left( J_a^l \theta^J_l \omega^J_{sv} + J_v^l \theta^J_l \omega^J_{as} + J_s^l \theta^J_l \omega^J_{va} \right) \right] J_t^r\\ &\ + \frac{1}{2} I_r^s \left[ g^{rv} \left( J_a^l \theta^J_l \omega^J_{tv} + J_v^l \theta^J_l \omega^J_{at} + J_t^l \theta^J_l \omega_{va} \right) \right] J_s^t\\ =&\ \sum_{i=1}^{12} A_i. \end{align*} Direct calculations show that $A_1 = A_4 = A_7 = A_{10} = 0$. 
On the other hand we have \begin{align*} 2 A_2 =&\ - g^{tv} I_v^w \theta_w^I \omega^I_{ar} I_t^s J_s^r= g^{tv} I_v^w \theta_w^I I_a^p g_{pr} I_t^s J_s^r = g^{ws} \theta^I_w I_a^p g_{pr} J_s^r = - \theta_r^I I_a^p J_p^w = - \theta^I (JI)_a,\\ 2 A_3 =&\ - g^{tv} I_r^w \theta^I_w \omega^I_{va} I_t^s J_s^r = g^{tv} I_r^w \theta^I_w I_v^p g_{pa} I_t^s J_s^r = g^{ps} I_r^w \theta^I_w g_{pa} J_s^r = \theta^I (IJ)_a,\\ 2 A_5 =&\ g^{sv} I_v^w \theta_w^I \omega^I_{at} I_r^t J_s^r = - g^{sv} I_v^w \theta_w^I I_a^p g_{pt} I_r^t J_s^r = - g^{sv} I_v^w \theta^I_w g_{ar} J_s^r = g^{sv} I_v^w \theta_w^I J_a^r g_{rs} = \theta^I (IJ)_a,\\ 2 A_6 =&\ g^{sv} I_t^w \theta^I_w \omega^I_{va} I_r^t J_s^r = - g^{sv} I_t^w \theta_w^I I_v^p g_{pa} I_r^t J_s^r = g^{sv} \theta_r^I I_v^p g_{pa} J_s^r = - \theta_r^I I_a^s J_s^r = - \theta^I (JI)_a,\\ 2 A_8 =&\ - I_r^s g^{tv} J_v^l \theta^J_l \omega^J_{as} J_t^r = I_r^s g^{tv} J_v^l \theta^J_l J_a^p g_{ps} J_t^r = I_r^s g^{lr} \theta^J_l J_a^p g_{ps} = - I_p^l \theta_l^J J_a^p = - \theta ^J(IJ)_a,\\ 2 A_9 =&\ - I_r^s g^{tv} J_s^l \theta_l^J \omega^J_{va} J_t^r = I_r^s g^{tv} J_s^l \theta_l^J J_v^p g_{pa} J_t^r = I_r^s g^{pr} J_s^l \theta_l^J g_{pa} = \theta^J (JI)_a,\\ 2 A_{11} =&\ I_r^s g^{rv} J_v^l \theta_l^J \omega_{at}^J J_s^t = - I_r^s g^{rv} J_v^l \theta_l^J J_a^p g_{pt} J_s^t = - I_r^s g^{rv} J_v^l \theta_l^J g_{as} = I_a^v J_v^l \theta^J_l = \theta^J (JI)_a,\\ 2 A_{12} =&\ I_r^s g^{rv} J_t^l \theta^J_l \omega_{va} J_s^t = - I_r^s g^{rv} J_t^l \theta^J_l J_v^p g_{pa} J_s^t = I_r^l g^{rv} \theta_l^J J_v^p g_{pa} = - I_r^l J_a^r \theta_l^J = - \theta^J (IJ)_a. \end{align*} The first claimed formula follows, and the second follows from Lemma \ref{leeforms}. \end{proof} \end{lemma} \begin{lemma} \label{4dthetacalc} Given $(M^4, g, I, J)$ a nondegenerate generalized K\"ahler structure, one has \begin{align*} \theta =&\ \frac{1}{2(p^2 - 1)} dp [I,J]. 
\end{align*} \begin{proof} Combining Lemmas \ref{bracketcalc} and \ref{dpcalc} we have that \begin{align*} dp [I,J] =&\ \frac{1}{4} (\theta^I - \theta^J) [I,J]^2 = \frac{1}{2} \theta [I,J]^2 = 2 (p^2 - 1) \theta. \end{align*} \end{proof} \end{lemma} \begin{lemma} \label{dptheta} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler four-manifold. Then \begin{enumerate} \item $\IP{dp,\theta} = 0$. \item $\brs{\theta}^2 = \frac{\brs{dp}^2}{(1 - p^2)}$ \end{enumerate} \begin{proof} We directly compute using Lemma \ref{dpcalc}, \begin{align*} \IP{dp,\theta} =&\ g^{ij} dp_i \theta_j\\ =&\ g^{ij} \left[ \theta_k [I,J]_i^k \theta_j \right]\\ =&\ g^{ij} \left[ \theta_k \theta_j \left( I_l^k J_i^l - J_l^k I_i^l \right) \right]\\ =&\ \theta_k I_l^k J_i^l \theta^i + \theta_k \theta_j J_l^k g^{li} I_i^j\\ =&\ \theta_k I_l^k J_i^l \theta^i - \theta_k \theta_j g^{kl} J_l^i I_i^j\\ =&\ \theta_k I_l^k J_i^l \theta^i - \theta_j I_i^j J_l^i \theta^l\\ =&\ 0. \end{align*} Next using Lemma \ref{4dthetacalc} we have \begin{align*} \brs{\theta}^2 =&\ g^{ij} \theta_i \theta_j\\ =&\ g^{ij} \left( \frac{1}{2 (p^2 - 1)} dp_k [I,J]_i^k \right) \left( \frac{1}{2 (p^2 - 1)} dp_l [I,J]_j^l \right)\\ =&\ - \frac{1}{4(p^2-1)^2} dp_k g^{ki} [I,J]_i^j [I,J]_j^l dp_l\\ =&\ \frac{1}{(1-p^2)} \brs{dp}^2. \end{align*} \end{proof} \end{lemma} \begin{lemma} \label{chernricci} Let $(M^{4n}, g, I, J)$ be a nondegenerate generalized K\"ahler structure. Then \begin{align*} (\rho_C^I)^{1,1} =&\ - d I d \log \sqrt{\det [I,J]}. \end{align*} In particular, when $n=1$ we have that \begin{align*} \rho_C^I =&\ - d I d \log (1-p^2). \end{align*} \begin{proof} First we observe that since $\mho := \Omega^{n}$ is a holomorphic volume form, the Chern connection on the canonical bundle associated to the volume form $\mho \wedge \bar{\mho}$ is flat. 
Hence \begin{align*} \rho^I_C(\omega_I^{2n}) =&\ \rho^I_C(\mho \wedge \bar{\mho}) - d I d \log \frac{\omega_I^{2n}}{\mho \wedge \bar{\mho}} = - d I d \log \frac{\omega_I^{2n}}{\mho \wedge \bar{\mho}}. \end{align*} Then we note that \begin{align*} \omega_I^{2n} =&\ dV_g\\ =&\ \sqrt{\det g_{ij}} dx^1 \wedge \dots \wedge dx^{4n}\\ =&\ \sqrt{ \det \left( \Omega [I,J] \right)} dx^1 \wedge \dots \wedge dx^{4n}\\ =&\ \sqrt{\det [I,J]} \Pf \Omega\\ =&\ \sqrt{\det [I,J]} \mho \wedge \bar{\mho}. \end{align*} In the case $n=1$, using Lemma \ref{bracketcalc} we see that \begin{align*} \sqrt{\det [I,J]} = 4(1 - p^2). \end{align*} Hence the second result follows. \end{proof} \end{lemma} \section{Nondegenerate generalized K\"ahler-Ricci flow} \label{mainproof} In this section we derive the main a priori estimates employed in the proof of Theorem \ref{4dLTE}. The a priori estimates roughly break into two parts. First we derive evolution equations for functions associated to the angle function in the $B$-field flow gauge. Very surprisingly, a certain function of the angle is a solution to the time-dependent heat equation with no reaction terms. Direct maximum principle arguments based on this simple evolution equation lead to a number of strong a priori estimates on the torsion, which play a central role in the proof. Second, we study the flow in the $I$-fixed gauge, utilizing a certain reduction of the pluriclosed flow to a flow for a $(1,0)$-form and a potential function to obtain further a priori estimates, including uniform equivalence of the evolving volume form. Once these estimates are in place we can obtain the long time existence and convergence of the flow by a familiar path. In particular, one can exploit the potential function to obtain an a priori estimate for the trace of the metric with respect to a background metric. Since we have already estimated the volume form, this yields uniform equivalence of the metric along the flow.
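Before proceeding we note, as a numerical aside of our own, that the identities of Lemmas \ref{dpcalc}, \ref{4dthetacalc}, and \ref{dptheta} involve only linear algebra at a point, and so can be spot-checked in a flat quaternionic model (Euclidean $g$, so indices are raised and lowered trivially); a sketch:

```python
import numpy as np

# Flat pointwise model: g Euclidean, K1, K2, K3 left quaternion
# multiplication on R^4, I = K1, J a unit combination of the frame.
K1 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
K2 = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
K3 = K1 @ K2

rng = np.random.default_rng(1)
aJ = rng.normal(size=3)
aJ /= np.linalg.norm(aJ)
I = K1
J = aJ[0] * K1 + aJ[1] * K2 + aJ[2] * K3
p = np.trace(I @ J) / 4.0
comm = I @ J - J @ I                     # [I, J]

# Lemma (dpcalc) gives dp = (1/2) theta [I, J]; starting from an arbitrary
# one-form theta and defining dp this way, the remaining identities follow.
theta = rng.normal(size=4)
dp = 0.5 * theta @ comm

assert np.allclose(theta, dp @ comm / (2 * (p**2 - 1)))   # Lemma (4dthetacalc)
assert np.isclose(dp @ theta, 0.0)                        # <dp, theta> = 0
assert np.isclose(theta @ theta, dp @ dp / (1 - p**2))    # |theta|^2 = |dp|^2/(1-p^2)
```

The check uses only that $[I,J]$ is $g$-skew with $[I,J]^2 = 4(p^2-1)\Id$, exactly as in the proofs above.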
Once these are in place we can invoke the $C^{\alpha}$ estimate for the metric shown in \cite{SBIPCF} to obtain the full regularity of the flow. Many of these estimates are not uniform as time goes to infinity, but we can exploit the decay of the torsion tensor to obtain the weak convergence claims. \subsection{A priori estimates using the angle function} \label{eveqns} \begin{lemma} \label{pevol} Let $(M^4, g_t, I_t, J_t)$ be a solution to GKRF with nondegenerate initial condition in the $B$-field gauge. Then \begin{align*} \dt p =&\ \Delta p + \frac{2 p \brs{dp}^2}{(1 - p^2)}. \end{align*} \begin{proof} Recall that for a complex structure $J$ and a vector field $X$ we have \begin{align*} (L_X J)_k^l =&\ X^q \nabla_q J_k^l - J_k^p \nabla_p X^l + \nabla_k X^p J_p^l. \end{align*} Thus we can compute \begin{align*} \dt 4 p =&\ \dt (I_j^k J_k^j)\\ =&\ (L_{\theta} I)_j^k J_k^j - I_j^k (L_{\theta} J)_k^j\\ =&\ \left[ \theta^p \nabla_p I_j^k - I_j^p \nabla_p \theta^k + \nabla_j \theta^p I_p^k \right] J_k^j - I_j^k \left[ \theta^q \nabla_q J_k^j - J_k^p \nabla_p \theta^j + \nabla_k \theta^p J_p^j \right]\\ =&\ \theta^p \nabla_p I_j^k J_k^j - I_j^k \theta^q \nabla_q J_k^j + 2 \tr \left( \nabla \theta^{\sharp} \cdot [J,I] \right). \end{align*} We observe using Lemmas \ref{gradIcalc} and \ref{dptheta} that \begin{align*} \theta^p \nabla_p I_j^k J_k^j =&\ \theta^p \left[ - g^{qk} \theta_q^I \omega^I_{pj} + I_j^q \theta_q^I \delta_p^k - g^{km} I_m^q \theta_q^I g_{pj} - \theta_j^I I_p^k \right] J_k^j\\ =&\ \theta^p \left[ g^{qk} \theta_q g_{rj} I_p^r J_k^j + I_j^q \theta_q J_p^j + g^{km} I_m^q \theta_q g_{kj} J_p^j - \theta_j I_p^k J_k^j \right]\\ =&\ - \theta_r I_j^r J_k^j \theta^k + \theta_q I_j^q J_p^j \theta^p + \theta_q I_m^q J_p^m \theta^p - \theta_j J_k^j I_p^k \theta^p\\ =&\ \IP{\theta, \theta [I,J]}\\ =&\ 2 \IP{dp, \theta}\\ =&\ 0. \end{align*} A very similar calculation yields \begin{align*} - I_j^k \theta^q \nabla_q J_k^j =&\ 0. 
\end{align*} Then we note using Lemmas \ref{bracketcalc} and \ref{4dthetacalc} that \begin{align*} 2 \tr \left( \nabla \theta^{\sharp} \cdot [J,I] \right) =&\ 2 \nabla_q \theta^p [J,I]_p^q\\ =&\ 2 \nabla_q \theta_r g^{pr} [J,I]_p^q\\ =&\ 2 \nabla_q \left[ \frac{1}{2 (p^2 - 1)} \nabla_s p [I,J]_r^s \right] g^{pr} [J,I]_p^q\\ =&\ \nabla_q \left[ \frac{1}{p^2 - 1} \nabla_s p [I,J]_r^s g^{pr} [J,I]_p^q \right] - \frac{1}{p^2 - 1} \nabla_s p [I,J]_r^s g^{pr} \nabla_q [J,I]_p^q\\ =&\ \nabla_q \left[ \frac{1}{p^2 - 1} \nabla_s p g^{sr} [J,I]_r^p [J,I]_p^q \right] - 2\theta^p \nabla_q [J,I]_p^q\\ =&\ 4 \nabla_q \nabla_s p g^{sq} - 2\theta^p \nabla_q [J,I]_p^q\\ =&\ 4 \Delta p + 2\theta^p \nabla_q [I,J]_p^q. \end{align*} Now we simplify the remaining term \begin{align*} 2 \theta^p \nabla_q [I,J]_p^q =&\ 2 \theta^p \nabla_q \left( I_r^q J_p^r - J_r^q I_p^r \right)\\ =&\ 2 \theta^p \left[ (\nabla_q I_r^q) J_p^r + I_r^q \nabla_q J_p^r - (\nabla_q J_r^q) I_p^r - J_r^q (\nabla_q I_p^r) \right)\\ =&\ \sum_{i=1}^4 A_i. \end{align*} Then \begin{align*} A_1 =&\ \theta^p J_p^r \left( - g^{tq} \theta_t^I \omega^I_{qr} + I_r^t \theta_t \delta_q^q - g^{qm} I_m^t \theta_t g_{qr} - \theta_r I_q^q \right)\\ =&\ 4 I_r^t J_p^r \theta^p \theta_t + \theta^p \theta_t J_p^r g^{tq} I_q^v g_{vr} - \theta^p \theta_t J_p^r I_m^t g^{qm} g_{qr}\\ =&\ 4 I_r^t J_p^r \theta^p \theta_t - \theta^p \theta_t J_p^r I_r^t - \theta^p \theta_t J_p^r I_r^t\\ =&\ 2 I_r^t J_p^r \theta^p \theta_t\\ =&\ \left( IJ + JI \right)_p^t \theta^p \theta_t\\ =&\ 2 p \brs{\theta}^2. \end{align*} Next \begin{align*} A_2 =&\ \theta^p I_r^q \left( - g^{tr} \theta_t^J \omega^J_{qp} + J_p^t \theta_t^J \delta_q^r - g^{rm} J_m^t \theta_t^J g_{qp} - \theta_p^J J_q^r \right)\\ =&\ (\tr IJ) \brs{\theta}^2 - \theta^p I_r^q g^{tr} \theta_t J_q^v g_{vp} + \theta^p I_r^q g^{rm} J_m^t \theta_t g_{qp}\\ =&\ (\tr IJ) \brs{\theta}^2 - 2 \theta^r \theta_v I_r^q J_q^v\\ =&\ 2 p \brs{\theta}^2. 
\end{align*} Also \begin{align*} A_3 =&\ - \theta^p I_p^r \left( - g^{tq} \theta_t^J \omega^J_{qr} + J_r^t \theta^J_t \delta_q^q - g^{qm} J_m^t \theta^J_t g_{qr} - \theta^J_r J_q^q \right)\\ =&\ 4 \theta^p I_p^r J_r^t \theta_t + \theta^p I_p^r g^{tq} \theta_t J_q^v g_{vr} - \theta^p I_p^r J_r^t \theta_t\\ =&\ 4 \theta^p I_p^r J_r^t \theta_t - \theta^p I_p^r \theta_t J_q^t g^{qv} g_{vr} - \theta^p I_p^r J_r^t \theta_t\\ =&\ 2 \theta^p \theta_t J_r^t I_p^r\\ =&\ 2 p \brs{\theta}^2. \end{align*} Lastly \begin{align*} A_4 =&\ - \theta^p J_r^q \left( - g^{tr} \theta_t^I \omega^I_{qp} + I_p^t \theta_t^I \delta_q^r - g^{rm} I_m^t \theta_t^I g_{qp} - \theta_p^I I_q^r \right)\\ =&\ \tr (IJ) \brs{\theta}^2 - \theta^p J_r^q g^{tr} \theta_t I_q^v g_{vp} + \theta^p J_r^q g^{rm} I_m^t g_{qp} \theta_t\\ =&\ \tr(IJ) \brs{\theta}^2 - \theta_v I_q^v J_r^q \theta^r - \theta_t I_m^t J_r^m \theta^r\\ =&\ 2 p \brs{\theta}^2. \end{align*} It follows that $2 \theta^p \nabla_q [I,J]_p^q = 8 p \brs{\theta}^2$. Hence, also applying Lemma \ref{dptheta}, we obtain \begin{align*} \dt 4p =&\ \Delta 4p + 8 p \brs{\theta}^2 = \Delta 4p + \frac{8 p \brs{dp}^2}{1 - p^2}. \end{align*} \end{proof} \end{lemma} \begin{lemma} \label{logpev} Let $(M^4, g_t, I_t, J_t)$ be a solution to GKRF with nondegenerate initial condition in the $B$-field gauge. Then \begin{align*} \dt \log \frac{1+ p}{1- p} =&\ \Delta \log \frac{1+p}{1-p}.
\end{align*} \begin{proof} We directly compute using Lemma \ref{pevol} \begin{align*} \dt \log \frac{1+p}{1-p} =&\ \frac{1-p}{1+p} \dt \frac{1+p}{1-p}\\ =&\ \frac{1-p}{1+p} \left[ \frac{(1-p) \dt p - (1+p) (- \dt p)}{(1-p)^2}\right]\\ =&\ \frac{2}{(1-p^2)} \left[ \Delta p + \frac{2p}{(1-p^2)} \brs{dp}^2 \right] \end{align*} Similarly we have \begin{gather} \label{logpev10} \begin{split} \Delta \log \frac{1+p}{1-p} =&\ \nabla^i \left[ \frac{1-p}{1+p} \nabla_i \frac{1+p}{1-p} \right]\\ =&\ \nabla^i \left\{ \frac{1-p}{1+p} \left[ \frac{(1-p) \nabla_i p - (1+p) (- \nabla_i p)}{(1-p)^2}\right] \right\}\\ =&\ \nabla^i \left\{ \frac{2}{(1-p^2)} \nabla_i p \right\}\\ =&\ \frac{2}{(1-p^2)} \left[ \Delta p + \frac{2p}{(1-p^2)} \brs{dp}^2 \right]. \end{split} \end{gather} The result follows. \end{proof} \end{lemma} This very simple evolution equation leads to a number of crucial a priori estimates for the flow, and the evolution equations themselves are very useful in constructing test functions. As is well-known, for a solution to the heat equation against a Ricci flow background, the gradient function satisfies a particularly clean evolution equation, with the evolution of the metric exactly canceling the Ricci curvature term arising from the Bochner formula. For a solution to the $B$-field flow, the contribution from the positive definite tensor $\mathcal{H}$ makes the corresponding evolution equation even more useful. \begin{lemma} \label{gradientevlemma} Let $(M^n, g_t, H_t)$ be a solution to (\ref{RGflow}), and let $\phi_t$ be a solution to \begin{align*} \dt \phi =&\ \Delta_{g_t} \phi. \end{align*} Then \begin{align*} \dt \brs{\nabla \phi}^2 =&\ \Delta \brs{\nabla \phi}^2 - 2 \brs{\nabla^2 \phi}^2 - \frac{1}{2} \IP{\mathcal{H}, \nabla \phi \otimes \nabla \phi}. 
\end{align*} \begin{proof} Using the given evolution equations and the Bochner formula we have \begin{align*} \dt \brs{\nabla \phi}^2 =&\ \IP{ 2 \Rc - \frac{1}{2} \mathcal{H}, \nabla \phi \otimes \nabla \phi} + 2 \IP{\nabla \Delta \phi, \nabla \phi}\\ =&\ 2 \Rc \left( \nabla \phi, \nabla \phi \right) + 2 \IP{\nabla \Delta \phi, \nabla \phi} - \frac{1}{2} \IP{\mathcal{H}, \nabla \phi \otimes \nabla \phi}\\ =&\ \Delta \brs{\nabla \phi}^2 - 2 \brs{\nabla^2 \phi}^2 - \frac{1}{2} \IP{\mathcal{H}, \nabla \phi \otimes \nabla \phi}, \end{align*} as required. \end{proof} \end{lemma} \begin{lemma} \label{Hsquaredlemma} Let $(M^4, g)$ be a Riemannian manifold, and $H \in \Lambda^3$. Then \begin{align*} \mathcal{H} =&\ \brs{\star H}^2 g - (\star H) \otimes (\star H). \end{align*} \begin{proof} We express $H = \star \alpha$, and then choose coordinates where $g$ is the identity. It follows that \begin{align*} \mathcal{H}_{ij} = H_{ipq} H_{j}^{pq} =&\ \alpha^r (dV_g)_{ripq} \alpha^s (dV_g)_{sjpq}. \end{align*} It is clear that for any unit vector $v$ orthogonal to $\alpha^{\sharp}$, one has $\mathcal{H}(v,v) = \brs{\alpha}^2$. On the other hand certainly $\mathcal{H}(\alpha^{\sharp}, \alpha^{\sharp}) = 0$, and so the result follows. \end{proof} \end{lemma} \begin{lemma} \label{gradmuprop} Let $(M^4, g_t, I_t, J_t)$ be a solution to GKRF with nondegenerate initial condition in the $B$-field gauge. Furthermore let $\mu = \log \frac{1+p}{1-p}$. Then \begin{align*} \dt \brs{\nabla \mu}^2 =&\ \Delta \brs{\nabla \mu}^2 - 2 \brs{\nabla^2 \mu}^2 - \frac{1-p^2}{8} \brs{\nabla \mu}^4 = \Delta \brs{\nabla \mu}^2 - 2 \brs{\nabla^2 \mu}^2 - \frac{2}{1-p^2} \brs{\theta}^4. \end{align*} \begin{proof} Combining Lemma \ref{logpev} with Lemma \ref{gradientevlemma} yields \begin{align*} \dt \brs{\nabla \mu}^2 =&\ \Delta \brs{\nabla \mu}^2 - 2 \brs{\nabla^2 \mu}^2 - \frac{1}{2} \IP{\mathcal{H}, \nabla \mu \otimes \nabla \mu}. \end{align*} We observe that in four dimensions, $\theta = \star H$.
Moreover, $\nabla \mu$ is a multiple of $\nabla p$, which is orthogonal to $\theta$ via Lemma \ref{dptheta}. It follows from Lemma \ref{Hsquaredlemma} that \begin{align*} \mathcal{H}(\nabla \mu, \nabla \mu) = \brs{\theta}^2 \brs{\nabla \mu}^2. \end{align*} On the other hand using the definition of $\mu$ (cf. \ref{logpev10}) and Lemma \ref{dptheta} we have \begin{align*} \brs{\nabla \mu}^2 =&\ \frac{4 \brs{dp}^2}{(1-p^2)^2} = \frac{4}{1-p^2} \brs{\theta}^2, \end{align*} and the result follows. \end{proof} \end{lemma} Now we derive two key a priori estimates from these evolution equations via the maximum principle. \begin{prop} \label{pthetaestimate} Let $(M^4, g_t, I_t, J_t)$ be a solution to GKRF with nondegenerate initial condition in the $B$-field gauge. Then there is a constant $\delta = \delta(I_0,J_0)$ such that \begin{align*} -1< \inf p_0 \leq p_t \leq \sup p_0 < 1, \qquad \sup_{M \times \{t\}} \brs{\nabla \mu}^2 \leq \left[ \left( \sup \brs{\nabla \mu_0}\right)^{-2} + \delta t \right]^{-1}. \end{align*} \begin{proof} The first inequalities follow by applying the maximum principle to the evolution equation of Lemma \ref{pevol}. For the second we first observe that $\frac{1}{8} \inf (1-p_t^2) \geq \frac{1}{8} \inf (1-p_0^2) = \delta > 0$. Then we apply the maximum principle to the result of Lemma \ref{gradmuprop} to show that $\sup \brs{\nabla \mu_t}^2$ is bounded above by the solution to the ODE \begin{align*} \frac{dF}{dt} =&\ - \delta F^2, \qquad F(0) = \sup \brs{\nabla \mu_0}^2. \end{align*} The proposition follows. \end{proof} \end{prop} \subsection{Estimates from the decomposed pluriclosed flow} \label{decflowsec} In this section we derive further a priori estimates for the generalized K\"ahler-Ricci flow, purely from the point of view of pluriclosed flow. In \cite{ST3} the author and Tian observed that the pluriclosed flow reduces naturally to a degenerate parabolic flow of a $(1,0)$-form. 
In \cite{SBIPCF} we exhibited a further decomposition into a scalar flow coupled to a parabolic flow for a $(1,0)$-form, which naturally reduces to the parabolic complex Monge-Amp\`ere equation when the $(1,0)$-form vanishes. We review this construction in our special setting below. First, as in the reduction of K\"ahler-Ricci flow to the parabolic complex Monge-Amp\`ere equation (cf. \cite{TZ}), one must choose an appropriate family of background pluriclosed metrics whose Aeppli cohomology classes agree with those of the flowing metric. However, in our setting we already know that $(M^4, I)$ admits a holomorphic volume form. It follows that $c_1 = 0$, and so we may choose a Hermitian background metric $h$ such that $\rho_C(h) = 0$. Now suppose $\omega_t$ is a solution to pluriclosed flow on $(M^4, I)$. One can directly check using (\ref{PCF}) (cf. \cite{SBIPCF} Lemma 3, with $\mu = 0$) that if $\alpha_t$ solves \begin{gather} \label{alphaflow} \begin{split} \dt \alpha =&\ \bar{\partial}^*_{g_t} \omega_t - \frac{\i}{2} \partial \log \frac{\det g_t}{\det h},\\ \alpha_0 =&\ 0, \end{split} \end{gather} then the one-parameter family of pluriclosed metrics $\omega_{\alpha} = \omega_0 + \bar{\partial} \alpha + \partial \bar{\alpha}$ is the given solution to pluriclosed flow. For technical reasons in the proof of convergence, we will actually choose a different initial value of $\alpha$, corresponding to a different background metric which is K\"ahler. First of all we claim that $(M^4, I)$ is indeed a K\"ahler manifold, an observation originally appearing in (\cite{AG} Proposition 2). By the Enriques-Kodaira classification of surfaces, the canonical bundle being trivial implies that $(M^4, I)$ is either a torus, a K3 surface, or a (non-K\"ahler) primary Kodaira surface (see \cite{Barth}).
However, one can rule out the existence of any kind of biHermitian structure (let alone generalized K\"ahler structure) on primary Kodaira surfaces by observing that it would imply the existence of three distinct harmonic self-dual forms, contradicting that $b_2^+(M) = 2$ for such a surface (see \cite{Apostolov} p. 426 for more details). Since we have now shown that $(M^4, I)$ is K\"ahler, (\cite{Buchdahl} Theorem 12) asserts that given any pluriclosed metric $\omega_0$ on $M$, we can find $\alpha_0 \in \Lambda^{1,0}$ such that $\omega := \omega_0 - \partial \bar{\alpha}_0 - \bar{\partial} \alpha_0$ is a K\"ahler metric. We then express $\omega_{\alpha} = \omega + \bar{\partial} \left[ \alpha + \alpha_0 \right] + \partial \left[\bar{\alpha} + \bar{\alpha}_0 \right]$. We will always make such a choice of initial condition for $\alpha$ without further comment. The natural local decomposition of a pluriclosed metric as $\omega = \partial \bar{\alpha} + \bar{\partial} \alpha$ is not canonical, as one may observe that $\alpha + \partial f$ describes the same metric, where $f \in C^{\infty}(M, \mathbb R)$. Due to this ``gauge-invariance,'' the equation (\ref{alphaflow}) is not parabolic, and admits large families of equivalent solutions. In \cite{SBIPCF} we resolved this ambiguity by giving a different description of (\ref{alphaflow}) which is parabolic.
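The gauge invariance just described rests on the pointwise identity $\bar{\partial} \partial f + \partial \bar{\partial} f = 0$ for real $f$, i.e. on equality of mixed partials; as a small added sanity check of our own, this can be confirmed with Wirtinger derivatives on $\mathbb{C}^2$:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)
coords = [(x1, y1), (x2, y2)]

def dz(f, k):      # Wirtinger derivative d/dz_k = (d/dx_k - i d/dy_k)/2
    x, y = coords[k]
    return (sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2

def dzbar(f, k):   # conjugate derivative d/dzbar_k = (d/dx_k + i d/dy_k)/2
    x, y = coords[k]
    return (sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2

# An arbitrary real function f on C^2, chosen at random for the check.
f = sp.exp(x1 * y2) + x2**2 * sp.sin(y1) + x1 * x2 * y1 * y2

# The dz^i wedge dzbar^j component of dbar(partial f) + partial(dbar f) is
# -dzbar_j dz_i f + dz_i dzbar_j f, which vanishes identically:
for i in range(2):
    for j in range(2):
        assert sp.simplify(-dzbar(dz(f, i), j) + dz(dzbar(f, j), i)) == 0
```

In other words, replacing $\alpha$ by $\alpha + \partial f$ contributes nothing to the $(1,1)$-form $\bar{\partial} \alpha + \partial \bar{\alpha}$, which is the ambiguity resolved by the parabolic description below.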
In particular, as exhibited in (\cite{SBIPCF} Proposition 3.9, in the case the background metric is fixed and K\"ahler), if one has a family of functions $f_t$ and $(1,0)$-forms $\beta_t$ which satisfy \begin{gather} \label{decflow} \begin{split} \dt \beta =&\ \Delta_{g_t} \beta - T_{g_t} \circ \bar{\partial} \beta,\\ \dt f =&\ \Delta_{g_t} f + \tr_{g_t} g + \log \frac{\det g_t}{\det h},\\ \alpha_0 =&\ \beta_0 - \i \partial f_0, \end{split} \end{gather} then $\alpha_t := \beta_t - \i \partial f_t$ is a solution to (\ref{alphaflow}). The term $T \circ \bar{\partial} \beta$ is defined by \begin{align*} (T \circ \bar{\partial} \beta)_i = g^{\bar{l} k} g^{\bar{q} p} T_{ik\bar{q}} \nabla_{\bar{l}} \beta_p. \end{align*} We will use this decomposition to obtain two estimates crucial to Theorem \ref{4dLTE}. First we record a prior result: \begin{lemma} \label{betaevlemma} (\cite{SBIPCF} Proposition 4.4) Given a solution to (\ref{decflow}) as above, one has \begin{align} \label{betanormev} \dt \brs{\beta}^2 =&\ \Delta \brs{\beta}^2 - \brs{\nabla \beta}^2 - \brs{\bar{\nabla} \beta}^2 - \IP{Q, \beta \otimes \bar{\beta}} + 2 \Re \IP{\beta, T^{\alpha} \circ \bar{\partial} \beta}, \end{align} where \begin{align*} Q_{i \bar{j}} =&\ g^{\bar{l} k} g^{\bar{q} p}T_{i k \bar{q}} T_{\bar{j} \bar{l} p}. \end{align*} \end{lemma} In fact the lemma above applies in any dimension, but the next corollary is special to $n=2$. \begin{cor} \label{betaestcor} (\cite{SBIPCF} Corollary 4.5) Given a solution to (\ref{decflow}) as above, one has \begin{align} \label{betanormev2} \dt \brs{\beta}^2 \leq&\ \Delta \brs{\beta}^2 - \brs{\nabla \beta}^2. \end{align} In particular, one has \begin{align} \label{betaest} \sup_M \brs{\beta_t}^2_{g_t} \leq \sup_M \brs{\beta_0}^2_{g_0}.
\end{align} \begin{proof} The estimate (\ref{betanormev2}) follows directly from (\cite{SBIPCF} Corollary 4.5) using that the background metric is K\"ahler. The estimate (\ref{betaest}) follows directly from the maximum principle. \end{proof} \end{cor} The estimate (\ref{betaest}) holds for any pluriclosed flow on a K\"ahler surface. The next two propositions require that we are studying a pluriclosed flow associated to a generalized K\"ahler-Ricci flow with nondegenerate initial data. In particular we will assume the evolution equations and a priori estimates of \S \ref{eveqns}. \begin{prop} \label{volumeestimate} Let $(M^4, g_t, I, J_t)$ be a solution to GKRF with nondegenerate initial data in the $I$-fixed gauge. Then there exists a constant $C = C(g_0,I_0,J_0)$ such that $C^{-1} \leq \frac{\det g}{\det g_0} \leq C$. \begin{proof} Using Lemmas \ref{chernlaplace}, \ref{dptheta}, and \ref{logpev}, it follows that in the $I$-fixed gauge $\mu$ satisfies \begin{align*} \dt \mu =&\ \Delta \mu - L_{\theta^{\sharp}} \mu = \Delta \mu = \Delta_C \mu. \end{align*} A simple calculation then yields \begin{align} \label{musqev} \dt \mu^2 =&\ \Delta_C \mu^2 - 2 \brs{\nabla \mu}^2 = \Delta_C \mu^2 - \frac{8}{1-p^2} \brs{\theta}^2 \leq \Delta_C \mu^2 - \delta \brs{\theta}^2, \end{align} for some universal constant $\delta$. On the other hand, as discussed above one has $c_1(M, I) = 0$, and hence there exists a background Hermitian metric $h$ such that $\rho_C(h) = 0$. Since we are in the $I$-fixed gauge, the metric is evolving by pluriclosed flow, and hence from \cite{SBIPCF} Lemma 6.1 we conclude that \begin{align*} \dt \log \frac{\det g}{\det h} =&\ \Delta_C \log \frac{\det g}{\det h} + \brs{\theta}^2. \end{align*} Applying the maximum principle directly to this yields an a priori lower bound for the volume form of $g$. On the other hand, setting $\Phi = \log \frac{\det g}{\det h} + \delta^{-1} \mu^2$ we obtain \begin{align*} \dt \Phi \leq&\ \Delta_C \Phi.
\end{align*} The maximum principle implies a uniform upper bound for $\Phi$, which implies a uniform upper bound for the volume form of $g$ since $\mu$ is bounded above via Proposition \ref{pthetaestimate}. \end{proof} \end{prop} \begin{prop} \label{dtfest} Let $(M^4, g_t, I, J_t)$ be a solution to GKRF with nondegenerate initial data in the $I$-fixed gauge. Given a solution to (\ref{decflow}) as above, there exists a constant $C$ depending only on the initial data so that for any existence time $t$ one has \begin{align*} \sup_{M \times \{t\}} \brs{\frac{\partial f}{\partial t}} \leq C. \end{align*} \begin{proof} We construct a test function \begin{align*} \Phi = \dt f + A_1 \brs{\beta}^2 + A_2 \brs{\nabla \mu}^2 + A_3 \mu^2, \end{align*} where the choices of constants $A_i$ will be made explicit below. We first compute \begin{align*} \dt \dt f =&\ \dt \left[ n - \tr_{g_t} \left( \bar{\partial} \beta + \partial \bar{\beta} \right) + \log \frac{\det g_t}{\det h} \right]\\ =&\ \IP{\frac{\partial g}{\partial t}, \bar{\partial} \beta + \partial \bar{\beta}} - \tr_{g_t} \left[ \dt \left( \bar{\partial} \beta + \partial \bar{\beta} \right) \right] + \tr_{g_t} \frac{\partial g}{\partial t}\\ =&\ \IP{\frac{\partial g}{\partial t}, \bar{\partial} \beta + \partial \bar{\beta}} + \tr_{g_t} \partial\bar{\partial} f_t\\ =&\ \Delta_{g_t}^C f_t + \IP{\frac{\partial g}{\partial t}, \bar{\partial} \beta + \partial \bar{\beta}}.
\end{align*} Combining this with Lemma \ref{gradmuprop}, Lemma \ref{betaevlemma}, and (\ref{musqev}) yields that there is a small constant $\delta > 0$ such that \begin{align*} \left(\dt - \Delta_C \right) \Phi =&\ \IP{\frac{\partial g}{\partial t}, \bar{\partial} \beta + \partial \bar{\beta}} + A_1 \left[ - \brs{\nabla \beta}^2 - \brs{\bar{\nabla} \beta}^2 - \frac{1}{2} \brs{T}^2 \brs{\beta}^2 + 2 \Re \IP{\beta, T^{\alpha} \circ \bar{\partial} \beta} \right]\\ &\ + A_2 \left[ \theta \star \nabla \brs{\nabla \mu}^2 - 2 \brs{\nabla^2 \mu}^2 - \delta \brs{\theta}^4 \right] + A_3 \left[ -\delta \brs{\theta}^2 \right]. \end{align*} Using (\ref{betaest}) we have that \begin{align*} 2 \Re \IP{\beta, T^{\alpha} \circ \bar{\partial} \beta} \leq&\ \epsilon \brs{\bar{\nabla} \beta}^2 + C \epsilon^{-1} \brs{\beta}^2 \brs{T}^2\\ \leq&\ \epsilon \brs{\bar{\nabla} \beta}^2 + C \epsilon^{-1} \brs{\theta}^2. \end{align*} Similarly, using that $\brs{\theta}^2$ is bounded we can estimate \begin{align*} \theta \star \nabla \brs{\nabla \mu}^2 =&\ \nabla^2 \mu \star \theta^{*2}\\ \leq&\ \epsilon \brs{\nabla^2 \mu}^2 + C \epsilon^{-1} \brs{\theta}^4\\ \leq&\ \epsilon \brs{\nabla^2 \mu}^2 + C \epsilon^{-1} \brs{\theta}^2. \end{align*} We also note that as $\frac{\partial g}{\partial t}$ is expressed in terms of one derivative of the Lee form and the Chern-Ricci curvature, it follows from Lemma \ref{chernricci} that there is a constant $C = C((1-p^2)^{-1})$ such that \begin{align*} \brs{\frac{\partial g}{\partial t}}^2 \leq C \left( \brs{\nabla^2 \mu}^2 + \brs{\theta}^4 \right). \end{align*} Hence we can estimate \begin{align*} \brs{\IP{\frac{\partial g}{\partial t}, \bar{\partial} \beta + \partial \bar{\beta}}} \leq&\ C \left( \brs{\nabla^2 \mu}^2 + \brs{\theta}^4 \right) + C \brs{\bar{\nabla} \beta}^2.
\end{align*} Putting these preliminary estimates together and choosing $\epsilon$ sufficiently small we obtain \begin{align*} \left( \dt - \Delta_C \right) \Phi \leq&\ \brs{\bar{\nabla} \beta}^2 \left( C - \frac{A_1}{2} \right) + \brs{\nabla^2 \mu}^2 \left( C - \frac{A_2}{2} \right) + \brs{\theta}^2 \left( C + C A_1 + C A_2 - 2 \delta A_3 \right). \end{align*} It is now clear that if we choose $A_1$ and $A_2$ large with respect to controlled constants, then choose $A_3$ large with respect to the controlled constants $A_1, A_2$, and $\delta$, we obtain \begin{align*} \left( \dt - \Delta_C \right) \Phi \leq 0. \end{align*} Applying the maximum principle yields an upper bound for $\frac{\partial f}{\partial t}$, and a very similar estimate yields a bound on $-\frac{\partial f}{\partial t}$, finishing the result. \end{proof} \end{prop} \begin{prop} \label{ftraceest} Given the setup above, there exists a constant $C$ depending only on the initial data such that \begin{align*} \brs{f} \leq C(1 + t), \qquad \brs{ \tr_{g_t} (\bar{\partial} \beta + \partial \bar{\beta})} \leq C. \end{align*} \begin{proof} The first estimate follows directly from Proposition \ref{dtfest}. For the second we observe using Propositions \ref{volumeestimate} and \ref{dtfest} that \begin{align*} \brs{ \tr_{g_t}(\bar{\partial} \beta + \partial \bar{\beta})} =&\ \brs{n - \dt f + \log \frac{\det g_t}{\det h}} \leq C. \end{align*} \end{proof} \end{prop} \begin{prop} \label{traceest} Given the setup above, there exists a constant $C$ depending only on the initial data such that \begin{align*} \tr_{g_t} g_0 \leq C e^{C(f - \inf f)}. \end{align*} \begin{proof} Fix some constant $A$, and let \begin{align*} \Phi =&\ \log \tr_{g_t} g_0 - A (f - \inf f). \end{align*} The function $\Phi$ is smooth on $M$, and Lipschitz in $t$ due to Proposition \ref{dtfest}.
Using (\cite{SBIPCF} Lemma 6.2) and standard estimates, and combining with (\ref{decflow}), yields \begin{align*} \dt \Phi \leq&\ \Delta \Phi + \left(C - A \right) \tr_{g_t} g_0 + A \dt \inf f + A \log \frac{\det g_t}{\det h}. \end{align*} As we noted, $\inf f$ is only Lipschitz in time, and so the inequality holds in the sense of limsups of difference quotients. Considering this at a spatial maximum for $\Phi$, using the result of Proposition \ref{volumeestimate}, and choosing $A$ sufficiently large yields \begin{align*} \dt \Phi \leq -\frac{A}{2} \tr_{g_t} g_0 + C \leq 0, \end{align*} where the last inequality holds if the maximum value for $\Phi$, and hence $\tr_{g_t} g_0$, is sufficiently large. This yields an a priori upper bound for $\Phi$, after which the result follows. \end{proof} \end{prop} \section{Proof of Theorem \ref{4dLTE}} \label{convsec} In this section we complete the proof of Theorem \ref{4dLTE}. We first establish the long time existence, and then prove a series of lemmas leading to the weak convergence statement. \begin{prop} Let $(M^4, g, I, J)$ be a nondegenerate generalized K\"ahler four-manifold. The solution to generalized K\"ahler-Ricci flow with initial data $(g,I,J)$ exists for all time. \end{prop} \begin{proof} Fix $(M^4, g, I, J)$ a nondegenerate generalized K\"ahler four-manifold. Let $(g_t, I, J_t)$ be the solution to GKRF with this initial data in the $I$-fixed gauge. From Proposition \ref{pthetaestimate}, we have a priori estimates for $(1-p^2)^{-1}$ and $\brs{\theta}^2$ in the $B$-field gauge. As these are estimates on scalar quantities associated to the time-dependent data, they hold automatically in every gauge. Next we choose a solution $(\beta_t, f_t)$ to the decomposed flow as in \S \ref{decflowsec}. Proposition \ref{volumeestimate} provides a uniform bound for $\brs{\log \frac{\det g_t}{\det g_0}}$.
Moreover, it follows from Proposition \ref{ftraceest} that $f - \inf f \leq C(1+t)$, hence Proposition \ref{traceest} yields a uniform upper bound for $\tr_{g_t} g_0$ on any finite time interval. Since the volume form is already controlled, this implies $C^{-1} g_0 \leq g_t \leq C g_0$ on $[0,T)$, for a constant $C(T)$. We can now apply (\cite{SBIPCF} Theorems 1.7, 1.8) to obtain higher order estimates for $g$ on any finite time interval. The claim of long time existence follows from standard arguments. \end{proof} \begin{lemma} \label{weakconvlem10} Let $(M^4, g_t, I, J_t)$ be a solution to (\ref{GKRF}) in the $I$-fixed gauge. Given $a \in \Lambda^1(M)$, one has \begin{align*} \lim_{t \to \infty} \int_M \brs{dp}_g \brs{a}_g dV_g = 0. \end{align*} \begin{proof} To begin we estimate \begin{align*} \int_M \brs{a}^2_g dV_g =&\ \int_M \i a \wedge \bar{a} \wedge \omega_{\alpha}\\ \leq&\ \int_M \brs{a}^2_{\omega} \brs{ \omega_{\alpha}}_{\omega} \omega \wedge \omega\\ \leq&\ C \int_M (\tr_{\omega} \omega_{\alpha}) \omega \wedge \omega\\ =&\ C \int_M \omega_{\alpha} \wedge \omega\\ =&\ C \int_M \omega \wedge \omega\\ =&\ C. \end{align*} Then, using Propositions \ref{pthetaestimate} and \ref{volumeestimate} we have that \begin{align*} \int_M \brs{dp}_g \brs{a}_g dV_g \leq&\ \left( \int_M \brs{dp}^2_g dV_g \right)^{\frac{1}{2}} \left( \int_M \brs{a}^2_g dV_g \right)^{\frac{1}{2}}\\ \leq&\ C \brs{dp}_g \Vol(g_t)\\ \leq&\ C t^{-1}. \end{align*} The lemma follows. \end{proof} \end{lemma} \begin{prop} \label{weakconvprop20} Let $(M^4, g_t, I, J_t)$ be a solution to (\ref{GKRF}) in the $I$-fixed gauge. Then $\{\omega_{K_i}(t) \}$ converge subsequentially as $t \to \infty$ to a triple of closed currents $\{\omega_{K_i}^{\infty} \}$.
\begin{proof} First recall that, as explained in \S \ref{decflowsec}, we know that there is a K\"ahler metric $\omega$ such that \begin{align*} \omega_I(t) = \omega + \partial \alpha_t + \bar{\partial} \bar{\alpha}_t. \end{align*} It follows that \begin{align*} \int_M \omega_I(t) \wedge \omega = \int_M \omega \wedge \omega \leq C. \end{align*} It follows from (\cite{Demailley} Chapter III Proposition 1.23) that the set $\{\omega_I(t) \}$ is weakly compact in the sense of positive currents. Similarly we have \begin{align*} \brs{ \int_M (\omega_J)^{1,1}_I \wedge \omega} = \brs{ \int_M p \omega_I \wedge \omega} \leq \brs{\int_M \omega_I \wedge \omega} = C. \end{align*} We next claim that $(\omega_J)^{2,0 + 0,2}_I$ is bounded in $L^{\infty}$. First we note that it follows from Propositions \ref{pthetaestimate} and \ref{volumeestimate} that the holomorphic symplectic structure $\Omega$ is uniformly bounded. Since $(\omega_J)^{2,0 + 0,2}_I = q^{-2} \Omega$, Proposition \ref{pthetaestimate} implies the $L^{\infty}$ estimate. Using this and (\cite{Demailley} Chapter III Proposition 1.23) as above we see that $\{\omega_J(t)\}$ is weakly compact in the sense of currents. It follows directly from (\ref{aktriple}), (\ref{K0kahlerform}), and Proposition \ref{pthetaestimate} that $\{\omega_{K_i}(t)\}$ is weakly compact for $i= 0,1,2$. Hence any sequence of times admits a subsequence converging to a triple of limiting currents $\{\omega_{K_i}^{\infty}\}$ as claimed. Next we show that these currents are all closed. We fix $a \in \Lambda^{1}$ and compute \begin{align*} \brs{ \int_M \omega_{K_1}^{\infty} \wedge d a } =&\ \lim_{t \to \infty} \brs{ \int_M \omega_{K_1} \wedge d a}\\ =&\ \lim_{t \to \infty} \brs{ \int_M d \omega_I \wedge a}\\ \leq&\ \lim_{t \to \infty} \int_M \brs{T_g} \brs{a}_g dV_g\\ \leq&\ \lim_{t \to \infty}\int_M \brs{dp}_g \brs{a}_g dV_g\\ =&\ 0, \end{align*} where the last line follows from Lemma \ref{weakconvlem10}.
Similarly, \begin{align*} \brs{ \int_M \omega_{K_2}^{\infty} \wedge d a } =&\ \lim_{t \to \infty} \brs{ \int_M \omega_{K_2} \wedge d a}\\ =&\ \lim_{t \to \infty} \brs{ \int_M d \left( q^{-1} \omega_J - p \omega_I \right) \wedge a}\\ \leq&\ \lim_{t \to \infty} C( (1-p^2)^{-1}) \int_M \brs{T_g} \brs{a}_g dV_g\\ \leq&\ \lim_{t \to \infty} C( (1-p^2)^{-1}) \int_M \brs{dp}_g \brs{a}_g dV_g\\ =&\ 0, \end{align*} where again the last line follows from Lemma \ref{weakconvlem10}. Given this, it follows directly from (\ref{K0kahlerform}) that $d \omega_{K_0}^{\infty} = 0$. The proposition follows. \end{proof} \end{prop} \bibliographystyle{hamsplain}
\section{Introduction} The rod-shaped bacterium \textit{Escherichia coli} usually divides in the center of its long axis between the two segregated copies of its chromosome. The position of the division plane is determined by the Z-ring, a structure built from the protein FtsZ, which is associated with the cytoplasmic membrane and encircles the cytoplasm~\cite{lutk02}. Assembly of the Z-ring is targeted to the cell center by two mechanisms. For one, formation of the ring around the two copies of the chromosome is inhibited by proteins binding to DNA, a mechanism termed nucleoid occlusion~\cite{yu99,bern05}. For the other, the proteins MinC, MinD, and MinE suppress ring formation close to the cell poles~\cite{boer89,bi93}, leaving the center as the only possible site. MinC is able to depolymerize FtsZ filaments, while MinD and MinE direct MinC to the cell poles. The spatial distributions of MinD and MinE, and hence of MinC, periodically change in time: after dwelling in the vicinity of one pole for about 40s, the proteins get redistributed to the opposite pole~\cite{rask99,hu99}. These oscillations do not require the presence of MinC. Theoretical works have provided strong evidence that the pole-to-pole oscillations are formed by self-organization of MinD and MinE~\cite{howa05}. All mechanisms proposed so far rely essentially in one way or another on the formation of aggregates of membrane-bound MinD. Such aggregates have been observed \textit{in vitro} and \textit{in vivo}~\cite{hu02,shih03}. The mechanisms can roughly be divided into two classes. In cooperative attachment (CA) models, MinD-aggregates are formed through collective effects during binding to the cytoplasmic membrane~\cite{mein01,howa01,huan03,drew05,pavi06}. In aggregation current (AC) models, aggregates are formed by mutual attraction after the proteins have bound to the membrane~\cite{krus02,meac05}.
CA as well as AC models can capture the qualitative features of the Min-oscillations and there is experimental evidence for both processes in \textit{E.~coli}. The strongest hint for aggregation currents is provided by a study of MinD attachment to phospholipid vesicles in the presence of ATP$\gamma$S, a non-hydrolyzable ATP analog~\cite{hu02}. This work suggests a two-step mechanism for the formation of aggregates of membrane-bound MinD involving first the binding of MinD to the membrane and subsequent aggregation. Furthermore, using a yeast two-hybrid assay MinD-MinD interactions were shown to be stronger if both proteins were membrane-bound than if at least one partner was cytoplasmic~\cite{tagh06}. On the other hand, the concentration-dependence of MinD binding to phospholipid membranes deviates from Langmuir isotherms~\cite{lack03}. In addition, the amount of MinD binding to liposomes as a function of the MinD-concentration in the surrounding solution could be fitted by a Hill equation with a Hill coefficient of 2~\cite{mile03}. These two findings clearly suggest some cooperativity during MinD binding to the membrane. In order to reveal whether cooperative attachment or an aggregation current is the primary cause of the Min oscillations, a quantitative comparison of the models with experiments is necessary. This requires, in particular, fixing the model parameters by measurements. There are several techniques to measure protein mobilities using fluorescence microscopy. Direct measurements of the displacement of individual proteins have been used to determine the mobility of membrane proteins in \textit{Caulobacter crescentus}~\cite{deic04}. Fluorescence recovery after photobleaching (FRAP), where the fluorescent proteins present in a defined region are bleached and the recovery of the fluorescence is monitored, was used to measure the diffusion constants of cytoplasmic proteins~\cite{elow99}.
Fluorescence correlation spectroscopy (FCS) exploits the fluctuations in the fluorescence intensity emanating from an illuminated region in order to assess dynamic properties~\cite{kric02}. To this end, the autocorrelation function is measured for fluctuations around the mean signal. Fitting this to the autocorrelation curve theoretically expected for the process under study then yields the sought parameter values. In bacteria, FCS was used to measure the concentration of phosphorylated CheY involved in chemotaxis~\cite{cluz00} and transcription activity at the RNA level~\cite{le05,le06}. We have used FCS to measure the mobility of MinD and MinE tagged to Green Fluorescent Protein (GFP) in \textit{E. coli}. We found that a simple diffusion process cannot account for the measured autocorrelation curves. We have analyzed the data assuming that either the mobility of membrane-bound proteins or the binding-unbinding dynamics is dominant, and thus obtained key parameters of the various models. As a control we also measured the mobility of GFP and found significant deviations from previous measurements~\cite{elow99}. \section{Materials and Methods} \subsection{Strains} EGFP and His6-EGFP were expressed in BL21(DE3)pLysS using the vectors pBAT4 and pET9d, respectively (Novagen, CN Biosciences). GFP-MinD was expressed in JS964~\cite{hu03} (J. Lutkenhaus, U. Kansas, USA) and WM1255~\cite{corb02} (W. Margolin, U. Texas, USA), and MinE-GFP in WM1079~\cite{corb02} (W. Margolin, U. Texas, USA). Bacteria were grown overnight in 3ml LB medium at 37$^{\circ}$C with 25$\mu g/ml$ Spectinomycin, 25$\mu g/ml$ Kanamycin, 20$\mu g/ml$ Chloramphenicol and 50$\mu g/ml$ Ampicillin. Of the overnight culture 500$\mu l$ were put in 50$ml$ of fresh LB medium containing the same concentration of antibiotics as above, and grown at 37$^{\circ}$C until the optical density (OD) at 600$nm$ reached $\approx$0.2.
Expression of GFP-MinD in JS964 and His6-EGFP in BL21(DE3)pLysS was induced by adding 20$\mu M$ isopropyl-$\beta$-D-thiogalactopyranoside (IPTG). Expression of MinE-GFP in WM1079 was induced by adding 0.005$\%$ L-arabinose. No inducer was used for GFP-MinD expression in WM1255. Then the bacteria were grown at 30$^{\circ}$C for 1-2 hours, usually sufficient to produce visible fluorescence and to see Min oscillations. To reduce background fluorescence from the buffer, LB medium was prepared with 1g of yeast extract per liter. For microscopy an approximately $0.5$mm thick solid slab of $1\%$ agarose (Invitrogene, 15510-027) in LB medium was prepared between a 25mm$\times$75mm glass slide and a 18mm$\times$18mm cover slide. For sample preparation the cover slide was removed and 3$\mu$l of cell culture were spread on the agarose pad, which was then re-covered with the slide. For each slide, data collection did not last for longer than 2h. Data were acquired at room temperature and none of the bacteria was dividing during a measurement. \subsection{Optical setup} Fluorescence correlation spectroscopy (FCS) measurements were performed on a LSM Meta 510 system (Carl Zeiss, Jena, Germany) using a 40$\times$ NA 1.2 UV-VIS-IR C-Apochromat water immersion objective and a home-built detection unit at the fiber output channel: A bandpass filter (AHF Analyse Technik, T\"ubingen, Germany) was used behind a collimating achromat to reject the residual laser and background light. Another achromat (LINOS Photonics, G\"ottingen, Germany) with a shorter focal length was used to image the internal pinhole onto the aperture of the fiber of the single photon counting avalanche photo diode (SPCM-CD 3017, PerkinElmer, Boston, MA, USA). The correlation curves were obtained with a hardware correlator Flex 02-01D (correlator.com, Bridgewater, NJ, USA).
High magnification laser scanning microscope (LSM) images were taken of the area of interest, and the detection volume was placed at the desired position, along the small dimension always in the center, so that there were mainly horizontal parts of the membrane. Since the LSM and the FCS use the same beam path, the correspondence between FCS spot and LSM image is excellent. The z-position of the spot was stable for many minutes with an accuracy of 100nm, see \cite{chia06} where the same experimental setup had been used. No high frequency oscillations in the image plane were detectable in the alignment correlation curves. We did not observe any drift within 40 consecutive measurements of 5s each. The waist $w_{0}$ of the detection volume was determined in calibration measurements with the fluorescent dye Alexa Fluor 488 diffusing freely in water to be $w_{0}=157\pm12$nm. \subsection{Theoretical autocorrelation curves} The experimental autocorrelation curves were analyzed by fitting autocorrelation curves expected for different processes. Since the height of the detection volume is larger than the diameter of the bacterium, the cytoplasmic diffusion can be approximated to occur in two dimensions. Fitting with a more refined model taking into account the geometry of the detection volume in the bacterium~\cite{genn00} did not significantly change the values we obtained assuming the simplified geometry. For two independent species diffusing with respective diffusion constants $D_{1}$ and $D_{2}$ the correlation curve is~\cite{elso74,kric02} \begin{equation} \label{eq:doublediff} G_{\rm diff}(\tau) = \frac{1}{N_{1}+N_{2}}\left\{F\frac{1}{1+\tau/\tau_{1}} + (1-F)\frac{1}{1+\tau/\tau_{2}}\right\}\quad. \end{equation} Here, the number fraction of particles of one species is given by $F=N_{1}/(N_{1}+N_{2})$, where $N_{1}$ and $N_{2}$, respectively, are the average numbers of particles of the different species in the detection volume.
The characteristic relaxation times $\tau_{1}$ and $\tau_{2}$ are linked to the respective diffusion constants and the width $w_{0}$ of the detection volume through $\tau_{i}=w_{0}^{2}/(4D_{i})$, $i=1,2$. For particles changing between a mobile state (diffusion constant $D$) and an immobile state, we assume the following reaction kinetics for the fraction $F$ of the mobile state: $dF/dt = -F/\tau_{1} + (1-F)/\tau_{2}$, where $\tau_{1}$ and $\tau_{2}$ are the cytoplasmic and membrane residence times, respectively. The autocorrelation of the fluctuations has the form~\cite{elso74,kric02} \begin{equation} \label{eq:exchange} G_{\rm ex}(\tau)=\frac{(2\pi)^{-2}w_{0}^{2}}{(N_1+N_2)}\ \int_0^{\infty} dk \ k \ e^{-\frac{w_{0}^2}{4}k^2}\left\{ A_{1}e^{\lambda_1 \tau} + A_{2}e^{\lambda_2 \tau} \right\}\quad,\nonumber \end{equation} where $\lambda_{1,2}=-(Dk^{2}+\tau_{1}^{-1}+\tau_{2}^{-1})/2\pm\left\{ (D k^{2}+\tau_{1}^{-1}+\tau_{2}^{-1})^{2}-4Dk^{2}/\tau_{2}\right\}^{1/2}/2$, $A_{1,2}=\left\{\lambda_{2,1}+Dk^{2}\tau_{1}/(\tau_{1}+\tau_{2})\right\}/(\lambda_{2,1}-\lambda_{1,2})$. For a single species diffusing anomalously in two dimensions the autocorrelation function is given by~\cite{schw99a} \begin{equation} \label{eq:singleanodiff} G_{\rm a}(\tau) = \frac{1}{N}\frac{1}{1+\left(\frac{\tau}{\tau_{\rm a}}\right)^{\alpha}}\quad. \end{equation} Here, $\tau_{\rm a}^{-\alpha}=4\Gamma/w_{0}^{2}$, where the anomalous exponent $\alpha$ governs the spreading of an initially localized distribution $\langle x^{2}\rangle\sim t^{\alpha}$ and where $\Gamma$ is the anomalous transport coefficient. Since the cytoplasmic pH of \textit{E.~coli} is about 7.7~\cite{pada76}, pH-dependent blinking can be neglected~\cite{haup98}. \subsection{Data analysis} The correlation curves were fitted in the time interval $\tau\in[2\mu s,1s]$ with a weighted nonlinear least-squares fitting algorithm.
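As an illustration of this fitting procedure, the following sketch (in Python; the noise level and parameter values are hypothetical, and `g_diff` implements Eq.~(\ref{eq:doublediff})) performs such a weighted least-squares fit on a synthetic two-component curve and converts the fitted relaxation times into diffusion constants via $\tau_{i}=w_{0}^{2}/(4D_{i})$:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_diff(tau, n_tot, frac, tau1, tau2):
    """Two-component diffusion autocorrelation, Eq. (eq:doublediff)."""
    return (frac / (1.0 + tau / tau1) + (1.0 - frac) / (1.0 + tau / tau2)) / n_tot

w0 = 0.157  # beam waist in micrometers, from the Alexa Fluor 488 calibration

# Synthetic stand-in for a measured curve (parameter values are hypothetical).
tau = np.logspace(-6, 0, 200)                  # lag times, 1 us .. 1 s
sigma = 0.01 * g_diff(tau[0], 20.0, 0.75, 3.6e-4, 3.6e-2)
rng = np.random.default_rng(0)
data = g_diff(tau, 20.0, 0.75, 3.6e-4, 3.6e-2) + sigma * rng.standard_normal(tau.size)

# Weighted nonlinear least squares, as in the data analysis described above.
popt, _ = curve_fit(g_diff, tau, data, p0=[10.0, 0.5, 1e-4, 1e-2],
                    sigma=np.full(tau.size, sigma), maxfev=10000)
n_tot, frac, tau1, tau2 = popt
d1, d2 = w0**2 / (4 * tau1), w0**2 / (4 * tau2)  # tau_i = w0^2 / (4 D_i)
```

With these hypothetical inputs the recovered $D_{1}\approx17\,\mu{\rm m}^2/s$ and $D_{2}\approx0.17\,\mu{\rm m}^2/s$ are of the order of the values reported below for GFP-MinD.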
Curves were selected automatically based on convergence of the fit algorithm and goodness of the fit ($\chi^{2}<1.2$ for EGFP and $\chi^2<1.4$ for Min proteins). For the Min proteins, curves were first hand-selected for low and high intensity phases and then selected automatically for quasi-steady states. The latter was checked by requiring a constant fluorescence intensity during the measurement. \section{Results and discussion} \subsection{EGFP} We first measured the autocorrelation of the fluorescence fluctuations of Enhanced Green Fluorescent Protein (EGFP) in living \textit{E.~coli}, see Materials and Methods. A typical correlation curve is depicted in figure~\ref{fig:EGFP}a. From a fit of the correlation curve $G_{\rm diff}$ (\ref{eq:doublediff}) expected for a single diffusing species with $F=1$ an apparent diffusion constant of $D=12\pm2.3\mu{\rm m}^2/s$ is obtained. There are two sources contributing to the error in the value of the diffusion constant. First, a systematic error results from uncertainties in determining the size of the detection volume, which is needed for transforming the relaxation time extracted from the correlation curve into a diffusion constant. We estimate this error to be 15\%. Secondly, the fit of the expected correlation curve to the data is of finite accuracy due to noise present in the experimental correlation curve (around $10\%$). For the curve in figure~\ref{fig:EGFP}a, the fit quality is reasonable with $\chi^{2}=1.58$. In view of the measurements on MinD and MinE, other models were used for analyzing the correlation curves. Fitting the data to the autocorrelation $G_{\rm diff}$ (\ref{eq:doublediff}) expected for two independent populations of diffusing particles, where $F$ is now a fit parameter, the fit quality was significantly improved, $\chi^{2}=1.08$.
For the curve in figure~\ref{fig:EGFP}a, the apparent diffusion constant of the fast component is $D_{1}=15.6\pm3.2\mu{\rm m}^2/s$. Furthermore, we considered the case of molecules switching between a mobile and an immobile state. The corresponding autocorrelation is $G_{\rm ex}$, see (\ref{eq:exchange}). For the diffusion constant in the mobile state, we found $D=14.8\pm5.0\mu{\rm m}^2/s$ with $\chi^{2}=1.08$. Previous reports suggest deviations from normal diffusion of EGFP \textit{in vivo}, which were attributed to crowding in the cellular environment~\cite{weis04}. We therefore considered anomalous diffusion of EGFP, where the mean square displacement grows as $\sim t^{\alpha}$ with $\alpha<1$. Fitting the correlation $G_{\rm a}$ (\ref{eq:singleanodiff}) we obtained an anomalous exponent of $\alpha=0.85\pm0.14$ and an anomalous transport coefficient $\Gamma=5.9\pm0.94{\mu{\rm m}^{2}}/{s^{\alpha}}$ with $\chi^{2}=1.07$. As can be seen in figure~\ref{fig:EGFP}a, the different fits are barely distinguishable. A histogram of the diffusion constants obtained by fitting $G_{\rm diff}$ to 1021 curves is presented in figure~\ref{fig:EGFP}b. The histogram is well described by a log-normal distribution with a geometric mean of $D=17.9^{+4.3}_{-3.4}\mu{\rm m}^2/s$. Within the accuracy of our measurements, different cells give the same value for the EGFP diffusion constant. Hand-selection of curves, as is often done in FCS measurements, reduced the $1\sigma$-confidence interval but did not change the geometric mean. The fraction of the fast component was $F=0.96\pm0.03$, indicating that most of the dynamics can be attributed to diffusion. We arrived at the same conclusion using $G_{\rm ex}$ for the data analysis, see Table~\ref{table:diff}. Figure~\ref{fig:EGFP}c presents a histogram of anomalous exponents from analyzing the same curves using $G_{\rm a}$.
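The geometric mean and the asymmetric $1\sigma$ interval quoted for such log-normal histograms follow from statistics on the logarithms of the fitted diffusion constants; a minimal sketch, with a synthetic sample standing in for the fitted values (the spread is a hypothetical choice):

```python
import numpy as np

def geometric_stats(d_values):
    """Geometric mean and asymmetric 1-sigma errors of a log-normally
    distributed sample (statistics are computed on log D)."""
    logs = np.log(d_values)
    gmean = np.exp(logs.mean())
    gsd = np.exp(logs.std(ddof=1))      # multiplicative standard deviation
    return gmean, gmean * gsd - gmean, gmean - gmean / gsd

# Synthetic sample mimicking the EGFP histogram (hypothetical spread).
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=np.log(17.9), sigma=0.21, size=1000)
gmean, err_plus, err_minus = geometric_stats(sample)
```

For a multiplicative spread of about 1.2 this reproduces asymmetric errors of the form $17.9^{+4.3}_{-3.4}$ as quoted above.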
The mean value is $\alpha=0.88\pm0.1$. The values of the diffusion constants are surprisingly large in view of previous measurements of the EGFP diffusion constant using FRAP, yielding $D_{\rm GFP}\simeq7.5\mu $m$^{2}/s$, see~\cite{elow99}. There, it was also found that the diffusion constant can be changed significantly by adding a His-tag. We examined His6-EGFP expressed in the same strain as was used for the measurement of EGFP mobility. Using either $G_{\rm diff}$ or $G_{\rm ex}$, we found a decrease in the diffusion constant of about 20\% compared to EGFP. Based on the anomalous diffusion model, we found a slightly reduced value for the anomalous mobility, $\Gamma=5.6^{+5.7}_{-2.8}{\mu{\rm m}^{2}}/{s^{\alpha}}$, while the anomalous exponent remained the same, $\alpha=0.88\pm0.1$. \subsection{Quasi-steady states during Min-oscillations} The analysis of fluorescence fluctuations requires a well-defined average state. Seemingly, this is not the case for the Min-system, which oscillates with a period of about 80s~\cite{rask99,hu99,meac05}, see figure~\ref{fig:qss}a. However, there are regions in the bacterium in which the fluorescence signal is quasi-stationary for about 10s. In figure~\ref{fig:qss}b, we present the fluorescence intensity in a confocal volume positioned in one cell half. There are phases of high and low constant fluorescence as well as phases of strongly varying fluorescence. These phases reflect, respectively, the dwelling of MinD in one cell half for a large fraction of a half-period and the comparatively rapid transition to the opposite cell half. Figure~\ref{fig:qss}c displays the fluorescence intensity along the bacterial long axis for six different times separated by 2s. The intensity variations during this period are less than $5\%$. The fluorescence profiles in cross-sections perpendicular to the long axis also show only moderate fluctuations, figure~\ref{fig:qss}d,e.
The form of the mean profiles in the low- and high-intensity regions differs significantly: while the profile in the low-intensity region is uni-modal, it is bi-modal in the high-intensity region. This results from a low fraction of membrane-bound MinD in the low-intensity region and a high fraction in the high-intensity region~\cite{rask99}. The fluorescence profiles for different times then indicate that the respective amounts of cytoplasmic and membrane-bound MinD are quasi-stationary within the 10s shown. \subsection{GFP-MinD} We measured MinD motility in the strain JS964. For the FCS analysis, we considered only fluorescence curves taken from regions in quasi-steady state. Every individual measurement lasted for 5s. A typical autocorrelation curve is shown in figure~\ref{fig:MinDAC}a. From the graph it is obvious that two distinct time-scales are present. We first checked that neither of them is due to bleaching. To this end we adsorbed EGFP on an untreated cover slip. Then we recorded intensity traces and correlation curves for this immobilized EGFP. The intensity curves could be fitted to an exponential curve with a decay time of a few seconds, see figure~\ref{fig:MinDAC}a inset. The corresponding FCS curves show a decay with a similar characteristic time. These times are larger than the two time-scales apparent in figure~\ref{fig:MinDAC}a. Furthermore, the correlation curves were largely independent of the excitation intensity (data not shown). We conclude that neither of the time-scales is due to bleaching of immobilized molecules. To reduce the already weak contribution to the correlations by bleaching of immobilized molecules even further, we recorded the first correlation curve in an experiment only a few seconds after the laser was switched on. One of the time-scales detectable in figure~\ref{fig:MinDAC}a is readily attributed to MinD diffusing freely in the cytoplasm.
The existence of MinD bound to the membrane suggests two obvious candidate processes leading to the other time-scale visible in the correlation curves. First of all, it could be attributed to the diffusion of MinD on the membrane. Secondly, it could result from the exchange of MinD between the membrane and the cytoplasm. We analyzed the measured correlation curves separately using the two different models. Of course, the two processes are not mutually exclusive. It would thus be desirable to analyze the correlation curves using a model that accounts for diffusion on the membrane as well as for binding and unbinding. However, the expected correlation curve differs only by small amounts from the curves for either of the two alternatives separately. The accuracy of our measurements does not allow distinguishing between them. Note that a significant fraction of membrane-bound MinD might be immobile as it is incorporated into helices~\cite{shih03}. Since these molecules do not contribute to fluctuations in the average fluorescence intensity, FCS cannot detect them. We first present the results assuming two states of different mobility. Figure~\ref{fig:MinDAC}b displays the two diffusion constants obtained from fits of $G_{\rm diff}$ (\ref{eq:doublediff}) to different correlation curves measured on a single cell. We interpret the faster diffusion constant to represent the mobility of cytoplasmic MinD. It is of the same order as the diffusion constant of EGFP, see Table~\ref{table:diff}. The smaller diffusion constant is interpreted as resulting from the mobility of membrane-bound MinD. This is supported by the estimated value of the fraction of the fast component. In agreement with the measurements of the cross-sections, figure~\ref{fig:qss}d, e, the fraction of fast-moving proteins is larger in the low-intensity regions than in the high-intensity regions, see figure~\ref{fig:MinDAC}c.
The difference is 10 to 15\%, less than one might have expected from an investigation of the cross-sectional profiles in figure~\ref{fig:qss}d and e. As mentioned in the previous paragraph, FCS possibly overestimates the fraction of cytoplasmic proteins because some fraction of membrane-bound MinD might be immobile as it forms helices. Note that the standard deviation of the mean diffusion constant is smaller than the estimated error of a single measurement, showing that the quality of our results is not limited by variations within a cell. Histograms of fast and slow diffusion constants summarizing series of measurements on different cells are shown in figure~\ref{fig:MinDAC}d,~e. Both histograms are well described by a log-normal distribution. The geometric mean value for the fast diffusion constant is $D_{1}=17.0^{+3.0}_{-2.5}\mu{\rm m}^2/s$. For the slow diffusion constant we find $D_{2}=0.17^{+0.14}_{-0.08}\mu{\rm m}^2/s$. This value is one order of magnitude higher than the diffusion constant for the transmembrane histidine kinase PleC measured by single-protein tracking in \textit{C.~crescentus}~\cite{deic04}. Since PleC is a transmembrane protein, while MinD binds to the polar heads of the lipids forming the membrane, the values seem to be compatible. No correlation could be detected between the values of the fast and slow diffusion constants (data not shown). Separating the curves into those with low and high average intensity does not reveal significant differences between the respective fast and slow diffusion constants, see Table~\ref{table:diff}. In the low-intensity regions, however, the fraction $F=0.81\pm0.1$ of the fast-diffusing component is larger than in the high-intensity regions, where $F=0.71\pm0.1$. The difference in the fractions is more pronounced when averaging over several measurements on a single cell than when averaging over measurements on different cells, figure~\ref{fig:MinDAC}c.
This presumably reflects different protein concentrations in different cells. We analyzed the same data based on the exchange of MinD between a mobile (cytoplasmic) state and an immobile (membrane-bound) state, disregarding diffusion of membrane-bound proteins. As suggested by the cross-section profiles, figure~\ref{fig:qss}d, e, we assume the average fraction of mobile molecules to be constant during one measurement. In that case, the residence times $\tau_{1}$ and $\tau_{2}$ of MinD in the mobile and immobile states, respectively, are related to the fraction $F$ of mobile molecules by $F=\tau_{1}/(\tau_{1}+\tau_{2})$. The results obtained from analyzing the same curves as in figure~\ref{fig:MinDAC}b, c are displayed in figure~\ref{fig:MinDCA}a, b. The diffusion constants are in the same range as the values of the fast diffusion constant obtained above. The same holds for the value of the mobile fraction $F$. Histograms of the diffusion constant and the residence time in the mobile state are presented in figure~\ref{fig:MinDCA}c, d. Differences in the values for low- and high-intensity regions are not significant, although the residence times are on average larger in the low-intensity regions, see Table~\ref{table:diff}. We repeated the measurement using a different strain (WM1255). The average cytoplasmic diffusion constants are smaller in this strain, while the average residence time is a little larger, see Table~\ref{table:diff}. In view of the broadness of the distributions, however, the differences are not significant. \subsection{MinE-GFP} For measuring the mobility of MinE we employed the same strategy as for MinD. An example of a quasi-steady state of the MinE distribution is shown in figure~\ref{fig:MinE}a. As for MinD, two distinct relaxation times can be detected in the correlation curves. We analyzed these curves using the same models as for MinD. 
Histograms of the two different diffusion constants and of the diffusion constant together with the residence time in the mobile state, respectively, are presented in figure~\ref{fig:MinE}b-e. As before, the histograms are well described by log-normal distributions. Assuming two independent populations with different mobilities, we find $D_{1}=11.2^{+2.9}_{-2.3}\mu{\rm m}^2/s$ and $D_{2}=0.20^{+0.23}_{-0.11}\mu{\rm m}^2/s$. The fraction of the faster-diffusing population is $F=0.79\pm0.10$. While cytoplasmic diffusion of MinE is thus slower than that of MinD, the diffusion constants for membrane-bound MinD and MinE are the same. This is compatible with MinE being bound to MinD on the membrane. Assuming the other model, we obtain for MinE $D=9.3^{+2.3}_{-1.9}\mu{\rm m}^2/s$ and $\tau_{1}=396^{+888}_{-274}$ms. The mobile fraction is in this case $F=0.86\pm0.09$. Separating the curves into those from a low-intensity phase and those from a high-intensity phase, no significant differences in either the values of the diffusion constants or the residence times between the phases can be detected, see Table~\ref{table:diff}. \section{Conclusion and outlook} In the present work we have used FCS to determine Min-protein mobility in living \textit{E.~coli}. The possibility to apply FCS relies on the existence of quasi-stationary steady states in some regions of the bacterium for time intervals of at least 10s, see figure~\ref{fig:qss}c-e and \ref{fig:MinE}a. Our correlation data clearly show the existence of more than one relaxation time, which can satisfactorily be explained by assuming for both MinD and MinE two states of different mobility. We interpret the faster component as resulting from diffusion of cytoplasmic proteins. The second time-scale could result from the mobility of proteins in the membrane-bound state or from transitions between the cytoplasm and the membrane.
We find that, all in all, both models fit the data equally well, even though for individual curves there can be significant differences in the fit quality. Using either of the corresponding correlation curves, $G_{\rm diff}$ or $G_{\rm ex}$, for analyzing the experimental data, we find values around 16$\mu$m$^{2}/s$ and 10$\mu$m$^{2}/s$ for the respective cytoplasmic diffusion constants of GFP-MinD and MinE-GFP. Therefore, a cytoplasmic MinD molecule explores the volume of a 4$\mu$m long cell within roughly a second. Cytoplasmic MinE, which readily forms dimers, needs about 1.5s, i.e., only slightly longer. The diffusion constants we measured for membrane-bound proteins are about two orders of magnitude smaller than the cytoplasmic diffusion constants. For membrane-bound MinD, it is of the same order as the value assumed in the AC model studied in \cite{meac05}. This shows that the mobility of membrane-bound MinD is sufficiently large to allow for an AC mechanism causing the oscillations. In CA models it is usually assumed that membrane-bound proteins are immobile. However, it was found in \cite{fang06} that oscillations can still be generated in a CA model if diffusion of MinD on the membrane is two orders of magnitude smaller than in the cytoplasm. For the average residence time of MinD in the cytoplasm we find a value of about 300ms. In order to generate ``striped'' patterns in long bacteria, the CA model introduced in \cite{huan03} requires the exchange of ATP for ADP on cytoplasmic MinD to be not too fast. For the parameters used there, the authors find an upper critical rate of 1/s. Since the measured residence time provides a lower bound on the exchange rate of about 3/s (only after rebinding of ATP can MinD attach again to the membrane), a re-investigation of the model is in order.
The residence time of MinE in the cytoplasm is somewhat larger than for MinD, which is compatible with the fact that MinE requires MinD as a substrate in order to bind to the membrane. From the residence time in the cytoplasm and the cytoplasmic diffusion constants, we can determine the diffusion length $\ell=(Dt)^{1/2}$. This is the average distance travelled by a cytoplasmic molecule. For MinD and MinE we find $\ell\simeq2\mu$m. This value indicates that in small bacteria of about 2$\mu$m in length, the distribution of cytoplasmic MinD and MinE should be homogeneous. As AC models, but not CA models, produce oscillations under these conditions, a detailed investigation of short cells might be helpful. Particular attention should be paid to the MinE-ring in these cells. The reason is that the investigation of the CA model by Huang et al.~\cite{huan03} suggests disappearance of the MinE-ring if $\ell$ is increased in comparison to the cell length. The presence or absence of the MinE-ring in short cells might therefore provide interesting information on the mechanism of its formation. Comparing the different values measured in high- and low-intensity phases, respectively, we find that the fraction of cytoplasmic proteins is always larger in the low-intensity phases. This is not only an effect due to averaging but is also present in individual cells, see Figs.~\ref{fig:MinDAC}c and \ref{fig:MinDCA}b. Based on the CA models, a shorter cytoplasmic residence time of MinD in the high-intensity phase than in the low-intensity phase is expected. Indeed, on average, our measurements confirm this expectation, see Table~\ref{table:diff}. Caution should be taken, though, because the error bars are quite large. The average residence time of MinE in the cytoplasm, too, depends on being in a high- or low-intensity phase. This is expected since a higher number of membrane-bound MinD should lead to a higher rate of MinE binding to the membrane.
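The diffusion-length estimate can be reproduced in a few lines; a minimal sketch follows, assuming the rounded diffusion constants and cytoplasmic residence times reported above (the function name is ours).

```python
import math

def diffusion_length(D, tau):
    """Average distance explored by a cytoplasmic molecule during the
    residence time tau: ell = (D * tau)**(1/2)."""
    return math.sqrt(D * tau)

# Rounded values from the measurements above (D in um^2/s, tau in s):
ell_mind = diffusion_length(16.0, 0.3)  # MinD: ~2.2 um
ell_mine = diffusion_length(9.3, 0.4)   # MinE: ~1.9 um
```

With these inputs, both lengths come out close to the $\ell\simeq2\mu$m quoted in the text.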
The results presented here are compatible with the mechanism of forming MinD aggregates on the membrane suggested in~\cite{zhou04}: cytoplasmic MinD dimerizes. As a consequence, the membrane targeting sequence, which is associated with MinD's C-terminal helix, is exposed and the dimer binds to the membrane. Subsequently, MinD aggregates are formed through attractive interactions between the membrane-bound dimers. In order to test this hypothesis further, mobility measurements on mutant proteins might be helpful. Furthermore, the consequences of this mechanism for the Min-oscillations have to be explored by theoretical analysis. \ack We thank W. Margolin for donation of the strains WM1079 and WM1255, J. Lutkenhaus for donation of the strain JS964, and R. Hartmann for help with the expression of EGFP and preparation of bacteria. \section*{Glossary} \begin{description} \item[Cytoplasm.] The internal content of a cell (in eukaryotes: except the nucleus). It is surrounded by the cytoplasmic membrane, a lipid bilayer. \item[Fluorescence Correlation Spectroscopy (FCS).] A spectroscopy method that exploits fluorescence fluctuations around an average value. Fluorescence fluctuations can be due to mobility of the fluorophore or due to chemical reactions. \item[Green Fluorescent Protein (GFP).] A protein from the jellyfish \textit{Aequorea aequorea} that fluoresces green with an emission maximum at 509nm when exposed to blue light. EGFP is a bright mutant of GFP with an emission maximum at 511nm. The molecular weight is about 27kD. \item[Min system.] A set of proteins involved in the determination of the division site in bacteria. Mutations in these proteins lead to the formation of non-viable small cells (mini-cells). \item[MinD.] Protein of the Min system with a molecular weight of about 30kD. It is able to hydrolyse ATP. MinD-ATP has a high affinity for the cytoplasmic membrane, while MinD-ADP is cytoplasmic. \item[MinE.] Protein of the Min system in \textit{E.
coli} with a molecular weight of about 10kD. When bound to MinD, it is able to speed up ATP-hydrolysis by MinD. \end{description}
\section{Introduction} Jammers typically aim to cause a denial of service (DoS) or reduction of quality (RoQ) at the receiver \cite{orakcal2014jamming} without getting detected. They exploit the wireless transmission by mixing their signals with legitimate communication. As a result, the received frame becomes undecodable, which causes anomalies such as increased repeat requests, reduced throughput, prolonged delays, or a complete breakdown \cite{dos-attacks-ieeecomm-2011}. Powerful jammers that blast channels with unrestrained amounts of energy can be detected easily by the receiver. More subtle jammers, on the other hand, might seek to inject short bursts or lower levels of energy to disrupt communication while circumventing detection, causing a DoS. In general, jammers must demonstrate high energy efficiency, low detection probability, high levels of DoS, and resistance against physical layer (PHY) anti-jamming techniques. From an information-theoretic perspective, uniform jammers are the most effective for reducing the channel capacity and the code rate \cite{turner1979}. However, emerging techniques such as rate-adaptation algorithms provide efficient countermeasures against such jammer attacks \cite{Gawas2017}. On the other hand, bursty jamming \cite{orackal2012}, where an adversary jams a burst of bits in a transmitted frame, can be an effective approach for increasing the block-error rate (BLER). Bursty jammers become more effective in increasing the BLER when their burst patterns are unpredictable to the receiver. With increased BLER, the receiver must compensate by reducing the code rate, which sacrifices information throughput. Therefore, it is essential to study countermeasures against such jammer attacks. Most traditional security approaches for wireless technologies are applied to upper layers in the protocol stack \cite{mukherjee2014principles}.
However, with the rapid growth in use cases and network density, maintaining security for 5G-and-beyond technologies has become a challenge \cite{Ghasempour2022}. PHY-layer security is an emerging solution to threats that arise with evolving adversaries \cite{wu2018survey}. Under such adversarial behavior, machine learning-based approaches \cite{yuxin2021,upad2019} and spectrum sensing-based approaches \cite{upad2021} have been proposed to counter jamming. Our paper specifically focuses on jamming attacks on soft-information decoders, a topic that has received scant attention in the literature. Our anti-jamming approach applies to general coding schemes and can be effortlessly supported at the physical layer with minimal computational overhead. In this work, we consider a smart, reactive jammer that is bursty and only active during a fraction of the transmission. It is assumed that transmission parameters, such as the modulation and the subcarrier frequency, are known to the adversary. To counter such an attack, we propose a modified log-likelihood ratio (LLR) computation that takes the conditional probability of jamming into account for each index of the received frame. The computation of this posterior probability is performed in two steps. First, an initial value is calculated based on the received signal strength. Anchor points in the received frame, for which the conditional jamming probability is high, are then used to inform the jamming estimates of neighbouring points, based on Markov state transition probabilities. The proposed method is general to any receiver and is carried out before decoding. Simulation results show that the proposed method reveals a significant portion of the attack, and therefore the attacker cannot maintain deniability.
Using the universal ORBGRAND algorithm \cite{duffy2021orbgrand,duffy2022orbgrand}, it is shown that an order of magnitude in BLER performance can be recovered with the proposed method and a complete DoS is prevented, using different codebooks, \textit{i.e.,} random linear codes (RLCs) and 5G cyclic redundancy check-aided Polar codes (CA-Polar). The rest of the paper is organized as follows. In Section~\ref{sec:bg}, preliminaries are detailed. In Section~\ref{sec:llr}, the smart bursty jammer model and the proposed LLR approach with the conditional jamming probability computation are presented. Section~\ref{sec:pj} explains how to approximate the conditional probability of jamming. Results are presented in Section~\ref{sec:res}, followed by concluding remarks in Section~\ref{sec:conc}. \section{Preliminaries}\label{sec:bg} \subsection{PHY Jammer Models}\label{sec:bg:jammer} Protection against an adversary is not possible if the adversary has unlimited resources. Hence, we assume that the adversary must operate under a set of constraints. A fully modeled adversary must have assumptions, goals, and capabilities \cite{do2019role}. Although there are numerous categorizations of jammers in the literature, the PHY jammer models can be summarized in the following two categories \cite{dos-attacks-ieeecomm-2011,xu2005feasibility}. \subsubsection{Constant jammers} As their name suggests, constant jammers continuously emit disruptive signals over the communication medium. Constant jammers are primitive and can often be detected through the radio signal strength indicator (RSSI) component of the receiver. Simple measures such as frequency hopping can be taken as a precaution against these types of jammers \cite{rty-freqhopping}. Moreover, constant jammers are power-inefficient, which limits their ability to be mobile.
\subsubsection{Reactive jammers} As a power-efficient and more intelligent alternative, reactive jammers emit signals only when they sense a legitimate transmission taking place. This type of jammer causes a signal collision at the receiver that disrupts either part of or all of the frame. Prevention techniques for these types of jammers include interference and RSS sampling \cite{strasser2010detection}. Carefully engineered, smart, reactive jammers are the most challenging type of jammer \cite{huynh2020jammer}. Usually, the error correction algorithms embedded in the PHY can be considered as a first response against such undesired attacks. However, as the error-correcting codes (ECCs) are standardized, their error correction capability is known to the adversary. Therefore, a jammer can corrupt just enough of the transmission to cause the decoding to fail, eventually causing a DoS. \subsection{Channel model}\label{sec:bg:llr} Every soft-information decoder requires LLRs as input, which determine the hard output value of each received signal and also act as a measure of \textit{reliability} for those signals. Under normal conditions, a larger LLR magnitude indicates more confidence in the received signal. Let $b^n$, a binary channel input of length $n$, be modulated using binary phase-shift keying (BPSK) with the mapping \begin{equation*} b^n \in \{0, 1\}^n \rightarrow x^n \in \{+1, -1\}^n, \end{equation*} where $x^n$ is the modulated channel input variable sequence. Assuming equiprobable symbols and IID noise, given a realization of the received signal, $y^n = (y_0, y_1, \cdots, y_{n-1})$, the LLRs can be calculated per-bit as \begin{equation}\label{eqn:llr:awgn} L(y_i|A) = \frac{2y_i}{\sigma_A^2} \text{, for each } i \in \{1,\ldots,n\}, \end{equation} where $i$ indicates the bit index of the received frame, the conditioning on $A$ indicates an AWGN channel without jamming, and $\sigma_A$ is the standard deviation of the channel noise.
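As a concrete illustration of (\ref{eqn:llr:awgn}), the sketch below (our own; not part of the paper's implementation) computes per-bit LLRs and the corresponding hard decisions for a BPSK frame.

```python
def awgn_llrs(y, sigma_a):
    """Per-bit LLRs over AWGN: L(y_i|A) = 2*y_i / sigma_A^2.
    BPSK maps bit 0 -> +1 and bit 1 -> -1, so a positive LLR favours bit 0."""
    return [2.0 * yi / sigma_a ** 2 for yi in y]

y = [0.9, -1.1, 0.2]                      # received soft values
llrs = awgn_llrs(y, sigma_a=0.5)          # [7.2, -8.8, 1.6]
hard = [0 if l > 0 else 1 for l in llrs]  # hard decisions: [0, 1, 0]
```

The magnitude of each LLR serves as the reliability measure consumed by soft-information decoders such as ORBGRAND.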
\section{Evaluating LLRs Under Jamming}\label{sec:llr} \subsection{Threat Model} The adversary is modeled as a jammer that disguises itself by injecting zero-mean Gaussian noise into the system. It is assumed that the smart jammer can retrieve the modulation and subcarrier frequency of operation and therefore injects jammer signals at the legitimate transmission frequency. In order not to alert the RSSI of the transmission system, the jammer interferes only for a fraction of the time and does so randomly in a bursty fashion. The occurrence of jamming is modeled as a Markov chain at the level of transmitted bits. \begin{figure} \includegraphics[width=\columnwidth]{./figures/markov_diagram.pdf} \caption{Two-state Markov chain model for the reactive jammer model, with transition probabilities $b$ and $g$. The state of the chain for bit $i$ is denoted $S_i$.} \vspace{-1em} \label{fig:markovchain} \end{figure} Fig.~\ref{fig:markovchain} depicts the two-state Markov chain for the jammed channel model. The state $A$ is AWGN only, with zero mean and variance $\sigma^2_{A}$. The $J$ state denotes that jamming is present in the channel, with total variance $\sigma^2_{J}$: \begin{equation}\label{eqn:sigma_j} \sigma^2_{J} = \sigma^2_{V} + \sigma^2_{A} \text{.} \end{equation} Here, $\sigma^2_{V}$ is the variance of the signal introduced by the jammer, which is an independent Gaussian random variable. The state transitions are modeled to occur per-bit. The state transition parameters $b$ and $g$ denote the probabilities of passing from the AWGN state to the jamming state and vice versa, respectively. The parameters $b$, $g$, $\sigma^2_{J}$, and $\sigma^2_{A}$ can be estimated, and so are assumed known to the receiver.
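The per-bit state process can be simulated directly; the following sketch (our own illustration, with assumed function names) draws a state sequence from the two-state chain of Fig.~\ref{fig:markovchain} and checks that the empirical fraction of jammed bits approaches the stationary value $b/(b+g)$.

```python
import random

def simulate_jammer_states(n, b, g, seed=0):
    """Draw n per-bit states from the two-state Markov chain:
    'A' (AWGN only) and 'J' (jamming), with P(A->J) = b and P(J->A) = g.
    The initial state is drawn from the stationary distribution."""
    rng = random.Random(seed)
    pi_j = b / (b + g)                      # stationary jamming probability
    state = 'J' if rng.random() < pi_j else 'A'
    states = []
    for _ in range(n):
        states.append(state)
        if state == 'A':
            state = 'J' if rng.random() < b else 'A'
        else:
            state = 'A' if rng.random() < g else 'J'
    return states

states = simulate_jammer_states(100000, b=0.01, g=0.25)
frac_jammed = states.count('J') / len(states)  # close to 0.01/0.26
```

Small $b$ with larger $g$ yields long AWGN stretches interrupted by short jamming bursts, matching the bursty behavior described above.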
\subsection{LLR Calculation Under Jamming} If a received signal $y_i$ is certainly affected by jamming, its noise is independent of that impacting other bits and the LLR would be \begin{equation}\label{eqn:llr:jam} L(y_i|J) = \frac{2y_i}{\sigma_J^2} \end{equation} instead of (\ref{eqn:llr:awgn}), where $\sigma_J^2$ is obtained using (\ref{eqn:sigma_j}). In practice, however, the receiver does not have certainty about whether a signal has been impacted by jamming, and that induces hidden Markov dependencies in the calculation of the LLRs. Regardless, the decoder will treat the LLR of each bit as being an independent random variable, and so the objective is to provide the best marginal estimate of the LLR of each bit given the jamming uncertainty. Let $\{S_i\}$ denote the Markov state process, with $S_i$ taking values of $A$ for the AWGN state and $J$ for the jamming state. Then, the conditional probability of the transmitted binary variable $B_i$ at index $i$ being a $0$ can be computed as \begin{align} p_{B_i|Y^n}(0|y^n) & = \sum_{s^n\in\{A,J\}^n} p_{B_i,S^n|Y^n}(0,s^n|y^n) \notag \\ & = \sum_{s^n\in\{A,J\}^n} p_{B_i|S^n,Y^n}(0|s^n,y^n) p_{S^n|Y^n}(s^n|y^n) \label{eq:bpost} \end{align} taking the entire received signal into account and, accordingly, its marginal LLR would be \begin{equation} L(y_i) = \ln \frac{p_{B_i|Y^n}(0|y^n)}{p_{B_i|Y^n}(1|y^n)} \end{equation} which can be expanded to incorporate the jamming uncertainty using equation \eqref{eq:bpost}. Given the received signal sequence $y^n$, the conditional probability of a jamming sequence $s^n\in\{A,J\}^n$ can be computed as \vspace{-0.5em} \begin{equation}\label{eqn:pjx} p_{S^n|Y^n}(s^n|y^n) = \frac{f_{Y^n|S^n}(y^n|s^n) p_{S^n}(s^n)}{f_{Y^n}(y^n)}, \end{equation} where $f$ is the probability density function (PDF).
As the noise is independent of the channel states, we have that \begin{equation}\label{eqn:pj_pmf} f_{Y^n|S^n}(y^n|s^n) = \prod_{i=1}^n f_{Y|S}(y_i|s_i) \text{.} \end{equation} Incorporating (\ref{eqn:pj_pmf}) into (\ref{eqn:pjx}), we get \begin{equation}\label{eqn:pjx_new} p_{S^n|Y^n}(s^n|y^n) = \frac{\prod_{i=1}^n f_{Y|S}(y_i|s_i) p_{S^n}(s^n) }{ f_{Y^n}(y^n) }\text{,} \end{equation} where $s^n$ ranges over $2^n$ possible jamming sequences. The probability of a received signal at an arbitrary index $i$ being in the $J$ state can be evaluated from \eqref{eqn:pjx} as \begin{equation}\label{eqn:bruteforce} p_{S_i|Y^n}(J|y^n) = \sum_{s^n\in\{A,J\}^n:s_i=J} p_{S^n|Y^n}(s^n|y^n). \end{equation} The brute force evaluation in \eqref{eqn:bruteforce} requires a burdensome $2^{n-1}$ computations, so in the following section we propose an efficient estimation technique for the marginal probability of jamming. Moreover, for reduced computation, we employ a linear approximation to the full LLR computation unconditioned on jamming state: \vspace{-0.5em} \begin{equation}\label{eqn:llr:proposed} \hat{L}(y_i) = L(y_i|A) p_{S_i|Y^n}(A|y^n) + L(y_i|J) p_{S_i|Y^n}(J|y^n). \end{equation} \section{Approximating the Conditional Probability of Jamming}\label{sec:pj} \subsection{The Impact of False Positives/Negatives on BLER}\label{sec:pj:errortypes} The collected statistical data, which is the received signal in our case, may lead to incorrect conclusions in terms of misidentifying the $A$ and $J$ states. Therefore, it is essential to assess the impact of false positives and false negatives on the BLER performance. 
\begin{figure} \centering \input{figures/awareness2.tikz} \input{figures/awareness.tikz} \vspace{-1em} \caption{The quantified impact of (a) false positives and (b) false negatives on the BLER performance, using $\text{RLC}[128,105]$ with the ORBGRAND algorithm.} \label{fig:type12} \vspace{-1em} \end{figure} False positives occur when a non-jammed index is mistaken for being jammed. In this scenario, $L(y_i|J)$ in equation \eqref{eqn:llr:jam} is used instead of $L(y_i|A)$ in equation \eqref{eqn:llr:awgn} for the mistaken index $i$. False negatives occur when a jammed index is mistaken for being non-jammed and $L(y_i|A)$ is used instead of $L(y_i|J)$ for the mistaken index $i$. To understand and quantify the impact of mistaking the events on the BLER performance, a set of genie-aided simulations is carried out. A random linear code $\text{RLC}[n, k] = \text{RLC}[128, 105]$ is used as an example, where $n$ denotes the code length and $k$ denotes the code dimension, and the universal ORBGRAND algorithm is used to derive the BLER performance. The state information for each received bit is provided to the genie-aided decoder; therefore, $L(y_i|A)$ is used for indices belonging to state $A$, and $L(y_i|J)$ is used otherwise. To quantify the impact of false positives, BLER is measured when $L(y_i|J)$ is used for a proportion of indices that belong to state $A$. Similarly, to quantify the impact of false negatives, BLER is measured when $L(y_i|A)$ is used for a proportion of indices that belong to state $J$. Fig.~\ref{fig:type12} presents the simulated BLER performance as a function of the percentage of false positives (a) and false negatives (b). The SNRs for the AWGN channel and the jammer are selected as $\text{SNR}_A = 12$ and $\text{SNR}_J = 0$ dB, respectively. In both performance assessments, it can be observed that the BLER performance degrades as the number of errors increases.
However, the degradation with false negatives is far more severe than the degradation with false positives. For instance, $5\%$ of false negatives has the same amount of impact on BLER performance as about $40\%$ of false positives. This means that the correct identification of jammed indices is far more important than the incorrect identification of the non-jammed indices, and our algorithm should prioritize identifying jammed indices correctly. \subsection{Calculating the Jamming Probability}\label{sec:pj:calc1} The estimation of the probability of jamming is performed in two steps. In the first step, an initial estimate of the probability that the $i$-th bit experienced jamming, $p_{S_i|Y^n}(J|y^n)$, is derived based on the marginal distribution given $y_i$ alone, $p_{S_i|Y_i}(J|y_i)$. Then, using the Markov state transition probabilities, the probability of jamming for specific indices neighboring those with high jamming likelihoods is recomputed to improve the estimates. The sign of a received signal $y_i$ does not have an impact on $p_{S_i|Y_i}(J|y_i)$. Hence, we consider a new random variable, ${|Y|}$, that is based on the magnitude of $Y$. In this case, the new random variable follows a folded Gaussian distribution with PDF, $f_{{|Y|}}({|y_i|})$, equal to \begin{equation}\label{eqn:foldedGauss} \frac{1}{\sigma\sqrt{2\pi}} \bigg{(} \exp\Big{(}\frac{-({|y_i|}-1)^2}{2\sigma^2}\Big{)} + \exp\Big{(}\frac{-({|y_i|}+1)^2}{2\sigma^2}\Big{)} \bigg{)} \end{equation} for $0 \leq i < n$. In the first step, our estimate of $p_{S_i|Y^n}(J|y^n)$ is \begin{equation}\label{eqn:pj_init} p_{S_i|Y_i}(J|y_i) = \frac{f_{{|Y|}|S_i}({|y_i|}\big{|}J) p_{S_i}(J)}{f_{{|Y|}}({|y_i|})}. \end{equation} The conditional PDF expression in (\ref{eqn:pj_init}) can be obtained by substituting the jamming variance into the expression in (\ref{eqn:foldedGauss}).
Using the law of total probability, the PDF in the denominator of (\ref{eqn:pj_init}) is expanded as \begin{equation}\label{eqn:bayes1:end} f_{{|Y|}}({|y_i|}) = \sum_{s_i\in\{A,J\}} f_{{|Y|}|S_i}({|y_i|}|s_i) p_{S_i}(s_i) \text{.} \end{equation} Substituting (\ref{eqn:foldedGauss}) and (\ref{eqn:bayes1:end}) into (\ref{eqn:pj_init}), the first approximation for the conditional probability of bit $i$ having experienced jamming can be calculated. \begin{figure} \centering \input{figures/pj.tikz} \caption{$p_{S_i|Y_i}(J|y_i)$ as a function of received signal magnitude, $|y_i|$, based on (\ref{eqn:pj_init}). The SNR of the AWGN channel is fixed at $\text{SNR}_A=12$ dB, and several probabilities are depicted based on various jamming SNRs.} \label{fig:pj} \end{figure} Fig.~\ref{fig:pj} presents $p_{S_i|Y_i}(J|y_i)$ as a function of the received signal magnitude ${|y_i|}$. It is minimized at the absolute value of the BPSK constellation point, $1$, and increases as the received signal magnitude drifts away from the constellation point. Note that $p_{S_i|Y_i}(J|y_i)$ takes the stationary probability of jamming at the constellation point, since there is always a chance that the received signal could be a result of jamming. \begin{figure} \centering \input{figures/llr_new.tikz} \caption{LLR magnitudes based on AWGN only (\ref{eqn:llr:awgn}), jamming only (\ref{eqn:llr:jam}), and the proposed approach (\ref{eqn:llr:proposed}) using the first approximation to the marginal conditional jamming probabilities. The SNRs of the AWGN channel and the jamming channel are fixed at 12 dB and 0 dB, respectively.} \label{fig:llr_new} \end{figure} Fig.~\ref{fig:llr_new} depicts LLR magnitude trend lines based on AWGN and jamming conditions, as well as the proposed LLR computation (\ref{eqn:llr:proposed}) when the first approximation $p_{S_i|Y_i}(J|y_i)$ (\ref{eqn:pj_init}) is incorporated.
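A minimal sketch of this first-step estimate follows (function names, and the reading of $\text{SNR}_J$ as the total noise-plus-jamming SNR, are our assumptions): the folded-Gaussian PDF of (\ref{eqn:foldedGauss}) is evaluated under both states and combined via Bayes' rule as in (\ref{eqn:pj_init}) and (\ref{eqn:bayes1:end}), with the stationary probability $b/(b+g)$ as the prior.

```python
import math

def folded_gauss_pdf(y_abs, sigma):
    """PDF of |Y| for unit-energy BPSK in zero-mean Gaussian noise."""
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-(y_abs - 1.0) ** 2 / (2.0 * sigma ** 2))
                + math.exp(-(y_abs + 1.0) ** 2 / (2.0 * sigma ** 2)))

def p_jam(y_abs, sigma_a, sigma_j, pi_j):
    """First approximation of p_{S_i|Y_i}(J|y_i) via Bayes' rule."""
    num = folded_gauss_pdf(y_abs, sigma_j) * pi_j
    den = num + folded_gauss_pdf(y_abs, sigma_a) * (1.0 - pi_j)
    return num / den

sigma_a = 10.0 ** (-12.0 / 20.0)   # SNR_A = 12 dB for unit-energy BPSK
sigma_j = 1.0                      # SNR_J = 0 dB (assumed total noise + jamming)
pi_j = 0.01 / (0.01 + 0.25)        # stationary prior b/(b+g)
p_low = p_jam(1.0, sigma_a, sigma_j, pi_j)   # near the constellation: small
p_high = p_jam(3.0, sigma_a, sigma_j, pi_j)  # far from it: close to 1
```

This reproduces the qualitative behavior of Fig.~\ref{fig:pj}: the estimate is small at the constellation point and approaches one as the magnitude drifts away.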
With increasing signal magnitude, the proposed method switches from the AWGN LLR trend line toward the jamming LLR trend line. This behavior reduces the strength of the LLRs at higher magnitudes as a result of the suspicion of jamming, which is then evaluated at soft-information decoders as a less reliable bit index. Consequently, such indices are naturally prioritized for correction in attempts to identify the transmitted codeword. When the jammer yields a signal magnitude that is large enough to come under suspicion, $p_{S_i|Y_i}(J|y_i)$ is a good estimate of $p_{S_i|Y^n}(J|y^n)$, as demonstrated in Fig.~\ref{fig:pj} and Fig.~\ref{fig:llr_new}. On the other hand, solely relying on the signal magnitudes would not allow us to detect a substantial portion of the jammed indices, as indices with signal magnitudes close to the constellation point would mostly be inferred to be non-jammed, which is a major limiting factor on the performance improvement. To tackle this issue, we take advantage of the burstiness of the two-state Markov chain. If an index $i$ has a low initial $p_{S_i|Y_i}(J|y_i)$ value, but neighbors an index $i \pm 1$ that has a sufficiently high value, as governed by a threshold, then our estimate of $p_{S_i|Y_i}(J|y_i)$ is increased using a heuristic. This is illustrated in Fig.~\ref{fig:anchor} for a sequence of signals. On the top, the sequence $S^n$ represents the Markov state of a series of indices and is hidden from the receiver. The receiver calculates $p_{S_i|Y_i}(J|y_i)$, from which it determines a subset of indices that have relatively high values. The indices at which $p_{S_i|Y_i}(J|y_i)$ yields a sufficiently high value are called \textit{anchor indices}. Using the Markov chain state transition probabilities, the $p_{S_i|Y_i}(J|y_i)$ for the indices adjacent to these anchor indices can be recalculated recursively.
As a result, we derive a new, improved set of jamming probability estimates, $\widehat{p_{S_i|Y^n}}(J|y^n)$ for $i\in\{0,\ldots,n-1\}$. \begin{figure} \includegraphics[width=\columnwidth]{./figures/anchor.pdf} \caption{Example state sequence and the associated $p_{S_i|Y_i}(J|y_i)$ values. Indices with high $p_{S_i|Y_i}(J|y_i)$ values are designated as anchor indices (represented with the anchor symbol) and Markov chain state transition probabilities are used to recalculate $p_{S_i|Y_i}(J|y_i)$ for the neighboring indices, resulting in a better estimate, $\widehat{p_{S_i|Y^n}}(J|y^n)$.} \label{fig:anchor} \end{figure} For its jamming probability to be reconsidered, an index must either neighbor an anchor index or be sandwiched between two anchor indices. Otherwise, the initial $p_{S_i|Y_i}(J|y_i)$ is used. \subsubsection{Index Neighboring a Single Anchor Index} In the first case, the index of interest neighbors an anchor index on one side and a non-anchor index on the other. For simplicity, let the subject index be $i$ and the anchor index be $i-1$. Using the Markov property, we set \begin{align}\label{eqn:singleanchor} &\widehat{p_{S_i|Y^n}}(J|y^n) = \notag \\ &bp_{S_{i-1}|Y_{i-1}}(A|y_{i-1})+(1-g)p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) \text{.} \end{align} \subsubsection{Index Neighboring Two Anchor Indices} Similar to (\ref{eqn:singleanchor}), we derive the updated jamming probability for an index that lies between two anchor indices. For the subject index $i$, the anchor indices are at $i-1$ and $i+1$. Unlike the previous case, the new probability is conditioned on two different states.
Based on $p_{S_{i-1}|Y_{i-1}}(J|y_{i-1})$, $p_{S_{i+1}|Y_{i+1}}(J|y_{i+1})$, $b$, and $g$, and again using the Markov property, $\widehat{p_{S_i|Y^n}}(J|y^n)$ is expressed as: \begin{align}\label{eqn:doubleanchor} &\widehat{p_{S_i|Y^n}}(J|y^n) =\notag\\ &\hspace{-0em}\frac{(1-g)(1-g)}{(1-g)(1-g) + bg} p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) ~ p_{S_{i+1}|Y_{i+1}}(J|y_{i+1}) + \notag\\ &~ \hspace{-0em} \frac{(1-g)}{(1-g) + (1-b)} p_{S_{i-1}|Y_{i-1}}(A|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(J|y_{i+1}) + \notag\\ &~ \hspace{-0em} \frac{(1-g)}{(1-g) + (1-b)} p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(A|y_{i+1}) + \notag\\&~ \hspace{-0em} \frac{bg}{bg+(1-b)(1-b)} p_{S_{i-1}|Y_{i-1}}(A|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(A|y_{i+1}). \end{align} One possible drawback of estimating $p_{S_i|Y^n}(J|y^n)$ from $(p_{S_1|Y_1}(J|y_1),\ldots, p_{S_n|Y_n}(J|y_n))$ based on temporal correlation is the risk of increasing the number of false positives, especially at non-jammed indices neighboring jammed indices. These false positives could potentially have a negative impact on performance. However, as discussed in Section~\ref{sec:pj:errortypes} and as presented in Section~\ref{sec:res}, their impact on BLER performance is negligible. \section{Simulation Results}\label{sec:res} \begin{figure} \centering \input{figures/pj_result.tikz} \vspace{0.5em} \input{figures/pa_result.tikz} \vspace{-1.5em} \caption{Simulated $\widehat{p_{S_i|Y^n}}(J|y^n)$ based on the received signal magnitude, when $S=J$ (top) and $S=A$ (bottom). The SNR of the AWGN channel is fixed at $\text{SNR}_A=12$ dB. All parameters are kept the same as in Fig.~\ref{fig:pj}.} \label{fig:pj2} \vspace{-1.0em} \end{figure} The proposed jamming-aware LLR calculation using $\widehat{p_{S_i|Y^n}}(J|y^n)$ is evaluated. The state transition probabilities are set to $b=0.01$ and $g=0.25$, corresponding to an overall stationary jamming probability of $\frac{b}{b+g} \approx 3.85\%$. The SNR for the AWGN state is set to $\text{SNR}_A = 12$ dB.
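The neighbor updates (\ref{eqn:singleanchor}) and (\ref{eqn:doubleanchor}) used in this re-evaluation can be transcribed directly; a minimal sketch (function names are ours):

```python
def update_single_anchor(pj_prev, b, g):
    # One-step Markov prediction from the anchor at i-1;
    # the A-state probability is the complement of pj_prev.
    return b * (1.0 - pj_prev) + (1.0 - g) * pj_prev

def update_double_anchor(pj_prev, pj_next, b, g):
    # Anchors on both sides, at i-1 and i+1, weighted per the
    # state-pair terms of the double-anchor rule.
    pa_prev, pa_next = 1.0 - pj_prev, 1.0 - pj_next
    w_jj = (1 - g) ** 2 / ((1 - g) ** 2 + b * g)
    w_aj = (1 - g) / ((1 - g) + (1 - b))
    w_aa = b * g / (b * g + (1 - b) ** 2)
    return (w_jj * pj_prev * pj_next
            + w_aj * (pa_prev * pj_next + pj_prev * pa_next)
            + w_aa * pa_prev * pa_next)
```

With $b=0.01$ and $g=0.25$, an anchor with a high jamming probability pulls its neighbor's estimate well above the stationary level, reflecting the burstiness of the chain.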
An empirical threshold probability of $0.2$ is used to derive the anchor indices, and the neighboring indices are re-evaluated recursively, \textit{i.e.} until the estimates $\widehat{p_{S_i|Y^n}}(J|y^n)$ of the neighboring indices fall below the threshold. Fig.~\ref{fig:pj2} visualizes $\widehat{p_{S_i|Y^n}}(J|y^n)$ when the ground truth is $S=J$ (top) and $S=A$ (bottom) with respect to the received signal magnitude of an arbitrary index $i$. Unlike Fig.~\ref{fig:pj}, the statistics from states $A$ and $J$ are kept separate to demonstrate the impact of Markov state transitions. All other parameters are kept the same as in Fig.~\ref{fig:pj}. Compared to Fig.~\ref{fig:pj}, the estimate of $p_{S_i|Y^n}(J|y^n)$ near the constellation point has increased significantly for all considered $\text{SNR}_J$ values when $S=J$. This means that the number of false negatives that arise when using (\ref{eqn:pj_init}) alone has decreased significantly. In contrast, the estimate of $p_{S_i|Y^n}(J|y^n)$ when $S=A$ has not changed significantly compared to the first approximation in Fig.~\ref{fig:pj}. Therefore, false positives due to leveraging temporal correlation with the neighboring indices are negligible. Fig.~\ref{fig:bler2} presents the BLER performance comparison using $\text{RLC}[128,105]$ and 5G NR $\text{CA-Polar}[128,105]$. The ORBGRAND algorithm \cite{duffy2021orbgrand,duffy2022orbgrand} is selected to evaluate the performance of the selected codes, since it is a universal soft-information decoder that allows the evaluation of distinct codebooks. Moreover, despite its recent introduction to the literature, several works have demonstrated the practicality of its algorithm family with circuit implementations \cite{riaz2021grand,abbas2022orbgrand,condo2022orbgrand}. The jammer SINR represents the ratio of the legitimate transmission power to the jammer interference power, \textit{i.e.} a low SINR indicates a powerful jammer.
For both comparison scenarios, the performance using the regular LLR approach (\ref{eqn:llr:awgn}) is the baseline BLER. The red curves represent the proposed approach using $\widehat{p_{S_i|Y^n}}(J|y^n)$. The BLER performance for $p_{S_i|Y_i}(J|y_i)$ without using the Markov chain state transitions in (\ref{eqn:singleanchor})-(\ref{eqn:doubleanchor}) is also shown as a reference. The baseline performance shows that a strong jammer yields a BLER close to $1$, \textit{i.e.} almost no packets can be decoded, therefore causing a DoS. The proposed LLR computation (\ref{eqn:llr:proposed}) using $\widehat{p_{S_i|Y^n}}(J|y^n)$ is shown to improve the baseline BLER performance by an order of magnitude in the DoS region, \textit{i.e.} about $9$ out of $10$ packets can be decoded correctly despite the strong jammer interference. The proposed approach demonstrates a $2.7$ dB SINR gain at a BLER of $10^{-2}$ and a $0.75$ dB gain at a BLER of $10^{-6}$ for both codes. Note that high SINR values correspond to weak jammers, which are atypical since they can only degrade the performance marginally and cannot cause a DoS. Nonetheless, the proposed approach is shown to outperform the baseline even in the high SINR region. \begin{figure} \centering \scalebox{1.00}{ \input{figures/results_BLER_RLC_N128_K105.tikz}} \scalebox{1.00}{ \input{figures/results_BLER_CAPolar_N128_K105.tikz}} \caption{BLER comparison of the proposed approach against conventional LLR, with respect to jammer SINR, using $\text{RLC}[128,105]$ (top) and 5G $\text{CA-Polar}[128,105]$ (bottom) codes. The SNR of the AWGN channel is fixed at $\text{SNR}_A = 12$ dB.} \label{fig:bler2} \end{figure} \section{Conclusion}\label{sec:conc} In this work, a novel and general physical layer security approach against a smart bursty jammer is developed. First, the adversary is modeled as disguising itself in the channel as zero-mean Gaussian noise.
In addition, the jammer's overall active duration is governed by a two-state Markov chain with a low interference time to avoid RSSI detection. To tackle this challenging model, we proposed a new approach to LLR calculation under adversarial conditions to improve the BLER performance. The new LLR calculation is based on a conditional probability of jamming, calculated using the received signal and the Markov chain state transition probabilities. The proposed approach is applied prior to decoding and works with any soft-information decoder. Simulation results with the universal ORBGRAND algorithm using $\text{RLC}[128,105]$ and 5G $\text{CA-Polar}[128,105]$ codes show that the proposed solution substantially improves the reliability estimates for the received signals, prevents denial of service, and yields an SINR gain of up to $2.7$ dB. Future work includes further improvement of the jamming detection accuracy and comparison with other available soft-information decoders. \section*{Acknowledgements} This work was partially supported by Defense Advanced Research Projects Agency Contract number HR00112120008 and by National Science Foundation ECCS Award numbers 2128517 and 2128555. The content of the information does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred. This publication has emanated from research supported in part by a grant from Science Foundation Ireland under grant number 18/CRT/6049. The opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Science Foundation Ireland. \bibliographystyle{IEEEtran} \section{Introduction} Jammers typically aim to cause a denial of service (DoS) or reduction of quality (RoQ) at the receiver \cite{orakcal2014jamming} without being detected.
They exploit the shared wireless medium by mixing their signals with the legitimate communication. As a result, the received frame becomes undecodable, which causes anomalies such as increased repeat requests, reduced throughput, prolonged delays, or a complete breakdown \cite{dos-attacks-ieeecomm-2011}. Powerful jammers that blast channels with unrestrained amounts of energy can be detected easily by the receiver. More subtle jammers, on the other hand, might seek to inject short bursts or lower levels of energy to disrupt communication while circumventing detection, causing a DoS. In general, an effective jammer must combine high energy efficiency, low detection probability, a high level of DoS, and resistance to physical layer (PHY) anti-jamming techniques. From an information-theoretic perspective, uniform jammers are the most effective at reducing the channel capacity and the code rate \cite{turner1979}. However, emerging techniques such as rate-adaptation algorithms offer efficient countermeasures to such jammer attacks \cite{Gawas2017}. On the other hand, bursty jammers \cite{orackal2012}, where an adversary jams a burst of bits in a transmitted frame, can be effective at increasing the block-error rate (BLER). Bursty jammers become more effective at increasing the BLER when their burst patterns are unpredictable to the receiver. With an increased BLER, the receiver must compensate by reducing the code rate, which sacrifices information throughput. Therefore, it is essential to study countermeasures to such jammer attacks. Most traditional security approaches for wireless technologies are applied to the upper layers of the protocol stack \cite{mukherjee2014principles}. However, with the rapid growth in use cases and network density, maintaining security for 5G-and-beyond technologies has become a challenge \cite{Ghasempour2022}. PHY-layer security is an emerging solution to threats that arise with evolving adversaries \cite{wu2018survey}.
Under such adversarial behavior, machine learning-based approaches \cite{yuxin2021,upad2019} and spectrum sensing-based approaches \cite{upad2021} have been proposed to counter jamming. Our paper specifically focuses on jamming attacks on soft-information decoders, a topic that has received scant attention in the literature. Our anti-jamming approach applies to general coding schemes and can be supported at the physical layer with minimal computational overhead. In this work, we consider a smart, reactive jammer that is bursty and only active during a fraction of the transmission. It is assumed that transmission parameters, such as the modulation and the subcarrier frequency, are known to the adversary. To counter such an attack, we propose a modified log-likelihood ratio (LLR) computation that takes the conditional probability of jamming into account for each index of the received frame. The computation of this posterior probability is performed in two steps. First, an initial value is calculated based on the received signal strength. Anchor points in the received frame, for which the conditional jamming probability is high, are then used to inform the jamming estimates of neighboring points, based on Markov state transition probabilities. The proposed method is general to any receiver and is carried out before decoding. Simulation results show that the proposed method uncovers a significant portion of the attack, so the attacker cannot maintain deniability. Using the universal ORBGRAND algorithm \cite{duffy2021orbgrand,duffy2022orbgrand}, it is shown that an order of magnitude in BLER performance can be recovered with the proposed method and a complete DoS is prevented, using different codebooks, \textit{i.e.} random linear codes (RLCs) and 5G cyclic redundancy check-aided Polar codes (CA-Polar). The rest of the paper is organized as follows. In Section~\ref{sec:bg}, preliminaries are detailed.
In Section~\ref{sec:llr}, the smart bursty jammer model and the proposed LLR approach with the conditional jamming probability computation are presented. Section~\ref{sec:pj} explains how to approximate the conditional probability of jamming. Results are presented in Section~\ref{sec:res}, followed by concluding remarks in Section~\ref{sec:conc}. \section{Preliminaries}\label{sec:bg} \subsection{PHY Jammer Models}\label{sec:bg:jammer} Protection against an adversary is not possible if the adversary has unlimited resources. Hence, we assume that the adversary must operate under a set of constraints. A fully modeled adversary must have assumptions, goals, and capabilities \cite{do2019role}. Although there are numerous categorizations of jammers in the literature, PHY jammer models can be summarized in the following two categories \cite{dos-attacks-ieeecomm-2011,xu2005feasibility}. \subsubsection{Constant jammers} As their name suggests, constant jammers continuously emit disruptive signals over the communication medium. Constant jammers are primitive and often can be detected through the radio signal strength indicator (RSSI) component of the receiver. Simple measures such as frequency hopping can be taken as a precaution against these types of jammers \cite{rty-freqhopping}. Moreover, constant jammers are power inefficient, which limits their ability to be mobile. \subsubsection{Reactive jammers} As a power-efficient and more intelligent alternative, reactive jammers emit signals only when they sense a legitimate transmission taking place. This type of jammer causes a signal collision at the receiver that disrupts part or all of the frame. Prevention techniques for these types of jammers include interference and RSS sampling \cite{strasser2010detection}. Carefully engineered, smart, reactive jammers are the most challenging type of jammer \cite{huynh2020jammer}.
Usually, the error correction algorithms embedded in the PHY can be considered a first response against such undesired attacks. However, as error-correcting codes (ECCs) are standardized, their error correction capability is known to the adversary. Therefore, a jammer can corrupt just enough of the transmission to cause the decoding to fail, eventually causing a DoS. \subsection{Channel model}\label{sec:bg:llr} Every soft-information decoder requires LLRs as input, which determine the hard output value of each received signal and also act as a measure of \textit{reliability} for those signals. Under regular conditions, a larger LLR magnitude indicates more confidence in the received signal. Let $b^n$, a binary channel input of length $n$, be modulated using binary phase-shift keying (BPSK) with the mapping \begin{equation*} b^n \in \{0, 1\}^n \rightarrow x^n \in \{+1, -1\}^n, \end{equation*} where $x^n$ is the modulated channel input variable sequence. Assuming equiprobable symbols and IID noise, given a realization of the received signal, $y^n = (y_0, y_1, \cdots, y_{n-1})$, the LLRs can be calculated per-bit as \begin{equation}\label{eqn:llr:awgn} L(y_i|A) = \frac{2y_i}{\sigma_A^2} \text{, for each } i \in \{1,\ldots,n\}, \end{equation} where $i$ indicates the bit index of the received frame, the conditioning on $A$ indicates an AWGN channel without jamming, and $\sigma_A$ is the standard deviation of the channel noise. \section{Evaluating LLRs Under Jamming}\label{sec:llr} \subsection{Threat Model} The adversary is modeled as a jammer that disguises itself by injecting zero-mean Gaussian noise into the system. It is assumed that the smart jammer can retrieve the modulation and subcarrier frequency of operation and therefore injects jammer signals at the legitimate transmission frequency. In order not to alert the RSSI of the transmission system, the jammer interferes only a fraction of the time and does so randomly in a bursty fashion.
The occurrence of jamming is modeled as a Markov chain at the level of transmitted bits. \begin{figure} \includegraphics[width=\columnwidth]{./figures/markov_diagram.pdf} \caption{Two-state Markov chain model for the reactive jammer model, with transition probabilities $b$ and $g$. The state of the chain for bit $i$ is denoted $S_i$.} \vspace{-1em} \label{fig:markovchain} \end{figure} Fig.~\ref{fig:markovchain} depicts the two-state Markov chain for the jammed channel model. The state $A$ is AWGN only, with zero mean and variance $\sigma^2_{A}$. The $J$ state denotes that jamming is present in the channel, with total variance $\sigma^2_{J}$: \begin{equation}\label{eqn:sigma_j} \sigma^2_{J} = \sigma^2_{V} + \sigma^2_{A} \text{.} \end{equation} Here, $\sigma^2_{V}$ is the variance of the signal introduced by the jammer, which is an independent Gaussian random variable. The state transitions are modeled to occur per-bit. The state transition parameters $b$ and $g$ denote the probabilities of passing from the AWGN state to the jamming state and vice versa, respectively. The parameters $b$, $g$, $\sigma^2_{J}$, and $\sigma^2_{A}$ can be estimated, and so are assumed known to the receiver. \subsection{LLR Calculation Under Jamming} If a received signal $y_i$ is known to be affected by jamming, its noise is independent of that impacting other bits and the LLR would be \begin{equation}\label{eqn:llr:jam} L(y_i|J) = \frac{2y_i}{\sigma_J^2} \end{equation} instead of (\ref{eqn:llr:awgn}), where $\sigma_J^2$ is obtained using (\ref{eqn:sigma_j}). In practice, however, the receiver does not have certainty on whether a signal has been impacted by jamming, and that induces hidden Markov dependencies in the calculation of the LLRs. Regardless, the decoder will treat the LLR of each bit as an independent random variable, so the objective is to provide the best marginal estimate of the LLR of each bit given the jamming uncertainty.
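For reference, the channel model above is straightforward to simulate. The sketch below (NumPy; names and noise levels are ours, for illustration only) draws per-bit states from the two-state chain of Fig.~\ref{fig:markovchain} and applies the corresponding noise variance from (\ref{eqn:sigma_j}):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_jammed_channel(bits, b, g, sigma_a, sigma_v):
    # BPSK over the two-state Markov jammed channel.
    # Returns received signals y and the hidden states
    # (False = AWGN state A, True = jamming state J).
    n = len(bits)
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # 0 -> +1, 1 -> -1
    states = np.empty(n, dtype=bool)
    jammed = rng.random() < b / (b + g)             # start from the stationary law
    for k in range(n):
        states[k] = jammed
        # per-bit transitions: P(A->J) = b, P(J->A) = g
        jammed = (rng.random() < b) if not jammed else (rng.random() >= g)
    # sigma_J^2 = sigma_V^2 + sigma_A^2
    sigma = np.where(states, np.sqrt(sigma_a ** 2 + sigma_v ** 2), sigma_a)
    y = x + sigma * rng.standard_normal(n)
    return y, states
```

For long blocks, the fraction of jammed bits concentrates around the stationary probability $b/(b+g)$ of the chain.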
Let $\{S_i\}$ denote the Markov state process, with $S_i$ taking values of $A$ for the AWGN state and $J$ for the jamming state. Then, the conditional probability of the transmitted binary variable $B_i$ at index $i$ being a $0$ can be computed as \begin{align} p_{B_i|Y^n}(0|y^n) & = \sum_{s^n\in\{A,J\}^n} p_{B_i,S^n|Y^n}(0,s^n|y^n) \notag \\ & = \sum_{s^n\in\{A,J\}^n} p_{B_i|S^n,Y^n}(0|s^n,y^n) p_{S^n|Y^n}(s^n|y^n) \label{eq:bpost} \end{align} which takes the entire received signal into account. Accordingly, its marginal LLR would be \begin{equation} L(y_i) = \ln \frac{p_{B_i|Y^n}(0|y^n)}{p_{B_i|Y^n}(1|y^n)} \end{equation} which can be expanded to incorporate the jamming uncertainty using equation \eqref{eq:bpost}. Given the received signal sequence $y^n$, the conditional probability of a jamming sequence $s^n\in\{A,J\}^n$ can be computed as \vspace{-0.5em} \begin{equation}\label{eqn:pjx} p_{S^n|Y^n}(s^n|y^n) = \frac{f_{Y^n|S^n}(y^n|s^n) p_{S^n}(s^n)}{f_{Y^n}(y^n)}\text{,} \end{equation} where $f$ is the probability density function (PDF). As the noise is independent of the channel states, we have that \begin{equation}\label{eqn:pj_pmf} f_{Y^n|S^n}(y^n|s^n) = \prod_{i=1}^n f_{Y|S}(y_i|s_i) \text{.} \end{equation} Incorporating (\ref{eqn:pj_pmf}) into (\ref{eqn:pjx}), we get \begin{equation}\label{eqn:pjx_new} p_{S^n|Y^n}(s^n|y^n) = \frac{\prod_{i=1}^n f_{Y|S}(y_i|s_i) p_{S^n}(s^n) }{ f_{Y^n}(y^n) }\text{,} \end{equation} where $s^n$ ranges over $2^n$ possible jamming sequences. The probability of a received signal at an arbitrary index $i$ being in the $J$ state can be evaluated from \eqref{eqn:pjx} as \begin{equation}\label{eqn:bruteforce} p_{S_i|Y^n}(J|y^n) = \sum_{s^n\in\{A,J\}^n:s_i=J} p_{S^n|Y^n}(s^n|y^n). \end{equation} The brute force evaluation in \eqref{eqn:bruteforce} requires a burdensome $2^{n-1}$ summands, so in the following section we propose an efficient estimation technique for the marginal probability of jamming.
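For very short blocks, the exact marginal \eqref{eqn:bruteforce} can be evaluated by enumerating all $2^n$ state patterns, which is useful as a ground truth when assessing the approximations that follow; a sketch (names are ours; tractable only for tiny $n$):

```python
import itertools, math

def exact_p_jam(y, i, b, g, sigma_a, sigma_j):
    # Exact marginal p_{S_i|Y^n}(J|y^n) by enumerating all 2^n
    # state sequences of the two-state chain.
    n = len(y)
    pi_j = b / (b + g)                       # stationary probability of state J
    trans = {('A', 'A'): 1 - b, ('A', 'J'): b,
             ('J', 'J'): 1 - g, ('J', 'A'): g}

    def lik(yk, s):
        # f_{Y|S}(y_k | s) for equiprobable BPSK symbols at +/-1
        sig = sigma_j if s == 'J' else sigma_a
        return 0.5 * sum(math.exp(-(yk - m) ** 2 / (2 * sig * sig))
                         for m in (1.0, -1.0)) / (sig * math.sqrt(2 * math.pi))

    num = den = 0.0
    for s in itertools.product('AJ', repeat=n):
        p = pi_j if s[0] == 'J' else 1.0 - pi_j
        for k in range(1, n):
            p *= trans[(s[k - 1], s[k])]
        for k in range(n):
            p *= lik(y[k], s[k])
        den += p
        if s[i] == 'J':
            num += p
    return num / den
```

With a single large-magnitude sample in a short block, the exact marginal assigns a high jamming probability to that index and a visibly elevated probability to its neighbors, in line with the burstiness of the chain.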
Moreover, for reduced computation, we employ a linear approximation to the full LLR computation unconditioned on the jamming state: \vspace{-0.5em} \begin{equation}\label{eqn:llr:proposed} \hat{L}(y_i) = L(y_i|A) p_{S_i|Y^n}(A|y^n) + L(y_i|J) p_{S_i|Y^n}(J|y^n). \end{equation} \section{Approximating the Conditional Probability of Jamming}\label{sec:pj} \subsection{The Impact of False Positives/Negatives on BLER}\label{sec:pj:errortypes} The collected statistics, which in our case are the received signals, may lead to incorrect conclusions by misidentifying the $A$ and $J$ states. Therefore, it is essential to assess the impact of false positives and false negatives on the BLER performance. \begin{figure} \centering \input{figures/awareness2.tikz} \input{figures/awareness.tikz} \vspace{-1em} \caption{The quantified impact of (a) false positives and (b) false negatives on the BLER performance, using $\text{RLC}[128,105]$ with the ORBGRAND algorithm.} \label{fig:type12} \vspace{-1em} \end{figure} False positives occur when a non-jammed index is mistaken for being jammed. In this scenario, $L(y_i|J)$ in equation \eqref{eqn:llr:jam} is used instead of $L(y_i|A)$ in equation \eqref{eqn:llr:awgn} for the mistaken index $i$. False negatives occur when a jammed index is mistaken for being non-jammed, and $L(y_i|A)$ is used instead of $L(y_i|J)$ for the mistaken index $i$. To quantify the impact of these errors on the BLER performance, a set of genie-aided simulations is carried out. A random linear code $\text{RLC}[n, k] = \text{RLC}[128, 105]$ is used as an example, where $n$ denotes the code length and $k$ denotes the code dimension, and the universal ORBGRAND algorithm is used to derive the BLER performance. The state information for each received bit is provided to the genie-aided decoder; therefore, $L(y_i|A)$ is used for indices belonging to state $A$, and $L(y_i|J)$ is used otherwise.
To quantify the impact of false positives, the BLER is measured when $L(y_i|J)$ is used for a proportion of indices that belong to state $A$. Similarly, to quantify the impact of false negatives, the BLER is measured when $L(y_i|A)$ is used for a proportion of indices that belong to state $J$. Fig.~\ref{fig:type12} presents the simulated BLER performance versus the percentage of false positives (a) and false negatives (b). The SNRs for the AWGN channel and the jammer are selected as $\text{SNR}_A = 12$ and $\text{SNR}_J = 0$ dB, respectively. In both assessments, the BLER performance degrades as the number of errors increases. However, the degradation with false negatives is far more severe than the degradation with false positives. For instance, a $5\%$ rate of false negatives has about the same impact on the BLER performance as a $40\%$ rate of false positives. This means that correctly identifying the jammed indices is far more important than occasionally misidentifying non-jammed indices, and our algorithm should prioritize identifying jammed indices correctly. \subsection{Calculating the Jamming Probability}\label{sec:pj:calc1} The estimation of the probability of jamming is performed in two steps. In the first step, an initial estimate of the probability that the $i$-th bit experienced jamming, $p_{S_i|Y^n}(J|y^n)$, is derived based on the marginal distribution given $y_i$ alone, $p_{S_i|Y_i}(J|y_i)$. Then, using the Markov state transition probabilities, the probability of jamming for indices neighboring those with high jamming likelihoods is recomputed to improve the estimates. The sign of a received signal $y_i$ does not have an impact on $p_{S_i|Y_i}(J|y_i)$. Hence, we consider a new random variable, ${|Y|}$, based on the magnitude of $Y$.
In this case, the new random variable is a folded Gaussian distribution with PDF, $f_{{|Y|}}({|y_i|})$, equal to \begin{equation}\label{eqn:foldedGauss} \frac{1}{\sigma\sqrt{2\pi}} \bigg{(} \exp\Big{(}\frac{-({|y_i|}-1)^2}{2\sigma^2}\Big{)} + \exp\Big{(}\frac{-({|y_i|}+1)^2}{2\sigma^2}\Big{)} \bigg{)} \end{equation} for $0 \leq i < n$. In the first step, our estimate of $p_{S_i|Y^n}(J|y^n)$ is \begin{equation}\label{eqn:pj_init} p_{S_i|Y_i}(J|y_i) = \frac{f_{{|Y|}|S_i}({|y_i|}\big{|}J) p_{S_i}(J)}{f_{{|Y|}}({|y_i|})}. \end{equation} The conditional PDF expression in (\ref{eqn:pj_init}) can be obtained by substituting the jamming variance in the expression in (\ref{eqn:foldedGauss}). Using the law of total probability, the PDF at the denominator in (\ref{eqn:pj_init}) is expanded as \begin{equation}\label{eqn:bayes1:end} f_{{|Y|}}({|y_i|}) = \sum_{s_i\in\{A,J\}} f_{{|Y|}|S_i}({|y_i|}|s_i) p_{S_i}(s_i) \text{.} \end{equation} Substituting (\ref{eqn:foldedGauss}) and (\ref{eqn:bayes1:end}) into (\ref{eqn:pj_init}), the first approximation for the conditional probability of bit $i$ having experienced jamming can be calculated. \begin{figure} \centering \input{figures/pj.tikz} \caption{$p_{S_i|Y_i}(J|y_i)$ as a function of received signal magnitude, $|y_i|$, based on (\ref{eqn:pj_init}). The SNR of the AWGN channel is fixed at $\text{SNR}_A=12$ dB, and several probabilities are depicted based on various jamming SNRs.} \label{fig:pj} \end{figure} Fig.~\ref{fig:pj} presents $p_{S_i|Y_i}(J|y_i)$ as a function of the received signal magnitude ${|y_i|}$. It is minimized at the absolute value of the BPSK constellation point, $1$, and is maximized as the received signal magnitude drifts away from the constellation. Note that the $p_{S_i|Y_i}(J|y_i)$ takes the stationary probability of jamming at the constellation point since there is always a chance that the received signal could be a result of jamming. 
\begin{figure} \centering \input{figures/llr_new.tikz} \caption{LLR magnitudes based on AWGN only (\ref{eqn:llr:awgn}), jamming only (\ref{eqn:llr:jam}), and proposed approach (\ref{eqn:llr:proposed}) using the first approximation to marginal conditional jamming probabilities. The SNRs of the AWGN channel and the jamming channel are fixed at 12 dB and 0 dB, respectively.} \label{fig:llr_new} \end{figure} Fig.~\ref{fig:llr_new} depicts LLR magnitude trend lines based on AWGN and jamming conditions, as well as the proposed LLR computation (\ref{eqn:llr:proposed}) when the first approximation $p_{S_i|Y_i}(J|y_i)$ (\ref{eqn:pj_init}) is incorporated. With increasing signal magnitude, the proposed method switches from the AWGN LLR trend line toward the jamming LLR trend line. This behavior reduces the strength of the LLRs at higher magnitudes as a result of the suspicion of jamming, which is then evaluated at soft-information decoders as a less reliable bit index. Consequently, such indices are naturally prioritized for correction, in attempts to identify the transmitted codeword. When the jammer yields signal magnitude that is great enough to come under suspicion $p_{S_i|Y_i}(J|y_i)$ is a good estimate of $p_{S_i|Y^n}(J|y^n)$, as demonstrated in Fig.~\ref{fig:pj} and Fig.~\ref{fig:llr_new}. On the other hand, solely relying on the signal magnitudes would not allow us to detect a substantial portion of the jammed indices as indices with signal magnitudes close to the constellation point would mostly be inferred to be as non-jammed, which is a major limiting factor on the performance improvement. To tackle this issue, we take advantage of the burstiness of the two-state Markov chain. If an index $i$ has a low initial $p_{S_i|Y_i}(J|y_i)$ value, but is neighboring an index $i \mp 1$ that has sufficiently high value, as governed by a threshold, then our estimate of $p_{S_i|Y_i}(J|y_i)$ is increased using a heuristic. 
This is illustrated in Fig.~\ref{fig:anchor} for a sequence of signals. On the top, the sequence $S^n$ represents the Markov state of a series of indices and is hidden from the receiver. The receiver calculates $p_{S_i|Y_i}(J|y_i)$, from which it determines a subset of indices that have a relatively high values. The indices at which $p_{S_i|Y_i}(J|y_i)$ yields a significantly high value are called \textit{anchor indices}. Using the Markov chain state transition probabilities, the $p_{S_i|Y_i}(J|y_i)$ for the indices adjacent to these anchor indices can be recalculated recursively. As a result, we derive a new, improved set of jamming probability estimations, $\widehat{p_{S_i|Y^n}}(J|y^n)$ for $i\in\{0,\ldots,n-1\}$. \begin{figure} \includegraphics[width=\columnwidth]{./figures/anchor.pdf} \caption{Example state transition probability and their associated $p_{S_i|Y_i}(J|y_i)$. Indices with high $p_{S_i|Y_i}(J|y_i)$ values are designated as anchor indices (represented with the anchor symbol) and Markov chain state transition probabilities are used to recalculate $p_{S_i|Y_i}(J|y_i)$ for the neighboring indices resulting in a better estimate, $\widehat{p_{S_i|Y^n}}(J|y^n)$.} \label{fig:anchor} \end{figure} In order to reconsider the jamming probability of an index, it must either be neighboring to an anchor index or be sandwiched between two distinct anchor indices. Otherwise, the initial $p_{S_i|Y_i}(J|y_i)$ is used. \subsubsection{Index Neighboring to a Single Anchor Index} In the first case, the index of interest neighbors an anchor index on one side and a non-anchor index on the other. For simplicity, let us consider the subject index $i$ and the anchor index $i-1$. Using the Markov property, we create an updated $\widehat{p_{S_i|Y^n}}(J|y^n)$ from its anchoring neighbour. 
Assuming the anchor is in the $i-1$ position, using the Markov property we set \begin{align}\label{eqn:singleanchor} &\widehat{p_{S_i|Y^n}}(J|y^n) = \notag \\ &bp_{S_{i-1}|Y_{i-1}}(A|y_{i-1})+(1-g)p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) \text{.} \end{align} \subsubsection{Index Neighboring to Two Anchor Indices} Similar to (\ref{eqn:singleanchor}), we derive the updated jamming probability for an index that is in between two anchor indices. For the subject index located at $i$, the anchor indices are at $i-1$ and $i+1$. Unlike the previous case, the new probability is conditioned on two different states. Based on the values of $p_{S_{i-1}|Y_{i-1}}(J|y_{i-1})$, $p_{S_{i+1}|Y_{i+1}}(J|y_{i+1})$, $b$ and $g$ values, again using the Markov property $\widehat{p_{S_i|Y^n}}(J|y^n)$ is expressed as: \begin{align}\label{eqn:doubleanchor} &\widehat{p_{S_i|Y^n}}(J|y^n) =\notag\\ &\hspace{-0em}\frac{(1-g)(1-g)}{(1-g)(1-g) + bg} p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) ~ p_{S_{i+1}|Y_{i+1}}(J|y_{i+1}) + \notag\\ &~ \hspace{-0em} \frac{(1-g)}{(1-g) + (1-b)} p_{S_{i-1}|Y_{i-1}}(A|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(J|y_{i+1}) + \notag\\ &~ \hspace{-0em} \frac{(1-g)}{(1-g) + (1-b)} p_{S_{i-1}|Y_{i-1}}(J|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(A|y_{i+1}) + \notag\\&~ \hspace{-0em} \frac{bg}{bg+(1-b)(1-b)} p_{S_{i-1}|Y_{i-1}}(A|y_{i-1}) p_{S_{i+1}|Y_{i+1}}(A|y_{i+1}). \end{align} One possible drawback of estimating $p_{S_i|Y^n}(J|y^n)$ from $(p_{S_1|Y_1}(J|y_1),\ldots, p_{S_n|Y_n}(J|y_n))$ based on temporal correlation is the risk of increasing the number of false negatives, especially at non-jammed indices neighboring jammed indices. These false negatives could potentially have a negative impact on performance. However, as discussed in Section~\ref{sec:pj:errortypes} and as presented in Section~\ref{sec:res}, their impact on BLER performance is negligible. 
\section{Simulation Results}\label{sec:res} \begin{figure} \centering \input{figures/pj_result.tikz} \vspace{0.5em} \input{figures/pa_result.tikz} \vspace{-1.5em} \caption{Simulated $\widehat{p_{S_i|Y^n}}(J|y^n)$ based on the received signal magnitude, when $S=J$ (top) and $S=A$ (bottom). The SNR of the AWGN channel is fixed at $\text{SNR}_A=12$ dB. All parameters are kept the same as in Fig.~\ref{fig:pj}.} \label{fig:pj2} \vspace{-1.0em} \end{figure} The proposed jamming-aware LLR calculation using $\widehat{p_{S_i|Y^n}}(J|y^n)$ is evaluated. The state transition probabilities are set to $b=0.01$ and $g=0.25$, corresponding to an overall stationary jamming probability of $\frac{b}{b+g} = 3.84\%$. The SNR for the AWGN state is set to $\text{SNR}_A = 12$ dB. An empirical threshold probability of $0.2$ is used to identify the anchor indices, and the neighboring indices are re-evaluated recursively, \textit{i.e.}, until the estimates $\widehat{p_{S_i|Y^n}}(J|y^n)$ of the neighboring indices fall below the threshold. Fig.~\ref{fig:pj2} visualizes $\widehat{p_{S_i|Y^n}}(J|y^n)$ when the ground truth is $S=J$ (top) and $S=A$ (bottom) with respect to the received signal magnitude of an arbitrary index $i$. Unlike in Fig.~\ref{fig:pj}, the statistics from states $A$ and $J$ are kept separate to demonstrate the impact of the Markov state transitions. All other parameters are kept the same as in Fig.~\ref{fig:pj}. Compared to Fig.~\ref{fig:pj}, the estimate of $p_{S_i|Y^n}(J|y^n)$ near the constellation point has increased significantly for all considered $\text{SNR}_J$ values when $S=J$. This means that the number of false negatives that arise when using (\ref{eqn:pj_init}) alone has decreased significantly. In contrast, the estimate of $p_{S_i|Y^n}(J|y^n)$ when $S=A$ has not changed significantly compared to the first approximation in Fig.~\ref{fig:pj}.
Therefore, false positives due to leveraging the temporal correlation with neighboring indices are negligible. Fig.~\ref{fig:bler2} presents the BLER performance comparison using $\text{RLC}[128,105]$ and 5G NR $\text{CA-Polar}[128,105]$. The ORBGRAND algorithm \cite{duffy2021orbgrand,duffy2022orbgrand} is selected to evaluate the performance of the selected codes, since it is a universal soft-information decoder that allows the evaluation of distinct codebooks. Moreover, despite its recent introduction to the literature, several works demonstrate the practicality of this algorithm family with circuit implementations \cite{riaz2021grand,abbas2022orbgrand,condo2022orbgrand}. The jammer SINR represents the ratio of the legitimate transmission power to the jammer interference power, \textit{i.e.}, a low SINR indicates a powerful jammer. For both comparison scenarios, the performance using the regular LLR approach (\ref{eqn:llr:awgn}) is the baseline BLER. The red curves represent the proposed approach using $\widehat{p_{S_i|Y^n}}(J|y^n)$. The BLER performance for $p_{S_i|Y_i}(J|y_i)$ without the Markov chain state transitions in (\ref{eqn:singleanchor})-(\ref{eqn:doubleanchor}) is also shown as a reference. The baseline performance shows that a strong jammer yields a BLER close to $1$, \textit{i.e.}, almost no packets can be decoded, therefore causing a DoS. The proposed LLR computation (\ref{eqn:llr:proposed}) using $\widehat{p_{S_i|Y^n}}(J|y^n)$ improves the baseline BLER performance by an order of magnitude in the DoS region, \textit{i.e.}, about $9$ out of $10$ packets can be decoded correctly despite the strong jammer interference. The proposed approach demonstrates a $2.7$ dB SINR gain at a BLER of $10^{-2}$ and a $0.75$ dB gain at a BLER of $10^{-6}$ for both codes. Note that high SINR values indicate weak jammers, which are not typical since they can only degrade the performance marginally and cannot cause a DoS.
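For reference, the sketch below shows one plausible way a probability-weighted LLR of the kind used here could be formed. It assumes BPSK signaling with a unit-power constellation, noise variances derived from $\text{SNR}_A$ and $\text{SNR}_J$, and a two-component Gaussian-mixture likelihood weighted by the jamming probability estimate; the exact expression in (\ref{eqn:llr:proposed}) is not reproduced and may differ.

```python
import math

def jamming_aware_llr(y, p_jam, snr_a_db=12.0, snr_j_db=0.0):
    """Hedged sketch of a jamming-aware LLR for BPSK (x in {+1, -1}).
    The likelihood is a Gaussian mixture: AWGN-only variance with
    probability 1 - p_jam, and AWGN-plus-jammer variance with
    probability p_jam. Variances assume unit signal power."""
    var_a = 10 ** (-snr_a_db / 10)          # AWGN noise variance
    var_j = var_a + 10 ** (-snr_j_db / 10)  # AWGN + jammer variance

    def lik(x):
        # Mixture of the two Gaussian densities evaluated at y
        return ((1 - p_jam) * math.exp(-(y - x) ** 2 / (2 * var_a))
                / math.sqrt(2 * math.pi * var_a)
                + p_jam * math.exp(-(y - x) ** 2 / (2 * var_j))
                / math.sqrt(2 * math.pi * var_j))

    return math.log(lik(+1.0) / lik(-1.0))
```

When $\widehat{p_{S_i|Y^n}}(J|y^n)$ is close to $1$, the LLR magnitude shrinks toward $2y/\sigma_J^2$, so likely-jammed symbols contribute weaker (less overconfident) soft information to the decoder.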
Nonetheless, the proposed approach outperforms the baseline even in the high SINR region. \begin{figure} \centering \scalebox{1.00}{ \input{figures/results_BLER_RLC_N128_K105.tikz}} \scalebox{1.00}{ \input{figures/results_BLER_CAPolar_N128_K105.tikz}} \caption{BLER comparison of the proposed approach against conventional LLR, with respect to jammer SINR, using $\text{RLC}[128,105]$ (top) and 5G $\text{CA-Polar}[128,105]$ (bottom) codes. The SNR of the AWGN channel is fixed at $\text{SNR}_A = 12$ dB.} \label{fig:bler2} \end{figure} \section{Conclusion}\label{sec:conc} In this work, a novel and general physical layer security approach against a smart bursty jammer is developed. First, the adversary is modeled as disguised in the channel as a zero-mean Gaussian variable. In addition, the overall active duration of the jammer is governed by a two-state Markov chain with a short interference time to avoid RSSI detection. To tackle this challenging model, we proposed a new approach based on an LLR calculation under adversarial constraints to improve the BLER performance. The new LLR calculation is based on the conditional probability of jamming, computed from the received signal and the Markov chain state transition probabilities. The proposed approach is applied prior to decoding and works with any soft-information decoder. Simulation results with the universal ORBGRAND algorithm using $\text{RLC}[128,105]$ and 5G $\text{CA-Polar}[128,105]$ codes show that the proposed solution substantially improves the reliability estimates for the received signals, preventing denial of service, and yields a substantial SINR gain of up to $2.7$ dB. Future work includes further improving the jamming detection accuracy and comparing with other available soft-information decoders.
\section*{Acknowledgements} This work was partially supported by Defense Advanced Research Projects Agency Contract number HR00112120008 and by National Science Foundation ECCS Award numbers 2128517 and 2128555. The content of the information does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred. This publication has emanated from research supported in part by a grant from Science Foundation Ireland under grant number 18/CRT/6049. The opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Science Foundation Ireland. \bibliographystyle{IEEEtran}